Ryzen 2700X OCed to 4.3 GHz (1.4 V) across all cores, performance numbers included.

Master Disaster
30 minutes ago, GoldenLag said:

Zen was born out of AMD's only option to regain market share and earn money. I have some doubts about them splitting their production line in two.

 

Clocks should be similar across the different core clusters unless yields are very bad at 7nm. I'd see them maximizing yields on the 16-core part if it becomes a thing, though an 8-core consumer part might still be a thing. AMD seems to be selling off their old dies in the form of the G parts. At least that's how I see the G parts: as a ramp-down on production.

The 8c part could be on the less dense libraries available, allowing it to clock higher, while the 16c part would be on the most dense library to improve yield. That's the reason to split into Big & Little dies. This generation is AMD's shot to really put pressure on Intel in the server space by throwing "ALL THE CORES!" at the problem. If the Zen 2 core has a full set of AVX2 units (2x 256-bit? 4x?), it'll leave only a small performance gap between the companies in the server space.

 

The design approach makes more sense than a single 12c design used across all segments. They already have their mobile part, which is a very different design, so there are already two completely different Zen designs in the wild. Big & Little would allow for higher-clocked/smaller-core-count and lower-clocked/massive-core-count Epyc 2 SKUs. The design work, given AMD's move to a very modular approach, is relatively easy; it's the validation that takes the most work. And AMD could get Epyc 2 out before Intel gets Ice Lake-SP out.

27 minutes ago, GoldenLag said:

You can mostly just run stock. Use the OC when you need to squeeze the last bit of performance out, and/or before your upgrade cycle approaches.

It's more than that: the difference between the best OC and stock is going to be extremely tiny on Ryzen Gen 2. Which means everyone gets the performance they pay for, which is great for the consumer. But it also moves the enthusiast side of things to RAM sub-timings.


5 minutes ago, Sauron said:

The numbers are at 3.2GHz though, it's pretty impressive that it can compete as well as it does against far higher clocked intel chips.

I think the numbers in the charts are the RAM speed used, not the clock speeds (I assume all the tests were with the alleged overclock that would not run games, unless I misunderstood and it doesn't run games at stock either).


12 minutes ago, Razor01 said:

 

 

OK, really need to throw that shit out of here. Keller came in, yeah, but he didn't make the Athlon 64 lol. He wasn't the lead on the A64 for long; he left during the initial design phases of that CPU. He left AMD in 1999, and he was the lead on the Athlon at the end of its design phase, if I remember correctly. The A64 was released in 2003; the design was finished by others. It takes 3 to 5 years to make a chip, which means he left right at the start of the design. And if you want to know where the design truly came from, I suggest you look up NexGen, because that is where AMD's Athlon and Athlon 64 designs came from.

 

And no, Ryzen is not competitive in single core, which is all down to the CPU design with its CCX modules. This is a fatal flaw in Ryzen, a flaw AMD could not get around because it's too costly to make a monolithic chip. We know what financial situation AMD is in; they had no choice but to do it this way. Its multithreaded performance is great, around 15% better than Intel's, but because of the CCX latency we sometimes can't see that multithreaded performance in real-world applications. Sometimes we can, as well.

 

Now I'm going to be very blunt about this. Zen hasn't caught up to Intel with its tech. It's better at one thing: multithreaded performance. Intel has been doing nothing for the past, oh, DECADE? And it took AMD a DECADE to get right behind Intel? Still second place.

Skylake should have been mostly complete in design at the time Bulldozer launched. Intel's lead is in memory latency (something they've always been better at) and Node quality. The Node lead might be about to disappear, which would really change things.

 

Intel has been doing a lot, actually, but their 14nm then 10nm Node troubles have been a massive roadblock. That's why they've been respinning Skylake for 4 years by the time they move on from it.

 

At the current generation, Intel's advantages are in tasks that either can leverage the AVX2/512 instructions or that thrash the memory really hard. This actually turns out to be more of a lead than people realize, but unless it's a Server task, you don't normally see it on the desktop for long enough to matter. The memory thrashing is a big part of the gaming difference. (Nvidia's driver stack still plays a big part of that in a bunch of games.)


1 hour ago, Cookybiscuit said:

7400 is irrelevant to the discussion when the 8400 exists and trounces the 1600.

Yes, but it was relevant when picking between Ryzen and Intel until recently, because it made no sense to buy an expensive Z-series board with a cheap locked chip. Also notice I'm talking in the past tense: how it WAS, not how it IS. Now, with H310 boards, the locked Intel chips offer the best value. Barely anyone would buy an i5 7400 over an R5 1600 because it didn't make sense to. I never said Ryzen was the ultimate choice or anything, just that it made sense in some cases.

I spent $2500 on building my PC and all I do with it is play no games atm, watch anime at 1080p (finally), watch YT and write essays... nothing, it just sits there collecting dust...

Builds:

The Toaster Project! Northern Bee!

 

The original LAN PC build log! (Old, dead and replaced by The Toaster Project & 5.0)

Spoiler

"Here is some advice that might have gotten lost somewhere along the way in your life. 

 

#1. Treat others as you would like to be treated.

#2. It's best to keep your mouth shut; and appear to be stupid, rather than open it and remove all doubt.

#3. There is nothing "wrong" with being wrong. Learning from a mistake can be more valuable than not making one in the first place.

 

Follow these simple rules in life, and I promise you, things magically get easier. " - MageTank 31-10-2016

 

 


16 minutes ago, jde3 said:

Microsoft talks all day about innovation it's like their tag line.. but for the life of me I can't think of a single thing they have ever done that didn't involve buying some other company or stealing the idea from something else.

Continuum

Tear drop hinge

Unified messenger

Tablet/desktop hybrid UI

Touchscreen table PCs

Self contained, stand alone tablets

Come Bloody Angel

Break off your chains

And look what I've found in the dirt.

 

Pale battered body

Seems she was struggling

Something is wrong with this world.

 

Fierce Bloody Angel

The blood is on your hands

Why did you come to this world?

 

Everybody turns to dust.

 

Everybody turns to dust.

 

The blood is on your hands.

 

The blood is on your hands!

 

Pyo.


3 minutes ago, Drak3 said:

Continuum

Tear drop hinge

Unified messenger

Tablet/desktop hybrid UI

Touchscreen table PCs

Self contained, stand alone tablets

Unified messenger was Apple first; the touchscreen table PC was made by students at some uni first. Dunno about the rest (wtf is Continuum), but it doesn't sound like much innovation, more along the lines of Apple's "innovative designs" or Intel's ultrabook "innovation".

 


35 minutes ago, Taf the Ghost said:

Skylake should have been mostly complete in design at the time Bulldozer launched. Intel's lead is in memory latency (something they've always been better at) and Node quality. The Node lead might be about to disappear, which would really change things.

 

Intel has been doing a lot, actually, but their 14nm then 10nm Node troubles have been a massive roadblock. That's why they've been respinning Skylake for 4 years by the time they move on from it.

 

At the current generation, Intel's advantages are in tasks that either can leverage the AVX2/512 instructions or that thrash the memory really hard. This actually turns out to be more of a lead than people realize, but unless it's a Server task, you don't normally see it on the desktop for long enough to matter. The memory thrashing is a big part of the gaming difference. (Nvidia's driver stack still plays a big part of that in a bunch of games.)

True! We haven't seen it materialize in their CPUs, as you stated.

 

I don't think Intel's node advantage will disappear. They are having trouble with 10nm, but they brought some of the 10nm changes to 14nm, like the fin design. So at this point it's unclear what Intel's 10nm is really like. Also, I think 7nm from the other fabs is still going to be around Intel's 10nm, naming aside.

 

Yeah, memory utilization is a huge part of CPU design, well, any chip design for that matter. Programming around such problems is not an easy task.

 

Interestingly, if one looks at CPU architecture, the current crop of architectures all came from PII-era designs. NexGen was able to use a RISC-based design in a CISC-based processor, which gave AMD a huge upper hand with the A64. I think it was Raza who designed the K7, which was the precursor to the K8, AKA A64. Keller came at the launch of the K7 and left shortly after; he was only at AMD for 2 years or so, maybe even less. Raza was there at the same time as Keller for the K8, the A64!

 

The same problem we see now in the graphics division, not enough talent because engineers left over management, is the same problem we saw with AMD's CPU division a decade ago.

 

When making a new uarch there's a chance it will fail; we have seen this happen in numerous chip designs where not everything works out as planned. So with Intel, we don't know if they have truly new designs that just weren't good enough to bring to market, but we can say they have a higher likelihood of making a new design now that will put them on top again, just like what happened after the A64. (Well, there were two problems there: AMD also faltered with BD, which most likely won't happen again, at least not that badly.) The last big change in microprocessor design is predictive branching. It started with the Pentium II but wasn't widely used until Intel's Sandy Bridge, and AMD finally started using it heavily with Ryzen. So right there is an 8-year gap in which Intel milked the crap out of their architectures lol.

 

 

8 years can be seen as 2 generations of architectures. Intel is due for some major changes; if they work out well, they will have the lead again and it's going to be sizable. If they don't, I think we will see another rehash of what they already have, which is more than competitive enough to keep going; they really only need to improve their multithreaded performance, which shouldn't be too hard.


25 minutes ago, hobobobo said:

Unified messenger was Apple first; the touchscreen table PC was made by students at some uni first. Dunno about the rest (wtf is Continuum), but it doesn't sound like much innovation, more along the lines of Apple's "innovative designs" or Intel's ultrabook "innovation".

 

Microsoft introduced their unified messenger in WP 7.5.

Microsoft introduced the first self-contained, stand-alone tablet. There had been companion devices prior, but those were just companion devices.

Continuum is WP's desktop mode. Canonical and Samsung would later come out with their own versions: Convergence and DeX.

 



1 hour ago, SpaceGhostC2C said:

I think the numbers in the charts are the RAM speed used, not the clock speeds (I assume all the tests were with the alleged overclock that would not run games, unless I misunderstood and it doesn't run games at stock either).

Oh I see. That could still be bottlenecking the Zen chips, though, as the Infinity Fabric relies on fast memory...

Don't ask to ask, just ask... please 🤨

sudo chmod -R 000 /*


Been out of the loop for a while; how well would this stack up next to a 4790K? Worth an upgrade, or wait for Gen 3 and lower DDR4 prices next year?


20 minutes ago, Drak3 said:

Microsoft introduced their unified messenger in WP 7.5.

Microsoft introduced the first self-contained, stand-alone tablet. There had been companion devices prior, but those were just companion devices.

Continuum is WP's desktop mode. Canonical and Samsung would later come out with their own versions: Convergence and DeX.

 

If you mean tablets then yeah; I mean literally a table touchscreen PC, as in a conference table, a huge-ass table-sized touchscreen PC. Guess I misread the first post. Not sure about the messenger; I thought Apple had that before 2014, but I never looked into it.

Link to comment
Share on other sites

Link to post
Share on other sites

Can someone enlarge those CPU-Z screens?

I'm on a work PC and can't look at all the numbers with my limited setup here.


6 hours ago, Cookybiscuit said:

People have been saying "effective multi-core utilization in games is just around the corner" since Bulldozer came out and it still hasn't happened; CPUs with fewer, stronger cores still win handily. Buying components in the hope that they'll be better in the future doesn't seem like too bright an idea. A 2500K is still a perfectly good CPU in 2018; an 8150 belongs in the bin.

 

That said, it isn't all about gaming of course.

Your knowledge seems to be 5+ years out of date. Most AAA engines can easily use more than 8 cores these days. Open-world sandbox games especially use a lot of cores, to the point where even the 7700K would micro stutter compared to a 6-core Ryzen. A 2500K is OK for older games or non-sandbox games; it will certainly struggle in modern open-world games.

5 hours ago, Cookybiscuit said:

No it doesn't, and Haswell was five years ago.

absolute disaster.jpg

Your own sources show Ryzen having better IPC than Broadwell, so it's certainly better than Haswell. According to your own sources here. Good job.

Watching Intel have competition is like watching a headless chicken trying to get out of a mine field

CPU: Intel I7 4790K@4.6 with NZXT X31 AIO; MOTHERBOARD: ASUS Z97 Maximus VII Ranger; RAM: 8 GB Kingston HyperX 1600 DDR3; GFX: ASUS R9 290 4GB; CASE: Lian Li v700wx; STORAGE: Corsair Force 3 120GB SSD; Samsung 850 500GB SSD; Various old Seagates; PSU: Corsair RM650; MONITOR: 2x 20" Dell IPS; KEYBOARD/MOUSE: Logitech K810/ MX Master; OS: Windows 10 Pro


18 minutes ago, Notional said:

Your knowledge seems to be 5+ years out of date. Most AAA engines can easily use more than 8 cores these days. Open-world sandbox games especially use a lot of cores, to the point where even the 7700K would micro stutter compared to a 6-core Ryzen. A 2500K is OK for older games or non-sandbox games; it will certainly struggle in modern open-world games.

Your own sources show Ryzen having better IPC than Broadwell, so it's certainly better than Haswell. According to your own sources here. Good job.

Don't the 1700X and the 4770K both have a 3.9 GHz boost?

And almost the same scores in single thread?

 

 


40 minutes ago, Notional said:

Your knowledge seems to be 5+ years out of date. Most AAA engines can easily use more than 8 cores these days. Open-world sandbox games especially use a lot of cores, to the point where even the 7700K would micro stutter compared to a 6-core Ryzen. A 2500K is OK for older games or non-sandbox games; it will certainly struggle in modern open-world games.

Your own sources show Ryzen having better IPC than Broadwell, so it's certainly better than Haswell. According to your own sources here. Good job.

 

 

Actually, most engines right now work well with 4 cores; a few can do better with more than 4, but they're few and far between. We are still using the same engines in games as we were 2 years ago, too.

 

http://www.tomshardware.co.uk/multi-core-cpu-scaling-directx-11,review-33682-2.html

 

The only engine that seems to scale past this is AOTS's engine, which is not really used in any AAA games lol. Engine life spans are around 5 years, and we are really only halfway through the life span of the current AAA engines. Until DX11 is dead as a main render path, we won't see scaling to more cores.

 

And you need to look at clock speed where single-threaded performance (C15) and IPC are concerned. All of this is prior to the CCX latency issues, too. Application specifics affect IPC heavily. We can say single-threaded performance is anywhere between Haswell and Broadwell.

 

https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen/Clock-Clock-Ryzen-Broadwell-E-Kaby-Lake
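The clock-for-clock comparison those reviews do is just a score divided by the clock the chip sustained during the test. A quick sketch of that normalization; the chip names and numbers below are placeholders, not real benchmark results:

```python
# Clock-normalized single-thread comparison: divide a single-thread score
# by the sustained clock to get a rough app-level "IPC" proxy.

def per_clock_score(score: float, clock_ghz: float) -> float:
    """Return points per GHz for one benchmark run."""
    return score / clock_ghz

# (score, sustained clock in GHz) -- placeholder values for illustration
chips = {
    "Chip A": (160.0, 4.0),
    "Chip B": (155.0, 3.7),
}

for name, (score, clock) in chips.items():
    print(f"{name}: {per_clock_score(score, clock):.1f} points/GHz")
```

With those made-up numbers, the lower-clocked chip actually comes out slightly ahead per clock, which is exactly why raw single-thread scores alone don't tell you the IPC story.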


4 minutes ago, Razor01 said:

 

 

Actually, most engines right now work well with 4 cores; a few can do better with more than 4, but they're few and far between. We are still using the same engines in games as we were 2 years ago, too.

 

http://www.tomshardware.co.uk/multi-core-cpu-scaling-directx-11,review-33682-2.html

 

The only engine that seems to scale past this is AOTS's engine, which is not really used in any AAA games lol. Engine life spans are around 5 years, and we are really only halfway through the life span of the current AAA engines. Until DX11 is dead as a main render path, we won't see scaling to more cores.

 

And you need to look at clock speed where single-threaded performance (C15) and IPC are concerned. All of this is prior to the CCX latency issues, too. Application specifics affect IPC heavily. We can say single-threaded performance is anywhere between Haswell and Broadwell.

 

https://www.pcper.com/reviews/Processors/AMD-Ryzen-7-1800X-Review-Now-and-Zen/Clock-Clock-Ryzen-Broadwell-E-Kaby-Lake

The problem with your TH link is that it doesn't show 1% or 0.1% lows, so you won't see any micro stutter. Hitman in that article shows improved performance above 4 cores.

 

Here's a Watch Dogs 2 bench, where you can see the massive improvements on a 7700K by just enabling HT: https://www.gamersnexus.net/game-bench/2808-watch-dogs-2-cpu-benchmark-thread-intensive-game/page-2

And if your cores run at a lower speed, you get full 16-thread utilization:

 

Of course it's not ideal yet, as Intel's 10-year reign of quad-core CPUs has held back PC gaming. Also remember, all these benchmarks are without many, if any, background processes.
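Incidentally, those 1%/0.1% lows are just the average of the worst slice of frames: convert each frame time to an instantaneous FPS, sort, and average the slowest fraction. A rough sketch, with a made-up frame-time list standing in for captured data:

```python
# 1% / 0.1% low FPS from a list of per-frame render times in milliseconds.

def percent_low_fps(frametimes_ms, percent):
    """Average FPS over the slowest `percent`% of frames."""
    fps = sorted(1000.0 / t for t in frametimes_ms)  # slowest frames first
    n = max(1, int(len(fps) * percent / 100.0))      # size of the worst slice
    return sum(fps[:n]) / n

# ~60 FPS with three hitch frames mixed in (illustrative data, not a real capture)
frames = [16.7] * 97 + [40.0, 50.0, 60.0]
print(round(percent_low_fps(frames, 1), 1))   # dominated by the worst hitch
```

This is why a game can average 60 FPS and still feel awful: the mean hides the hitches, while the 1%/0.1% lows are dragged down by exactly the frames you notice.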



14 minutes ago, Notional said:

The problem with your TH link is that it doesn't show 1% or 0.1% lows, so you won't see any micro stutter. Hitman in that article shows improved performance above 4 cores.

 

Here's a Watch Dogs 2 bench, where you can see the massive improvements on a 7700K by just enabling HT: https://www.gamersnexus.net/game-bench/2808-watch-dogs-2-cpu-benchmark-thread-intensive-game/page-2

And if your cores run at a lower speed, you get full 16-thread utilization:

 

Of course it's not ideal yet, as Intel's 10-year reign of quad-core CPUs has held back PC gaming. Also remember, all these benchmarks are without many, if any, background processes.

Hmm, ya know what was really holding us back in gaming? Consoles. Do you think Xbox and PS games, which are the basis for most graphics engine designs, were good for the industry? To the point that those console games didn't push Intel's CPUs. Remember, a main thread was all most DX11 games had. Low-level APIs were in consoles before LL APIs came to PCs, so all those heavily threaded console games that could use 8 cores ran just fine on 2-4 core CPUs when ported to PC lol.

 

See how easy it is to blame one company or the other for this? Yeah, and most games/engines are made with consoles in mind first and foremost. Intel just sat on its ass because there was no competition from AMD.

 

Watch Dogs 2? Ryzen got eviscerated in that game lol

 

https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/8

 

You can't make generalized statements when talking about IPC and CPU single-thread performance; they are highly bound to many other things happening on the application side. We need both IPC/single-threaded performance and multithreaded performance; we can't have one without the other.

 

 


1 minute ago, Razor01 said:

Hmm, ya know what was really holding us back in gaming? Consoles. Do you think Xbox and PS games, which are the basis for most graphics engine designs, were good for the industry? To the point that those console games didn't push Intel's CPUs. Remember, a main thread was all most DX11 games had. Low-level APIs were in consoles before LL APIs came to PCs, so all those heavily threaded console games that could use 8 cores ran just fine on 2-core CPUs when ported to PC lol.

 

Watch Dogs 2? Ryzen got eviscerated in that game lol

 

https://techreport.com/review/31366/amd-ryzen-7-1800x-ryzen-7-1700x-and-ryzen-7-1700-cpus-reviewed/8

The PS3 and Xbox 360 certainly held back gaming, no doubt about that, especially with low-quality textures and such. The second the PS4 and Xbone released, we saw a huge increase in model detail, texture resolution/detail and VRAM usage. And on top of that, because game devs could use 6-7 (albeit weak) cores, that improved too.

 

WD2 did not understand Ryzen: it would treat SMT threads as full cores and bounce a lot of data between the CCX modules, increasing latency over the Infinity Fabric. So that has less to do with multithreading and more to do with proper hardware support.

 

Here's Far Cry 5 using 12 threads on the Coffee Lake CPUs: https://www.game-debate.com/news/24785/far-cry-5-pc-performance-report

 

Sure, it's not maxing them out, but you never want a CPU bottleneck anyway. By keeping all cores under 100%, you have far fewer problems with microstutter.



17 minutes ago, Notional said:

The PS3 and Xbox 360 certainly held back gaming, no doubt about that, especially with low-quality textures and such. The second the PS4 and Xbone released, we saw a huge increase in model detail, texture resolution/detail and VRAM usage. And on top of that, because game devs could use 6-7 (albeit weak) cores, that improved too.

 

WD2 did not understand Ryzen: it would treat SMT threads as full cores and bounce a lot of data between the CCX modules, increasing latency over the Infinity Fabric. So that has less to do with multithreading and more to do with proper hardware support.

 

Here's Far Cry 5 using 12 threads on the Coffee Lake CPUs: https://www.game-debate.com/news/24785/far-cry-5-pc-performance-report

 

Sure, it's not maxing them out, but you never want a CPU bottleneck anyway. By keeping all cores under 100%, you have far fewer problems with microstutter.

Textures are not what I'm talking about; texture sizes only have to do with memory amounts, which are easy to change going from console to PC if needed. Just render out new textures at higher resolutions, as long as the base assets are saved.

 

"Proper hardware support"? Do you know there is no way to select which cores are doing what across the CCX modules? How are programmers going to do that, individually split up threads and designate them to cores? Not doable without a NUMA-like system in place specifically for the CCX modules that can hide that latency. That is where we see these engines that can utilize more cores still fail on Ryzen. There are 2 problems here: one is IPC/single-thread performance, which is low, can't even deny that; the second is the latency introduced by the CCX modules, which can't be hidden right now since programmers have no control over it. If and when such a system is put in place, it will change our entire paradigm of how to program multithreaded applications as well, so everything we have done so far will be pretty much useless. I don't even think AMD has the resources to push such an endeavor, and I doubt MS and Intel will care about going down this road until Intel is forced into a modular design as well.
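To be fair, the OS does expose one blunt instrument here: you can manually pin a process or thread to a fixed set of cores, e.g. to keep a worker inside one CCX. A minimal Linux sketch; the 2-CCX, 4-cores-per-CCX numbering below is a hypothetical layout with SMT off, real topologies vary (check lscpu):

```python
# Pin the current process to the cores of one CCX via the Linux scheduler API.
import os

def ccx_cores(ccx: int, cores_per_ccx: int = 4) -> set:
    """Logical core IDs belonging to one CCX under the assumed flat layout."""
    start = ccx * cores_per_ccx
    return set(range(start, start + cores_per_ccx))

if hasattr(os, "sched_setaffinity"):  # Linux-only API
    try:
        # Intersect with the cores we're actually allowed to run on.
        allowed = ccx_cores(0) & os.sched_getaffinity(0)
        os.sched_setaffinity(0, allowed)  # keep this process on CCX 0
    except OSError:
        pass  # e.g. empty set on a machine with a different topology
```

This is per-process scheduling hygiene, not the engine-level, latency-aware thread placement the post is talking about; it shows why the burden currently falls on the OS scheduler and the user rather than on a proper topology API.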

 

Look, it's a business world. It's great AMD was able to get Ryzen out, put it up against Intel and equalize the market with products, but there is so much more to this than getting the product out. It's not automatically going to win them market share by the droves (so far the estimate of Ryzen's impact is 3% of the market), and it's not going to flip the market upside down when it's so close in performance to Intel's products. It's not going to shift the way we are programming right now, because many things were built on previous CPUs that we can't just throw away, and we can't make new applications just to satisfy AMD's products. It's not going to happen.


1 hour ago, Notional said:

Your knowledge seems to be 5+ years out of date. Most AAA engines can easily use more than 8 cores these days. Open-world sandbox games especially use a lot of cores, to the point where even the 7700K would micro stutter compared to a 6-core Ryzen. A 2500K is OK for older games or non-sandbox games; it will certainly struggle in modern open-world games.

Your own sources show Ryzen having better IPC than Broadwell, so it's certainly better than Haswell. According to your own sources here. Good job.

Call me when anything Ryzen outperforms anything 8400 or faster; only then can you claim that multi-core > stronger cores.

1 minute ago, valdyrgramr said:

That moment when the GPU is doing most of the work.

I guess that explains the 52% performance delta between the slowest and fastest CPUs in a benchmark where the GPU is the same in each test.


2 hours ago, Drak3 said:

Continuum

Tear drop hinge

Unified messenger

Tablet/desktop hybrid UI

Touchscreen table PCs

Self contained, stand alone tablets

I had to google a lot of this stuff because.. yeah.. not very popular stuff.. and.. seriously. The largest software maker in the world has to add an IM client that can talk to several services at once (an old idea), and using a tablet as a laptop, as its revolutionary improvements to the computing industry? Please.. you don't even know what the word innovation means.

"Only proprietary software vendors want proprietary software." - Dexter's Law


3 minutes ago, Cookybiscuit said:

Call me when anything Ryzen outperforms anything 8400 or faster; only then can you claim that multi-core > stronger cores.

I guess that explains the 52% performance delta between the slowest and fastest CPUs in a benchmark where the GPU is the same in each test.

 

https://www.techspot.com/review/1608-core-i5-8400-vs-ryzen-5-1600-best-value/

 

There seem to be plenty of tests here that the 8400 loses.

if you want to annoy me, then join my teamspeak server ts.benja.cc


3 hours ago, Razor01 said:

And no, Ryzen is not competitive in single core, which is all down to the CPU design with its CCX modules. This is a fatal flaw in Ryzen, a flaw AMD could not get around because it's too costly to make a monolithic chip. We know what financial situation AMD is in; they had no choice but to do it this way.

That's why it's genius, too. Scaling sideways is something everyone is going to have to accept at some point, so you can solve Moore's law in software.



2 minutes ago, jde3 said:

using a tablet as a laptop as its revolutionary improvements to the computing industry?

They introduced the idea of the stand-alone tablet when everyone, including Apple, was pushing companion devices that served a few functions at most, in the late '90s and early '00s, prior to the "revolutionary" iPad.

 

7 minutes ago, jde3 said:

yeah.. not very popular stuff.. and.. seriously.

A good deal of it flopped because it wasn't marketed worth a fuck.



7 minutes ago, jde3 said:

That's why it's genius, too. Scaling sideways is something everyone is going to have to accept at some point, so you can solve Moore's law in software.

 

Not talking about Moore's law here, talking about the weaknesses and strengths of a uarch. Intel has the same opportunities as AMD when it comes to Moore's law.

 

AMD didn't make CCX modules because it had issues with Moore's law. It made them because it didn't have the option of doing different and bigger dies; they could only do one design for all their CPUs, top to bottom (two designs, if you want to consider APUs as one and non-APU chips as another).

