
Mini-news: AMD Zen 2 architecture said to have 16% higher IPC than the original 1st-gen Zen CPUs

Morgan MLGman
8 hours ago, Taf the Ghost said:

APUs with "easy" to attach HBM would probably be for Apple.

Apple would like that, except for the fact that they seem to want to move to their own CPUs too, so not sure how that would go

4 hours ago, GoldenLag said:

that is a single benchmark. IPC varies between different workloads.

 

the IPC improvement between Zen and Zen+ is quite nice considering it was essentially a die shrink with minor improvements to the cache.

my bet is that those cache improvements are more like cache fixes than anything else; they probably had some sort of problem in the layout that forced them to increase the wait cycles, then fixed it in the 2nd gen.
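To make the "IPC varies between workloads" point concrete, here's a rough sketch of how IPC is computed from retired instructions and core cycles (the kind of counts a profiler reports). The workload numbers below are invented for illustration, not real Zen measurements:

```python
def ipc(instructions, cycles):
    """IPC = retired instructions / core clock cycles."""
    return instructions / cycles

# A cache-friendly workload retires far more instructions per cycle
# than one that stalls on memory, so a single benchmark can't
# characterize an architecture's IPC gain.
compute_bound = ipc(instructions=4_000_000_000, cycles=2_500_000_000)
memory_bound  = ipc(instructions=1_200_000_000, cycles=2_500_000_000)
print(f"compute-bound IPC: {compute_bound:.2f}")  # 1.60
print(f"memory-bound IPC:  {memory_bound:.2f}")   # 0.48
```

Which is why a cache fix like the one speculated above would move the "IPC" number a lot on memory-sensitive benchmarks and barely at all on compute-bound ones.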


4 minutes ago, cj09beira said:

my bet is that those cache improvements are more like cache fixes than anything else; they probably had some sort of problem in the layout that forced them to increase the wait cycles, then fixed it in the 2nd gen.

that is very likely to be the case. Zen+ really isn't an iteration on Zen; it's only the cache that really got changed, and not much at that.


9 minutes ago, cj09beira said:

Apple would like that, except for the fact that they seem to want to move to their own CPUs too, so not sure how that would go

perhaps Pro devices in the laptop series will continue with x86 until Apple's own SoC catches up. Even then, Apple would love to have a powerful GPU in their systems.

 

the desktop is more or less guaranteed to stay x86 in the coming years


51 minutes ago, GoldenLag said:

that is very likely to be the case. Zen+ really isn't an iteration on Zen; it's only the cache that really got changed, and not much at that.

New microcode in the IMC and most of the tweaks that were already in the Threadripper parts. For as good as Ryzen gen 1 was, they launched with practically the first batch of working silicon. Zen+ is really just v2 of a bunch of systems that were new at launch, or things that would normally have been addressed in a more Intel-like production cycle, but Zen was too good and AMD too broke not to rush it out as fast as possible.

 

Cache is the huge thing going forward, in all sections of CPU & GPU technology. The actual "cores" themselves can do the processing far faster than they can be fed data or push results out. This is why the I/O is actually more important than the cores, relatively speaking, for a while. Even the move to large numbers of cores can be viewed in that regard.


10 hours ago, Stefan Payne said:

For that you need to have a design first.

They don't

So it's only the usual "Intel will be back!!!111" claim without proof.

 

So why do you do that, when there is nothing to indicate your claim?

Even if they do that, it won't change the design of the CPU, the inefficiency, the high power consumption.

 

And then there's the advantage AMD has with 7nm manufacturing over Intel's 14nm...

 

But we might see that next week, if the guys who have the 9900K are right...

https://www.intel.com/content/www/us/en/foundry/emib.html


20 minutes ago, pas008 said:

Yes, and what desktop product will use that?


And how does it help bring power consumption down?


From what I've heard so far, it looks like the 9900K might go up to 200W or even more....

Someone leaked a Cinebench power consumption figure and they said it was 330W.

"Hell is full of good meanings, but Heaven is full of good works"


3 minutes ago, Stefan Payne said:

Yes, and what desktop product will use that?


And how does it help bring power consumption down?


From what I've heard so far, it looks like the 9900K might go up to 200W or even more....

Someone leaked a Cinebench power consumption figure and they said it was 330W.

I already stated the power consumption is shit, if you didn't read before, but this isn't about power consumption

 

 

your hate is so funny

 

good good let the hate flow through you


20 hours ago, pas008 said:

I already stated the power consumption is shit, if you didn't read before, but this isn't about power consumption

You didn't state that they were that bad; this is what you said:

Quote

forget intel can do mesh emib(might not be able to get same power consumption though lol)

And how can or should EMIB help right now?

Yes, I know that they have that technology, but it's comparable to the interposers used by "the other side" (and more).

 

It helps "glue" two chips together with lower latency, higher pin count and performance than ever before. But you still have two dies.

 

As for the 8-core Core i9-9900K, some people who most certainly have access to this chip are already bashing the power consumption, mocking it as a "design study". They say you shouldn't even think about OC with a normal air cooler or AIO water cooler...

On another site that leaked it (it's down right now), they ran Cinebench and that showed 330W. Not sure if they used the CPU-only mode or also used the GPU...

 

The problem we have right now is that shrinks aren't that feasible, as they don't necessarily decrease the cost per transistor.

The problem we have right now is that we are hard power limited!


And a 300W Desktop CPU isn't really a good idea...



2 minutes ago, Stefan Payne said:

And a 300W Desktop CPU isn't really a good idea...

Presuming they overclock the pins off it to get there, but regular users won't be anywhere near that. Is there a good reason for it to draw more than 33% over the 8086K/8700K, assuming straight core scaling? 5.2 GHz on my 8086K is easy on air cooling, and thermals are not the limit for my overclocking; unsafe voltage is.

 

Under real-world tasks I'd go as far as to say performance per watt is probably no different from Ryzen in general.
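The "+33%" expectation above is just straight core-count scaling, and the overclocking blowout follows from the standard CMOS dynamic-power approximation P ∝ C·V²·f. A quick sketch with invented numbers (the 150W baseline and the voltage/frequency figures are hypothetical, not measured):

```python
# Straight core scaling: 8 cores vs 6 at the same clocks and voltage.
p_8700k = 150.0                  # hypothetical all-core load power, watts
p_scaled = p_8700k * (8 / 6)     # the "+33%" expectation
print(f"{p_scaled:.0f} W")       # 200 W

# Why OC blows past that: dynamic power scales ~ C * V^2 * f,
# so raising voltage and frequency together compounds.
def dyn_power(p_base, v_base, f_base, v, f):
    return p_base * (v / v_base) ** 2 * (f / f_base)

# e.g. pushing 4.7 -> 5.2 GHz at 1.20 -> 1.35 V:
print(f"{dyn_power(p_scaled, 1.20, 4.7e9, 1.35, 5.2e9):.0f} W")  # ~280 W
```

So a ~200W stock-ish figure and a near-300W overclocked figure aren't actually contradictory; the quadratic voltage term does most of the damage.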

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 hours ago, Stefan Payne said:

You didn't state that they were that bad; this is what you said:

And how can or should EMIB help right now?

Yes, I know that they have that technology, but it's comparable to the interposers used by "the other side" (and more).

 

It helps "glue" two chips together with lower latency, higher pin count and performance than ever before. But you still have two dies.

 

As for the 8-core Core i9-9900K, some people who most certainly have access to this chip are already bashing the power consumption, mocking it as a "design study". They say you shouldn't even think about OC with a normal air cooler or AIO water cooler...

On another site that leaked it (it's down right now), they ran Cinebench and that showed 330W. Not sure if they used the CPU-only mode or also used the GPU...

 

The problem we have right now is that shrinks aren't that feasible, as they don't necessarily decrease the cost per transistor.

The problem we have right now is that we are hard power limited!


And a 300W Desktop CPU isn't really a good idea...

 

On 10/17/2018 at 10:12 PM, Stefan Payne said:

For that you need to have a design first.

They don't

So it's only the usual "Intel will be back!!!111" claim without proof.

 

So why do you do that, when there is nothing to indicate your claim?

Even if they do that, it won't change the design of the CPU, the inefficiency, the high power consumption.

 

And then there's the advantage AMD has with 7nm manufacturing over Intel's 14nm...

 

But we might see that next week, if the guys who have the 9900K are right...

why put those statements in bold italic?

aren't those counter-statements?

 

please explain the "design study" statement; it's an 8-core ring bus like we've had for many years in the Xeon and Extreme lineups?

 

and on power consumption, we already covered that Intel uses more, didn't we?

 

like I said, let the hate flow through you

 


On 10/17/2018 at 11:18 PM, Brooksie359 said:

I have a 2700X and use it with a 240Hz monitor. You can still hit pretty high fps with Ryzen, just not as high as something like an 8700K. To me it doesn't matter, as I'm still getting quite high fps. Also, I only play Overwatch on the 240Hz, where the 2700X can hit around 240 fps. Everything else I play at 4K.

Hell, before I upgraded to an i7 6700K, I was using an FX-8350 with an R9 290 and still got 200-250 FPS in CSGO.

I don't read the reply to my posts anymore so don't bother.

