
AMD’s Ryzen 9 7900X3D and 7950X3D Won’t be Faster than the 7800X3D in Most Games

Summary

Team Red will roll out three new SKUs leveraging its 3D V-Cache tech: the Ryzen 7 7800X3D, Ryzen 9 7900X3D, and Ryzen 9 7950X3D. Sadly, the dual-CCD chips won't get a dedicated cache die for each chiplet. As a result, all three V-Cache CPUs will feature the same 3D-stacked L3 cache die, with the same amount of SRAM. This harkens back to the Zen 1 launch, when the disaggregated (chiplet) architecture was first introduced. Several applications suffered lag and stutters on account of high cache latency: the dual-CCX design of the Ryzen 1000 CPUs led to unusually high latency in workloads that weren't specifically optimized for it.

 

Quotes

Quote

Cache latency became an issue when a core or thread tried to access data from the L3 cache of another CCX module. Since the cache slices of the neighboring module were physically farther away than its own, this would lead to a stall, thereby causing stutters and lag. The only way to address this was by teaching the Windows scheduler to keep all data relevant to a core in that core's own CCX cache slice. Of course, such a broad optimization took years to fully kick in, and AMD eventually abandoned the CCX with Zen 3, unifying the L3 cache for the entire CCD.

 

The Ryzen 7000 3D V-Cache parts will bring back similar problems. AMD has essentially slapped on a secondary compute chiplet without a separate cache die of its own. Games that use a single 8-core CCD will therefore run just fine, but the few heavily threaded titles, like 4X, strategy, and expansive open-world games, won't benefit much from the additional cores of the second CCD. More often than not, the vanilla CCD will try to access data on the V-Cache die of the other chiplet, leading to the same old problems we saw with Zen 1.

 

AMD is working on a whitelist to assign applications to a CCD based on their requirements. Early driver data indicates that there are three lists, namely "Default", "Game Mode", and "Mixed Reality". League of Legends has been assigned to the first, and most other titles to the second. In contrast, compute-oriented applications will likely adopt the non-cache die as their primary chiplet.

 

 

My thoughts

Thought this was an interesting report and analysis, as many members of this forum made the same criticisms of the 7900X3D and 7950X3D design in another thread. According to Tom's Hardware, AMD is working with Microsoft on Windows-based optimizations that will work in combination with a new AMD chipset driver, which identifies games that prefer the increased L3 cache capacity and pins them to the CCD with the stacked cache. Games that prefer higher frequencies over increased cache will instead be pinned to the bare CCD. AMD claims that the bare CCD can access the stacked L3 cache in the adjacent CCD, but this isn't optimal and should be infrequent. The CCD with the extra L3 cache runs games at lower clocks, although most games don't operate at maximum clock speeds anyway, so you should still get a substantial performance advantage.

If this is all true, it could complicate things: the Ryzen 9 SKUs would be uncompetitive, no better value than the 13900KS. It also means the Ryzen 9 X3D parts will be barely faster than the 7800X3D in games, making them hard to recommend. It's still possible that productivity workloads will run better on the Ryzen 9 X3D parts than on the 7800X3D. So perhaps you get the same gaming performance as a 7800X3D but much better productivity performance, which I guess is AMD's intention. Although, if these Windows optimizations and profiles don't work properly, they could reduce the achievable gaming performance of the 7900X3D and 7950X3D, which is not AMD's objective here.
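
Purely to illustrate the mechanism being described, here's a minimal user-space sketch of such a whitelist policy using Python's psutil. Everything here is an assumption for illustration: the CPU numbering, the list contents, and which list maps to which CCD are all guesses, and the real chipset driver operates well below this level.

```python
import psutil

# Assumed topology: logical CPUs 0-15 on the V-cache CCD, 16-31 on the bare CCD.
CACHE_CCD = list(range(0, 16))
FREQ_CCD = list(range(16, 32))

# Hypothetical list contents; which list maps to which CCD is also a guess.
GAME_MODE = {"witcher3.exe", "factorio.exe"}  # assumed to prefer the big L3
DEFAULT = {"leagueoflegends.exe"}             # assumed to prefer high clocks

def apply_policy(proc: psutil.Process) -> None:
    name = proc.name().lower()
    if name in GAME_MODE:
        proc.cpu_affinity(CACHE_CCD)   # pin to the stacked-cache chiplet
    elif name in DEFAULT:
        proc.cpu_affinity(FREQ_CCD)    # pin to the higher-clocking chiplet
    # Unlisted processes are left to the Windows scheduler.

for proc in psutil.process_iter(["name"]):
    try:
        apply_policy(proc)
    except (psutil.NoSuchProcess, psutil.AccessDenied):
        pass  # processes can exit or be protected mid-scan; skip them
```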

 

Sources

https://www.hardwaretimes.com/amds-ryzen-9-7900x3d-and-7950x3d-wont-be-faster-than-the-7800x3d-in-most-games-report/


Yeah, this makes sense. If you listened VERY carefully to what they said, the chiplet with the 3D V-Cache will be limited to 5GHz, and the other chiplet will have the normal 7950X die that can boost to 5.7GHz. So it's gonna be an actual pain in the ass for the scheduler to properly choose which one to use and not use for probably a few months.

 

After all the bugs are ironed out, it should be a great CPU for both gaming AND productivity, but you may question what the point would be. If you want a gaming chip, use the 7800X3D; if you want a production chip, get a 7950X; a hybrid may not make sense. Should be fun and interesting to test, though, and to see some of the results.


3 hours ago, BiG StroOnZ said:

AMD’s Ryzen 9 7900X3D and 7950X3D Won’t be Faster than the 7800X3D in Most Games

In other news, the sun is bright...

 

Having the V-Cache on both CCDs would have reduced multicore performance with almost no change to gaming, making the CPU:

 

- Cost more
- Have lower yields (cost more)
- Have less supply of cache dies (cost more / reduce supply)
- Perform worse
- Run hotter, as stacked CPU dies are harder to cool
- Clock lower

 


There are some niche productivity applications that want both cache and core count. This is why AMD actually invented V-Cache - for their EPYC line of server CPUs. Certain code compilation loads and simulations do actually benefit from V-Cache, but the 5800X3D only had 8 cores, so the gains vs something like the 5950X were a mixed bag. If the scheduling is intelligent, the 7950X3D could actually be a huge win for that market - however small it might be.

 

I'm actually most interested to see the 7900X3D, though, because each chiplet should only have 6 cores. This means the L3 cache per core on the V-cache CCD will be 33% higher. It's possible that, if the scheduling works well, then for lightly threaded games that benefit heavily from cache, the 7900X3D might actually be the fastest.
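
Quick sanity check on that 33% figure, assuming the cache CCD keeps the standard 32MB of on-die L3 plus the 64MB stacked die (the publicly quoted configuration):

```python
# Per-core L3 on the V-cache CCD: 32 MB base + 64 MB stacked = 96 MB total.
BASE_L3_MB = 32
STACKED_L3_MB = 64
ccd_l3 = BASE_L3_MB + STACKED_L3_MB

for name, cores in [("7950X3D cache CCD (8 cores)", 8),
                    ("7900X3D cache CCD (6 cores)", 6)]:
    print(f"{name}: {ccd_l3 / cores:.0f} MB of L3 per core")

# 16 MB vs 12 MB per core: 16/12 - 1 = +33% for the 6-core cache CCD.
```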


3 hours ago, Shimejii said:

So it's gonna be an actual pain in the ass for the scheduler to properly choose which one to use and not use for probably a few months.

Over a year at best. MS is already fumbling the Scheduler with Windows 11 bugs with existing Ryzen CPUs. On top of that, maintaining a separate app white/black list? Exactly how is that getting maintained and updated?

Ehh, I'll probably avoid these chips and just wait for Zen 5 to hopefully rearchitect this properly. Not that AMD shouldn't be commended for their efforts, but the software side will be an issue.


2 minutes ago, StDragon said:

Over a year at best. MS is already fumbling the Scheduler with Windows 11 bugs with existing Ryzen CPUs. On top of that, maintaining a separate app white/black list? Exactly how is that getting maintained and updated?

Ehh, I'll probably avoid these chips and just wait for Zen 5 to hopefully rearchitect this properly. Not that AMD shouldn't be commended for their efforts, but the software side will be an issue.

I think hybrid chiplet architectures are going to be the way forward, so these issues need to start getting sorted out eventually. Intel is clearly all-in on P and E cores. I don't see why AMD couldn't go in a similar direction with even more variation - V-Cache die, regular "P" core die, their efficient C core dies, and maybe even with these dies across process nodes.

 

Yes, the software will be a headache, but we're starting to reach the limits of how smart we can make the rocks we put lightning inside of. We need to start being smarter about how we use them.


1 hour ago, YoungBlade said:

I think hybrid chiplet architectures are going to be the way forward, so these issues need to start getting sorted out eventually. Intel is clearly all-in on P and E cores. I don't see why AMD couldn't go in a similar direction with even more variation - V-Cache die, regular "P" core die, their efficient C core dies, and maybe even with these dies across process nodes.

 

Yes, the software will be a headache, but we're starting to reach the limits of how smart we can make the rocks we put lightning inside of. We need to start being smarter about how we use them.

With regard to new paradigms, I don't disagree with that assessment. However, IMHO it's in both Intel's and AMD's mutual interest to cross-license tech and solidify on unified paradigms such as the technologies you've mentioned, precisely because of the software side of things. They've done it in the past, and I see no reason why they wouldn't do so again now and in the future.

So AMD with P and E cores? Intel with V-Cache? Don't see why it couldn't happen.


I don't think the CCX nature of Ryzen (not limited to 1st gen) was the only problem for its gaming performance. Recent CPUs do not fix the CCX problem; it is still there. Starting with Zen 3, all they did was move the problem a bit further out by making a CCX bigger. This is why I'd lean towards 8 cores if I were to buy another recent Ryzen, as that's as big as you can go before it gets complicated.

 

It seems a right mess getting the right game code on the right cores. While in theory you get the best of both worlds (clocks or cache) with the 7950X3D, that's only if everything works in practice. Then again, this is only if you're looking for top performance, which you probably are if you're looking at these CPUs. But in practice even if it is a little off it'll probably still be decent enough.

 

It does make me wonder. I've recently done some thread scaling testing on one game. I had long thought that a good 8c16t is all you need for high-end gaming, but I'm rethinking that, as 12 monolithic cores clearly performed better. I don't have data on whether that game would be happy with a split-CCX arrangement, which would be necessary on Ryzen, and now we have unbalanced cache and clocks to throw into the mix. I don't have data on how hybrid CPUs behave either.


6 hours ago, YoungBlade said:

There are some niche productivity applications that want both cache and core count. This is why AMD actually invented V-Cache - for their EPYC line of server CPUs.

Also remember the cache does not impact the core frequencies as much there. EPYC doesn't run nearly as high boost clocks, and the specific heat load per core is much lower, so the cache almost always outweighs the minor frequency drop. That drop isn't the same for 1T and nT loads either, with 1T taking the bigger hit.


1 hour ago, porina said:

It seems a right mess getting the right game code on the right cores. While in theory you get the best of both worlds (clocks or cache) with the 7950X3D, that's only if everything works in practice. Then again, this is only if you're looking for top performance, which you probably are if you're looking at these CPUs. But in practice even if it is a little off it'll probably still be decent enough.

Honestly just wait for reviews. Everyone could be raising fears over literally nothing heh. 

 

6 hours ago, YoungBlade said:

I think hybrid chiplet architectures are going to be the way forward, so these issues need to start getting sorted out eventually. Intel is clearly all-in on P and E cores. I don't see why AMD couldn't go in a similar direction with even more variation - V-Cache die, regular "P" core die, their efficient C core dies, and maybe even with these dies across process nodes.

 

Yes, the software will be a headache, but we're starting to reach the limits of how smart we can make the rocks we put lightning inside of. We need to start being smarter about how we use them.

AMD has already commented on this: their current Zen architecture at reduced clock rates already offers about as low power as possible with sufficient performance. Intel and AMD aren't the same; there aren't always the same gains and losses to be made from competing architectures. That's why, at least officially, AMD has no current plans for any hybrid core architecture designs.


53 minutes ago, leadeater said:

Honestly just wait for reviews. Everyone could be raising fears over literally nothing heh. 

I think it was PCWorld who recently did a test of Intel hybrid CPUs on Win10 vs Win11. Conventional wisdom is that you need Win11 to understand and use them properly. The conclusion was that they're the same within testing tolerances. So from my point of view, it isn't just "does it work well enough" but "does it work optimally"? I don't know; perhaps with manual affinity settings it could be tested forced to just the cache cores, forced to just the fast cores, and with Windows deciding, to see if there's much of a difference between them. I'm happy I don't do "professional" product benchmark testing. 😄
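
For what it's worth, that experiment is simple enough to sketch with Python's psutil. A minimal version, assuming the V-cache CCD is logical CPUs 0-15 and the frequency CCD is 16-31 (the mapping needs verifying per system), with a hypothetical benchmark binary standing in for the game:

```python
import subprocess
import psutil

CACHE_CCD = list(range(0, 16))   # assumed: logical CPUs of the V-cache CCD
FAST_CCD = list(range(16, 32))   # assumed: logical CPUs of the frequency CCD

def run_pinned(cmd, cpus=None):
    """Launch the benchmark, optionally pinned to a set of logical CPUs."""
    proc = subprocess.Popen(cmd)
    if cpus is not None:
        psutil.Process(proc.pid).cpu_affinity(cpus)
    proc.wait()

# Three runs: cache CCD only, fast CCD only, and the scheduler's own choice.
for label, cpus in [("cache CCD", CACHE_CCD),
                    ("fast CCD", FAST_CCD),
                    ("Windows decides", None)]:
    print(f"Run pinned to: {label}")
    run_pinned(["game_benchmark.exe"], cpus)  # hypothetical benchmark binary
```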


Kinda expected this. Games don't really benefit from more than 8 cores in general, first of all. It may also depend per game on whether frequency or cache helps more. Cache definitely helps a ton in a number of games, and sure does a lot in a number of online games.

Can't wait to see benchmarks; I'm interested in that 8-core. Would be amazing if they found a way to stack cache without affecting frequency.


5 hours ago, leadeater said:

That's why, at least officially, AMD has no current plans for any hybrid core architecture designs.

They say that, but they've published papers and filed patents surrounding hybrid core architecture design in the past couple of years, so I don't believe a word of it.


5 hours ago, leadeater said:

That's why, at least officially, AMD has no current plans for any hybrid core architecture designs.

 

7 minutes ago, tim0901 said:

They say that, but they've published papers and filed patents surrounding hybrid core architecture design in the past couple of years, so I don't believe a word of it.

I have to agree. With the existence of Zen 4c/5c as chiplets, and not just Atom-style monoliths, I (pure speculation) strongly doubt that they aren't internally experimenting with variants of Zen 5 CPUs that use one or more Zen 5c chiplets and one or more Zen 5 chiplets attached to the same I/O die.

With all the NUMA work they have done since Zen 1 Threadripper, and now with the mixed cache sizes between chiplets here, they wouldn't just leave all that core scheduling knowledge unused when there are heterogeneous computing tasks that can benefit from it.

Given the experimental nature of such chips, I do believe AMD when they say they have no finalized plans; I just don't believe them when they say no plans.


29 minutes ago, tim0901 said:

They say that, but they've published papers and filed patents surrounding hybrid core architecture design in the past couple of years, so I don't believe a word of it.

What exactly did AMD say? I read the earlier part being replied to as AMD not saying anything about going hybrid, which is different from AMD saying they have no plans to go hybrid.

 

Tech companies will look at near-future and distant-future tech and use it when they think it's best suited to their designs. They might not be offering it yet, but don't rule out a time in the future when they may decide to do so.


A good deal of why I just went ahead with the normal 7950X. I'm in the door with AM5 now, so changing in a generation or two will not be a huge issue, especially if I find 12/24 is fine, although the difference wasn't that huge (about 60). Noticed MC just increased the price, though; glad I didn't wait.

 

Plus, a 700MHz drop for using V-Cache is massive.


2 hours ago, tim0901 said:

They say that, but they've published papers and filed patents surrounding hybrid core architecture design in the past couple of years, so I don't believe a word of it.

Having no current plans is not the same thing as not doing any research and development on it. No current plans means that no product on their roadmaps will be using such a design. AMD cannot publicly lie; that would simply end up as a potential lawsuit from investors.

 

It would be rather stupid not to be doing anything or filing patents; otherwise you'll get locked out or left in a weaker position.

 

But what is currently true and accurate is that Zen 3 and Zen 4 are perfectly able to run at very low power with good performance, making a dedicated low-power core architecture unnecessary. As AMD starts to push for more performance per core and adds more acceleration instruction sets like AVX-512 and AI/ML matrix extensions, the need for (or applicability of) a more efficient core architecture, be it in power or die space, will grow.

 

Zen 4c is already in this area, optimizing more for die area. But like I said, just because it "makes sense for Intel" doesn't at all mean it makes sense for AMD. If AMD hadn't been so badly burnt by Bulldozer, a revisit of the concept of sharing execution units would also be another perfectly valid approach. That itself was not a bad idea, but the architecture delivered was outright weak, which has given the entire concept a bad reputation.


14 hours ago, YoungBlade said:

Yes, the software will be a headache, but we're starting to reach the limits of how smart we can make the rocks we put lightning inside of. We need to start being smarter about how we use them.

That might be a problem because of money. The more difficult the work, the more it costs. Game devs, for example, won't optimize games.


1 hour ago, CTR640 said:

That might be a problem because of money. The more difficult the work, the more it costs. Game devs, for example, won't optimize games.

Why are you singling out game devs in particular? Devs never optimizing games is basically a meme - there's some truth to it, but you only hear about games that are poorly optimized. The rest of the time, when the game just runs well, people just play it.

 

If the game requires optimization for hybrid architectures to be able to run well, then that's what will happen. It will take a few decades, but eventually, I think that effectively all systems will be running hybrid CPUs. The consoles are probably going to have something like that in the next few generations. At that point, it's either optimize for that, or no one can run your game. Good luck selling that.


27 minutes ago, YoungBlade said:

The consoles are probably going to have something like that in the next few generations.

The PS3 was already that. Game devs found it extremely hard and bitched about it the entire time. Being IBM Power/RISC was also part of it too.


12 minutes ago, leadeater said:

The PS3 was already that. Game devs found it extremely hard and bitched about it the entire time. Being IBM Power/RISC was also part of it too.

First, it's different when the CPU hardware in the console is highly specialized like that. I'm imagining that, in the future, the current approach of having a modified version of a standard chip put into a console will remain the norm. So there isn't going to be a choice: if you want to develop games, you'll have to develop them for hybrid architectures. You won't be able to jump ship from the PS7 to develop for the XBox Ranger 3DXO to get away from it.

 

Second, even with that issue, it's not like the PS3 had no games - it had thousands of them. Sure, some were poorly optimized, but it's not like switching back to more standard architectures made that stop.

 

But overall, my main point was that I think it's unfair to pick out game devs as somehow incapable of developing optimized software when it becomes difficult. Some of the best, most innovative, and adaptive software developers of all time have been game devs.


34 minutes ago, leadeater said:

The PS3 was already that. Game devs found it extremely hard and bitched about it the entire time. Being IBM Power/RISC was also part of it too.

The Xbox 360, the GameCube, the Wii, and the Wii U were all IBM Power.
So that wasn't part of the bitching about the PS3.


17 hours ago, TrigrH said:

In other news, the sun is bright...

Except at night.

4 hours ago, ewitte said:

Plus, a 700MHz drop for using V-Cache is massive.

A drop in maximum boost clock, not clock speed in general. The actual difference might be negligibly small.


47 minutes ago, HenrySalayne said:

Except at night.

A drop in maximum boost clock, not clock speed in general. The actual difference might be negligibly small.

1-2 threads will likely see a large drop, but yes, if you're loading all 32 threads, the 7950X already seems to drop to between 4900-5300MHz depending on cooling and whether you change the TDP (4900MHz all-core was with a 65W TDP, I believe, in the Gamers Nexus review).


11 hours ago, starsmine said:

The Xbox 360, the GameCube, the Wii, and the Wii U were all IBM Power.
So that wasn't part of the bitching about the PS3.

It was. Programming for the Cell processor was a lot more complicated than for those other hardware platforms. Game devs rely on the tools provided by the console manufacturer, and Sony's weren't as easy to use and learn as the others', because at the time even Sony was still learning how to do it, followed by IBM also learning how to do it.

 

Cell had mixed ISA support between the cores, unlike Intel or ARM implementations.

 

Being IBM Power and hybrid were joint complaints.

 

11 hours ago, YoungBlade said:

But overall, my main point was that I think it's unfair to pick out game devs as somehow incapable of developing optimized software when it becomes difficult. Some of the best, most innovative, and adaptive software developers of all time have been game devs.

 

They are not incapable. They almost always just don't get given the time, the budget, or both. Game devs can be quite above average when it comes to programming skills, and what they know covers areas that others outside the field wouldn't have experience with, so some hotshot superstar programmer isn't going to come in and be all awesome heh.

 

When people make those comments, it comes from the stories we get of bad game releases full of bugs, where it's the publisher that forced the launch against the objections of the game studio actually making the game. It's happened SOO many times.

