AMD Ryzen 7000X3D series coming February/April, 16-core Ryzen 9 7950X3D features 144MB cache: 21-30% higher gaming performance at 1080p (Update #3)

27 minutes ago, leadeater said:

Not a "good choice" wallet wise but you could always buy a 2 CCD product and disable one CCD so you have 8 cores, full memory bandwidth and the V-Cache.

The thought did occur to me that that's a possible path, but it doesn't make sense. If I were to buy such a CPU I'd want to make use of all of it, outside of specific testing perhaps.

 

Didn't we go over the cache/bandwidth results for 1 vs 2 CCD Zen 4 in the past? I'm still not 100% sure I understand it well enough to make the leap that disabling one CCD of a 2 CCD CPU would still offer full bandwidth.

 

27 minutes ago, leadeater said:

Also, it's not like it's "optimized" for anything; cache is cache. I don't think any of the front end changes in how it works when the V-Cache is present, it's just a matter of more cache entries being supported, leading to higher hit rates.

I'm not talking about microarchitecture decisions, but about how the product is configured and marketed. In the video it was clearly said that for gaming workloads the improvement from both CCDs getting V-Cache was not significant, thus not good value to go that route. They optimised these CPUs for gaming performance and value.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


39 minutes ago, porina said:

In the video it was clearly said that for gaming workloads the improvement from both CCDs getting V-Cache was not significant, thus not good value to go that route.

Ah right, to be honest I have not watched the video. I didn't think V-Cache on only a single CCD of a dual CCD product was an actual thing; I thought it was just theoretical. That's a rather interesting choice by AMD.


20% better than my 5800X3D 🥱 Still skipping AM5; in another year or so I'll upgrade my GPU and be happy until the AM6 hype train comes rolling down.

                          Ryzen 5800X3D(Because who doesn't like a phat stack of cache?) GPU - 7700Xt

                                                           X470 Strix f gaming, 32GB Corsair vengeance, WD Blue 500GB NVME-WD Blue2TB HDD, 700watts EVGA Br

 ~Extra L3 cache is exciting, every time you load up a new game or program you never know what your going to get, will it perform like a 5700x or are we beating the 14900k today? 😅~


Finally. Also, the clock differences between SKUs are a bit odd. I'm interested in the 8-core part, so I'm definitely waiting for benchmarks. Being single CCD and 8 cores makes it a gaming sweet spot, though it's strange that the clocks aren't as high as on the higher core count parts, even with the 3D cache. So the 3D cache is only on one CCD? That's rather interesting if so... how would that work? How would, say, a game run when it benefits from both the higher-clocked CCD and the 3D-cache CCD, along with CCD hopping? That would make a very interesting video.

Gaming-wise it seems it would make for a very tough choice. I thought they'd all have the same clocks with the 3D cache, and even higher clocks on the 8-core part.

Looking forward to seeing this tested.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


5 minutes ago, Doobeedoo said:

Finally. Also, the clock differences between SKUs are a bit odd. I'm interested in the 8-core part, so I'm definitely waiting for benchmarks. Being single CCD and 8 cores makes it a gaming sweet spot, though it's strange that the clocks aren't as high as on the higher core count parts, even with the 3D cache. So the 3D cache is only on one CCD? That's rather interesting if so... how would that work? How would, say, a game run when it benefits from both the higher-clocked CCD and the 3D-cache CCD, along with CCD hopping? That would make a very interesting video.

Gaming-wise it seems it would make for a very tough choice. I thought they'd all have the same clocks with the 3D cache, and even higher clocks on the 8-core part.

Looking forward to seeing this tested.

Waaaait a minute, only one CCD is stacked? Oh my, that'll be interesting. We'd better brace for the wave of per-CCD OC videos, lol. We'll be seeing CPPC3 soon to help the scheduler choose what's right for which situation.

                          Ryzen 5800X3D(Because who doesn't like a phat stack of cache?) GPU - 7700Xt

                                                           X470 Strix f gaming, 32GB Corsair vengeance, WD Blue 500GB NVME-WD Blue2TB HDD, 700watts EVGA Br

 ~Extra L3 cache is exciting, every time you load up a new game or program you never know what your going to get, will it perform like a 5700x or are we beating the 14900k today? 😅~


13 minutes ago, lotus10101 said:

Waaaait a minute, only one CCD is stacked? Oh my, that'll be interesting. We'd better brace for the wave of per-CCD OC videos, lol. We'll be seeing CPPC3 soon to help the scheduler choose what's right for which situation.

I've heard it from the comments since then, and it would make sense with these clocks. I haven't seen the video yet. But wasn't that a thing already, though?

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


1 minute ago, Doobeedoo said:

I've heard it from the comments since then, and it would make sense with these clocks. I haven't seen the video yet. But wasn't that a thing already, though?

Per-CCD OC was already a thing, but it was only used for tuning minor differences in core strength. Here, because one CCD is stacked and the other isn't, there will be a drastic difference.

As far as CPPC goes, to my understanding it just chooses the "strongest/fastest" core for the heaviest workload. Now that we have high clocks vs a large cache, we need a method to choose between the two per workload type, not just for the heaviest workload.
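Until the scheduler (CPPC or something newer) can make that call automatically, the manual workaround is plain CPU affinity. Below is a rough sketch, assuming a Linux box where logical CPUs 0-7 belong to the V-Cache CCD and 8-15 to the frequency CCD; that numbering is an assumption, so check your own topology (and SMT siblings) first.

```python
# Rough sketch: manually pin the current process to one CCD via CPU affinity (Linux only).
# The CPU lists are illustrative assumptions; real numbering depends on topology and SMT.
import os

VCACHE_CCD = set(range(0, 8))    # assumed: logical CPUs on the cache-stacked CCD
FREQ_CCD   = set(range(8, 16))   # assumed: logical CPUs on the higher-clocked CCD

def pin_current_process(cache_sensitive: bool) -> None:
    """Restrict this process to the CCD that suits its workload type."""
    os.sched_setaffinity(0, VCACHE_CCD if cache_sensitive else FREQ_CCD)  # 0 = this process

pin_current_process(cache_sensitive=True)
print("Now limited to CPUs:", sorted(os.sched_getaffinity(0)))
```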

                          Ryzen 5800X3D(Because who doesn't like a phat stack of cache?) GPU - 7700Xt

                                                           X470 Strix f gaming, 32GB Corsair vengeance, WD Blue 500GB NVME-WD Blue2TB HDD, 700watts EVGA Br

 ~Extra L3 cache is exciting, every time you load up a new game or program you never know what your going to get, will it perform like a 5700x or are we beating the 14900k today? 😅~


37 minutes ago, lotus10101 said:

Per-CCD OC was already a thing, but it was only used for tuning minor differences in core strength. Here, because one CCD is stacked and the other isn't, there will be a drastic difference.

As far as CPPC goes, to my understanding it just chooses the "strongest/fastest" core for the heaviest workload. Now that we have high clocks vs a large cache, we need a method to choose between the two per workload type, not just for the heaviest workload.

Yeah, exactly that. Like knowing whether a game will benefit more from higher clocks alone or whether the extra cache will give more of a boost. It can depend on the type of game and code too. I guess the CPU should somehow know how to muster max FPS, though. We'll have to wait and see what the benchmarks show.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


8 minutes ago, Doobeedoo said:

Yeah, exactly that. Like knowing whether a game will benefit more from higher clocks alone or whether the extra cache will give more of a boost. It can depend on the type of game and code too. I guess the CPU should somehow know how to muster max FPS, though. We'll have to wait and see what the benchmarks show.

Probably something along the lines of what Intel uses for P-core vs E-core.

                          Ryzen 5800X3D(Because who doesn't like a phat stack of cache?) GPU - 7700Xt

                                                           X470 Strix f gaming, 32GB Corsair vengeance, WD Blue 500GB NVME-WD Blue2TB HDD, 700watts EVGA Br

 ~Extra L3 cache is exciting, every time you load up a new game or program you never know what your going to get, will it perform like a 5700x or are we beating the 14900k today? 😅~


I still don't really understand who these CPUs are for.

The "average consumer" who will actually utilize a 500+ dollar CPU will probably be better off buying the higher core count part for the typical workload (assuming it's like last gen where the 8 core 3D cache part costs like the 12 core part). 

If you regularly run tasks that demand such expensive CPUs, you are probably better off with 50% more cores than more cache.

 

Sure, they are better for gaming, but who actually buys the 3000-4000 dollar computers that benchmark sites and AMD use to maximize the CPU bottleneck, just to then play at 1080p? Last gen, the gap became smaller the more GPU-bottlenecked you were, and that pretty much always happened with modern games once running at somewhat high resolutions. Don't buy an RTX 4090 if you are planning on gaming at like 720p.

 

Besides, the non-X3D chips are fast enough for gaming anyway. And before people start throwing around the argument of "future proofing", I want to say that I think that's an incredibly stupid thing that almost never works out as planned. To me it seems like 9 out of 10 people who "future proof" by buying more expensive parts end up wasting that money in the end.


7 hours ago, starsmine said:

I mean... as you should. You are just one generation old. Anyone upgrading every gen is in the wrong.
You are MORE than fine until AM6.

AM6? It was almost 6 years between the release of AM4 and the release of AM5! It will be way outdated by then. The processors we have now can't even keep up with the 4090 at 4K, including the 7000 series!

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


2 hours ago, ewitte said:

AM6? It was almost 6 years between the release of AM4 and the release of AM5! It will be way outdated by then. The processors we have now can't even keep up with the 4090 at 4K, including the 7000 series!

AM5 so far only has Zen 5 confirmed and Zen 6 speculated.

It's three full generations at most, two at minimum. If you have Zen 3, you can wait for Zen 7.
Platform changes cost a lot of money for the amount of performance gain you get.

As far as the processors can't keep up? That's kinda bullshit. There are edge cases of a bind happening at 1440p, but there are NONE at 4K, and if you turn on Reflex you debind the CPU a tad anyway.
When you talk about a CPU bottleneck, it just means the GPU can't run at full speed ALL the time. The fact is you would still get better frame rates from an even more powerful GPU than the 4090 at 1080p, 1440p, and 4K.
Especially since future games will still crank up the GPU requirements harder and faster than the CPU ones, because GPUs go up 1.5-2x per gen, rather than a CPU's 10-20%.

A AAA game 5 years from now will disproportionately require MORE GPU grunt than today versus the additional CPU grunt it will ask for.

A bottleneck is only ever as large as the proportion of time the GPU has to wait for the CPU to issue draw calls.

Think of it this way instead.
Your GPU keeps bottlenecking your CPU: the CPU sits doing nothing, waiting for the GPU to be ready for the next frame's draw calls. The 4090 is the first time we see the GPU not ALWAYS bottlenecking the CPU.
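To make that proportion concrete, here is a minimal sketch (my own illustration, with made-up numbers) that models frame rate as the slower of the two independent limits:

```python
# Minimal model: frame rate is capped by whichever side (CPU or GPU) is slower.
# The numbers below are hypothetical, purely to illustrate the 1080p vs 4K argument.
def effective_fps(cpu_fps_limit: float, gpu_fps_limit: float) -> tuple[float, float]:
    """Return (fps, gpu_utilisation) for a simple serial CPU -> GPU pipeline."""
    fps = min(cpu_fps_limit, gpu_fps_limit)
    gpu_util = fps / gpu_fps_limit  # share of the time the GPU is actually busy
    return fps, gpu_util

# CPU able to feed 200 fps of draw calls; GPU able to render 300 fps at 1080p but 120 fps at 4K.
print(effective_fps(200, 300))  # CPU-bound at 1080p: 200 fps, GPU ~67% busy
print(effective_fps(200, 120))  # GPU-bound at 4K: 120 fps, GPU 100% busy
```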


Oh, already? I would have thought they might have waited a bit longer before releasing X3D chips for their next-gen CPUs.

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


12 minutes ago, starsmine said:

AM5 so far only has Zen 5 confirmed and Zen 6 speculated.

It's three full generations at most, two at minimum. If you have Zen 3, you can wait for Zen 7.
Platform changes cost a lot of money for the amount of performance gain you get.

As far as the processors can't keep up? That's kinda bullshit. There are edge cases of a bind happening at 1440p, but there are NONE at 4K, and if you turn on Reflex you debind the CPU a tad anyway.
 

From what I've heard they may not have similar platform support for AM5 as they did for AM4. There were 4 completely new lines of chips on AM4 (1000, 2000, 3000, 5000... I had at least 1 of each).

 

It absolutely HAS occurred in a *small* handful of games at 4K on the 4090. The number is much higher at 1440p or with the 5000 series. You're getting most of the performance potential with the 7000 series. Even in something like Port Royal, current Intel and AMD are about 15% higher than what I get with a 5900X.

AMD 7950x / Asus Strix B650E / 64GB @ 6000c30 / 2TB Samsung 980 Pro Heatsink 4.0x4 / 7.68TB Samsung PM9A3 / 3.84TB Samsung PM983 / 44TB Synology 1522+ / MSI Gaming Trio 4090 / EVGA G6 1000w /Thermaltake View71 / LG C1 48in OLED

Custom water loop EK Vector AM4, D5 pump, Coolstream 420 radiator


1 hour ago, ewitte said:

AM6? It was almost 6 years between the release of AM4 and the release of AM5! It will be way outdated by then. The processors we have now can't even keep up with the 4090 at 4K, including the 7000 series!

A PC can very easily last 6 years. You might have to turn down some settings, and maybe you can't expect 0.1% minimums over like 200 fps, but it's not like you need those things to begin with. 

 

I made do with a 1700X from the day it launched up until a few months ago (over 5 years), and I probably do more CPU-intensive tasks than most people on this forum.

 

Before that I had an i5-2500K, which lasted me ~6 years. 

 

My guess is that I won't be needing to upgrade my PC for 5+ years as well.

 

 

 

6 minutes ago, ewitte said:

It absolutely HAS occurred in a *small* handful of games at 4K on the 4090. The number is much higher at 1440p or with the 5000 series. You're getting most of the performance potential with the 7000 series. Even in something like Port Royal, current Intel and AMD are about 15% higher than what I get with a 5900X.

Do you actually need those 15% higher scores in a benchmark? Or do you just want that because you like buying new things and/or brag about it on the Internet?

How would 15% higher score in a benchmark improve your life and would it be worth spending hundreds of dollars to get it? 


Who's buying a 7950X3D and still running a 1080p monitor? After AMD was pushing 8K as the next step in gaming, why (outside of manipulating charts) would they choose to reverse course and show 1080p gaming on what will most likely be a >$700 processor? That price puts buyers squarely in the 1440p and 4K monitor range, since no one with that kind of cash would run a 1080p screen when high-refresh OLED panels are finally on the market.

The best gaming PC is the PC you like to game on, how you like to game on it


3 hours ago, LAwLz said:

the non-X3D chips are fast enough for gaming anyway

There are two types of buyer: 1, fast enough; 2, fastest. Many people ask for the fastest when they mean fast enough, but there will be some out there who do want the fastest.

 

Even if the X3D CPUs are marketed towards gamers, they could be useful for other tasks too, especially what would otherwise be bandwidth-bound compute tasks. This is their biggest selling point to me.

 

2 hours ago, starsmine said:

As far as the processors can't keep up? That's kinda bullshit. There are edge cases of a bind happening at 1440p, but there are NONE at 4K, and if you turn on Reflex you debind the CPU a tad anyway.

I forget the exact details, but if you look at the DF coverage of the 4090 when it came out, they found scenarios where the 4090 quite simply overwhelms the CPU even at 4K, leading to a less smooth gaming experience. Is it a hard CPU limit? No, but if your goal is "the best gaming experience" it is still undesirable.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


47 minutes ago, GhostRoadieBL said:

Who's buying a 7950X3D and still running a 1080p monitor? After AMD was pushing 8K as the next step in gaming, why (outside of manipulating charts) would they choose to reverse course and show 1080p gaming on what will most likely be a >$700 processor? That price puts buyers squarely in the 1440p and 4K monitor range, since no one with that kind of cash would run a 1080p screen when high-refresh OLED panels are finally on the market.

Me, I'm going to get one of these and a 4070ti for the ultimate 1080p experience

 

What's wrong with my 1080p 144Hz monitor? Some people still rock CRTs, HA.

                          Ryzen 5800X3D(Because who doesn't like a phat stack of cache?) GPU - 7700Xt

                                                           X470 Strix f gaming, 32GB Corsair vengeance, WD Blue 500GB NVME-WD Blue2TB HDD, 700watts EVGA Br

 ~Extra L3 cache is exciting, every time you load up a new game or program you never know what your going to get, will it perform like a 5700x or are we beating the 14900k today? 😅~


57 minutes ago, porina said:

There are two types of buyer: 1, fast enough; 2, fastest. Many people ask for the fastest when they mean fast enough, but there will be some out there who do want the fastest.

 

For better or for worse, and from my experience with the 5800X3D, it really depends on the game. Poorly optimized games that are CPU-bound seem to really thrive with the 3D cache. Example: Star Citizen, which I play a LOT of. Once I dropped that 5800X3D in, performance in that game skyrocketed. Which is sad, but expected. However, pivot over to the new CoD MW2 title, and that's GPU-bound for me. I play at 4K, ultra (including AA and AO), without DLSS. I have a 140 FPS soft cap to stay within the G-Sync range of my display, and the game usually stays right there at 140. The GPU is at or near 95-98% use while gaming because Reflex is keeping it from hitting 100%. So, with something like CoD, the 3D cache doesn't matter as much.

 

The benchmarks from the reviewers will be key in making the decision.  I am definitely getting one of the three of them.  But I'm really hoping the reviewers take some time and specifically focus on the "bad" games that the previous 5800X3D made such a difference with.  Test each of those games with each of the new processors.  My gut tells me that, assuming Windows can keep everything scheduled properly (haaaahahahahahahaha *cough* *sputter* sorry there.  Hahahahahahahaa...) the 7950X3D will be the bruiser of the group.

 

Editing Rig: Mac Pro 7,1

System Specs: 3.2GHz 16-core Xeon | 96GB ECC DDR4 | AMD Radeon Pro W6800X Duo | Lots of SSD and NVMe storage |

Audio: Universal Audio Apollo Thunderbolt-3 Interface |

Displays: 3 x LG 32UL950-W displays |

 

Gaming Rig: PC

System Specs:  Asus ROG Crosshair X670E Extreme | AMD 7800X3D | 64GB G.Skill Trident Z5 NEO 6000MHz RAM | NVidia 4090 FE card (OC'd) | Corsair AX1500i power supply | CaseLabs Magnum THW10 case (RIP CaseLabs ) |

Audio:  Sound Blaster AE-9 card | Mackie DL32R Mixer | Sennheiser HDV820 amp | Sennheiser HD820 phones | Rode Broadcaster mic |

Display: Asus PG32UQX 4K/144Hz displayBenQ EW3280U display

Cooling:  2 x EK 140 Revo D5 Pump/Res | EK Quantum Magnitude CPU block | EK 4090FE waterblock | AlphaCool 480mm x 60mm rad | AlphaCool 560mm x 60mm rad | 13 x Noctua 120mm fans | 8 x Noctua 140mm fans | 2 x Aquaero 6XT fan controllers |


So as some have been pointing out in this thread ~

 

AMD Confirms Ryzen 9 7950X3D and 7900X3D Feature 3DV Cache on Only One of the Two Chiplets:

 

Quote

In our older article, we explored two possibilities—one that the 3DV cache is available on both CCDs but halved in size for whatever reason; and the second more outlandish possibility that only one of the two CCDs has stacked 3DV cache, while the other is a normal planar CCD with just the on-die 32 MB L3 cache. As it turns out, the latter theory is right! 

 

AMD put out high-resolution renders of the dual-CCD 7000X3D processors, where only one of the two CCDs is shown having the L3D (L3 cache die) stacked on top.

 

[Renders: the dual-CCD 7000X3D package, with the L3 cache die (L3D) stacked on only one of the two CCDs]

 

We predict that something similar is happening with the 12-core and 16-core 7000X3D processors—where gaming workloads can benefit from being localized to the 3DV cache-enabled CCD, and any spillover workloads (such as audio stack, network stack, background services, etc) are handled by the second CCD. In non-gaming workloads that scale across all 16 cores, the processor works like any other multi-core chip, it's just that the cores in the 3DV-enabled CCD have better performance from the larger victim cache. 

 

https://www.techpowerup.com/303084/amd-confirms-ryzen-9-7950x3d-and-7900x3d-feature-3dv-cache-on-only-one-of-the-two-chiplets
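If you want to see that asymmetry for yourself once the parts are out, here is a rough sketch (my own, Linux-only, relying on the standard sysfs cache topology files) that lists each L3 domain and its size; on a dual-CCD X3D part, one group of cores should report a much larger L3 than the other.

```python
# Rough sketch: list L3 cache domains from Linux sysfs. On a 7950X3D-style part,
# the CCD with the stacked cache should report a much larger L3 than the plain CCD.
from pathlib import Path

def l3_domains() -> dict[str, str]:
    domains = {}
    for idx in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cache/index*"):
        if (idx / "level").read_text().strip() == "3":
            cpus = (idx / "shared_cpu_list").read_text().strip()  # e.g. "0-7,16-23"
            domains[cpus] = (idx / "size").read_text().strip()    # e.g. "32768K"
    return domains

if __name__ == "__main__":
    for cpus, size in sorted(l3_domains().items()):
        print(f"L3 shared by CPUs {cpus}: {size}")
```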


Sigh, no one here likes to assume that they will not just lower the pricing on their current lineup...


34 minutes ago, lostcattears said:

Sigh, no one here likes to assume that they will not just lower the pricing on their current lineup...

They are though.
[Image: AMD slide showing the new non-X Ryzen 7000 parts]

New non-X parts, and they come with a stock cooler.
65W, so they don't clock nearly as high.
Still unlocked and overclockable.


13 hours ago, leadeater said:

Ah right, to be honest I have not watched the video. I didn't think V-Cache on only a single CCD of a dual CCD product was an actual thing; I thought it was just theoretical. That's a rather interesting choice by AMD.

So Windows 11 is better optimized than 10 at handling Intel P and E cores based on usage and the efficiency range of the CPU. But with AMD's approach, I have several questions:

  • How would the OS CPU scheduler know which thread is optimized for more cache vs higher clock rates?
  • Will programs have to be updated with the developers' intent to target a CCD based on their own testing/benchmarking?
  • Will AMD provide a utility to manually pin specific processes to a CCD? (A DIY sketch follows below.)
  • Is that V-Cache shared across the Infinity Fabric? If so, what's the performance penalty of traversing it, and would that negate the higher clock speeds of the non-V-Cache CCD?
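On the manual pinning question, nothing stops you from doing it yourself today. Here is a rough sketch using psutil (works on Windows and Linux); the process name and the CPU list for the V-Cache CCD are illustrative assumptions, not real values.

```python
# Rough sketch of a DIY pinning utility using psutil (pip install psutil).
# The target name and CPU list are illustrative assumptions, not real values.
import psutil

VCACHE_CPUS = list(range(0, 16))  # assumed logical CPUs of the cache CCD (8 cores + SMT)

def pin_by_name(exe_name: str, cpus: list[int]) -> int:
    """Set CPU affinity for every running process matching exe_name; return how many were pinned."""
    pinned = 0
    for proc in psutil.process_iter(["name"]):
        try:
            if proc.info["name"] and proc.info["name"].lower() == exe_name.lower():
                proc.cpu_affinity(cpus)
                pinned += 1
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
    return pinned

print(pin_by_name("game.exe", VCACHE_CPUS), "process(es) pinned to the V-Cache CCD")
```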
