
Rumor: Next gen AMD Epyc to get 64 cores?

Just now, Trixanity said:

Remember Windows RT? There's your answer.

What does RT have to do with Power 9? I'm sorry, I'm a little confused by what you mean.


11 minutes ago, Dylanc1500 said:

What does RT have to do with Power 9? I'm sorry, I'm a little confused by what you mean.

I think it was more of a comment around having a non-x86 CPU company in the Windows market, ARM in that case.

 

IBM Power9 isn't a fundamentally weak and featureless architecture, unlike ARM, though. Also, since a lot of people focus on high clock rates, IBM would take the instant win on that front; they have a history of high-clock-rate CPUs.


11 minutes ago, leadeater said:

I think it was more of a comment around having a non-x86 CPU company in the Windows market, ARM in that case.

 

IBM Power9 isn't a fundamentally weak and featureless architecture, unlike ARM, though. Also, since a lot of people focus on high clock rates, IBM would take the instant win on that front; they have a history of high-clock-rate CPUs.

That's what I thought he meant, but at the same time I didn't know if he meant "NT", since NT 4.0 supported PowerPC back in the day.

 

That's exactly why I think it would be great as a competitor, not to mention it would be easy for them to scale it down to a consumer level. I mean, if we want clock speed we could talk them into bringing the z14 CPU to consumers: 10 cores at 5.2 GHz with enough L2 and L3 cache to give some to your friend, lol. Having that 4-way and 8-way multithreading with Power 9 makes me wonder how well it might work with typical consumer processes. Having them at 4.0 GHz with 12 cores and 24 cores might be plenty though.


33 minutes ago, Dylanc1500 said:

What does RT have to do with Power 9? I'm sorry, I'm a little confused by what you mean.

Quite simple. It's been done before and failed. It'll fail again.

26 minutes ago, leadeater said:

I think it was more of a comment around having a non-x86 CPU company in the Windows market, ARM in that case.

 

IBM Power9 isn't a fundamentally weak and featureless architecture, unlike ARM, though. Also, since a lot of people focus on high clock rates, IBM would take the instant win on that front; they have a history of high-clock-rate CPUs.

Doesn't really matter. You'd have to bring it to market with competitive prices. So you'd either have to sell expensive server chips for pennies or spend money scaling them down and selling them competitively. IBM wants neither. They've sold off anything remotely consumer oriented. 

 

Let's say despite all this they do bring them to market. It's still dead because no one is making software for it. So it'll have to rely heavily on emulation but we know there's a significant overhead to that and chances are it'll either rely on emulation for the next five or ten years or it'll be killed off beforehand. It's such a long term investment that is unlikely to pay off and by the time it does we're probably not using silicon anymore.

 

We know Microsoft have plans for ARM but it's not high performance. Rather it's low cost devices and scaling down Windows to new/small form factors with appropriate battery life. Doesn't hurt them to poke Intel a bit either.

 

They don't really benefit from adding IBM to the mix or rather the returns are limited in comparison.


21 minutes ago, Trixanity said:

Quite simple. It's been done before and failed. It'll fail again.

Doesn't really matter. You'd have to bring it to market with competitive prices. So you'd either have to sell expensive server chips for pennies or spend money scaling them down and selling them competitively. IBM wants neither. They've sold off anything remotely consumer oriented. 

 

Let's say despite all this they do bring them to market. It's still dead because no one is making software for it. So it'll have to rely heavily on emulation but we know there's a significant overhead to that and chances are it'll either rely on emulation for the next five or ten years or it'll be killed off beforehand. It's such a long term investment that is unlikely to pay off and by the time it does we're probably not using silicon anymore.

 

We know Microsoft have plans for ARM but it's not high performance. Rather it's low cost devices and scaling down Windows to new/small form factors with appropriate battery life. Doesn't hurt them to poke Intel a bit either.

 

They don't really benefit from adding IBM to the mix or rather the returns are limited in comparison.

Actually, adapting current software and Windows to work with Power would be quite a simple task without any emulation. If you were around back during the heyday of Power you might remember how much faster Power was in Windows than x86. The instructions are quite similar at the metal level. I love working with the Power arch, as it is much easier to work with than competing offerings. Like I said though, if you look at the arch you'll see how scalable it is. It would be possible for them to cut down the design and offer something more appropriately priced for consumers; however, you also have to remember that when you are that far ahead of the game in offerings and bringing things to market, it's more expensive to implement. No, it would most likely never make sense for the typical $500 consumer, but if you are looking at an $8-10K workstation it is definitely possible, not to mention I know lots of people roughly 30 years and older that would buy a PC that said IBM on it in a heartbeat because of their history and iconic name.

 

It's all just theoretical though. IBM loves their placement in the market. I wouldn't expect anything more out of them.


20 minutes ago, Trixanity said:

Let's say despite all this they do bring them to market. It's still dead because no one is making software for it. So it'll have to rely heavily on emulation but we know there's a significant overhead to that and chances are it'll either rely on emulation for the next five or ten years or it'll be killed off beforehand. It's such a long term investment that is unlikely to pay off and by the time it does we're probably not using silicon anymore.

I'm not so sure those issues are as big as they used to be, though they're still very much there. Software development is now much more focused on multi-platform support, compatibility and application portability, which has changed the tools used for software development.

 

If you have the ability within the toolsets you are using to target x86, ARM and Power, then emulation isn't a problem because the application is native. It's not something we can do now, but we had the start of this even with something as small as the shift to 64-bit, and we also saw it with Visual Studio/XNA for game development on Windows and Xbox. We are also seeing it now with OpenCL and CUDA, which is a bigger architecture change than the previous examples.
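As a concrete illustration of that idea (a minimal sketch, not anything from Microsoft or IBM): the same C source below builds natively for x86-64, ARM64 and 64-bit POWER just by switching the compiler target, so no emulation is involved at run time. The architecture macros are the standard GCC/Clang predefined ones; the cross-compiler names in the comment (aarch64-linux-gnu-gcc, powerpc64le-linux-gnu-gcc) are the common GNU triplet-prefixed ones and are assumptions about what's installed, not requirements.

/* One portable source, three native targets. Example builds (assuming the
   cross toolchains are installed):
     gcc hello_arch.c -o hello_x86
     aarch64-linux-gnu-gcc hello_arch.c -o hello_arm64
     powerpc64le-linux-gnu-gcc hello_arch.c -o hello_power */
#include <stdio.h>

int main(void)
{
#if defined(__x86_64__)
    puts("compiled natively for x86-64");
#elif defined(__aarch64__)
    puts("compiled natively for ARM64");
#elif defined(__powerpc64__)
    puts("compiled natively for 64-bit POWER");
#else
    puts("compiled natively for some other architecture");
#endif
    return 0;
}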

 

Really, the thing that will actually stop any of this is Windows itself; it's not going to happen unless Windows supports the Power architecture. IBM could do something at the hardware layer, and they might even be able to do it with only a 1% performance loss. Anyway, it's not an impossible task or one that would be limited by performance; it's limited by the willingness for it to happen and the tools to achieve it.

 

I do agree there is very little desire for IBM to enter the consumer space and very little desire on IBM's side to do it. This was more of a "boy, that would be interesting to see".


1 hour ago, Dylanc1500 said:

Actually, adapting current software and Windows to work with Power would be quite a simple task without any emulation. If you were around back during the heyday of Power you might remember how much faster Power was in Windows than x86. The instructions are quite similar at the metal level. I love working with the Power arch, as it is much easier to work with than competing offerings. Like I said though, if you look at the arch you'll see how scalable it is. It would be possible for them to cut down the design and offer something more appropriately priced for consumers; however, you also have to remember that when you are that far ahead of the game in offerings and bringing things to market, it's more expensive to implement. No, it would most likely never make sense for the typical $500 consumer, but if you are looking at an $8-10K workstation it is definitely possible, not to mention I know lots of people roughly 30 years and older that would buy a PC that said IBM on it in a heartbeat because of their history and iconic name.

 

It's all just theoretical though. IBM loves their placement in the market. I wouldn't expect anything more out of them.

I have to say, when IBM essentially got kicked out of the console market I was extremely disappointed. Looking at what IBM was doing in a more consumer-oriented product market was interesting, and what was being achieved on their hardware was far more than most are willing to acknowledge. I understand why the shift was made, but that more reflects the industry's unwillingness to develop both the skills and tools to properly deal with different hardware architectures.

 

Essentially it was a battle of accessibility versus specialty, and accessibility won out. IBM hardware was considered too hard for the people wanting to enter the game development industry, so the easiest solution was to get rid of it.


1 hour ago, Cinnabar Sonar said:

So how will this work? 8-core CCXs?

That's a bit hard to say. On one hand, changing the number of cores per CCX would increase the cost of development a lot and reduce the benefits that the modularity of Ryzen brings, but on the other hand, having 16 separate caches seems like a problem to me (maybe it's not). Sticking with the 4-core CCX would be wiser in terms of using money efficiently, as improving one design would improve performance across the board.

I guess after writing all this, I think AMD will try their best to improve their 4-core CCX and then improve the Infinity Fabric to minimize the problems coming from the divided cache.


7 minutes ago, cj09beira said:

That's a bit hard to say. On one hand, changing the number of cores per CCX would increase the cost of development a lot and reduce the benefits that the modularity of Ryzen brings, but on the other hand, having 16 separate caches seems like a problem to me (maybe it's not). Sticking with the 4-core CCX would be wiser in terms of using money efficiently, as improving one design would improve performance across the board.

I guess after writing all this, I think AMD will try their best to improve their 4-core CCX and then improve the Infinity Fabric to minimize the problems coming from the divided cache.

Well, one of the two must happen for this rumor to have any standing. :P

Squeezing more CCXs onto a die may be a possibility as well.


15 minutes ago, cj09beira said:

That's a bit hard to say. On one hand, changing the number of cores per CCX would increase the cost of development a lot and reduce the benefits that the modularity of Ryzen brings, but on the other hand, having 16 separate caches seems like a problem to me (maybe it's not). Sticking with the 4-core CCX would be wiser in terms of using money efficiently, as improving one design would improve performance across the board.

I guess after writing all this, I think AMD will try their best to improve their 4-core CCX and then improve the Infinity Fabric to minimize the problems coming from the divided cache.

@Cinnabar Sonar The most likely scenario is the CCX stays at 4c given the way AMD is approaching their design, and we just get CPU designs with 1-4 CCXs. The CCX is functionally a NUMA node; as it stands, it just responds very, very fast. The intra-CCX latency is the lowest core-to-core latency we've seen so far. My assumption is that they stack as many CCXs as they want below the L3 cache until they run into bandwidth issues. If it is going to be 16 CCXs for 64 cores with 16 MB of L3 per CCX, that works out to 64 MB per die and 256 MB of L3 for the package.

 

This makes sense as well: with the L3 being a victim cache that's also used for inter-core communication, the better the L3, generally the better overall. Especially if we get more CCXs per package.
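To put rough numbers on that (a back-of-the-envelope sketch in C; the 4-core CCX, the 16 MB-per-CCX figure and the 4-die MCM are assumptions taken from this discussion, not confirmed specs):

#include <stdio.h>

int main(void)
{
    /* Assumptions from the thread: CCX stays at 4 cores, L3 grows to 16 MB per CCX,
       and the package is still a 4-die MCM like first-gen Epyc. */
    int total_cores   = 64;
    int cores_per_ccx = 4;
    int l3_per_ccx_mb = 16;
    int dies_per_pkg  = 4;

    int ccx_count     = total_cores / cores_per_ccx;    /* 64 / 4  = 16 CCXs */
    int l3_total_mb   = ccx_count * l3_per_ccx_mb;      /* 16 * 16 = 256 MB  */
    int l3_per_die_mb = l3_total_mb / dies_per_pkg;     /* 256 / 4 = 64 MB   */

    printf("CCXs: %d, L3 per die: %d MB, L3 per package: %d MB\n",
           ccx_count, l3_per_die_mb, l3_total_mb);
    return 0;
}

Under those assumptions the total comes out to 256 MB, which matches the L3 figure in the Canard PC rumor this thread is about.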


4 minutes ago, Taf the Ghost said:

@Cinnabar Sonar The most likely scenario is the CCX stays at 4c given the way AMD is approaching their design, and we just get CPU designs with 1-4 CCXs. The CCX is functionally a NUMA node; as it stands, it just responds very, very fast. The intra-CCX latency is the lowest core-to-core latency we've seen so far. My assumption is that they stack as many CCXs as they want below the L3 cache until they run into bandwidth issues. If it is going to be 16 CCXs for 64 cores with 16 MB of L3 per CCX, that works out to 64 MB per die and 256 MB of L3 for the package.

 

This makes sense as well: with the L3 being a victim cache that's also used for inter-core communication, the better the L3, generally the better overall. Especially if we get more CCXs per package.

With the added clock speeds of the next few nodes, hopefully we'll see the IF working at the full effective DRAM frequency, to get better gaming performance (thanks to lower latencies), so that this "competition period" lasts as long as possible.

 


@Cinnabar Sonar and @cj09beira

 

I think the fun part about the Zen design, at least to talk about, is that it's this fascinating chunk of nested designs. It does make discussing it a tad confusing at times (dies, cores, packages and the like), but it's just interesting to watch in action. Intel has one really good core ("Core", in fact, is the name) that they tweak for certain purposes, but it's a singular design that they just copy & paste for as many cores as they want on a CPU design. Intel has moved to a mesh design for core-to-core communication because their designs are just packing in so many cores now.

 

AMD's approach limits certain aspects of the upside. Intel could actually make a CPU with 256 cores if they wanted to. It'd take up most of the wafer, but the structure of their design approach allows for it. AMD couldn't do that; everything is grouped in clusters of 4 cores. However, they can place more CCXs below the L3 cache on a Zen package. Though probably just 4.

 

The Zen design does have an end-point, I'm pretty sure, but if they can get core-to-core latency even lower inside the CCX, they can probably increase the number of cores within it. So maybe 128 cores is the most they can fit within a single Epyc CPU in a few years.


3 minutes ago, cj09beira said:

With the added clock speeds of the next few nodes, hopefully we'll see the IF working at the full effective DRAM frequency, to get better gaming performance (thanks to lower latencies), so that this "competition period" lasts as long as possible.

 

The Skylake-X game testing points to 4.3-4.5 GHz as the saturation point for any non-inclusive L3 cache design for gaming under the current GPUs. In pretty much any game that wasn't Arma, X299 shows the same issues Ryzen does. You can cover up most of that just by pushing as much clock speed as possible. See here for what I mean:

 

https://www.techspot.com/review/1493-intel-core-i9-7980xe-and-7960x/page3.html

 

If the 12LPP improvements coming with Pinnacle Ridge can get XFR up to the 4.4-4.5 GHz range, the difference goes away and is down to optimizations. Though I'd love to take the time to write up a longer discussion about how high-FPS gaming is more about slightly breaking the game engines than it is about the hardware. Someone found the problem in the Source engine that puts Ryzen behind Core at higher FPS by more than the clocks + IPC would suggest it should be.


6 hours ago, Taf the Ghost said:

@Cinnabar Sonar The intra-CCX latency is the lowest core-to-core latency we've seen so far.

Source? Wouldn't using just 1 CCX make more sense than 2+2 (for quad cores) and eliminate the difference between their CPUs and Intel's? We already know they can use just 1 CCX (in APUs).


32 minutes ago, MyName13 said:

Source? Wouldn't using just 1 CCX make more sense than 2+2 (for quad cores) and eliminate the difference between their CPUs and Intel's? We already know they can use just 1 CCX (in APUs).

They used 2 CCXs for more L3 cache, hoping it would give better performance that way I guess, and also to make sure all Ryzen desktop CPUs interact with systems the same way.

 

Core-to-core latency within a CCX is very good; it's not the outright best, just equally good. See below for details: once you start hitting the IF and L3 cache, latency is crap. My belief is it's not the IF itself causing this but the L3 cache and signal timings, as in it's a data-interaction issue and the number of times the cache needs to write then read.

 

 

Quote
Ryzen 5 1600X
Memory Data Rate | Inter-Core Latency Range | Inter-CCX Core-to-Core Latency | Cross-CCX Core-to-Core Latency | Cross-CCX Average Latency | % Increase From 1333
1333 MT/s        | 14.8 - 14.9 ns           | 40.4 - 42.0 ns                 | 197.6 - 229.8 ns               | 224 ns                    | Baseline
2666 MT/s        | 14.8 - 14.9 ns           | 40.4 - 42.6 ns                 | 119.2 - 125.4 ns               | 120.74 ns                 | 46%
3200 MT/s        | 14.8 - 14.9 ns           | 40.0 - 43.2 ns                 | 109.8 - 113.1 ns               | 111.5 ns                  | 50%

 

Quote
Core i7-7700K
Memory Data Rate | Inter-Core Latency | Core-To-Core Latency | Core-To-Core Average
1333 MT/s        | 14.8 ns            | 38.6 - 43.2 ns       | 41.5 ns
2666 MT/s        | 14.8 ns            | 29.4 - 45.5 ns       | 42.13 ns
3200 MT/s        | 14.7 - 14.8 ns     | 40.8 - 46.5 ns       | 43.08 ns

http://www.tomshardware.co.uk/amd-ryzen-5-1600x-cpu-review,review-33858-2.html

 

latency-pingtimes.png

https://www.reddit.com/r/intel/comments/6ieva3/amd_ryzen_vs_x99_vs_x299_intercore_latency/ (pcper source I can't seem to find other than this)

 

ping-amd.png

 

https://www.pcper.com/reviews/Processors/Ryzen-Memory-Latencys-Impact-Weak-1080p-Gaming
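For anyone who wants to reproduce that kind of measurement rather than rely on the charts, here's a rough core-to-core "ping" sketch in C (Linux-only, and it measures round-trip time rather than the one-way figures in the tables above). The core numbers 0 and 4 are an assumption: on an 8-core Ryzen with SMT off they would land on different CCXs, but check your own topology before trusting the result.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>

#define ITERATIONS 1000000

static atomic_int flag = 0;   /* the cache line bounced between the two cores */

static void pin_to_core(int core)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *ponger(void *arg)
{
    pin_to_core(*(int *)arg);
    for (int i = 0; i < ITERATIONS; i++) {
        while (atomic_load(&flag) != 1) ;   /* wait for ping */
        atomic_store(&flag, 0);             /* reply with pong */
    }
    return NULL;
}

int main(void)
{
    int ping_core = 0, pong_core = 4;       /* assumed to sit on different CCXs */
    pthread_t t;
    struct timespec start, end;

    pin_to_core(ping_core);
    pthread_create(&t, NULL, ponger, &pong_core);

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++) {
        atomic_store(&flag, 1);             /* ping */
        while (atomic_load(&flag) != 0) ;   /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &end);
    pthread_join(t, NULL);

    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    printf("average round trip: %.1f ns\n", ns / ITERATIONS);
    return 0;
}

Build with something like gcc -O2 -pthread, then compare a 0-to-1 pairing (same CCX) against a 0-to-4 pairing (cross-CCX) to see the same kind of gap the tables describe.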


1 hour ago, leadeater said:

They used 2 CCXs for more L3 cache, hoping it would give better performance that way I guess, and also to make sure all Ryzen desktop CPUs interact with systems the same way.

 

Core-to-core latency within a CCX is very good; it's not the outright best, just equally good. See below for details: once you start hitting the IF and L3 cache, latency is crap. My belief is it's not the IF itself causing this but the L3 cache and signal timings, as in it's a data-interaction issue and the number of times the cache needs to write then read.

 

Cross-CCX latencies look pretty bad compared to the i7. If cross-CCX latencies weren't that bad, would Ryzen still be so far behind Intel in high-refresh-rate gaming (when comparing CPUs with the same number of cores and clock rate)? AMD could have released single-CCX CPUs because of high yields, but they didn't; the R3 and R5 1400 have just half of the L3 cache, so there's no point in using the 2nd CCX.


4 minutes ago, MyName13 said:

Cross-CCX latencies look pretty bad compared to the i7. If cross-CCX latencies weren't that bad, would Ryzen still be so far behind Intel in high-refresh-rate gaming (when comparing CPUs with the same number of cores and clock rate)? AMD could have released single-CCX CPUs because of high yields, but they didn't; the R3 and R5 1400 have just half of the L3 cache, so there's no point in using the 2nd CCX.

It would probably still be slightly behind; it would mostly reduce the number of games where there are rather evident issues with Ryzen. The cross-CCX latency is so bad because communication across CCXs must go through the L3 cache, so that's a write op and a read op each requiring around 40 ns; basically you have a minimum of 80 ns plus the latency of the IF and signal alignment. So with fast RAM it looks like the IF plus other factors is adding between 20-30 ns, which isn't actually that bad.

 

Intel still has the clear lead in clock rate, and that's what high-refresh-rate gaming wants; issuing instructions to a GPU is a non-demanding task, so just having more clock is better.

 

Now that you mention it, I'd really like to see some 1080p or 720p clock-equivalent game benchmarks of Ryzen vs Intel; I don't think I've seen one of those.


@Taf the Ghost What did you mean by "CCX below the L3"? Because right now each CCX has its own L3 cache.

And you're right, talking about Ryzen is extremely fun :)

The CS:GO thing is probably more related to latencies, so if I'm right, a 1700 with affinity locked to a single CCX would probably perform better.
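For reference, locking affinity that way is easy to try: the usual shell tool on Linux is taskset (e.g. taskset -c 0-7 ./game), and below is a minimal C sketch of the same idea. Which logical CPUs belong to which CCX depends on SMT and on how the kernel enumerates threads, so the 0-7 range here is an assumption to verify against your own system, not a rule.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);

    /* Assumption: logical CPUs 0-7 are the SMT threads of the first CCX. */
    for (int cpu = 0; cpu <= 7; cpu++)
        CPU_SET(cpu, &set);

    /* pid 0 means the calling process; anything launched from it inherits the mask. */
    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("restricted to one CCX; launch the game from this process\n");
    return 0;
}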


18 minutes ago, cj09beira said:

@Taf the Ghost What did you mean by "CCX below the L3"? Because right now each CCX has its own L3 cache.

And you're right, talking about Ryzen is extremely fun :)

The CS:GO thing is probably more related to latencies, so if I'm right, a 1700 with affinity locked to a single CCX would probably perform better.

What I meant was that I was thinking of the structure the wrong way at the moment I typed that, lol. I was thinking of the memory controllers. The CCX-to-CCX calls wouldn't be affected by calling up to at least 3 other CCXs, though I'm not sure if a 5th in a package would be an issue. If my understanding of the structure is correct, a 4-CCX design would allow the L3 cache to be directly called from all 4 CCXs.

1 hour ago, leadeater said:

It would probably still be slightly behind; it would mostly reduce the number of games where there are rather evident issues with Ryzen. The cross-CCX latency is so bad because communication across CCXs must go through the L3 cache, so that's a write op and a read op each requiring around 40 ns; basically you have a minimum of 80 ns plus the latency of the IF and signal alignment. So with fast RAM it looks like the IF plus other factors is adding between 20-30 ns, which isn't actually that bad.

 

Intel still has the clear lead in clock rate, and that's what high-refresh-rate gaming wants; issuing instructions to a GPU is a non-demanding task, so just having more clock is better.

 

Now that you mention it, I'd really like to see some 1080p or 720p clock-equivalent game benchmarks of Ryzen vs Intel; I don't think I've seen one of those.

It depends on the game engine and game. CS:GO & Dota have issues with the Source engine. Apparently it uses a call type that isn't quite up to spec, but it doesn't cause much of an issue for Intel. (It's the classic Internet Explorer problem: when one company has 90% market share, their product is the standard.)

 

1 hour ago, MyName13 said:

Cross-CCX latencies look pretty bad compared to the i7. If cross-CCX latencies weren't that bad, would Ryzen still be so far behind Intel in high-refresh-rate gaming (when comparing CPUs with the same number of cores and clock rate)? AMD could have released single-CCX CPUs because of high yields, but they didn't; the R3 and R5 1400 have just half of the L3 cache, so there's no point in using the 2nd CCX.

No. The issue isn't really the CCX cross calls, though they don't help. The issue is more the L3 cache & Nvidia's driver team. This took a good chunk of time to sort out, but HardwareCanucks did some testing with a 2600K vs an 8700K. You can see the issue by looking at the gaming graphs.

 

Spoiler: (embedded HardwareCanucks 2600K vs 8700K gaming benchmark graphs)

Clock for clock, the 8700K has about a 20% IPC uplift over the 2600K from some other testing people have done, though that still seems more of a memory-system issue than the core itself. So what's going on? I'm used to seeing functions operate this way, so I figured out a chunk of it a while ago. Basically, Nvidia and, to a lesser extent, RTG can leverage some part of the system of inclusive L3 cache, consistent core-to-core calls and Intel's uArch to get an extra amount of FPS out of their cards. There have been a few games that found a 10 FPS uplift or so by going with a 1-CCX Ryzen option, but in all of the testing people did, the 4+0 layout was like 1% faster than 2+2 over any larger sample of games. It seems like they may leave data in the L3 cache on the major cores being used to find some extra efficiency.

 

Since we're talking about a 6% difference at 1080p over a huge swath of games with a 1080 Ti (per HardOCP's extensive testing), it's really little things that add up. But most of those little things can be worked around by developers. ROTR found a 20% uplift by fixing some things: 20% from game engine tweaks. (A lot of the early Ryzen problems were games unable to recognize the number of cores, trying to treat them like Bulldozer cores. Along with a Windows scheduler update in the summer.)

 

AMD is still behind a bit in memory subsystems (Intel has been stronger in this area for decades), but at this point it's really not the strength of the cores that is the problem. Both AMD's & Intel's cores are better than can be fed by the rest of the system. That's why the 2600K can still saturate a high-end GPU 6 years later in a bunch of game engines. AMD will saturate as well as Intel if they can get to 4.5 GHz or so, even with the top-end GPUs. Above 5 GHz, a few titles will still see a bump in minimums because you're brute-forcing through engine bottlenecks.

 

At the same time, the "smoothness" aspect of Ryzen is real, but it's a rather hard thing to tease out of the numbers. I have a feeling we'd lose most of the Tech Review community if we started talking about the analysis functions you'd need to run on Frame Time data to explain the issue. (Though I think that's also possibly an issue that crops up somewhere else in the pipeline that somehow drops a frame or two between the rendering and the display on Intel systems, as microstutters seem to show up but not on the frame time charts. But that's not confirmed, just a hunch. Though it also could be an issue that goes away with Coffeelake and it's actually a core saturation issue on Skylake.)


On 11/3/2017 at 12:51 AM, Tribalinius said:

So, I guess the rumor mill is going to start to spin regarding second gen everything AMD. According to Canard PC, we should expect a second gen Epyc with 64 cores, 256MB of L3 cache, 8x DDR4-3200 and 128 PCIe 4 lanes.

256 MB of L3 cache? LOL

What does that mean? If I use a live Puppy Linux or something like that, is this CPU not going to use RAM at all? Can a PC with this CPU work without RAM?

I have a Dell C600, a very, very old laptop. It has 256 MB of RAM and it runs Arch Linux pretty well for its age of 17 years, LOL; it was released in 2000.



@cj09beira

 

I can't find it at the moment, but someone did some testing with the different Ryzen configurations. The 6c models had a latency advantage over the 8c models when calling between 6 MB & 8 MB (L3 cache calls). It strikes me that moving to a 16 MB L3 cache would make sense in that regard if they're actually being starved when doing heavy cross-core communication. Which goes to the rumor being plausible, as a larger L3 is needed for the Zen design.


46 minutes ago, Taf the Ghost said:

What I meant was I was thinking of the structure the wrong way at the moment when I typed that. Lol. I

At the same time, the "smoothness" aspect of Ryzen is real, but it's a rather hard thing to tease out of the numbers. I have a feeling we'd lose most of the Tech Review community if we started talking about the analysis functions you'd need to run on Frame Time data to explain the issue. (Though I think that's also possibly an issue that crops up somewhere else in the pipeline that somehow drops a frame or two between the rendering and the display on Intel systems, as microstutters seem to show up but not on the frame time charts. But that's not confirmed, just a hunch. Though it also could be an issue that goes away with Coffeelake and it's actually a core saturation issue on Skylake.)

Even the i5-8400 beats the R5 1600X by over 25%, so it's not just the clock rate. What is this "Ryzen is smoother than CL" thing that some people mention?


3 minutes ago, MyName13 said:

Even the i5-8400 beats the R5 1600X by over 25%, so it's not just the clock rate. What is this "Ryzen is smoother than CL" thing that some people mention?

Aren't you the anti-AMD shill I've called out a few times?


Ooh, the jump.

I wonder, maybe by 2020 the 8-core will be closer to mainstream.


