
I have been digging into this for some time, and in my honest opinion people should just get what's cheap right now, so that's ADL (Alder Lake) or Zen 4. If you used a 12700K/7700X side by side with a 9800X3D, I honestly don't think anyone would be able to tell the difference; hell, they might even think the 12700K is faster because of Intel's better north bridge making the system feel snappier. This may even extend down to the 10700K, since Intel actually nerfed the north bridge from 11th gen onwards. Most CPUs nowadays are GPU bound during gaming anyway, since who is buying a 4070 Ti and playing at 1080p? And if you are CPU bound with a 4060, I would be very interested to see how you managed that, since a 5600X can run that card at 1080p.

https://linustechtips.com/topic/1587755-do-newer-cpus-matter-for-gaming/

4 minutes ago, Penpilot said:

I have been digging into this for some time, and in my honest opinion people should just get what's cheap right now, so that's ADL (Alder Lake) or Zen 4. If you used a 12700K/7700X side by side with a 9800X3D, I honestly don't think anyone would be able to tell the difference; hell, they might even think the 12700K is faster because of Intel's better north bridge making the system feel snappier. This may even extend down to the 10700K, since Intel actually nerfed the north bridge from 11th gen onwards. Most CPUs nowadays are GPU bound during gaming anyway, since who is buying a 4070 Ti and playing at 1080p?

You know that people do more than game on computers, right?

 

Also, you do know that north bridges haven't been a thing for a while now, right?


6 minutes ago, Blue4130 said:

Also, you do know that north bridges haven't been a thing for a while now, right?

Just because they got integrated into the CPU doesn't mean they got Thanos-snapped out of existence. AMD has it on the I/O die and Intel is monolithic; I don't know about ARL/MTL.

 

6 minutes ago, Blue4130 said:

You know that people do more than game on computers, right?

My bad for not specifying that I'm talking about gaming; fixed that.


47 minutes ago, Penpilot said:

Just because they got integrated into the CPU doesn't mean they got Thanos-snapped out of existence. AMD has it on the I/O die and Intel is monolithic; I don't know about ARL/MTL.

They kind of did though. The way that they are integrated onto the die meant that fundamental changes had to be made. The northbridge ceased to exist and the IO die came into existence. They play similar roles but operate quite differently.


11 minutes ago, Blue4130 said:

They kind of did though. The way that they are integrated onto the die meant that fundamental changes had to be made. The northbridge ceased to exist and the IO die came into existence. They play similar roles but operate quite differently.

Long and short of it: whatever it is now, Intel has a better one, and 4th to 10th gen had the best version of it.


6 minutes ago, Penpilot said:

Long and short of it: whatever it is now, Intel has a better one, and 4th to 10th gen had the best version of it.

The ones I can recall being complete garbage are 2nd gen Sandy Bridge, which was limited to a 21.33x memory multiplier, alongside 1st gen Clarkdale (32nm LGA 1156), as I had issues getting it stable at DDR3-2000, whereas my i5 750/760 were fine with DDR3-2000 (limited by BCLK on those due to the low 10x memory multiplier, but the i7 had a 12x multiplier and was capable of 2600+ given a high enough BCLK).
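To put rough numbers on that BCLK limitation, here is a back-of-the-envelope sketch, assuming the usual "effective DDR3 rate ≈ BCLK × memory multiplier" relationship on those platforms; the specific BCLK values are illustrative, not measured:

```python
# Rough DDR3 speed arithmetic for the platforms mentioned above.
# Assumption: effective DDR3 rate (MT/s) ~= BCLK (MHz) x memory multiplier.

def ddr3_rate(bclk_mhz: float, mem_multi: float) -> float:
    """Approximate effective DDR3 transfer rate in MT/s."""
    return bclk_mhz * mem_multi

print(ddr3_rate(200, 10))     # 2000.0 -> i5 750/760 need ~200 MHz BCLK for DDR3-2000
print(ddr3_rate(217, 12))     # ~2604  -> the i7's 12x multiplier reaches 2600+ with high BCLK
print(ddr3_rate(100, 21.33))  # ~2133  -> Sandy Bridge's 21.33x cap at its ~100 MHz BCLK
```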

 

Apparently Haswell/Broadwell-E (4th/5th gen X99) had pretty trash IMCs as well, but AFAIK those still do 3600-ish, so I find it quite strange that they're considered crap when that's just shy of the Matisse/Vermeer IMC (3800-4000).

 

The ones that were exceptionally good would be Haswell LGA 1150, with DDR3-4000+ capability; it shouldn't be too hard to get 3200 stable with a decent board and some Hynix 4Gbit.

 

10th gen can do ~4600 at 1.45V AFAIK, so pretty much the best Gear 1 Intel, but it still gets toppled by Cezanne (5000 at 1:1 with ~1.35V VSOC).

 

11th gen seems to have the strongest Gear 2 IMC, if that DDR4-7200 world record is still standing.

 

For Intel DDR3 IMCs:

Haswell > X58/Ivy(-E) > Bloomfield/Sandy-E > Sandy > Clarkdale

 

For Intel DDR4 Gear 1:

10th > 9th/8th > X299 > 7th/6th > 11th > X99


2 hours ago, Penpilot said:

Most CPUs nowadays are GPU bound during gaming anyway, since who is buying a 4070 Ti and playing at 1080p? And if you are CPU bound with a 4060, I would be very interested to see how you managed that, since a 5600X can run that card at 1080p.

It really depends; some things can keep the situation CPU bound even when you'd seemingly not expect it to be.

 

2 hours ago, Penpilot said:

My bad for not specifying that I'm talking about gaming; fixed that.

It's just not as simple as a yes/no answer:

 

[benchmark screenshot]

 

  • Oh look, the 7800X3D is winning by ~23 avg fps, as always? Wrong, not always.

 

Here the gap is ~12 avg fps:

 

[benchmark screenshot]

 

 

Here the avg gap is ~9 fps:

 

[benchmark screenshot]

 

 

  • Oh look, here's a different YouTube video, and in Cyberpunk the difference is only ~5 fps, so that means the gap is about ~25 fps or less?
  • Well, why don't we change the resolution to 1440p and see what happens?

 

[benchmark screenshot]

 

 

 

Woah there, look at that GAP:

 

[benchmark screenshot]

 

 

  • And these gaps can get bigger at 1440p and 4K too; it really depends on several things, mainly what kind of game you're playing (a rough bottleneck model is sketched below)...
  • But you could say that at higher resolutions and graphics settings these differences technically aren't that significant, which helps if you don't want to use up your whole budget on the CPU, would rather spend it elsewhere, or don't want to wait to upgrade because you don't have enough for the best thing.
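A crude way to picture why those gaps come and go with resolution: treat each frame as limited by whichever of the CPU or GPU takes longer. This is a minimal sketch with made-up numbers, not a claim about any specific chip, and it ignores pipelining and other real-world effects:

```python
# Toy bottleneck model: effective fps is roughly capped by whichever of the
# CPU or GPU is slower for that game, resolution and settings.

def effective_fps(cpu_cap: float, gpu_cap: float) -> float:
    """Effective fps when each frame waits on the slower of the CPU and GPU."""
    return min(cpu_cap, gpu_cap)

cpu_a, cpu_b = 220.0, 180.0  # two hypothetical CPU-limited fps figures

# Heavily GPU-bound scene (high resolution/settings): the two CPUs look identical.
print(effective_fps(cpu_a, 140.0), effective_fps(cpu_b, 140.0))  # 140.0 140.0

# Lighter GPU load (or a CPU-heavy game): the full CPU gap shows up again.
print(effective_fps(cpu_a, 300.0), effective_fps(cpu_b, 300.0))  # 220.0 180.0
```

Which of those two regimes you land in is exactly what changes between 1080p, 1440p and 4K, and between light and CPU-heavy games.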

Note: Users receive notifications after Mentions & Quotes. 

Feel free: To ask any question, no matter what question it is, I will try to answer. I know a lot about PCs but not everything.

current PC:

Ryzen 5 5600 | 16GB DDR4 3200MHz | B450 | GTX 1080 Ti [further details on my profile]

PCs I used before:

  1. Pentium G4500 | 4GB/8GB DDR4 2133MHz | H110 | GTX 1050
  2. Ryzen 3 1200 3.5GHz / OC: 4GHz | 8GB DDR4 2133MHz / 16GB 3200MHz | B450 | GTX 1050
  3. Ryzen 3 1200 3.5GHz | 16GB 3200MHz | B450 | GTX 1080 Ti

14 minutes ago, Blue4130 said:

Are you talking about the memory controller only, or about everything else on the CPU that isn't compute?

I'm not talking about the memory controller at all *_*

 

7 minutes ago, podkall said:

It's just not as simple as a yes/no answer:

You talk like you can tell the difference between 149 fps and 166 fps, or 188 and 220, or 86 and 95 (maybe a trained eye can tell that last one, but not most people). There are people who can't tell the difference between 90 and 120 Hz, and most can't tell the difference between 120 and 144 Hz.

 

Just now, Blue4130 said:

Then please explain your stance more. I really would like to know why you think the older gens were better than current

It was just a comment I made to support my argument; it really doesn't come up in real life, so just leave it. I'm still talking about the north bridge, or whatever it's called now.


5 minutes ago, Penpilot said:

You talk like you can tell the difference between 149 fps and 166 fps, or 188 and 220, or 86 and 95 (maybe a trained eye can tell that last one, but not most people). There are people who can't tell the difference between 90 and 120 Hz, and most can't tell the difference between 120 and 144 Hz.

I can, even when the fps is above the monitor's refresh rate.

 

PC performance and the "Master Race" aren't as simple as they seem; the rabbit hole goes as deep as you're willing to dig.

 

Here's a little introduction: 1% lows, 0.1% lows and consistency:

 

[benchmark screenshots: 1% / 0.1% low comparisons]

 

 

  • And these are just benchmarks; not everyone runs just Windows + the game with absolutely nothing else in the background.
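For anyone unfamiliar with how those "lows" are derived, here is a minimal sketch using one common convention (average fps of the slowest 1% / 0.1% of frames from a frametime capture); benchmarking tools differ slightly in the exact definition, and the sample numbers below are invented:

```python
# Sketch of computing 1% / 0.1% lows from a frametime log (milliseconds per frame).

def percent_low(frametimes_ms, percent):
    """Average fps over the slowest `percent` of frames."""
    worst = sorted(frametimes_ms, reverse=True)      # slowest frames first
    n = max(1, int(len(worst) * percent / 100))
    return 1000.0 / (sum(worst[:n]) / n)

# ~144 fps most of the time, with a handful of stutters thrown in.
frametimes = [6.9] * 990 + [12.0] * 9 + [40.0]

avg_fps = 1000.0 / (sum(frametimes) / len(frametimes))
print(round(avg_fps))                       # ~143 average fps
print(round(percent_low(frametimes, 1)))    # ~68 -> the 1% low drags in the stutters
print(round(percent_low(frametimes, 0.1)))  # 25  -> the 0.1% low is dominated by the 40 ms spike
```

Two systems can post the same average while one has much spikier frametimes, which is exactly what the 1% / 0.1% numbers are there to expose.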


3 hours ago, Penpilot said:

I have been digging into this for some time, and in my honest opinion people should just get what's cheap right now, so that's ADL (Alder Lake) or Zen 4. If you used a 12700K/7700X side by side with a 9800X3D, I honestly don't think anyone would be able to tell the difference; hell, they might even think the 12700K is faster because of Intel's better north bridge making the system feel snappier. This may even extend down to the 10700K, since Intel actually nerfed the north bridge from 11th gen onwards. Most CPUs nowadays are GPU bound during gaming anyway, since who is buying a 4070 Ti and playing at 1080p? And if you are CPU bound with a 4060, I would be very interested to see how you managed that, since a 5600X can run that card at 1080p.

Really, that's not new. I didn't need to upgrade my 4770K from 2014 until at least 2017; generational improvements weren't very large even then.

At the moment, if you have something dating from Intel 11th gen or earlier, or a Ryzen 3000, you will definitely see an improvement with a new CPU.

The only recent "game changer" for gaming is 3D V-Cache, nothing much else.

People also upgrade for more raw power: chips with more cores for productivity apps.

 

AMD R9 7950X3D CPU / Asus ROG STRIX X670E-E board / 2x32GB G-Skill Trident Z Neo 6000CL30 RAM / ASUS TUF Gaming AMD Radeon RX 7900 XTX OC Edition GPU / Phanteks P600S case / Arctic Liquid Freezer III 360 ARGB cooler / 2TB WD SN850 NVMe + 2TB Crucial T500 NVMe + 4TB Toshiba X300 HDD / Corsair RM850x PSU / Alienware AW3420DW 34" 120Hz 3440x1440 monitor / ASUS ROG AZOTH keyboard / Logitech G PRO X Superlight mouse / Audeze Maxwell headphones


56 minutes ago, PDifolco said:

People also upgrade for more raw power: chips with more cores for productivity apps.

Don't forget the people who want the smoothest, highest-quality experience possible, or who need a lot of fps in general.


In a game I play regularly that is hard capped at 60fps, I can easily tell the difference between running it on my old 7980XE and current 7800X3D. Hint: fps isn't everything.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Alienware AW3225QF (32" 240 Hz OLED)
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, iiyama ProLite XU2793QSU-B6 (27" 1440p 100 Hz)
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


4 hours ago, porina said:

In a game I play regularly that is hard capped at 60fps, I can easily tell the difference between running it on my old 7980XE and current 7800X3D. Hint: fps isn't everything.

Gracemont has better ST performance than that Skylake CPU. It's like saying "yes, I can tell the difference between playing on a 13900K with all the Raptor Cove cores disabled and a 7800X3D" (not trying to sound harsh, but that's what my head made of what you said).

 

8 hours ago, podkall said:

Here's a little introduction: 1% lows, 0.1% lows and consistency:

These benchmarks are done at 1080p. When you're actually using a CPU+GPU combo like this, it's GPU bound, and in that case Intel actually comes out faster than AMD, since the E-cores take on any really bad interruption and keep the fps more stable, even though AMD has better lows and outperforms Intel while CPU bound. So again, no real testing around this topic exists.

 

 


18 minutes ago, Penpilot said:

Gracemont has better ST performance than that Skylake CPU. It's like saying "yes, I can tell the difference between playing on a 13900K with all the Raptor Cove cores disabled and a 7800X3D" (not trying to sound harsh, but that's what my head made of what you said).

To expand on what I said: the game in question runs consistently at the 60 fps cap on both CPUs. So, no difference? There's quality of life outside of that. Load times. I never got a stopwatch out to measure it, but the 7800X3D system isn't just "a bit" faster, it is a LOT faster at loading. By that I mean not an "I think I can feel something" difference; it's way more obvious than that. Before anyone says it, it isn't the storage or GPU either. I've even put the game on an Optane 280GB SSD, and the GPU was moved between both systems. It is probably CPU limited on loads.


51 minutes ago, Penpilot said:

These benchmarks are done at 1080p. When you're actually using a CPU+GPU combo like this, it's GPU bound, and in that case Intel actually comes out faster than AMD, since the E-cores take on any really bad interruption and keep the fps more stable, even though AMD has better lows and outperforms Intel while CPU bound. So again, no real testing around this topic exists.

Could I have a source for this? I know that games rarely use E-cores, as you'd be passing a task onto something with one thread and a slower clock speed.


16 hours ago, podkall said:

Could I have a source for this?

not any "source", just a general trend i noticed in gpu bottleneck situations across many creators and websites.

 

16 hours ago, podkall said:

I know that games rarely use E-cores, as you'd be passing a task onto something with one thread and a slower clock speed.

To clarify, I was talking about Windows doing Windows BS. Random background tasks that happen for no reason can be picked up by the E-cores while the game threads go undisturbed, unlike AMD where the game threads are the only threads. Games don't use E-cores, except maybe on ARL, since I don't know how Intel expects a game to run with 8 threads (maybe that's why it sometimes performs like the 12900K).

 

 


2 hours ago, Penpilot said:

not any "source", just a general trend i noticed in gpu bottleneck situations across many creators and websites.

[benchmark screenshot]

 

 

2 hours ago, Penpilot said:

To clarify, I was talking about Windows doing Windows BS. Random background tasks that happen for no reason can be picked up by the E-cores while the game threads go undisturbed, unlike AMD where the game threads are the only threads. Games don't use E-cores, except maybe on ARL, since I don't know how Intel expects a game to run with 8 threads (maybe that's why it sometimes performs like the 12900K).

Yes, but you don't use all cores/threads, so there will always be free cores/threads to handle the Windows BS.

 

 


It usually depends on the kind of games you play, and even then CPUs still matter more the higher-end your other parts are. One thing is for sure, I think: all you need to do is make sure whatever CPU you have performs well enough in conjunction with, say, your video card. A lot of CPUs today are still viable enough to play well with the 6800 XTs of the world, even at 1080p. I think building your rig according to your needs is the best route to take, whether it's gaming or otherwise.

 

Though they matter less from 1440p onwards, as the screenshots from @podkall above show us. Then again, they also show us that CPUs like the Ryzen 5000 series and 12th gen Intel Core are still potent enough not to bottleneck typical GPUs much, if people still want to game hard without breaking their budget (though I would go straight to AM5 if building all-new).

 

 

Personal PC (Main):

AMD Ryzen 7 5700X | AMD Radeon RX 7700 XT (ASUS DUAL OC) | Kingston FURY Beast 32GB DDR4-3200MHz CL16 (2x16GB) | Gigabyte B450M-DS3H rev 1.0 | Samsung 970 EVO Plus 250GB SSD (OS Drive) / TeamGroup L3 EVO 120GB SSD / Western Digital Blue (2019) 1TB HDD 7200RPM | ID Cooling SE-214 XT | Tecware Arc M Black ARGB Case | Cougar GEX-750 750W 80+ Gold | AOC 24G2E5 24" 75Hz IPS Freesync | AWP AIM PRO 1000VA UPS w/LED

 

Secondary (sibling's gaming builds):

Desktop (home server/HTPC)

Intel Core i5-6400 | AMD Radeon RX 570 (Gigabyte Gaming OC 4GB) | HyperX Fury 16GB DDR4-2666 (1x16GB) | ASUS H110M-K | Western Digital Blue 3D NAND 250GB SSD (OS Drive) / Seagate Barracuda (2017) 1TB HDD 5900 RPM | Silverstone Strider Essential Bronze 500W 80+ Bronze | Samsung Series 4 LED (2014) 40" | Akari AVR-SVC 500 Servo-Type AVR

Laptop

MSI Bravo 15 C7V (AMD Ryzen 5 7535HS | NVIDIA GeForce RTX 4050 6GB | 40GB DDR5-4800 (1x8GB, 1x32GB) | 2x KINGSTON NV2 500GB NVME SSD Gen 4)

 

 

 


47 minutes ago, Penpilot said:

This aligns with my point? The CPUs with the highest lows are the Intel ones, with only the 7800X3D being the exception.

Wow, a whopping 2 fps difference; let me get my microscope so I can see it.


4 hours ago, podkall said:

Wow, a whopping 2 fps difference; let me get my microscope so I can see it.

Yeah, but it's true; overall Intel's monolithic dies had more consistency than the Ryzen chiplet architecture, which was always RAM bandwidth-starved... until they added 3D V-Cache to alleviate the problem!

But now the new Intel Core 200 architecture seems worse than Ryzen for consistency.


27 minutes ago, PDifolco said:

Yeah, but it's true; overall Intel's monolithic dies had more consistency than the Ryzen chiplet architecture, which was always RAM bandwidth-starved... until they added 3D V-Cache to alleviate the problem!

But now the new Intel Core 200 architecture seems worse than Ryzen for consistency.

Sure, but that still doesn't change the fact that 1% lows are just 1% lows; a few fps of deviation means absolutely nothing when the results are roughly similar, and especially when they get beaten on the average.


9 hours ago, PDifolco said:

But now the new Intel Core 200 architecture seems worse than Ryzen for consistency.

That's probably because there are only 8 game threads. If Intel had made a 16+16, or even a 12+16, they probably would have beaten Zen 5 in both averages and lows. Instead, Intel somehow shot themselves in the foot twice with the same generation; they even cancelled Adamantine (WHY??). Maybe ARL for laptops will be better.

 

9 hours ago, PDifolco said:

Yeah, but it's true; overall Intel's monolithic dies had more consistency than the Ryzen chiplet architecture

I don't think the chiplets are the problem; 7th to 11th gen had comparatively worse lows than their Zen competition, and 12th gen fixed that, which leads me to think it's the hybrid architecture that is helping them.

 

14 hours ago, podkall said:

Wow, a whopping 2 fps difference; let me get my microscope so I can see it.

The difference between 66 and 76 fps (13600K vs 5600X) is more noticeable than the difference between 120 and 144; you don't need a microscope for that, just eyes.
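The frame-time math behind that claim, for the fps pairs thrown around in this thread (just arithmetic, no benchmark data):

```python
# Frame-time deltas for the fps pairs mentioned above: a lower-fps jump can be a
# bigger change in per-frame time than a higher-fps jump of more "fps".

def frametime_ms(fps: float) -> float:
    return 1000.0 / fps

for low, high in [(66, 76), (120, 144), (149, 166), (86, 95)]:
    delta = frametime_ms(low) - frametime_ms(high)
    print(f"{low} -> {high} fps: frame time drops by {delta:.2f} ms")

# 66 -> 76 fps is ~2.0 ms per frame, while 120 -> 144 fps is ~1.4 ms,
# which is why the lower-fps jump is easier to notice.
```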

