
Ryzen 5000 Series Reviews Discrepancy

Disclaimer: I am posting this in the tech news category as it relates to the recent Zen 3 launch. Linus and Luke have nudged the topic in the most recent WAN Show (06/11/20), but in my opinion there is more to it.
 

Zen 3 has launched, and if you're after a new CPU you have probably seen at least several benchmarks from different tech YouTubers and news outlets by now.
Some of them show the new Ryzens beating Intel by quite a margin, while others put them on par with the i9-10900K or even slower Intel SKUs.

This has been addressed by LTT in the latest WAN show episode.
Linus attributes the discrepancies in reviews to the RAM configurations used for testing, but there is one more factor at play here.
 

Uh... is our Ryzen 5000 Review WRONG?? - WAN Show November 6, 2020


Some reviewers are still using an RTX 2080 Ti, or using a 3080 but testing at 1080p ultra settings. One would think that shouldn't matter much for 1080p testing, right? Not quite.
If we look at 1% and 0.1% low performance in the chart below (Gamers Nexus 5600X review), we can spot that all the new Ryzens perform better than the i9-10900K on average, yet Intel still has better lows:

[Chart: Gamers Nexus 5600X review — average FPS with 1% and 0.1% lows]

This appears to be the case across many games; the reason Ryzen comes out on top here is that the above test was done at 1080p medium, not ultra.
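For anyone unfamiliar with how those "lows" are produced: they come from the raw frametime log, not the FPS counter. Conventions vary between reviewers; a minimal sketch of one common approach (average FPS over the slowest N% of frames, with made-up frametimes) looks like this:

```python
def fps_metrics(frametimes_ms):
    """Summarize a capture the way review charts usually do:
    average FPS plus 1% and 0.1% lows (FPS over the slowest frames)."""
    n = len(frametimes_ms)
    avg_fps = 1000.0 * n / sum(frametimes_ms)
    slowest = sorted(frametimes_ms, reverse=True)  # worst frames first

    def low(pct):
        k = max(1, int(n * pct / 100))   # how many of the worst frames to keep
        return 1000.0 * k / sum(slowest[:k])

    return avg_fps, low(1), low(0.1)

# 997 smooth 5 ms frames (200 FPS) plus three 20 ms stutters:
times = [5.0] * 997 + [20.0] * 3
avg, low1, low01 = fps_metrics(times)
```

The point of the toy numbers: a handful of stutters barely move the average (~198 FPS) but crater the 0.1% low (50 FPS), which is why two CPUs can tie on average FPS yet look very different in the lows.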

Anandtech included results for various settings and resolutions, which makes for a helpful example:
F1 2019 at 768p ultra low, and the new Ryzens stomp Intel into the ground; the same game at 1080p ultra and... how come it's so blue at the top of the chart?

[Charts: Anandtech F1 2019 results — 768p ultra low vs 1080p ultra]


Conclusion:
It occurs to me that the new Ryzens have a much higher performance ceiling than we thought; the reason reviews end up being so different is an unexpected GPU bottleneck.
When the new Zen 3 CPUs and Intel's 10th gen are both limited by the GPU at the top end, Intel comes out on top due to its better low-frame-rate performance.
When there is no (or less of a) limit imposed by the GPU, even the Ryzen 5 5600X beats the i9-10900K in average FPS. This is especially visible in CS:GO benchmarks, one of the least GPU-intensive titles.
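The bottleneck argument above can be sketched as a toy model: the delivered framerate is roughly the minimum of what the CPU and the GPU can each sustain. All numbers below are made up for illustration, not taken from any review:

```python
# Toy model: delivered FPS is capped by whichever component is slower.
def delivered_fps(cpu_limit, gpu_limit):
    return min(cpu_limit, gpu_limit)

# Hypothetical CPU-bound framerates for two chips in one game:
cpu_limits = {"5600X": 230, "10900K": 210}

# At 768p ultra low the GPU could render ~400 FPS, so the CPU gap shows;
# at 1080p ultra it caps out around ~150 FPS, so both CPUs score the same.
low_res  = {cpu: delivered_fps(fps, 400) for cpu, fps in cpu_limits.items()}
high_res = {cpu: delivered_fps(fps, 150) for cpu, fps in cpu_limits.items()}
```

Under the 400 FPS cap the CPUs separate (230 vs 210); under the 150 FPS cap both read identically, which is how the same pair of CPUs can look dominant in one review and tied in another.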

It looks like low-frame-rate performance for Zen 3 CPUs improves by a mile with a faster RAM kit.
That's why in the LTT review the 1% low FPS results look on par with Intel: they used a 3600 MHz CL14 kit, unlike multiple reviewers who are using 3200 MHz kits.
 

Sources

- WAN Show
- Gamers Nexus 5600X review
- Anandtech review
- My own reflections after watching/reading a bazillion reviews


That doesn't make much sense; a slower GPU shouldn't be causing that. Even with the 3080, Intel has better lows.

I would say it's either the architecture, the RAM, or just poor optimization for the new architecture.



15 minutes ago, mbntr said:

That doesn't make much sense; a slower GPU shouldn't be causing that. Even with the 3080, Intel has better lows.

I would say it's either the architecture, the RAM, or just poor optimization for the new architecture.

 

Once the GPU bottlenecks, it comes down to how 1% and 0.1% low performance impacts average framerate. Intel does better in that department, though it appears to be heavily RAM-influenced.


1 minute ago, CarlBar said:

 

Once the GPU bottlenecks, it comes down to how 1% and 0.1% low performance impacts average framerate. Intel does better in that department, though it appears to be heavily RAM-influenced.

I'm sure that if there is enough noise, GN Steve will do a bunch of tests across RAM speeds, timings, and IF clocks to see how they impact things.


This is still interesting. Presuming there is no bug at play, it seems to suggest a notable difference in the "smoothness" of performance from Intel's current offerings.

Maybe this is nothing more or less than a symptom of monolithic vs chiplet design: even though Ryzen is faster 99% of the time, some accesses to some core complexes can end up slower by more than the performance delta between Intel and Zen 3. It might seem to make sense, but a convincing explanation doesn't make it right, of course. In more heavily GPU-bound situations, the tiny fluctuations that occur become accentuated relative to the whole, since the rest of the time the framerate is tied to a uniform level.



2 hours ago, RacA said:

It occurs to me that the new Ryzens have a much higher performance ceiling than we thought; the reason reviews end up being so different is an unexpected GPU bottleneck.

You can't say the GPU "bottleneck" is unexpected. It is one reason reviewers have to include insanely low resolutions/settings that no actual user would run, just to show the difference the CPU makes.

 

2 hours ago, RacA said:

That's why in the LTT review the 1% low FPS results look on par with Intel: they used a 3600 MHz CL14 kit, unlike multiple reviewers who are using 3200 MHz kits.

This opens up another debate: just what speed of RAM should be used for testing? For example, Anandtech's policy is to use the officially supported RAM speed of the CPU/platform. Anything above that is overclocking, and where do you draw the line on what to overclock, or by how much? I think the main reason many don't consider high-speed RAM to be an overclock is the relative ease of it: buy a highly rated kit, turn on XMP, and more often than not it probably works. It doesn't always work, though, and it still catches people out.

 

Even if we limit the scope to the PC gaming market, excluding office PCs for example, how much of that market actually runs above-spec RAM speeds? Reviewers have limited time, so testing every possible variation is not going to happen from the start. IMO any review should include a reference point at official speeds and, if they choose, something higher to indicate possible improvement. I don't know if anyone has done a wider RAM speed scaling test for Zen 3 yet, but I'll look forward to seeing one.

 

1 hour ago, CarlBar said:

Once the GPU bottlenecks, it comes down to how 1% and 0.1% low performance impacts average framerate. Intel does better in that department, though it appears to be heavily RAM-influenced.

Intel marketing: Best gaming 1% low results of any CPU, on every other Thursday, for game titles that have an even number of letters in their names.

 

I don't think they're desperate enough to go that route, but I couldn't resist :D 



It's worth noting that the RAM Linus used is indeed faster than the RAM GN used in their test.

3600 CL14 vs 3200 CL14: that's a first-word latency of about 7.78 ns vs 8.75 ns.

(Not to mention, GN used 4x8GB, which may introduce additional latency/burden on the memory controller compared to 2x8GB.)
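Those figures fall out of the standard CAS timing arithmetic: DDR4 does two transfers per clock, so the first word arrives after CL × 2000 / (transfer rate in MT/s) nanoseconds. A quick sketch:

```python
def cas_latency_ns(transfer_rate_mts, cl):
    # DDR transfers twice per clock, so one CL cycle lasts
    # 2000 / transfer_rate nanoseconds.
    return cl * 2000 / transfer_rate_mts

ltt_kit = cas_latency_ns(3600, 14)  # LTT's kit: ~7.78 ns
gn_kit  = cas_latency_ns(3200, 14)  # GN's kit:  8.75 ns
```

CAS latency is only the first-word figure, of course; full loaded memory latency (the ~70 ns number quoted elsewhere in this thread) includes the rest of the access path.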

 

There might be other factors at play, but it could explain why the LTT F1 results are so much closer/better than others'.

Though it's kind of hard to compare the two benchmarks when LTT tested at the highest preset but GN used Medium, for lord knows what reason. Not to mention they both used 1080p.

I get that they want to test the "CPU" and not the "GPU", but if they just crank everything up to the highest possible settings, once the CPUs start being the bottleneck we can still see the difference in performance between them if the same GPU is used, no?

[Charts: LTT and GN F1 benchmark results at 1080p]



16 minutes ago, TetraSky said:

I get that they want to test the "CPU" and not the "GPU", but if they just crank everything up to the highest possible settings, once the CPUs start being the bottleneck we can still see the difference in performance between them if the same GPU is used, no?

The higher the settings, the more of a GPU bottleneck can be introduced into the benchmark, depending on the game.
When the GPU starts to be the limiting factor, we're seeing flawed results: in this case we're not letting Zen 3 reach its max performance, and its average FPS ends up lower.


There's an art to teasing out CPU issues with games. You have to land your settings in the right place, where you're not just doing a GPU-driver-specific latency test (which is why 720p benchmarks end up paired with high-end GPUs). Too low a resolution and settings, and you're not really testing anything of note; too high, and you run into hard GPU bottlenecks. Each game is a little different, but somewhere between 1080p/medium and 1440p/high will generally do it. Gaming performance is still very much about draw calls and latency more than any actual IPC.

Most of this got kicked off by TechPowerUp's results, which look a lot like some kind of GPU driver issue they need to test. Also, both AMD and Intel are very memory-sensitive in high-end gaming, so the memory settings used are something to be mindful of.


If I had to guess, this may come down to an optimization issue that will probably be resolved in the near future. For a brand-new CPU, especially one with major architecture changes, I expect some oddness.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


1 hour ago, RacA said:

When the GPU starts to be the limiting factor, we're seeing flawed results: in this case we're not letting Zen 3 reach its max performance, and its average FPS ends up lower.

The results are not flawed as such. They represent what you get in a mixed CPU/GPU (and other components) environment. The flaw is trying to read such results as a CPU-only indication of performance; to get close to that, you have to run unrealistic scenarios, at which point the value needs to be questioned. If you want a pure CPU test, then pick a pure CPU test. That's why some love Cinebench: it scales very well with CPU cores/threads/IPC and is hardly impacted by RAM.



4 hours ago, leadeater said:

I'm sure that if there is enough noise, GN Steve will do a bunch of tests across RAM speeds, timings, and IF clocks to see how they impact things.

Didn't he mention offhand, in either the 5950X or 5900X review, that they would be doing memory testing later?


It would be interesting to see tests with various RAM kits side by side, both below 1080p and at 1080p low and high, just to get a more detailed picture.



I honestly feel like the standardized RAM kit any tech reviewer should consider using is a 3000 or 3200 MHz kit of either 2x16GB or 4x8GB at a low CL. Any more than that and you are giving AMD an unfair advantage. But if you start using 2666 MHz RAM, that is not an accurate representation of what PC and gaming enthusiasts will be buying. I believe both CPUs are on a more even playing field when the test bench has 3000 MHz RAM, and I don't believe it will start skewing results. Besides, we are gamers looking at gaming benchmarks, so we will avoid 2666 MHz like the plague.

Sure, performance improves on both sides if you increase RAM speeds, but we all know the IF clock can play an important role in Ryzen performance.

I am normally team Red, but I will give Intel the benefit of the doubt. Besides, I tend to ignore reviews that don't use a fair test bench or seem to have flawed testing methodology.



2 hours ago, Nathanpete said:

I honestly feel like the standardized RAM kit any tech reviewer should consider using is a 3000 or 3200 MHz kit of either 2x16GB or 4x8GB at a low CL. Any more than that and you are giving AMD an unfair advantage. But if you start using 2666 MHz RAM, that is not an accurate representation of what PC and gaming enthusiasts will be buying. I believe both CPUs are on a more even playing field when the test bench has 3000 MHz RAM, and I don't believe it will start skewing results. Besides, we are gamers looking at gaming benchmarks, so we will avoid 2666 MHz like the plague.

Sure, performance improves on both sides if you increase RAM speeds, but we all know the IF clock can play an important role in Ryzen performance.

I am normally team Red, but I will give Intel the benefit of the doubt. Besides, I tend to ignore reviews that don't use a fair test bench or seem to have flawed testing methodology.

Most people buy at least 3200, because anything lower doesn't even make sense when you can get a 3200 kit for super cheap. If people test with anything under 3200, then I would have to question the testing methodology, as it would be purposely hampering performance. You can even find 3600 MHz kits at super affordable prices, so idk why anyone would be upset about testing with those kits. If it were an excessive speed like 4000 MHz, which costs significantly more and most won't buy, then I could understand, but at 3600 you can easily buy a kit for around the same price as lower-clocked kits.


Yeah, I remember some reviews benchmarking CS:GO at around 1200 FPS max / 720 FPS average at 1080p.

Seems like memory latency matters more than memory clocks for this generation.



8 hours ago, Brooksie359 said:

Most people buy at least 3200, because anything lower doesn't even make sense when you can get a 3200 kit for super cheap. If people test with anything under 3200, then I would have to question the testing methodology, as it would be purposely hampering performance. You can even find 3600 MHz kits at super affordable prices, so idk why anyone would be upset about testing with those kits. If it were an excessive speed like 4000 MHz, which costs significantly more and most won't buy, then I could understand, but at 3600 you can easily buy a kit for around the same price as lower-clocked kits.

Did you not consider the fact that 3600 MHz gives AMD an unfair advantage over Intel? I'm not team blue; I just want an even playing field. 3200 MHz is a more realistic option for gamers, but I believe it would still give AMD a small unfair advantage via the Infinity Fabric, hence why I also suggested 3000 MHz. And a small unfair advantage is significant when we make our buying decisions based on performance differences of around 2%.



1 minute ago, Nathanpete said:

Did you not consider the fact that 3600 MHz gives AMD an unfair advantage over Intel? I'm not team blue; I just want an even playing field. 3200 MHz is a more realistic option for gamers, but I believe it would still give AMD a small unfair advantage via the Infinity Fabric, hence why I also suggested 3000 MHz. And a small unfair advantage is significant when we make our buying decisions based on performance differences of around 2%.

You're making it sound like Intel doesn't benefit from RAM at all. Which raises the question: has AMD been nerfed this entire time by testers' poor choice of RAM? Shit swings both ways...


27 minutes ago, RejZoR said:

You're making it sound like Intel doesn't benefit from RAM at all. Which raises the question: has AMD been nerfed this entire time by testers' poor choice of RAM? Shit swings both ways...

AMD still only officially supports 3200; anything above that is an overclock. If faster RAM is relatively easy and attainable, it is fine to run faster on both sides, but baseline (supported) speeds should also be a reference point. If anything, it's AMD that is pushing it a bit, since they openly stated 3600 was the performance sweet spot for Zen 2, so that was the speed grade people gravitated towards.



2 hours ago, porina said:

AMD still only officially supports 3200; anything above that is an overclock. If faster RAM is relatively easy and attainable, it is fine to run faster on both sides, but baseline (supported) speeds should also be a reference point. If anything, it's AMD that is pushing it a bit, since they openly stated 3600 was the performance sweet spot for Zen 2, so that was the speed grade people gravitated towards.

I mean, X99 also officially supported only 2400 MHz. How many people actually ran that? If the price difference isn't big, it's natural to take the faster kit with lower timings.


I'm pretty sure I found the reason for the wild review swings.

There is a MAJOR issue I reported and confirmed about the MSI BIOS here.

Basically, the gist is that the current iteration of MSI's 500-series BIOS causes WHEA errors with Zen 3 above 3200 MHz memory frequency, and memory latency above 100 ns (it should be sub-70 ns at least).

Looking at the reviews with major deviation, like Hardware Unboxed and Anandtech, they all share one thing in common: MSI motherboards.

[Table: motherboards used by each review outlet]

 

Issue found: the deviance is caused by a buggy BIOS.

These benchmarks need to be redone on non-MSI motherboards.


8 hours ago, Nathanpete said:

Did you not consider the fact that 3600 MHz gives AMD an unfair advantage over Intel? I'm not team blue; I just want an even playing field. 3200 MHz is a more realistic option for gamers, but I believe it would still give AMD a small unfair advantage via the Infinity Fabric, hence why I also suggested 3000 MHz. And a small unfair advantage is significant when we make our buying decisions based on performance differences of around 2%.

No it wouldn't. It's not an unfair advantage when you can buy 3600 MHz memory for around the same price as lower-speed kits. If you run lower-speed kits just to make sure AMD doesn't perform as well, that would be giving Intel an unfair advantage. Idk why you feel reviewers should purposely hamper the performance of AMD chips so that people are misinformed and more likely to buy Intel, rather than learn the true potential of AMD chips. You have to realize most of the people watching these videos are tech enthusiasts, and they are more than likely going to buy 3600 MHz kits because it's known to be good for AMD.


2 hours ago, Brooksie359 said:

No it wouldn't. It's not an unfair advantage when you can buy 3600 MHz memory for around the same price as lower-speed kits. If you run lower-speed kits just to make sure AMD doesn't perform as well, that would be giving Intel an unfair advantage. Idk why you feel reviewers should purposely hamper the performance of AMD chips so that people are misinformed and more likely to buy Intel, rather than learn the true potential of AMD chips. You have to realize most of the people watching these videos are tech enthusiasts, and they are more than likely going to buy 3600 MHz kits because it's known to be good for AMD.

Additionally, reviewers have in the past gone through the effort of finding the most optimal memory configurations for Intel systems, and have standardized on what is known to be the overall best RAM configuration for an Intel system. It is totally fair to do the same for AMD, so they know what to standardize on there too. It's not like reviewers just choose RAM at random or guess; they actually tested it, or looked at someone else who has, and used that information to create their standardized test system.

If using faster RAM in an Intel system resulted in performance above the margin of error, they would be using it; reviewers are not short on RAM, or on vendors willing to supply it to them.


Linustechtips.com, where not running AMD at overclocked settings while Intel runs at stock is "unfair"...

Maybe people should dial back their fanboyism a little? You're literally advocating for using overclocked settings on one platform but not the other.

Why not agree on using the highest RAM clock supported by both platforms? At least for the "stock" tests.

