
AMD says these are the same... We DISAGREE... - Testing 12 of the Same CPUs Video and Process Doc

AdamFromLTT

 

 

Do you really know if your CPU is performing the same as the ones we review? We don’t know. But we know that if we want to increase our testing capacity, we need to PARALLELIZE. But that means we need nearly identical test benches. And trying to make that happen sent us down a far deeper rabbit hole than we could have anticipated.

--------------------------------------------
“Testing 12 of the Same CPUs”

 

AMD CPU Variance Testing Documentation

 

Testing Dates: 2023-06-04 to 2023-08-01

AMD-CPUVAR-2023

For CPU variance testing, all hardware on the bench remains the same. The same motherboard, graphics card, SSD, and other system components are used for all testing across all chips to ensure that the only variable is the CPU itself. We verify that the fans for the Noctua cooler are installed in the same spot every time. Every time we change a CPU, we go into the BIOS, load the default settings, save and restart, then go back in and apply our standard configuration (EXPO on, Re-BAR on, iGPU disabled, fan curves set appropriately) and save and restart again.

 


Picture 1: Test Bench hardware configuration

 

After testing all of the CPUs, we use one as the “control” CPU to test the variance of the 3 motherboards and 3 sets of RAM. When testing the motherboards, we utilize the same RAM used in the CPU variance testing. When testing the RAM sets, we utilize the motherboard that was used in the CPU variance testing. This is to see what (if any) variance there will be between the 3 benches. Again, for every hardware change, we repeat the same BIOS adjustments as above.

 

All testing should be done within the environmental chamber at 20°C with the test bench oriented in the same direction for purposes of airflow. There will be taped markings on where to align the test bench to ensure that it is located in the same spot after any hardware changes.

 


Picture 2: PTM7950 pad orientation and size used for test reference.

 

For testing, we will be using PTM7950 pads cut to the same size for each CPU to take the guesswork out of thermal paste application. When using the PTM pad, make sure to heat-soak it before testing.

To help reduce run-to-run variance, we will be using Bill2's Process Manager to run all non-game applications at High priority (not Realtime, as that can crash the system in some instances). The profiles for each application's exe need to be set up in Process Manager to ensure that they run at High priority every time during testing. We verify that these profiles are all working as intended before collecting data for the project. This is especially important for Cinebench R23, which sets itself to Below Normal priority by default; if we wait for the program to finish initializing on open before changing the priority, it will try to set itself back to Below Normal again.
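For illustration only (the actual process uses Bill2's Process Manager profiles, not a script), a minimal Python sketch of the same idea using psutil on Windows; the exe name is a hypothetical example:

```python
# Hypothetical sketch: re-apply High priority to every running instance of a
# benchmark exe, similar in spirit to a Bill2's Process Manager profile.
import psutil

def set_high_priority(exe_name: str) -> int:
    """Set every process whose image name matches exe_name to High priority."""
    count = 0
    for proc in psutil.process_iter(["name"]):
        if (proc.info["name"] or "").lower() == exe_name.lower():
            try:
                # High, not Realtime, to avoid the instability mentioned above.
                proc.nice(psutil.HIGH_PRIORITY_CLASS)
                count += 1
            except (psutil.AccessDenied, psutil.NoSuchProcess):
                pass  # some processes may require an elevated prompt
    return count

if __name__ == "__main__":
    # Example: Cinebench resets itself to Below Normal after initializing,
    # so the priority has to be re-applied afterwards.
    print(set_high_priority("Cinebench.exe"), "process(es) updated")
```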

 

To start setting up the system we will need the following hardware:

 

Graphics Card: NVIDIA RTX 4090 FE (Asset Tag: C2940)

Storage: Samsung 980 Pro 2TB (Asset Tag: C5765)

Cooling: Noctua NH-D15 + 2x NF-A15 PWM fans (Asset Tag: C3382)

Thermal Interface Material: PTM7950

Motherboard: Gigabyte X670E Aorus Xtreme (Asset Tag: C3511)

Memory: 2x 16GB G.Skill Trident Z5 Neo 6000 MT/s CL30-38-38-96 (SN 20032 & 20031)

Power Supply: MSI MEG Ai1300P (Asset Tag: C2947)

CPU: 12x Ryzen 7 7800X3D (Asset Tags: C4782, C5281, C5371, C5383, C5424, C5504, C5530, C5540, C5546, C5643, C5660, C5715)

 

Everything will be installed on the same Open Benchtable V2 for portability. We start by installing the power supply, motherboard, and SSD on the bench. We will start testing with CPU C4782, so install that CPU into the socket. Finish building the system (initially with regular thermal paste, as the PTM will be installed after system configuration is complete), update the BIOS to version F9a, set the BIOS to our standard configuration, and then install the custom Windows image used for all our test benches.

Once the image has finished installing, we install the latest drivers for the hardware: the chipset drivers from AMD's site, and from Gigabyte's site any others that did not install correctly. We then go through installing all required programs and games used for this project. Some are self-contained in MarkBench, but we will be testing the following:


[Pictures: lists of the benchmark applications and games used for this project]

 

After installing all of the applications and games, we apply all currently available Windows updates; once they have been installed, we pause Windows updates for 5 weeks to ensure that there are no changes to the operating system.

We install Bill2's Process Manager as mentioned above and set profiles for all the exes that run the benchmarks (some applications launch a separate exe that contains the benchmark's load). If unsure what the exe is called, we run the benchmark and check Task Manager to see which exe is running and where it is located. Once this has been set up, we clear all temp files in the %temp% folder, optimize the SSD, set the Recycle Bin size to 15MB, and then shut down the system so we can prepare the CPU for its testing.
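As an aside, that "which exe is actually running the load?" check can also be scripted; a minimal sketch with Python's psutil (not part of the documented process, and the keyword is a hypothetical example):

```python
# Hypothetical helper: list running processes whose name contains a keyword,
# along with their executable paths, as an alternative to Task Manager.
import psutil

def find_benchmark_exes(keyword: str):
    matches = []
    for proc in psutil.process_iter(["name", "exe"]):
        name = proc.info["name"] or ""
        if keyword.lower() in name.lower():
            matches.append((name, proc.info["exe"] or "<access denied>"))
    return matches

if __name__ == "__main__":
    for name, path in find_benchmark_exes("bench"):
        print(f"{name} -> {path}")
```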

 

CPU VARIANCE TESTING

We remove the NH-D15, clean all thermal paste off the CPU and heatsink with isopropyl alcohol to ensure that the surfaces are free of any residue, and then we install the PTM pad.

Install the PTM pad by first peeling off one of the protective plastic films and aligning the pad as closely as possible to the image above. Make sure to squeeze out any air bubbles when applying it to the CPU before removing the second protective film. Removing that film is easiest with a set of tweezers and a guitar pick or spudger, which keeps the pad from peeling back off the CPU's surface along with the film.

 

After installing the PTM pad, we install the Noctua NH-D15 oriented so that the Noctua logo reads in the same direction as the wording on the Aorus Xtreme chipset heatsink, tightening the heatsink screws until they stop.


Picture 3: Noctua NH-D15 orientation

 

Once the heatsink and fans have been reinstalled, boot the system back into the BIOS. As described above, we load the default BIOS settings, save and restart, then go back in and apply our standard configuration (EXPO on, Re-BAR on, iGPU disabled, fan curves set appropriately) and save and restart again. Run OCCT for 30 minutes to let the pad flow as it should, shut the system off for 30 minutes to let the PTM cool, then turn the system back on and begin testing.

 

When running non-game benchmarks, we ensure that all other applications such as Steam are closed in the tray, keeping only Bill2's Process Manager open, using MarkBench for any of our automated non-game benchmarks and closing it for any that do not use it. Steam updates can arrive at any time and invalidate a run, so Steam must remain closed for all non-game tests. Conversely, when running the game benchmarks, ensure that no other application windows are open and nothing else is running in the tray aside from Steam and MarkBench, to ensure run-to-run consistency.
 

For all MarkBench tests, we ensure that we tag the sessions with the appropriate asset tag for traceability. For any screenshots, label the screenshot with the appropriate application name, run number and asset tag code for the CPU.

 

After any major test (such as SpecWS), we restart the system to flush the RAM to ensure consistency. Every time we restart the system, we make sure to wait for about 5 minutes to let the OS do any required system load checks. 

 

Once all tests have been completed and before removing the CPU, we ensure that all of the runs are valid. If a CPU needs to be reinstalled after it has been removed, we will have to redo ALL of its tests, as we have effectively changed the “configuration”.

 

After finishing testing the first CPU and verifying all the runs are valid, we shut down the system and let it cool for approximately 15 minutes to allow the PTM to return to a solid state. This helps with cleanup, as it is much easier to scrape most of the hardened PTM off the CPU than to deal with it as a liquid or paste. We remove the heatsink from the CPU, scrape all the PTM from the heatsink and CPU using a plastic spudger or guitar pick, then clean both surfaces again with isopropyl alcohol. After cleaning them, remove the CPU and repeat the process with the next CPU. After installing a new CPU, we reinstall the chipset drivers, followed by a restart, to verify that the X3D's V-Cache is working optimally before running the tests.

 

RAM VARIANCE TESTING


Once all CPU tests have been completed, the last CPU tested (C5715) stays in the motherboard and we use it to check the variance of the RAM next. In a similar process to the CPU variance testing, we simply swap the RAM kits on the motherboard while leaving the same CPU in the system. This should be easier to control, as we only need to remove one fan from the Noctua cooler to get access to the RAM slots.

The RAM kits we will need to test variance vs our current set used to test the CPUs are: SN ending 20034 & 20033 and SN ending 73909 & 73910 (Asset Tags: C5914 & C5915). 


We remove the first set of RAM used for the CPU variance testing, making sure to note which set it is and what order it was installed in (SN ending in 20031 & 20032), as we will be using that set for the motherboard variance testing later. Then install the next set of RAM and repeat all the tests listed above. As with the CPU swaps, reset the BIOS and reapply the standard settings. This helps ensure that the RAM is running optimally.

 

For any MarkBench tests, we tag the sessions with the CPU asset tag as well as the RAM used. For any screenshots, we label them with the appropriate application name, run number, CPU asset tag, and the last 5 digits of the RAM serial numbers for traceability purposes.

 

Before removing the RAM, all runs are validated (as with the CPU testing), since we want to leave the system configured as-is to control even the potential variable of the RAM sticks being installed in opposite slots.

 

MOTHERBOARD VARIANCE TESTING


Once all RAM tests have been completed, we check for variance between the three motherboards we will be using for our benches, using the CPU (C5715) and RAM (SN 20032 & 20031) from the CPU variance testing. This is the test where installation differences could play more of a factor; however, using the same cooler, CPU, and RAM (always installed in the same slot order) along with PTM pads should limit any installation variance.

The motherboards we will need to test variance vs the one used for the CPU testing are: C2941 and C5378.

 

We will have to tear down most of the system to test the motherboard swaps. Don't forget to take the SSD out of the system! We pay close attention to the orientation of the heatsink and its motherboard mounts, the RAM order, and that the SSD is installed in the same slot.


Once all the other system components have been installed, including the PTM, we boot up the system. We verify in the BIOS that the motherboard is running version F9a and flash it if not. As with the CPU swaps, reset the BIOS and reapply the standard settings. This helps ensure that the system is configured the same as on the other motherboards.

 

We tag the sessions with the CPU asset tag and the appropriate motherboard asset tag for any MarkBench tests. For any screenshots, label the screenshots with the appropriate application name, run number, CPU asset tag, and motherboard asset tag for traceability purposes.

 

Before removing any of the hardware from the system, we verify that all runs are valid (as with the CPU testing), since we want to leave the system configured as-is to control the potential variable of hardware being reinstalled ever so slightly differently.






 

End of Documentation
 


Hey Labs,

I know you have programmers on the team. Please look into NtSuspendProcess from ntdll.dll; it should allow you to suspend individual processes during testing and help reduce background performance drains without having to gut a Windows install, which I know was brought up in the video as a concern.
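For anyone curious, a rough sketch of what that could look like from Python via ctypes. Note that NtSuspendProcess/NtResumeProcess are undocumented ntdll exports, so this is an assumption-laden example rather than a supported API, and it needs sufficient privileges:

```python
# Rough, hypothetical sketch of the suggestion above. NtSuspendProcess and
# NtResumeProcess are undocumented, so behavior isn't guaranteed across
# Windows versions; use at your own risk.
import ctypes

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = ctypes.c_void_p

PROCESS_SUSPEND_RESUME = 0x0800

def _with_handle(pid: int, func) -> None:
    handle = kernel32.OpenProcess(PROCESS_SUSPEND_RESUME, False, pid)
    if not handle:
        raise ctypes.WinError(ctypes.get_last_error())
    try:
        func(handle)
    finally:
        kernel32.CloseHandle(handle)

def suspend_process(pid: int) -> None:
    _with_handle(pid, ntdll.NtSuspendProcess)   # freezes all threads of the process

def resume_process(pid: int) -> None:
    _with_handle(pid, ntdll.NtResumeProcess)
```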



 


IMPRESSIVE. What you have at 14:36 is very interesting: applying geometry, basically a variant of the Pythagorean theorem, to find the distance between points in a 12-dimensional space. The script for this reads like... like it could be a scientific paper. Like if you wrote it up in the proper format and submitted it to a journal, they'd at least send it out for peer review.

If you all really want to apply some pressure, maybe consider doing that. If it gets published, it's not just a YouTube video, it's a study that can't be ignored.

 


Even though Factorio was just an unexpected example, its testing was still far from perfect, mainly because, in the case of Factorio, anything well above 60 UPS does not really matter (it is not FPS, after all) and is highly inconsistent. Factorio benchmarks should be run on saves that DO drop even a 7800X3D below 60 UPS, or at least below 80-90 UPS (they do exist; the easiest way is to just ask in the official Factorio Discord server. Flame_Sla's 30k SPM or 50k SPM belt maps are decent options, though). Not that this is really critical, but if there will be more Factorio tests, please consider changing the map to get more realistic results.


Hi There, 

 

Just finished watching the video and I have 2 questions.

 

1. Why is LTT keeping major videos like "How LTT Labs Test Products" behind the Floatplane paywall? Doesn't this go against them being open and sharing the Labs process with the community?

I get Floatplane is something they're trying to push and they have a lot of exclusive content there, but I would think sharing this video with the 99% of the community would be more in line with the community spirit they are going for. IMO

 

2. Since GamersNexus made the video that caused issues at LTT, it seems like they are going out of their way not to talk about GamersNexus or even mention them. I don't keep up with the drama going on between YouTube channels; it's just an observation, and I am happy to be wrong if I've missed something.

 

Thanks


6 minutes ago, joshfrog said:

1. Why is LTT keeping major videos like "How LTT Labs Test Products" behind the Floatplane paywall? Doesn't this go against them being open and sharing the Labs process with the community?

I get Floatplane is something they're trying to push and they have a lot of exclusive content there, but I would think sharing this video with the 99% of the community would be more in line with the community spirit they are going for. IMO

I also thought about this any time Linus would bring up doing an interview with the new CEO, ever since he said he wanted to. Having such a big community-interest video locked behind the Floatplane paywall just seems to me to show less interest in the community and more of a push to get people onto Floatplane.


Distance equation update: Euclidean distance was the chosen distance. Mahalanobis distance is better for higher-dimensional calculations because it is also unitless. That means you don't need to transform the data, so Cinebench won't overshadow the rest. Mahalanobis distance takes a point and the distribution of the data. Since you have done all this testing, you have a good idea what that distribution is. In practice I have done this and have only used a Gaussian distribution; that should work here too. Once you calculate it, the closer to zero, the closer the CPU is to the mean.
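A minimal sketch of that idea, assuming a hypothetical 12-by-3 matrix of raw benchmark scores (rows are CPUs, columns are benchmarks in their original units):

```python
# Hypothetical data: 12 CPUs x 3 benchmarks in raw, differently-scaled units.
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(0)
scores = rng.normal(loc=[18000.0, 270.0, 95.0], scale=[150.0, 3.0, 1.0], size=(12, 3))

mean = scores.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))

# Distance of each CPU from the fleet mean; closer to zero = closer to average.
# No rescaling is needed, so a big-number benchmark (e.g. Cinebench) does not
# drown out the others.
for i, row in enumerate(scores):
    print(f"CPU {i}: {mahalanobis(row, mean, cov_inv):.2f}")
```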


For the first time, I found the graphs shown in the video not so helpful. Linus says there's a difference between the CPUs, but the graphs show the same number, really??? You could at least change the range; for example, instead of a 0-100 y-axis, the graphs could use 99-100 if all the numbers vary between 99 and 100, with the scale noted in the corner of the graph image. How much more useful and representative that would be.


6 hours ago, joshfrog said:

2. Since GamersNexus made the video that caused issues at LTT, it seems like they are going out of their way not to talk about GamersNexus or even mention them. I don't keep up with the drama going on between YouTube channels; it's just an observation, and I am happy to be wrong if I've missed something.

Personally, that's what I'd do if I were LTT. Regardless of how good or bad the relationship is privately between the two parties, I'd just put GamersNexus on an internal "taboo topics" list for public videos to avoid any kind of press that could come of that. It's just simpler and less risky for the business' public image, nothing personal. Probably things would go on that list regularly (at least for a time) when any subject is found to be overly controversial and just isn't a hill worth dying on.


Regarding the "rant" Linus did near the end, about 100s of parts needed for conclusive tests. What if LTT crowd-sources the hardware for these tests with huge amounts of parts? They could let people buy the part on their website, get it for the Labs, use it there for some time and then finally ship to the buyer. Of course with full transparency and all that. I feel like a lot of people would like to help the cause and would be willing to wait a little longer for their part. It's a logistical nightmare, but still way cheaper than actually buying all the parts. And you could spice it up with something extra like some exclusive LTT merch for people who order parts this way. I know it's probably too hard to implement and isn't worth it, but still. Theoretically possible 🤷‍♂️


2 hours ago, smcoakley said:

Personally, that's what I'd do if I were LTT. Regardless of how good or bad the relationship is privately between the two parties, I'd just put GamersNexus on an internal "taboo topics" list for public videos to avoid any kind of press that could come of that. It's just simpler and less risky for the business' public image, nothing personal. Probably things would go on that list regularly (at least for a time) when any subject is found to be overly controversial and just isn't a hill worth dying on.

"Don't mention the war!"

 

2 hours ago, Naveen Prashanth said:

For the first time, I found the graphs shown in the video not so helpful. Linus says there's a difference between the CPUs, but the graphs show the same number, really??? You could at least change the range; for example, instead of a 0-100 y-axis, the graphs could use 99-100 if all the numbers vary between 99 and 100, with the scale noted in the corner of the graph image. How much more useful and representative that would be.

I see where you are going with this, but I have to disagree. Cutting off a major part of a bar chart to exaggerate the differences can easily be extremely misleading in terms of how big the differences actually are.

Absolute numbers are better unless you are very specific and careful.



As a bit of an Excel nerd and an academic, the whole number-crunching and testing discussion was very satisfying to listen to. I'm glad you found a workable solution (until some wildcard just flips the table on you 😉).



I found the video quite interesting, but it seems like this got past the editors: the 1% lows and averages are switched between 11:41 and 11:56, and the same happens between 12:13 and 12:22. It seems like either the legend wasn't changed or the colors were switched around, compared to 8:33.


I'm not seeing anything in the process doc about any steps you might be taking to ensure consistent and repeatable mounting pressure for the CPU and heatsink. That could be affecting thermal transfer and causing slight variances.


Does LTT post their 3D print files?  Those CPU holders look very useful. 

"And I'll be damned if I let myself trip from a lesser man's ledge"


10 minutes ago, lordcheeto said:

I'm not seeing anything in the process doc about any steps you might be taking to ensure consistent and repeatable mounting pressure for the CPU and heatsink. That could be affecting thermal transfer and causing slight variances.

I don't think that's too relevant in this case. They use Noctua coolers with the SecuFirm mounting system, which limits the maximum mounting pressure. As long as the screws are fully tightened every time, there shouldn't be much, if any, variation.


17 hours ago, rcmaehl said:

Hey Labs,

I know you have programmers on the team. Please look into NtSuspendProcess from ntdll.dll; it should allow you to suspend individual processes during testing and help reduce background performance drains without having to gut a Windows install, which I know was brought up in the video as a concern.

If they really wanted to measure the CPU performance in isolation, it would be much better to have a server-like setup... no GPU, no Windows, just a very lightweight Linux distro and an SSH connection.

I realize all of their tests and benchmarks require Windows... sadly... but it would get rid of many variables that affect the final result. And you can definitely benchmark a CPU in ways other than just video games and Cinebench.

Code compilation, big data processing, difficult calculations like factorization or pi computation etc.


15 hours ago, westfarn said:

Distance equation update: Euclidean distance was the chosen distance. Mahalanobis distance is better for higher-dimensional calculations because it is also unitless.

The advantage of the method they chose is that Euclidean distance is relatable to the average person who knows a bit of algebra. If you know a^2 + b^2 = c^2, then you can imagine just adding more terms to that equation to derive a 12-dimensional distance between points. In short, the audience has a prayer of getting it without needing any advanced knowledge.
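To make that concrete, a tiny sketch with made-up, already-normalized scores for two CPUs across 12 benchmarks:

```python
# The Pythagorean idea generalized: Euclidean distance between two CPUs'
# benchmark vectors (hypothetical, already-normalized scores).
import math

cpu_a = [0.98, 1.00, 0.97, 0.99, 1.00, 0.96, 0.98, 0.99, 1.00, 0.97, 0.98, 0.99]
cpu_b = [1.00, 0.99, 0.98, 1.00, 0.98, 0.97, 0.99, 1.00, 0.99, 0.98, 0.97, 1.00]

# d = sqrt((a1-b1)^2 + (a2-b2)^2 + ... + (a12-b12)^2), i.e. a^2 + b^2 = c^2
# with more terms.
distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(cpu_a, cpu_b)))
print(f"12-dimensional distance: {distance:.4f}")
```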

 

12 minutes ago, geeshta said:

If they really wanted to measure the CPU performance in isolation, it would be much better to have a server-like setup... no GPU, no Windows, just a very lightweight Linux distro and an SSH connection.

Your idea would probably be simpler in a lot of ways too. 

12 minutes ago, geeshta said:

I realize all of their tests and benchmarks require Windows... sadly... but it would get rid of many variables that affect the final result. And you can definitely benchmark a CPU in ways other than just video games and Cinebench.

Code compilation, big data processing, difficult calculations like factorization or pi computation etc.

Perhaps, but video games and Cinebench are things that both they and their audience know and can relate to.

 

I feel like they made this video complex enough without losing touch with the casual audience.


Stop asking for more government and regulations, Linus; I don't want these chips to become even more expensive. This task didn't fall on YouTubers, you guys wanted this task and also turned it into a job.

Nobody in real life really cares about this; people want to play their games in peace, and 8% on this performance scale means nothing. We're not in the FX 8350 era anymore, where each frame counted... This only means something for overclockers and spec enthusiasts who play Afterburner first and the game second.



On 1/25/2024 at 9:04 PM, AdamFromLTT said:

After installing the PTM pad, we install the Noctua NH-D15 oriented so that the Noctua logo reads in the same direction as the wording on the Aorus Xtreme chipset heatsink, tightening the heatsink screws until they stop.

 

Was it validated that screw torque won't make a difference?

 

Otherwise the instructions should call out an exact tightening pattern for the screws and torque values.

Further, if the data is supposed to be very reliable, the SOP might include testing the torque driver before and after the tests to make sure the tool stayed within calibration for all tests.

 

 

@AdamFromLTT Watching the video: With questions like this it's critical to understand significant figures: https://en.wikipedia.org/wiki/Significant_figures

As an example: 187 and 187.0 isn't the same number.

 

In the video, the exact same dataset is shown (chart) but the claimed spread and standard deviation are different.

I suppose internally the calculation was made with the "exact" values (in the sense of all the digits the spreadsheet returned), while for illustration they were rounded (?) to 3 digits.
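To illustrate that point with made-up numbers (not the video's data): computing spread and standard deviation from full-precision values versus from the rounded values a chart label might show gives different figures for the "same" dataset.

```python
# Hypothetical numbers only, echoing the 187 vs 187.0 example above.
import statistics

full_precision = [187.042, 186.981, 187.115, 186.937, 187.063, 186.998]
rounded = [round(x, 1) for x in full_precision]  # what a chart label might show

for label, data in (("full precision", full_precision), ("rounded", rounded)):
    spread = max(data) - min(data)
    stdev = statistics.stdev(data)
    print(f"{label:>15}: spread={spread:.3f}  stdev={stdev:.3f}")
```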


 

With this out of the way, as a viewer you have to doubt any number that follows. For example, do you truly have 3 significant digits in the Euclidean distance, or is that just an artistic impression of the data? Who says it won't really be just two or one digit? In that case the entire chart and conclusion would fall apart, as you couldn't distinguish between a 4.95, 4.98 and 4.99 score.

Overall, no uncertainty estimation or calculations were shown, which makes me doubt they were done at all and pushes the work further toward being debatable.

TL;DR: Every result/measurement has an uncertainty. As such, a value without an uncertainty is worthless and not good scientific practice.



On 1/25/2024 at 4:08 PM, Uttamattamakin said:

IMPRESSIVE. What you have at 14:36 is very interesting: applying geometry, basically a variant of the Pythagorean theorem, to find the distance between points in a 12-dimensional space.

https://en.wikipedia.org/wiki/K-means_clustering#:~:text=k-means clustering is a,a prototype of the cluster.

 

It's not a new idea. 

It's also not terrible to use in this case as each benchmark realistically SHOULD have similar weight. The usual issue occurs when each coordinate is arbitrary. 

 

 

23 hours ago, westfarn said:

Distance equation update: Euclidean distance was the chosen distance. Mahalanobis distance is better for higher-dimensional calculations because it is also unitless. That means you don't need to transform the data, so Cinebench won't overshadow the rest. Mahalanobis distance takes a point and the distribution of the data. Since you have done all this testing, you have a good idea what that distribution is. In practice I have done this and have only used a Gaussian distribution; that should work here too. Once you calculate it, the closer to zero, the closer the CPU is to the mean.

Probably doesn't matter since they scaled the scores. I'm assuming something like min -> 0, max -> 1 with interpolation in between.
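That assumed scaling would look something like this (hypothetical scores, just to show the min -> 0, max -> 1 mapping):

```python
# Min-max scaling: the worst chip maps to 0.0, the best to 1.0.
def min_max_scale(values):
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

cinebench_scores = [17850, 17990, 18120, 17910, 18060]  # made-up numbers
print([round(v, 3) for v in min_max_scale(cinebench_scores)])
```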

 


2 hours ago, cmndr said:

Probably doesn't matter since they scaled the scores. I'm assuming something like min -> 0, max -> 1 with interpolation in between.

So they normalized all the vectors to 1.  Something like a Hilbert Space.    There are many complicated ways to do this and talk about it.  

Relating it back to math everyone knows is a very good way to teach it and talk about it. You know, building bridges to cross.


    
16:06
Wait, do they really not control the secondary timings on the RAM? You want to have them entered manually, all of them.
 

   
 
 
 

Really stupid basic question here.

Why weren't the test rig CPUs (when testing non-CPU stuff) all locked to the same frequencies with the same memory timings for consistency? Disable thermal-based clock speed limits and just nerf the CPU to only run PBO with no XFR, or whatever they're calling it now. Yes, you'll get slightly lower FPS results, but you're testing the GPU, not the CPU, so you should still get valid data even if you're a couple percent lower than other people running a benchmark on a single system with a 'golden sample' CPU. I thought this whole Lab thing was about producing large amounts of consistent data, but then you're not controlling very controllable variables that seem like exactly the kind of thing you'd want to control?


On 1/27/2024 at 10:54 AM, Bitter said:

Why weren't the test rig CPUs (when testing non-CPU stuff) all locked to the same frequencies with the same memory timings for consistency?

We still want our test results to reflect the average consumer's experience.

