
We were WRONG about RAM – Or were we?

AdamFromLTT

For CPUs back in 2013, we said RAM speed didn’t matter. But as CPUs and GPUs have gotten faster and we move to DDR5, everyone’s suddenly very interested in memory. What changed, and were we wrong then, too?

 

Buy an Intel Core i9-12900K: https://geni.us/F14VY0

Buy an ASUS Maximus Z690 Hero: https://geni.us/OpTR1c

Buy an RTX 3080: https://geni.us/TKBrXv

Buy a DDR5 6000MHz RAM kit: https://geni.us/JiANMU

 

Purchases made through some store links may provide some compensation to Linus Media Group.

 


 

In the video they call the GK104 anemic compared to GP106; however, tier-for-tier it's GP106 that's more anemic, as it's a lower tier than GK104. Imagine how much faster the 1060 (6GB, the REAL 1060) would have been had it had the mid-range GP104 chip it should have had.


No, performance does not matter here: it's NOT mid-range performance, because it's not on a mid-range chip. The real mid-range performance is in the 1070 and 1080, which do indeed have GP104.

 


AMD Ryzen 5 5600 | MSI B450 Tomahawk | Corsair LPX 16GB 3000MHz CL16 | XFX RX 6700 XT QICK 319 | Corsair TX 550M 80+ Gold PSU


37 minutes ago, xFluing said:

 

In the video they call the GK104 anemic compared to GP106; however, tier-for-tier it's GP106 that's more anemic, as it's a lower tier than GK104. Imagine how much faster the 1060 (6GB, the REAL 1060) would have been had it had the mid-range GP104 chip it should have had.


No, performance does not matter here: it's NOT mid-range performance, because it's not on a mid-range chip. The real mid-range performance is in the 1070 and 1080, which do indeed have GP104.

 


Merged to official video thread.

^^^^ That's my post ^^^^
<-- This is me --- That's your scrollbar -->
vvvv Who's there? vvvv


40 minutes ago, LogicalDrm said:

Merged to official video thread.

Thanks, had no idea this was a thing.

AMD Ryzen 5 5600 | MSI B450 Tomahawk | Corsair LPX 16GB 3000MHz CL16 | XFX RX 6700 XT QICK 319 | Corsair TX 550M 80+ Gold PSU


So correct me if I'm wrong, but to me this looks exactly like what most people have been saying these last few years: faster RAM is better, but within reason. One shouldn't really go with DDR4-2400 (heck, my DDR3 is that fast, lol), but at the same time paying a large premium for DDR4 memory over 4000MHz shouldn't really give you a noticeable improvement either, so it's best to stick to something in between.

 

Obviously DDR5 is much newer and therefore the range of available speeds isn't as big yet and the price premiums for higher speeds are bigger, but I would expect it to pan out in a very similar fashion.

Meanwhile in 2024: Ivy Bridge-E has finally retired from gaming (but is still not dead).

Desktop: AMD Ryzen 9 7900X; 64GB DDR5-6000; Radeon RX 6800XT Reference / Server: Intel Xeon 1680V2; 64GB DDR3-1600 ECC / Laptop:  Dell Precision 5540; Intel Core i7-9850H; NVIDIA Quadro T1000 4GB; 32GB DDR4


Yes you were wrong then and you are still dead wrong. Why?

 

1. Your testing methodology is completely flawed. If a game doesn't manage to load a certain texture fast enough into video memory, it won't stop rendering and wait for the texture to be fully loaded; instead it will render its scene without that specific texture. Therefore the result of slow memory won't be a drop in FPS but objects popping into the scene.

Yes, this was an issue years back in the days of Win2K and early WinXP, but the way games are rendered has changed a lot since then. And this change wasn't just to avoid performance drops but to increase stability, because back then failing to load a specific texture in time might not have caused just an FPS drop but even a crash of the entire rendering pipeline, which in turn could crash the entire system.
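A minimal sketch of the non-blocking behavior described above (my own illustration, not from the video or any particular engine; all names like Texture and load_full_texture are made up): the renderer keeps drawing with a placeholder while the real texture streams in asynchronously, so slow memory shows up as pop-in rather than as a stall.

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <unordered_map>

struct Texture { std::string name; bool placeholder; };

// Pretend this is the slow path: disk -> RAM -> PCIe -> VRAM.
Texture load_full_texture(std::string name) {
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    return Texture{std::move(name), /*placeholder=*/false};
}

class TextureStreamer {
    std::unordered_map<std::string, Texture> resident_;
    std::unordered_map<std::string, std::future<Texture>> in_flight_;
public:
    // Never blocks the frame: returns the real texture if resident,
    // otherwise kicks off an async load and returns a placeholder.
    const Texture& get(const std::string& name) {
        if (auto it = resident_.find(name); it != resident_.end()) return it->second;

        auto f = in_flight_.find(name);
        if (f == in_flight_.end()) {
            in_flight_[name] = std::async(std::launch::async, load_full_texture, name);
        } else if (f->second.wait_for(std::chrono::seconds(0)) == std::future_status::ready) {
            resident_[name] = f->second.get();   // upload finished: swap in the real texture
            in_flight_.erase(f);
            return resident_[name];
        }
        static const Texture kPlaceholder{"flat_grey", true};
        return kPlaceholder;                     // visible as pop-in, not as an FPS stall
    }
};

int main() {
    TextureStreamer streamer;
    for (int frame = 0; frame < 5; ++frame) {
        const Texture& t = streamer.get("rock_albedo");
        std::cout << "frame " << frame << ": drawing with "
                  << (t.placeholder ? "placeholder" : t.name) << "\n";
        std::this_thread::sleep_for(std::chrono::milliseconds(20));
    }
}
```

With a 50 ms simulated load and 20 ms frames, the first few frames draw the placeholder and the real texture pops in a couple of frames later, which is the behavior the post is describing.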

 

2. Your claim that, since modern graphics cards have much more video memory available, this leads to more data being moved from RAM to video memory is also flawed. More video memory means a bigger chance for a game to load all of its textures into video memory at once. Once those textures are loaded into video memory there is no need to load them again; therefore having more video memory actually reduces a game's RAM utilization, and RAM speed will have far less impact on game performance.

Another important thing to note is the fact that not all games support dynamic texture loading, meaning that they always load all their textures into video memory right from the start. So you should focus testing on games that do support dynamic texture loading. GTA IV is one of the games that I know relies heavily on dynamic texture loading, so memory speed should have a bigger impact there.

 

If you really want to measure memory impact on game performance you should be measuring a game's UPS (updates per second), which is what will be impacted most by memory performance. Unfortunately not many games provide UPS statistics, so measuring UPS performance is a lot harder. Not to mention that when UPS performance is affected it is hard to know whether this is due to a memory bottleneck or a CPU bottleneck, since both end up showing as high CPU utilization: the CPU is basically waiting for data from memory to be transferred into its caches and registers.

The only time you know that your memory is a bottleneck is when spreading a workload across more CPU cores no longer gives any additional performance benefit.
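That scaling test is easy to approximate outside of a game. Here is a minimal sketch (my own illustration, with arbitrary buffer sizes): each thread streams through its own buffer far larger than any cache, and if total throughput stops growing as the thread count doubles, the cores are waiting on memory rather than on compute.

```cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

// Each worker sums its own large buffer so the reads can't be optimised away.
static uint64_t stream_sum(const std::vector<uint64_t>& buf) {
    return std::accumulate(buf.begin(), buf.end(), uint64_t{0});
}

int main() {
    constexpr size_t kElems = size_t{1} << 24;   // 128 MiB per thread, far larger than any cache
    const unsigned max_threads = std::max(1u, std::thread::hardware_concurrency());

    for (unsigned n = 1; n <= max_threads; n *= 2) {
        std::vector<std::vector<uint64_t>> bufs(n, std::vector<uint64_t>(kElems, 1));
        std::vector<uint64_t> results(n);
        std::vector<std::thread> workers;

        auto t0 = std::chrono::steady_clock::now();
        for (unsigned i = 0; i < n; ++i)
            workers.emplace_back([&, i] { results[i] = stream_sum(bufs[i]); });
        for (auto& w : workers) w.join();
        auto t1 = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(t1 - t0).count();
        double gib  = double(n) * kElems * sizeof(uint64_t) / (1024.0 * 1024.0 * 1024.0);
        std::cout << n << " thread(s): " << gib / secs << " GiB/s (checksum "
                  << std::accumulate(results.begin(), results.end(), uint64_t{0}) << ")\n";
        // If GiB/s plateaus while the thread count keeps doubling, the cores are
        // starved by memory bandwidth -- the situation the post describes.
    }
}
```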


The video was still wrong back then... AMD FX was a lot more sensitive to RAM than Intel Core.

Made In Brazil 🇧🇷


Whether more RAM speed is a benefit or not is largely application dependent.

 

Beyond asset loading, a GPU won't in itself be affected by the CPU's RAM speed. It obviously depends on how much CPU work is needed to feed the GPU: for heavily GPU-accelerated applications the CPU can more or less idle, while most games tend to run a lot of work on the CPU and send a lot of updates back and forth to the GPU, which makes the CPU's performance matter more.

 

For CPU-centered applications, the benefit of RAM bandwidth is partly proportional to the size of the dataset and how the application moves through said dataset. Cache is one thing that affects this a lot, but some applications are too sporadic in how they access their data for cache to be of much help. (Dwarf Fortress on a large world size is a good example of an application that doesn't show much sign of caring about cache size, but there are plenty of other CPU applications showing a similar disregard for cache as well.)

 

However, if the memory calls requested by our cores are mostly fulfilled by cache, then it isn't too hard for the CPU to bottleneck on core performance before memory bandwidth becomes the limit. Other times our cores can end up waiting for memory to just respond, since memory latency can at times be rather huge, and here bandwidth isn't always making a major difference. (However, more bandwidth means faster completion times for RAM calls, so overall access latency should be somewhat smaller.)
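To make the latency point concrete, here is a minimal pointer-chasing sketch (my own illustration, sizes arbitrary): each load depends on the previous one, so a buffer much larger than cache exposes raw DRAM latency almost regardless of how much bandwidth the memory kit offers.

```cpp
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    constexpr size_t N = size_t{1} << 25;        // ~32M entries (~256 MB), far beyond any cache
    std::vector<size_t> next(N);

    // Build one big random cycle so the hardware prefetcher can't guess ahead.
    std::vector<size_t> order(N);
    std::iota(order.begin(), order.end(), size_t{0});
    std::shuffle(order.begin(), order.end(), std::mt19937_64{42});
    for (size_t i = 0; i + 1 < N; ++i) next[order[i]] = order[i + 1];
    next[order[N - 1]] = order[0];

    size_t idx = 0;
    constexpr size_t kHops = 50'000'000;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t i = 0; i < kHops; ++i) idx = next[idx];   // each hop must wait on the previous one
    auto t1 = std::chrono::steady_clock::now();

    double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / kHops;
    std::cout << "avg dependent-load latency: " << ns << " ns (result " << idx << ")\n";
}
```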

 

Then there are inter-process communication bottlenecks too. Sometimes our process might send off a call to some other process; then we have to wait until the kernel schedules that other process' thread onto a core, it works through its backlog of calls, and it returns the answer. And by that point our own process likely doesn't have its thread on a core anymore, so yet again we have to wait for the kernel to switch our thread back in. This can add many tens of ms worth of waiting, even if the completion time of the call itself could be only a few hundred ns or a handful of µs. (This depends on kernel scheduling settings/behavior, which is a whole can of worms in itself. Often one avoids being dependent on such calls, and instead schedules them on the side while one's own thread continues doing its own things, only coming back for the answer later down the line when its semaphore states that it is finished. But this approach isn't always a viable option. Having more threads can however speed things up.)
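A minimal sketch of that "fire it off and come back later" pattern (my own illustration; it uses std::async within a single process as a stand-in for a call into another process or service):

```cpp
#include <chrono>
#include <future>
#include <iostream>
#include <thread>

// Pretend this is an expensive call handled by some other process.
int slow_service_call(int input) {
    std::this_thread::sleep_for(std::chrono::milliseconds(20));  // scheduling + queueing delay
    return input * 2;
}

int main() {
    // Blocking version would sit idle for the whole round trip:
    // int answer = slow_service_call(21);

    // Non-blocking version: kick the call off, keep doing our own work,
    // and only pick up the result when we actually need it.
    std::future<int> pending = std::async(std::launch::async, slow_service_call, 21);

    long own_work = 0;
    while (pending.wait_for(std::chrono::milliseconds(0)) != std::future_status::ready) {
        ++own_work;  // our thread keeps making progress instead of waiting
    }

    std::cout << "answer=" << pending.get()
              << " units of own work done meanwhile=" << own_work << "\n";
}
```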

 

In the end, software is a jungle.

And it is often hard to make a good general recommendation for picking hardware, since it will be very dependent on what exact software one will run and how it is developed. And a year down the line a few software updates have likely rocked the boat and other specs would have been better.


23 minutes ago, themrsbusta said:

The video was still wrong back then... AMD FX was a lot more sensitive to RAM than Intel Core.

 

Maybe, but AMD FX was also irrelevant. 

Corps aren't your friends. "Bottleneck calculators" are BS. Only suckers buy based on brand. It's your PC, do what makes you happy.  If your build meets your needs, you don't need anyone else to "rate" it for you. And talking about being part of a "master race" is cringe. Watch this space for further truths people need to hear.

 

Ryzen 7 5800X3D | ASRock X570 PG Velocita | PowerColor Red Devil RX 6900 XT | 4x8GB Crucial Ballistix 3600mt/s CL16


G'day guys, joined the forum so I could throw in what I've learned over the years.

 

I started work in the game industry back in 2002 and specialized in the graphics performance and compatibility side of Quality Assurance, while also doing some very basic dabbling with graphics programming. I also talked a LOT with the coders back in the day.

 

So before "Resizable BAR" (data going direct from SSD to GPU memory) was added, getting game assets from the hard drive onto the computer in a state where the game was playable would look a little something like this:

* Initial game code would be loaded up from the hard drive, go into RAM, and then a good chunk of it would also be copied into the CPU cache.

* Game assets such as textures and models would be loaded from hard drive -> RAM -> piped through the CPU -> then loaded onto the graphics card

* Each frame rendered, the CPU would say "Hey graphics card, use that character model, and just turn him to the left a little would you?"

* The GPU would then use what it knows from GPU memory and do the calculations, not needing to touch RAM or the CPU or the hard drive again

* BUT, if the CPU said "Hey graphics card, there's a new model and texture we want to put on the screen, can you do that please?"

* The program on the CPU would likely know that the asset hasn't been loaded onto the graphics card yet, so it'll need to fetch it from the hard drive -> RAM -> pass through the CPU -> PCIe bus -> load onto the graphics card... thus you'd have a short stall

 

Now let's say your graphics card memory is full; too many people have custom models and textures and we reach the threshold of what can live in the GPU's memory

* Now the GPU will say to the CPU, "Hey, I'm full, need to offload some stuff to RAM"

* The CPU steps in and plays middle man to coordinate GPU memory data being copied back into normal system memory

* The data for the new assets needs to be loaded onto the GPU through the pipeline above

Thus another short stall (a rough sketch of this residency juggling follows below)
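To make those steps concrete, here's a very rough sketch (my own illustration, not any real engine's code): check whether an asset is already resident in VRAM, evict something back to system RAM when VRAM is full, then upload, with each upload being the "short stall" described above.

```cpp
#include <cstddef>
#include <iostream>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

struct VramCache {
    size_t capacity_bytes;
    size_t used_bytes = 0;
    std::list<std::string> lru;                                   // most recently used at front
    std::unordered_map<std::string, std::pair<size_t, std::list<std::string>::iterator>> resident;

    // Returns true if the asset was already resident (no stall needed).
    bool request(const std::string& asset, size_t size_bytes) {
        if (auto it = resident.find(asset); it != resident.end()) {
            lru.splice(lru.begin(), lru, it->second.second);      // touch: move to front
            return true;
        }
        // Evict least-recently-used assets back to system RAM until the new one fits.
        while (used_bytes + size_bytes > capacity_bytes && !lru.empty()) {
            const std::string victim = lru.back();
            std::cout << "  evict " << victim << " to system RAM\n";
            used_bytes -= resident[victim].first;
            resident.erase(victim);
            lru.pop_back();
        }
        // Simulated upload: disk -> RAM -> CPU -> PCIe -> VRAM. This is the stall.
        std::cout << "  upload " << asset << " (" << size_bytes << " bytes) -- short stall\n";
        lru.push_front(asset);
        resident[asset] = {size_bytes, lru.begin()};
        used_bytes += size_bytes;
        return false;
    }
};

int main() {
    VramCache vram{3000};   // tiny made-up capacity so the eviction path is visible
    for (const char* a : {"hero_model", "hero_texture", "tree", "hero_model", "boss_texture"})
        vram.request(a, 1200);
}
```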

 

So as assets get loaded and unloaded every now and then, as players move around in the game or parts of the levels are streamed in and out (very obvious in Unreal-based games and modern large open-world games), the GPU, CPU and RAM are ferrying data back and forth between each other. So you have multiple bottlenecks which can contribute to a sudden dip in framerate:

* Memory speed

* CPU speed

* CPU cache size

* CPU core count, in regards to how many cores are given jobs ferrying data about (sometimes it's just one, but if a good "job" system is set up in the game's code then it could be lots; that's engine dependent, and depends on whether the game coders actually use that feature of the engine)

* PCI Express bandwidth (how many lanes, what generation)

* GPU's ability to multitask data going in and out of the GPU memory

* And finally GPU memory speed.

 

Unless you had a really slow GPU or CPU, the bulk of the time the PCI Express bus, I believe, was the main limiting factor. BUT good game coders were aware of that, so they'd do all sorts of tricks to not delay things too much when transferring. You only need to talk to coders who worked on the PlayStation 1, 2 and 3 to see how crazy they are in regards to data wrangling and being very aware of how long things take. They can tell you about the nanoseconds wasted with just chips communicating with one another 😛

 

Anywho, I strongly suspect that worrying about RAM speed is the least of people's concerns when trying to improve performance on their gaming PC, as a lot of it will come down to the fundamentals of how much GPU memory they have and how the game engine is coded to handle streaming assets in and out.

 

With the rise of Unreal Engine 5, I suspect we'll see a BIG jump in asset size requirements due to everyone now uploading 8K textures with million-poly models, and that's when the pipeline between hard drive, RAM, CPU, PCIe and GPU will really be pushed. But thankfully we have ReBar tech now which will also hide some of these issues, as a lot of the asset data will load from the SSD/hard drive itself direct to the GPU over the PCI Express bus, bypassing the CPU and system memory; be used for a while on scene; and then be unloaded from the GPU and not likely loaded again until the player returns to that area of the game world.

 

Hope that gives you an inkling into it a bit more 🙂  


8K assets won't be a thing, seeing as even current game engines are not using 4K assets.

Flight Sim 2020 ran into the asset problem and has to stream them. Hell, even CoD tried it; most people were never even aware that they did.

More VRAM is needed than what consumers have in their GPUs at the moment.

MSI x399 sli plus  | AMD theardripper 2990wx all core 3ghz lock |Thermaltake flo ring 360 | EVGA 2080, Zotac 2080 |Gskill Ripjaws 128GB 3000 MHz | Corsair RM1200i |150tb | Asus tuff gaming mid tower| 10gb NIC


3 hours ago, Somerandomtechyboi said:

Why no ryzen?

Because they wanted to do apples-to-apples with their old test bench.

Everyone, Creator初音ミク Hatsune Miku Google commercial.

 

 

Cameras: Main: Canon 70D - Secondary: Panasonic GX85 - Spare: Samsung ST68. - Action cams: GoPro Hero+, Akaso EK7000pro

Dead cameras: Nikion s4000, Canon XTi

 

Pc's

Spoiler

Dell optiplex 5050 (main) - i5-6500- 20GB ram -500gb samsung 970 evo  500gb WD blue HDD - dvd r/w

 

HP compaq 8300 prebuilt - Intel i5-3470 - 8GB ram - 500GB HDD - bluray drive

 

old windows 7 gaming desktop - Intel i5 2400 - lenovo CIH61M V:1.0 - 4GB ram - 1TB HDD - dual DVD r/w

 

main laptop acer e5 15 - Intel i3 7th gen - 16GB ram - 1TB HDD - dvd drive                                                                     

 

school laptop lenovo 300e chromebook 2nd gen - Intel celeron - 4GB ram - 32GB SSD 

 

audio mac- 2017 apple macbook air A1466 EMC 3178

Any questions? pm me.

#Muricaparrotgang                                                                                   

 


6 hours ago, Middcore said:

 

Maybe, but AMD FX was also irrelevant. 

Yeah, I work with a lot of retired systems from that period and mostly it's Intel Core series.



I've always used this forum like it had nothing to do with LTT... because LTT just makes cringeworthy content with half-assed seriousness compared to what their very own forum's users can produce. This video is pretty self explanatory.

Asus ROG G531GT : i7-9750H - GTX 1650M +700mem - MSI RX6600 Armor 8G M.2 eGPU - Samsung 16+8GB PC4-2666 - Samsung 860 EVO 500G 2.5" - 1920x1080@145Hz (172Hz) IPS panel

Family PC : i5-4570 (-125mV) - cheap dual-pipe cooler - Gigabyte Z87M-HD3 Rev1.1 - Kingston HyperX Fury 4x4GB PC3-1600 - Corsair VX450W - an old Thermaltake ATX case

Test bench 1 G3260 - i5-4690K - 6-pipe cooler - Asus Z97-AR - Panram Blue Lightsaber 2x4GB PC3-2800 - Micron CT500P1SSD8 NVMe - Intel SSD320 40G SSD

iMac 21.5" (late 2011) : i5-2400S, HD 6750M 512MB - Samsung 4x4GB PC3-1333 - WT200 512G SSD (High Sierra) - 1920x1080@60 LCD

 

Test bench 2: G3260 - H81M-C - Kingston 2x4GB PC3-1600 - Winten WT200 512G

Acer Z5610 "Theatre" C2 Quad Q9550 - G45 Express - 2x2GB PC3-1333 (Samsung) - 1920x1080@60Hz Touch LCD - great internal speakers


When I got my OptiPlex 5050 (it has an Intel i5-6500) over a year ago, I chucked 4GB + 8GB of 2133MHz RAM into a system that supports 2400MHz, and it had basically no impact, looking at my workload of:

1. What I use my PC for.

2. iGPU.

3. Old games.

What I am trying to say is: if you want the max then yes, go faster, but look at your workload first.



8 hours ago, Middcore said:

 

Maybe, but AMD FX was also irrelevant. 

But this is the biggest difference from back then: AMD Ryzen.

AMD's memory controller is different from Intel's, and strangely these newer Intel generations have memory controllers that behave similarly to Ryzen's.

Made In Brazil 🇧🇷


10 hours ago, ChiggenWingz said:

G'day guys, joined the forum so I could throw in what I've learned over the years.

[full post quoted above; snipped]

Overall a good informative post that is mostly correct.

 

Though, a lot of games load assets preemptively, long before they are required to be drawn. (This is usually handled by various trigger zones in games, the most obvious ones being loading screens when moving between scenes, but a lot of games handle it through other means of segmentation. Likewise, there are normally triggers for offloading assets too.)

 

Some developers, though, don't care about figuring out what assets will be needed where and when, so they don't bother preemptively loading asset data, leading to rather huge frame drops if they are also stupid enough to require the data to render the next frame. (A lot of games just shrug their shoulders and don't care about missing assets; when they finally do load in, they will pop into place as they should. Oftentimes the issue is solved by having a very low-res version of the asset available and switching over to the higher res when that eventually loads. LOD is a savior.)

 

Usually a game will have to use both of the tricks above (asset loading/unloading triggers and LOD). For static things in a map we can very trivially know when and where assets will be required and not. For stuff that actually moves, the situation gets harder. (However, since one can still segment a map, one can keep track of what currently exists in a given segment and therefore know, when the player is heading towards it, that we will need to start loading in said assets. But there are many ways to skin this cat.)
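Here is a minimal sketch of those two tricks combined (purely illustrative; the segment/trigger/LOD names are made up and not any engine's real API): a trigger zone preloads the neighbouring segment asynchronously, and the renderer falls back to a low-res LOD until the full-detail assets are resident.

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class Residency { NotLoaded, LowResOnly, FullyLoaded };

struct Segment {
    std::string name;
    std::vector<std::string> assets;
    Residency state = Residency::NotLoaded;
};

// Called when the player crosses a trigger zone bordering `next`.
void on_trigger_enter(Segment& next) {
    if (next.state == Residency::NotLoaded) {
        std::cout << "preload (async): " << next.name << "\n";
        next.state = Residency::LowResOnly;   // low-res proxies arrive quickly
        // ... kick off background streaming of the full-res assets here ...
    }
}

// Called per frame for anything visible in `seg`.
void draw(const Segment& seg) {
    switch (seg.state) {
        case Residency::FullyLoaded: std::cout << "draw " << seg.name << " at full detail\n"; break;
        case Residency::LowResOnly:  std::cout << "draw " << seg.name << " with low-res LOD\n"; break;
        case Residency::NotLoaded:   std::cout << "skip " << seg.name << " (nothing resident yet)\n"; break;
    }
}

int main() {
    Segment town{"town", {"houses", "npcs"}, Residency::FullyLoaded};
    Segment forest{"forest", {"trees", "wolves"}};

    draw(town);
    on_trigger_enter(forest);                  // player walks toward the forest edge
    draw(forest);                              // next few frames use low-res stand-ins
    forest.state = Residency::FullyLoaded;     // background streaming completed
    draw(forest);
}
```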

 

Then there is Resizable BAR; this has nothing to do with the GPU accessing storage directly.

 

Resizable BAR is when the base address register (BAR) size of the PCIe package can be larger than standard. The BAR is effectively just a reference, often used as a pointer into VRAM (but not always), and it is effectively up to the GPU software to do whatever it pleases with it. (The BAR is effectively just some user-defined metadata. Without Resizable BAR this is only 27 bits (256 MB if used as a VRAM pointer, since PCIe works with 16 bits, not 8, though nothing stops a developer from taking 8-bit steps per address increment); with Resizable BAR we get 38 bits (512 GB if used as a VRAM pointer); and with Expanded Resizable BAR we get 263 bits (about enough to individually address every atom in the universe, a complete waste of bits unless we also use them for more than an address pointer, so using bits for priority information among other things is useful here).)

 

DirectStorage (a Microsoft standard) is when a PCIe device can access storage directly, so as to "skip" the CPU. This is, however, not all that new; PCIe and even PCI devices have more or less been able to communicate directly with each other since their inception. But oftentimes it isn't wise for one device to go talking to another, since this can cause all sorts of weird and wonderful software issues unless accounted for (i.e. the system would often just crash). DirectStorage more or less provides a standard way for this communication to happen. (Another example of PCIe devices communicating without the OS's/CPU's involvement is AMD's CrossFire multi-GPU technology.)


12 hours ago, Nystemy said:

Resizable BAR is when the base address register (BAR) size of the PCIe package can be larger than standard.

Ah yes, I got my terminology wrong and confused, cheers for clarifying.

My intention was to convey the direct-storage system that sits between the PCIe storage devices and the graphics card 🙂


LMAO - testing a latency-dependent question with DDR5...

Here's a hint

 

 

 

 

[attached screenshot: latency.jpg]


9900K  / Asus Maximus Formula XI / 32Gb G.Skill RGB 4266mHz / 2TB Samsung 970 Evo Plus & 1TB Samsung 970 Evo / EVGA 3090 FTW3.

2 loops : XSPC EX240 + 2x RX360 (CPU + VRMs) / EK Supremacy Evo & RX480 + RX360 (GPU) / Optimus W/B. 2 x D5 pumps / EK Res

8x NF-A2x25s, 14 NF-F12s and a Corsair IQ 140 case fan / CM HAF Stacker 945 / Corsair AX 860i

LG 38GL950G & Asus ROG Swift PG278Q / Duckyshine 6 YOTR / Logitech G502 / Thrustmaster Warthog & TPR / Blue Yeti / Sennheiser HD599SE / Astro A40s

Valve Index, Knuckles & 2x Lighthouse V2

 

 


3 hours ago, ChiggenWingz said:

Ah yes, I got my terminology wrong and confused, cheers for clarifying.

My intention was to convey the direct-storage system that sits between the PCIe storage devices and the graphics card 🙂

Which won't be a thing till we stop using "up to" terminology on storage.

 

MSI x399 sli plus  | AMD theardripper 2990wx all core 3ghz lock |Thermaltake flo ring 360 | EVGA 2080, Zotac 2080 |Gskill Ripjaws 128GB 3000 MHz | Corsair RM1200i |150tb | Asus tuff gaming mid tower| 10gb NIC


3 hours ago, WihGlah said:

LMAO - testing a latency-dependent question with DDR5...

Here's a hint

 

 

 

 

[attached screenshot: latency.jpg]

I scanned the thread but didn't find a reference to DDR5 latency? Very good score, by the way. Or were you just highlighting the contrast between DDR4 and DDR5?

For reference, here is my 9600KF (not a daily-driver overclock)...

 

[attached screenshot: image.png]

Hardware and Overclocking Enthusiast
 

 

 

 


7 hours ago, Fast_N_Curious said:

I scanned the thread but didn't find a reference to DDR5 latency? Very good score, by the way. Or were you just highlighting the contrast between DDR4 and DDR5?

For reference, here is my 9600KF (not a daily-driver overclock)...

 

[attached screenshot: image.png]

DDR5 has much greater bandwidth, but as it runs in Gear 2 it gets much worse latency. That's why DDR5 machines get much worse 1% lows. So long as the assets are kept in the CPU cache it doesn't matter, but the moment you need to go out to RAM, you get jarring FPS dips. Of course, the 1% lows are the exact moments when interesting things happen (like in a gunfight).

 

Coincidentally, it's the same reason the 5800X3D is so much better than every other AMD CPU - huge cache, so it doesn't get the typical AMD FPS drops.

 


 

 


I'm a little surprised no one has mentioned the change in OS since the original video came out. That can be a factor too.

