Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!

TheCoder2019

Summary

The first DDR5 memory modules have been produced and pictured. It is also expected to reach over 10,000 MHz!!! They are expected to launch later this year!

[Image: DDR5 memory modules with PMIC pictured]

 

Quotes

Quote

 Jiahe Jinwei has announced that it has received the first batch of DDR5 memory modules from its assembly line based in the Shenzhen Pingshan factory. The memory modules are now being mass-produced and are expected to launch later this year with the next-generation platforms from Intel and AMD. Intel is said to take a lead in offering the next-gen memory support first on its next-gen Alder Lake platform comprising of the Z690 chipset-based motherboards

 

My thoughts

I won't be able to try this, but the fact that it has the potential to hit 10 GHz is insanely mind-blowing! And for 4800 MHz speeds, it only takes 1.1 V! Normal RAM usually takes ~1.2 - 1.65 V, so it will be more power efficient. Also, this is my first post here, hope it's good!
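To put a rough number on the voltage point: dynamic power scales roughly with the square of supply voltage (P ~ C·V²·f), so at comparable clocks the saving is easy to estimate. A quick back-of-the-envelope sketch in Python, using typical retail DDR4 voltages rather than any official figures:

# Dynamic power scales ~V^2 at a fixed frequency (P ~ C * V^2 * f)
for ddr4_volts in (1.2, 1.35, 1.65):
    saving = 1 - (1.1 / ddr4_volts) ** 2
    print(f"DDR5 at 1.1 V vs DDR4 at {ddr4_volts} V: ~{saving:.0%} less dynamic power")

That works out to roughly 16% to 56% less, before counting any other DDR5 changes.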

 

Sources

 https://wccftech.com/mainstream-ddr5-memory-modules-pictured-rolling-out-for-mass-production/amp/

As someone with the username “</TheCoder2019_”, my coding skills are atrocious.

Here are my specs:

MSI PRO-VLH H310M
Intel Core i3-8100 (Thanks, @Schnoz!)
GTX 1060 OC 3GB or Intel UHD 630
16GB (2x8) Corsair Vengeance LPX CL16 - 2400MHz
GAMDIAS Argus M1
An old friend of mine - Intel stock cooler (temps through the roof, like 60 C under load)

Linux apps you NEED: tmux, dhcpd, git

Hi! I love RGB! Who doesn't? Karens that don't have colorful lights on their Facebook page.


It’s over 9000!!


1 hour ago, TheCoder2019 said:

It is also expected to reach over 10,000 MHz!!!

"RAM speed is getting too overkill"
JEDEC:

[Image: Ray Charles "I'm gonna pretend I didn't see that" meme]

 

And built-in ECC too, fuck me, this bad boy is gonna be leaps and bounds faster & stronger than DDR4 in later versions.


Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


10,000 MHz sounds good and all, but DDR4 can already reach 5100 - and running anything above 3600 still increases latency, as the memory controller can't handle it well enough.

 

I'm very curious what speeds will be optimal for the upcoming CPUs and what latencies they will reach.


2 minutes ago, Ydfhlx said:

10,000 MHz sounds good and all, but DDR4 can already reach 5100 - and running anything above 3600 still increases latency, as the memory controller can't handle it well enough.

 

I'm very curious what speeds will be optimal for the upcoming CPUs and what latencies they will reach.

DDR4 controller != DDR5 controller.

 

The technology changes within DDR5 will allow these faster speeds at the same latency, maybe lower; and if it ends up higher, which I doubt, it will be by a very marginal amount.


Hmm... 10 GHz might be far-fetched. Even a consumer CPU's memory controller can hardly reach that insane frequency.

 

With that said, I think 10 GHz is probably not going to happen any time soon (if ever in DDR5). But it depends; maybe DDR5's life cycle will last a decade, and then we'll see more and more DDR5 RAM running at 10 GHz. Still, in our current situation, it doesn't really give us a lot of benefits.

I have ASD (Autism Spectrum Disorder). More info: https://en.wikipedia.org/wiki/Autism_spectrum

I apologize if my comments or posts offend you in any way, or if my rage goes a little too far. I'll try my best to make my posts as non-offensive as possible.


1 hour ago, SorryClaire said:

And built-in ECC too

Just remember not all ECC is equal. DDR5 will come with chip-level ECC as standard, presumably to help with yields at higher speeds and capacities, much in the same way SSDs use ECC so minor defects can be tolerated. Module-level ECC will still be a separate option.
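As an aside, the 72-bit width of side-band ECC DIMMs falls straight out of classic SECDED (single-error-correct, double-error-detect) Hamming-code math; a minimal sketch:

def secded_check_bits(data_bits):
    # Hamming bound: need r parity bits with 2**r >= data_bits + r + 1,
    # plus one overall parity bit to detect (but not correct) double errors
    r = 1
    while 2 ** r < data_bits + r + 1:
        r += 1
    return r + 1

print(secded_check_bits(64))  # -> 8, hence 64 data + 8 check = 72-bit ECC channel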

 

1 hour ago, Ydfhlx said:

10,000 MHz sounds good and all, but DDR4 can already reach 5100 - and running anything above 3600 still increases latency, as the memory controller can't handle it well enough.

You have to compare like with like. DDR4 at anything near 5000 is overclocked to the sky; DDR5-5000 is a walk in the park, and 10000 will be a similar level of overclock. Also, don't let current memory controller limitations dictate what RAM could or should do. Right now, if you're running DDR4 above 3200 you're outside the standard, and you should consider yourself lucky to get anything. There are many threads on this forum from people buying 3600 RAM and it not working with a particular system.

 

39 minutes ago, leadeater said:

You people and your modern systems 🙃

I see you like your latencies tight 😄

 

13 minutes ago, Chiyawa said:

Hmm... 10 GHz might be far-fetched. Even a consumer CPU's memory controller can hardly reach that insane frequency.

It will happen; it's more a matter of when. Look at how many DDR4 modules there are with overclocked chips. The same will happen in time with DDR5. Not on day 1, but it didn't happen on day 1 with DDR4 either.

 

13 minutes ago, Chiyawa said:

With that said, I think 10 GHz is probably not going to happen any time soon (if ever in DDR5). But it depends; maybe DDR5's life cycle will last a decade, and then we'll see more and more DDR5 RAM running at 10 GHz. Still, in our current situation, it doesn't really give us a lot of benefits.

CPUs have been RAM-bandwidth starved for a long time, made even worse since AMD started the core wars. More bandwidth can't come fast enough, especially on consumer platforms.

 

Also, without looking up the dates, DDR4 has had a good run. I don't know if it stuck around longer than expected, like PCIe 3.0, with 4.0 having a short existence and 5.0 already looming on the horizon.

 

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


5 minutes ago, porina said:

I see you like your latencies tight 😄

Yes, but... way back when, I just threw money at it. Those are the default XMP settings for it, profile #1. Profile #2 is command rate 1 instead of 2, which is actually perfectly stable as well, but I cbf changing it lol.

 

I could get these to much tighter timings since the memory modules are water cooled, but the gains really just aren't worth the effort anymore.


1 minute ago, leadeater said:

Yes, but... way back when, I just threw money at it. Those are the default XMP settings for it, profile #1. Profile #2 is command rate 1 instead of 2, which is actually perfectly stable as well, but I cbf changing it lol.

 

I could get these to much tighter timings since the memory modules are water cooled, but the gains really just aren't worth the effort anymore.

I still have one DDR3-era system, also with 2400 modules in it, but I don't think the rated latencies are that tight. Compared with DDR4 of similar speed, that latency is tight though. Without looking it up, I think DDR3 did get defined up to 2400, but probably not at those latencies 😄

 

I'm also with you on the tinkering. I just want it to work rather than spend ages tinkering to maybe gain a small % difference. Also, RAM is the last thing I'd manually tune; the risk of data corruption is too high.


3 minutes ago, porina said:

I still have one DDR3-era system, also with 2400 modules in it, but I don't think the rated latencies are that tight. Compared with DDR4 of similar speed, that latency is tight though. Without looking it up, I think DDR3 did get defined up to 2400, but probably not at those latencies 😄

They are G.Skill TridentX and were literally the most expensive at the time. There is actually some faster G.Skill DDR3, at least MHz-wise, but it wasn't out at the time. I'm actually impressed by them since they are in quad-channel mode, so the IMC on my 4930K must actually be half decent even though it OCs like crap.

 

If you're interested: https://www.gskill.com/product/165/173/1532071930/F3-2400C10Q-16GTXTridentXDDR3-2400MHz-CL10-12-12-1.65V16GB-(4x4GB)

 

https://www.gskill.com/products/2/165/173/TridentX (All the way up to 3100 MHz, DDR3)


2 minutes ago, porina said:

CPUs have been RAM-bandwidth starved for a long time, made even worse since AMD started the core wars. More bandwidth can't come fast enough, especially on consumer platforms.

 

Also, without looking up the dates, DDR4 has had a good run. I don't know if it stuck around longer than expected, like PCIe 3.0, with 4.0 having a short existence and 5.0 already looming on the horizon.

Well, the only problem is that our RAM data width is limited. We are still using a 64-bit width (72-bit for ECC), which has been around since the DDR era in 2002. If we are going to reduce the bottleneck, especially on multi-core CPUs, increasing the bit-width might be a better solution. Then again, increasing the bit-width will increase the complexity of motherboard manufacturing, and the cost can rise exponentially. Still, I do believe increasing the bit-width will help solve the bottlenecking and also make parallel computing more viable, with all the AI programming and software that are bound to become more popular in the near future.
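For a sense of scale, peak theoretical bandwidth is just width/8 × transfer rate × channels; a quick sketch with illustrative configurations:

def peak_gb_s(width_bits, mt_s, channels):
    # bytes per transfer * megatransfers per second * channels, in GB/s
    return width_bits / 8 * mt_s * channels / 1000

print(peak_gb_s(64, 3200, 2))   # dual-channel DDR4-3200: 51.2 GB/s
print(peak_gb_s(64, 4800, 2))   # dual-channel DDR5-4800: 76.8 GB/s
print(peak_gb_s(128, 3200, 2))  # hypothetical 128-bit-wide channels: 102.4 GB/s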

 

Yeah, DDR4 had a good run, but its life span is only about 7 years, if I recall correctly. I remember the first time I saw a PC using DDR4 RAM was in 2014.


2 minutes ago, Chiyawa said:

increasing the bit-width might be a better solution

It is already a thing: add more memory channels. I have to wonder if a re-factoring of the product segments is needed. Dual channel is still fine for what I'd call value mainstream, but performance mainstream (say, anything with 8 or more fast cores?) could go to 4 channels, with HEDT going even higher.

 

2 minutes ago, Chiyawa said:

Yeah, DDR4 had a good run, but its life span is only about 7 years, if I recall correctly. I remember the first time I saw a PC using DDR4 RAM was in 2014.

I think the first systems were probably around the X99 chipset, so Haswell-E? That seems about the right time, give or take a year or so, as I'm too lazy to look it up. Mainstream consumer DDR4 came with Skylake in 2015.


16 minutes ago, Chiyawa said:

Well, the only problem is that our RAM data width is limited. We are still using a 64-bit width (72-bit for ECC), which has been around since the DDR era in 2002. If we are going to reduce the bottleneck, especially on multi-core CPUs, increasing the bit-width might be a better solution. Then again, increasing the bit-width will increase the complexity of motherboard manufacturing, and the cost can rise exponentially. Still, I do believe increasing the bit-width will help solve the bottlenecking and also make parallel computing more viable, with all the AI programming and software that are bound to become more popular in the near future.

 

Yeah, DDR4 had a good run, but its life span is only about 7 years, if I recall correctly. I remember the first time I saw a PC using DDR4 RAM was in 2014.

DDR5 also has 2 independent channels per module, so effective memory access latencies are going to be lower, which in many cases is more important than a wider bus with more raw bandwidth. And it's not like increasing the RAM bus width would be too much of an issue motherboard-wise: you'd just remove the ability to use 4 slots on mainstream (2DPC), and HEDT and server platforms have been managing up to 8 channels per socket anyway, so I don't think it'll be much of an issue really.
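To illustrate the subchannel point: DDR5 splits a module's 64 data bits into two independent 32-bit subchannels and doubles the burst length to BL16, so a single burst still delivers a full 64-byte cache line:

# bytes per burst = channel width (bits) * burst length / 8
ddr4_burst = 64 * 8 // 8    # one 64-bit DDR4 channel at BL8  -> 64 bytes
ddr5_burst = 32 * 16 // 8   # one 32-bit DDR5 subchannel at BL16 -> 64 bytes
print(ddr4_burst, ddr5_burst)  # 64 64 - same line size, twice the independent channels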

 

I think it's just not that attractive to increase the bus width, as it'll actually increase latencies, and that is not appealing at all. Personally I think that is where HBM fits in, and we shouldn't really try to blur those lines; however, it's far too impractical to use HBM as CPU system memory right now. Ideally, memory pages could be moved to wherever is best for the workload and, with the help of memory-management hints, data could be loaded into HBM in time for computation.


The latency is going up to higher numbers.

In that advertisement, it was CL40.

And it's not easy to run DDR4 above 3600 MHz even on the latest generations of CPUs.

Hope the new generation of CPUs will run them easily above 5000 without crashes; that's what the ECC memory is for.

[Image: screenshot of the advertised DDR5 module listing]


56 minutes ago, Chiyawa said:

Well, the only problem is that our RAM data width is limited. We are still using a 64-bit width (72-bit for ECC), which has been around since the DDR era in 2002. If we are going to reduce the bottleneck, especially on multi-core CPUs, increasing the bit-width might be a better solution.

The 64-bit DIMM format has been here since the mid-90s, when SDRAM was released, to match the interface width of the Pentium's FSB and to allow more memory capacity on a single module instead of using matched pairs of the older 32-bit SIMMs.

Since then the default memory channel width has remained 64 bits. Going wider would not be beneficial, particularly when clock rates are going up and signal tolerances are getting tighter. Intel actually already tried to shift to a narrow memory bus by using RDRAM with the first P4 systems, and it failed, mostly due to bad business practices. Now with DDR5 the whole industry is finally ready to go narrow and fast. Let's hope AMD can keep the Infinity Fabric scaling up with the future DDR5-based Zen CPUs.


8 minutes ago, DuckDodgers said:

Let's hope AMD can keep the Infinity Fabric scaling up with the future DDR5-based Zen CPUs.

Shouldn't actually be a problem, the IFOP bandwidth is actually double that of DDR4, by design, and, as we know, clocked to the memory's memclk. So they can either go with that approach again for the DDR5 generation or decouple it, though decoupling might actually be worse even if it allows an increase in bandwidth, since you'd have to factor in syncing the buses.


1 hour ago, leadeater said:

DDR5 also has 2 independent channels per module, so effective memory access latencies are going to be lower, which in many cases is more important than a wider bus with more raw bandwidth.

I look forward to the confusion that will bring when people talk about modules and channels... As always, whether a particular workload is more affected by bandwidth or latency will vary, but bandwidth is the more easily addressable bottleneck compared to latency, which has remained roughly constant through the DDR generations. I think the split into twice as many, half-as-wide channels might only benefit heavily random-access workloads, and it'll be interesting to see how that works out. I'm sure the usual tech sites will try to do some kind of DDR4 vs DDR5 test, depending on what platform options are available at the time. Skylake supported both DDR3 and DDR4, for example, so there was some of that testing at the time.

 

1 hour ago, leadeater said:

I think it's just not that attractive to increase the bus width, as it'll actually increase latencies, and that is not appealing at all. Personally I think that is where HBM fits in, and we shouldn't really try to blur those lines; however, it's far too impractical to use HBM as CPU system memory right now. Ideally, memory pages could be moved to wherever is best for the workload and, with the help of memory-management hints, data could be loaded into HBM in time for computation.

I'd really hate to think of the impact using HBM would have on general CPU use cases. Latency is horrible by design (low clock, very wide bus); it is geared towards bandwidth. Too far in one direction for a general use case. Maybe it'll make some kind of sense if everyone ran CPUs with so many cores they start to look more like GPUs as we know them today.
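Ballpark numbers show how far apart the two designs sit; the per-pin rates here are rough generation-typical figures, not any specific product:

def peak_gb_s(width_bits, gbit_per_pin):
    # bus width * per-pin data rate, converted to GB/s
    return width_bits * gbit_per_pin / 8

print(peak_gb_s(1024, 2.0))  # one HBM2 stack: very wide, low clock -> 256 GB/s
print(peak_gb_s(64, 3.2))    # one DDR4-3200 channel: narrow, fast -> 25.6 GB/s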

 

1 hour ago, PeachGr said:

The latency is going up to higher numbers.

In that advertisement, it was CL40.

Early talk of DDR5 modules will likely be about JEDEC standard timings. The vast majority of enthusiast-marketed DDR4 modules do not run at standard timings but are set much tighter. So, again, compare like with like. I'm sure enthusiast DDR5 modules will follow with more aggressive timings.
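The like-with-like comparison is easier in nanoseconds: first-word CAS latency is CL × 2000 / MT/s, since one clock period is 2000/MT/s ns with two transfers per clock. A quick sketch:

def cas_latency_ns(cl, mt_s):
    # CL clock cycles * clock period (2000 / MT/s nanoseconds)
    return cl * 2000 / mt_s

print(cas_latency_ns(22, 3200))  # JEDEC DDR4-3200 CL22: 13.75 ns
print(cas_latency_ns(16, 3200))  # typical XMP DDR4-3200 CL16: 10.0 ns
print(cas_latency_ns(40, 4800))  # the advertised DDR5-4800 CL40: ~16.7 ns

So the JEDEC-to-JEDEC gap is much smaller than the CL numbers alone suggest.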

 

16 minutes ago, leadeater said:

Shouldn't actually be a problem, the IFOP bandwidth is actually double that of DDR4, by design

My understanding was that when running 1:1, unidirectional IF bandwidth equalled dual-channel RAM bandwidth. So I guess your statement would be correct if you're looking at a single DDR4 channel. Please correct me if I'm reading it differently than intended.

 

IF was a pain point of Zen 2, especially when AMD clarified that (at least for consumer models) the CCD-to-IOD write bandwidth over IF was only half that of the read. For single-CCD CPUs, write bandwidth to memory was awful. You could only attain the RAM's full potential with two-CCD models, but they're already short on bandwidth with one.


2 hours ago, Chiyawa said:

Hmm... 10 GHz might be far-fetched. Even a consumer CPU's memory controller can hardly reach that insane frequency.

With that said, I think 10 GHz is probably not going to happen any time soon (if ever in DDR5). But it depends; maybe DDR5's life cycle will last a decade, and then we'll see more and more DDR5 RAM running at 10 GHz. Still, in our current situation, it doesn't really give us a lot of benefits.

That's because it's not 10,000 MHz but 10,000 MT/s at 5 GHz.
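The arithmetic, for anyone thrown by the naming:

mt_s = 10_000            # "DDR5-10000" is megatransfers per second
transfers_per_clock = 2  # DDR = double data rate, one transfer per clock edge
print(mt_s / transfers_per_clock)  # 5000.0 -> a 5 GHz I/O clock, not 10 GHz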


49 minutes ago, porina said:

I'd really hate to think of the impact using HBM would have on general CPU use cases. Latency is horrible by design (low clock, very wide bus), it is geared towards bandwidth. Too far in one direction for a general use case. Maybe it'll make some kind of sense if everyone ran CPUs with so many cores they start to look more like GPUs as we know them today.

Yep, that's why I said you need both. Neither can replace the other, and making one more like the other makes it worse at what it's actually good for in the first place.

 

There are already CPUs with on-package HBM; the Fujitsu A64FX has it and Intel Sapphire Rapids will support it. I believe the A64FX is HBM-only, though the entire SoC is designed for high vector-math throughput anyway, so it's a little too purpose-specific for this discussion.

 

But you really need a unified memory hierarchy to truly make use of HBM for CPUs without tradeoffs, and that's too expensive outside of servers.

 

Then there is sort of the reverse situation happening too: Samsung putting processing cores into HBM chips, but I guess that is more akin to GPU/ASIC/FPGA usage.

 

49 minutes ago, porina said:

My understanding was that when running 1:1, unidirectional IF bandwidth equalled dual-channel RAM bandwidth. So I guess your statement would be correct if you're looking at a single DDR4 channel. Please correct me if I'm reading it differently than intended.

 

IF was a pain point of Zen 2, especially when AMD clarified that (at least for consumer models) the CCD-to-IOD write bandwidth over IF was only half that of the read. For single-CCD CPUs, write bandwidth to memory was awful. You could only attain the RAM's full potential with two-CCD models, but they're already short on bandwidth with one.

I don't remember it being half, but then I don't remember a lot about it anymore lol. I know the IF link between the CCD and IOD is 100 GBps each, consumer platform.

 

My problem is I care more about EPYC than Ryzen, and there is a big-ass difference between the EPYC and Ryzen IODs.

[Image: AMD EPYC 7002 and Ryzen 3000 chiplet design]

 

The IOD in EPYC actually has sub-NUMA domains and memory channel/controller locality, so I fear I could be talking out my ass if I were to talk about Ryzen.

 

But I assume you mean cases like this?

[Image: r/Amd - Ryzen 3000 IF 32 bytes read / 16 bytes write]

 

I understand that is actually just a fault in the test, or in the way it's being interpreted. Per core, the bandwidth is limited to 25 GB/s, as each core has 2 loads but only 1 store per cycle. So if you use more than a single core the write bandwidth will increase; however, each core is limited to that 25 GB/s write and 50 GB/s read. It has no effect on computation throughput, as the cores themselves are the limiter and cannot process any more bandwidth.

 

It's an architecture thing of the CCD/CCX/cores themselves, not the Infinity Fabric.

 


What’s the catch? Someone please tell me.
 

For desktops, almost everywhere I look, the fastest memory is 3200 MHz without overclocking.

 

TSMC says the chip shortage will continue into 2023. So...


1 minute ago, Jet_ski said:

What’s the catch? Someone please tell me.

Catch? You can't buy it, like everything else, and even if you could, you wouldn't be able to buy a CPU that could use it anyway.

 

Other than that, as with every DDR generation switch-over: never buy into the first product-cycle iteration if you are looking for the best performance, as it's often only as good as, or slightly worse than, the best of the last generation. Just wait a bit and check reviews until you know it's better than what came before.


37 minutes ago, leadeater said:

I don't remember it being half, but then I don't remember a lot about it anymore lol. I know the IF link between the CCD and IOD is 100 GBps each, consumer platform.

Got a reference for that? I don't think I've ever seen that figure expressed in this context. I did wonder if the enterprise parts had full bandwidth both ways, and if so, the combined bidirectional bandwidth could equal that.

 

Quote

I understand that is actually just a fault in the test, or in the way it's being interpreted. Per core, the bandwidth is limited to 25 GB/s, as each core has 2 loads but only 1 store per cycle. So if you use more than a single core the write bandwidth will increase; however, each core is limited to that 25 GB/s write and 50 GB/s read. It has no effect on computation throughput, as the cores themselves are the limiter and cannot process any more bandwidth.

 

It's an architecture thing of the CCD/CCX/cores themselves, not the Infinity Fabric.

I agree that single cores will be unlikely to saturate the bandwidth, and I agree with the approx. 50 GB/s read / 25 GB/s write figures (assuming 3200 RAM or equivalent async settings), but my understanding remains that this is due to the IF link. If it were indeed a problem with testing single cores, then having two CCDs would not have increased the bandwidth, as you'd still be single-core limited.

 

Quote

This is an expected result. Client workloads do very little pure writing, so the CCD/IOD link is 32B/cycle while reading and 16B/cycle for writing. This allowed us to save power and area inside the package to spend on other, more beneficial areas for tangible performance benefits.

https://www.overclockers.com/amd-ryzen-9-3900x-and-ryzen-7-3700x-cpu-review/

 

The quote above is from AMD. Sounds like an IF design choice/limitation to me, and the numbers work out as above.

 

If it was an early software limitation and not a hard limit in IF, then counter-examples should exist by now. I'm not aware of any.
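For what it's worth, AMD's stated per-cycle widths line up with those measured figures if you assume DDR4-3200 with fclk at 1:1 (1600 MHz); a quick check:

fclk_hz = 1.6e9                # fclk at 1:1 with DDR4-3200
print(32 * fclk_hz / 1e9)      # 32 B/cycle CCD->IOD read  -> 51.2 GB/s
print(16 * fclk_hz / 1e9)      # 16 B/cycle CCD->IOD write -> 25.6 GB/s
# 51.2 GB/s is also exactly the dual-channel DDR4-3200 peak, so reads can
# saturate the memory while writes top out at half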


Is this DDR5 running at 10000 MHz or 10000 MT/s?

 

Also, it looks like we get the same bumpy-ish contact edge, which is nice. It's a LOT easier to seat DDR4 than DDR3 or DDR1.

elephants


2 minutes ago, FakeKGB said:

Is this DDR5 running at 10000 MHz or 10000 MT/s?

I'd always assume the latter whenever anyone gives what sounds like marketing speeds.

 

It looks like the highest defined standard speed grade will eventually reach 8400, but it won't be a surprise if overclocked modules push over 9000.

