DDR5 - Multi-ranked DIMMs to push DDR beyond 17,000 MT/s

BachChain

Summary

AMD and JEDEC have collaborated to develop and announce MRDIMM. This open JEDEC standard allows multiplexing DDR modules together in order to drastically increase bandwidth. First gen will combine two 4,400 MT/s DDR5 DIMMs to get an 8,800 MT/s effective data rate, and third gen will target 17,600 MT/s. No specific roadmap has been given yet, but it's speculated that first gen may arrive in 2024 with Zen 5 or Granite Rapids.

 

Quotes

Quote

MRDIMM's objective is to double the bandwidth with existing DDR5 DIMMs. The concept is simple: combine two DDR5 DIMMs to deliver twice the data rate to the host. Furthermore, the design permits simultaneous access to both ranks. For example, you combine two DDR5 DIMMs at 4,400 MT/s, but the output results in 8,800 MT/s. According to the presentation, a special data buffer or mux combines the transfers from each rank, effectively converting the two DDRs (double data rate) into a single QDR (quad data rate).
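To put rough numbers on the quote above, here's a back-of-the-envelope sketch (my own illustration, assuming the standard 64-bit DDR5 data bus per DIMM and ignoring the 2x32-bit sub-channel split, ECC bits and protocol overhead):

```python
# Rough peak bandwidth per DIMM, assuming a 64-bit (8-byte) data bus.
BUS_BYTES = 8

def peak_gbs(mt_per_s: float) -> float:
    """Peak bandwidth in GB/s for a given transfer rate in MT/s."""
    return mt_per_s * 1e6 * BUS_BYTES / 1e9

print(peak_gbs(4400))    # ~35.2 GB/s for a plain DDR5-4400 DIMM
print(peak_gbs(8800))    # ~70.4 GB/s with two 4,400 MT/s ranks muxed (gen 1)
print(peak_gbs(17600))   # ~140.8 GB/s at the 17,600 MT/s gen 3 target
```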

 

My thoughts

As far as I can tell, MRDIMMs will only work with specific, very low, DDR speeds. In that case, I question how useful this will be. There are regular DDR5 kits available today (still very early in DDR5's lifespan) rated only slightly below gen 1's theoretical maximum. On top of that, DDR6, which is planned to launch years before gen 3, is expected to offer a comparable maximum bandwidth.

 

Sources

https://www.techpowerup.com/306762/amd-and-jedec-create-ddr5-mrdimms-with-17-600-mt-s-speeds

https://www.tomshardware.com/news/amd-advocates-ddr5-mrdimms-with-speeds-up-to-17600-mts


So, dual channel in a single slot.

Where is OP finding 8,800 MT/s DDR5?

This sounds nice for high-density servers. EPYCs are using 12 channels now and it takes a lot of board space, so being able to put in 6 modules to get the speed of 12 might free up board space for other things.
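A quick sketch of the aggregate-bandwidth math behind that idea (illustrative only; assumes DDR5-4400 RDIMMs and 64-bit channels, ignoring overhead):

```python
BUS_BYTES = 8  # 64-bit data bus per memory channel

def aggregate_gbs(channels: int, mt_per_s: float) -> float:
    """Total peak bandwidth in GB/s across a number of channels."""
    return channels * mt_per_s * 1e6 * BUS_BYTES / 1e9

print(aggregate_gbs(12, 4400))  # 12 channels of DDR5-4400: ~422 GB/s
print(aggregate_gbs(6, 8800))   # 6 channels of MRDIMM-8800: ~422 GB/s
```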


4 minutes ago, starsmine said:

So, dual channel in a single slot.

Where is OP finding 8,800 MT/s DDR5?

This sounds nice for high-density servers. EPYCs are using 12 channels now and it takes a lot of board space, so being able to put in 6 modules to get the speed of 12 might free up board space for other things.

Technically DDR5 is already dual-channel in a single slot.  I question how they're going to fit what would presumably need twice as many pins/traces when they're already struggling to make DDR5 work.

Router:  Intel N100 (pfSense) WiFi6: Zyxel NWA210AX (1.7Gbit peak at 160Mhz)
WiFi5: Ubiquiti NanoHD OpenWRT (~500Mbit at 80Mhz) Switches: Netgear MS510TXUP, MS510TXPP, GS110EMX
ISPs: Zen Full Fibre 900 (~930Mbit down, 115Mbit up) + Three 5G (~800Mbit down, 115Mbit up)
Upgrading Laptop/Desktop CNVIo WiFi 5 cards to PCIe WiFi6e/7


1 minute ago, Alex Atkin UK said:

Technically DDR5 is already dual-channel in a single slot.

Sure, it's double-pumped 64-bit rather than single-pumped 128-bit. No one is going to say you get double the bandwidth out of it doing that.

No one calls a 12-channel EPYC 24-channel. No one calls Zen 4 or Alder Lake/Raptor Lake quad-channel.


2 minutes ago, starsmine said:

So, dual channel in a single slot.

Where is OP finding 8,800 MT/s DDR5?

This sounds nice for high-density servers. EPYCs are using 12 channels now and it takes a lot of board space, so being able to put in 6 modules to get the speed of 12 might free up board space for other things.

Some workloads do require all of that bandwidth though, especially given the number of cores we have in those systems nowadays, so doubling the speed for the same space is also a really valid option.

9 minutes ago, BachChain said:

As far as I can tell, MRDIMMs will only work with specific, very low, DDR speeds. In that case, I question how useful this will be. There are regular DDR5 kits available today (still very early in DDR5's lifespan) rated only slightly below gen 1's theoretical maximum. On top of that, DDR6, which is planned to launch years before gen 3, is expected to offer a comparable maximum bandwidth.

That's for servers, so it's really useful. (L)RDIMMs usually don't go much past 5,000 MT/s, so getting about double that is really nice.

3 minutes ago, Alex Atkin UK said:

Technically DDR5 is already dual-channel in a single slot.  I question how they're going to fit what would presumably need twice as many pins/traces when they're already struggling to make DDR5 work.

There's a chip responsible for buffering the transfers between the RAM modules and the memory controller in the CPU, like the regular buffer chip found in registered DIMMs but on steroids.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


Worse than you all think: AMD is doing MRDIMM using the open JEDEC standard and Intel is going with something similar but their own closed MCRDIMM. So now we have to be careful what memory we get (mostly a used-market concern), but either way it's basically signaling the death of being able to transfer memory between any server you want just because "DDR" is ubiquitous.


1 minute ago, leadeater said:

Worse than you all think: AMD is doing MRDIMM using the open JEDEC standard and Intel is going with something similar but their own closed MCRDIMM. So now we have to be careful what memory we get (mostly a used-market concern), but either way it's basically signaling the death of being able to transfer memory between any server you want just because "DDR" is ubiquitous.

COME GET YOUR RAMBUS PENTIUM 4s


5 minutes ago, starsmine said:

COME GET YOUR RAMBUS PENTIUM 4s

BUST OUT THE RAM COOLERS

Press quote to get a response from someone! | Check people's edited posts! | Be specific! | Trans Rights

I am human. I'm scared of the dark, and I get toothaches. My name is Frill. Don't pretend not to see me. I was born from the two of you.


6 minutes ago, Alex Atkin UK said:

Technically DDR5 is already dual-channel in a single slot.  I question how they're going to fit what would presumably need twice as many pins/traces when they're already struggling to make DDR5 work.

DDR5 is already two narrower, independently addressable sub-channels. MRDIMM requires dual rank (quad rank will probably work too), and a rank is addressable as well; the buffer takes requests from both ranks and then uses QDR between the DIMM and the CPU IMC.

 

[Image: SDR vs DDR vs QDR signaling comparison diagram]

So the DDR5 data sub-channels will be maintained; however, they will be operating in QDR mode. The buffer chip translates between DDR and QDR memory operations.
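As a purely conceptual sketch of what that buffer is doing (my own toy model; the names below are made up and not from the JEDEC spec):

```python
# Each rank delivers two transfers per clock (DDR); the MRDIMM buffer
# interleaves the two ranks so the host interface sees four transfers
# per clock (QDR).

def ddr_stream(rank: str, cycles: int):
    """One rank in DDR mode: a transfer on each clock edge."""
    for cycle in range(cycles):
        yield (rank, cycle, "rising")
        yield (rank, cycle, "falling")

def mrdimm_buffer(rank_a: str, rank_b: str, cycles: int):
    """Interleave two DDR streams into a single QDR stream toward the host."""
    for xfer_a, xfer_b in zip(ddr_stream(rank_a, cycles),
                              ddr_stream(rank_b, cycles)):
        yield xfer_a
        yield xfer_b

host_side = list(mrdimm_buffer("rank0", "rank1", cycles=2))
print(len(host_side))   # 8 transfers in 2 clocks -> 4 per clock (QDR)
print(host_side[:4])    # rank0/rank1 transfers interleaved
```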


3 hours ago, BachChain said:

My thoughts

As far as I can tell, MRDIMMs will only work with specific, very low, DDR speeds. In that case, I question how useful this will be. There are regular DDR5 kits available today (still very early in DDR5's lifespan) rated only slightly below gen 1's theoretical maximum. On top of that, DDR6, which is planned to launch years before gen 3, is expected to offer a comparable maximum bandwidth.

It'll get faster in time. 
Also I'd generally expect double 'slow' memory speeds to still be faster than the reasonably fast stuff in practice, though it will likely need more parallelism to extract that bandwidth. 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 


I posted this 2 days ago, but not as news since it hadn't gone mainstream yet.

 

3 hours ago, starsmine said:

Sure, it's double-pumped 64-bit rather than single-pumped 128-bit. No one is going to say you get double the bandwidth out of it doing that.

They're the same "pumped"-ness, the difference is 2x64 vs 1x128.

 

3 hours ago, starsmine said:

No one calls Zen 4 or Alder Lake/Raptor Lake quad-channel.

This remains technically wrong but everyone that matters seems to have agreed to it. Reminds me of the MHz vs MT/s thing which was only corrected relatively recently. 

 

3 hours ago, leadeater said:

Worse than you all think: AMD is doing MRDIMM using the open JEDEC standard and Intel is going with something similar but their own closed MCRDIMM.

I had seen a SK Hynix press release from last year that they were working with Intel on MCRDIMMs. Are these still different things? At a high level they sound like they're doing the same thing.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


29 minutes ago, porina said:

I had seen a SK Hynix press release from last year that they were working with Intel on MCRDIMMs. Are these still different things? At a high level they sound like they're doing the same thing.

Same thing, just protocol/standard incompatible as far as I know.


2 hours ago, Senzelian said:

SLI for RAM. Nice! Can we also get some fancy SLI bridges for these? 😶

Only for servers. And I think you mean this?

 



Introducing the DRAM COMB BRIDGE

 

[Attached image: "DRAM comb bridge"]


17 hours ago, Senzelian said:

SLI for RAM. Nice! Can we also get some fancy SLI bridges for these? 😶

This will probably never happen. That would lengthen the signal traces, and trace length is one of the most important things about RAM. Each mm between the CPU's memory controller and the memory bank itself is crucial (pun intended).

 


2 hours ago, Franck said:

This will probably never happen. That would lengthen the signal traces, and trace length is one of the most important things about RAM. Each mm between the CPU's memory controller and the memory bank itself is crucial (pun intended).

 

This doesn't increase pin count. All this is doing is sending 4 signals per cycle instead of 2. Maybe the traces need to be slightly better for signal integrity but everything else is unchanged motherboard wise. Only the DIMMs and CPUs will change.

 

And it's 100% happening, Intel already have real technical demo hardware with it and it's part of the product roadmap. This is a server thing so far, that's it. Desktop in future? Maybe, not yet sure it makes sense cost or performance wise. More to this than just increasing bandwidth. 

 

LRDIMMs never made it to desktop platform, never will. MRDIMM/MCRDIMM could well be the same.


30 minutes ago, leadeater said:

This doesn't increase pin count. All this is doing is sending 4 signals per cycle instead of 2. Maybe the traces need to be slightly better for signal integrity but everything else is unchanged motherboard wise. Only the DIMMs and CPUs will change.

 

And it's 100% happening, Intel already have real technical demo hardware with it and it's part of the product roadmap. This is a server thing so far, that's it. Desktop in future? Maybe, not yet sure it makes sense cost or performance wise. More to this than just increasing bandwidth. 

 

LRDIMMs never made it to desktop platform, never will. MRDIMM/MCRDIMM could well be the same.

 

I think he might have missed the sarcasm/memeing vis-à-vis the SLI bridges bit, or at least that's how I read his post.


39 minutes ago, leadeater said:

LRDIMMs never made it to desktop platform, never will. MRDIMM/MCRDIMM could well be the same.

I don't think even RDIMMs did, best you can get is unbuffered ECC afaik.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


32 minutes ago, leadeater said:

Desktop in future? Maybe, not yet sure it makes sense cost or performance wise. More to this than just increasing bandwidth. 

 

LRDIMMs never made it to desktop platform, never will. MRDIMM/MCRDIMM could well be the same.

LRDIMMs, correct me if I'm off, were primarily there to allow cramming in some seriously big RAM capacity. Not something that makes sense on desktop.

 

More bandwidth is a benefit to desktop. It seems we're never going to get beyond 4 slots on a consumer board, and HEDT shows no sign of returning in a consumer friendly manner. So apart from the gradual increase in ram speeds and eventual move to DDR6, we're not going to see radical changes there.

 

What hurdles are there to this appearing on desktop? I am assuming the slots will be physically compatible between regular DIMMs and the alternate ones. If not, that pretty much kills the idea already, or we need yet another variation of the "standard". Higher-quality mobos may be needed, especially in the design of the RAM traces, to support the higher data rate. This may be solved by only implementing this feature on Intel Z or AMD X chipset mobos as an additional high-end differentiation feature. This would also help resolve the other problem: module capacity would have to be high for this to work. Speaking only for today, to get even half-reasonably-performing DDR5 you have to get at least 16GB modules, which are 1Rx8. Adding another rank would bump the minimum module capacity up to 32GB.
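The capacity arithmetic behind that last point, as a small sketch (assuming 16Gb dies and x8 devices, which is what current mainstream 16GB 1Rx8 sticks use):

```python
# Quick capacity check: minimum module size per rank count.
DIE_GBIT = 16                                # density per DRAM die in Gbit
DEVICES_PER_RANK = 64 // 8                   # x8 devices to fill a 64-bit rank
RANK_GB = DIE_GBIT * DEVICES_PER_RANK // 8   # 16 GB per rank

for ranks in (1, 2):
    print(f"{ranks} rank(s): {ranks * RANK_GB} GB minimum module capacity")
# 1 rank(s): 16 GB minimum module capacity
# 2 rank(s): 32 GB minimum module capacity
```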

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


16 minutes ago, porina said:

More bandwidth is a benefit to desktop. It seems we're never going to get beyond 4 slots on a consumer board, and HEDT shows no sign of returning in a consumer friendly manner. So apart from the gradual increase in ram speeds and eventual move to DDR6, we're not going to see radical changes there.

 

We could always go with 1DPC and have quad-channel mainstream CPUs by default, but that's yet another segmentation point for manufacturers.

17 minutes ago, porina said:

What hurdles are there to this appearing on desktop?

The same as (L)RDIMMs: you need a memory controller that's capable of handling it.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


3 minutes ago, igormp said:

We could always go with 1DPC and have quad-channel mainstream CPUs by default, but that's yet another segmentation point for manufacturers.

That's probably not done for cost reasons. We'd need even more pins on the CPU socket and mobo layout would get more complicated across the whole range.

 

3 minutes ago, igormp said:

The same as (L)RDIMMs: you need a memory controller that's capable of handling it.

I was thinking about technical reasons other than "it isn't currently supported". Obviously if it were to take off they'll put the support in. This method doesn't sound like it needs extra connectivity between CPU and memory. The improvement is from hitting the link harder.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 hour ago, porina said:

LRDIMMs, correct me if I'm off, were primarily there to allow cramming in some seriously big RAM capacity. Not something that makes sense on desktop

High capacity at the expense of latency and per-DIMM bandwidth. LRDIMMs don't, and realistically can't, come with the highest speeds possible, unlike ECC UDIMMs, which can as long as they don't get pushed too far and cause too many memory errors.

 

1 hour ago, porina said:

More bandwidth is a benefit to desktop. It seems we're never going to get beyond 4 slots on a consumer board, and HEDT shows no sign of returning in a consumer friendly manner. So apart from the gradual increase in ram speeds and eventual move to DDR6, we're not going to see radical changes there.

Yep, but MRDIMM is going to come with a memory-latency tradeoff, and when you factor it against DDR5-8000, which is already a thing, MRDIMM-8800 doesn't make a whole lot of sense when effective performance would be lower than that of DDR5-8000. Of course, not everything can or will be able to actually run at DDR5-8000, but it won't be that long until new products can.

 

So you either implement a more costly QDR buffered memory design in the CPU, which requires more die space and is overall more complex, or you optimize the existing unbuffered DDR memory design to support higher transfer rates. MRDIMMs will also cost more themselves.

 

For desktop platforms that support overclocking and non-JEDEC memory configurations, keeping with the existing approach seems more logical to me.

 

1 hour ago, porina said:

What hurdles are there to this appearing on desktop?

See above.


1 hour ago, igormp said:

I don't think even RDIMMs did, best you can get is unbuffered ECC afaik.

If you stretch the definition just far enough, right to the limit, Threadripper Pro supports both. That's "desktop" right? 😅

