
Mainstream DDR5 Memory Modules Pictured, Rolling Out For Mass Production & Coming To Intel & AMD’s Next-Gen Platforms Soon!

TheCoder2019
5 minutes ago, igormp said:

They max at 128gb because you can only have 4 sticks of unbuffered ECC, which cap out at 32gb for each stick. Those same sticks should go up to 128gb on DDR5.

Currently, it's not an artificial limit, it's an actual lack of hw support for registered DIMMs on those platforms. If they were to limit the new platforms to the same 128gb, then you'd need an artificial limit, which would cause a major backlash.

Perhaps it's not an artificial limit. But when you read the specifications for how much addressable memory a CPU supports, that should be theoretical, not based on current DIMM capacities, right?

 

That is to say, there's nothing technically preventing a 128GB DDR4 stick based on a smaller node, no? If I recall correctly, the only reason to go registered is that you can't cram more transistors into each chip, so you just pack more chips onto the board. There's a limit to how many chips on a DIMM the CPU will address directly before it has to go through a register IO chip.
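A rough back-of-the-envelope sketch of that capacity math (my own numbers, not from this thread: a typical non-ECC UDIMM layout of 8 x8 chips per rank, two ranks, with 16Gb DDR4 dies and 64Gb DDR5 dies assumed):

```python
# Sketch: UDIMM capacity from die density and chip count.
# Assumptions (mine, illustrative only): 8 x8 chips per rank, 2 ranks, no ECC chip counted.

def udimm_capacity_gb(die_density_gbit, chips_per_rank=8, ranks=2):
    """Capacity in GB of an unbuffered DIMM built from the given dies."""
    return die_density_gbit * chips_per_rank * ranks / 8  # /8: bits -> bytes

print(udimm_capacity_gb(16))      # DDR4 with 16Gb dies   -> 32.0 GB per stick
print(udimm_capacity_gb(64))      # DDR5 with 64Gb dies   -> 128.0 GB per stick
print(4 * udimm_capacity_gb(16))  # four 32GB DDR4 sticks -> 128.0 GB platform max
```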

 

BTW, for anyone not aware, Windows 10 Home only supports 128GB of RAM, whereas Pro and Enterprise go up to 512GB. I don't know if that's just "supported" or a hard limit, though.


21 minutes ago, StDragon said:

But when you read the specifications for how much addressable memory a CPU supports, that should be theoretical, not based on current DIMM capacities, right?

That's usually what they validated using the DIMM capacities available at the time. One nice example of this can be seen here.

Some CPUs have flaws that limit the max ram you can have, such as what happened on Ivy Bridge/Haswell. One would hope that newer CPUs are well designed and able to handle more than what they validated.

23 minutes ago, StDragon said:

That is to say, there's nothing technically preventing a 128GB DDR4 stick based on a smaller node, no? If I recall correctly, the only reason to go registered is that you can't cram more transistors into each chip, so you just pack more chips onto the board. There's a limit to how many chips on a DIMM the CPU will address directly before it has to go through a register IO chip.

Yeah, it should be doable. As an example, AMD's 1st and 2nd gen TRs only announced support for 128gb, since there were no 32gb sticks available at the time. Once they came out, people found that they worked without any problems.

On the other hand, 3rd gen TR (non-PRO) has support for 512gb of ram, but there's no way to get that since those don't support registered dimms, so you're stuck with 8x32gb until 64gb udimms become a thing (if ever).
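Put another way, the practical ceiling on these platforms is just DIMM slots times the largest unbuffered stick that exists at the time. A quick sketch (slot count and stick sizes assumed from the Threadripper example above):

```python
# Sketch: platform memory ceiling = DIMM slots x largest available UDIMM (GB).
def platform_max_gb(dimm_slots, largest_udimm_gb):
    return dimm_slots * largest_udimm_gb

print(platform_max_gb(8, 16))  # 8 slots, 16gb sticks at launch -> 128 (the announced figure)
print(platform_max_gb(8, 32))  # once 32gb UDIMMs arrived       -> 256
print(platform_max_gb(8, 64))  # hypothetical 64gb UDIMMs       -> 512 (the 3rd-gen TR limit)
```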

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


48 minutes ago, igormp said:

I feel you

[screenshot: memory usage]

 

Luckily I can still upgrade to 128gb (which I'll likely do before the end of this year); that should be enough until DDR5 is more standardized and high-density sticks become available.

64gb consumer sticks should be a thing without problems. In theory, you could have 4x more density (up to 128gb per stick with dual rank).

What was your "Committed" / pagefile usage, though?  (Also idk what OS you're using; that looks a little different from the memory summary in my Windows 10 task manager.)

Mine was 370GB in the screenshot I posted earlier.
I'll put the full (not cropped) screenshot in a spoiler, it includes File Explorer open to the two SSDs that have (highlighted) pagefile configured on them.

Spoiler

[screenshot: full (uncropped) task manager view, with File Explorer open to the two pagefile SSDs]

 

 

I really hope Intel/AMD don't kneecap mainstream DDR5 to 128GB, not even on the 1st gen.  (Although I'm planning on waiting for 2nd-gen chipsets/CPUs before I upgrade from my Z97/DDR3 setup.)  I want an absolute minimum of 512GB or 1TB, preferably 2TB of RAM, or at least the same jump (or more) over my current desktop or laptop as I had over my dad's old laptop that I was using.

Current Laptop: 64GB (from May 2019; was 40GB from October 2016 (about when it became my daily driver), or 8GB from December 2015)

Current desktop: 32GB (February 2015; had 16GB when I built it in January 2015, quickly realized that wasn't enough.)
Dad's old laptop: 2GB (he got it August/September 2015; I started using it around March/April, when the motherboard in the desktop I had then (with 4GB RAM, 3GB usable due to 32-bit Windows) died.)

Dad's laptop to my desktop was a (32/2) 16x jump.  Another 16x over my current laptop's 64GB would be (16*64) 1TB of RAM.  And if you calculated directly from my dad's old laptop to my laptop's current capacity, and applied that same size jump over my laptop, that would be (64/2 = 32; 32*64) 2TB of RAM.
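(The same arithmetic as above, just written out:)

```python
# Restating the capacity-jump arithmetic from the paragraph above (all values in GB).
dads_laptop, old_desktop, current_laptop = 2, 32, 64

jump = old_desktop / dads_laptop             # 16x
print(jump * current_laptop)                 # 1024 GB = 1 TB

bigger_jump = current_laptop / dads_laptop   # 32x
print(bigger_jump * current_laptop)          # 2048 GB = 2 TB
```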


And if I got a system with 8 or more slots, or support for RDIMMs/LRDIMMs, that RAM capacity requirement would be a few powers of 2 higher.  (For example, if I got a system with, say, 16 DIMM slots and support for LRDIMMs, I would hope to eventually put more RAM in it than my desktop currently has in 3.5" HARD DRIVE capacity (2x 8TB + 10TB = 26TB), with bonus points for also exceeding the other HDDs I have sitting on a shelf (2x 8TB, 10TB, 2x 14TB = 54TB; plus the other 26TB = 80TB).)

(BTW, I didn't count a few sub-1TB drives I still have, or my SSDs; also, calculating against the maximum possible HDD capacity my current system could support, based on motherboard ports, PCIe slots (to add 40-port HBAs like the Highpoint Rocket 750), etc., might make it a bit harder for my next system to have more RAM than my current system supports in HDD capacity.)


20 minutes ago, PianoPlayer88Key said:

What was your "Committed" / pagefile usage, though?  (Also idk what OS you're using; that looks a little different from the memory summary in my Windows 10 task manager.)

Around 8gb on my swap partition. I'm using Linux. I avoid having stuff in swap like the plague, since everything gets unbearably slow.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


On 4/25/2021 at 11:22 PM, TheCoder2019 said:

Summary

The first DDR5 memory modules have been created and pictured. DDR5 is also expected to reach over 10,000 MHz! They are expecting to release this stuff later this year!

[image: DDR5 memory modules with PMIC]

 

Quotes

 

My thoughts

I won't be able to try this, but the fact that it has the potential to hit 10,000 MHz is insanely mind-blowing! And for 4800MHz speeds, it only takes 1.1V! Normal RAM usually takes ~1.2-1.65V, so it will be more power efficient. Also, this is my first post here, hope it's good!

 

Sources

 https://wccftech.com/mainstream-ddr5-memory-modules-pictured-rolling-out-for-mass-production/amp/

Notice that the article says 10,000 MHz is in the "research" phase, meaning this is basically extreme overclocking stuff. Considering that DDR4 has been clocked above 7000 MHz, it's not as impressive as it sounds.

 

Here is a chart showing actual DDR5 speeds:

 

[chart: mainstream DDR5 memory module speeds for consumer desktop platforms]

 


1 hour ago, igormp said:

Around 8gb on my swap partition. I'm using Linux. I avoid having stuff in swap like the plague, since everything gets unbearably slow.

Ahh, I wondered if it was Linux, wasn't sure what task manager interface that was.

My swapfile usage was a bit over 303 GB, between swapfiles on 2 SSDs.  There was a bit over 116GB on a 1TB 2.5" SATA (WD Blue 3D) SSD, and a bit under 187GB on a 2TB M.2 NVMe (Silicon Power P34A80) SSD.
Having swap on SSD does help with performance from what I've observed.
Yeah, I hear people say that you shouldn't have swap on SSDs because it uses up write cycles, but I think a lot of recent TLC SSDs have good enough endurance that it shouldn't be much of an issue.  My 1TB Blue 3D is rated for 400TB of writes, while my 2TB P34A80 is rated for about 3.1PB.
Datacenter SSDs, with their (IIRC) tens of petabytes of endurance, would probably be even better, but last I checked, even before the storage-mining rumors started coming out, those weren't cheap.  OTOH, I don't like the low endurance of a lot of QLC SSDs... basically, when possible, I like SSDs with at least 1 PB of endurance per TB of capacity, or > 1 DWPD over 5+ years.  (Hmm, I wonder how much you can write to a HDD before it dies, vs. an SSD....)
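For anyone wanting to sanity-check a drive against that rule of thumb: DWPD is just the rated write endurance spread over the drive size and the rating period. A quick sketch using the two drives mentioned above (the 5-year period is my assumption; use the drive's actual warranty term):

```python
# Sketch: convert a rated endurance (TBW) into drive-writes-per-day (DWPD).
# The 5-year rating period here is an assumption, not a spec from the thread.

def dwpd(tbw, capacity_tb, years=5):
    return tbw / (capacity_tb * years * 365)

print(round(dwpd(400, 1), 2))    # 1TB WD Blue 3D, 400 TBW        -> ~0.22 DWPD
print(round(dwpd(3100, 2), 2))   # 2TB P34A80, ~3.1 PB (3100 TBW) -> ~0.85 DWPD
print(round(dwpd(1000, 1), 2))   # the "1 PB per TB" rule          -> ~0.55 DWPD over 5 years
```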

 

BTW I can't upgrade my RAM; 64GB is the max my laptop supports (DDR4, 16GB per stick, 4 SO-DIMM slots; the Clevo P750DM-G came out before 32GB SO-DIMMs existed), and 32GB is the max on my desktop (DDR3, 8GB per stick, 4 DIMM slots, ASRock Z97 Extreme6).
My plan for a while now has been to upgrade the desktop to DDR5, and wait for DDR6 or DDR7 to upgrade the laptop.  Then I might see if I can skip to DDR8/9/10/whatever on the desktop - I don't like having to tear apart an entire system just to upgrade one component like a motherboard or case any more often than I have to.  I'd rather replace a dead-of-old-age Seasonic Prime PSU a few times before I replace one of those other 2 parts.  Beyond that, I might want to leapfrog the two - for example, laptop on DDR6, desktop on DDR8, laptop on DDR10, desktop on DDR12, and so on.


19 minutes ago, PianoPlayer88Key said:

Having swap on SSD does help with performance from what I've observed.

I have mine on an nvme, but it's still slow as hell compared to having things in your regular ram. Having to swap pages out to the disk (no matter how fast) and back to ram is painful.

31 minutes ago, PianoPlayer88Key said:

BTW I can't upgrade my RAM; 64GB is the max my laptop supports (DDR4, 16GB per stick, 4 SO-DIMM slots; the Clevo P750DM-G came out before 32GB SO-DIMMs existed)

Aren't 32gb so-dimm sticks supported on your laptop? It'd be worth a try IMO, even if clevo hasn't validated it.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


2 hours ago, igormp said:

I have mine on an nvme, but it's still slow as hell compared to having things in your regular ram. Having to swap pages out to the disk (no matter how fast) and back to ram is painful.

Yeah, not having to use swap at all would be nice. 🙂  But I definitely notice a difference between swapping to NVMe vs swapping to platter HDD.
I was just thinking ... with PCIe 5.0, or possibly PCIe 6.0 if it gets fast-tracked (and I wait for that), I'm guessing NVMe SSDs (especially x8 or x16 ones, not just the pleb x4 ones) would be faster, at least for sequential transfers, than my DDR3 RAM, and maybe even faster than my DDR4.
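Rough peak numbers for that comparison, using approximate usable per-lane PCIe figures and theoretical dual-channel RAM bandwidth (my math; real drives and real RAM land well below these):

```python
# Sketch: theoretical peak bandwidth, PCIe link vs dual-channel RAM (GB/s, approximate).
PCIE_PER_LANE = {3.0: 0.985, 4.0: 1.97, 5.0: 3.94, 6.0: 7.88}  # usable GB/s per lane, approx.

def pcie_gbs(gen, lanes):
    return PCIE_PER_LANE[gen] * lanes

def dual_channel_ram_gbs(mt_per_s):
    return 2 * mt_per_s * 8 / 1000  # 2 channels x 64-bit (8-byte) bus

print(pcie_gbs(5.0, 4))             # ~15.8 GB/s (PCIe 5.0 x4 SSD)
print(pcie_gbs(5.0, 16))            # ~63.0 GB/s (PCIe 5.0 x16)
print(dual_channel_ram_gbs(1600))   # ~25.6 GB/s (dual-channel DDR3-1600)
print(dual_channel_ram_gbs(2133))   # ~34.1 GB/s (dual-channel DDR4-2133)
```

So on paper a PCIe 5.0 x16 device could out-run both, an x8 one could edge past dual-channel DDR3-1600, and an x4 drive would still sit below it - for sequential transfers only; latency is another story.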

 

I'll put a couple screenshots in a spoiler.

Both have 2 runs each (4 total) of CrystalDiskMark (peak & real-world) on an NVMe SSD, and a RAM Disk.

First is my desktop (i7-4790K, ASRock Z97 Extreme6, 32GB (8GBx4) G.Skill Ares DDR3-1600 CL9 DIMM, 1TB Samsung 970 Evo).

Second is my laptop (i7-6700K, Clevo P750DM-G, 64GB (16GBx4) G.Skill Ripjaws DDR4-2133 CL15 SO-DIMM, 2TB Silicon Power P34A80).

(I'll have to edit this post from my laptop after I submit on my desktop to put the 2nd screenshot in.)

Spoiler

[screenshot: CrystalDiskMark runs on the desktop (970 Evo NVMe SSD vs RAM disk)]

 

[screenshot: CrystalDiskMark runs on the laptop (P34A80 NVMe SSD vs RAM disk)]

Wow, the laptop has only 4x the pixels (via an Asus VG289Q; the desktop is using a Dell U2414H), but the screenshot is like 16.5x the size (8.5MB vs 500KB).

 

Quote

Aren't 32gb so-dimm sticks supported on your laptop? It'd be worth a try IMO, even if clevo hasn't validated it.

I doubt it.  Mine's a Skylake laptop (which I believe capped at 16GB SO-DIMMs), and I have a hunch there isn't even an official Kaby Lake BIOS update for it.
There might be something unofficial, like PremaMod if he's still doing them, or something that might also add support for Coffee Lake, as I've seen things about modding this laptop to support CFL.
But ... while I'd like the extra cores, I really would like them in a lower thermal / power envelope.  Having it be on the same 14nm process (and not a die shrink) gives me cold feet about doing another in-socket upgrade.
I already upgraded from the i3-6100 I originally had to the i7-6700K.  I got the 6700K for $259 in November 2016, and I think I paid less for both CPUs combined than I would have for just the 6700K if I'd gotten it with the laptop in December 2015.  (I think it was going for about $420 around then; I paid $130 for the i3.)  I was on a limited budget, and at the time storage capacity and future upgradeability were the priorities.  (I started with 8GB RAM, the i3, and a 2TB Seagate/Samsung M9T hard drive.)

 

 


39 minutes ago, PianoPlayer88Key said:

I was just thinking ... with PCIe 5.0, or possibly PCIe 6.0 if it gets fast-tracked (and I wait for that), I'm guessing NVMe SSDs (especially x8 or x16 ones, not just the pleb x4 ones) would be faster, at least for sequential transfers, than my DDR3 RAM, and maybe even faster than my DDR4.

While speed would surely be better (if companies still target high-speed NVMe drives instead of keeping the same speeds and just using fewer lanes), latency would still be an issue.

39 minutes ago, PianoPlayer88Key said:

I doubt it.  Mine's a Skylake laptop (which I believe capped at 16GB SO-DIMMs), and I have a hunch there isn't even an official Kaby Lake BIOS update for it.

Since you live in the US, buying a kit from amazon and seeing if it works shouldn't be a problem, right? If it doesn't, you could just return it.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


1 hour ago, igormp said:

While speed would surely be better (if companies still target high-speed NVMe drives instead of keeping the same speeds and just using fewer lanes), latency would still be an issue.

Yeah, true.

Reminds me, I've off-and-on thought about putting together a NAS build.  Since this isn't a NAS topic I won't go into much detail on the rest of my ideas for the build, other than....
Basically, I want support for a lot of 3.5" SATA HDDs, with the max determined mostly by the motherboard form factor and PCIe generation (or CPU PCIe lanes, if the CPU supports more lanes / a newer generation).

From my limited research, the Highpoint Rocket 750 supports 40 SATA devices per card, and uses a PCIe 2.0 x8 interface.  (It has 10 mini-SAS ports, each of which can break out to 4 SATA ports.)  The largest 3.5" HDD I know of today is 20TB.
Mini ITX supports 1 slot, * 1 card = 40 drives, or 800 TB.
Mini DTX: 2 slots = 80 drives, or 1.6 PB.

Micro ATX: 4 slots = 160 drives, or 3.2 PB.
Full ATX & SSI EEB: 7 slots = 280 drives, or 5.6 PB.
XL ATX I *think* is 9 slots, or 360 drives (7.2 PB), and I've seen a couple 11-slot server motherboards, so theoretically 440 HDDs (8.8 PB)
Edit #4398046511104: I forgot to calculate how many SAS ports would physically fit on the motherboard, but I'm not going to go through all that now for this post.
Or....if I calculated based on the 256 PCIe 4.0 lanes of 2 Epyc Rome/Milan CPUs... that's 1024 PCIe 2.0 lanes, divided by 8 = 128 cards, at 40 drives each = 5,120 HDDs, or 102.4 PB.

As for how many NVMe SSDs you'd need to reach that much storage without going into QLC .... the largest non-QLC NVMe SSD I'm aware of is 4 TB, so 102.4 PB / 4 TB = 25,600 SSDs. 🙂  (Or, if you went more reasonable and just did standard ATX ... that's 5.6 PB / 4 TB = 1,400 SSDs.)

Then, to bring it back to RAM ....  I've heard in one or two places a recommendation that if you're doing deduplication on FreeNAS/TrueNAS, you should have 5 GB of RAM per 1 TB of storage capacity.  (I've only seen that once or twice; most other "advice" I've seen just mentions 1 GB of RAM per TB of storage and doesn't mention deduplication.)  Sooo (rough numbers sketched below).....
Full ATX & SSI EEB: 5.6 PB Storage = 28 TB RAM.
2x Epyc Rome/Milan: 102.4 PB Storage = 512 TB RAM.
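Here's that drive/RAM math gathered in one place (the 40-drives-per-card, 20TB-per-drive, and 5GB-of-RAM-per-TB figures are taken from the posts above; everything is a theoretical maximum):

```python
# Sketch: theoretical NAS sizing from the figures quoted in the post above.
DRIVES_PER_CARD = 40       # HighPoint Rocket 750, per the post above
HDD_TB = 20                # largest 3.5" HDD mentioned
DEDUP_RAM_GB_PER_TB = 5    # FreeNAS/TrueNAS dedup rule of thumb quoted above

def nas_math(pcie_slots):
    drives = pcie_slots * DRIVES_PER_CARD
    storage_tb = drives * HDD_TB
    dedup_ram_tb = storage_tb * DEDUP_RAM_GB_PER_TB / 1000
    return drives, storage_tb, dedup_ram_tb

for name, slots in [("Mini ITX", 1), ("Micro ATX", 4), ("Full ATX / SSI EEB", 7)]:
    drives, tb, ram_tb = nas_math(slots)
    print(f"{name}: {drives} drives, {tb / 1000:.1f} PB, ~{ram_tb:.0f} TB RAM for dedup")
# Mini ITX: 40 drives, 0.8 PB, ~4 TB RAM for dedup
# Micro ATX: 160 drives, 3.2 PB, ~16 TB RAM for dedup
# Full ATX / SSI EEB: 280 drives, 5.6 PB, ~28 TB RAM for dedup
```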

The problem is finding a motherboard, case & PSU that support that and don't break the bank. 😄 (My preferred price cap is such that a single reasonably priced HDD, like a 12-16TB model assuming they don't get expensive, costs more than the entire rest of the system, so I'd likely be limiting myself to SSI EEB or smaller, and not an Epyc platform unless I wait until it costs about what LGA 771/1366 does now.)


 

 

Quote

Since you live in the US, buying a kit from amazon and seeing if it works shouldn't be a problem, right? If it doesn't, you could just return it.

Ah ... but I prefer not to buy things just to turn around and return them.  Yeah, I've done that once or twice, but that was because I got the wrong thing.
For example, when I built my desktop, I started with G.Skill Sniper RAM, then swapped it for Ares because the heatspreaders were lower, and it turned out to be a good $60 or so cheaper anyway - $200 vs $260 for 32GB.  The Snipers were having a hard time fitting under the fan on my Hyper 212 Evo.

 

Spoiler

[photo: G.Skill Sniper RAM a bit too tall under the Hyper 212 Evo]

The upper pic was taken during the narrow window of time when I only had 2 sticks / 16 GB total.

I then bought the other 2 sticks of Sniper to match ... then decided they were too tall about the same time I saw the Ares on sale, so I RMA'd the Snipers and bought the Ares ... all from Newegg.

 

[photo: G.Skill Ares RAM under the Hyper 212 Evo]

 

 

Also, even if I had 128GB in my laptop, I'd still be using a pagefile.
I've had 370GB committed a couple of times ... so I'd need a *MINIMUM* of 384 GB of RAM (or 512GB if going strictly by powers of 2, avoiding a DIMM count divisible by 3, etc.) to not need swap, or preferably 1 or 2 TB or more to have some breathing room.
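(Spelling out that sizing step: take the worst-case committed figure and round up to the next capacity you can actually build out of whole sticks. A sketch, with 64GB modules assumed:)

```python
import math

# Sketch: round a worst-case committed figure up to something buildable (64GB sticks assumed).
def next_stick_multiple(committed_gb, stick_gb=64):
    return math.ceil(committed_gb / stick_gb) * stick_gb

def next_power_of_two(committed_gb):
    return 1 << math.ceil(math.log2(committed_gb))

print(next_stick_multiple(370))   # 384 GB (6 x 64GB sticks)
print(next_power_of_two(370))     # 512 GB if sticking to powers of two
```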

 

 


22 minutes ago, PianoPlayer88Key said:

Or....if I calculated based on the 256 PCIe 4.0 lanes of 2 Epyc Rome/Milan CPUs... that's 1024 PCIe 2.0 lanes, divided by 8 = 128 cards, at 40 drives each = 5,120 HDDs, or 102.4 PB.

That's not how it works. If you're downgrading the 256 PCIe 4.0 lanes to 2.0 speeds, you'd still have 256 PCIe 2.0 lanes. That's why most PCIe 5.0 SSDs would have the same speed and capacity as current 4.0 ones but use half the lanes.
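To put rough numbers on that (per-lane figures are approximate): the lane count stays fixed and only the per-lane speed changes, so there's nothing to "convert" into extra lanes.

```python
# Sketch: lanes are physical; changing the generation only changes per-lane speed.
PER_LANE_GBS = {2.0: 0.5, 3.0: 0.985, 4.0: 1.97}  # approx. usable GB/s per lane

lanes = 256  # dual Epyc Rome/Milan, as in the quoted post
print(lanes * PER_LANE_GBS[4.0])  # ~504 GB/s at native PCIe 4.0
print(lanes * PER_LANE_GBS[2.0])  # ~128 GB/s if those same lanes ran at 2.0 speeds
# Either way it's still 256 physical lanes -- running slower doesn't create more of them.
```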

 

 

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


3 minutes ago, igormp said:

That's not how it works. If you're downgrading the 256 PCIe 4.0 lanes to 2.0 speeds, you'd still have 256 PCIe 2.0 lanes. That's why most PCIe 5.0 SSDs would have the same speed and capacity as current 4.0 ones but use half the lanes.

Ahh, well I was thinking bifurcating / splitting them ... I thought at least on some server platforms you could bifurcate, say, a PCIe 4.0 x16 slot into 8 PCIe 3.0 x4 devices?  Or is there no way to double the number of lanes if you step down a generation, even via risers, etc?


5 minutes ago, PianoPlayer88Key said:

Ahh, well I was thinking bifurcating / splitting them ... I thought at least on some server platforms you could bifurcate, say, a PCIe 4.0 x16 slot into 8 PCIe 3.0 x4 devices?

No, you can't do that, a lane is a physical trace, you can't just double it.

5 minutes ago, PianoPlayer88Key said:

Or is there no way to double the number of lanes if you step down a generation, even via risers, etc?

There is one way: using a PLX chip, but then you're getting into high-end, server-grade hardware.

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


31 minutes ago, igormp said:

No, you can't do that, a lane is a physical trace, you can't just double it.

There is one way: using a PLX chip, but then you're getting into high-end, server-grade hardware.

Ah ... but even with PLX, I have a hunch that would still not get past a platform bandwidth limit?
For example, let's say you put 8 of those cards that take 4 NVMe drives each in this board (see spoiler) .... I'm guessing you wouldn't get a combined transfer rate of (3.5*2*4 + 3.5*4*4 -- 4 of the slots are x8, so 2 SSDs each I guess; also I'm assuming PCIe 3.0 and rounding to 3.5 GB/s) 84 GB/s?  I have a feeling LGA 1155 couldn't do that...

Spoiler

[photo: MSI Big Bang Marshal B3 board, front and back]

 

 


4 minutes ago, PianoPlayer88Key said:

... but even with PLX, I have a hunch that would still not get past a platform bandwidth limit?

Well, yeah, the cpu would be a bottleneck in the end if you did hit all of the drives at once. 
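Rough numbers for that bottleneck, using the assumptions from the post above (3.5 GB/s per SSD, 4 slots feeding 2 SSDs each and 4 slots feeding 4) plus an LGA 1155 CPU's 16 PCIe 2.0 lanes:

```python
# Sketch: aggregate drive bandwidth behind the switches vs. the CPU's upstream PCIe budget.
ssd_gbs = 3.5                  # per-drive sequential rate assumed in the post above
drives = 4 * 2 + 4 * 4         # 4 slots with 2 SSDs each + 4 slots with 4 SSDs each
print(drives * ssd_gbs)        # 84.0 GB/s of drives hanging off the switches

cpu_lanes, gen2_per_lane = 16, 0.5   # LGA 1155 CPU: 16 PCIe 2.0 lanes, ~0.5 GB/s each
print(cpu_lanes * gen2_per_lane)     # 8.0 GB/s of actual upstream bandwidth to share
```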

FX6300 @ 4.2GHz | Gigabyte GA-78LMT-USB3 R2 | Hyper 212x | 3x 8GB + 1x 4GB @ 1600MHz | Gigabyte 2060 Super | Corsair CX650M | LG 43UK6520PSA
ASUS X550LN | i5 4210u | 12GB
Lenovo N23 Yoga


39 minutes ago, igormp said:

Well, yeah, the cpu would be a bottleneck in the end if you did hit all of the drives at once. 

Ahh.

(I was editing the previous post when you replied; I was going to put this in it.)
 

I hit a bandwidth limit of (I think) a different type when running DBAN on a bunch of HDDs in my desktop last year.  I had 10 drives connected to my ASRock Z97 Extreme6 (CPU = i7-4790K), plus 2 more plugged into a ByteCC BT-PESAPA PCIe-to-IDE/SATA card.  (I forget whether the card was plugged into an x1 or x16 slot, though.)

There are too many pics to attach even in a spoiler (they would exceed the 20MB limit), so instead I put them in a shared Google Photos album.


For example, the 5TB HDD (NAG226TK) starts off at about 65 MB/s in the first screenshot, when the total bandwidth is 1193 MB/s.  (A 2TB drive is running ~138 MB/s, while a 4TB is at ~163 MB/s.)  A couple of the last screenshots show a list of all the drives that were connected once the process was completed.
Later, when some of the smaller drives (and probably a couple other 4TB and/or 5TB drives - there were 3 of each) were done, that 5TB drive jumped up to around 100-140 MB/s.

 

Basically I think I hit a limit of around 1 GB/s (or a bit more) on the chipset / SATA lanes.
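A rough way to sanity-check that theory with the screenshot numbers (the ~2 GB/s figure is the nominal DMI 2.0 uplink between the Z97 chipset and the CPU; the real sustained limit, shared with the SATA controllers, will be lower):

```python
# Sketch: observed aggregate rate vs. the chipset's shared uplink (all numbers approximate).
reported_total_mb_s = 1193   # aggregate rate from the first screenshot
dmi2_uplink_mb_s = 2000      # nominal DMI 2.0 link between Z97 and the CPU, ~2 GB/s per direction
drives_on_chipset = 10

print(reported_total_mb_s / dmi2_uplink_mb_s)   # ~0.6 of the nominal uplink
print(reported_total_mb_s / drives_on_chipset)  # ~119 MB/s average per chipset drive,
                                                # which is why drives sped up as others finished
```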

 

(Also somewhere else I thought I had a few pictures where a DBAN was taking like over a week to complete ... I think it was with a few 8 and 10TB drives, and multiple passes.)

 

 

 

I was just thinking ... I wonder if there are any platforms where the total PCIe bandwidth exceeds the RAM bandwidth by as much as a typical consumer system's dual-channel RAM outpaces a hard drive... 😄  (So that, instead of RAM being much faster than your storage, it would be much slower 😮)


17 hours ago, dilpickle said:

Notice that the article says 10,000 MHz is in the "research" phase, meaning this is basically extreme overclocking stuff. Considering that DDR4 has been clocked above 7000 MHz, it's not as impressive as it sounds.

That DDR4 running at 7000 was the most extreme of extreme overclocks, using LN2, and you're not going to see that in any system running normally. It's better to look at the highest speed grades normally offered on DDR4 for comparison; we're looking at around 4600 for easily available kits. I think I saw some 5000 kits offered before, but those are the most cherry-picked samples. Also, DDR4 is very mature - that overclock was achieved 6 years after DDR4's consumer availability, and we're not even at the launch of DDR5 yet.

 

DDR5 eventually running at 7000 is within the standard's defined range, so it will be relatively easy to buy and get working when the time comes. 10000 is above the standard, so those will be pushed chips, but by comparison probably no worse than the commercially available high-speed kits of today.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


I know current consumer DDR4 memory goes up to around 5000MHz, but those kits are pretty crazy priced. Since DDR5 can go up to 10000MHz, I hope the prices will be justified.

Because DDR5 is releasing soon, I've actually been hesitant to spend more money on DDR4 to upgrade to 32GB. RAM prices are pretty bad right now too; I miss the time when 16GB could be bought for <$60.

 
