
Samsung Tech Day 2021: DDR6-17000, GDDR6+, GDDR7 and HBM3 Roadmap

Lightwreather

Summary

As reported by ComputerBase, Samsung used its Tech Day 2021 event to showcase its future memory technology roadmap. The company shared its memory development plans for the coming years, covering expected revisions and developments of standard technologies such as DDR RAM, GDDR graphics memory, and HBM3.

 

Quotes

Quote

DDR6/DDR6LP:

Samsung announced that standard DDR6 speeds are expected to hit 12,800 MT/s, while overclocked DDR6 memory (as in, any operational frequency above JEDEC's standard) could hit a ceiling at around 17,000 MT/s. Paired with the expectation of doubled memory channels for each DDR6 stick (four sub-channels per stick compared to DDR5's two) and quadrupled memory bank count (64 compared to DDR5's maximum of 16), DDR6 should enable an incredible jump in pure throughput and memory capacity. The low-power version of DDR6, DDR6LP, will achieve the same 17,000 MT/s operational speeds, but at 20% lower energy consumption.

GDDR6+:

Samsung is also currently developing extensions on the technology that would enable GDDR6+ chips to operate at up to 27 Gbps.

GDDR7:

While the technology doesn't yet have a debut date, Samsung expects GDDR7 to be engineered for up to 32 Gbps of throughput, slightly less than double the highest bandwidth available with GDDR6.

HBM3:

Lastly, Samsung has confirmed that HBM3 development is running as scheduled. The company didn't confirm speeds or stack density in its Tech Day presentation and mentioned only that market availability is expected for Q2 2022. Samsung is taking a different approach to HBM3 than other memory manufacturers, however

 

My thoughts

So, here I am reporting on four products announced at the same time. Well, I'd better get on with it. DDR6 seems really neat; however, I don't see it coming to market anytime in the near future. Possibly after two years, with Lunar Lake or Zen 5/6 (whatever is around then), we might see DDR6 adoption, but I still doubt it. The performance gains touted do seem really good, though. Now onto GDDR6+: meh, nothing of great interest, just slightly better bandwidth than GDDR6X. GDDR7 seems more interesting, at roughly double the bandwidth of GDDR6. HBM3 is a little sparse on the details. It should be noted that all of these products are still technically a roadmap; we probably won't see much of them for some time, and widespread adoption and retail availability are farther off still. But I must admit, this is still pretty cool.
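For a rough sense of the jump those roadmap numbers imply, here's some back-of-the-envelope math. This is a sketch only: it assumes a 64-bit module width like today's DIMMs, and the final DDR6 channel layout could change this.

```python
# Peak transfer rate = transfers per second x bytes per transfer.
# Assumption: a 64-bit-wide module, as on current DIMMs.
def peak_bandwidth_gb_s(mt_per_s: int, bus_width_bits: int = 64) -> float:
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gb_s(6_400))    # mature DDR5: 51.2 GB/s
print(peak_bandwidth_gb_s(12_800))   # JEDEC-standard DDR6: 102.4 GB/s
print(peak_bandwidth_gb_s(17_000))   # overclocked DDR6: 136.0 GB/s
```

So even at JEDEC speeds, a DDR6 stick would double mature DDR5's throughput before the extra sub-channels and banks are even considered.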

 

Sources

Tom's Hardware

ComputerBase.de

"A high ideal missed by a little, is far better than low ideal that is achievable, yet far less effective"

 

If you think I'm wrong, correct me. If I've offended you in some way, tell me what it is and how I can correct it. I want to learn, and along the way one can make mistakes; being wrong helps you learn what's right.


If we look at consumer excluding HEDT, then DDR3 lifespan was Nehalem 2009, moving to DDR4 with Skylake in 2015. Now in 2021 we have DDR5. So that is roughly 6 years between DDR generations in mainstream. Is DDR6 going to be much shorter than that cycle?

 

Standards based DDR6 running at 12800 sounds reasonable if we assume 2x with each generation. We're at 3200 with DDR4, and 6400 is expected in DDR5 as it matures. I think the last widely supported DDR3 speed was 1600, although faster grades might be defined. So, not that you can't go faster, but as a typical speed that gets widely adopted when mature, that seems reasonable. XMP modules can obviously go beyond that.

 

I don't see this as a surprise that it is being worked on already. The only question is when, and for that, I'm not holding my breath.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Well, in future cards we'll see GDDR7, though I still wonder whether we'll also see consumer flagship cards with HBM3, especially given how much faster GPUs will get with the MCM approach.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |


Imagine how many years it would take DDR6 to drop to DDR3 prices.

Specs: Motherboard: Asus X470-PLUS TUF gaming (Yes I know it's poor but I wasn't informed) RAM: Corsair VENGEANCE® LPX DDR4 3200Mhz CL16-18-18-36 2x8GB

            CPU: Ryzen 9 5900X          Case: Antec P8     PSU: Corsair RM850x                        Cooler: Antec K240 with two Noctura Industrial PPC 3000 PWM

            Drives: Samsung 970 EVO plus 250GB, Micron 1100 2TB, Seagate ST4000DM000/1F2168 GPU: EVGA RTX 2080 ti Black edition


DDR5, barely out in the world, still fresh and new
Yet DDR6 is already on the horizon to replace it...

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro


2 hours ago, williamcll said:

imagine how many years it would take DDR6 to drop to DDR3 prices.

DDR3 pricing when? The last DDR3 purchase I have on record was a 2x4GB kit of 1866, at £31 in late 2015, just after the mainstream launch of DDR4 systems. I remember buying it on price, with a floor of 1600 speed since that was the officially supported speed at the time.

 

In a quick look at a major UK seller today, the cheapest DDR4-3600 works out to around £32/8GB. It is better value to buy 16 or 32GB kits than 8GB sticks, and only slightly cheaper if you look at slower speed grades. DDR4 with speeds comparable to DDR5 is a lot more; the cheapest DDR4-4600 kits I can find are almost £100/8GB.


Of course, both of the above prices are from the end of life of each type of RAM, and I haven't factored in inflation. DDR4 is probably cheaper than DDR3 was, and it's outgoing now. I can't find numbers for my early DDR4 purchases, but from memory I wasn't getting much change from £200 for 16GB, though those were high-end modules and not cheaper base grades.

 

DDR5 pricing is hard to pin down since supply is constrained, but Crucial's UK site lists 4800 at £60/8GB. That's less than 2x over DDR4-3600 but far cheaper than high-speed DDR4, so it's looking similar to the pattern we've seen before. I'd expect DDR6 to move similarly when it eventually launches, unless it does something radically different enough to change that behaviour. Comparing it to DDR3 prices is pretty nonsensical.



59 minutes ago, TetraSky said:

Yet DDR6 is already on the horizon to replace it...

Just because DDR5 is released doesn't mean they take a break before looking at DDR6. It is continuous development. They'll be looking at improving DDR5. They'll be looking at what will go into DDR6. They'll be thinking about what DDR7 might look like. It'll be early development. Don't expect any product for quite some time.



10 hours ago, porina said:

If we look at consumer excluding HEDT, then DDR3 lifespan was Nehalem 2009, moving to DDR4 with Skylake in 2015. Now in 2021 we have DDR5. So that is roughly 6 years between DDR generations in mainstream. Is DDR6 going to be much shorter than that cycle?

 

[...]

I mean, I kind of thought the same thing about PCIe 5.0. PCIe 2.0 was adopted first on Intel back in 2007, then PCIe 3.0 in 2013 on AMD, then in 2019 we got PCIe 4.0 on AMD again, and now just 2 years later we already have PCIe 5.0 on Intel. Though DDR6 will probably be more useful when launched than PCIe 5.0 is today.


Would it kill them to keep the metric the same and not jump between MT/s and Gbps? I'm still of the opinion that neither really tells you the true 'speed' of the memory, and I would rather just use effective clock (I know this is about as useful as judging a CPU by its clock x cores, which is not a true representation). MT/s is a little closer, as there you take the 'IPC' of our imaginary CPU into consideration, but you are still far from 'real world' performance.
The move to MT/s for RAM just feels like a bunch of people hitting the first peak of the Dunning-Kruger 'graph'.
To get a comparable number for performance you need to take so many other things into consideration. If they wanted to be correct they would use something closer to the baud rate with the modulation, or just stick to effective clock like most of the industry does.
I'm happy with MHz, as it's all relative within each generation without getting too technical.


4 hours ago, ouroesa said:

Would it kill them to keep the metric the same and not jump between MT/s and Gbps. [...]

The values in MT/s are for CPU memory - DDR6. The values in Gbps are for video memory - GDDR6+/GDDR7. They are presented differently because the two are integrated into systems in different ways, and different specifications matter for each scenario.

 

CPU memory comes in DIMMs with a fixed-width memory bus - 64 bits per channel for DDR4, or 2x32 bits per channel for DDR5. As such, the frequency of the memory module is what dictates the performance of the memory system. Peak bandwidth is also a pretty much useless statistic when you're talking about CPU memory, as it's very rare that you're actually able to saturate it with a CPU workload (at least in the consumer/mobile markets), which is evidenced by just how little memory bandwidth matters in our systems. Even Zen 3 only sees gains of ~7% when going from DDR4-2133 to DDR4-4000, despite the raw bandwidth nearly doubling.

The gains we do see come from improvements in round-trip latency - how quickly a small, isolated query to memory can be answered - which is set by the frequency and CAS latency, rather than by the increased peak bandwidth. This is also why you can observe performance increases from tightening your memory timings, despite them having (basically) zero input when calculating peak memory bandwidth (data rate x bus width). It also explains why most workloads see basically no difference in performance between DDR4 and DDR5 on Alder Lake: the DDR5 kits you can buy today run at such low frequencies (for DDR5) that the round-trip latency is basically the same as you get with decent DDR4. The few workloads that do love memory bandwidth (such as compression) unsurprisingly love DDR5 as well.
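To put rough numbers on that bandwidth-vs-latency point, here's a minimal sketch. The CL values are typical retail timings assumed for illustration; real round-trip latency also includes tRCD, tRP and controller overhead.

```python
# Peak bandwidth scales with the data rate; first-word latency is
# CL cycles at the real clock, which is half the MT/s figure (DDR).
def peak_gb_s(mt_per_s: float, bus_width_bits: int = 64) -> float:
    return mt_per_s * 1e6 * (bus_width_bits / 8) / 1e9

def cas_latency_ns(mt_per_s: float, cl: int) -> float:
    real_clock_mhz = mt_per_s / 2
    return cl / real_clock_mhz * 1000

for mt_s, cl in [(2133, 15), (4000, 18)]:   # assumed typical timings
    print(f"DDR4-{mt_s} CL{cl}: {peak_gb_s(mt_s):.1f} GB/s, "
          f"{cas_latency_ns(mt_s, cl):.1f} ns")
```

Peak bandwidth nearly doubles while the absolute CAS latency improves by a much smaller factor, which lines up with the small real-world gains.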

 

VRAM, by contrast, can be integrated in a wide variety of ways with a large range of memory bus widths, but the frequency of the individual chips is almost always the same (or at least very similar) within a generation. Therefore the size of the memory bus - the number of memory chips used - is what dictates the performance of the memory system. As such, the per-pin data rate in Gbps is the only spec available to compare with - which also happens to be an actually meaningful metric in GPU workloads. GPUs are so parallelized that they genuinely use the hundreds of GB/s of memory bandwidth available to them - cards like the DDR4 variant of the GT 1030 are a great example of this, where the massively reduced memory throughput versus the GDDR5 version resulted in piss-poor performance in comparison.
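The Gbps figure converts to total bandwidth once you know the bus width. A quick sketch; the configurations below are hypothetical examples, not announced products:

```python
# Total GPU memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8.
def gpu_bandwidth_gb_s(gbps_per_pin: float, bus_width_bits: int) -> float:
    return gbps_per_pin * bus_width_bits / 8

print(gpu_bandwidth_gb_s(14, 192))   # 14 Gbps GDDR6 on a 192-bit bus: 336.0 GB/s
print(gpu_bandwidth_gb_s(32, 256))   # 32 Gbps GDDR7 on a 256-bit bus: 1024.0 GB/s
```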

 

TL;DR: CPU memory is generally latency limited, while GPU memory is generally bandwidth limited. Each is labelled with the statistic that is meaningful for its corresponding bottleneck.

CPU: i7 4790k, RAM: 16GB DDR3, GPU: GTX 1060 6GB


Will it ever happen that we get, for example, 4GB of HBM memory on the CPU package? (Together with normal memory, not instead of it.)

“Remember to look up at the stars and not down at your feet. Try to make sense of what you see and wonder about what makes the universe exist. Be curious. And however difficult life may seem, there is always something you can do and succeed at. 
It matters that you don't just give up.”

-Stephen Hawking


On 11/20/2021 at 7:57 PM, TetraSky said:

DDR5, barely out in the world, still fresh and new
Yet DDR6 is already on the horizon to replace it...

https://custompc.raspberrypi.com/articles/ddr6-is-already-on-the-road-map

 

JEDEC is expecting to have the DDR6 standard ready in 2025.

 

Quote

As we roll into DDR5 this year at 4600MHz, JEDEC expects this tech to scale up to 8000MHz by 2024, and that’s the ‘official’ plug-and-play spec, not just overclocked modules. This is followed by DDR6 launching in 2025 at 9600MHz and scaling beyond 16GHz by 2028.

 

Ryzen 7 5800X     Corsair H115i Platinum     ASUS ROG Crosshair VIII Hero (Wi-Fi)     G.Skill Trident Z 3600CL16 (@3800MHzCL16 and other tweaked timings)     

MSI RTX 3080 Gaming X Trio    Corsair HX850     WD Black SN850 1TB     Samsung 970 EVO Plus 1TB     Samsung 840 EVO 500GB     Acer XB271HU 27" 1440p 165hz G-Sync     ASUS ProArt PA278QV     LG C8 55"     Phanteks Enthoo Evolv X Glass     Logitech G915      Logitech MX Vertical      Steelseries Arctis 7 Wireless 2019      Windows 10 Pro x64


On 11/20/2021 at 11:22 PM, cj09beira said:

hbm 3 finally, it took a while.

Yup, can't wait! 👀

 

 

On 11/20/2021 at 11:20 AM, porina said:

If we look at consumer excluding HEDT, then DDR3 lifespan was Nehalem 2009, moving to DDR4 with Skylake in 2015. Now in 2021 we have DDR5. So that is roughly 6 years between DDR generations in mainstream. Is DDR6 going to be much shorter than that cycle?

No, I agree. We can roughly expect a 6-year cycle again; that's also what I personally call a generation, roughly every 6 years currently. (Yes, people will perhaps buy several CPUs/GPUs/etc. during that time-frame, but they shouldn't have to, and any gains are usually minimal if they chose wisely the first time.)

 

 

 

The direction tells you... the direction

-Scott Manley, 2021

 

Softwares used:

Corsair Link (Anime Edition) 

MSI Afterburner 

OpenRGB

Lively Wallpaper 

OBS Studio

Shutter Encoder

Avidemux

FSResizer

Audacity 

VLC

WMP

GIMP

HWiNFO64

Paint

3D Paint

GitHub Desktop 

Superposition 

Prime95

Aida64

GPUZ

CPUZ

Generic Logviewer

 

 

 


Reading that DDR5 uses CL40... oof. Really high latency, but high default speeds. Guess it's better than nothing, but still.

Useful threads: PSU Tier List | Motherboard Tier List | Graphics Card Cooling Tier List ❤️

Baby: MPG X570 GAMING PLUS | AMD Ryzen 9 5900x /w PBO | Corsair H150i Pro RGB | ASRock RX 7900 XTX Phantom Gaming OC (3020Mhz & 2650Memory) | Corsair Vengeance RGB PRO 32GB DDR4 (4x8GB) 3600 MHz | Corsair RM1000x |  WD_BLACK SN850 | WD_BLACK SN750 | Samsung EVO 850 | Kingston A400 |  PNY CS900 | Lian Li O11 Dynamic White | Display(s): Samsung Oddesy G7, ASUS TUF GAMING VG27AQZ 27" & MSI G274F

 

I also drive a volvo as one does being norwegian haha, a volvo v70 d3 from 2016.

Reliability was a key thing and its my second car, working pretty well for its 6 years age xD


1 hour ago, MultiGamerClub said:

Reading that DDR5 uses CL40.. oof.. Really high latency but default high speeds.. Guess its better than nothing but still.

Latency is always pretty bad on the early kits. It will get better.

It's also worth noting that CL is measured in clock cycles, so the higher the frequency, the less each cycle of CL costs in absolute time.

5600 MT/s at CL36 will probably be fairly standard in a year or so, and at that point the absolute latency is the same as DDR4-2800 at CL18.
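A quick check of that equivalence, using simple cycles-to-nanoseconds math (ignoring secondary timings):

```python
# Absolute CAS latency in ns: CL cycles divided by the real clock,
# which is half the MT/s figure for DDR memory.
def cl_to_ns(mt_per_s: float, cl: int) -> float:
    return cl / (mt_per_s / 2) * 1000

print(round(cl_to_ns(5600, 36), 2))  # 12.86
print(round(cl_to_ns(2800, 18), 2))  # 12.86 -- identical absolute latency
```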


18 hours ago, tim0901 said:

The values in MT/s are for CPU memory - DDR6. The values in Gbps are video memory - GDDR6+/GDDR7. [...]

TL;DR: CPU memory is generally latency limited, while GPU memory is generally bandwidth limited.

Thanks for the in-depth and lengthy yet basic explanation - I am aware of this.
Your examples can be used against your own reasoning when replacing MHz with MT/s: 'Even Zen 3 only sees gains of ~7% when going from 2133 MT/s to 4000 MT/s DDR4, despite the raw bandwidth nearly doubling.' Effective bandwidth takes memory timings (longer delays lower the bytes written and read over time), bus width (throughput roughly scales with bus width if the modules can keep up), MHz (MT/s), and data rate into account, and you need to look at all of these together. Isolating transfers/s is a bad idea. We don't care about CPU performance; we are talking about memory performance.
Both MT/s and Gbps give only a rough, general idea of overall performance, and performance is all we should really care about. So since we are using loose terms already, we might as well standardise things and go with 'effective bandwidth': https://www.sciencedirect.com/topics/computer-science/effective-bandwidth

In short - an N-bit wide data bus can transfer N bits in one clock cycle, after a latency of d clock cycles (or t seconds).
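That single-burst model can be sketched in a few lines (the numbers below are made up purely for illustration):

```python
# Effective throughput of one burst: N*L bits delivered over d + L cycles,
# where N is the bus width, L the burst length, d the latency in cycles.
def effective_bits_per_cycle(n_bits: int, burst_len: int, latency_cycles: int) -> float:
    return n_bits * burst_len / (latency_cycles + burst_len)

print(effective_bits_per_cycle(64, 8, 0))    # no latency: 64.0 bits/cycle
print(effective_bits_per_cycle(64, 8, 18))   # an 18-cycle delay: ~19.7 bits/cycle
```

This is why timings matter to the 'effective' figure even though they never appear in the peak number.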

I'm not being difficult on purpose; I just don't see the point of using two different, incorrect terms (in order to seem smart, I guess?) when we could use a single, incorrect term.


40 minutes ago, ouroesa said:

We don't care about CPU performance, we are talking about memory performance. 

Except the only reason we as consumers care about memory performance is that it impacts CPU/GPU performance. The only way we can see the impact of improved memory performance is that it improves our benchmarks. Nobody cares how much faster DDR5 is over DDR4 on paper if it gives them no quantifiable performance benefit in the real world. Absolute memory performance is - to most people - meaningless.

 

The MT/s value that's slapped on the box of some DDR4 is not intended as a measurement of absolute memory performance, just like how GHz is not a measurement of absolute CPU performance. It is merely an independent variable that the user can control - it's the value in your BIOS that you change when overclocking - that has an impact on performance. You don't advertise using effective bandwidth because that would be confusing as hell to actually use in the real world. It would lead to an endless stream of questions like "what speed should I set my RAM to in order to make it run at its advertised performance rating?" or, if the motherboard scaled with effective bandwidth, "what frequency and CAS latency are equivalent to the 24GB/s effective bandwidth preset?"

 

If you really care about the objective performance of the memory system - go look at the data sheet where you'll find an actually accurate answer. The numbers used for advertising purposes are for consumers who only care about the impact of a memory system on their PC, rather than its isolated performance.



On 11/20/2021 at 5:20 AM, porina said:

If we look at consumer excluding HEDT, then DDR3 lifespan was Nehalem 2009, moving to DDR4 with Skylake in 2015. Now in 2021 we have DDR5. So that is roughly 6 years between DDR generations in mainstream. Is DDR6 going to be much shorter than that cycle?

 

Standards based DDR6 running at 12800 sounds reasonable if we assume 2x with each generation.

Would be strange for sure, given that the chip shortage means most people probably won't get their hands on DDR5 for a while. And given that DDR6 most likely won't be backwards compatible or anything, I think I'd hold off on upgrading my mobo for a while.

 

Can a CPU even use 12,800 MT/s? I know that chips want faster memory now, but that sounds very unnecessary for normal work. But on the other hand, 4000 MT/s was considered useless just a few years ago and it's becoming more widespread now.

 


1 hour ago, tim0901 said:

Except the only reason we as consumers care about memory performance is that it impacts CPU/GPU performance. [...]

The numbers used for advertising purposes are for consumers who only care about the impact of a memory system on their PC, rather than its isolated performance.

 

We do not care about CPU performance when talking about memory performance; if we did, they would list some form of metric for each CPU on the memory spec sheet. Yes, sometimes certain CPUs may take advantage of faster memory, but that is all circumstantial. We care about memory performance, and then the end user will do a cost/performance comparison for their use case and choose the best memory for their specific application. These are separate things.

"The numbers used for advertising purposes are for consumers who only care about the impact of a memory system on their PC, rather than its isolated performance." - nope, MT/s has NOTHING to do with the impact of memory in their PC, this is an isolated number. If I have 2133MT/s memory and replace it with 3200MT/s memory, my CPU performance WILL NOT increase by 50% The actual impact on your PC will have more to do with your setup and use case. 

 

Think about it this way: when shopping for car tyres, you don't go looking for tyres that were only tested on your specific car (maybe you are lucky and find a review done with your car, but generally not). You look at the relative performance between different tyres. Going from tiny, skinny wheels to super-sticky drag slicks will do almost nothing if you have a very low-power car that can't take advantage of the added grip. So what you do is compare tyres relative to each other and then select the best tyre for your use case; same with memory.

 

To avoid confusion, effective bandwidth should be calculated and shown at the rated speed and timings of the memory. When you buy memory, they usually state the 'effective clock' and timings, and that is what MT/s is: the effective clock. But it ignores timings (and rank), and this is where my issue comes in.


7 hours ago, Shreyas1 said:

Can a CPU even use 12800 mhz? I know that chips want faster memory now, but that sounds very unneccesary for normal work. But on the other hand 4000 mhz was considered useless just a few years ago and its becoming more widespread now.

It's the good old "depends on your use case", also keeping in mind that beyond a certain point there are diminishing returns. At one extreme we have workloads like Cinebench that are essentially unaffected by ram speed. Gaming, depending on the title, might see modest gains to a point. My other interest is finding prime numbers. The software is essentially the same as Prime95, which is more often misunderstood or dismissed as unrealistic. It is a very real workload. While it has a reputation for being punishing on CPU, at bigger FFT sizes it is also punishing on the cache and ram subsystems too. For practical purposes, high core count consumer tier CPUs are ram bandwidth limited in that scenario. As we get more cores, we need more bandwidth to feed it. With Ryzen, AMD have really thrown that balance out of the window and we have far more core potential than ram bandwidth to feed it. However they've chosen a different path to mitigate it, by providing bigger caches. So it will depend on the workload and the overall platform balance of resources.



7 hours ago, ouroesa said:

We do not care about CPU performance when talking about memory performance, if we did, they would list some form of metric for each CPU on the memory spec sheet.

The main reason you cannot properly use bandwidth as a rating on a memory kit (and why it's not used as the primary figure) is that the memory controller is on the CPU. Memory modules are designed to go into systems with different CPUs, with different architectures and different memory controllers, so even if you set every timing and sub-timing exactly the same, the bandwidth reported by a memory bandwidth measurement tool will differ.

 

7 hours ago, ouroesa said:

To avoid confusion, effective bandwidth should be calculated and shown at the rated speed and timing of the memory.

It actually is; the reason you don't think it's a thing is that it's known to be flawed and useless.

 

Quote

Crucial 16GB Single DDR4 2400 MT/s (PC4-19200) 

PC4-19200 means 19.2GB/s bandwidth (per channel)

 

You're asking for something that actually already exists and is widely discarded as useless. You'll see it shown on cheap low-end stuff and omitted on higher-end products. Since it's just the MT/s value multiplied by 8, it really makes no difference at all: double the MT/s is double the GB/s.
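That naming math in one line, for anyone curious (the x8 is the 8-byte width of a 64-bit channel):

```python
# PC4 rating = MT/s x 8 bytes per transfer = peak MB/s per 64-bit channel.
def pc4_rating(mt_per_s: int) -> str:
    return f"PC4-{mt_per_s * 8}"

print(pc4_rating(2400))  # PC4-19200, i.e. 19.2 GB/s per channel
print(pc4_rating(3200))  # PC4-25600
```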

 

Quote

Because there are 8 bits in a byte, we multiply the data rate by 8, and get the speed of the memory: 1600x8=12800, 2400x8=19200, where these results are your transfer rates in Megabytes per second (MB/s). 

https://www.crucial.com/support/articles-faq-memory/differences-in-memory-speed-and-data-rate

