It's a bird, it's a plane, it's...UltraRAM - New experimental NAND that can do literally everything

My BS detector is pegged. Until this is licensed out and produced by the likes of Micron, Samsung, etc., it's just vaporware.

 

What are the capacities they're claiming again?

Neat. I like to see new memory tech. The NAND we use for SSDs shows its limits in terms of cell layering when expanding to get more capacity: endurance suffers, and so does speed, so it relies on complex measures like better controllers, limited caching, and limited write/replace functions.

I'm still curious what happened with Samsung's Z-NAND, which was in every way a better SSD but only saw enterprise models, nothing consumer.

We're already at a great place with SSDs in general, definitely as far as sequential speed goes. On the IOPS side it doesn't seem to scale as much.

But just imagine one day having a single memory technology that can be both RAM and storage.

| Ryzen 7 7800X3D | AM5 B650 Aorus Elite AX | G.Skill Trident Z5 Neo RGB DDR5 32GB 6000MHz C30 | Sapphire PULSE Radeon RX 7900 XTX | Samsung 990 PRO 1TB with heatsink | Arctic Liquid Freezer II 360 | Seasonic Focus GX-850 | Lian Li Lanccool III | Mousepad: Skypad 3.0 XL / Zowie GTF-X | Mouse: Zowie S1-C | Keyboard: Ducky One 3 TKL (Cherry MX-Speed-Silver)Beyerdynamic MMX 300 (2nd Gen) | Acer XV272U | OS: Windows 11 |

8 hours ago, LAwLz said:

Are you sure that was because of write cycles and not just the drive failing for other reasons? Because from what I've gathered, things like the controller in an SSD failing seems far more common than the actual NAND cells not being able to hold charge anymore. 

Yeah, 100% NAND in my case; I had a program I needed that would occasionally thrash the SSD by using up all available space and still attempting to write more. Pair that with cheaper, lower-quality drives and it's a recipe for it. Either way, I suspect (but can't confirm) that the stress of having to perform things like wear levelling is also what causes the controllers to fail.

 

8 hours ago, LAwLz said:

Anyway, sounds promising but I won't hold my breath for this. This might be a repeat of Optane. 

I doubt it will be like Optane... I personally don't think this will even hit the commercial market (except maybe as a few prototypes, or maybe a few thousand products). In general this just feels like "future tech", where they're trying to drum up some capital investment while doing research, and maybe generate a few patents they could sell later on.

 

2 hours ago, porina said:

Also I'm still cautious about its use as a general ram replacement, especially if used without wear levelling as seems to be proposed. Claimed cycle life may be high compared to flash but it isn't practically unlimited like ram.

Actually, if this does come to fruition (at the price point of DRAM), then it could be a game changer, even if it were limited to 1 million cycles. That's a big if, though. One practical application is keeping the actual runtime code in this kind of RAM, as it would be "faster" to access while not needing power to retain it, and the code itself would rarely change. Then, when the machine is put to "sleep", quickly transfer the RAM contents over (RAM-to-RAM should take less than a second in most cases). When it's booted back up, copy back to RAM: pretty much instant boot, but no power draw while in sleep.
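The "less than a second" claim for a RAM-to-RAM transfer checks out on a napkin. A quick sketch, assuming a hypothetical 32 GB of system RAM and dual-channel DDR5-6000 bandwidth (both numbers are illustrative, not from the article):

```python
# Back-of-envelope: how long would dumping all of RAM into a
# persistent region take at memory-bus speeds? (illustrative numbers)
ram_bytes = 32 * 10**9           # assumed 32 GB of system RAM
bandwidth = 6000e6 * 8 * 2       # DDR5-6000, 8 B/transfer, 2 channels ~ 96 GB/s

copy_seconds = ram_bytes / bandwidth
print(f"full-RAM copy: ~{copy_seconds:.2f} s")   # ~0.33 s
```

So even a fully loaded consumer machine could snapshot to a persistent tier in well under a second, assuming the persistent side can absorb writes at bus speed.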

 

Optane was proposed more as a caching solution. It was more expensive per GB than an SSD, but "faster" and more wear-resilient, so it "fit" into the market as a smaller drive that essentially buffered HDD reads/writes. It was also still cheaper per GB than RAM, which is where it was thought it might become a thing (server side)... but I think the advent of just better, cheaper NAND won out, along with the fact that you still needed large amounts of power to perform the P/E cycle compared to DDR. This hits a bit differently if true, though, as the power required seems to be relatively small, it claims better read speeds than RAM (this one I'm skeptical of, as I think they are just measuring the gate speed and not the system as a whole), and it still remains persistent. These things, if true, would, I think, enable it to properly compete in the RAM market.

 

You might even find limited use in something like graphics cards (if they could get it to billions of P/E cycles)... and if it hits the price point of Optane... as then you could have enough memory to store high-quality textures all at once while having super-fast latency.

3735928559 - Beware of the dead beef

1 hour ago, wanderingfool2 said:

Actually, if this does come to fruition (at the price point of DRAM), then it could be a game changer, even if it were limited to 1 million cycles. That's a big if, though. One practical application is keeping the actual runtime code in this kind of RAM, as it would be "faster" to access while not needing power to retain it, and the code itself would rarely change. Then, when the machine is put to "sleep", quickly transfer the RAM contents over (RAM-to-RAM should take less than a second in most cases). When it's booted back up, copy back to RAM: pretty much instant boot, but no power draw while in sleep.

 

Optane was proposed more as a caching solution. It was more expensive per GB than an SSD, but "faster" and more wear-resilient, so it "fit" into the market as a smaller drive that essentially buffered HDD reads/writes. It was also still cheaper per GB than RAM, which is where it was thought it might become a thing (server side)... but I think the advent of just better, cheaper NAND won out, along with the fact that you still needed large amounts of power to perform the P/E cycle compared to DDR.

Optane was made primarily as the next generation of NVDIMM/NVM rather than as an alternative to NAND. It was suitable enough in storage density to be applicable to storage-device use cases; Optane's first product range also happened to be SSDs/storage devices rather than Optane DIMMs, as those were easier to bring to market. Optane DIMM required CPU and chipset support as well as a huge amount of firmware/microcode development and lots of qualification testing of the different modes Optane DIMM could operate in.

 

The main issue was really that not many actually NEED in-memory databases or other such datasets, so it didn't get a lot of traction. Big developments in NVMe also meant that, for the most part, traditional NVMe storage devices were suitable even for extreme-performance applications.

 

It's also why Sapphire Rapids Xeon with HBM won't be widely used either, but it's an evolutionary improvement over Optane DIMM, supporting the same/similar operating modes but with much greater bandwidth and easier system integration, since it's just on the CPU package and goes in the socket. No complicated DIMM slot population matrix, no higher productization costs due to it being on a DIMM, etc.

 

I think Intel read the market and situation a little wrong in regards to Optane and should have forgone Optane DIMM entirely and focused on storage devices only, then gotten on-package memory into Xeons a generation earlier (maybe two, but I doubt that).

 

A lot of NVDIMM solutions are actually just battery/super-cap backed DIMMs 

https://docs.netapp.com/us-en/ontap-systems/a800/nvdimm-battery-replace.html#step-3-replace-the-nvdimm-battery

4 hours ago, rcmaehl said:

Probably going to be low storage capacity like Optane for more critical in-memory applications?


CPU Registers <-> L4 Cache <-> L3 Cache <-> L2 Cache <-> L1 Cache <-> "UltraRam" <-> DRAM <-> Non-Volatile Storage <-> Optane <-> Regular Storage

 

Maybe?

 

 

FYI you have the cache levels backwards it is: CPU Registers <-> L1 Cache <-> L2 Cache <-> L3 Cache <-> DRAM

 

 

On another note, I see some people suggesting using this for some type of cache. I do not think it is appreciated how high the write volume of CPU cache is on a modern CPU. Every read from RAM is a write to one or more levels of cache, depending on design. A 32 MB L3 cache with a 10^7 write-cycle lifetime could only last for about 4400 seconds caching data read from DDR5-6000 at 100% bus usage with perfect wear levelling. You would need to get to 10^13 before it would last for years at 100% utilization.
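That estimate is easy to reproduce. A sketch with assumed dual-channel DDR5-6000 at ~96 GB/s of sustained writes and perfect wear levelling (the exact figure shifts with channel count, but it lands in the same ballpark of a few thousand seconds):

```python
# Lifetime of a hypothetical 32 MiB L3 cache whose cells endure 1e7
# write cycles, fed at full DDR5-6000 dual-channel bandwidth with
# perfect wear levelling (illustrative assumptions).
cache_bytes = 32 * 2**20         # 32 MiB of cache
cycles = 10**7                   # assumed endurance per cell
bandwidth = 6000e6 * 8 * 2       # ~96 GB/s of sustained writes

lifetime_s = cache_bytes * cycles / bandwidth
print(f"~{lifetime_s:.0f} s (~{lifetime_s/3600:.1f} h)")       # ~3495 s

# Same cache with 1e13 cycles instead of 1e7:
print(f"~{lifetime_s * 10**6 / (3600*24*365):.0f} years")      # ~111 years
```

Under an hour at 10^7 cycles, over a century at 10^13: the post's conclusion holds either way.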

 

I can see this sitting alongside RAM, for storing specific data for fast access, like OS files for example.

 

Is there any comparison data to SRAM?

On 9/26/2023 at 8:18 AM, Crunchy Dragon said:

Give it a decade, we might start seeing DIMM kits of 2x2GB featuring this stuff for just $200!

For what it's worth, having metadata stored on that COULD do some awesome things. This would require the filesystem and the drive(s) in place to be somewhat intelligent though. 

3900x | 32GB RAM | RTX 2080

1.5TB Optane P4800X | 2TB Micron 1100 SSD | 16TB NAS w/ 10Gbe
QN90A | Polk R200, ELAC OW4.2, PB12-NSD, SB1000, HD800
 

On 9/26/2023 at 5:18 PM, Crunchy Dragon said:

Give it a decade, we might start seeing DIMM kits of 2x2GB featuring this stuff for just $200!

Not to mention with 1 ns write latency and a durability of 10 million cycles, you can theoretically wear out your $200 SuperRAM kit in just 10 ms (yes, milliseconds).

Fastest RAM ever. 😉

4 hours ago, HenrySalayne said:

Not to mention with 1 ns write latency and a durability of 10 million cycles, you can theoretically wear out your $200 SuperRAM kit in just 10 ms (yes, milliseconds).

Fastest RAM ever. 😉

That is per cell, though, and since it's non-volatile you aren't constantly refreshing the cells, so it's no worse than NAND for write durability. You'd actually have to write that amount of data.

 

10 million cycles is a hell of a lot more than the best SLC NAND at 100,000, and most NAND in use today is around 3,000 (TLC).

 

It would be "safe" enough as system memory so long as you have enough capacity, but it would have an actual wear life that matters, unlike basically every higher-end SSD on the market, which realistically will never wear out from writes.
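To put numbers on "a wear life that matters": with ideal wear levelling, a hypothetical 32 GB kit at 10^7 cycles per cell still takes an enormous write volume to exhaust (all values assumed for illustration):

```python
# Total write volume to wear out a hypothetical 32 GB kit at 1e7
# cycles/cell, assuming ideal wear levelling across the whole kit.
kit_bytes = 32 * 10**9
cycles = 10**7

total_writes = kit_bytes * cycles    # 3.2e17 B = 320 PB
bandwidth = 96e9                     # ~96 GB/s, dual-channel DDR5-6000 flat out
days = total_writes / bandwidth / 86400
print(f"{total_writes/1e15:.0f} PB, ~{days:.0f} days at full bus speed")  # 320 PB, ~39 days
```

So nothing like the per-cell 10 ms figure, but a sustained memory-intensive workload really could chew through it in weeks, which is exactly why the wear life "matters" here in a way it doesn't for SSDs.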

There is one other way this could be used assuming the following:

  • Finite write life, even if much higher than NAND
  • Very fast access times

I'm thinking: direct storage execution. Right now we have stuff on storage that needs to be put into RAM to be used. This sounds like a good medium for that kind of usage, as writes will be relatively much lower than reads, plus you get the speed benefits of the medium as well as not having to move it into RAM. I think some kind of conventional RAM would still be needed for dynamic runtime data.

 

On the wear thing, you'd only wear it on a state change. If the data is sufficiently random, that's 50% of bits on average. So possibly a 2x bonus to life.
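That 50% figure is easy to sanity-check: overwrite one random word with another and, on average, about half the bits actually change state. A quick simulation (seeded so the run is repeatable):

```python
import random

random.seed(42)  # repeatable run

# Fraction of bits that actually flip when overwriting one random
# 64-bit word with another; should hover around 0.5.
flips = total = 0
for _ in range(10_000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    flips += bin(a ^ b).count("1")   # popcount of the changed bits
    total += 64

print(f"flip fraction: {flips / total:.3f}")   # ≈ 0.500
```

Real workloads are far less random than this, of course, so the actual bonus could be bigger or smaller depending on the data.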

 

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible

5 minutes ago, porina said:

There is one other way this could be used assuming the following:

  • Finite write life, even if much higher than NAND
  • Very fast access times

I'm thinking: direct storage execution. Right now we have stuff on storage that needs to be put into RAM to be used. This sounds like a good medium for that kind of usage, as writes will be relatively much lower than reads, plus you get the speed benefits of the medium as well as not having to move it into RAM. I think some kind of conventional RAM would still be needed for dynamic runtime data.

 

On the wear thing, you'd only wear it on a state change. If the data is sufficiently random, that's 50% of bits on average. So possibly a 2x bonus to life.

 

If it actually has 10M+ write cycles, then that throws existing SLC/MLC/TLC/QLC/etc. under the bus. But I feel something is not being said here, because going from SLC to MLC drops you from 100,000 to 10,000 cycles, TLC to 1,000, and QLC to 100, or something like that. And as dies shrink, NAND gets even less durable.

 

Quote

The memory retains data even after power is removed, and the company claims it has at least 4,000X more endurance than NAND and can store data for 1,000+ years. It is also designed to have 1/10th the latency of DRAM and be more energy efficient (by a factor of 100X) than DRAM fabricated on a similar node, drawing the interest of industry heavyweights like Meta. 

 

Quote

UltraRAM is a charge-based memory that uses a floating gate, like flash NAND. Also like flash, the charge state of the floating gate is read non-destructively by measuring the conductance of an underlying ‘channel.’ However, unlike flash, UltraRAM doesn't wear during program and erase cycles because of its TBRT structure. 

So they figured out a way to make it not wear? But how would it last for 1000 years if it's charge based?

 

4 hours ago, leadeater said:

10 million cycles is a hell of a lot more than the best SLC NAND at 100,000, and most NAND in use today is around 3,000 (TLC).

I wonder, though: if you picked a good-quality SLC NAND that was lab-manufactured (one that is a known-good golden sample for testing), you should be able to hit 10 million cycles with a single bit of SLC NAND. There have been some in the past that were rated for 1,000,000 cycles, so testing to destruction you might get SLC NAND samples at that amount. I could be wrong on this, but at a glance it also appears as though they might have only done one test to get that 10 million number (and only on a 2x2 matrix). It was also tested on a process node that won't be used in the final product, which should shift the results as well.

 

They also only tested that sample for breakage, not wear. I'm not sure how this reads out a 1/0, but if it's by passing a voltage through it and measuring the current, then this technology might have a problem. While I don't doubt that it might last longer than NAND, to me it seems more like they are comparing apples to oranges.

 

 

37 minutes ago, porina said:

I'm thinking: direct storage execution. Right now we have stuff on storage that needs to be put into RAM to be used. This sounds like a good medium for that kind of usage, as writes will be relatively much lower than reads, plus you get the speed benefits of the medium as well as not having to move it into RAM. I think some kind of conventional RAM would still be needed for dynamic runtime data.

Ultimately, if this actually does ever hit the market and has lower latency than DRAM, where it ends up will be determined by the price point.

 

If it hits a price point of, let's say, DRAM itself, I could see it used in a server type of setting where persistent data is needed and NAND is the bottleneck (some forms of AI training, potentially).

 

If it's priced between NAND and RAM, I could see it starting off as a GPU thing: keep the normal GDDR RAM, but add this as special higher-capacity memory on the board, capable of storing an entire game's textures/models. Then I could see it bleeding over into consumer electronics (maybe a new standard), where it would hold things like the OS and other chunks of RAM that you know aren't going to change a lot (but this requires a change at the OS level).

 

33 minutes ago, Kisai said:

So they figured out a way to make it not wear? But how would it last for 1000 years if it's charge based?

I think their claim is that they think it's not going to wear out, but really they haven't created a sample where they have properly tested it yet.

 

I want to say the 1000+ years is hyperbole, though (just like those CD manufacturers). Although, depending on what they mean... maybe the act of having the device powered and the data read could reaffirm the current charge (I don't know).

3735928559 - Beware of the dead beef

1 hour ago, porina said:

There is one other way this could be used assuming the following:

  • Finite write life, even if much higher than NAND
  • Very fast access times

I'm thinking: direct storage execution. Right now we have stuff on storage that needs to be put into RAM to be used. This sounds like a good medium for that kind of usage, as writes will be relatively much lower than reads, plus you get the speed benefits of the medium as well as not having to move it into RAM. I think some kind of conventional RAM would still be needed for dynamic runtime data.

 

On the wear thing, you'd only wear it on a state change. If the data is sufficiently random, that's 50% of bits on average. So possibly a 2x bonus to life.

 

SSDs currently have small DRAM caches; you could swap that out for this, make it much larger, and forgo the dynamic SLC cache that is done in the main TLC NAND. That might actually be a good use case: a 100GB-400GB SSD cache, which is a lot more than the 0.5GB to 4GB we currently get with DRAM cache.

 

The cache would then also be non-volatile, so you could commit writes at the cache layer, vastly improving the write latency of SSDs while also getting huge IOPS at low queue depths. This would basically make every SSD server-grade-ish, as they would all be power protected.

1 hour ago, Kisai said:

So they figured out a way to make it not wear? But how would it last for 1000 years if it's charge based?

It's a permanent imprint - just like flash, which uses a charge trap, or electret microphones, which use a permanently polarized foil.

With flash the trapped charge slowly diffuses (temperature dependent). This TBRT structure seems to be much more stable.

Awesome-sounding tech. And we will never hear about it ever again after today, as is usual with fancy new tech. Just like the millions of different battery and storage technologies that were shown off over the years.
I still remember that holographic storage cube... the DNA storage... the...

CPU: AMD Ryzen 3700x / GPU: Asus Radeon RX 6750XT OC 12GB / RAM: Corsair Vengeance LPX 2x8GB DDR4-3200
MOBO: MSI B450m Gaming Plus / NVME: Corsair MP510 240GB / Case: TT Core v21 / PSU: Seasonic 750W / OS: Win 10 Pro
