Origami Cactus

SSDs got even bigger? PLC NAND

Recommended Posts

3 hours ago, leadeater said:

Even if the reads are only 200MB/s-250MB/s (which is lower than I expect) that's still faster than an HDD, and the access latency times will be sub-ms to less than 3ms (bar cases where it makes no difference), so it will still be much faster than an HDD, even a good 7200 rpm one.

 

Remember back to the first generation of SSDs that were barely faster than the WD VelociRaptors of the time? The usability and feel of those SSDs were still much better; no seek times = better.

Let's not forget we can cram more of these in a server than we can HDDs AND we can also chuck them in a shopping cart and take them across the parking lot to the new building without data loss lol


CPU: Intel i7 7700K | GPU: ROG Strix GTX 1080Ti | PSU: Seasonic X-1250 (faulty) | Memory: Corsair Vengeance RGB 3200Mhz 16GB | OS Drive: Western Digital Black NVMe 250GB | Game Drive(s): Samsung 970 Evo 500GB, Hitachi 7K3000 3TB 3.5" | Motherboard: Gigabyte Z270x Gaming 7 | Case: Fractal Design Define S (No Window and modded front Panel) | Monitor(s): Dell S2716DG G-Sync 144Hz, Acer R240HY 60Hz (Dead) | Keyboard: G.SKILL RIPJAWS KM780R MX | Mouse: Steelseries Sensei 310 (Striked out parts are sold or dead, awaiting zen2 parts)

35 minutes ago, Lady Fitzgerald said:

they will allow us to have huge amounts of data in far less physical space

At the cost of performance and reliability. 

8 minutes ago, VegetableStu said:

... you know what, I'm actually starting to warm up on the idea of using 5LC SSDs for cold storage. maybe thaw them twice per year just in case

Being a confirmed coward, I would "thaw" (I like that term) more often than that.


Jeannie

 

As long as anyone is oppressed, no one will be safe and free.

One has to be proactive, not reactive, to ensure the safety of one's data so backup your data! And RAID is NOT a backup!

 

Just now, Donut417 said:

At the cost of performance and reliability. 

Why would a larger capacity SSD have reduced performance and reliability unless it was QLC or PLC? Being an enterprise drive, I'm thinking that drive will be SLC or MLC, possibly TLC, but almost definitely not QLC.



Just now, Lady Fitzgerald said:

Why would a larger capacity SSD have reduced performance and reliability unless it was QLC or PLC? Being an enterprise drive, I'm thinking that drive will be SLC or MLC, possibly TLC, but almost definitely not QLC.

That’s what the tread is pertaining to PLC. At this point 3.5 inch SSD’s for larger storage. 

2 hours ago, Zodiark1593 said:

Bruce Lee, Jackie Chan, ???

stan lee?


I live in misery USA. my timezone is central daylight time which is either UTC -5 or -4 because the government hates everyone.

into trains? here's the model railroad thread!

7 minutes ago, Donut417 said:

That’s what the tread is pertaining to PLC. At this point 3.5 inch SSD’s for larger storage. 

First of all, the term is "thread", not "tread". Second, my query didn't need to be limited to just PLC drives; threads often go off topic a bit (even a lot at times). Build a bridge and get over it.

 

I was doing some research on the ExaDrive100 and it uses MLC NAND (researching this was not difficult; you should try it sometime). The manufacturer claims speed and durability equal to or better than current, smaller enterprise SSDs, but its biggest claims to fame are reduced power consumption over even current enterprise SSDs and the ability to use existing infrastructure without the need for adapters; just yank an HDD and pop this puppy in.



Posted · Original Poster
42 minutes ago, VegetableStu said:

... you know what, I'm actually starting to warm up on the idea of using 5LC SSDs for cold storage. maybe thaw them twice per year just in case

QLC is already not suitable for cold storage because the differences between voltage levels are very small; PLC would need to be refreshed at least once a month with current error correction technology.
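The voltage-margin point can be sketched with some back-of-the-envelope arithmetic. The unit-sized voltage window and even spacing below are simplifying assumptions for illustration, not real device parameters:

```python
def states(bits_per_cell: int) -> int:
    # n bits per cell require 2^n distinguishable charge states
    return 2 ** bits_per_cell

def relative_margin(bits_per_cell: int, window: float = 1.0) -> float:
    # Split a fixed voltage window into 2^n states; the gap between
    # adjacent states shrinks roughly as window / (2^n - 1)
    return window / (states(bits_per_cell) - 1)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4), ("PLC", 5)]:
    print(f"{name}: {states(bits):2d} states, margin ~ {relative_margin(bits):.3f}")
```

Going from QLC (16 states) to PLC (32 states) roughly halves the gap between adjacent levels, which is why charge leakage that QLC can tolerate pushes PLC over the edge much sooner.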

25 minutes ago, will4623 said:

stan lee?

Hitmonstan?

 

Just now, Origami Cactus said:

QLC is already not suitable for cold storage because the differences between voltage levels are very small; PLC would need to be refreshed at least once a month with current error correction technology.

I wonder how much power is required to perform the refresh? There could be space for a battery to perform this task periodically.


The pursuit of knowledge for the sake of knowledge.

Forever in search of my reason to exist.

13 minutes ago, Origami Cactus said:

QLC is already not suitable for cold storage because the differences between voltage levels are very small; PLC would need to be refreshed at least once a month with current error correction technology.

aaaaaaand into the trash that idea goes ,_,

 

(how about 3LC?)

 

14 minutes ago, gabrielcarvfer said:

*20 years in the future: 100bit NAND cells replaces both HDDs and tape in data centers*

at that rate you might as well stand an SSD on its shorter edge and call it a day ._.

Posted · Original Poster
7 minutes ago, VegetableStu said:

aaaaaaand into the trash that idea goes ,_,

 

(how about 3LC?)

 

at that rate you might as well stand an SSD on its shorter edge and call it a day ._.

3LC (TLC) only has 8 voltage levels per cell, still much more than MLC and SLC, but it should last much longer than PLC (32 voltage levels). You have to keep in mind that TLC is a pretty mature technology by this point, so it's much more refined than first-gen QLC.


So, with these PLC drives: Would they not use a smaller SLC or MLC chip (say, 32 or 64 GB) to get those better write speeds, and after the write operation, offload that to the larger umpteen-TB PLC storage? I mean, I don't know business people, but I'd imagine that someone would do something like this to be able to claim superiority in the market. The SLC or MLC bit could also store the important part of the information on the drive, and maybe help with error correction or something? Maybe it's wishful thinking.


Spoiler

CPU: Intel i7 6850K

GPU: nVidia GTX 1080Ti (ZoTaC AMP! Extreme)

Motherboard: Gigabyte X99-UltraGaming

RAM: 16GB (2x 8GB) 3000Mhz EVGA SuperSC DDR4

Case: RaidMax Delta I

PSU: ThermalTake DPS-G 750W 80+ Gold

Monitor: Samsung 32" UJ590 UHD

Keyboard: Corsair K70

Mouse: Corsair Scimitar

Audio: Logitech Z200 (desktop); Roland RH-300 (headphones)

 

Posted · Original Poster
8 minutes ago, The1Dickens said:

So, with these PLC drives: Would they not use a smaller SLC or MLC chip (say, 32 or 64 GB) to get those better write speeds, and after the write operation, offload that to the larger umpteen-TB PLC storage? I mean, I don't know business people, but I'd imagine that someone would do something like this to be able to claim superiority in the market. The SLC or MLC bit could also store the important part of the information on the drive, and maybe help with error correction or something? Maybe it's wishful thinking.

No, that is not the case.

The current QLC drives run some QLC cells as SLC cells, but the size changes as the drive fills up. If the 512GB 660p is about 80% full, the SLC portion is only 5GB.

But yes, the bigger the drive, the less noticeable the QLC speed is, as the dynamic SLC cache is bigger.

 

But if we actually had some massive QLC or PLC drives, that wouldn't be a problem, as if you have an 8TB drive you could effectively run something like a quarter TB for the SLC part.
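A toy model of that dynamic SLC cache behaviour. The cache fraction and static floor below are made-up illustrative numbers, not Intel's actual firmware policy:

```python
# Toy model of a dynamic SLC write cache: when the drive is empty, a
# chunk of QLC runs in SLC mode; as user data grows, that spare area
# shrinks toward a small static floor. All parameters are illustrative.
def slc_cache_gb(capacity_gb: float, fill_fraction: float,
                 max_cache_fraction: float = 0.15,
                 min_cache_gb: float = 5.0) -> float:
    free_fraction = 1.0 - fill_fraction
    dynamic = capacity_gb * max_cache_fraction * free_fraction
    return max(dynamic, min_cache_gb)

print(slc_cache_gb(512, 0.0))   # empty 512GB drive: large cache
print(slc_cache_gb(512, 0.8))   # 80% full: cache has shrunk a lot
print(slc_cache_gb(8000, 0.8))  # big 8TB drive keeps a sizeable cache
```

The point of the model is just the scaling: at the same fill percentage, an 8TB drive has roughly 16x the spare area of a 512GB one to run in SLC mode, which is why huge QLC/PLC drives would hide the slow native write speed much better.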

2 hours ago, Lady Fitzgerald said:

While dramatically overkill, I make sure my backup drives get powered up and read at least once a month, especially the ones that have static data on them, to avoid the danger of corrupting or losing data due to charge leakage during downtime.

That's never a bad idea. 

That being said, I recently plugged in the 240GB 840EVO (TLC) that holds my old rig's Win7 install.  That drive had been in a drawer for 6 months and it still booted just fine.  Even the video files in the download folder played without any visible or audible artifacts.  And that model is known for slowing down due to charge leakage if data was kept static, even if the drive was powered on. 

Didn't do any checksums though, so I can't verify that there was no bitrot at all.  I'll do some next year when I put it back in storage at the end of the winter (end of folding season).
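For anyone who does want to check for bitrot, a minimal checksum manifest is easy to script. The function names and manifest layout here are just for illustration, not any particular tool:

```python
# Minimal sketch: record SHA-256 checksums before putting a drive in
# storage, then verify against the manifest when you power it back up.
import hashlib
import pathlib

def file_sha256(path: pathlib.Path, chunk: int = 1 << 20) -> str:
    # Hash in 1MB chunks so large video files don't need to fit in RAM
    h = hashlib.sha256()
    with path.open("rb") as f:
        while data := f.read(chunk):
            h.update(data)
    return h.hexdigest()

def make_manifest(root: pathlib.Path) -> dict:
    # Map each file's relative path to its digest
    return {str(p.relative_to(root)): file_sha256(p)
            for p in root.rglob("*") if p.is_file()}

def verify(root: pathlib.Path, manifest: dict) -> list:
    # Return files whose current hash no longer matches the manifest
    return [name for name, digest in manifest.items()
            if file_sha256(root / name) != digest]
```

Run `make_manifest` before the drive goes in the drawer, save the dict somewhere else (another drive), and a `verify` pass next season tells you exactly which files rotted, if any.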

10 minutes ago, Captain Chaos said:

That's never a bad idea. 

That being said, I recently plugged in the 240GB 840EVO (TLC) that holds my old rig's Win7 install.  That drive had been in a drawer for 6 months and it still booted just fine.  Even the video files in the download folder played without any visible or audible artifacts.  And that model is known for slowing down due to charge leakage if data was kept static, even if the drive was powered on. 

Didn't do any checksums though, so I can't verify that there was no bitrot at all.  I'll do some next year when I put it back in storage at the end of the winter (end of folding season).

Well, I did say it was overkill.

 

I don't worry about bit rot and checksums since my data backup program (FreeFileSync) will pick up any differences between files on the data disk and the backup, which would show up in the versioning folder. Frequently updating backups makes checking the versioning folder easy.



49 minutes ago, Origami Cactus said:

No, that is not the case.

The current QLC drives run some QLC cells as SLC cells, but the size changes as the drive fills up. If the 512GB 660p is about 80% full, the SLC portion is only 5GB.

But yes, the bigger the drive, the less noticeable the QLC speed is, as the dynamic SLC cache is bigger.

 

But if we actually had some massive QLC or PLC drives, that wouldn't be a problem, as if you have an 8TB drive you could effectively run something like a quarter TB for the SLC part.

The problem of wear occurs due to the granularity required to read the voltage from the cells. Could cells that are too worn for QLC/PLC use be dynamically repurposed as SLC cells?



50 minutes ago, Lady Fitzgerald said:

Well, I did say it was overkill.

 

I don't worry about bit rot and checksums since my data backup program (FreeFileSync) will pick up any differences between files on the data disk and the backup, which would show up in the versioning folder. Frequently updating backups makes checking the versioning folder easy.

That's why I compared it to tape backup. If your normal use case is to just cycle drives out, then 4LC and 5LC are probably just fine, because they'll stay in the machine until they need to be sent to a secure storage facility.

 

Ideally, you wouldn't put anything but SLC or MLC in performance/high-reliability systems. TLC is this weird middle ground where it's unusable for high performance servers and HEDT, but probably good enough for infrequently written storage arrays, or laptops that use "modern standby".

 

With that said, I held off buying SSDs and didn't recommend them for high-reliability systems due to experiences with them on servers where they die annually. I just can't recommend using them on web servers or anything that has a high level of random writing.

 

If you just throw away the drive after 10 full backups or so, then that's no worse than using tapes.

7 hours ago, huilun02 said:

Maybe by 2023 we will be back to regular old mechanical drives

dohecahedroncellathon... an entire HDD per cell on the chip. ;)

 

PS: hopefully this becomes an option for mixed cells in an SSD. So a consumer grade drive would have one nice speedy controller, some RAM, one or two nice faster chips, and the rest could be slower PLC? That might be a better option?

 


Reliability is far more important than speed. To me, NAND is a dead end for this kind of storage. They should stop wasting resources trying to squeeze more out of it and move on.

I keep hoping Micron will get off their butts and start cranking out 3D XPoint/QuantX memory for SSDs. That is a step in the right direction. They cry about it being too costly to compete, but that's only because they won't mass produce it on the level that brings it down to the cost of NAND.

 

And Intel is brain dead trying to use it for system RAM. That goes back to the same problem as using NAND for drives. When they can get the reliability up to that of static RAM, then use it for system RAM.

1 hour ago, Augustin said:

before i could save up enough to buy these drives they had become obsolete

I had 4 of them in RAID 0, living on the edge lol

21 minutes ago, Vorg said:

Reliability is far more important than speed. To me, NAND is a dead end for this kind of storage. They should stop wasting resources trying to squeeze more out of it and move on.

I keep hoping Micron will get off their butts and start cranking out 3D XPoint/QuantX memory for SSDs. That is a step in the right direction. They cry about it being too costly to compete, but that's only because they won't mass produce it on the level that brings it down to the cost of NAND.

 

And Intel is brain dead trying to use it for system RAM. That goes back to the same problem as using NAND for drives. When they can get the reliability up to that of static RAM, then use it for system RAM.

Intel is not completely wrong; the thing is, the market for Optane-accelerated applications is super restricted, and they didn't deliver the promised "1000x faster than NAND". Micron already knows the market simply doesn't exist yet; that's why they're letting it develop more first. Another group is working on an even denser alternative to XPoint (ReRAM crossbar), but it's focused on low-power applications.

