
SSDs got even bigger? PLC NAND

Origami Cactus

A random question, but where did the SLC SSDs go? I get that we have SLC cache, but I have heard nothing of full-blown drives since 2013.

Read the community standards; it's like a guide on how to not be a moron.

 

Gerdauf's Law: Each and every human being, without exception, is the direct carbon copy of the types of people that he/she bitterly opposes.

Remember, calling facts opinions does not ever make the facts opinions, no matter what nonsense you pull.


1 minute ago, Colonel_Gerdauf said:

A random question, but where did the SLC SSDs go? I get that we have SLC cache, but I have heard nothing of full-blown drives since 2013.

They're not really made anymore. MLC is good enough that there wasn't a significant demand for SLC anymore. It's not till TLC that you start seeing significant degradation and speed problems.

 

On a side note, the one SLC drive I have is an mSATA one that I pulled from a very old high end laptop. It's 32GB.

Make sure to quote or tag me (@JoostinOnline) or I won't see your response!

PSU Tier List  |  The Real Reason Delidding Improves Temperatures  |  "2K" does not mean 2560×1440


52 minutes ago, Colonel_Gerdauf said:

A random question, but where did the SLC SSDs go? I get that we have SLC cache, but I have heard nothing of full-blown drives since 2013.

Server SSDs, write-intensive optimized ones. Bloody expensive $/GB. Nobody needs SLC in a desktop, nobody. MLC is extremely good, and you can get write-intensive optimized MLC SSDs too; again, server SSDs with a high $/GB.

 

This is SLC NAND if you're interested though: https://www.anandtech.com/show/13951/the-samsung-983-zet-znand-ssd-review. Lots of others on the market too.


More bits per cell is fine if your drive is big enough for it not to matter.

 

Bring on the 10TB SSDs!

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown


Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1


I want six 2TB MLC NVMe SSDs stacked in a RAID 100 array configuration inside a 3.5 inch form factor with a PCIe x8 cable connector.



14 hours ago, Donut417 said:

That's what the thread is pertaining to: PLC. At this point, 3.5 inch SSDs for larger storage.

As far as I am aware, except for M.2 SSDs, cramped physical space is not actually an issue with (consumer-level) SSDs.
This is a Samsung 860 Pro SSD:
[Image: Samsung 860 Pro with the case opened]

The chip on the bottom is a 256GB MLC flash chip (the board is dual-sided, with another flash chip on the other side, making this a 512GB SSD). The middle chip is the ARM controller, and the one on top is a 512MB LPDDR4 cache chip. You can see the impressions in the housing where the 1TB and 2TB versions' boards line up.

From what I've gathered, the problem isn't physical room for chips. It's more about how complicated and expensive the design gets when you have 8+ flash chips, which is probably why the form factor is targeted towards enterprise, who will gladly pay more per GB, unlike consumers.


I believe processing a wafer is where the expense is, while what you are processing it into is overall negligible once you have the infrastructure. So the game is to maximize how many SSDs one can make per wafer, and with PLC they are looking at a 66% increase over TLC.
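For the curious, that 66% figure falls straight out of the bits-per-cell counts. A back-of-the-envelope sketch in Python (assuming the same cell count per wafer and ignoring spare area and ECC overhead):

# Capacity per wafer scales with the bits stored per cell.
bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4, "PLC": 5}

gain = bits_per_cell["PLC"] / bits_per_cell["TLC"] - 1
print(f"PLC vs TLC: {gain:.0%} more bits from the same cells")  # ~67%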


10 hours ago, leadeater said:

Are you talking TLC here, or consumer SSDs in general? I've never actually had a server SSD fail. I had one Samsung 840 EVO fail, but I abused that pretty badly in a system that didn't support TRIM.

Lemme find out, because I'm not the one who put it together. Each machine has two drives.

 

Machine with replaced drives:

"Samsung SSD 840 PRO", total LBA written: 240,807,343,067 : 36,932,696,9123 

 

Newer machine, still using the drives it was purchased with:

" Samsung SSD 850 PRO" , total LBA written: 378,769,718,615 : 231,973,652,880

 

Higher-end machine:

"INTEL SSDSC2BA400G4" host writes 32MiB: 630,733

"Samsung SSD 850 PRO" total LBA written: 15,793,315,449

 

Looks like they're all MLC. Technically they're probably rated as consumer drives since they're all also SATA.
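If anyone wants to turn those SMART counters into terabytes: Samsung's "Total LBAs Written" should count 512-byte sectors, and the Intel counter here is in 32MiB units. A quick conversion sketch (the 512-byte LBA size is an assumption; check the drive's datasheet):

LBA_BYTES = 512  # assumed sector size for the Samsung counters

def lba_to_tb(lbas: int) -> float:
    """Convert a Total-LBAs-Written SMART counter to terabytes."""
    return lbas * LBA_BYTES / 1e12

print(f"840 PRO:  {lba_to_tb(240_807_343_067):.1f} TB")   # ~123.3 TB
print(f"850 PRO:  {lba_to_tb(378_769_718_615):.1f} TB")   # ~193.9 TB
print(f"Intel DC: {630_733 * 32 * 2**20 / 1e12:.1f} TB")  # ~21.2 TB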


16 minutes ago, Kisai said:

Looks like they're all MLC. Technically they're probably rated as consumer drives since they're all also SATA.

Most server SSDs are SATA, but they use different controllers, have more physical NAND over-provisioning, and have power-loss protection. Samsung is the one with the closest hardware between their consumer Pro series and their actual server products; I personally only buy Samsung myself.

 

18 minutes ago, Kisai said:

INTEL SSDSC2BA400G4

This is a server rated SSD.


4 minutes ago, GoldenLag said:

TL;DR: if it's cheap enough, get two and RAID them together for speed. To heck with reliability.

Actually, in a way you'd improve reliability (outside of any manufacturing defects). RAID 0 SSDs = better life than a single same-size SSD.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


3 minutes ago, Dabombinable said:

Actually, in a way you'd improve reliability (outside of any manufacturing defects). RAID 0 SSDs = better life than a single same-size SSD.

Well, the issue of bit-flips or data corruption is still there.

 

But the real question with these drives is how cheap they can be. If they reach about HDD pricing + 50%, it's a really good deal over QLC, which is at about HDD + 100% or more.
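To put rough numbers on that (purely hypothetical $/TB figures, just to illustrate the gap):

hdd_per_tb = 20.0              # assumed HDD price in $/TB (hypothetical)

plc_per_tb = hdd_per_tb * 1.5  # "HDD + 50%"  -> $30/TB
qlc_per_tb = hdd_per_tb * 2.0  # "HDD + 100%" -> $40/TB
print(f"PLC: ${plc_per_tb:.0f}/TB vs QLC: ${qlc_per_tb:.0f}/TB")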


The MX500 series is still the king of bang-for-the-buck SSDs. My Win10 is on my 850 EVO 250GB and my games are on my MX500 SSD. No RAMless SSD for me.


40 minutes ago, leadeater said:

Most server SSDs are SATA, but they use different controllers, have more physical NAND over-provisioning, and have power-loss protection. Samsung is the one with the closest hardware between their consumer Pro series and their actual server products; I personally only buy Samsung myself.

 

This is a server rated SSD.

Same. Years ago, when I bought a 2TB SSD and decided to go all out, only Samsung stood out, with its 850 Pro (they didn't make 2TB NVMe drives back then). They make the whole thing, from controller to NAND, and they used one of the best types of MLC NAND at the time, getting by far the best performance and durability results. 850 Pros were going past 1PB of writes and came with a 10-year warranty, meaning I'll never come anywhere close to those numbers. I have about 40TB of writes done in all these years. That's 4% of the write capacity used, assuming 1PB is its life limit XD

 

And I must say it is great. Fast, reliable. It was expensive back then, but I don't regret it one bit.


3 minutes ago, pizapower said:

The MX500 series is still the king of bang-for-the-buck SSDs.

I'm fairly certain the Intel 660p and Sabrent Rocket/MP510 have taken that crown.


1 minute ago, GoldenLag said:

I'm fairly certain the Intel 660p and Sabrent Rocket/MP510 have taken that crown.

The 660p is RAMless.


6 minutes ago, RejZoR said:

That's 4% of the write capacity used, assuming 1PB is its life limit XD

Samsung has 850 Pro SSDs in their labs with 8PB of writes and no errors that would signal they need replacing soon; I think that was also only a 256GB or 512GB model. Samsung Pros are great.


9 minutes ago, Dabombinable said:

Actually, in a way you'd improve reliability (outside of any manufacturing defects). RAID 0 SSDs = better life than a single same-size SSD.

I haven't had my morning coffee yet, so I might be misreading this. To clarify, are you saying 2x 1TB drives are better than 1x 2TB drive, or that 2x 1TB drives are better than 1x 1TB drive?

 

5 minutes ago, GoldenLag said:

Well, the issue of bit-flips or data corruption is still there.

(Some?) drives already have some form of ECC; I'm not sure how it is implemented, as implicitly it would eat into your capacity and cause more write amplification. I've long wondered about the way different bits carry different risks: the last added bit per cell is more likely to drift than the others. I wondered if ECC systems could make use of that, in effect providing more ECC to that bit than to the higher-up ones. It would eat into the additional capacity that extra bit would have provided.

 

I've looked at ZFS on and off for a while for its self-repair potential, but I guess it isn't happening as long as my main OS is Windows.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


1 minute ago, porina said:

I haven't had my morning coffee yet, so I might be misreading this. To clarify, are you saying 2x 1TB drives are better than 1x 2TB drive, or that 2x 1TB drives are better than 1x 1TB drive?

Was kinda questioning that as well, seeing as you go from one big drive being a liability to two drives being a liability.

 

You add an extra controller with all the stuff it comes with, not to mention the sketchiness of RAID 0.


3 minutes ago, GoldenLag said:

seeing as you go from one big drive being a liability to two drives being a liability.

Given the failure rates of SSDs, the added risk is very, very minimal. But because with RAID 0 you are writing half as much data to each SSD's NAND chips, each SSD's NAND lasts longer than a single large SSD's NAND would. So instead of 1PB of total write endurance, you'd get around 2PB, or 1.5PB if the smaller SSDs have slightly less endurance than the large one.
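As a quick illustration of that endurance math (hypothetical TBW ratings, and assuming host writes stripe evenly across both members):

single_2tb_tbw = 1000   # hypothetical: one 2TB drive rated for 1000 TBW (1PB)
small_1tb_tbw = 750     # hypothetical: a 1TB drive with a somewhat lower rating

# Each RAID 0 member absorbs half the host writes, so the array's
# total endurance is roughly the sum of the members' ratings.
array_tbw = 2 * small_1tb_tbw
print(f"Single 2TB: {single_2tb_tbw} TBW vs 2x 1TB RAID 0: {array_tbw} TBW")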


3 minutes ago, pizapower said:

@GoldenLag Long time no see. How is your Mazda doing? My BMW E38 730d is still going strong, and no rust.

That's fairly off-topic... it's doing fine, no real rust to be seen.

 

4 minutes ago, pizapower said:

And QLC flash sucks.

Well, good thing you really don't have to care about QLC flash when you have about 80GB of SLC cache to spare when you have to write something.

 

And the Sabrent Rocket uses TLC, so the MX500 no longer has the crown.


27 minutes ago, GoldenLag said:

That's fairly off-topic... it's doing fine, no real rust to be seen.

 

Well, good thing you really don't have to care about QLC flash when you have about 80GB of SLC cache to spare when you have to write something.

 

And the Sabrent Rocket uses TLC, so the MX500 no longer has the crown.

That's not 80GB of dedicated SLC NAND chips. It's QLC storing only a single bit per cell, basically emulating SLC behavior. But it's still just QLC or TLC or MLC... This also means that if you have only 10GB of free space, you only have 10GB of SLC available, assuming they utilize it fully to the end. But I don't think it scales that way; it needs 80GB of free space, and when that 80GB is not available anymore, it operates in regular mode.
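Worth noting the capacity cost of that trick: a QLC cell run in SLC mode holds 1 bit instead of 4, so the cache chews through raw capacity fast. Illustrative numbers only, not any specific drive's firmware policy:

slc_cache_gb = 80        # advertised SLC-mode cache size (example)
qlc_bits, slc_bits = 4, 1

raw_qlc_used_gb = slc_cache_gb * qlc_bits / slc_bits
print(f"{slc_cache_gb}GB of SLC cache ties up {raw_qlc_used_gb:.0f}GB of raw QLC")  # 320GB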

 

In general, such drives are perfectly fine for laptops, which benefit from them greatly with basically no downsides and still a low price. 'Cause those cheap DRAM-less SSDs in laptops were horrid. I once tried one and it took 3 hours to update Windows. It was painfully slow because of the SSD alone.


2 minutes ago, RejZoR said:

That's not 80GB of dedicated SLC NAND chips. It's QLC storing only a single bit per cell, basically emulating SLC behavior. But it's still just QLC or TLC or MLC... This also means that if you have only 10GB of free space, you only have 10GB of SLC available, assuming they utilize it fully to the end. But I don't think it scales that way; it needs 80GB of free space, and when that 80GB is not available anymore, it operates in regular mode.

Nice to know, but it's still the sort of usage where you really won't notice it unless you have a very write-intensive application. It's also usually a good idea to keep about 10% free on the SSD regardless of the NAND flash used.


That's 80GB that needs some time to regenerate as the SSD controller reallocates data from the fast SLC-emulated cache to regular TLC cells. Usually it does this during garbage collection cycles run periodically by the controller. Casual users almost never write this much to a drive in a single day, so most people don't notice anything, and a TLC/QLC SSD behaves almost like an SLC or MLC drive.

