SSDs got even bigger? PLC NAND

Origami Cactus
1 hour ago, GoldenLag said:

was kinda questioning that as well, seeing as you go from one big drive being a liability to 2 drives being a liability.

 

you add an extra controller with all the stuff that comes with it, not to mention the sketchiness of RAID 0

Depends on how many chips one has, or how the chips handle wear leveling, or the controllers and how well they cope (heat/age/do they burn out?). It gets really complicated. NASA uses 3 devices for redundancy (not reliability!) but relies on tried-and-true devices for reliability. Then some systems use hundreds of devices for when *you absolutely must confirm it cannot fail*. But then again, if you have more devices, more can go wrong! So it's all about balance and individual situations.


6 minutes ago, pizapower said:

Like I said, those cheap SSDs are crap. The MX500 is one of the cheapest good SSDs.

The P1 and 660p are good SSDs...

 

And I mean, moving the goalposts from best "bang for buck" to one of the good cheap SSDs certainly helps your case.

 

But the QLC NVMe drives and the Sabrent Rocket hold the price point in terms of best "bang for buck".


1 hour ago, porina said:

I haven't had my morning coffee yet, so I might be misreading this. To clarify, are you saying 2x 1TB drives are better than 1x 2TB drive, or that 2x 1TB drives are better than 1x 1TB drive?

 

(Some?) drives already have some form of ECC; I'm not sure how it is implemented, as implicitly it would eat into your capacity and cause more write amplification. I've long wondered about how different bits have different risks. The last added bit per cell is more likely to drift than the others. I wondered if ECC systems could make use of that, in effect providing more ECC to that bit than to the higher-up ones. It would eat into the additional capacity that extra bit would have provided.

 

I've looked at ZFS on and off for a while for its self-repair potential, but I guess it isn't happening as long as my main OS is Windows.

 

1 hour ago, GoldenLag said:

was kinda questioning that as well, seeing as you go from one big drive being a liability to 2 drives being a liability.

 

you add an extra controller with all the stuff that comes with it, not to mention the sketchiness of RAID 0

2x 1TB SSD > 1x 2TB SSD, as you have the writes spread across 2 SSDs. The only liability in the end comes from manufacturing defects and any physical damage. Considering that I still run 15-year-old HDDs in RAID 0 without any issues, and that SSDs are far more durable...

Especially if you use the SSDs as scratch disks.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


25 minutes ago, pizapower said:

Like I said, those cheap SSDs are crap. The MX500 is one of the cheapest good SSDs.

Samsung also has some OEM drives that can be found around, and they are essentially the same as their higher series, but at a lower price, since they don't come branded as the retail models and usually come without any fancy box or instructions.


7 minutes ago, RejZoR said:

Samsung also has some OEM drives that can be found around, and they are essentially the same as their higher series, but at a lower price, since they don't come branded as the retail models and usually come without any fancy box or instructions.

Samsung is very sneaky. I got a new batch of a 2x 16GB Corsair v4.32 RAM kit (Samsung B-die, made April/May 2019) for 152€.


22 hours ago, VegetableStu said:

dear fucking everyone,

 

1LC. 2LC. 3LC. 4LC. 5LC.

 

fucking thanks,

Dr. WTF IS PLC

Standards

Ketchup is better than mustard.

GUI is better than Command Line Interface.

Dubs are better than subs


It's called a naming convention, not a standard. Naming conventions are often self-created by users or communities, often as abbreviations of things; standards are shaped by an umbrella organization and then specified exactly.


17 minutes ago, pizapower said:

Samsung is very sneaky. I got a new batch of a 2x 16GB Corsair v4.32 RAM kit (Samsung B-die, made April/May 2019) for 152€.

Good B-die? How does it clock?

 

 

You can usually find cheaper options like Hynix CJR (I think it's called CJR) or Micron E-die, but B-die has always been neat.


33 minutes ago, Dabombinable said:

2x 1TB SSD > 1x 2TB SSD, as you have the writes spread across 2 SSDs. The only liability in the end comes from manufacturing defects and any physical damage.

Not sure about that, or at least it depends on the use case.

 

If you are interface-speed limited then RAID 0 could be faster for sustained transfers, but otherwise it depends on the internal layout.

 

Endurance generally scales with capacity, so it's effectively the same for both.

 

I would be more concerned about sudden death, presumably controller failure. I have had two SSDs die from that and got warranty replacements; none from exhausting endurance. So more SSDs = higher risk in that area.

Main system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, Corsair Vengeance Pro 3200 3x 16GB 2R, RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


Just now, GoldenLag said:

Good B-die? How does it clock?

 

 

You can usually find cheaper options like Hynix CJR (I think it's called CJR) or Micron E-die, but B-die has always been neat.

My motherboard is pretty crap (Asus Prime X370-A), but I need that sweet old legacy PCI slot for my studio sound card. I run the 2700 @ 4GHz and RAM @ 3000MHz 14-14-14-30 @ 1.3V. My motherboard can't even handle 3200MHz with very loose timings.


1 hour ago, pizapower said:

Like I said, those cheap SSDs are crap. The MX500 is one of the cheapest good SSDs.

At the moment some E12 and SM2262 NVMes are plain better value for money. The EX920 and Sabrent Rocket are within $5 (for 1TB) of the MX500 while being much faster drives. They even dip below $100 sometimes. 


20 minutes ago, porina said:

If you are interface-speed limited then RAID 0 could be faster for sustained transfers, but otherwise it depends on the internal layout.

RAID 0 is slower where it counts: without RAID, low-queue-depth I/O is faster because there are no extra calculations or data striping required. If you push the queue depth or I/O thread count up, RAID 0 will do more IOPS, but each I/O operation takes longer; it's concurrency vs. latency.

 

I've got 4x 800GB write-intensive (server class) NVMe SSDs in RAID 10 and the latency hit is huge; it basically takes an NVMe device and puts it into SATA-class latency. But I have 6 of these servers with concurrent I/O operations per server in the hundreds (application operations, not disk I/O, which is way higher), so I need that total combined IOPS.
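For anyone curious, a rough way to picture that concurrency vs. latency trade-off is Little's law: IOPS ≈ outstanding I/Os ÷ average latency. A minimal sketch with made-up latency figures (not measurements from the servers above):

```python
# Little's law: throughput = concurrency / latency. Illustrative numbers only.
def iops(outstanding_ios: int, latency_seconds: float) -> float:
    return outstanding_ios / latency_seconds

print(iops(1, 100e-6))   # bare NVMe, ~100 us per I/O at QD1 -> ~10,000 IOPS
print(iops(1, 400e-6))   # array adds overhead, ~400 us at QD1 -> ~2,500 IOPS (latency loses)
print(iops(64, 400e-6))  # pile on concurrency at QD64 -> ~160,000 IOPS (throughput wins)
```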

 

20 minutes ago, porina said:

Endurance generally scales with capacity, so it's effectively the same for both.

Endurance scales, but failure risk is still higher with 2 devices than with 1. Take it down to the basic coin-flip example: if I have 1 coin and tails means failure, then you have a 50% chance of failure. 2 coins, with only 1 tails needed for failure, means a 75% chance of failure.
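That coin-flip arithmetic generalises to any per-drive failure probability, since a striped array is lost if any member is lost. A minimal sketch (the 1% figure is just illustrative, not a real annual failure rate):

```python
# Probability that a RAID 0 array fails, given each member drive fails
# independently with probability p over whatever period you care about.
def array_failure_probability(p: float, drives: int) -> float:
    return 1 - (1 - p) ** drives

print(array_failure_probability(0.5, 1))    # 0.5    -> the single coin
print(array_failure_probability(0.5, 2))    # 0.75   -> two coins, any tails kills the array
print(array_failure_probability(0.01, 2))   # ~0.0199 -> a more realistic per-drive rate
```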


2 hours ago, pizapower said:

The MX500 series is still the king of best bang-for-the-buck SSDs. My Win10 is on my 850 EVO 250GB and my games are on my MX500 SSD. No DRAM-less SSD for me.

I've got two 1TB versions of these and I paid £200 for them back in Jan... they've come down further in price since then, too...

 

As for the capacity/read/write speeds in the OP... for my media server, I need capacity more than I need read/write speeds. Adding files to my media server isn't done at high speed, so even if write speeds are 100-200MB/s that's perfectly fine... even 80MB/s would be fine for my use. Read speeds aren't as important either, as these are write-once, read-only scenarios... so even 4K HDR content is going to be streamed around the house at an absolute max of 120MB/s, and most stuff I encode is half that without sacrificing quality.

 

But my current server has almost 30TB of storage... and I'm down to my last 4TB, so within the next 12 months I'll have to replace one or both of the last two 4TB drives (the rest are 6-8TB) with much larger capacity... But HDDs are still silly prices and show no signs of dropping anytime soon. The larger the capacity, the sillier the pricing.

 

But I doubt we'll see 6TB or higher SSDs at sensible prices for years yet... and until you can pick up a 6TB SSD for less than an HDD... there's no point in moving my storage over to it. So my next upgrade will either be an HDD in the 6-8TB range, or a SATA expansion card and a couple more 4TB drives... which is cheaper than buying a single 8TB one.

System 1: Gigabyte Aorus B450 Pro, Ryzen 5 2600X, 32GB Corsair Vengeance 3200mhz, Sapphire 5700XT, 250GB NVME WD Black, 2x Crucial MX5001TB, 2x Seagate 3TB, H115i AIO, Sharkoon BW9000 case with corsair ML fans, EVGA G2 Gold 650W Modular PSU, liteon bluray/dvd/rw.. NO RGB aside from MB and AIO pump. Triple 27" Monitor setup (1x 144hz, 2x 75hz, all freesync/freesync 2)

System 2: Asus M5 MB, AMD FX8350, 16GB DDR3, Sapphire RX580, 30TB of storage, 250GB SSD, Silverstone HTPC chassis, Corsair 550W Modular PSU, Noctua cooler, liteon bluray/dvd/rw, 4K HDR display (Samsung TV)

System 3 & 4: nVidia shield TV (2017 & 2019) Pro with extra 128GB samsung flash drives.


47 minutes ago, porina said:

Not sure about that, or at least it depends on the use case.

 

If you are interface-speed limited then RAID 0 could be faster for sustained transfers, but otherwise it depends on the internal layout.

 

Endurance generally scales with capacity, so it's effectively the same for both.

 

I would be more concerned about sudden death, presumably controller failure. I have had two SSDs die from that and got warranty replacements; none from exhausting endurance. So more SSDs = higher risk in that area.

I did say outside of manufacturing defects (which is what you had). Don't forget that HDDs have the same risk, and there are plenty of examples of drives that have had heavy use for well over a decade without failing, including 10K and 15K RPM drives.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


38 minutes ago, Dabombinable said:

I did say outside of manufacturing defects (which is what you had). Don't forget that HDDs have the same risk, and there are plenty of examples of drives that have had heavy use for well over a decade without failing, including 10K and 15K RPM drives.

Similarly, I've had hard disks last "forever" but also some fail within warranty. Even if we exclude that, I fail to see a benefit in RAID 0 of two smaller drives compared to one large drive, apart from if you really need the potential increase in sequential transfers.

 

The only time I considered RAID of SSDs is when I needed a lower-cost RAM substitute at the TB scale. Even then, it would be best done at the application level, not the OS or hardware level, for maximum performance.



5 minutes ago, porina said:

Similarly, I've had hard disks last "forever" but also some fail within warranty.

Some products seem to be in a category of either lasting 3 years or going on "forever".


53 minutes ago, GoldenLag said:

Some products seem to be in a category of either lasting 3 years or going on "forever".


Most devices either die quickly or live forever; it's probability.

[Image: probability density function of an exponential distribution]
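Under that kind of constant-failure-rate (exponential) model, a quick sketch of the survival probability makes the "die quickly or live forever" split concrete (the 5-year mean life is just an example number):

```python
import math

# Exponential (constant failure rate) model: S(t) = exp(-t / mean_life).
mean_life_years = 5.0

def survival(t_years: float) -> float:
    """Probability a device is still working after t years."""
    return math.exp(-t_years / mean_life_years)

print(1 - survival(3))   # ~0.45 -> nearly half are dead within 3 years
print(survival(10))      # ~0.14 -> but a decent chunk are still going at double the mean life
```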

PLEASE QUOTE ME IF YOU ARE REPLYING TO ME

Desktop Build: Ryzen 7 2700X @ 4.0GHz, AsRock Fatal1ty X370 Professional Gaming, 48GB Corsair DDR4 @ 3000MHz, RX5700 XT 8GB Sapphire Nitro+, Benq XL2730 1440p 144Hz FS

Retro Build: Intel Pentium III @ 500 MHz, Dell Optiplex G1 Full AT Tower, 768MB SDRAM @ 133MHz, Integrated Graphics, Generic 1024x768 60Hz Monitor


 


7 hours ago, Sypran said:

...The middle chip is the ARM controller chip...

Actually, Samsung uses its proprietary MJX controller chip, not ARM.

 

6 hours ago, Dabombinable said:

Actually, in a way you'd improve reliability (outside of any manufacturing defects). RAID 0 SSD=better life than single same-size SSD.

That is completely wrong. RAID 0 adds more failure points, increasing chances for failure.

4 hours ago, GoldenLag said:

Nice to know, but it's still the sort of usage where you really won't notice it unless you have a very write-intensive application. It's also usually a good idea to leave about 10% free on the SSD regardless of the NAND flash used.

10% is the figure usually recommended for HDDs. For SSDs, 20%-25% is recommended (in addition to the factory set amount of overprovisioning).

 

3 hours ago, Trik'Stari said:

Standards

You're comparing apples to kumquats.

Jeannie

 

As long as anyone is oppressed, no one will be safe and free.

One has to be proactive, not reactive, to ensure the safety of one's data so backup your data! And RAID is NOT a backup!

 


1 hour ago, Lady Fitzgerald said:

Actually, Samsung uses its proprietary MJX controller chip, not ARM.

Huh. Weird, they literally label it "ARM".

I can't quickly find anything on MJX; is it possible they made a proprietary, customized ARM chip? Kinda like how the Sharp LR35902 is a cut-down Z80 missing a bunch of instructions, registers, etc., while adding some unique ones of its own. It honestly would make more sense that way, rather than creating a whole new platform, with all the tools the firmware writers would need to use.
[Image: Samsung 860 Pro PCB, top side]


On 9/29/2019 at 4:48 AM, minibois said:

I can't wait for companies to use this tech and market it as a great thing!

Can we get a sequel?! "How SSD tech keep getting worse-er still!!"

I don't think that's fair. Certainly they are worse in some ways, but they are better in others. That doesn't make them a bad thing; they're just different, a new choice that may or may not be right depending on your needs. MLC drives still exist, so clearly there's a market for that. The existence of these new ones will simply be a new offering, not a replacement for them.

 

Personally, I really look forward to this, particularly for the reason raised by the OP. They may be slow to write and not very durable, making them a questionable choice for a system/boot drive, but they sound perfect for a game drive: high read speeds, low price, and other specs that, while not good, aren't going to get in the way. I already have a great game drive and no reason to upgrade, so I probably won't be able to experiment with this for many years yet, but when I eventually do, assuming I will, I'll be sure to post about how it goes.

Quote

The worst thing about tech like this is that laptops will use whatever sort of drive they like and not mention that in their spec sheets. Now suddenly you're comparing a 250GB TLC SSD to a 500GB QLC and 750GB PLC drive and people think it's the same across the board (I'm probably off with my math, don't care at the moment).

I can see where you are coming from with this, though. However, I suspect this would only be an issue on low-end machines. As Linus has talked about in the past, it's a lot of effort, time, and cost to create a laptop SKU, and so they want every one they make to be "good". That's why it's a big deal that AMD has started scoring companies to build Ryzen into laptops; they don't do that if it's crap. But anyway, I suspect they very well might do what you're saying, but only on the cheaper machines where they can "get away with it". Anything more mid to high-tier should use a good drive simply because it's the right choice. It's the same reason they don't put 120 Hz displays and 2080s in a laptop with an i3, but they definitely do when there's a $2000 price tag and other quality parts to match.

 

As evidence to back up this otherwise nearly baseless speculation of mine, I'd point to the fact that many laptops currently include NVMe drives that, tbh, there isn't really any need for, so clearly they're not afraid to go above and beyond just for a nice-looking spec.
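Coming back to the capacity comparison in the quote above: for the same number of cells, capacity scales with bits per cell. A rough sketch (assuming equal cell counts, which real products won't exactly match):

```python
# Capacity relative to a TLC (3 bits/cell) drive built from the same number of cells.
bits_per_cell = {"TLC": 3, "QLC": 4, "PLC": 5}

tlc_capacity_gb = 250
cells = tlc_capacity_gb / bits_per_cell["TLC"]  # arbitrary cell "units"

for name, bits in bits_per_cell.items():
    print(name, round(cells * bits))  # TLC 250, QLC ~333, PLC ~417
```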

 

On 9/29/2019 at 5:22 AM, 5x5 said:

This is literally the point where an HDD makes more sense as a product. Who in their right mind thinks making an SSD slower than a 5400RPM HDD is a good idea?

There are still a great many advantages to this over a HDD, even if we assume the (sequential) writes will indeed be that slow:

  • More than likely, random writes will still be faster
  • Reads (both sequential and random) will be massively faster, which unless you're exclusively moving big files around is where a lot of the improvement to feel comes from anyway
  • Smaller
  • More physically durable (little to no risk from magnets, drops, etc.)
  • Lower power consumption
  • Lower (or no) noise
  • Durability is based on quantity of data written, rather than mount/unmount cycles, so depending on the use case it may outlast a HDD (*yes technically HDDs have a max write too but it's so high most people don't even know about it and it's only quoted on server drives from what I've seen)

Solve your own audio issues  |  First Steps with RPi 3  |  Humidity & Condensation  |  Sleep & Hibernation  |  Overclocking RAM  |  Making Backups  |  Displays  |  4K / 8K / 16K / etc.  |  Do I need 80+ Platinum?

If you can read this you're using the wrong theme.  You can change it at the bottom.


10 hours ago, porina said:

Similarly, I've had hard disks last "forever" but also some fail within warranty. Even if we exclude that, I fail to see a benefit in RAID 0 of two smaller drives compared to one large drive, apart from if you really need the potential increase in sequential transfers.

 

The only time I considered RAID of SSDs is when I needed a lower-cost RAM substitute at the TB scale. Even then, it would be best done at the application level, not the OS or hardware level, for maximum performance.

You'd see a tangible benefit in write endurance. As in you effectively double the daily TBW.

10 hours ago, GoldenLag said:

Some products seem to be in a category of either lasting 3 years or going on "forever".

*looks at WD AC280 and 80MB Maxtor HDD*

8 hours ago, Lady Fitzgerald said:

That is completely wrong. RAID 0 adds more failure points, increasing chances for failure.

Watch what Linus was saying when talking about RED's cartridges. I'm not talking about chance of failure AT ALL. I'm talking about write endurance, as in how much can be written to the SSD. 2x 1TB can have more written to them than 1x 2TB, which is why, especially for scratch disks, the cost would actually be worth it.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


10 minutes ago, Dabombinable said:

2x 1TB can have more written to them than 1x 2TB

Not from what I've seen of rated specs, assuming you are keeping within a model range and not comparing across different ranges. Endurance is generally proportional to capacity, so double the capacity, double the endurance. I'd welcome an example otherwise.
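A minimal sketch of that proportionality, using hypothetical TBW ratings rather than figures from any specific model:

```python
# Hypothetical ratings: a 1TB model at 360 TBW and its 2TB sibling at 720 TBW.
one_tb_tbw, two_tb_tbw = 360, 720

def dwpd(total_tbw: float, total_capacity_tb: float, warranty_years: float = 5) -> float:
    """Sustainable drive writes per day over the warranty period."""
    return total_tbw / (total_capacity_tb * warranty_years * 365)

print(2 * one_tb_tbw, "vs", two_tb_tbw)  # 720 vs 720 total TBW -> a wash
print(round(dwpd(2 * one_tb_tbw, 2), 3), round(dwpd(two_tb_tbw, 2), 3))  # identical DWPD either way
```

If the ratings really do scale linearly like this, the array and the single drive carry the same total rated writes; the difference is two controllers that can fail instead of one.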



8 hours ago, Lady Fitzgerald said:

Actually, Samsung uses its proprietary MJX controller chip, not ARM.

 

That is completely wrong. RAID 0 adds more failure points, increasing chances for failure.

10% is the figure usually recommended for HDDs. For SSDs, 20%-25% is recommended (in addition to the factory set amount of overprovisioning).

 

You're comparing apples to kumquats.

Is a naming convention not a standard?



24 minutes ago, Dabombinable said:

...Watch what Linus was saying when talking about RED's cartridges. I'm not talking about chance of failure AT ALL. I'm talking about write endurance, as in how much can be written to the SSD. 2x 1TB can have more written to them than 1x 2TB, which is why, especially for scratch disks, the cost would actually be worth it.

You said, "...Actually, in a way you'd improve reliability (outside of any manufacturing defects). RAID 0 SSD=better life than single same-size SSD..." 

 

Increasing potential failure points reduces reliability. The amount of TBW is not the only thing that determines reliability.


