
Backblaze: SSDs might be as unreliable as disk drives

Lightwreather
Solution by LAwLz:
59 minutes ago, jagdtigger said:

Look, when I have a WD Green from 2011 that survived even running torrents and being in a RAID, versus 3 HDDs from 2017 which died with something like 30k hours on them while used in their intended use case. That's way more than just bad luck.

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... [and even those are old AF; the drive was one or two years old when they appeared, I think])

/EDIT

Oh, and did I mention that for not much more I could get WD DC HC-series drives instead of crappy IronWolfs? Yeah, Seagate can go bust for all I care...

Seagate has lower RMA rates than Western Digital.

It was 0.93% vs 1.26% in 2017 (no more recent data is available).

 

Failure rate of the 4TB WD Red - 2.95%

Failure rate of 4TB IronWolf - 2.81%

 

 

Source: https://www.hardware.fr/articles/962-6/disques-durs.html

 

These are RMA rates from a very large French retailer.

 

I don't doubt your experience, but the fact of the matter is that your experience is a very tiny sample, and through bad luck it is heavily skewed compared to real-world generalized numbers.
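To put rough numbers on that: a simple 95% confidence interval shows how little a handful of drives can tell you compared to retailer-scale data. A quick sketch (the retailer's total sample size below is my assumption; hardware.fr doesn't publish it):

```python
import math

def wilson_ci(failures, n, z=1.96):
    """95% Wilson score confidence interval for a failure proportion."""
    p = failures / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# 3 dead drives out of 3 bought (one person's bad batch):
print(wilson_ci(3, 3))        # ~(0.44, 1.00): consistent with almost anything
# A 0.93% RMA rate over an assumed 10,000 drives sold:
print(wilson_ci(93, 10_000))  # ~(0.0076, 0.0114): tightly pinned down
```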

 

 

Edit: 

For those interested, here are the RMA statistics for HDDs and SSDs according to the French retailer, which I think are way more representative of what consumers doing consumer things can expect.

 

HDDs:

  • HGST 0.82%
  • Seagate 0.93%
  • Toshiba 1.06%
  • Western Digital 1.26%

 

SSDs:

  • Samsung 0.17%
  • Intel 0.19%
  • Crucial 0.31%
  • Sandisk 0.31%
  • Corsair 0.36%
  • Kingston 0.44%

I'm kind of worried about NVMe drives and some SSDs, mostly regarding quality control and temps, and also about whether you actually get what you pay for.

If they can't, then DON'T SELL IT AS ONE. Ugh.


1 minute ago, BachChain said:

Long-term reliability was never the selling point of SSDs, especially in write-heavy enterprise situations.

Outside of write-heavy applications?


1 hour ago, BachChain said:

This is news? Long-term reliability was never the selling point of SSDs, especially in write-heavy enterprise situations.

The selling point of SSDs is read speed.

 

In most cases, the bigger the capacity, the higher the reliability and the faster the drive. The problem is that when you use more than about 80% of the drive, speed and reliability fall off a cliff. This is because the controller has to be much more aggressive about garbage collection and wear leveling when there is little spare space left.
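If you want to see why, here's a minimal sketch of greedy garbage collection under uniform random overwrites — a toy model made up for illustration, not any vendor's actual firmware. Write amplification (flash programs per host write) climbs steeply as the drive fills:

```python
import random

def write_amplification(blocks=64, pages_per_block=64,
                        utilization=0.80, host_writes=200_000, seed=0):
    """Toy flash model: greedy GC, uniform random overwrites.
    utilization = logical capacity / physical capacity."""
    rng = random.Random(seed)
    logical_pages = int(blocks * pages_per_block * utilization)
    valid = [set() for _ in range(blocks)]   # live logical pages per block
    loc = {}                                 # logical page -> block holding it
    free = list(range(blocks))               # erased blocks
    active, fill = free.pop(), 0             # block currently being filled
    flash_programs = 0

    for _ in range(host_writes):
        queue = [rng.randrange(logical_pages)]  # one host write...
        while queue:                            # ...plus any GC relocations
            lp = queue.pop()
            if lp in loc:
                valid[loc[lp]].discard(lp)      # old copy becomes stale
            if fill == pages_per_block:         # active block is full
                if not free:                    # out of space: greedy GC
                    victim = min((b for b in range(blocks) if b != active),
                                 key=lambda b: len(valid[b]))
                    queue.extend(valid[victim])  # live pages must be rewritten
                    for v in valid[victim]:
                        del loc[v]
                    valid[victim] = set()
                    free.append(victim)
                active, fill = free.pop(), 0
            valid[active].add(lp)
            loc[lp] = active
            fill += 1
            flash_programs += 1
    return flash_programs / host_writes

for u in (0.50, 0.80, 0.95):
    print(f"{u:.0%} full -> write amplification ~{write_amplification(utilization=u):.2f}")
```

Every unit of write amplification is extra wear on the NAND and extra background work competing with host I/O, which is where both the speed and the endurance loss come from.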

 

If Backblaze were only storing things and never rewriting them, the reliability of the SSDs should be close to 100%, short of a thermal failure. However, that's likely not the case.

 


4 hours ago, jagdtigger said:

Look, when I have a WD Green from 2011 that survived even running torrents and being in a RAID, versus 3 HDDs from 2017 which died with something like 30k hours on them while used in their intended use case. That's way more than just bad luck.

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... [and even those are old AF; the drive was one or two years old when they appeared, I think])

/EDIT

Oh, and did I mention that for not much more I could get WD DC HC-series drives instead of crappy IronWolfs? Yeah, Seagate can go bust for all I care...

Were those HDDs by chance in slots directly above and below each other, and in non-rubber mounts? I have a drive that I shucked back in 2015 which has only ever run without spacing between drives for short periods (always with rubber mounts). Btw, I can raise you a faulty WD Green 2TB (still have it) with fewer hours, a 2TB WD Black (that I returned), and a 2013 WD Blue that has had even more use:
[attached screenshot: drive stats]

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

This is Backblaze; they will be using bottom-of-the-barrel consumer SSDs, not ones designed for their needs, because to them bottom dollar wins out for the lowest service cost.

 

Use actually good SSDs rated for greater than 1 DWPD and the problem is solved.
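For reference, DWPD is just the endurance rating spread over the warranty period. A quick sketch with assumed, typical-looking ratings (not any specific Backblaze drive):

```python
def dwpd(tbw, capacity_tb, warranty_years=5):
    """Drive writes per day implied by a TBW rating over the warranty."""
    return (tbw / capacity_tb) / (warranty_years * 365)

# Illustrative, assumed ratings:
print(f"consumer 1TB, 600 TBW:    {dwpd(600, 1):.2f} DWPD")   # ~0.33
print(f"enterprise 1TB, 1825 TBW: {dwpd(1825, 1):.2f} DWPD")  # ~1.00
```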

 

News story here is "Tech company cheaps out on SSDs and complains"


5 minutes ago, leadeater said:

This is Backblaze; they will be using bottom-of-the-barrel consumer SSDs, not ones designed for their needs, because to them bottom dollar wins out for the lowest service cost.

 

Use actually good SSDs rated for greater than 1 DWPD and the problem is solved.

 

News story here is "Tech company cheaps out on SSDs and complains"

I wonder if they're using the Crucial P1 as some of their SSDs. With the glitches mine has had several times, that'd explain some of their failures. Mine doesn't like data being moved around too much (and then there is the low endurance).

Intel SSDs, if they are being used, would also strictly enforce their write endurance limits, so they'd "fail" while they are still actually good to use.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

1 hour ago, Dabombinable said:

I wonder if they're using the Crucial P1 as some of their SSDs. With the glitches mine has had several times, that'd explain some of their failures. Mine doesn't like data being moved around too much (and then there is the low endurance).

Intel SSDs, if they are being used, would also strictly enforce their write endurance limits, so they'd "fail" while they are still actually good to use.

Here are the stats for the SSDs. Please keep in mind that "DELLBOSS VD" is a PCIe-to-M.2 expansion card, so we don't know the model of 176 of those drives. They've got 215 SSDs just labeled "Seagate SSD" as well...

[attached table: Backblaze SSD models and failure counts]

 

It seems like most, if not all, of their drives are consumer models, and in some cases OEM drives.

Edit: Actually, it seems at least one of the drives (the Micron ending in TDU) is an enterprise drive, from the "Micron 5300 Boot" lineup. It's made to be a boot drive and has about 50% lower endurance than the regular "5300 Pro" model; the 5300 Max has about 5 times the endurance. So even though it's an SSD made for data centers, it's the bottom of the barrel, only meant to be used as a boot drive.


28 minutes ago, LAwLz said:

Here are the stats for the SSDs. Please keep in mind that "DELLBOSS VD" is a PCIe-to-M.2 expansion card, so we don't know the model of 176 of those drives. They've got 215 SSDs just labeled "Seagate SSD" as well...

[attached table: Backblaze SSD models and failure counts]

 

It seems like most, if not all, of their drives are consumer models, and in some cases OEM drives.

Edit: Actually, it seems at least one of the drives (the Micron ending in TDU) is an enterprise drive, from the "Micron 5300 Boot" lineup. It's made to be a boot drive and has about 50% lower endurance than the regular "5300 Pro" model; the 5300 Max has about 5 times the endurance. So even though it's an SSD made for data centers, it's the bottom of the barrel, only meant to be used as a boot drive.

Yeah, just looking at the chart, they're less accurate than WCCFTech.

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL

Link to comment
Share on other sites

Link to post
Share on other sites

10 hours ago, jagdtigger said:

Look, when I have a WD Green from 2011 that survived even running torrents and being in a RAID, versus 3 HDDs from 2017 which died with something like 30k hours on them while used in their intended use case. That's way more than just bad luck.

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... [and even those are old AF; the drive was one or two years old when they appeared, I think])

/EDIT

Oh, and did I mention that for not much more I could get WD DC HC-series drives instead of crappy IronWolfs? Yeah, Seagate can go bust for all I care...

I have a Seagate drive with almost 100k hours on it, an 80GB drive from 20 years ago, and it's fine; the drive runs as expected.


10 hours ago, jagdtigger said:

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... [and even those are old AF; the drive was one or two years old when they appeared, I think])

Lol, what? I've got a Toshiba drive around here somewhere that still worked the last time I used it, and it had more than 6,000 bad sectors and counting. 10 is nothing.


5 hours ago, leadeater said:

This is Backblaze; they will be using bottom-of-the-barrel consumer SSDs, not ones designed for their needs, because to them bottom dollar wins out for the lowest service cost.

Which is exactly why I put no weight on anything Backblaze says. Barracudas in a data center? Yeah, of course they are going to have massive failure rates.

 


I think looking at SSDs in terms of reliability in the long term is not the right perspective to take. 
 

I've got a 2TB WD Green HDD that I ripped out of a 4th-gen Apple Time Capsule. That drive is 10 years old, has been used as a bulk storage drive in 2 PCs, and currently serves as a NAS drive on my home network.
 

The point is, HDDs are usually very reliable if you don't abuse them. Small 2.5" HDDs of the kind often found in laptops, though? Those die all the time. Why? Because people throw, hit, drop, and otherwise abuse their laptops all the time. If you swap that laptop HDD for an SSD, you'll find that drive failure rates suddenly go way down. That's really where SSDs are more reliable.


Backblaze can call me when they start putting drives designed for datacentre use in their datacentres.


As far as SSDs go, one cheap Team Group SSD in my office died, but none of the others in computers I've built have. I had a SanDisk SSD in my workstation with a 70 TBW endurance rating; I retired it (and moved to a bigger one) after it got to 115 TBW with no failure, and the disk info was still alright otherwise. I've really never sweated SSD reliability; when it's my money I buy decent stuff and it lasts.

 

In contrast, the last spinning drive I'd consider genuinely long-lasting was a 10GB Western Digital I put into service in 1999. That computer ran nearly continuously from when I built it, through the years after I sold it, until roughly 2010 when I got the machine back. 11 years, 24 hours a day, always spun up because it was a media repository. So ~95k hours? That's actual reliability.

 

Nearly every spinning drive I've had after that lasted 3-4 years before there were problems with it. At one point, I had piles of my and my friends' 80-250GB drives, mostly Western Digital, that had failed prematurely. Most were unrecoverable and just had the magnets pulled out before being destroyed. I have one active mirrored array at my office, but outside of that I keep everything on SSDs and only back up to spinning drives, which are externals and don't run all the time.

 


15 hours ago, LAwLz said:

No, Seagate drives aren't bad. You have been unlucky if that's your experience. 

Seagate has roughly the same failure rate as all other consumer HDD makers like WD, Toshiba and Samsung. 

Seagate got a bad rep because they had one or two models that were bad like a decade ago. 

Also worth noting that 4 years is the point when critical failures start to become more common regardless of drive brand. 

 

The typical bathtub curve.


@WolframaticAlpha Is the A400 that bad? 😄 The one I have, I think I'll have to wipe and include as part of a system sale later. The only requirements I had at the time were that it be an SSD and cheap. Boot drive for a cruncher, back in the days when people ran software 24/7 without expecting profit.


5 minutes ago, valdyrgramr said:

Not the most reliable, and there were superior options in that price range.

I can't remember exactly what was on offer around that time, but I didn't have problems with their older V300 series, which was my go-to in cheap systems before that. I think WD had taken over SanDisk by then, so the old SanDisks weren't so common, and WD Green was more expensive. The Crucial BX series was also more expensive, and I don't recall any other branded drives on the cheap end at the time.


20 hours ago, LAwLz said:

Read the full report now.

Do you actually have a link to the full report, with the numbers? If so, I think you were reading a different report than I was.

 

I've looked through their site, and while there is some correlation, it appears the Q1/Q2 2021 stats that were used were not included in the numbers given out (yet were used in their conclusion). Don't get me wrong, I don't think this is a fully fair test, but it's better to have more of the facts (as you'll see below, as I address some of your talking points).

 

20 hours ago, LAwLz said:

For those wondering, yes, their sample size for SSDs is way smaller than for the HDDs.

The sample size in the report was 1,666 "SSDs" and 1,297 HDDs, so no. (As can be seen in their original table, which was even posted in the OP's thread.)

 

20 hours ago, LAwLz said:

They have had 7 SSD failures and 551 HDD failures, and somehow they think that's enough data to extrapolate what the numbers will look like several years into the future. They only had a handful of SSD models, one of which they only had 49 drives of; one of them died, so they put the AFR of that model at 2.82%.

"7 SSD failures" is not true, they experienced 17 failures.  You are looking at the older data.  They also stated their compared the HDD's failures over the same period of time, which totaled to 25 HDD failures.  Ultimately they had 1394 Seagate drives, 40 micron drives and 176 Dell drives (based on where you were pulling your numbers from) plus some more which I couldnt find references to...and even then Seagate made up the "7" drive failures during that time.  [Which again, there are clearly 10 more drive failures happening somewhere, but I don't think that data was released].

 

The "551" number you grabbed is over a period of 7 years, if you did read what they said they specifically said it wasn't a good comparison to do things like that...which is why they later went on to utilize only the first 14.3 months of data they had of the HDD's...which lines up with the SSD numbers.

 

20 hours ago, LAwLz said:

And before anyone asks, yes, they still use consumer-grade parts in their server environment. That SSD they only have 49 of? That's a Seagate BarraCuda SSD which is no longer sold, but I found an old listing for it that seems to indicate it was an $80 drive that was "perfect for laptops and desktop PCs".

Yes, they are consumer drives, but I think that is sort of their point behind an SSD vs HDD debate, as it reflects consumer-level stuff [although I do agree with the caveat that they were shucking drives before, which is like the worst-case scenario for hard drives]. Some of the server-grade stuff costs a fortune (and yes, there are benefits to that), but for some smaller businesses it can make sense to run consumer stuff instead of enterprise stuff, and in this case it gives us a good look at at least some of the brands.

 

20 hours ago, LAwLz said:

For crying out loud, they even list "DELLBOSS VD" as one of the "drives" they use. The problem is that "DELLBOSS VD" is not even an SSD; it's a PCIe-to-M.2 expansion card. So we have no idea which SSDs they even put in that.

Given it's Dell, likely a WD or their own Dell-branded M.2 drive [which in effect would be WD] (i.e. nondescript: only the logo and, if you look hard enough, maybe a model number). An example of why they might just write DELLBOSS VD: https://www.dell.com/en-ca/shop/dell-m2-sata-class-20-2280-solid-state-drive-256gb/apd/aa615517/storage-drives-media?gacd=9683780-3008-5761040-266662033-0&dgc=st&gclid=EAIaIQobChMIjcjy6-2t8wIV3h-tBh2KKgg9EAQYBCABEgIkYfD_BwE&gclsrc=aw.ds&nclid=BjAcUkJSdx_XE7ZD6on1G_ZrSwISboZQHshkQQJ0A6ICfXU8q-i3Lj37zCkrWQru

 

Looking at the picture, there are no real markings (which I've experienced before with Dell hardware), but given it appears to be an M.2-to-PCIe RAID solution, I would suspect that the SSDs aren't exactly bottom-of-the-barrel drives.

 

 

Again, I'm not really denying that there is likely a lot not being said in the article; it's just that it's not as much of a stretch as you are making it out to be (in terms of sample sizes and scaling).


43 minutes ago, valdyrgramr said:

Well, is the point not to put them through hell to test the reliability?

I'm not sure how to explain it, but the problem is that they are not actually testing what consumers will see. Running an SSD through a datacenter workload for 1 year is not the same as, say, running a consumer SSD through a typical workload for 3 years. It's more akin to testing the reliability of a Toyota Prius by driving it through the Sahara desert. The test doesn't tell you anything about how well it will work as a family car in a big city.

 

Why is it not the same? Because in a DC you can have very different workload characteristics. For example, in a standard home PC you will probably have plenty of short bursts of mostly reads, but also some writes. As a result, that's what most consumer SSDs are aimed at. They've got things like an SLC cache, they are programmed to shuffle data around the cells in a particular way, and so on. They are also made to operate in a particular environment, like a case with fairly good airflow; the temperatures inside DC servers could potentially be much higher than inside a home PC. Not to mention that DC SSDs often have significantly higher write endurance. You might think that high write endurance means they are more reliable, but the thing is that at a certain point it doesn't matter how much write endurance the SSD has.

If I'm only going to write 20TB of data to a drive in its lifespan then it doesn't matter if it has 300TB of endurance or 3,000TB of write endurance.
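To make that concrete, here's the arithmetic under an assumed consumer-ish write load of 30 GB/day:

```python
def years_to_exhaust(tbw, gb_per_day=30):
    """Years to burn through a TBW rating at a steady daily write volume."""
    return (tbw * 1000) / gb_per_day / 365

for tbw in (300, 3000):
    print(f"{tbw} TBW at 30 GB/day: ~{years_to_exhaust(tbw):.0f} years")
# 300 TBW -> ~27 years, 3000 TBW -> ~274 years: either way, far beyond
# the drive's useful life, so the extra endurance buys nothing here.
```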

 

 

Don't judge a fish by its ability to climb a tree.

 

 

58 minutes ago, valdyrgramr said:

You can't use averages because each user is unique. 

I think you are overestimating how "unique" people's workloads really are.

I bet 90% of average computer users will fit within "mostly reads; writes are small and bursty; reads are either individual files or sequential".

 

 

1 hour ago, valdyrgramr said:

As for the RMA thing, that's flawed too. It doesn't realistically tell you drive failure rates and is purely assumption-based. Why? That information only tells you how many people went through the RMA process, but that doesn't mean the drive actually died, and it ignores the people who had a dead drive and then just bought new ones. You can send in a drive you think is dead, but that doesn't mean it was dead; it just means you filled out the paperwork and sent it. It could be a bad SATA port or something else.

Yes, but since those things (incorrect RMAs, or people not RMA'ing when they should have) apply equally to all drive brands, it should even itself out.

Even if you can find some potential flaws in the statistics, I still think they are way more representative of what consumers can expect than someone putting a handful of consumer SSDs in a datacenter and then reporting on how quickly they die.


23 minutes ago, wanderingfool2 said:

 

 

Given it's Dell, likely a WD or their own Dell-branded M.2 drive [which in effect would be WD] (i.e. nondescript: only the logo and, if you look hard enough, maybe a model number).

Dell NVMe drives are always Toshiba, SK Hynix, or Samsung PM981 (basically the OEM-only version of the EVO drives). I never paid attention to the SATA SSDs in the cheaper laptops. Suffice it to say that if they were using SATA SSDs for cheapness, and they were QLC, DRAM-less drives, they likely slowed down after 6 months of use.

 

So the reliability metrics are completely different from mechanical drives, which don't slow down over time and don't wear out from writing.

 

Edit: come to think of it, there is a way to get a list of what Dell is using by pulling the list of firmware updates.

source dell.com:

Micron 2200S, 1100

Toshiba KBG20ZMS128G/256G/512G

Toshiba KSG60ZSE256G/KSG60ZSE512G

Toshiba KSG50ZMV256/512G

Toshiba KXG50ZNV256G/512G/1T02

Intel SSDPEMKF256G8/SSDPEMKF512G8/SSDPEMKF010T8

Intel SSDPEKKF256G8/SSDPEKKF512G8/SSDKEKKF010T8

SK Hynix SC311, SC401, BC501, PC601, SC313, PC401

WD SN730, SN520

ADATA SU810NS38 128/256G SATA

LiteOn CA3-8D256/512 CV8

Sandisk A400

 

Of course, that list will be incomplete, since some SSDs never get firmware updates.

 


12 hours ago, Dabombinable said:

Were those HDDs by chance in slots directly above and below each other, and in non-rubber mounts?

The HDDs are on their sides in a vertical position, and as far as I can remember they "died"* in a random order... Most of their time was spent in a Synology DS416; last year I moved them into a NAS I built that has this backplane:
https://icybox.de/product.php?id=333

 

 

*One is properly toast; the remaining 2 got kicked out by ZFS for too many errors (and started piling up bad sectors).


20 minutes ago, wanderingfool2 said:

Do you actually have a link to the full report, with the numbers? If so, I think you were reading a different report than I was.

Here is the report I looked at.

https://www.backblaze.com/blog/backblaze-drive-stats-for-q2-2021/

 

 

 

28 minutes ago, wanderingfool2 said:

I've looked through their site, and while there is some correlation, it appears the Q1/Q2 2021 stats that were used were not included in the numbers given out (yet were used in their conclusion). Don't get me wrong, I don't think this is a fully fair test, but it's better to have more of the facts (as you'll see below, as I address some of your talking points).

 

The sample size in the report was 1,666 "SSDs" and 1,297 HDDs, so no. (As can be seen in their original table, which was even posted in the OP's thread.)

 

"7 SSD failures" is not true, they experienced 17 failures.  You are looking at the older data.  They also stated their compared the HDD's failures over the same period of time, which totaled to 25 HDD failures.  Ultimately they had 1394 Seagate drives, 40 micron drives and 176 Dell drives (based on where you were pulling your numbers from) plus some more which I couldnt find references to...and even then Seagate made up the "7" drive failures during that time.  [Which again, there are clearly 10 more drive failures happening somewhere, but I don't think that data was released].

 

The "551" number you grabbed is over a period of 7 years, if you did read what they said they specifically said it wasn't a good comparison to do things like that...which is why they later went on to utilize only the first 14.3 months of data they had of the HDD's...which lines up with the SSD numbers.

Yep, you're right. I didn't like that they were hiding a bunch of numbers in the SSD vs HDD article, so I went looking for the full report, not realizing they were measuring different things.

I still think their data is full of holes, though, and should not be taken seriously.

 

 

30 minutes ago, wanderingfool2 said:

Yes, they are consumer drives, but I think that is sort of their point behind an SSD vs HDD debate, as it reflects consumer-level stuff [although I do agree with the caveat that they were shucking drives before, which is like the worst-case scenario for hard drives]. Some of the server-grade stuff costs a fortune (and yes, there are benefits to that), but for some smaller businesses it can make sense to run consumer stuff instead of enterprise stuff, and in this case it gives us a good look at at least some of the brands.

Well, like I said, this article is only really relevant to people who build data centers improperly. I'd argue that if you are going to build a DC, then you should be able to afford to build it properly. If you can't afford that, then you probably don't need a DC to begin with, and you're better off using XaaS.

 

32 minutes ago, wanderingfool2 said:

Again, I'm not really denying that there is likely a lot not being said in the article; it's just that it's not as much of a stretch as you are making it out to be (in terms of sample sizes and scaling).

Yeah you're right. I'll edit my post.


36 minutes ago, wanderingfool2 said:

Yes, they are consumer drives, but I think that is sort of their point behind an SSD vs HDD debate, as it reflects consumer-level stuff

What consumer is going to have the same amount of active power-on time, and the same write workload, as Backblaze? The answer is far closer to zero than it is to everyone. Far closer.

 

That's like testing engine reliability for typical sedans by running them at 6,000 RPM all day, every day, and then saying the failure rate is high. I'm not too sure how many people drive around 24/7, and in 1st gear at that.
