
Backblaze HDD stats for 2018 are out.

exetras

Link: https://www.backblaze.com/blog/hard-drive-stats-for-2018/

 

Quote

When we compare Hard Drive stats for 2018 to previous years two things jump out. First, the migration to larger drives, and second, the improvement in the overall annual failure rate each year. The chart below compares each of the last three years. The data for each year is inclusive of that year only.

 

Quote

 

Notes and Observations

  • In 2016 the average size of hard drives in use was 4.5 TB. By 2018 the average size had grown to 7.7 TB.
  • The 2018 annualized failure rate of 1.25% was the lowest by far of any year we’ve recorded.
  • None of the 45 Toshiba 5 TB drives (model: MD04ABA500V) has failed since Q2 2016. While the drive count is small, that’s still a pretty good run.
  • The Seagate 10 TB drives (model: ST10000NM0086) continue to impress as their AFR for 2018 was just 0.33%. That’s based on 1,220 drives and nearly 500,000 drive days, making the AFR pretty solid.

 


[Chart: annualized failure rates by year, 2016-2018]

[Chart: 2018 hard drive failure stats by model]

 

 

My opinion: It's nice to see that only three drives were listed with a failure rate higher than 2%, the worst being 3.03%. Also, 2016 was a really bad year for HDDs.


Good, throw them in the bin. Unless Backblaze learns how to properly qualify data and compare it, they aren't worth the HDD space they occupy.


TMW WD Reds have the highest failure rates.

 

 

 


Oh Backblaze, how accurate art thou
 

 


The only problem I see with this is that they don't mention hours of usage or the way the drives are used...

That should somehow be worked into this.

"Hell is full of good meanings, but Heaven is full of good works"

Link to comment
Share on other sites

Link to post
Share on other sites

I mean, if anything, the data shows that in general no big desktop hard drive is really a /bad/ choice.

The problems I've seen at work are usually 2.5" drives for some reason... but those were one Toshiba model that was apparently a known issue. Dell had sent us a giant box of new ones when we started having to replace them; almost 35% of them failed before the 3-year lease was up, lol.

I've had 3.5" drives fail, but they were all at 40k+ hours, so not really what I'd call terrible.

"If a Lobster is a fish because it moves by jumping, then a kangaroo is a bird" - Admiral Paulo de Castro Moreira da Silva

"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.6 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

Link to comment
Share on other sites

Link to post
Share on other sites


So which brand has the most durable hard drives?


Just now, williamcll said:


So which brand has the most durable hard drives?

Westerngate, then Seadigital.

 

As has always been the case.


7 hours ago, mr moose said:

Good, throw them in the bin. Unless Backblaze learns how to properly qualify data and compare it, they aren't worth the HDD space they occupy.

Agreed, I'm tired of all the shit the first big one caused. I still hear shit about their 2014 report, where they were using shucked 3TB Seagate drives,

[Image: chart from Backblaze's 2014 report]

which for a couple of years after the Thailand floods, IIRC, were just bad, with sample sizes that were stupid.

One of Seagate's worst consumer-grade drives in recent years, shucked ones at that, against a much smaller sample size of other drives, most of them meant for actual data centers.
All of which, btw, was stated in the report. It was a cost-saving measure: taking the cheaper drives with higher failure rates instead of buying more expensive ones was the whole point of the consumer-grade shucked Seagate drives.

But do people read? No.

I still hear "don't buy Seagate" with a link to this chart:

[Chart: Hard Drive Failure Rates by Model]

I wish Backblaze never did these reports.

 


26 minutes ago, Syntaxvgm said:

 I still hear shit about their 2014 report

 

That's the biggest problem with the internet: too many people with no real understanding of something repeat it, flaws and all, as if they know all about it. Once something becomes an accepted narrative on the internet, it takes decades to lose it. Like the idea that Pong was the first video game.

 

 


33 minutes ago, mr moose said:

That's the biggest problem with the internet: too many people with no real understanding of something repeat it, flaws and all, as if they know all about it. Once something becomes an accepted narrative on the internet, it takes decades to lose it. Like the idea that Pong was the first video game.

 

 

 

1 hour ago, Syntaxvgm said:

Agreed, I'm tired of all the shit the first big one caused. I still hear shit about their 2014 report, where they were using shucked 3TB Seagate drives,

[Image: chart from Backblaze's 2014 report]

which for a couple of years after the Thailand floods, IIRC, were just bad, with sample sizes that were stupid.

 

Using shucked drives shouldn't be an issue. And if they are an issue, it should indeed reflect on the company's quality control, and is worth mentioning.

 

Yes, Backblaze needs some lessons in statistics (particularly in actually USING the confidence intervals they calculate to show error bars on their plots), but the data isn't inherently wrong. People just overgeneralize from the data. This is what people should actually be looking at, with the plotted error propagation, but that is too difficult for people to grasp for some reason.

 

[Image: failure rates with confidence intervals]

 

It's still dramatically more useful than the official 'statistics', where everyone and their brother's hard drive claims an MTBF of 10^6-10^7 hours (though IIRC WD doesn't even report MTBF or internal AFR figures anymore), when obviously (the companies have admitted as much) that is a literally made-up number that has no bearing on whatever limited testing the company actually does on its own failure rates.
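For a rough sense of why those datasheet MTBF figures look so optimistic, here's a back-of-the-envelope sketch (my own aside, not anything from the report or the vendors): assuming a constant failure rate, a quoted MTBF translates into an implied AFR like this.

import math

def implied_afr(mtbf_hours: float) -> float:
    # Constant (exponential) failure rate: AFR = 1 - exp(-hours_per_year / MTBF)
    hours_per_year = 24 * 365
    return 1 - math.exp(-hours_per_year / mtbf_hours)

# A 1,000,000-hour MTBF implies roughly a 0.87% AFR, and 2,500,000 hours about 0.35%,
# both lower than the fleet-wide rates Backblaze actually observes.
print(f"{implied_afr(1_000_000):.2%}")
print(f"{implied_afr(2_500_000):.2%}")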

2 hours ago, Stefan Payne said:

The only problem I see with this is that they don't mention hours of usage or the way the drives are used...

That should somehow be worked into this.

Actually, they do. The figure they report is the annualized failure rate (i.e., failures per 365 drive-days of operation).

[Image: Backblaze table of drive days, failures, and annualized failure rate]
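For anyone unsure how that number is produced, here is a minimal sketch of the calculation, assuming Backblaze's usual definition of AFR as failures per accumulated drive-year; the failure count below is made up purely for illustration.

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    # AFR = failures divided by accumulated drive-years of operation
    return failures / (drive_days / 365.0)

# Illustrative numbers in the ballpark of the ST10000NM0086 figures quoted above:
# ~500,000 drive-days and a handful of failures.
print(f"{annualized_failure_rate(4, 500_000):.2%}")  # ~0.29% with these made-up numbers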

 


11 minutes ago, Curufinwe_wins said:

Actually, they do. The figure they report is the annualized failure rate (i.e., failures per 365 drive-days of operation).

and in days. 


4 minutes ago, Curufinwe_wins said:

 

 

-snip-

 

I don't think the data was wrong per se, but what they claimed and how they presented it was atrocious. The hodgepodge of drives, the procurement methods they used, and the huge variance in quantities essentially ruined any attempt at clear comparisons.

 

Issues with the original and second reports:

https://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/


Just now, GDRRiley said:

and in days. 

? Not sure what you are going for there. But realistically, throwing around drive days is the same thing as drive hours when you are talking about drive days in the tens to hundreds of thousands.


Just now, Curufinwe_wins said:

? Not sure what you are going for there. But realistically, throwing around drive days is the same thing as drive hours when you are talking about drive days in the tens to hundreds of thousands.

I'm saying that the people above were complaining about not having drive hours, and they've changed it to drive days.


2 minutes ago, mr moose said:

I don't think the data was wrong per se, but what they claimed and how they presented it was atrocious. The hodgepodge of drives, the procurement methods they used, and the huge variance in quantities essentially ruined any attempt at clear comparisons.

 

Issues with the original and second reports:

https://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/

Amusingly, neither of those quoted issues is a valid argument, though perhaps they were more valid at the time.

Spoiler


So what if they are using non-enterprise drives? They never claim at any point that they are enterprise drives, or that they represent enterprise drives. Claiming this data is useless because it comes from non-enterprise drives is more akin to telling non-enterprise users they don't deserve to know full-time-usage failure rates than it is to pointing out any sort of intellectual mismanagement.

 

So what if a drive they use is an old design? They list the drives by MODEL NUMBER, so if there is a specific model number that has higher failure rates... that shows up. Nothing claims otherwise.

 

Seems like the guy only looked at the BS publicity pages about it and didn't dig into the actual data whatsoever.

 

 

 

Also, a huge variation in quantities has no impact (statistically) on comparability. It does have an impact on the confidence interval ranges of the individual models, but when we make the assumption that individual drive failures are independent and pseudo-random, it also follows, through simple statistics (the central limit theorem), that the failure counts of specific models are independent of each other and approximately normally distributed.

 

Of course, when Backblaze makes a chart/table (even if they have others) that fails to include the confidence intervals they themselves calculated... that is less than ideal intellectual behavior.
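To make the sample-size point concrete, here is a rough sketch (mine, not Backblaze's method) that treats each model's failure count as Poisson over its drive-days and puts a normal-approximation ~95% interval around the AFR; a fleet of a few dozen drives gets a far wider interval than a fleet of tens of thousands, even at the same underlying failure rate.

import math

def afr_with_ci(failures: int, drive_days: float, z: float = 1.96):
    # Poisson failure count over the observed drive-days, normal approximation for the interval.
    drive_years = drive_days / 365.0
    afr = failures / drive_years
    half_width = z * math.sqrt(failures) / drive_years
    return afr, max(afr - half_width, 0.0), afr + half_width

# A small population vs. a large one, both near a "true" 2% AFR:
for n_drives, fails in [(45, 1), (30_000, 600)]:
    afr, lo, hi = afr_with_ci(fails, n_drives * 365)
    print(f"{n_drives:>6} drives: AFR {afr:.2%}, ~95% CI [{lo:.2%}, {hi:.2%}]")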


7 minutes ago, Curufinwe_wins said:

-snip-

Valid arguments, perhaps, but those were serious issues with the previous reports, and it's going to take more than one "technically" accurate report from them before I believe the HDDs were all sourced brand new, know what systems they were used in, whether the workload was the same across all drives, and whether it is even fair to include 45 WDs next to 30,000 Seagates in a table where most readers don't understand confidence intervals (when they actually mean something).

 

I am always open to correction if the data exists, but sadly, given Backblaze's history, I have trouble accepting on the face of it that this is thorough.


2 minutes ago, mr moose said:

-snip-

I guess I'd always come from the perspective that even with the inherent limitations, having the data is definitely better than not having it. But mountains of salt are fine as well.


21 hours ago, Curufinwe_wins said:

 

 

Using shucked drives shouldn't be an issue. And if they are an issue, it should indeed reflect on the company's quality control, and is worth mentioning.

 

-snip-

Actually, the shucked drives are a huge problem, especially since some of the drives they were up against in the statistics were meant for this kind of use. 

[Image: Backblaze storage pod]

And they knew that; they took that risk intentionally. Consumer drives are not meant to run next to so many vibrating drives or take so much hard use. Compared to @mr moose, I don't see so much of a problem with the data or how it's presented; it's just the fact that people don't fucking read or take anything in the proper context: the worst years of Seagate drives, the problem models from those years, shucked consumer drives in a setup like the one above (and these weren't like the current situation, with NAS drives being sold in enclosures), and the volumes not matching in any way. IIRC these drives literally had less testing and R&D and were rushed to market.

I had bought a few Seagate drives that I shucked from before the Thailand flood. I also got some Seagate 3TB drives from after. I have one of the latter left working.

 

A thing that a lot of people don't know about is the bathtub failure model for hard drives: most defective drives will fail early, then more will fail again once they reach a certain age. This is why, when I get new drives, I write a lot to them in the first week or two; usually I at least fill them, but I prefer to start with one random pass and then zero them out if I have time. I have caught at least 4 failures this way in the past couple of years that I can think of; 2 were WD 3TB Blue drives. If they're defective, they often fail early, and you can do this to make it a painless return to Amazon instead of dealing with a warranty replacement down the line. I started doing this after I bought a stack of 5 and 6TB WD Reds that basically all failed on me, which I've been told were actually Green rebadges with different firmware or something. I don't know if that's true, but there were a lot of failure complaints. They were loud and hot, which is not a Green thing, so I don't know. Because they failed past my return window but all failed within the first couple of TB of writes, I started this practice. I sold my warranty replacements of those, lol.
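A minimal sketch of that kind of burn-in pass, assuming you point it at a scratch file on the freshly formatted drive (the path below is a placeholder, and in practice you would drop the page cache or remount between the write and the read so the verify actually hits the platters rather than RAM):

import hashlib
import os

def burn_in(path: str, total_bytes: int, chunk_bytes: int = 64 * 1024 * 1024) -> bool:
    # Fill the target with pseudo-random data, flush it to disk, then read it back
    # and compare hashes. The point is just to put real writes on a new drive early,
    # so infant-mortality failures show up inside the retailer's return window.
    write_hash, read_hash = hashlib.sha256(), hashlib.sha256()

    with open(path, "wb") as f:
        written = 0
        while written < total_bytes:
            block = os.urandom(min(chunk_bytes, total_bytes - written))
            write_hash.update(block)
            f.write(block)
            written += len(block)
        f.flush()
        os.fsync(f.fileno())

    with open(path, "rb") as f:
        while block := f.read(chunk_bytes):
            read_hash.update(block)

    return write_hash.digest() == read_hash.digest()

# e.g. burn_in("/mnt/new_drive/burnin.bin", 100 * 1024**3)  # one ~100 GB pass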

Enter the Seagate 3TB drives: they DIDN'T follow the bathtub failure pattern. They were not defective; they were flawed in design. The 3TB models from this era used fewer platters to save money, IIRC, and they were rushed out. I think the 1.5TB had problems as well, but not the 2TB.


This report was a couple of years after the 2011 floods, but the drives were launched in 2011 

https://www.seagate.com/staticfiles/docs/pdf/datasheet/disc/barracuda-ds1737-1-1111us.pdf

 



When I first saw the Backblaze data, I enjoyed the info on those specific drives, and it really shed light on what was basically a horrible drive rushed out the door. The HGST drives had low failure rates, which backed up what I had heard and experienced. They were launched shortly before the flood and the acquisition by WD, after which HGST was forced to operate independently from WD for a couple of years as well.
 

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/hgst/deskstar-5k-series/data-sheet-deskstar-5k3000.pdf


People took this Seagate drive's failure rate to mean ALL SEAGATE DRIVES, BEFORE AND AFTER. I still get linked to this fucking page. I still see people telling new builders to never pick Seagate and linking to the old study. Even more people just post this picture with "lol seagate".

Spoiler

[Chart: Hard Drive Failure Rates by Model]


People don't read. These charts are very dangerous in the hands of people who don't read or understand the context. I've even gotten into arguments about it in person.


Interesting note: this study caused a lawsuit.

https://www.extremetech.com/extreme/222267-seagate-faces-lawsuit-over-3tb-hard-drive-failure-rates


Which is more along the lines of what this is: just like those old IBM Deskstar failures mentioned in that article, a drive problem and not a brand problem.
But people don't read like we do. And that's the problem. It's as you said:

Quote

People just overgeneralize from the data. This is what people should actually be looking at, with the plotted error propagation, but that is too difficult for people to grasp for some reason.

And for that reason I think it has caused more harm than good, and I really wish it didn't exist.
 


6 hours ago, Syntaxvgm said:

-snip-
 

I have a shucked Seagate 1.5TB drive; I've used it for about 6 years now as a backup drive, and as the main OS drive in my son's computer for about 3 years. It is absolutely slower than any other Seagate drive (even some of the older ones) I've had.


7 hours ago, mr moose said:

I have a shucked Seagate 1.5TB drive; I've used it for about 6 years now as a backup drive, and as the main OS drive in my son's computer for about 3 years. It is absolutely slower than any other Seagate drive (even some of the older ones) I've had.

I do not know if this is true in that case, but a lot of the Seagate failures I experienced ended up with the drive's read and access times becoming incredibly slow while the data was still accessible.


4 hours ago, Syntaxvgm said:

I do not know if this is true in that case, but a lot of the Seagate failures I experienced ended up with the drive's read and access times becoming incredibly slow while the data was still accessible.

It might be. Regardless, it got so bad I bought him a new SSD anyway.


19 hours ago, Syntaxvgm said:

A thing that a lot of people don't know about is the bathtub failure model for hard drives: most defective drives will fail early, then more will fail again once they reach a certain age.

Yep, if an HDD doesn't fail in the first year (usually in the first few months) it's most likely to work for 5+ years.


3 hours ago, leadeater said:

Yep, if an HDD doesn't fail in the first year (usually in the first few months) it's most likely to work for 5+ years.

Which means if people have home backups (like a NAS, etc.) they should note when they purchased a drive and aim to replace it in about 4-5 years.


In my own experience, solid power delivery and vibration isolation are more important than the model of drive. Chaining 16 drives off one Molex connector (been there) => drives dying every day from voltage swings.

 

That said, Hitachi was really good at surviving shitty conditions. The Deskstar, I want to say, was probably on par with the WD Red Pro. I eventually moved over to Seagate Enterprise He drives and gave them a dedicated power supply, so I've not had any issues in over a year now.

