exetras

Backblaze HDD stats for 2018 are out.

Recommended Posts

Posted · Original Poster

Link: https://www.backblaze.com/blog/hard-drive-stats-for-2018/

 

Quote

When we compare Hard Drive stats for 2018 to previous years two things jump out. First, the migration to larger drives, and second, the improvement in the overall annual failure rate each year. The chart below compares each of the last three years. The data for each year is inclusive of that year only.

 

Quote

 

Notes and Observations

  • In 2016 the average size of hard drives in use was 4.5 TB. By 2018 the average size had grown to 7.7 TB.
  • The 2018 annualized failure rate of 1.25% was the lowest by far of any year we’ve recorded.
  • None of the 45 Toshiba 5 TB drives (model: MD04ABA500V) has failed since Q2 2016. While the drive count is small, that’s still a pretty good run.
  • The Seagate 10 TB drives (model: ST10000NM0086) continue to impress as their AFR for 2018 was just 0.33%. That’s based on 1,220 drives and nearly 500,000 drive days, making the AFR pretty solid.

 


[Image: Blog-chart-afr-3-years.png]

[Image: blog-chart-2018-hard-drive-stats.png]
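The quoted AFR figures fall out of a simple drive-days calculation; a minimal sketch (the `afr_percent` helper is mine, and the failure count for the ST10000NM0086 is back-calculated from the quoted 0.33% and ~500,000 drive days, not a number Backblaze states):

```python
def afr_percent(failures: int, drive_days: float) -> float:
    """Annualized failure rate: failures per drive-year, as a percentage."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

# Back-calculating from the quote: 0.33% AFR over ~500,000 drive days
# implies only about 4-5 failed ST10000NM0086 drives in the whole year.
implied_failures = 0.33 / 100 * (500_000 / 365)  # ≈ 4.5

print(round(afr_percent(5, 500_000), 3))  # 0.365
```

That tiny implied failure count is why the per-model sample sizes matter so much later in this thread.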

 

 

My opinion: It's nice to see that only three drives were listed with a failure rate higher than 2%, the worst being 3.03%. Also, 2016 was a really bad year for HDDs.


Good, throw them in the bin. Unless Backblaze learns how to properly qualify and compare data, the reports aren't worth the HDD space they occupy.


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.


TMW WD Reds have the highest failure rates.

 

 

 


Oh Backblaze, how accurate art thou
 

 


PSU Tier List | CoC

Gaming Build | FreeNAS Server

Spoiler

i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core

Spoiler

FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


I mean, if anything the data shows that, in general, no big desktop hard drive is really a /bad/ choice.

The problems I've seen at work are usually 2.5" drives for some reason... but they were one Toshiba model that was apparently a known issue. Dell had sent us a giant box of new ones when we started having to replace them. Almost 35% of them failed before the 3-year lease was up, lol.

I've had 3.5" drives fail, but they were all at 40k+ hours, so not really what I'd call terrible.


"There is nothing more difficult than fixing something that isn't all the way broken yet." - Author Unknown

"A redline a day keeps depression at bay" - Author Unknown

Spoiler

Intel Core i7-3960X @ 4.4 GHz - Asus P9X79WS/IPMI - 12GB DDR3-1600 quad-channel - EVGA GTX 1080ti SC - Fractal Design Define R5 - 500GB Crucial MX200 and 2 x Seagate ST2000DM006 (in RAID 0 for games!) - The good old Corsair GS700 - Yamakasi Catleap 2703 27" 1440p and ASUS VS239H-P 1080p 23" - NH-D15 - Logitech G710+ - Mionix Naos 7000 - Sennheiser PC350 w/Topping VX-1

 

Avid Miata autocrosser :D

Just now, williamcll said:


So which brand has the most durable hard drives?

Westerngate, then Seadigital.

 

As has always been the case.



7 hours ago, mr moose said:

Good, throw them in the bin. Unless Backblaze learns how to properly qualify and compare data, the reports aren't worth the HDD space they occupy.

Agreed, I'm tired of all the shit the first big one caused. I still hear shit about their 2014 report, where they were using shucked 3TB Seagate drives, which, for a couple of years after the Thailand floods IIRC, were just bad, with sample sizes that were stupid.

One of Seagate's worst consumer-grade drives in recent years, shucked ones at that, against a much smaller sample size of other drives, most meant for actual data centers.
All of which, by the way, was stated in the report. It was a cost-saving measure: taking cheaper drives with higher failure rates instead of buying more expensive ones was the goal with the consumer-grade shucked Seagate drives.

But do people read? No.

I still hear "don't buy Seagate" with a link to this chart:

[Chart: Hard Drive Failure Rates by Model]

 

I wish Backblaze had never done these reports.

 


muh specs 

Gaming and HTPC (reparations)- ASUS 1080, MSI X99A SLI Plus, 5820k- 4.5GHz @ 1.25v, asetek based 360mm AIO, RM 1000x, 16GB memory, 750D with front USB 2.0 replaced with 3.0  ports, 2 250GB 850 EVOs in Raid 0 (why not, only has games on it), some hard drives

Screens- Acer preditor XB241H (1080p, 144Hz Gsync), LG 1080p ultrawide, (all mounted) directly wired to TV in other room

Stuff- k70 with reds, steel series rival, g13, full desk covering mouse mat

All parts black

Workstation(desk)- 3770k, 970 reference, 16GB of some crucial memory, a motherboard of some kind I don't remember, Micomsoft SC-512N1-L/DVI, CM Storm Trooper (It's got a handle, can you handle that?), 240mm Asetek based AIO, Crucial M550 256GB (upgrade soon), some hard drives, disc drives, and hot swap bays

Screens- 3  ASUS VN248H-P IPS 1080p screens mounted on a stand, some old tv on the wall above it. 

Stuff- Epicgear defiant (solderless swappable switches), g600, moutned mic and other stuff. 

Laptop docking area- 2 1440p korean monitors mounted, one AHVA matte, one samsung PLS gloss (very annoying, yes). Trashy Razer blackwidow chroma...I mean like the J key doesn't click anymore. I got a model M i use on it to, but its time for a new keyboard. Some edgy Utechsmart mouse similar to g600. Hooked to laptop dock for both of my dell precision laptops. (not only docking area)

Shelf- i7-2600 non-k (has vt-d), 380t, some ASUS sandy itx board, intel quad nic. Currently hosts shared files, setting up as pfsense box in VM. Also acts as spare gaming PC with a 580 or whatever someone brings. Hooked into laptop dock area via usb switch

26 minutes ago, Syntaxvgm said:

 I still hear shit about their 2014 report

 

That's the biggest problem with the internet: too many people with no real understanding of something repeat it, flaws and all, as if they know all about it, and once something becomes an accepted narrative on the internet, it takes decades to lose it. Like the idea that Pong was the first video game.

 

 



29 minutes ago, Syntaxvgm said:

I wish backblaze never did these reports. 

They should just start with 2015 and remove all the sketchy disks from that period that are still running.

(Well, if anything, they should post-mortem that period's disks separately ._.)

33 minutes ago, mr moose said:

That's the biggest problem with the internet: too many people with no real understanding of something repeat it, flaws and all, as if they know all about it, and once something becomes an accepted narrative on the internet, it takes decades to lose it. Like the idea that Pong was the first video game.

 

 

 

1 hour ago, Syntaxvgm said:

Agreed, I'm tired of all the shit the first big one caused. I still hear shit about their 2014 report, where they were using shucked 3TB Seagate drives, which, for a couple of years after the Thailand floods IIRC, were just bad, with sample sizes that were stupid.

 

Using shucked drives shouldn't be an issue. And if they are an issue, it should indeed reflect on the company's quality control, and is worth mentioning.

 

Yes, Backblaze needs some lessons in statistics (particularly in actually USING the confidence intervals they calculate to show error bars on their plots), but the data isn't inherently wrong. People just overgeneralize from it. This, with the error propagation plotted, is what people should actually be looking at, but that is too difficult for people to grasp for some reason.

 

[Image: chart with error bars]

 

It's still dramatically more useful than the official 'statistics', where everyone and their brother's hard drive claims 10^6-7 hours MTBF (though IIRC WD doesn't even report their MTBF or internal AFR records anymore), when obviously (the companies have admitted as much) that is a literally made-up number that has no bearing on whatever limited testing the companies actually do on their own failure rates.
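For a sense of scale on those MTBF claims: under the usual constant-failure-rate (exponential) assumption, a spec-sheet MTBF converts to an implied AFR like this (my own arithmetic sketch, not any vendor's published method):

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days

def mtbf_to_afr_percent(mtbf_hours: float) -> float:
    """AFR implied by an MTBF under an exponential failure model:
    P(fail within one year) = 1 - exp(-t / MTBF)."""
    return (1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)) * 100

# A typical spec-sheet claim of 1.2 million hours MTBF implies ~0.73% AFR,
# while fleets like Backblaze's have observed higher rates than that.
print(round(mtbf_to_afr_percent(1.2e6), 2))  # 0.73
```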

2 hours ago, Stefan Payne said:

The only problem I see with this is that they don't mention hours of usage and the way the drives are used...

That should somehow be worked into this.

Actually, they do. The stat they generate is annualized failure rate (as in, # of drive failures per 365 days of operation).


 


LINK-> Kurald Galain:  The Night Eternal 

Top 5820k, 980ti SLI Build in the World*

CPU: i7-5820k // GPU: SLI MSI 980ti Gaming 6G // Cooling: Full Custom WC //  Mobo: ASUS X99 Sabertooth // Ram: 32GB Crucial Ballistic Sport // Boot SSD: Samsung 850 EVO 500GB

Mass SSD: Crucial M500 960GB  // PSU: EVGA Supernova 850G2 // Case: Fractal Design Define S Windowed // OS: Windows 10 // Mouse: Razer Naga Chroma // Keyboard: Corsair k70 Cherry MX Reds

Headset: Senn RS185 // Monitor: ASUS PG348Q // Devices: Galaxy S9+ - XPS 13 (9343 UHD+) - Samsung Note Tab 7.0 - Lenovo Y580

 

LINK-> Ainulindale: Music of the Ainur 

Prosumer DIY FreeNAS

CPU: Xeon E3-1231v3  // Cooling: Noctua L9x65 //  Mobo: AsRock E3C224D2I // Ram: 16GB Kingston ECC DDR3-1333

HDDs: 4x HGST Deskstar NAS 3TB  // PSU: EVGA 650GQ // Case: Fractal Design Node 304 // OS: FreeNAS

 

 

 

11 minutes ago, Curufinwe_wins said:

Actually, they do. The stat they generate is annualized failure rate (as in, # of drive failures per 365 days of operation).

and in days. 


Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between midday and midnight.

NightHawk: i5 6600k @4.1, ASUS Z170m-plus, H105, 16gb corsair vengeance LPX, Strix GTX 970 XFX RX 580 8GB, Corsair RM750X, 500 gb 850 evo, 250gb 750 evo and 5tb Toshiba x300

My compute server (remember to add link) HP DL380G6 2xE5520 24GB ram with 4x146gb 10k drives and 4x300gb 10K drives, running NOTHING can't get anything to work :)

WIP NAS Cisco Security Multiservices Platform server e5420 12gb ram, 1x6 1tb raid 6 for plex + Need funding 16+1 2tb raid 6 for mass storage.

PSU White list:

Spoiler

 

I love to fly 

Spoiler

 

How to get PC parts cheap 

Spoiler

 

 

4 minutes ago, Curufinwe_wins said:

 

 

Using shucked drives shouldn't be an issue. And if they are an issue, it should indeed reflect on the company's quality control, and is worth mentioning.

-snip-

 

I don't think the data was wrong per se, but what they claimed and how they presented it was atrocious. Also, the hodgepodge of drives, the procurement methods they used, and the huge variance in quantities essentially ruined any attempt at clear comparisons.

 

Issues with the original and second reports:

https://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/


QuicK and DirtY. Read the CoC it's like a guide on how not to be moron.  Also I don't have an issue with the VS series.

Just now, GDRRiley said:

and in days. 

? Not sure what you're going for there. But realistically, reporting drive days is the same thing as drive hours when you're talking about drive days in the tens to hundreds of thousands.



Just now, Curufinwe_wins said:

? Not sure what you're going for there. But realistically, reporting drive days is the same thing as drive hours when you're talking about drive days in the tens to hundreds of thousands.

I'm saying that the people above were complaining about not having drive hours; they've changed it to days.



2 minutes ago, mr moose said:

I don't think the data was wrong per se, but what they claimed and how they presented it was atrocious. Also, the hodgepodge of drives, the procurement methods they used, and the huge variance in quantities essentially ruined any attempt at clear comparisons.

 

Issues with the original and second reports:

https://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/

Amusingly, neither of those quoted issues is a valid argument, though perhaps they were more valid at the time.

Spoiler


So what if they are using non-enterprise drives? They never claim at any point that they are enterprise drives, or that they represent enterprise drives. Claiming this data is useless because it is non-enterprise drives is more akin to telling non-enterprise users they don't deserve to know full-time usage failure rates than any sort of intellectual mismanagement.

 

So what if a drive they use is an old design? They list the drives by MODEL NUMBER. So if there is a specific model number that has higher failure rates... that shows up. Nothing claims otherwise.

 

Seems like the guy only looked at the BS publicity pages about it and didn't dig into the actual data whatsoever.

 

 

 

Also, huge variation in quantities has no impact (statistically) on comparability. It does have an impact on the confidence-interval ranges of the individual drives, but when we make the assumption that individual drive failures are independent and pseudo-random, it also follows, through simple statistics (the central limit theorem), that the failure counts of specific models are independent of each other and approximately normally distributed.

 

Of course, when Backblaze makes a chart/table (even if they have others) that fails to include the confidence intervals they themselves calculated... that is less than ideal intellectual behavior.
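The confidence-interval point can be made concrete. With so few failures, treating the failure count as Poisson gives a quick interval around the AFR; this is a rough sketch using the normal approximation (`afr_with_ci` is my own illustrative helper, and Backblaze's published intervals may well be computed differently):

```python
import math

def afr_with_ci(failures: int, drive_days: float, z: float = 1.96):
    """AFR plus a rough 95% interval, treating the failure count as
    Poisson and using the normal approximation lambda +/- z*sqrt(lambda)."""
    drive_years = drive_days / 365.0
    afr = failures / drive_years * 100
    half_width = z * math.sqrt(failures) / drive_years * 100
    return afr, max(0.0, afr - half_width), afr + half_width

# 5 failures over 500,000 drive days vs. 5 failures over 20,000 drive days:
# similar-looking point estimates, wildly different uncertainty.
big = afr_with_ci(5, 500_000)   # ≈ (0.37, 0.05, 0.68)
small = afr_with_ci(5, 20_000)  # ≈ (9.1, 1.1, 17.1)
```

Which is exactly why a 45-drive model and a 30,000-drive model don't belong in the same bare table without error bars.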



7 minutes ago, Curufinwe_wins said:

Amusingly, neither of those quoted issues is a valid argument, though perhaps they were more valid at the time.

-snip-

They may be valid arguments, yet there were serious issues with the previous reports, and it's going to take more than one "technically" accurate report from them before I believe the HDDs were all sourced brand new, know what systems they were used in, whether the workload was the same across all drives, and whether it is even fair to include 45 WDs next to 30K Seagates in a table whose readers mostly don't understand confidence intervals (when they actually mean something).

I am always open to correction if the data exists, but sadly, given BB's history, I have trouble accepting that this is thorough just on the surface of it.



2 minutes ago, mr moose said:

They may be valid arguments, yet there were serious issues with the previous reports, and it's going to take more than one "technically" accurate report from them before I believe the HDDs were all sourced brand new, know what systems they were used in, whether the workload was the same across all drives, and whether it is even fair to include 45 WDs next to 30K Seagates in a table whose readers mostly don't understand confidence intervals (when they actually mean something).

I am always open to correction if the data exists, but sadly, given BB's history, I have trouble accepting that this is thorough just on the surface of it.

I guess I'd always come from the perspective that even with the inherent limitations, having the data is definitively better than not. But mountains of salt are fine as well.




I've had thoughts of downloading their CSV data to convert it into a per-disk gapminder graph, but that'll take a lot of learning on my part to handle the data via code ,_,

 

file under idle speak for now
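For what it's worth, the per-model crunching is only a few lines once the data is local. A sketch assuming Backblaze's daily-snapshot layout (one row per drive per day with `date`, `serial_number`, `model`, and a 0/1 `failure` column; the inline CSV below stands in for the real files):

```python
import io

import pandas as pd

def afr_by_model(df: pd.DataFrame) -> pd.DataFrame:
    """One row per drive per day -> per-model drive days, failures, AFR%."""
    out = df.groupby("model").agg(
        drive_days=("serial_number", "size"),
        failures=("failure", "sum"),
    )
    out["afr_pct"] = out["failures"] / (out["drive_days"] / 365) * 100
    return out.sort_values("afr_pct")

# Tiny in-memory stand-in for reading the real daily snapshot CSVs:
csv = io.StringIO(
    "date,serial_number,model,failure\n"
    "2018-01-01,A1,ST10000NM0086,0\n"
    "2018-01-01,A2,ST10000NM0086,0\n"
    "2018-01-01,B1,MD04ABA500V,0\n"
    "2018-01-02,A1,ST10000NM0086,1\n"
)
print(afr_by_model(pd.read_csv(csv)))
```

The per-disk animation would then just be this grouped per time window instead of over the whole dataset.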

21 hours ago, Curufinwe_wins said:

 

 

Using shucked drives shouldn't be an issue. And if they are an issue, it should indeed reflect on the company's quality control, and is worth mentioning.

-snip-

Actually, the shucked drives are a huge problem, especially since some of the drives they were up against in the statistics were meant for this kind of use. 

[Image: Backblaze storage pod]

And they knew that; they took that risk intentionally. Consumer drives are not meant to run next to so many vibrating drives or take so much hard use. Compared to @mr moose, I don't see as much of a problem with the data or how it's presented; it's just the fact that people don't fucking read or take anything in the proper context: the worst years of Seagate drives, the problem models for those years, shucked consumer drives in a setup like the above (and they weren't like the current situation, with NAS drives being sold in enclosures), and the volumes not matching in any way. IIRC these drives literally had less testing and R&D and were rushed to market.

I had bought a few Seagate drives that I shucked from before the Thailand flood. I also got some Seagate 3TB drives from after it. I have one of the latter left working.

 

A thing a lot of people don't know is the bathtub failure model for hard drives: most defective drives fail early, then failures pick up again after a certain age. This is why, when I get new drives, I write a lot to them in the first week or two: usually I at least fill them, but I prefer to start with one random pass and then zero the drive out if I have time. I have caught at least four failures this way in the past couple of years that I can think of; two were WD 3TB Blue drives. If a drive is defective, it often fails early, and you can use this to make it a painless return to Amazon instead of dealing with a warranty replacement down the line.

I started doing this after I bought a stack of 5TB and 6TB WD Reds that basically all failed on me, which I've been told were actually Green rebadges with different firmware or something. I don't know if that's true, but there were a lot of failure complaints. They were loud and hot, which is not a Green thing, so I don't know. Because they failed past my return window but all failed within the first couple of TB of writes, I started this practice. I sold my warranty replacements of those, lol.
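The burn-in routine described above can be sketched in a few lines. This is my own illustrative version (the `burn_in` helper and its seed are made up for the example; purpose-built tools like `badblocks -wsv` do this job properly): it writes a deterministic pseudo-random pattern and reads it back, i.e., the "one random pass" idea. Pointed at a raw block device, it destroys whatever is on it.

```python
import os
import random

def burn_in(path: str, total_bytes: int, seed: int = 20190101) -> bool:
    """Write a deterministic pseudo-random pattern over `total_bytes`,
    then read it back and verify. A mismatch this early is exactly the
    infant-mortality region on the left of the bathtub curve."""
    chunk = 1 << 20  # 1 MiB blocks

    def pattern():
        rng = random.Random(seed)  # same seed -> same byte stream
        remaining = total_bytes
        while remaining > 0:
            block = rng.randbytes(min(chunk, remaining))
            yield block
            remaining -= len(block)

    with open(path, "wb") as f:
        for block in pattern():
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # push writes through the OS cache

    with open(path, "rb") as f:
        return all(f.read(len(b)) == b for b in pattern())
```

Against a real drive you would open the block device (e.g. /dev/sdX) rather than a file, and want to defeat the page cache (O_DIRECT) so the read-back actually hits the platters.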

Enter the Seagate 3TB drives: they DIDN'T follow the bathtub failure pattern. They were not defective; they were flawed in design. The 3TB models from this time were using fewer platters to save money, IIRC, and they were rushed out. I think the 1.5TB had problems as well, but not the 2TB.

This report was a couple of years after the 2011 floods, but the drives were launched in 2011:

https://www.seagate.com/staticfiles/docs/pdf/datasheet/disc/barracuda-ds1737-1-1111us.pdf

 



When I first saw the Backblaze data, I enjoyed the info on those specific drives, and it really shed light on what was basically a horrible drive rushed out the door. The HGST drives had low failure rates, which backed up what I had heard and experienced. They were launched shortly before the flood and the acquisition by WD, after which HGST was forced to operate independently of WD for a couple of years as well.
 

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/hgst/deskstar-5k-series/data-sheet-deskstar-5k3000.pdf


People took this Seagate drive's failure rate to mean ALL SEAGATE DRIVES, BEFORE AND AFTER. I'm still getting linked to this fucking page. I still see people telling new builders to never pick Seagate and linking to the old study. Even more people just post this picture with "lol Seagate".

Spoiler

[Chart: Hard Drive Failure Rates by Model]


People don't read. These charts are very dangerous in the hands of people who don't read or understand the context. I've even gotten into in-person arguments with others over it.


Interesting note: this study caused a lawsuit:

https://www.extremetech.com/extreme/222267-seagate-faces-lawsuit-over-3tb-hard-drive-failure-rates


Which is more along the lines of what this is: just like those old IBM Deskstar failures mentioned in that article, a drive problem, not a brand problem.
But people don't read like we do. And that's the problem. It's as you said:

Quote

People just overgeneralize from it. This, with the error propagation plotted, is what people should actually be looking at, but that is too difficult for people to grasp for some reason.

And for that reason, I think it has caused more harm than good, and I really wish it didn't exist.
 




Neat


PRISIMHEART 2.0

Case: TT Core V1 || PSU: EVGA Supernova P2 750w || MB: Asrock Fata1ity AB350 Gaming-ITX/ac || CPU: AMD Ryzen R5 1600 || CPU Cooler: Cryorig H7 w/ FD Venturi fan || RAM: G.Skill Flare X 16GB || GPU: Galax GTX 1070 EXOC-SNPR || Storage: Samsung 860 Evo 1TB + Crucial MX500 1TB + SG Firecuda 2TB

 

PERIPHERALS / DISPLAY

Keyboard: Logitech G810 Orion Spectrum || Mouse: Logitech G502 Proteus Spectrum || Monitor: HP Omen 32

 

6 hours ago, Syntaxvgm said:

Actually, the shucked drives are a huge problem, especially since some of the drives they were up against in the statistics were meant for this kind of use. 

Image result for backblaze pod

And they knew that, they took that risk intentionally. Consumer drives are not meant to run next to so many vibrating drives or take so much hard use. Compared to @mr moose, I don't see so much of a problem with the data or how it's presented, it's just the fact that people don't fucking read or take anything in the proper context- the worst years of segate drives, the problem models for those years, shucked consumer drives in a setup like above (and they weren't like the current situation with nas drives being sold in enclosures), and the volumes not matching in any way. IIRC these drives literally had less testing and R&D and were rushed to market. 

I had bought a few drives that I shucked from segate from before the taiwan flood. I also got some segate 3TB drives from after. I have one of the latter left working. 

 

A thing that a lot of people don't know is the bathtub failure model for hard drives- most defective drives will fail early, then more will fail after a certain time. This is why when I get new drives I write a lot to them in the first week or two- usually at least fill them, but I prefer to start out with one random pass then zero it out if I have time. I have caught at least 4 failures this way in a past couple of years that I can think of- 2 were WD 3TB blue drives. If they're defective, they often fail early, and you can do this to make it a painless return to amazon instead of dealing with warranty replacement down the line. I started doing this after I bought a stack of 5 and 6TB WD reds that basically all failed me- which I've been told they were actually green rebages with different firmware or something. I don't know if that's true, but there were a lot of failure complaints. They were loud and hot, which is not a green thing so idk. Because they failed past my return window but all failed within the first couple of TB of writes, I started this practice. I sold my warranty replacements of those lol.

Enter the segate 3TB drives- They DIDN'T follow the bathtub failure pattern. They were not defective, they were flawed in design. The 3TB models from this time were using less platters to save money iirc, and they were rushed out. I think the 1.5 had problems as well, but not the 2TB 


This report came a couple of years after the 2011 floods, but the drives were launched in 2011:

https://www.seagate.com/staticfiles/docs/pdf/datasheet/disc/barracuda-ds1737-1-1111us.pdf

 



When I first saw the Backblaze data, I enjoyed the info on those specific drives; it really shed light on what was basically a horrible drive rushed out the door. The HGST drives had low failure rates, which backed up what I had heard and experienced. They launched shortly before the flood and the acquisition by WD, after which HGST was forced to operate independently from WD for a couple of years as well.
 

https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/hgst/deskstar-5k-series/data-sheet-deskstar-5k3000.pdf


People took this one Seagate drive's failure rate to mean ALL SEAGATE DRIVES, BEFORE AND AFTER. I still get linked to this fucking page. I still see people telling new builders to never pick Seagate and linking to the old study. Even more people just post this picture with "lol Seagate":

[Chart: Hard Drive Failure Rates by Model]


People don't read. These charts are very dangerous in the hands of people who don't read or understand the context. I've even gotten into arguments about it in person.


Interesting note: this study caused a lawsuit:

https://www.extremetech.com/extreme/222267-seagate-faces-lawsuit-over-3tb-hard-drive-failure-rates


Which is more along the lines of what this really is: a drive problem, not a brand problem, just like those old IBM Deskstar failures mentioned in that article.
But people don't read like we do. And that's the problem. It's as you said:

And for that reason I think it has caused more harm than good, and I really wish it didn't exist.
 

I have a shucked Seagate 1.5TB drive; I've used it for about 6 years now as a backup drive, and as the main OS drive for my son's computer for about 3 years. It is absolutely slower than any other Seagate drive (even some of the older ones) I've had.



7 hours ago, mr moose said:

I have a shucked Seagate 1.5TB drive; I've used it for about 6 years now as a backup drive, and as the main OS drive for my son's computer for about 3 years. It is absolutely slower than any other Seagate drive (even some of the older ones) I've had.

I do not know if this is true in that case, but a lot of the Seagate failures I experienced ended with the drive's read and access times becoming incredibly slow while the data was still accessible.
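That "slow but still readable" failure mode is something you can check for. A rough sketch of the idea (my own hypothetical check, using a scratch file so it runs anywhere): time sequential chunk reads and flag any that take far longer than the median, since a degrading drive retries internally and those retries show up as huge latency spikes on specific regions.

```python
import time

# Stand-in path for the demo; point this at a large file on the suspect
# drive (and drop OS caches first) for a real check.
PATH = "scratch.img"
CHUNK = 1024 * 1024

# Create a small demo file so the sketch is self-contained.
with open(PATH, "wb") as f:
    f.write(b"\x00" * (8 * CHUNK))

# Time each sequential chunk read.
times = []
with open(PATH, "rb") as f:
    while True:
        t0 = time.perf_counter()
        chunk = f.read(CHUNK)
        dt = time.perf_counter() - t0
        if not chunk:
            break
        times.append(dt)

# Flag reads that are wildly slower than typical; the 10x threshold is
# an arbitrary illustration, not a tuned value.
median = sorted(times)[len(times) // 2]
slow = [i for i, t in enumerate(times) if t > 10 * median]
print(f"{len(times)} chunks read, {len(slow)} suspiciously slow")
```

On a healthy drive the latencies stay in a tight band; on one failing this way you see scattered chunks taking orders of magnitude longer even though every read eventually succeeds.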



4 hours ago, Syntaxvgm said:

I do not know if this is true in that case, but a lot of the Seagate failures I experienced ended with the drive's read and access times becoming incredibly slow while the data was still accessible.

It might be. Regardless, though, it got so bad I bought him a new SSD anyway.



