Backblaze - HDD reliability stats for Q2 2015

zMeul

replacing the drive then rebuilding the data is going to lead to a lot of downtime 

which they make up for by having tons and tons of extra drives, which eliminate downtime as everything has multiple redundancies. 
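
To put rough numbers on that: here's a minimal Python sketch of why replication turns a dead drive into a non-event, assuming independent failures and a fixed rebuild window (the failure rate, rebuild time, and replica counts below are illustrative assumptions, not Backblaze's actual architecture):

```python
# Illustrative sketch: why spare drives + redundancy hide rebuild downtime.
# All numbers are assumptions, not Backblaze's actual setup.

def p_data_unavailable(afr: float, rebuild_days: float, replicas: int) -> float:
    """Probability that every copy of a file is down at once, assuming
    independent failures and a fixed rebuild window."""
    p_fail_in_window = afr * (rebuild_days / 365.0)  # one drive dies in the window
    return p_fail_in_window ** replicas              # ...and so does every other copy

# 5% annualized failure rate (a bad, Seagate-like number), 2-day rebuild:
for copies in (1, 2, 3):
    print(f"{copies} copies: {p_data_unavailable(0.05, 2.0, copies):.2e}")
```

Even with a pessimistic per-drive failure rate, the odds of losing every copy inside one rebuild window collapse fast as copies are added.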

45 WD drives vs 17,895 Seagate ones?

 

Jeez

they don't have many WD drives. 

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


45 WD drives vs 17,895 Seagate ones?

 

Jeez

 

Hang on a second... 1.5% of their WD drives have failed... that means 0.675 drives failed?

What...? How can you have 0.6 of a drive fail?

 

I want to make a joke about the two fleets probably costing the same in total.


they don't have many WD drives. 

Read the updated post.

~1.5% failure rate with 45 WD drives.

That means 0.675 drives failed.

How did they count that??
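
For what it's worth, the fractional figure isn't a miscount: Backblaze computes annualized failure rates from drive-days, so applying a rate back to a small fleet naturally produces a fractional expected count. A rough Python sketch of the arithmetic (the 45-drive and ~1.5% figures are from this thread; treating the observation period as one quarter is my assumption):

```python
# Sketch: annualized failure rates come from drive-days, so a small fleet
# can imply a "fractional" number of failed drives.

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR in % = failures per drive-year of operation."""
    return 100.0 * failures / (drive_days / 365.0)

quarter = 45 * 91                           # 45 drives observed for ~91 days
print(annualized_failure_rate(1, quarter))  # one failure -> ~8.9% AFR

# Going the other way, 1.5% AFR applied to 45 drive-years:
print(0.015 * 45)  # 0.675 "drives" -- an expected value, not a body count
```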

i5 4670k @ 4.2GHz (Coolermaster Hyper 212 Evo); ASrock Z87 EXTREME4; 8GB Kingston HyperX Beast DDR3 RAM @ 2133MHz; Asus DirectCU GTX 560; Super Flower Golden King 550 Platinum PSU;1TB Seagate Barracuda;Corsair 200r case. 


-nvm- 

PSU Tier List | CoC

Gaming Build | FreeNAS Server


i5-4690k || Seidon 240m || GTX780 ACX || MSI Z97s SLI Plus || 8GB 2400mhz || 250GB 840 Evo || 1TB WD Blue || H440 (Black/Blue) || Windows 10 Pro || Dell P2414H & BenQ XL2411Z || Ducky Shine Mini || Logitech G502 Proteus Core


FreeNAS 9.3 - Stable || Xeon E3 1230v2 || Supermicro X9SCM-F || 32GB Crucial ECC DDR3 || 3x4TB WD Red (JBOD) || SYBA SI-PEX40064 sata controller || Corsair CX500m || NZXT Source 210.


I wonder why WD's numbers are worse than HGST's, even though HGST is a "WD company."

 

They are still different drives; even though HGST is owned by WD, Hitachi still designs them.

5800X3D - RTX 4070 - 2K @ 165Hz

 


Haven't we already established that this is a bunch of BS?

 

Yes, repeatedly.

These idiots pulled 2.5" external drives from their enclosures and ran them 24/7 in a server environment at one point, and published the "data" as if it wasn't worthless.

 

THEY ARE MUPPETS

 

Here's some real, verified data: (Failure rates within warranty periods through French online retailers in 2014)

 

 

- Seagate 0.69% (vs 0.86%)
- Western Digital 0.93% (vs 1.13%)
- HGST 1.01% (vs 1.08%)
- Toshiba 1.29% (vs 1.02%)

http://linustechtips.com/main/topic/255466-huge-list-of-failure-rates-for-all-pc-components/

 

Edit: I should point out that Seagate's 3TB drives have always been awful; I am not refuting that fact.

In case the moderators do not ban me as requested, this is a notice that I have left and am not coming back.


I am starting to think Backblaze is owned or paid by WD. :o  :unsure:  :ph34r:  

 

Seriously, I am surprised they keep doing this every year; they keep posting the same results with the same (almost universally) rejected test conditions, and keep presenting the flawed results like they somehow mean something.

 

They are literally the laughing stock of everyone who understands the basic principles of scientific inquiry.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


The fact that you're taking this seriously is painful

Specs: 4790k | Asus Z-97 Pro Wifi | MX100 512GB SSD | NZXT H440 Plastidipped Black | Dark Rock 3 CPU Cooler | MSI 290x Lightning | EVGA 850 G2 | 3x Noctua Industrial NF-F12's

Bought a powermac G5, expect a mod log sometime in 2015

Corsair is overrated, and Anime is ruined by the people who watch it


The fact that you're taking this seriously is painful

 

Whatever feeds my favorite brands is good by me.

 

/s

 

It is just an interesting note, no more than that. 

CPU: i9-13900k MOBO: Asus Strix Z790-E RAM: 64GB GSkill  CPU Cooler: Corsair H170i

GPU: Asus Strix RTX-4090 Case: Fractal Torrent PSU: Corsair HX-1000i Storage: 2TB Samsung 990 Pro

 


Whatever feeds my favorite brands is good by me.

 

/s

 

It is just an interesting note, no more than that. 

I don't know what the hell they do to their Seagate drives that makes them fail at more than 30x the normal rate, but it's incredibly misleading and I wish they would stop releasing this stuff.

Specs: 4790k | Asus Z-97 Pro Wifi | MX100 512GB SSD | NZXT H440 Plastidipped Black | Dark Rock 3 CPU Cooler | MSI 290x Lightning | EVGA 850 G2 | 3x Noctua Industrial NF-F12's

Bought a powermac G5, expect a mod log sometime in 2015

Corsair is overrated, and Anime is ruined by the people who watch it


Yes, repeatedly.

These idiots pulled 2.5" external drives from their enclosures and ran them 24/7 in a server environment at one point, and published the "data" as if it wasn't worthless.

 

THEY ARE MUPPETS

 

Here's some real, verified data: (Failure rates within warranty periods through French online retailers in 2014)

 

http://linustechtips.com/main/topic/255466-huge-list-of-failure-rates-for-all-pc-components/

 

Edit: I should point out that Seagate's 3TB drives have always been awful; I am not refuting that fact.

 

Funny how people keep coming up with the 2.5" nonsense.  A few months ago I checked every model number of the Seagate drives they released failure stats for and didn't encounter any 2.5" ones.

 

 

 

As for that thread you linked to, that's a big joke.  I honestly can't believe that anyone who is smart enough to read takes that seriously.

First off, it only lists components that failed within a year, while Seagate drives tend to have a linear failure pattern: the older they get, the more they fail. 

Secondly, it doesn't list actual numbers but mentions that 100 sold items is enough for them to make stats.  100 is not enough to get reliable numbers.  Add a zero and then we'll talk.

 

I'm looking at Backblaze's numbers and I see thousands of Seagate drives failing much more than thousands of HGST drives in the same conditions and environment.  So don't even mention that they are consumer drives in an enterprise environment, because it's the same for the other brands and models.  It's a level playing field really.

 

Google released a PDF with data on their drives, and despite not naming names, they did state that

Failure rates are known to be highly correlated with drive models, manufacturers and vintages. Our results do not contradict this fact.

... which means certain drives failed much more than others.  There's just no way that they would have had completely different reliability numbers. 

 

 

And just to get back to the "within one year" thing, the last time this topic was discussed I double-checked on Newegg and Seagate only gives one year of warranty whereas the others give 3 or more years.  If that's not an indication of how crap the drives are, I don't know what is.
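
Google's model/vintage point is easy to demonstrate mechanically: aggregate failures per model rather than per brand. A minimal sketch follows; the model strings are real Seagate/HGST part numbers of that era, but every drive-day and failure count below is invented for illustration:

```python
# Hypothetical sketch: failure rates track models/vintages, not brands,
# so compute AFR per model. All counts are invented for illustration.

fleet = [
    # (model, drive_days, failures)
    ("ST3000DM001",      900_000, 280),  # a notoriously bad 3TB Seagate
    ("ST4000DM000",    2_400_000, 180),  # a 4TB drive from the same brand
    ("HDS5C3030ALA630", 1_100_000, 20),  # a 3TB Hitachi/HGST
]

for model, drive_days, failures in fleet:
    afr = 100.0 * failures / (drive_days / 365.0)
    print(f"{model:<16} {afr:5.2f}% AFR")
```

Same brand, wildly different per-model rates, which is why per-brand averages (Backblaze's headline tables and retailer stats alike) can mislead.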


Funny how people keep coming up with the 2.5" nonsense.  A few months ago I checked every model number of the Seagate drives they released failure stats for and didn't encounter any 2.5" ones.

 

 

 

As for that thread you linked to, that's a big joke.  I honestly can't believe that anyone who is smart enough to read takes that seriously.

First off, it only lists components that failed within a year, while Seagate drives tend to have a linear failure pattern: the older they get, the more they fail. 

Secondly, it doesn't list actual numbers but mentions that 100 sold items is enough for them to make stats.  100 is not enough to get reliable numbers.  Add a zero and then we'll talk.

 

I'm looking at Backblaze's numbers and I see thousands of Seagate drives failing much more than thousands of HGST drives in the same conditions and environment.  So don't even mention that they are consumer drives in an enterprise environment, because it's the same for the other brands and models.  It's a level playing field really.

 

Google released a PDF with data on their drives, and despite not naming names, they did state that

... which means certain drives failed much more than others.  There's just no way that they would have had completely different reliability numbers. 

 

 

And just to get back to the "within one year" thing, the last time this topic was discussed I double-checked on Newegg and Seagate only gives one year of warranty whereas the others give 3 or more years.  If that's not an indication of how crap the drives are, I don't know what is.

People aren't arguing that the test puts Seagate drives under worse conditions than the competition. What people are arguing is that those conditions will almost never be reached in a consumer scenario, and thus the results don't provide even remotely useful data for consumer-usage scenarios.

 

This is basic scientific inquiry.

My Build:


CPU: i7 4770k GPU: GTX 780 Direct CUII Motherboard: Asus Maximus VI Hero SSD: 840 EVO 250GB HDD: 2xSeagate 2 TB PSU: EVGA Supernova G2 650W


Haven't we already established that this is a bunch of BS?

 

Yes, Backblaze's stats are horseshit and I have no idea why people still believe them.

 

IIRC, a few years ago they were putting consumer drives under 24/7 enterprise workloads and even taking drives out of enclosures (the ones where you buy a hard drive already in an external enclosure) and putting them into server racks and whatnot. Basically, everything was mixed and matched, there was zero consistency, and it was not scientific or accurate whatsoever.

CPU: i7 4790K  RAM: 32 GB 2400 MHz  Motherboard: Asus Z-97 Pro  GPU: GTX 770  SSD: 256 GB Samsung 850 Pro  OS: Windows 8.1 64-bit


I wonder why WD's numbers are worse than HGST's, even though HGST is a "WD company."

Because HGST drives are enterprise-class, designed for 24/7 use in large arrays. There is a price premium on these drives, and naturally their failure rate is lower.


What about Samsung/Spinpoint?

 

Samsung left the HDD market in 2011, selling its hard drive business to Seagate.


These are quite funny to look at...


I wonder why WD's numbers are worse than HGST's, even though HGST is a "WD company."

Before they were bought out, HGST drives were made for NAS systems and heavy usage, beyond what Western Digital is aiming at, which is mostly consumer use.

Over 11k HGST drives, over 17k Seagate drives, and Toshiba and WD both under 1,000 drives; how the hell are they even allowed to put these numbers out against each other?

I personally still call BS on this site because of their testing methods and drive counts. If I'm not mistaken, they use server/professional-grade environments for consumer drives and the other way around, making this the WORST possible test for reliability if you ask me.

May the light have your back and your ISO low.


I wonder why WD's numbers are worse than HGST's, even though HGST is a "WD company."

HGST is primarily in the enterprise business. WD is less so. That's not totally surprising. I just laugh so hard at Seagate's expense. It's pathetic!

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


Over 11k HGST drives, over 17k Seagate drives, and Toshiba and WD both under 1,000 drives; how the hell are they even allowed to put these numbers out against each other?

I personally still call BS on this site because of their testing methods and drive counts. If I'm not mistaken, they use server/professional-grade environments for consumer drives and the other way around, making this the WORST possible test for reliability if you ask me.

It's called the statistical significance of the study. If the variance in a sample set is high, then getting a tight confidence interval at 95% or 99% confidence may require more samples. Western Digital is highly consistent, so fewer samples were required. This is actually quite common in statistics gathering. Now, the probability of a Type II error obviously decreases with more samples, but 1,000 is a large sample size if the variance is even remotely moderate or small.

 

And the purpose was to present worst-case scenarios for both sets of drives. Enterprise drives are actually built to absorb and feed off of vibrations from nearby drives; consumer drives are not. The lack of exposure for the one and the included exposure for the other make it a worst-case-scenario test for both sets, consistently.
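
The sample-size point can be made concrete with a rough sketch using the normal-approximation (Wald) confidence interval for a failure-rate proportion; the 2% observed rate is an assumed stand-in, and at n = 45 the approximation itself is shaky, which only reinforces the point:

```python
# Sketch: 95% CI width for an observed failure proportion under the
# normal (Wald) approximation. The 2% rate is assumed; the fleet sizes
# are the ones mentioned in this thread.
import math

def wald_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    half = z * math.sqrt(p_hat * (1.0 - p_hat) / n)  # half-width of the interval
    return max(0.0, p_hat - half), p_hat + half

for n in (45, 1_000, 17_895):
    lo, hi = wald_ci(0.02, n)
    print(f"n={n:>6}: 95% CI ({lo:.2%}, {hi:.2%})")
```

At 45 drives the interval spans roughly 0% to 6%, swallowing every plausible rate; at ~18,000 drives it tightens to a fraction of a point, which is why the small WD and Toshiba samples deserve far less weight.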

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


It's called the statistical significance of the study. If the variance in a sample set is high, then getting a tight confidence interval at 95% or 99% confidence may require more samples. Western Digital is highly consistent, so fewer samples were required. This is actually quite common in statistics gathering. Now, the probability of a Type II error obviously decreases with more samples, but 1,000 is a large sample size if the variance is even remotely moderate or small.

 

And the purpose was to present worst-case scenarios for both sets of drives. Enterprise drives are actually built to absorb and feed off of vibrations from nearby drives; consumer drives are not. The lack of exposure for the one and the included exposure for the other make it a worst-case-scenario test for both sets, consistently.

 

There are many reasons why most of the industry believes the whole thing is flawed, not least of which is the severe lack of qualification of how these drives were used: were the Seagates used in higher-load situations? Were the WDs put into arrays that were only used for periodic backups/data collection?

 

http://www.enterprisestorageforum.com/storage-hardware/selecting-a-disk-drive-how-not-to-do-research-1.html

http://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index5.html

http://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/

http://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/

http://hardware.slashdot.org/story/14/01/29/233259/hard-drive-reliability-study-flawed

 

 

 

Also, this is the 3rd time they have released similar results; if Seagate is so bad, why did they keep using them after the first or second round?

 

Too many questions, too many hypotheses, too many theories, and not enough data to prove or disprove any of them, let alone to show how they obtained those results.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


It's called the statistical significance of the study. If the variance in a sample set is high, then getting a tight confidence interval at 95% or 99% confidence may require more samples. Western Digital is highly consistent, so fewer samples were required. This is actually quite common in statistics gathering. Now, the probability of a Type II error obviously decreases with more samples, but 1,000 is a large sample size if the variance is even remotely moderate or small.

 

And the purpose was to present worst-case scenarios for both sets of drives. Enterprise drives are actually built to absorb and feed off of vibrations from nearby drives; consumer drives are not. The lack of exposure for the one and the included exposure for the other make it a worst-case-scenario test for both sets, consistently.

 

That actually makes sense, yeah, but from what I have learned, when calculating the interval at 95% or 99% confidence, testing should be carried out on a single batch for a reliable outcome. That is still possible for the WD drives, but I highly doubt it is the case for the Seagate drives.

 

But yeah, this being a worst-case-scenario test is something people tend to forget (thanks for the reminder, btw), and they start treating it as a best-case or real-life result.

 

 

There are many reasons why most of the industry believes the whole thing is flawed, not least of which is the severe lack of qualification of how these drives were used: were the Seagates used in higher-load situations? Were the WDs put into arrays that were only used for periodic backups/data collection?

 

http://www.enterprisestorageforum.com/storage-hardware/selecting-a-disk-drive-how-not-to-do-research-1.html

http://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index5.html

http://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/

http://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/

http://hardware.slashdot.org/story/14/01/29/233259/hard-drive-reliability-study-flawed

 

 

 

Also, this is the 3rd time they have released similar results; if Seagate is so bad, why did they keep using them after the first or second round?

 

Too many questions, too many hypotheses, too many theories, and not enough data to prove or disprove any of them, let alone to show how they obtained those results.

 

Hmm, that is new to me; it doesn't make this much better.

May the light have your back and your ISO low.


There are many reasons why most of the industry believes the whole thing is flawed, not least of which is the severe lack of qualification of how these drives were used: were the Seagates used in higher-load situations? Were the WDs put into arrays that were only used for periodic backups/data collection?

 

http://www.enterprisestorageforum.com/storage-hardware/selecting-a-disk-drive-how-not-to-do-research-1.html

http://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index5.html

http://insidehpc.com/2015/02/henry-newman-on-why-backblaze-is-still-wrong-about-disk-reliability/

http://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/

http://hardware.slashdot.org/story/14/01/29/233259/hard-drive-reliability-study-flawed

 

 

 

Also, this is the 3rd time they have released similar results; if Seagate is so bad, why did they keep using them after the first or second round?

 

Too many questions, too many hypotheses, too many theories, and not enough data to prove or disprove any of them, let alone to show how they obtained those results.

 

That actually makes sense, yeah, but from what I have learned, when calculating the interval at 95% or 99% confidence, testing should be carried out on a single batch for a reliable outcome. That is still possible for the WD drives, but I highly doubt it is the case for the Seagate drives.

 

But yeah, this being a worst-case-scenario test is something people tend to forget (thanks for the reminder, btw), and they start treating it as a best-case or real-life result.

 

 

 

Hmm, that is new to me; it doesn't make this much better.

There are glaring flaws in the study, but not the kind that actually contradict the conclusions. It would be nice if every screw were screwed down tight, but honestly this is plenty to test their hypothesis about reliability without much room for error. And they actually do give details on how each batch was used; the same array of tests was used for each drive set.

Software Engineer for Suncorp (Australia), Computer Tech Enthusiast, Miami University Graduate, Nerd


That actually makes sense, yeah, but from what I have learned, when calculating the interval at 95% or 99% confidence, testing should be carried out on a single batch for a reliable outcome. That is still possible for the WD drives, but I highly doubt it is the case for the Seagate drives.

 

But yeah, this being a worst-case-scenario test is something people tend to forget (thanks for the reminder, btw), and they start treating it as a best-case or real-life result.

 

 

 

Hmm, that is new to me; it doesn't make this much better.

 

When their first report came out I was in full support of it; I think I even posted here about it and pointed out why people should accept a test consisting of such a large volume of samples. By the time the second came out, I had started reading the blogs and reports from data companies and specialists in the field. Now the third one has surfaced: literally nothing has changed at BB, more specialists are coming out of the woodwork to criticize their method, less information is available on their setup, they still have a significant disparity between the numbers of each brand's drives, and there is no control grouping. I subscribe to the notion that if the fish has an odor, it's probably off.

 

EDIT: ignore the control-grouping point; it's not really important for this type of study.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

