
Backblaze releases some more flawed HDD failure statistics for Q3

Oshino Shinobu
7 minutes ago, AshleyAshes said:

I get the rationale behind you guys arguing that larger HDDs must have statistically meaningfully higher failure rates, but when your base metric for that is something like 1TB-4TB drives being 'more reliable', doesn't this mean that puny HDDs from the 90s that were only hundreds of megabytes or maybe a few gigabytes must have had a negative failure rate? :P As drives get larger, the technology also advances the construction quality, and this is why we've seen overall HDD failure rates stay within a nominal statistical range.

..." within the same generation / timeframe (and product category)" . Because you usually see that most of the components are the same, with simply more platters & heads. Thereby, more moving parts. Thereby, more failures.

Just for shits & giggles you could compare reliability between drives with equal numbers of platters & heads (regardless of capacity) from 20 years apart and see what you end up with. I honestly don't know. Might be more reliable (due to better tech), might be less reliable (due to everything being scaled down, less tolerance for failure). Don't really have any numbers to go on though.

 

By the way, I'm also talking per drive, not per GB (per-GB figures might actually be more interesting, and are where it probably doesn't hold up). In all honesty, if you're getting large amounts of storage (and care about drive reliability) you'd probably be best off working out the RMA/GB ratio. Although admittedly, if you care that much about reliability, you might be better off just going the Backblaze route and investing in redundancy instead of reliability. :)
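To put that RMA/GB idea into numbers, here's a minimal sketch with purely made-up figures (no real models or RMA data); the point is just that a drive with a worse per-unit return rate can still come out ahead once you normalize by capacity:

```python
# Hypothetical comparison of drives by RMA rate per TB instead of per drive.
# All numbers below are invented for illustration only.
drives = [
    # (label, capacity_tb, rma_rate_per_drive)
    ("Example 4TB model", 4, 0.020),   # 2.0% of units returned
    ("Example 8TB model", 8, 0.030),   # 3.0% of units returned
]

for label, capacity_tb, rma_rate in drives:
    per_tb = rma_rate / capacity_tb
    print(f"{label}: {rma_rate:.1%} per drive, {per_tb:.3%} per TB")

# The 8TB model returns more often per unit, but if you need a fixed amount of
# storage you buy half as many of them, so its rate per TB is actually lower.
```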


I'll play devil's advocate a bit here, although this is a true story. I *have* had a 3TB Seagate drive die on me before, and it was only just over a year old. That should not have happened. So to me, Backblaze's claim has some validity to it. The way I see it, if that stat held true for me, maybe I should look into this some more when buying another hard drive. HGST drives are pretty darn good, though. I replaced my dead Seagate drive with a 3TB HGST drive and it's been running strong for about a year now.

 

 

 


My Seagate 1TB drive I used in my original PC build is still going strong, though. That PC was built like 3 or 4 years ago...? Something like that. I even passed it on to my friend, and it's still working fine for him.
 

 



Just now, TopWargamer said:

N = 1

There, fixed that for you. :)

 

For the record, I've had both Seagate and WD drives (and older Maxtor). I'm neither a fanboy nor a hater of either brand, and would happily use either of them. Even that WD 6TB drive that supposedly has an 11.31% failure rate according to Backblaze. 9_9

The problem with this data is that it gives an illusion of authority on the matter while it's about as reliable as a random number generator.


1 minute ago, Jovidah said:

There, fixed that for you. :)

 

For the record, I've had both Seagate and WD drives (and older Maxtor). I'm neither a fanboy nor a hater of either brand, and would happily use either of them. Even that WD 6TB drive that supposedly has an 11.31% failure rate according to Backblaze. 9_9

The problem with this data is that it gives an illusion of authority on the matter while it's about as reliable as a random number generator.

What's misleading about it though? They're using consumer drives in a server. It's the ultimate stress test.



1 minute ago, TopWargamer said:

What's misleading about it though? They're using consumer drives in a server. It's the ultimate stress test.

Like Furmark (which has only ever killed cards with a shitty design in the first place).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


1 minute ago, Dabombinable said:

Like Furmark (which has only ever killed cards with a shitty design in the first place).

Then if what Backblaze is doing is the same as what Furmark is doing (i.e. killing shitty drives/video cards), then they're doing their job. You now know not to buy those shitty drives. Even if you knew they were shitty, some people don't. These stress tests help the uninformed or slightly uninformed people pick out a drive. I still don't see why Backblaze gets all of this hate.



50 minutes ago, 1823alex said:

You do realize the point of them using consumer drives in a datacenter is to test for failure rates and such on consumer drives, in order to inform the consumer.

Except when the figures are so out of proportion that you can't exactly say you have valid results.



1 minute ago, TopWargamer said:

Then if what Backblaze is doing is the same as what Furmark is doing (i.e. killing shitty drives/video cards), then they're doing their job. You now know not to buy those shitty drives. Even if you knew they were shitty, some people don't. These stress tests help the uninformed or slightly uninformed people pick out a drive. I still don't see why Backblaze gets all of this hate.

At the very least, it shows the wide difference in quality manufacturers have between drives of the same model (you can really get some hit-and-miss model ranges, primarily due to some of that model using different brands of a component/components across that single model).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


4 hours ago, Sauron said:

Sure, but they can't use a 34k-unit sample size and then a 200-unit one and call the comparison fair... especially since the statistics are quoted to two decimal places. They should all have at least 10,000 units for it to work as you said.

Actually, they can, as long as they do what they do instead of what they should :P OK, if the incidence is low enough, 200 will be too low to reasonably detect it, that is true. But as @Jovidah said, if you had 34kk and 300k it would be fine despite the different order of magnitude in the number of observations.

The other problem is that you need a much lower number of observations for unconditional rates (what they report) than for controlling for other factors (as they should do, since not all drives are used under the same conditions).

The third problem is that in all cases, whatever they do and whether they compute something useful or not, they should report the standard errors, or the 95% confidence intervals, or something. That's all we would need to be able to tell whether a particular failure rate can be considered statistically different from another. Again, assuming any of those numbers are a meaningful estimate of failure rates (actually, under their assumption that nothing needs to be controlled for, we can probably recover the standard errors from the information they provided).

So you are right in the end, but the difference in sample sizes is not the problem. The lack of any reporting on precision is.
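To make the standard-error point concrete, here's a minimal sketch (my own illustration, not Backblaze's method: it treats each drive as a single pass/fail observation and ignores drive-days entirely) of the 95% confidence interval you could attach to a reported failure rate, and of a crude check for whether two rates are distinguishable:

```python
import math

def failure_rate_ci(failures, n, z=1.96):
    """Normal-approximation 95% confidence interval for a failure proportion."""
    p = failures / n
    se = math.sqrt(p * (1 - p) / n)
    return p, se, (max(0.0, p - z * se), min(1.0, p + z * se))

def rates_differ(f1, n1, f2, n2, z=1.96):
    """Two-proportion z-test: can the two failure rates be told apart?"""
    p1, p2 = f1 / n1, f2 / n2
    pooled = (f1 + f2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) / se > z

# Hypothetical cohorts: a large one and a tiny one with the same observed 3% rate.
for failures, n in [(1020, 34000), (6, 200)]:
    p, se, (lo, hi) = failure_rate_ci(failures, n)
    print(f"n={n}: rate={p:.2%}, SE={se:.2%}, 95% CI=({lo:.2%}, {hi:.2%})")

print(rates_differ(1020, 34000, 6, 200))  # False: identical point estimates
```

With n=34,000 the interval is roughly 2.8-3.2%, while with n=200 it spans roughly 0.6-5.4%: quoting the small cohort's rate to two decimal places implies a precision that simply isn't there.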


10 hours ago, TheRandomness said:

When will they learn you can't rely on a consumer drive in a datacenter...

When Google does. Google uses desktop-class drives for everything but their innermost core of backups. This report isn't flawed in the slightest. You're just trying to apply it to all use cases.



The problem is, we won't get reliable data from anywhere. I had hoped Backblaze would provide additional data alongside, so I could verify for myself whether it is relevant or not. All in all, it gives me confidence that the numbers are that "good" in a worst-case scenario. Makes me feel comfortable. That said, I had 2 Seagate drives fail (out of 3), no WD-Green (of 2), no WD-Red (of 1) and no Hitachi (of 2), with the Hitachi drives in a pretty bad RAID setup and the Seagate drives under what I would call average use for 3 years.

I know that this sample size is the worst and intellectually I should not conclude from this that Seagate makes bad HDDs, but afterwards I just had a bad feeling and never bought Seagate again until recently.


22 minutes ago, anotherriddle said:

The problem is, we won't get reliable data from anywhere. I had hoped Backblaze would provide additional data alongside, so I could verify for myself whether it is relevant or not. All in all, it gives me confidence that the numbers are that "good" in a worst-case scenario. Makes me feel comfortable. That said, I had 2 Seagate drives fail (out of 3), no WD-Green (of 2), no WD-Red (of 1) and no Hitachi (of 2), with the Hitachi drives in a pretty bad RAID setup and the Seagate drives under what I would call average use for 3 years.

I know that this sample size is the worst and intellectually I should not conclude from this that Seagate makes bad HDDs, but afterwards I just had a bad feeling and never bought Seagate again until recently.

Don't mix that one in here; that one is for use in NAS units, and "normal" consumers won't even touch them because they are expensive... Just for fun I looked through all my drives to find out which one is the oldest (yep, none of my HDDs have failed so far, including my portable ones), and sure enough, I have one from 03/10/2009 xD:

https://dl.dropboxusercontent.com/u/1201829/ltt-forum/IMG_20161120_232643.jpg

 

Three years as an OS drive, then data storage for one year (mostly games, so it was still heavily used), and since then it has sat beside my NAS in an external HDD enclosure. It's old, but it's still good enough to store some misc stuff...


12 hours ago, patrickjp93 said:

When Google does. Google uses desktop-class drives for everything but their innermost core of backups. This report isn't flawed in the slightest. You're just trying to apply it to all use cases.

Even if you ignore the drive types they are using, the report is still flawed and not usable without additional information. For example, Backblaze has multiple revisions of their mounting solution, and the older ones don't have vibration dampening. For all we know, the drives with the high failure rates were in their old mounting system and the ones with low failure rates are in their new systems. Without that information, the failure rates are useless for any comparison or reliability statistics.


23 hours ago, Prysin said:

nor is their testing methodology

 

"hey let's test some HDDs"
"mkay, how"
"Oh, i dont know, 24/7 surveilance taping with heavy IO loads for 2 years?"
"oh great. So what disks we testing??"
"some cheap ones"

They are not testing the hard drives; they are a business that uses hard drives, and they release their results with them. They use cheap drives probably because they found that, even accounting for their failure rates, they are a cheaper solution.


19 hours ago, TopWargamer said:

What's misleading about it though? They're using consumer drives in a server. It's the ultimate stress test.

The problem is that all the conditions in the 'test' preclude a fair comparison between them. They have them under different conditions: different types of mounting pods, with differences in vibration dampening between the pods and even differences in temperature within the pods. The actual drives were a complete mix, and in earlier years included drives ripped out of external storage enclosures, refurbished models, RMAs, and drives of all different ages. Their methodology of measurement sucks the big one as well, and compares drives with wildly different drive ages (meaning you're comparing a group of drives just 3 months old with a group that is 4 years old), and many groups of drives are far too small (leading to very low reliability of the actual numbers).

 

Long story short, the methodology is so incredibly flawed that the numbers are completely meaningless.

Had they:

-bought large batches of brand-new drives, straight from the manufacturer, at the same time (several thousand for each model at a minimum)

-put them under equal conditions, in equal mounting solutions, in equal pods, etc.

-run them all for 5 years under the exact same stress and workload


Then I wouldn't have as much of a problem with it. Although the results would still be fairly unrepresentative of actual usage. So they'd still be pointless, but at least they'd be reliable. Now they are anything but.
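For reference, Backblaze's headline number is an annualized failure rate derived from drive-days, roughly failures divided by (drive-days / 365). The sketch below uses made-up cohorts to show why that figure can look identical for a group observed for only three months and a group observed for four years, even though one rests on a fraction of the exposure and only the earliest part of the drives' life:

```python
# Backblaze-style annualized failure rate (AFR), computed from drive-days:
#   AFR = failures / (drive_days / 365)
# The cohorts below are invented; only the arithmetic is the point.

def annualized_failure_rate(failures, drive_days):
    return failures / (drive_days / 365.0)

cohorts = [
    # (label, number_of_drives, days_in_service, failures)
    ("New model, 3 months in service", 500, 90, 4),
    ("Old model, 4 years in service", 500, 1460, 65),
]

for label, drives, days, failures in cohorts:
    drive_days = drives * days
    afr = annualized_failure_rate(failures, drive_days)
    print(f"{label}: {drive_days} drive-days, AFR = {afr:.2%}")
```

Both cohorts print an AFR of about 3.2%, yet the first number extrapolates a full year from 90 days of infant-mortality-dominated data while the second averages over four years of wear, which is exactly the kind of apples-to-oranges comparison criticised above.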

19 hours ago, TopWargamer said:

Then if what Backblaze is doing is the same as what Furmark is doing (i.e. killing shitty drives/video cards), then they're doing their job. You now know not to buy those shitty drives. Even if you knew they were shitty, some people don't. These stress tests help the uninformed or slightly uninformed people pick out a drive. I still don't see why Backblaze gets all of this hate.

As mentioned above... at least Furmark is a 'fair' test; it just puts everything at 100%. This test is the equivalent of putting 50 different video cards in 50 different cases with 50 different cooling solutions, running Furmark at 50 different performance/intensity settings, and then using that to claim which card is best.

It's simply not a consistent/fair test. Its predictive value is exactly zero.

19 hours ago, Dabombinable said:

At the very least, it shows the wide difference in quality manufacturers have between drives of the same model (you can really get some hit-and-miss model ranges, primarily due to some of that model using different brands of a component/components across that single model).

Again, due to the wildly different treatment, it is impossible to claim anything at all about any drive. Although I do agree that there can be sizable differences between different models, or lines, of the same brand. Just claiming one brand is 'superior' is usually flawed.

14 hours ago, patrickjp93 said:

When Google does. Google uses desktop-class drives for everything but their innermost core of backups. This report isn't flawed in the slightest. You're just trying to apply it to all use cases.

What Google does is their business. This report is flawed as fuck, regardless of what you want to apply it to. It's not even applicable to server scenarios, as too much information is missing and many groups are too small.

1 hour ago, Oshino Shinobu said:

Even if you ignore the drive types they are using, the report is still flawed and not usable without additional information. For example, Backblaze has multiple revisions of their mounting solution, and the older ones don't have vibration dampening. For all we know, the drives with the high failure rates were in their old mounting system and the ones with low failure rates are in their new systems. Without that information, the failure rates are useless for any comparison or reliability statistics.

Agreed. This suspicion is reinforced by how their 'patterns' are very far off from the hardware.fr numbers. 

Even the drives within the same pod aren't operating under the same conditions.

If anyone's interested, I think there was an article on TweakTown that tackled it in some detail a while back (just Google it).

 

The WD Red drive that sees an 11.31% failure rate at Backblaze? 1.5% at hardware.fr...

 

As I said, the problem is not just that the type of testing isn't representative of home usage... the problem is that their 'worst case scenario' isn't equalized across all the drives, so any results from it might be as much because of the conditions as due to differences between the drives.


1 hour ago, spartaman64 said:

They are not testing the hard drives; they are a business that uses hard drives, and they release their results with them. They use cheap drives probably because they found that, even accounting for their failure rates, they are a cheaper solution.

That's definitely why they do it, and publishing failure rates is a way for the company to be transparent about its drive usage. This time round, they haven't really published the results under the guise of being "reliability test results" like they have in the past, but they still treat them that way in some parts of their report. The issue comes from people interpreting their data as an indication of a drive's quality and reliability.

 


All of this is reminding me of the time I bought a 300GB Maxtor drive. It lasted nearly five years before I gave it to someone else (but I don't know what happened afterwards to it), and it showed no signs of imminent failure when I let go of it. Heck I remember before that I had a 60 GB Maxtor drive that lasted longer than the computer it was originally in.

 

And Maxtor apparently had this reputation of being worse than Seagate.


On 2016-11-20 at 1:38 PM, Oshino Shinobu said:

The issue with Backblaze's results is the way in which they pick the drives they use and the environments they use the drives in. Basically, they tend to pick the cheapest drives and buy a load of them, shove them in the data centre and just replace them when they fail. While it's somewhat of a questionable business model, it can work out cheaper than buying data centre class drives, so they can pass the savings on to their customers.

The primary issue with their drive selection comes when you look at their failure rates. The vast majority of the drives in their data centres are desktop-class drives, ones that are not designed for data centre conditions whatsoever. They have even had drives that were ripped from external enclosures because they were cheap. The fact that the drives are being used in an environment that is significantly different from their intended purpose makes the results meaningless right away, but it is not the only issue with the statistics.

The general lack of information on the drives is a big issue when trying to compare failure rates. They omit things such as the age of the drives, the proximity of the drives, the temperatures they were running at, the case/rack revision they were mounted in and so on. Without any of this, the results are even more meaningless.

 

Ultimately, these results are like comparing which supercars died when driving them off-road. It's irrelevant to their intended use, and if that's pretty much the only information provided, it's invalid anyway.

On 2016-11-20 at 1:38 PM, Oshino Shinobu said:

Needless to say, these results are meaningless for the drives' intended purposes. If a company like WD allowed a drive to have a greater than 11% failure rate, they wouldn't be in business for long.

 

At least Backblaze hasn't tried to make the data look so much like a reliability chart this time round. The fact that they still use the term "reliability stats" in their article is wrong, though. Here are their previous graphs, which caused a lot of misinformation, in case you haven't seen them:

So... any reason why we should take their reports seriously?

Props to OP for clearly explaining why he thinks the data is flawed. ^_^



2 hours ago, spartaman64 said:

They are not testing the hard drives; they are a business that uses hard drives, and they release their results with them. They use cheap drives probably because they found that, even accounting for their failure rates, they are a cheaper solution.

You should take their reports as accurate, but they should not be used as a reliability or quality reference for drives. I doubt they're manipulating any data or similar for their reports; they're just being transparent about their drive usage.

You should take the reports seriously in the sense that that's their drive failure rate for their specific use case. That's it, really; it shouldn't be applied to any other data or use case.

EDIT: To be clear, I'm not saying Backblaze is a bad company or anything. From what I know, they're reliable in terms of data integrity and redundancy; they just take a different approach from a lot of companies when it comes to drive choices. They go for cheap drives and replace them often rather than going for more expensive drives that don't need to be replaced as often. This works out cheaper for them and makes the initial cost of capacity expansion smaller, since they don't have to buy a bunch of expensive datacentre drives.

The only issue I have is when their drive failure reports are presented as reliability statistics or used as such, like they have been in the past. I think their 2013-2014 reports and graphs are a big part of the reason Seagate has a reputation for being unreliable, even though it is not true.


1 hour ago, Jovidah said:

Again, due to the wildly different treatment, it is impossible to claim anything at all about any drive. Although I do agree that there can be sizable differences between different models, or lines, of the same brand. Just claiming one brand is 'superior' is usually flawed.

As in, a single model can have certain drives that are more prone to failing due to using different brands of components. E.g. a Samsung HA250JC is more likely to have the control board die if it has ESMT chips on it, while an HA250JC with Samsung chips is less likely to have it die (the control boards can be switched between HDDs BTW despite the different chips, hence I've still got 3 working HA250JC out of 4, instead of just 2).

41 minutes ago, M.Yurizaki said:

All of this is reminding me of the time I bought a 300GB Maxtor drive. It lasted nearly five years before I gave it to someone else (but I don't know what happened afterwards to it), and it showed no signs of imminent failure when I let go of it. Heck I remember before that I had a 60 GB Maxtor drive that lasted longer than the computer it was originally in.

 

And Maxtor apparently had this reputation of being worse than Seagate.

 

They were known to be worse due to the control boards being very easy to damage/prone to dying. I replaced the control board on my long "dead" 420MB Maxtor HDD last year, and it's working like new again (I'm modifying Windows XP so that it can be installed on it, with only the drivers that match the hardware it would be used with, for example).

 

Edit: I've got shitloads of HDDs from Seagate, Western Digital, Quantum and Maxtor that just need the control boards replaced. The only drive with mechanical failure is one of the HA250JC that I bought second-hand on eBay.

 

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


45 minutes ago, M.Yurizaki said:

All of this is reminding me of the time I bought a 300GB Maxtor drive. It lasted nearly five years before I gave it to someone else (but I don't know what happened afterwards to it), and it showed no signs of imminent failure when I let go of it. Heck I remember before that I had a 60 GB Maxtor drive that lasted longer than the computer it was originally in.

 

And Maxtor apparently had this reputation of being worse than Seagate.

I routinely 'abandoned' my Seagate drives after they had about 75,000 hours on them. Still going strong without errors. I know, bad practice, but I was too poor to replace them. But I was never punished for it; none of them ever died. Neither did any of the WD or Maxtor drives ever fail me.

But again that's all N=1. You can be extremely lucky or extremely unlucky. I could just as easily have 4 drives fail on me. It's just luck of the draw.
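To put some rough numbers on that 'luck of the draw' point (hypothetical failure rates, not tied to any real model), here's a small sketch of how easily a handful of drives can give a misleading personal impression:

```python
from math import comb

def per_drive_failure_prob(annual_rate, years):
    """Chance one drive fails at some point over `years`, assuming a constant
    annual failure rate -- a deliberately crude model for illustration."""
    return 1 - (1 - annual_rate) ** years

def prob_at_least(n, k, p):
    """P(at least k of n independent drives fail, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A hypothetical "good" model (2% AFR): owner of 3 drives sees 2+ die in 3 years.
p_good = per_drive_failure_prob(0.02, 3)
print(f"2+ of 3 'good' drives dead: {prob_at_least(3, 2, p_good):.1%}")  # ~1.0%

# A hypothetical "bad" model (8% AFR): owner of 4 drives sees none die in 3 years.
p_bad = per_drive_failure_prob(0.08, 3)
print(f"0 of 4 'bad' drives dead: {(1 - p_bad) ** 4:.1%}")              # ~37%
```

So a run of dead drives from a decent model and a spotless record with a mediocre one are both entirely compatible with ordinary failure rates; personal samples this small just can't separate the brands.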

41 minutes ago, Shahnewaz said:

So... any reason why we should take their reports seriously?

Props to OP for clearly explaining why he thinks the data is flawed. ^_^

In all honesty... no. I agree with Oshino Shinobu that there's probably no foul play at hand here. Most likely they just have a far better understanding of running cloud storage than they do of research design. They just did what they did to run a company, and tried to share what they found out along the way. The problem is that due to the (lack of) methodology, any data gained is completely invalid and corrupt.

To be fair though, a lot of the 'benchmarking'/comparison in the tech world is very flawed. Setting this up as 'proper' research would have been very expensive and unworkable if you're trying to run a company off it. But for me that'd be a reason to at least take out all references to specific brands or drives (as your data is too unreliable to make that call).

 

If anything this shows that if you have your software in order and a proper redundancy setup, you can screw up your hardware all you want and get away with it. :) But these reports themselves... well... they're mostly useful as clickbaity marketing material. I agree, they're the main reason Seagate's reputation has tanked (it used to be stellar), and I'm actually somewhat surprised Seagate hasn't sued them for slander. I suppose they were afraid of the Streisand effect.

8 minutes ago, Dabombinable said:

As in, a single model can have certain drives that are more prone to failing due to using different brands of components. E.g. a Samsung HA250JC is more likely to have the control board die if it has ESMT chips on it, while an HA250JC with Samsung chips is less likely to have it die (the control boards can be switched between HDDs BTW despite the different chips, hence I've still got 3 working HA250JC out of 4, instead of just 2).

 

They were known to be worse due to the control boards being very easy to damage/prone to dying. I replaced the control board on my long "dead" 420MB Maxtor HDD last year, and it's working like new again (I'm modifying Windows XP so that it can be installed on it, with only the drivers that match the hardware it would be used with, for example).

 

Edit: I've got shitloads of HDDs from Seagate, Western Digital, Quantum and Maxtor that just need the control boards replaced. The only drive with mechanical failure is one of the HA250JC that I bought second-hand on eBay.

 

Interesting example. I guess my point was mostly that there usually is more variability between different products of the same brand than there is between products of different brands (intra-group variance > inter-group variance). People often claim 'this brand is reliable' or 'this brand is unreliable'. In practice this 'brand approach' is flawed.

The same is to some extent probably also true for some other hardware (silicon lottery!) like graphics cards. The differences are usually so small (and test samples preselected) that I wouldn't be surprised to see that many 'benchmark results' are just capitalization on chance, and taking different samples of the same models could yield completely different results. Sadly no one seems to be very eager to test this one.


23 minutes ago, Oshino Shinobu said:

You should take their reports as accurate, but they should not be used as a reliability or quality reference for drives. I doubt they're manipulating any data or similar for their reports; they're just being transparent about their drive usage.

You should take the reports seriously in the sense that that's their drive failure rate for their specific use case. That's it, really; it shouldn't be applied to any other data or use case.

EDIT: To be clear, I'm not saying Backblaze is a bad company or anything. From what I know, they're reliable in terms of data integrity and redundancy; they just take a different approach from a lot of companies when it comes to drive choices. They go for cheap drives and replace them often rather than going for more expensive drives that don't need to be replaced as often. This works out cheaper for them and makes the initial cost of capacity expansion smaller, since they don't have to buy a bunch of expensive datacentre drives.

The only issue I have is when their drive failure reports are presented as reliability statistics or used as such, like they have been in the past. I think their 2013-2014 reports and graphs are a big part of the reason Seagate has a reputation for being unreliable, even though it is not true.

Yes, they have a very different workload.


3 minutes ago, Jovidah said:

Interesting example. I guess my point was mostly that there usually is more variability between different products of the same brand than there is between products of different brands (intra-group variance > inter-group variance). People often claim 'this brand is reliable' or 'this brand is unreliable'. In practice this 'brand approach' is flawed.

The same is to some extent probably also true for some other hardware (silicon lottery!) like graphics cards. The differences are usually so small (and test samples preselected) that I wouldn't be surprised to see that many 'benchmark results' are just capitalization on chance, and taking different samples of the same models could yield completely different results. Sadly no one seems to be very eager to test this one.

Well, I do know that my GTX 970 G1 Gaming cannot be overclocked past 1354MHz due to how hot and unstable it is. A 62.1% ASIC score means that it's pretty much the equivalent of getting an i7 4790K that requires 1.35V to run at 4.4GHz (mine is good in that regard: 1.35V for 4.8GHz).

"We also blind small animals with cosmetics.
We do not sell cosmetics. We just blind animals."

 

"Please don't mistake us for Equifax. Those fuckers are evil"

 

This PSA brought to you by Equifacks.
PMSL


28 minutes ago, Oshino Shinobu said:

You should take their reports as accurate, but they should not be used as a reliability or quality reference for drives. I doubt they're manipulating any data or similar for their reports; they're just being transparent about their drive usage.

You should take the reports seriously in the sense that that's their drive failure rate for their specific use case. That's it, really; it shouldn't be applied to any other data or use case.

EDIT: To be clear, I'm not saying Backblaze is a bad company or anything. From what I know, they're reliable in terms of data integrity and redundancy; they just take a different approach from a lot of companies when it comes to drive choices. They go for cheap drives and replace them often rather than going for more expensive drives that don't need to be replaced as often. This works out cheaper for them and makes the initial cost of capacity expansion smaller, since they don't have to buy a bunch of expensive datacentre drives.

The only issue I have is when their drive failure reports are presented as reliability statistics or used as such, like they have been in the past. I think their 2013-2014 reports and graphs are a big part of the reason Seagate has a reputation for being unreliable, even though it is not true.

I'm sorry, but Seagate is unreliable. When neither Amazon's nor Google's cloud services use a single Seagate drive even though they're cheap as dirt, there's clearly a reliability issue.


