Backblaze's credibility

Mikensan
Without getting sidetracked into the lawsuit, I thought I would briefly talk about how to use their data. Summary: use their data as a minimum expectation.

Is their data useful? Yes.

Is their data directly applicable to home users? No.

Would "scientific" testing yield any more useful data? Hahaha, no. There'll be a margin of error, but it'll be damn close.

 

All drives fail; increasing the usage just causes them to fail sooner. So for the ST4000DM000, over 13 months you can expect about a 3.06% failure rate under extreme loads. Personally I find that acceptable and would buy that model. Roughly 3 out of 100 drives might fail within the warranty period... super.
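
For anyone who wants to sanity-check that figure, here's a minimal sketch of how an annualized failure rate in the style Backblaze reports is computed. The fleet size and failure count below are made-up illustration numbers, not Backblaze's actual data:

```python
# Minimal sketch of an annualized failure rate (AFR) calculation:
# failures divided by accumulated drive-years of service.
# The fleet size and failure count are hypothetical illustration numbers.

def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """AFR (%) = failures / (drive_days / 365) * 100."""
    drive_years = drive_days / 365.0
    return failures / drive_years * 100.0

drives = 1_000                  # hypothetical fleet of one model
months_in_service = 13          # roughly the service window discussed above
drive_days = drives * months_in_service * 30.4  # ~30.4 days per month
failures = 33                   # hypothetical failures observed in that window

print(f"AFR ~ {annualized_failure_rate(failures, drive_days):.2f}%")  # ~3.05%
```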

 

Some areas of argument I've noticed:

Environment:

Humidity, temperature, and access are controlled. Whether the drives run a bit hotter or cooler doesn't matter much, because it's consistent. With 60-70F air being pulled between the drives, I'm willing to bet the drive temps are well within the manufacturer's expectations.

 

Usage:

Certainly the usage is greatly higher in a data center - but this doesn't increase the odds of a drive dying; all drives die. It simply means a drive will fail sooner in their environment. If an enterprise environment only gets, say, 13 months out of a consumer drive, and even then only a 3.06% failure rate, a consumer might get the same service over 5 years - but still with roughly a ~3% risk of failure.
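
To make that scaling concrete, here's a minimal sketch of the assumption behind it - that wear tracks total data read and written rather than calendar time. The workload figures are hypothetical illustration numbers, not anything Backblaze has published:

```python
# Minimal sketch of the "same wear, stretched over more years" reasoning above.
# All workload figures are hypothetical assumptions, not Backblaze data.

datacenter_months = 13            # service window discussed above
datacenter_tb_per_year = 150      # hypothetical TB read+written per year in a storage pod
home_tb_per_year = 30             # hypothetical TB per year for a home user

total_workload_tb = datacenter_tb_per_year * (datacenter_months / 12)
years_at_home = total_workload_tb / home_tb_per_year

print(f"Same ~{total_workload_tb:.0f} TB of workload spread over ~{years_at_home:.1f} years "
      f"at home, carrying the same ~3% failure risk.")
```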

 

The worst argument I keep reading is "Consumer drives are not designed for enterprise environments." You guys think the drives magically spin faster, run hotter, and are somehow under extreme stress. Stress? Being in an enterprise environment simply means more data changes. The drives get used more. What exactly are the limitations of a consumer drive that an enterprise environment is overwhelming?

 

If a drive fails within the warranty, replace it. If a drive lasts longer than the warranty, you got your money's worth. If a drive fails and you lose your data... that's your fault.

 


You literally just said it yourself. Consumer drives should not be used in enterprise environments -- especially ones with high data throughput.


4 minutes ago, Mikensan said:

The worst argument I keep reading is "Consumer drives are not designed for enterprise environments." You guys think the drives magically spin faster, run hotter, and are somehow under extreme stress. Stress? Being in an enterprise environment simply means more data changes. The drives get used more. What exactly are the limitations of a consumer drive that an enterprise environment is overwhelming?

 

 

Vibration.


13 minutes ago, Sakkura said:

 

Vibration.

One can easily reduce vibration being transferred from the chassis to the drives (and vice versa) with cheap materials - far cheaper than forking over the extra money for enterprise drives over consumer ones.


 


Just now, Tmt97 said:

One can easily reduce vibration being transferred from the chassis to the drives (and vice versa) with cheap materials - far cheaper than forking over the extra money for enterprise drives over consumer ones.

 

It's probably not possible within a reasonable footprint in a large deployment. At least it wasn't something Backblaze prioritized in their setup.


clearly you have never been in a server room

"cool air being pulled between the drives" LOL

nope

temperature inside a server room is typically 10-20C higher than ambient

and what tests do they even run on the drives? how do you know all the drives get the same amount of reads and writes?



@Jade @Sakkura - point still stands, how does that make the data useless? 

 

@Enderman 10-20C above ambient? Maybe in a small law firm that's using a closet and broke as hell lol. If your office keeps the ambient temp around 70F, then you're at 80-90F (assuming you meant F and not C). Every server room I've been in has alarms set for 70F and sits between 55-65F. Although they aren't running tests, it's real-world use - it is true we have no idea exactly how much data was being read/written. Only valid point so far.


4 minutes ago, Mikensan said:

 

@Enderman 10-20C above ambient? Maybe in a small law firm that's using a closet and broke as hell lol. If your office keeps the ambient temp around 70F, then you're at 80-90F (assuming you meant F and not C). Every server room I've been in has alarms set for 70F and sits between 55-65F. Although they aren't running tests, it's real-world use - it is true we have no idea exactly how much data was being read/written. Only valid point so far.

pretty sure most server rooms run at temperatures much higher than ambient, and yes I meant C

http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

and if we don't know what Backblaze is doing with the data, how can you know the Seagate drives and WD drives were being used equally?

there is no proof that they were

for all we know, Backblaze could have put the Seagate drives in active servers used 24/7 and the WD drives in archive servers for periodic backups

how many hours a drive has been powered on doesn't tell us shit about reliability



5 minutes ago, Mikensan said:

@Jade @Sakkura - point still stands, how does that make the data useless? 

 

@Enderman 10-20C above ambient? Maybe in a small law firm that's using a closet and broke as hell lol. If your office keeps the ambient temp around 70F, then you're at 80-90F (assuming you meant F and not C). Every server room I've been in has alarms set for 70F and sits between 55-65F. Although they aren't running tests, it's real-world use - it is true we have no idea exactly how much data was being read/written. Only valid point so far.

80F is what all our alarms are set to, but we're a bit different: we're a networking company, not a data storage/processing company. Any higher than that and you can actually have warranties denied on enterprise-grade hardware. Of course Backblaze isn't going to get warranty replacements for the consumer drives they kill; my point is that the cold air in front of a server's fans would normally be no more than 80F. In one of Backblaze's reports they did say what temperatures they were running; I don't remember what it was other than thinking it was reasonable. And they had a 5-degree difference from top of rack to bottom - again, reasonable.



1 minute ago, Mikensan said:

@Jade @Sakkura - point still stands, how does that make the data useless? 

If you're running a drive in an environment where it's not meant to be (read: running 24/7, in configurations where vibrations are significantly higher than where it's designed to run, constantly being written to and read from, and in potentially hot environments), it's no surprise the drive dies significantly quicker. Also, as said above... sample size. If they're running a 1:1 ratio of every type of drive, then it's a fair data chart. If they're running 100 HGST drives, 1,000 WD drives, and 10,000 Seagate drives, then once again, no surprise, the Seagate drives are going to fail more often because there are more of them to have issues.
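
The sample-size point can be put in rough numbers. Here's a minimal sketch, using the hypothetical fleet sizes above and a simple normal approximation, of how much less certain a failure-rate estimate is when it comes from fewer drives:

```python
import math

# Minimal sketch: the same observed 3% failure rate is far less certain when it
# comes from 100 drives than from 10,000. Fleet sizes are the hypothetical ones
# from the post above; a simple normal approximation is used for the interval.

def failure_rate_interval(failures: int, drives: int, z: float = 1.96):
    """Approximate 95% confidence interval for an observed failure rate."""
    p = failures / drives
    margin = z * math.sqrt(p * (1 - p) / drives)
    return max(0.0, p - margin), p + margin

for drives in (100, 1_000, 10_000):
    failures = round(drives * 0.03)          # same underlying 3% rate
    low, high = failure_rate_interval(failures, drives)
    print(f"{drives:>6} drives: observed 3.0%, 95% CI ~ {low:.1%} to {high:.1%}")
```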

Let's be totally clear here - I don't like Seagate; I use WD products. I have no fan-based reason to protect them.


28 minutes ago, Enderman said:

pretty sure most server rooms run at temperatures much higher than ambient, and yes I meant C

http://www.datacenterknowledge.com/archives/2012/03/23/too-hot-for-humans-but-google-servers-keep-humming/

and if we don't know what Backblaze is doing with the data, how can you know the Seagate drives and WD drives were being used equally?

there is no proof that they were

for all we know, Backblaze could have put the Seagate drives in active servers used 24/7 and the WD drives in archive servers for periodic backups

how many hours a drive has been powered on doesn't tell us shit about reliability

 

From your article: "Typical temperature ranges in data centers often range from 68 and 72 degrees."

 

@Enderman if it's 10-20C above ambient temperatures, then you're suggesting server rooms operate at roughly 88-106F... I would start turning equipment off. We don't know how the load is being distributed across their drives, so I agree that without more information from Backblaze it is inconclusive. If they did provide more details, then we would have something worthwhile.

@brwainer 80F is fine too, but in my environment it would make me nervous - and good point about the warranty. Even 80F air being pulled over the drives will keep them within spec.


4 minutes ago, Jade said:

If you're running a drive in an environment where it's not meant to be (read: running 24/7, in configurations where vibrations are significantly higher than where it's designed to run, constantly being written to and read from, and in potentially hot environments), it's no surprise the drive dies significantly quicker. Also, as said above... sample size. If they're running a 1:1 ratio of every type of drive, then it's a fair data chart. If they're running 100 HGST drives, 1,000 WD drives, and 10,000 Seagate drives, then once again, no surprise, the Seagate drives are going to fail more often because there are more of them to have issues.

Let's be totally clear here - I don't like Seagate; I use WD products. I have no fan-based reason to protect them.

That's my overall argument: you might decrease their lifespan, but you're not over-exerting them and causing more failures. They're working constantly, but at the pace they're designed to work at. I agree - I didn't mention it before, but small sample sizes are not going to provide any useful insight. I'm not arguing about which drive is better than the other, because you may decide you're willing to take the risk and save money. Cheap parts will save you money but will not ultimately last.


29 minutes ago, Mikensan said:

That's my overall argument: you might decrease their lifespan, but you're not over-exerting them and causing more failures. They're working constantly, but at the pace they're designed to work at. I agree - I didn't mention it before, but small sample sizes are not going to provide any useful insight. I'm not arguing about which drive is better than the other, because you may decide you're willing to take the risk and save money. Cheap parts will save you money but will not ultimately last.

 

A NAS-optimised HDD or enterprise server disk has a vibration-resilient design and firmware; that is about the only physical, reliability-related feature that differentiates it from a standard consumer disk. This has a lot to do with server sleds and NAS bays not providing any real vibration dampening, which you can address with a more custom case design. Also, NAS disks have a higher unload cycle rating than even enterprise server disks.
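
For anyone curious how their own disk is doing against that unload cycle rating, SMART exposes the accumulated count (usually attribute 193, Load_Cycle_Count). A minimal sketch using smartmontools; the device path and the 600,000-cycle rating are example assumptions, so check your drive's datasheet:

```python
import re
import subprocess

# Minimal sketch: read a disk's accumulated load/unload cycles via smartctl
# (smartmontools; usually needs root). The device path and the rated-cycle
# figure below are example assumptions - check your drive's datasheet.

DEVICE = "/dev/sda"
RATED_CYCLES = 600_000  # example rating; varies by model

output = subprocess.run(
    ["smartctl", "-A", DEVICE], capture_output=True, text=True, check=True
).stdout

# Attribute 193 is typically reported as "Load_Cycle_Count"; the raw value is
# the last field on that line.
match = re.search(r"Load_Cycle_Count.*?(\d+)\s*$", output, re.MULTILINE)
if match:
    cycles = int(match.group(1))
    print(f"{DEVICE}: {cycles:,} load cycles ({cycles / RATED_CYCLES:.1%} of the assumed rating)")
else:
    print(f"{DEVICE}: Load_Cycle_Count not reported by this drive.")
```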

 

Seagate can and does make very reliable disks; almost every server and storage vendor on the market uses their disks as OEM parts with custom firmware. The only issue I've had is with the Seagate ES/ES.2 series, which had serious problems and failed at a much higher rate than even the desktop Barracuda disks - extremely disappointing for a supposed NAS/enterprise disk.

 

For what Backblaze does and how they configure their servers and distribute the data, I would say using desktop disks is a smart choice for the cost savings. It is not that hard to replace disks in servers, especially when you have explicit plans and expectations of having to do so.

