Backblaze: SSDs might be as unreliable as disk drives

Lightwreather
Solved by LAwLz:
59 minutes ago, jagdtigger said:

Look, when I have a WD Green from 2011 that even survived running torrents and being in a RAID, vs. 3 HDDs from 2017 which died with something like 30k hours on them in their intended use case, that's way more than just bad luck.

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... and even those are old AF; the drive was one or two years old, I think.)

/EDIT

Oh, and did I mention that for not much more I could get WD DC-HC drives instead of crappy IronWolfs? Yeah, Seagate can go bust for all I care...

Seagate has lower RMA rates than Western Digital.

It was 0.93% vs 1.26% in 2017 (no more up-to-date data).

 

Failure rate of the 4TB WD Red - 2.95%

Failure rate of 4TB IronWolf - 2.81%

 

 

Source: https://www.hardware.fr/articles/962-6/disques-durs.html

 

It's RMA rates from a very large French retailer. 

 

I don't doubt your experience, but the fact of the matter is that your experience is just a very tiny sample, and as a result of bad luck it is very skewed compared to real-world generalized numbers.

 

 

Edit: 

For those interested, here are the RMA statistics for HDDs and SSDs according to the French retailer, which I think is way more representative of what consumers doing consumer things can expect. 

 

HDDs:

  • HGST 0.82%
  • Seagate 0.93%
  • Toshiba 1.06%
  • Western Digital 1.26%

 

SSDs:

  • Samsung 0.17%
  • Intel 0.19%
  • Crucial 0.31%
  • Sandisk 0.31%
  • Corsair 0.36%
  • Kingston 0.44%

I know you're supposedly "done" with this conversation, but I just wanted to point out a few major issues with your posts:

23 hours ago, BigDamn said:

Backblaze has information regarding failure rates if you're interested. I'm aware you're not a fan of Backblaze, as you're unable to decipher their data with intelligence (and apparently read opinion pieces on it?) but the data is there.

22 hours ago, BigDamn said:

The data regarding these drives isn't hiding from you, but since you're unable to find it on your own I'll provide you a report.

22 hours ago, BigDamn said:

Failure data. Not 14 years ago. The data isn't hiding from you.

 

Whenever you make claims, it is up to YOU to back them up. You can't just say "here are my argument points, now go validate them yourselves, the data is all there".

As LAwLz has already mentioned, but to reiterate: the one link you have provided as "evidence" is of one specific drive model, on which you've based your entire opinion of the company.

 

Dabombinable has already mentioned it, but here is the article (remember to link your sources):

https://www.tomshardware.com/news/seagate-hdd-failure-lawsuit-3tb,31118.html

 

Quote

The Seagate 3 TB models failed at a higher rate than other drives during the Backblaze deployment, but in fairness, the Seagate drives were the only models that did not feature RV (Rotational Vibration) sensors that counteract excessive vibration in heavy usage models -- specifically because Seagate did not design the drives for that use case.

 

Quote

Backblaze setup

The first revision of the pods, pictured above, had no fasteners for securing the drive into the chassis. As shown, a heavy HDD is mounted vertically on top of a thin multiplexer PCB. The SATA connectors are bearing the full weight of the drive, and factoring the vibration of a normal HDD into the non-supported equation creates the almost perfect recipe for device failure.

 

Backblaze has confirmed it still has all revisions of its chassis installed in its datacenters and that it replaced failed drives into the same chassis the original drive failed in. This could create a scenario where replacement drives are repeatedly installed into defective chassis, thus magnifying the failure ratio.

Quote

The Backblaze environment employed more drives per chassis and featured much heavier workloads (both of which accelerate failure rates tremendously) than the vendors designed the client-class HDDs for

Quote

The conditions of the Backblaze failure data, even by the company's own admission, are far beyond the warranty claims of said hardware, which begs the immediate question of whether that data will pass the sniff test in court. I am no lawyer, but it should be relatively easy for Seagate to parry in this case; the results are essentially worthless to measure any practical consumer client application within the warranty guidelines.

It's like using the handle of a screwdriver to hammer in a nail and then complaining that your screwdriver got damaged.

 

< removed by moderation >



 

 

 



20 hours ago, LAwLz said:

 

 

But yes, you're right that the ST3000DM001 had higher failure rates than most drives. It's also worth mentioning that the ST3000DM001 was released a decade ago, in 2011.

 

 

I've got one of those with a lot of data on it. Not sure what version it is, but it is coming up on 3 years now. I think I will replace it early just to be sure (I usually swap them out at 4 years and repurpose them for cold storage).

 

< removed by moderation > Even the 2015 class action resulted in an arbitrary decision by the court that Seagate overstated their claims by 7%. That's fucking margin of error in just about all applications. Just more lawyers cashing in on consumer ignorance.

 

https://topclassactions.com/lawsuit-settlements/closed-settlements/575-seagate-hard-disk-drive-class-action-lawsuit-settlement/

 

 


Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


I think the vitriol towards Backblaze is a bit unnecessary. Stats are stats, and they of course require context and examination to pull useful information as with any studies, reports, etc... The dismissing of data by so many users is disingenuous towards objectivity and a little disturbing on this forum.

 

All this "running DC workloads on consumer drives isn't practical to real failure," blah blah is a bit silly as well. Yes, because tech reviewers overclocking, running Prime95, FurMark, etc. are also super practical for the average teen looking into specs for his new PC so he can play Fortnite a few times a week.

 

But they are still relevant components of reviews, because more information is better. They can outline some best case scenarios and worst case scenarios. 

 

For most of the listed stuff with Backblaze, these constitute one end of the spectrum, and under certain "abuse," you can expect these types of failures. 


15 hours ago, valdyrgramr said:

Well, it sounds more like they're stress testing than anything. That's the most realistic way of testing reliability. No test is rock solid, sadly; it's about using the most rock-solid approach you can. It would be far more unrealistic for them to make tests based on assumptions. As for SSD reliability, that depends on the scenario and type of SSD. HDDs are more reliable for long-term storage, but tape drives are superior to both. However, there's a lot of ignored information. For example, LAwLz pointed out RMAs from one French retailer while ignoring the fact that retailers get bad batches. How do we know whether those RMA rates are exclusive to just them? Now, it would be nice if BB did point out the reasoning for their methodology and statements. But still, you have to take the info with a grain of salt, as it won't be utter perfection no matter what methodology they use.

We don't want them to do the tests in a specific way, nor are we saying that there is anything intrinsically wrong with their data. The issue is how they are presenting the data, and the severe lack of qualification of what that data means.

 

 



6 hours ago, divito said:

The dismissing of data by so many users is disingenuous towards objectivity and a little disturbing on this forum.

I think it's perfectly objective to point out that these SSDs are being used in a situation they are not designed for, and that unlike HDDs, SSDs do have a defined wear life based on workload.

 

This bears almost zero useful information for consumers, because the way they are being used will shorten the lifespan; wear endurance is not linear. You can kill an SSD with as little as 50TB of writes while the same SSD could survive 500TB of writes; it comes down purely to how those writes were done and over what period.
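Since the non-linearity of wear endurance trips people up, here's a minimal sketch of the idea. The write-amplification factors below are made up for illustration, not measured values:

```python
# Illustrative sketch: why "TB written" alone doesn't determine SSD wear.
# The write-amplification (WA) factors are assumed, not measured.

def nand_wear_tb(host_writes_tb: float, write_amplification: float) -> float:
    """NAND actually programmed = host writes x write amplification."""
    return host_writes_tb * write_amplification

# Large sequential writes on a mostly-empty drive: WA stays close to 1.
sequential = nand_wear_tb(500, 1.1)   # ~550 TB of NAND wear

# Small random writes on a nearly-full drive: WA can reach 10+.
random_full = nand_wear_tb(50, 11.0)  # ~550 TB of NAND wear

# 50 TB of hostile writes can wear the flash as much as 500 TB of friendly ones.
assert abs(sequential - random_full) < 1
```

Same flash wear, a 10x difference in host writes; that's the whole point about workload mattering more than the raw TB figure.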

 

Running these SSDs like this is nothing like overclocking a CPU or GPU; even using LN2 and OCing the hell out of a CPU won't reduce its life as much, so long as it's done properly. Also, none of what you mention is a CPU or GPU endurance test, and it gives you no insight into that, in the same way ATTO doesn't give you any insight into SSD endurance. Performance testing and endurance testing are entirely different things, and endurance testing is much harder to do correctly.

 

The most objective way to look at this data is to discard it because it's an outlier and it's dirty, like any proper statistician would. If you wanted to know whether you can get away with these SSDs in servers, then this is completely useful and relevant information; next to nobody reading that article would actually be interested in that, however. Those who want to cheap out on higher-end SSDs go for the likes of Samsung EVOs/Pros, which already have published endurance testing, so we already know they can be acceptably used in those situations. So why did Backblaze not use those? Because they cost more than what they went with.

 

If you want to cheap out on SSDs there are good ways to do it and there are bad ways, I present to you Backblaze aka the bad way.

 

If you want to test SSD endurance on a consumer SSD for consumer workloads, that's easy to do: run up IOmeter, set the correct parameters, run it on hundreds of SSDs over years, then publish that data. Seems a bit costly, doesn't it? The hardware vendor already has an endurance specification for the SSD, and they arrive at it in a much more applicable way. So we don't need Backblaze's unrealistic data; I'm more willing to trust SanDisk when they say product X is endurance-rated for 200TBW over 5 years.

 

We have 20 NetApp 200GB SSDs (Samsung OEM) that are 5 years old; zero have failed, and ONTAP reports 99% remaining life. We also have 48 NetApp 800GB SSDs (Samsung OEM) that are 4 years old; zero have failed, and ONTAP reports 99% remaining life.

 

We have 105 HCI VMware hosts with probably an average of 3 SSDs per host, so 315 SSDs of various sizes, all Samsung OEM, some as old as 8 years, also with zero failures. This does not include the dual M.2 boot SSDs each host has, either. We also have various other HPE servers with SSDs in them for host OS or RAID data arrays for specific use cases, so, to pull a number out of thin air, at least another 100 SSDs between 3 and 8 years old, also with none failed.

 

So my data shows that, across almost 700 SSDs, using the correctly rated ones has so far resulted in no failures. Sure, Backblaze's data has 1000 more data points than I do, but at least mine are use-case relevant.

 

My data also does not include replaced servers that ran for their expected lifespan and got replaced, I've only included what we have right now.


9 hours ago, leadeater said:

The amount of activity logging being written to those SSDs will be far higher than on any regular desktop.

 

The problem with using under-rated SSDs is that if you exceed the TRIM/GC and wear-leveling capability of the SSD, the NAND will wear out in fewer total TB written than if the write workload were lower.

I've met quite a decent number of people who leave things open / don't close tabs and end up spilling over into virtual memory (only then restarting their computer)... even I had that happen a few times (although in my case it was a program that gobbled up 28 GB of memory when it crashed). I also find that general consumers tend to fill their boot drive more (downloading files and leaving them until the drive is full). The result is that the NAND gets worn a lot quicker.

 

While I do get that activity logging is higher, on general consumer machines there is also often a lot more web browsing, with web pages being written to cache... then again, it's all dependent on what they specifically are running on the servers and how it's configured. *I've glanced over a bit of the SMART information... maybe tomorrow, if I get some free time and am bored, I'll write something to scrub the data for the dates the failures happened and look at the SMART data to see what kind of wear each drive had on it when it failed.*

 

Not saying that I agree with the article, but there is information to be gleaned from this, and I do think it has a lot more practical relevance to the consumer base than to DC cases (or light-use servers, where enterprise equipment isn't necessarily "in the budget").

 

  

12 hours ago, LAwLz said:

Besides, they have only had 7 of their Seagate drives die (out of at least 979) in 7 years. Considering how they have put consumer drives in their data centers I am really surprised the failure rate is that low.

Please read through the link you cited as a source again. It wasn't 7-year-old drives. At most it was 3 years, but it appears the average age was about 1.3 years...

 

Okay, I am a glutton for punishment; I went and looked up the data. The answer to the question: 1.85 years powered on... with the lowest at 1.2 years [this was as of Mar 31, so 3 months after the range in the source you quoted]. A note on your earlier mention of temps: the average temp was 35C, measured from the SSD SMART data [so a perfectly normal temp].
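For what it's worth, the conversion behind those ages is just SMART power-on hours to years. A quick sketch; the hour counts here are back-calculated from the years quoted above, not pulled from the dataset:

```python
# SMART attribute 9 reports power-on hours; converting to years gives the
# drive ages being discussed. Hour values below are illustrative.

def power_on_years(hours: float) -> float:
    """Convert SMART power-on hours to years."""
    return hours / (24 * 365)

print(round(power_on_years(16206), 2))  # ~1.85 years (the average quoted)
print(round(power_on_years(10512), 2))  # ~1.2 years (the lowest quoted)
```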

 

 

3735928559 - Beware of the dead beef


7 hours ago, divito said:

I think the vitriol towards Backblaze is a bit unnecessary. Stats are stats, and they of course require context and examination to pull useful information as with any studies, reports, etc... 

I think the problem here is that Backblaze's business model is explicitly built on using consumer drives, sometimes going as far as to "shuck" consumer external drives to get them.

 

Knowing this is what they do would make me wary of using them. So these SSD numbers tell me they have not changed; they are still using whatever they can get a good deal on, even if it's unsuitable for their business purposes.

 

My opinion here, as well as many others', is that the SSD stats are misleading because the drives are being evaluated the same way mechanical drives are, by "failing outright", not by failure conditions.

 

SSDs either hit a failure point where they no longer write, a failure point where they are completely inoperable, or a failure point where they are too slow to be functional.

 

For example, this is from a friend:

[attached screenshot: benchmark results for the failing SSD]

They said they bought this PC 6 months ago and the ADATA SU630 SSD started failing 3 days ago. The test on the SSD shows it at 21MB/s. It was literally running at 1/10th the speed of the HDD in the same computer. Relatively speaking, it was 25x slower than it should be, but it was still working and had no failure flags on it. If you look very carefully, you can see the drive is at 20TBW. It's a QLC drive rated for 800TBW. By all accounts, this drive is not suitable as a boot drive.

 

If you go back a few pages to where I posted the values from my SSDs, the hot S drive and the V SATA drive are both 6-year-old SSDs. They're TLC and basically still good.

 

This is the key thing about SSD reliability, especially as more consumer units reach the end of their useful life: SSDs are only predicted to fail when put into high-volume write situations, such as "logging" and "swap/page file" use.

 


33 minutes ago, Kisai said:

 

This is the key thing about SSD reliability, especially as more consumer units reach the end of their useful life: SSDs are only predicted to fail when put into high-volume write situations, such as "logging" and "swap/page file" use.

 

And even then, for most bog-stock consumers like myself, the pagefile isn't going to have much of an impact over the drive's working life.



2 hours ago, wanderingfool2 said:

I've met quite a decent number of people who leave things open / don't close tabs and end up spilling over into virtual memory (only then restarting their computer)... even I had that happen a few times (although in my case it was a program that gobbled up 28 GB of memory when it crashed). I also find that general consumers tend to fill their boot drive more (downloading files and leaving them until the drive is full). The result is that the NAND gets worn a lot quicker.

An important note regarding this, however: what matters is swap rate, not swap usage (amount). Spilling a bunch of unused memory into swap, where it stays mostly untouched, won't put much write load on the SSD beyond that first write of the data into swap. Constant ingesting of data and writing to disk that causes a higher swap rate will write a lot more data than 100 Chrome tabs pushed out to swap. This also isn't an occasional workload for Backblaze; it's their day-to-day business, 24/7.

 

How full the SSD is is a good point though; even with high log rotation and swap usage, they probably aren't running at over 80% capacity utilization, which does make a huge difference. I create my OS partition 20% smaller than the SSD capacity so I don't fall into that, my own idiocy protection lol.

 

2 hours ago, wanderingfool2 said:

Not saying that I agree with the article, but there is information to be gleaned from this, and I do think it has a lot more practical relevance to the consumer base than to DC cases (or light-use servers, where enterprise equipment isn't necessarily "in the budget").

Datacenter Read Intensive SSDs, whose designed purpose is OS boot drives, still have between 2x and 5x the endurance of a consumer SSD.

 

Read Intensive (RI) is about ~0.3-0.5 DWPD

Mixed Use (MU) is about ~1 DWPD. Note some ~3 DWPD class SSDs are also considered MU, usually very small capacity.

Write Intensive (WI) is ~3+ DWPD

 

A Samsung 850 EVO 500GB is 0.16 DWPD and the 850 Pro 512GB is 0.32 DWPD. Samsung only makes two DC/Enterprise SSDs this low in DWPD, the PM982a 480GB/960GB M.2 at 0.26 DWPD; everything else is 1 DWPD or greater.
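If anyone wants to sanity-check those DWPD figures, they fall out of the rated TBW divided by capacity and warranty period. A minimal sketch; the 150/300 TBW ratings and the 5-year window are my assumptions of the commonly published figures, so treat the output as illustrative:

```python
# Deriving DWPD (Drive Writes Per Day) from a drive's rated TBW.
# TBW and warranty inputs below are assumed, not quoted from a spec sheet.

def dwpd(tbw: float, capacity_gb: float, warranty_years: float) -> float:
    """DWPD = total rated writes / (capacity x warranty days)."""
    total_writes_gb = tbw * 1000          # TB -> GB
    days = warranty_years * 365
    return total_writes_gb / (capacity_gb * days)

print(round(dwpd(150, 500, 5), 2))   # 850 EVO 500GB (150 TBW / 5 yr) -> 0.16
print(round(dwpd(300, 512, 5), 2))   # 850 Pro 512GB (300 TBW / 5 yr) -> 0.32
```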

 

Also, I killed a 120GB ADATA SSD in under a year using it as an SSD tier in Windows Storage Spaces. Being the write layer for all new data and modifications, as well as for data being migrated up from the HDD tier, caused a heck of a lot of writes, and that was just my little old home server doing not a lot of anything other than an iSCSI Steam library and a few mostly idle VMs. I replaced the two 120GB ADATA SSDs with two 120GB SanDisks (of equally meh quality), and those died within the year as well. So I can say it's really easy to murder SSDs if you don't use them for their intended purpose; I've murdered 3, in fact, and the 4th (the surviving ADATA) is basically dead.

 

Back to commenting on Backblaze and their SSD choices: honestly, the price of DC/Enterprise RI SSDs, even from HPE with an attached next-day onsite warranty, is around what any person buying a Samsung EVO SSD pays. They really are cheaping out; whatever people are thinking, it's worse than that. I can't bring myself to do what they are doing even in my home server lab, and I'm not charging people for using my stuff.

 

I still like Backblaze's service; it's just that they make questionable decisions that I really can't see having that much impact on profit margins. Since they are custom building their servers, it would make sense to just buy slightly more expensive and vastly better SSDs and use them across multiple server lifecycles. A $150 SSD used twice is cheaper than two $100 SSDs.


1 hour ago, leadeater said:

I still like Backblaze's service; it's just that they make questionable decisions that I really can't see having that much impact on profit margins. Since they are custom building their servers, it would make sense to just buy slightly more expensive and vastly better SSDs and use them across multiple server lifecycles. A $150 SSD used twice is cheaper than two $100 SSDs.

That's what I was thinking too.

I am not sure if Backblaze has some information I am unaware of, but it seems to me like they are just flat-out incompetent. Maybe they do it so they can get some coverage in the news? Maybe they just don't know what they are doing? The fact that they list an expansion card as one of the "SSD models" makes me lean towards the latter.


SSDs are consumables; people really need to digest and understand this. Eat through their rated write endurance and they'll be doorstops in no time.


What bugs me is that the article from Backblaze is missing vital information, because I am assuming that they buy consumer-grade SSDs and mechanical disks, then throw them into the fray expecting enterprise levels of endurance... 😕 Not sure what else they would expect besides early failures.

The article is incredibly misleading; an SSD will die within weeks if you obliterate its potato flash write endurance in that time.


What the article is missing

  • What SSD models they were using and why
  • What the budget per server was for the SSDs vs the mechanical drives
  • What the TBW was at the point of failure (was it inside or outside the manufacturer-rated spec)
  • Whether the SSDs were behind a RAID/HBA controller with/without TRIM support, and whether the OS in use provided said support
  • Whether they are utilising or reading the additional manufacturer-specific S.M.A.R.T. data provided

As someone who has put over 50k SSDs into environments ranging from consumer to DC-grade devices, I find the whole thing misleading. I have seen numerous SSDs fail for all sorts of reasons, but the biggest one by far is exceeding the manufacturer-rated endurance, followed by Intel's 'DISABLE LOGICAL STATE', which still gives me nightmares to this day.

 

Please quote or tag me if you need a reply


So we're on to the 5th page of this thread even though it was clear after post number 2 or 3 that the people at Backblaze are basically just morons who use consumer gear in enterprise applications? Am I getting this right? 😄

 

I once put Samsung consumer HDDs in a 2-bay QNAP NAS, and one began to fail (sector errors) after 7 years of very spotty NAS usage. Even I then realized that there is a reason for NAS-grade drives. Well, I was lucky enough to buy WD Reds before they started labeling SMR drives as NAS-suitable 😛

 


* thread cleaned *

 

This might have resulted in a break in continuity and some replies might not make much sense, but everything couldn't be removed. If anything was missed, DO NOT QUOTE ME and instead please report the replies.

If you need help with your forum account, please use the Forum Support form !


7 hours ago, leadeater said:

An important note regarding this, however: what matters is swap rate, not swap usage (amount). Spilling a bunch of unused memory into swap, where it stays mostly untouched, won't put much write load on the SSD beyond that first write of the data into swap. Constant ingesting of data and writing to disk that causes a higher swap rate will write a lot more data than 100 Chrome tabs pushed out to swap. This also isn't an occasional workload for Backblaze; it's their day-to-day business, 24/7.

Yeah, maybe it's the people I hang around with... I know enough people who have managed to thrash their SSDs by operating at near-100% system RAM usage. Unfortunately, in my case (with the program eating 28 GB of system RAM), it was actively changing stuff... so many page writes.

 

Anyways, I decided to look at the 6 GB of compressed (30 GB uncompressed) data they provided to find the failures. (They stored everything in CSV files, 1 file per day, and removed drives from the file when they failed... such a pain; I had to quickly write a C# program to parse everything and find the files with failures.)

 

Just a quick rundown of the "Seagate BarraCuda SSD ZA250CM10002" and "ZA250CM10003" data points:

Average days: ~78; lowest: ~4.6 days and ~24 days; highest: ~228 days and ~96 days

Aside from 1 drive, none of them had touched the spare blocks per plane (as reported by SMART).

So yeah, 3 of the 9 drives I looked at didn't pass the 35-day power-on mark.
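In case anyone else wants to repeat that scan without writing the C# parser, here's a rough Python sketch of the same idea. The column names match Backblaze's published drive-stats CSVs as far as I know, but the directory and model strings are placeholders:

```python
# Rough sketch of the scan described above: Backblaze publishes one CSV per
# day with (among others) `date`, `serial_number`, `model` and `failure`
# columns, where `failure` is 1 on the day a drive died.

import csv
import glob
import os

def find_failures(data_dir: str, models: set) -> list:
    """Return the rows recording a failure for the given drive models."""
    failures = []
    for path in sorted(glob.glob(os.path.join(data_dir, "*.csv"))):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["model"] in models and row["failure"] == "1":
                    failures.append(row)
    return failures

# Usage (directory and model string hypothetical):
# bad = find_failures("drive_stats_2021", {"Seagate BarraCuda SSD ZA250CM10002"})
```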



1 hour ago, valdyrgramr said:

Then tell them that? Complaining on a forum they won't likely even read isn't going to get them to change it. If they don't do anything from there, then maybe don't use them as a source? Here is their contact page: https://www.backblaze.com/company/contact.html

That's like saying "don't point out that anti-vaxxers are wrong in public. Just tell them in private that they are wrong and they might change their ways".

I have contacted Backblaze, and so have many others. You can just check out the comments on their previous articles; there are a ton of people pointing out issues. They are well aware of the criticism people have of them, but they do not care.

That's why the second-best option, which is what I try to do, is to educate people on why the numbers from Backblaze are meaningless, and to recommend other statistics that are more relevant.


32 minutes ago, valdyrgramr said:

Well, I clarified: stop using them as a source if they don't care. Stop giving them a platform, and they'll die off like the anti-vaxxers. Not saying education is wrong, but the more you talk about them, the more attention they get.

I disagree with that approach.

Educating the listeners is far more important than trying to silence your opponent. Sure, my posting about them may give them more attention, but hopefully the attention I am giving them will make people question whether or not the information in their articles is actually relevant. Besides, it's not like I was the one who posted the article.

 

Also, I am not sure about anti-vaxxers in Klaus Nomi's Realm, but here in Sweden they are sadly not dying off. If anything, it seems like they are growing and attempts to silence them just further strengthens their beliefs.

 

You don't respond to misleading claims with silence. That just sends the listeners the message that the pseudoscience might be correct since their claims aren't being challenged.


1 hour ago, valdyrgramr said:

Well, I clarified: stop using them as a source if they don't care. Stop giving them a platform, and they'll die off like the anti-vaxxers. Not saying education is wrong, but the more you talk about them, the more attention they get.

The exact opposite will happen if we stay silent. We stayed silent on anti-vaxxers too long, and they are a growing problem; they are certainly not dying off. We stayed silent on anti-nuclear sentiment, and now no one understands nuclear technology except through the lens of 1950s cold-war fear mongering. If you stay silent, you do not fix the issue; you only give it time to grow and establish itself.

 

 

1 hour ago, LAwLz said:

 

Also, I am not sure about anti-vaxxers in Klaus Nomi's Realm, but here in Sweden they are sadly not dying off. If anything, it seems like they are growing and attempts to silence them just further strengthens their beliefs.

 

That's exactly what happens when you try to silence or mandate anything. I was having this argument about mandated vaccines (because they are trying to do it here); reason and logic will win more people over than mandates and laws.

 

1 hour ago, LAwLz said:

You don't respond to misleading claims with silence. That just sends the listeners the message that the pseudoscience might be correct since their claims aren't being challenged.

 

It also means they will continue, because they are making money from the uninformed while going unchallenged.

 



7 minutes ago, valdyrgramr said:

What if your attempts fail, though?  There are people too arrogant to stop/change.  Even the people who follow them will just start shouting and yelling, "fake news!"

So far, we have stopped several people on this forum from accepting/regurgitating bad information. That in and of itself is a win, because they will not promote it in future threads, much less use it when making a purchasing decision. Then there is the fact that this forum is Google-searchable, meaning that when people google "SSD reliability", this thread will surface in the results. That is already not a fail.



22 hours ago, leadeater said:

I think it's perfectly objective to point out that these SSDs are being used in a situation they are not designed for and unlike HDDs SSDs do have defined wear life based on workload.

 

This bears almost zero useful information to consumers because the way they are being used will shorten the lifespan, wear endurance is not linear. You can kill an SSD with as little as 50TB of writes and the same SSD could survive 500TB of writes, it's purely down to how those writes were done and over what length.

While fair to a point, that kind of glosses over the fact that every industry has reviews and statistics of things being used in ways they weren't "designed" for, showcasing the results. It's the reason we have channels dedicated to phone reviews where they see what it takes to damage a phone through severely unrealistic tests, or cars put through tests they're definitely not designed for; we can go on down the line in almost any product segment.

And yet no reasonable viewer watches these and goes, 'oh, 100% of iPhones got destroyed by a 200ft drop, better avoid an iPhone.' The majority of people take it as they should: these are the screen limits, glass limits, water limits of x, y, z, and they move on. The majority of things in those gauntlets won't happen to 99% of consumers, but consumers clearly pay attention to these things anyway.

 

And maybe I'm a weird consumer, but seeing what companies have better results in their consumer offerings with their product taking abuse is certainly interesting and useful. I'm not arguing that Backblaze's numbers are gospel or something, but that they're simply a data point. Maybe it's anomalous if you combine with other things, but those other data points will also have certain corrections and biases to account for. 

 

As someone who dealt with certain QA things in the past, there tend to be patterns in logistical and certain tech product stacks, and while Backblaze is not scientifically rigorous amongst all the uncontrolled variables and stuff they go through, it's merely a note I don't wish to dismiss.


12 minutes ago, valdyrgramr said:

But, that doesn't answer my question.  What do you do when you have people who won't accept that?

There are always going to be people who refuse to accept information. Our goal is not absolute conversion to a higher plane of understanding; our goal is simply to highlight the bad and save as many people as we can.

 

Success or failure is not an all-or-nothing thing; there are degrees of success. In this thread we have already accomplished more than we would have by sticking our heads in the sand and pretending BB will go away.



2 hours ago, divito said:

It's a reason we have channels dedicated to phone reviews where they see what it takes to damage a phone through severely unrealistic tests. Or cars put through tests they're definitely not designed for; we can go on down the line in almost any product segment.

All of these are obviously and clearly understood by viewers as unrealistic abuse of the product. Very few will have that immediate understanding of this Backblaze data, which is how it becomes a problem: it spreads unrealistic expectations and fears about failures that just won't happen outside of "doing a Backblaze".

 

It's the same reasoning I didn't complain nor was surprised when I killed 3 SSDs in my server being used as hot tier/write-back cache.
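To make the non-linearity mentioned above concrete, here's a rough sketch of how write amplification changes the amount of flash endurance the same host workload consumes. All the numbers here (the WAF values and the 300 TBW rating) are hypothetical, purely for illustration, not taken from any specific drive:

```python
# Illustrative sketch: the same 50 TB of host writes can consume very
# different amounts of NAND endurance depending on the write amplification
# factor (WAF) of the workload. All figures are hypothetical.

def nand_writes_tb(host_writes_tb: float, waf: float) -> float:
    """NAND wear (TB actually written to flash) for a given host workload."""
    return host_writes_tb * waf

RATED_TBW = 300  # hypothetical endurance rating for a consumer SSD, in TB

# Roughly: large sequential writes vs. mixed I/O vs. heavy small random writes
for waf in (1.1, 4.0, 10.0):
    worn = nand_writes_tb(50, waf)
    print(f"WAF {waf:>4}: {worn:6.1f} TB of NAND wear "
          f"({worn / RATED_TBW:.0%} of a {RATED_TBW} TBW rating)")
```

Under these assumed numbers, the same 50 TB of host writes ranges from a small fraction of the rated endurance to well past it, which is the sense in which wear "is not linear" in host writes alone.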

 

  

2 hours ago, divito said:

And maybe I'm a weird consumer, but seeing what companies have better results in their consumer offerings with their product taking abuse is certainly interesting and useful. I'm not arguing that Backblaze's numbers are gospel or something, but that they're simply a data point. Maybe it's anomalous if you combine with other things, but those other data points will also have certain corrections and biases to account for. 

I am too, but Backblaze exclusively used equally poor SSDs that either share the same NAND, the same controller, or both, or that would wear to death in exactly the same way every time. It would have been useful, as you say, if they had used a range of SSDs from the cheapest possible to actually good ones and everything in between. Where are the "Pro" SSDs from Samsung or XPG?

 

But alas, SSDs do have specific wear characteristics that are known before testing. For example, a 20 cm pencil can only take so many turns of sharpening before it's gone, and this will be consistent across all 20 cm pencils. Here sharpening is the SSD workload and the 20 cm pencil is the SSD, so unless you change the sharpener or the pencil, the result will be the same.
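The pencil analogy translates directly into the back-of-the-envelope lifetime estimate you can do from any drive's TBW rating. A minimal sketch, with hypothetical numbers (a 600 TBW rating and two assumed daily write volumes):

```python
# Rough lifetime estimate from a drive's TBW rating -- analogous to counting
# how many sharpenings a fixed-length pencil can take. Values are hypothetical.

def years_until_worn(tbw_rating_tb: float, gb_written_per_day: float) -> float:
    """Estimated years before the rated write endurance is exhausted."""
    tb_per_year = gb_written_per_day * 365 / 1000
    return tbw_rating_tb / tb_per_year

# A desktop writing ~30 GB/day vs. a write-back cache hammering ~2 TB/day:
print(f"{years_until_worn(600, 30):.1f} years")    # light desktop use
print(f"{years_until_worn(600, 2000):.2f} years")  # cache-tier abuse
```

With these assumed figures the same drive lasts decades on a desktop but under a year as a write-back cache, which is why running consumer SSDs in a Backblaze-style role predictably kills them.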


9 hours ago, leadeater said:

 

But alas, SSDs do have specific wear characteristics that are known before testing. For example, a 20 cm pencil can only take so many turns of sharpening before it's gone, and this will be consistent across all 20 cm pencils. Here sharpening is the SSD workload and the 20 cm pencil is the SSD, so unless you change the sharpener or the pencil, the result will be the same.

That reminds me of how some students found it satisfying to ram crayons into those large mechanical pencil sharpeners and then jam their pencils in to make a colorful mess. Electric pencil sharpeners also made it a bit harder to over-sharpen pencils, since there was a stop at the end, but they made the tip razor-sharp.

 

Anyway, the analogy works for both SSDs and OLED monitors.

 


2 hours ago, 26astr00 said:

Am I the only one who thought OP meant floppy disk drives when he said “disk drives”?

The "DD" in both FDD and HDD refers to "disk drive", so I dunno. Didn't confuse me.


