
Backblaze: SSDs might be as unreliable as disk drives

Lightwreather
Solved by LAwLz,
59 minutes ago, jagdtigger said:

Look, when I have a WD Green from 2011 that even survived running torrents and being in a RAID, versus 3 HDDs from 2017 which died at something like 30k hours while used in their intended use case, that's way more than just bad luck.

(I even have a 200 GB WD somewhere that still works with <10 bad sectors... and even those are old AF; the drive was one or two years old at the time, I think.)

/EDIT

Oh, and did I mention that for not much more I could get WD DC HC drives instead of crappy IronWolfs? Yeah, Seagate can go bust for all I care...

Seagate has lower RMA rates than Western Digital.

It was 0.93% vs 1.26% in 2017 (no more up-to-date data is available).

 

Failure rate of the 4TB WD Red - 2.95%

Failure rate of 4TB IronWolf - 2.81%

 

 

Source: https://www.hardware.fr/articles/962-6/disques-durs.html

 

It's RMA rates from a very large French retailer. 

 

I don't doubt your experience, but the fact of the matter is that your experience is just a very tiny sample and, through bad luck, it is very skewed compared to the real-world generalized numbers.

 

 

Edit: 

For those interested, here are the RMA statistics for HDDs and SSDs according to the French retailer, which I think are way more representative of what consumers doing consumer things can expect.

 

HDDs:

  • HGST 0.82%
  • Seagate 0.93%
  • Toshiba 1.06%
  • Western Digital 1.26%

 

SSDs:

  • Samsung 0.17%
  • Intel 0.19%
  • Crucial 0.31%
  • Sandisk 0.31%
  • Corsair 0.36%
  • Kingston 0.44%

I only skimmed the sources; did they state how the SSDs failed? At the least, I'd like to separate out "wear and tear" failures from the rest. If you massively exceed the rated endurance, I wouldn't count that as an unexpected failure but an expected one. However, if a drive suddenly dies after, say, 6 months without hitting its endurance limit, that's a very different failure mechanism.

 

They described their workload as boot and logging. I don't know what that logging looks like, but it isn't Chia plotting by the sounds of it. Still, as a wider-picture figure, the simplified rate of roughly 1% per year they're stating isn't unexpected for random failures that aren't endurance-related.

Gaming system: R7 7800X3D, Asus ROG Strix B650E-F Gaming Wifi, Thermalright Phantom Spirit 120 SE ARGB, Corsair Vengeance 2x 32GB 6000C30, RTX 4070, MSI MPG A850G, Fractal Design North, Samsung 990 Pro 2TB, Acer Predator XB241YU 24" 1440p 144Hz G-Sync + HP LP2475w 24" 1200p 60Hz wide gamut
Productivity system: i9-7980XE, Asus X299 TUF mark 2, Noctua D15, 64GB ram (mixed), RTX 3070, NZXT E850, GameMax Abyss, Samsung 980 Pro 2TB, random 1080p + 720p displays.
Gaming laptop: Lenovo Legion 5, 5800H, RTX 3070, Kingston DDR4 3200C22 2x16GB 2Rx8, Kingston Fury Renegade 1TB + Crucial P1 1TB SSD, 165 Hz IPS 1080p G-Sync Compatible


36 minutes ago, leadeater said:

 

 

That's like testing engine reliability for typical sedans by running them at 6000 RPM all day every day then saying the failure rate is high. Not too sure how many people drive around 24/7 and also in 1st gear.

Damn, this privacy thing has gone way too far; now they know how I drive.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


40 minutes ago, valdyrgramr said:

My roommate.

At least you don't have someone with a motorcycle who decides to just rev the ever-loving shit out of it at 7am for like 5 minutes. I don't think he goes anywhere; hell, I don't even know if he knows how to ride a motorcycle, because I've only ever heard it rev like a bat out of hell and then nothing, no Doppler effect of him riding off into the distance... where I wish he would stay 😠

 

off is the direction in which he can fuck to.

🌲🌲🌲

 

 

 

◒ ◒ 


1 hour ago, valdyrgramr said:

So, take it with a grain of salt if you desire. Or give them the funding to do what you desire.

The problem is twofold. One part is just the poor assembly of their "study", which, yes, we could simply take with a grain of salt and leave alone. But the other problem is that they are presenting this data as if it were rock solid, and people are going to use it to make purchasing decisions. That is a big no-no for many of us who actually care about good advice; we won't sit idly by and let poor data interpretation be presented as evidence of the sufficiency of a brand/product.

 

Just reading BB's landing page on this, it reads like you should not buy SSDs if you want reliability:

 

Quote

Once we controlled for age and drive days, the two drive types were similar and the difference was certainly not enough by itself to justify the extra cost of purchasing a SSD versus a HDD.

Quite an erroneous statement, given most desktops (even heavily used ones) won't max out an SSD's rated writes in 5 years (the age at which no HDD should be trusted), which is why many makers offer 5- and 10-year warranties. And to top that off, most good-quality SSDs need to have about 200GB written to them daily for 5 years before a failure even becomes likely (making them more than likely adequate for 10 years).
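To put rough numbers on that, here is a quick back-of-the-envelope sketch; the 200GB/day and 5-year figures are from the paragraph above, while the 600 TBW endurance rating is only an assumed, typical figure for a mid-range consumer drive.

```python
# Back-of-the-envelope SSD endurance check (illustrative, not Backblaze data).
DAILY_WRITES_GB = 200   # heavy desktop usage figure from the post above
YEARS = 5               # typical warranty period
RATED_TBW = 600         # assumed endurance rating of a mid-range consumer SSD

total_written_tb = DAILY_WRITES_GB * 365 * YEARS / 1000   # ~365 TB
print(f"Written over {YEARS} years: {total_written_tb:.0f} TB")
print(f"Share of rated endurance used: {total_written_tb / RATED_TBW:.0%}")
```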

 

And as anecdotal evidence, I present all the posts in this thread from people with 7-, 8-, 9- (and me with a 10-) year-old SSDs that still work as fast as the day they were bought.

 

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


2 hours ago, valdyrgramr said:

You're making assumptions here.  That's flawed data.  It doesn't represent the actuality of each user's workload.  It doesn't factor in user damage either.  So, unless you plan to factor in each scenario you won't have perfect results.

Who says results have to be perfect?

When I see statistics like "Samsung SSDs have a 0.17% RMA rate and Kingston has a 0.44% RMA rate", I don't think "when I buy a Samsung SSD I have precisely a 0.17% risk of it dying within 365 days". I think "this is a general number that should be fairly representative of what to expect".

If the same retailer observes 0.17% RMAs for one vendor and 1.26% for another, the reality is most likely that the 1.26% brand is slightly less reliable than the 0.17% one.

Sure, the exact percentages might not be a perfect and flawless representation of reality down to the ‰, but we don't need such precise data to make generalizations.

 

 

2 hours ago, valdyrgramr said:

Putting drives through hell and back to test what survives is closer to reality for what they can do.

No, it isn't, and if you don't understand the difference between a DC workload and a typical consumer workload, and why they are different, then I don't think there is any point in further discussion. I have tried to explain it to you, and at least one other person has made analogies, but you don't seem to understand them.

Let me repeat what I said earlier: 1 year of DC workloads is not the same as 3 years of typical workloads, just like driving a car through a desert will not show how reliable it is when used in a city.

The tests are completely different, and as a result they do not wear the drives in the same way. Results from one test do not translate to results for another test.

 

 

2 hours ago, valdyrgramr said:

You might not like that, but what company is going to ask every user their individual scenario then test that?

What do you mean? Who said anything about asking a company to test anything? All I am saying is that this test is misleading. I didn't ask Backblaze to make this article.

Also, we don't need to ask every individual user about their specific workload because, as I said earlier, people really aren't as unique as you think. Most users have almost identical storage usage patterns: mostly low-queue-depth reads, with writes that are small and bursty.

 

2 hours ago, valdyrgramr said:

And no, the flaws aren't realistic; it just sounds nice on paper. It's like how they use average-person logic when it comes to diets. They don't factor in genetics, health history, etc. Each person is unique. They just don't have the time or money to invest in researching everyone. So, they make up BS data and treat it as reality to stay relevant. That's far worse than what this company does, because at least they're being far more realistic than that. I wouldn't expect a company to do that. So, take it with a grain of salt if you desire. Or give them the funding to do what you desire.

I feel like you went on a tangent here that is not at all related to the conversation.

By the way, genetics plays a really small role in diets. If you want to lose weight, you eat less and move more. That's it. There is no special gene that breaks the laws of thermodynamics. Your genes only play a role in how likely you are to over- or under-eat, and in where the excess energy is stored. Nothing more and nothing less.


21 hours ago, LAwLz said:

No, Seagate drives aren't bad. You have been unlucky if that's your experience. 

Seagate has roughly the same failure rate as all other consumer HDD makers like WD, Toshiba and Samsung. 

Seagate got a bad rep because they had one or two models that were bad like a decade ago. 

You've been "unlucky" if that's your experience. So we're just going to pretend that Seagate hasn't rushed experimental drives to market on multiple occasions, sometimes topping a failure rate of 50%? Seagate makes such a good drive that they put a generous 1-year warranty on them! Do you realize that brands like GoHardDrive sell REFURBISHED WD and HGST drives with a longer warranty than that? I've got about 50 drives in my possession, mostly WD, and have had issues only with Seagate. HGST/Hitachi, WD, and even my lone Toshiba have held up through the years, but Seagates have been dropping like flies. At work, where we deal with significantly more drives, it's much the same story.

 

And some of this happened well within the last decade. If you want "a decade ago", look at Samsung drives, which you oddly compared to current HDD makers.

QUOTE ME IF YOU WANT A REPLY!

 

PC #1

Ryzen 7 3700x@4.4ghz (All core) | MSI X470 Gaming Pro Carbon | Crucial Ballistix 2x16gb (OC 3600mhz)

MSI GTX 1080 8gb | SoundBlaster ZXR | Corsair HX850

Samsung 960 256gb | Samsung 860 1gb | Samsung 850 500gb

HGST 4tb, HGST 2tb | Seagate 2tb | Seagate 2tb

Custom CPU/GPU water loop

 

PC #2

Ryzen 7 1700@3.8ghz (All core) | Aorus AX370 Gaming K5 | Vengeance LED 3200mhz 2x8gb

Sapphire R9 290x 4gb | Asus Xonar DS | Corsair RM650

Samsung 850 128gb | Intel 240gb | Seagate 2tb

Corsair H80iGT AIO

 

Laptop

Core i7 6700HQ | Samsung 2400mhz 2x8gb DDR4

GTX 1060M 3gb | FiiO E10k DAC

Samsung 950 256gb | Sandisk Ultra 2tb SSD


17 minutes ago, valdyrgramr said:

Maybe don't treat them as the holy grail of data and compare results on your own then? 

When did I do that?

 

19 minutes ago, valdyrgramr said:

Most tests are misleading, shocker.

So? We're not allowed to point out misleading tests just because other tests might be misleading too?

 

19 minutes ago, valdyrgramr said:

The point is they put the HW through utter hell and that's the results they get. 

Yes, and the results are meaningless.

 

33 minutes ago, valdyrgramr said:

However, a lot of variables are still ignored on top of that which makes all tests flawed.

It seems to me like you are just shrugging off all tests as being equal because "all tests are flawed", even though the degree to which they are "flawed" varies a lot.

I think the RMA rates are a much better indicator of reliability for consumers than the test Backblaze did. Do you not agree?

Backblaze's test gives us an indication of how an improperly built data center will function. Nothing more and nothing less.

The stats I posted give us an indication of how likely a consumer SSD is to break when used in a consumer computer. Nothing more and nothing less.

 

Please note that I used the word "indication", because none of these stats are accurate down to the fraction of a percentage point. It's impossible to have such accuracy.

Yes, there might be a handful of people who incorrectly RMA'd their drives, or didn't RMA their drives when they should have. But those are most likely a rather insignificant number that does not invalidate the overall statistics. Have you heard of a confidence interval? Just because a result has a confidence interval does not mean it is flawed and can be discarded.
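For anyone curious what a confidence interval would do to numbers like these, here's a minimal sketch using the Wilson score interval; the 0.17% rate is from the hardware.fr figures above, but the 20,000-drive sample size is purely an assumption, since the retailer doesn't publish its volumes.

```python
import math

# Wilson score interval: a standard way to put error bars on a failure/RMA
# rate that was estimated from a finite sample of drives.
def wilson_interval(failures, n, z=1.96):  # z = 1.96 ~ 95% confidence
    p = failures / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    spread = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - spread) / denom, (centre + spread) / denom

# Example: a 0.17% RMA rate observed across an ASSUMED 20,000 drives sold.
lo, hi = wilson_interval(failures=34, n=20_000)
print(f"95% CI: {lo:.2%} to {hi:.2%}")  # roughly 0.12% to 0.24%
```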

 

 

40 minutes ago, valdyrgramr said:

And no, losing weight is more complex than eating less and moving. It is about what you eat, as different types of calorie sources do matter, plus the type and rate of motion, genetics, health history, and even more. Each person is unique. Again, you're ignoring important variables.

I'm not going to reply to this anymore. You believe what you want to believe.


20 minutes ago, BigDamn said:

So we're just going to pretend that Seagate hasn't rushed experimental drives to market on multiple occasions, sometimes topping a failure rate of 50%?

Got any source?

I don't deny that they have had some bad drives, but so have all brands as far as I am aware. I'd like a source on the 50% failure rate though, and on it having happened multiple times in the last 10 years.

 

 

21 minutes ago, BigDamn said:

Seagate makes such a good drive they put a generous 1 year warranty on them!

Which of their drives only has a 1 year warranty? I'm looking at their most common drives right now and they all have 2 year warranties or longer, which is the same as WD.

Drives I looked up:

Seagate BarraCuda 2TB - 2 year warranty

Seagate Momentus - 3 year warranty

Seagate IronWolf - 3 year warranty

Seagate IronWolf Pro - 5 year warranty

 

WD Blue - 2 year warranty

WD Black - 5 year warranty

WD Red - 3 year warranty

WD Green - 2 year warranty

 

 

37 minutes ago, BigDamn said:

Do you realize that brands like GoHardDrive sell REFURBISHED WD and HGST drives with a greater warranty than that?

Now you're cherry picking.

Not only are you wrong about Seagate only having a 1-year warranty, most WD and HGST drives on GoHardDrive also only have a 1-year warranty. But if you cherry-pick, like you are doing right now, you can find Seagate drives on GoHardDrive with more than a 1-year warranty. Like this one, for example.

 

 

40 minutes ago, BigDamn said:

I've got about 50 drives in my possession, mostly WD, and have had issues only with Seagate. HGST/Hitachi, WD, and even my lone Toshiba have held up through the years, but Seagates have been dropping like flies. At work where we deal with significantly more drives it's much of the same story.

So what?

I've had two WD Red drives die in my NAS. I've only had 1 Seagate drive die, though. See how easy it is to just tell someone your anecdotal experience? Just because you have experienced something does not mean that is what everyone will experience.


5 hours ago, Kisai said:

Dell NVMe drives are always Toshiba, SK Hynix, or Samsung PM981 (basically the OEM-only version of the EVO drives). I never paid attention to the SATA SSDs in the cheaper laptops. Suffice it to say that if they were using SATA SSDs for cheapness, and if they were QLC, DRAM-less drives, they likely slowed down after being used for 6 months.

Yeah, fair enough... I thought it was just WD because out of all the machines I've had come in it's always been WD HDD stuff (and then I saw Dell's Canadian site and saw mostly WD- and Dell-branded stuff in their SSD section).

 

5 hours ago, leadeater said:

What consumer is going to have the amount of active power on time and also write workload as Backblaze, something that is far closer to zero than it is toward everyone, far closer.

I know plenty of people who keep their computer on 24/7, with stuff running in the background (so the boot drive will always remain in an active state).

This was their OS drives only (by the looks of the numbers and what was stated), so I'd imagine the workload is a lot more comparable to standard users'. So I would argue it depends on what Backblaze is doing with the OS, because I know a ton of users who abuse their writes more than the servers I run, since the data there gets written to the data drives.

 

6 hours ago, LAwLz said:

Well like I said, this article is only really relevant to people who build data centers in an improper way. I'd argue that if you are going to build a DC, then you should be able to afford building it properly. If you can't afford that, then you probably don't need a DC to begin with and you're better off using XaaS.

It depends on the economics of things, and you are incorrectly assuming I am talking about building a DC. I mean, in my office alone we have a local PC used as a print server and NAS drive, where it doesn't matter if the OS drive has a 1% failure rate because we are there and have backups (and there is always someone on shift during the time it would be used). If, and it's a big if, Backblaze's data is correct, then at least it tells you to avoid Seagate's BarraCuda SSD.

 

6 hours ago, LAwLz said:

The temperatures inside DC servers could potentially be much higher than inside a home PC

We rented a cabinet and a half rack at a DC; air intake as read by the servers never peaked above 27C, and most of the time it was 21C. The only reason it ever peaked at 27C was because their HVAC malfunctioned. Plus, the high-powered fans that blow the air from the hot to the cold side mean excellent airflow for cooling. I'd imagine in the summer, in a smallish room, your computer is warmer than most DCs.

3735928559 - Beware of the dead beef


1 hour ago, LAwLz said:

Which of their drives only has a 1 year warranty?

Just a guess, but I think the shop told them that. In my case, for example, the shop claimed that they got it with a one-year warranty and that I'd have to go to Seagate directly if I was so certain about it... 🤬 Needless to say, I dropped that shop completely.


On 10/2/2021 at 6:25 AM, mr moose said:

Backblaze is as defective as you can get. I have yet to see them redeem themselves from their last attempt at pretending to be science.

 

A proper study of HDD and SSD longevity has to explain how it controlled for batch (random acquisition of specific drives) and end-use variance (things like temperature, primary use conditions, etc.). There is no point in comparing 1600 low-end SSDs to 1600 enterprise-grade HDDs, or even to 3400 shucked Seagate specials.

 

 

You would think that a provider that treats storage as a fungible commodity would want extra data points, and then resell the stats back to the vendors as part of their R&D for product improvement.

Nooooo, that makes too much sense. What am I thinking! 🙄


6 hours ago, LAwLz said:

Well like I said, this article is only really relevant to people who build data centers in an improper way. I'd argue that if you are going to build a DC, then you should be able to afford building it properly. If you can't afford that, then you probably don't need a DC to begin with and you're better off using XaaS.

Sometimes failure pays. It's been found in multiple studies that hardware is rather resilient against higher temperatures. A chicken coop design might even be appropriate.

So is it really bad if you have a higher hardware failure rate if the savings in energy offsets the loss?


23 minutes ago, wanderingfool2 said:

I mean in my office alone we have a local PC used as a print server and NAS drive...

That workload is not the same as the workload Backblaze tested.

 

24 minutes ago, wanderingfool2 said:

If, and it's a big if, Backblaze's data is correct, then at least it tells you to avoid Seagate's BarraCuda SSD.

No, it doesn't, because we don't know how other drives would perform in the same test. Besides, they have only had 7 of their Seagate drives die (out of at least 979) in 7 years. Considering that they have put consumer drives in their data centers, I am really surprised the failure rate is that low.
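For context, here is a rough annualized-failure-rate sketch using those numbers; the 7 failures and ~979 drives are from this thread, while the average deployment time per drive is an assumption, since the fleet was built up over several years rather than running at full size for all 7 years.

```python
# Rough annualized failure rate (AFR) = failures / accumulated drive-years.
FAILURES = 7
DRIVES = 979
AVG_YEARS_DEPLOYED = 3.0   # assumed average age; not every drive ran the full 7 years

drive_years = DRIVES * AVG_YEARS_DEPLOYED
afr = FAILURES / drive_years
print(f"AFR ~ {afr:.2%} per year")   # ~0.24%/year under these assumptions
```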

 

27 minutes ago, wanderingfool2 said:

We rented a cabinet and a half rack at a DC; air intake as read by the servers never peaked above 27C, and most of the time it was 21C. The only reason it ever peaked at 27C was because their HVAC malfunctioned. Plus, the high-powered fans that blow the air from the hot to the cold side mean excellent airflow for cooling. I'd imagine in the summer, in a smallish room, your computer is warmer than most DCs.

I don't think you understand the full story here.

The cold aisles in my data center sit at around 20C as well. But that's just the temperature of the air the servers suck in. It doesn't tell you how hot it is inside the servers, and that temperature can be significantly higher than what you will find inside a desktop PC.

It was just an example of how things are different in a server environment vs a desktop environment. 

Just because the air on the outside is the same or cooler does not mean the air around the SSD is.

 


3 minutes ago, StDragon said:

Sometimes failure pays. It's been found in multiple studies that hardware is rather resilient against higher temperatures. A chicken coop design might even be appropriate.

So is it really bad if you have a higher hardware failure rate if the savings in energy offsets the loss?

That's a fair way to look at things I guess.

I don't like it when things are designed to fail because it's cheaper to build them low quality and then repair them as they break, but that's just me.

I don't think it's a good idea to listen to that type of business when it comes to failure rates though.


8 hours ago, porina said:

@WolframaticAlpha Is the A400 that bad? 😄 The one I have I think I'll have to wipe and include as part of a system sale later. The only requirements I had at the time I got it were that it be an SSD and cheap. It was the boot drive for a cruncher, back in the days when people ran software 24/7 without expecting profit.

Idk, I am honestly happy as long as my boot time (to desktop) is less than 20 seconds and VS Code opens in <5 seconds. Everything else is useless. I just bought the A400 because it was on clearance (yeah, it was a stupid buy 😛). I don't know whether it is bad, but having one made me a bit scared.


9 minutes ago, LAwLz said:

There never was a 3TB Seagate Barracuda, let alone one with a 3TB platter.

There absolutely were 3TB Seagate Barracuda drives. The ST3000DM001 was a very failure-prone drive, and was even involved in a class-action lawsuit. It used 1TB platters, but it was a 3TB drive in total.

 

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


9 hours ago, leadeater said:

What consumer is going to have the amount of active power on time and also write workload as Backblaze, something that is far closer to zero than it is toward everyone, far closer.

 

That's like testing engine reliability for typical sedans by running them at 6000 RPM all day every day then saying the failure rate is high. Not too sure how many people drive around 24/7 and also in 1st gear.

For their boot drives I wouldn't say that's fair; they just host the OS and deal with log files, no caching.

13 hours ago, Fetzie said:

Backblaze can call me when they start putting drives designed for datacentre use in their datacentres.

They almost always do for HDDs now.

Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between 9 to 3 PST

NightHawk 3.0: R7 5700x @, B550A vision D, H105, 2x32gb Oloy 3600, Sapphire RX 6700XT  Nitro+, Corsair RM750X, 500 gb 850 evo, 2tb rocket and 5tb Toshiba x300, 2x 6TB WD Black W10 all in a 750D airflow.
GF PC: (nighthawk 2.0): R7 2700x, B450m vision D, 4x8gb Geli 2933, Strix GTX970, CX650M RGB, Obsidian 350D

Skunkworks: R5 3500U, 16gb, 500gb Adata XPG 6000 lite, Vega 8. HP probook G455R G6 Ubuntu 20. LTS

Condor (MC server): 6600K, z170m plus, 16gb corsair vengeance LPX, samsung 750 evo, EVGA BR 450.

Spirt  (NAS) ASUS Z9PR-D12, 2x E5 2620V2, 8x4gb, 24 3tb HDD. F80 800gb cache, trueNAS, 2x12disk raid Z3 stripped

PSU Tier List      Motherboard Tier List     SSD Tier List     How to get PC parts cheap    HP probook 445R G6 review

 

"Stupidity is like trying to find a limit of a constant. You are never truly smart in something, just less stupid."

Camera Gear: X-S10, 16-80 F4, 60D, 24-105 F4, 50mm F1.4, Helios44-m, 2 Cos-11D lavs


4 hours ago, wanderingfool2 said:

I know plenty of people who keep their computer on 24/7, with stuff running in the background (so the boot drive will always remain in an active state).

This was their OS drives only (by the looks of the numbers and what was stated)...so I'd imagine the workload is a lot more comparable to standard users....so I would argue it would depends what Blackblaze is doing with the OS, because I know a ton of users who abuse their writes more than the servers I run...because the data there gets written to the data drives

The amount of activity logging being written to those SSDs will be far higher than on any regular desktop.

 

The problem with using under-rated SSDs is that if you exceed the TRIM/GC and wear-leveling capability of the SSD, the NAND will wear out in fewer total TB written than if the write workload were lower.

 

Quote

describing these drives as boot drives is a misnomer as boot drives are also used to store log files for system access, diagnostics, and more. In other words, these boot drives are regularly reading, writing, and deleting files in addition to their named function of booting a server at startup.

So we can add swap to this, and maybe other things they haven't said or don't want to say.

 

But let's throw some numbers at this. Maybe they are using 120GB SSDs with a 0.2 DWPD rating; that's only 24GB per day before you exceed the capability of the SSD. With the amount of access logs that our Apache/Tomcat and IIS servers generate, this is likely more than that for them from logs alone, and their traffic per server is likely greater than ours. Then you have swap: writing a lot of data to a slower disk subsystem will put a lot of pressure on the OS write cache, which will probably put pressure on swap. This was the case for the Ceph servers I ran, so I don't see any reason it wouldn't be for what they are using.

 

Maybe they are using 240GB, 0.5 DWPD SSDs; that's 120GB per day, which is vastly better and actually likely to be within their workload requirements. The problem is they don't state how much write workload they have; they may not have even bothered to find out.
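A minimal sketch of that DWPD arithmetic, using the same two hypothetical drive configurations as above:

```python
# DWPD (drive writes per day) converted to a sustainable daily write budget.
# capacity_gb * dwpd = GB of host writes per day the drive is rated to absorb
# over its warranty period (real endurance also depends on write amplification).
def daily_write_budget_gb(capacity_gb: int, dwpd: float) -> float:
    return capacity_gb * dwpd

for capacity_gb, dwpd in [(120, 0.2), (240, 0.5)]:
    print(f"{capacity_gb}GB @ {dwpd} DWPD -> {daily_write_budget_gb(capacity_gb, dwpd):.0f}GB/day")
# 120GB @ 0.2 DWPD -> 24GB/day; 240GB @ 0.5 DWPD -> 120GB/day, matching the figures above.
```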

 

P.S. I looked at a few of the SSDs said in this thread to be what they are using; they were all 0.1-0.2 DWPD. Maybe they have some better ones that I didn't check. Even datacenter read-intensive SSDs targeted at boot drives and near-read-only workloads have 0.5 DWPD, by the way.


16 minutes ago, GDRRiley said:

For their boot drives I wouldn't say that's fair; they just host the OS and deal with log files, no caching.

As per my post above, it's super easy to exceed these consumer SSDs' write ratings with that, really easy.


1 minute ago, LAwLz said:

Sorry, I meant "there never was a 3TB 7200.11". That's why, in the next sentence, I said "perhaps you're thinking of the 7200.14", which was also from the Barracuda lineup.

 

But yes, you're right that the ST3000DM001 had higher failure rates than most drives. It's also worth mentioning that the ST3000DM001 was released a decade ago, in 2011.

Ah, no worries. I wasn't sure if I was understanding you correctly there. 

Phobos: AMD Ryzen 7 2700, 16GB 3000MHz DDR4, ASRock B450 Steel Legend, 8GB Nvidia GeForce RTX 2070, 2GB Nvidia GeForce GT 1030, 1TB Samsung SSD 980, 450W Corsair CXM, Corsair Carbide 175R, Windows 10 Pro

 

Polaris: Intel Xeon E5-2697 v2, 32GB 1600MHz DDR3, ASRock X79 Extreme6, 12GB Nvidia GeForce RTX 3080, 6GB Nvidia GeForce GTX 1660 Ti, 1TB Crucial MX500, 750W Corsair RM750, Antec SX635, Windows 10 Pro

 

Pluto: Intel Core i7-2600, 32GB 1600MHz DDR3, ASUS P8Z68-V, 4GB XFX AMD Radeon RX 570, 8GB ASUS AMD Radeon RX 570, 1TB Samsung 860 EVO, 3TB Seagate BarraCuda, 750W EVGA BQ, Fractal Design Focus G, Windows 10 Pro for Workstations

 

York (NAS): Intel Core i5-2400, 16GB 1600MHz DDR3, HP Compaq OEM, 240GB Kingston V300 (boot), 3x2TB Seagate BarraCuda, 320W HP PSU, HP Compaq 6200 Pro, TrueNAS CORE (12.0)


14 minutes ago, BigDamn said:

primarily due to heads too weak for three platters

In this case there would have been fewer heads due to the lower number of platters, and the heads themselves would likely not have been changed much, if at all. There really isn't much need to do that unless you change the recording medium being used, i.e. HAMR. The head itself is floating anyway, and it's the actuator arm that is the mechanical part under load, and that wouldn't have changed.

 

The failures were almost certainly due to internal heat from the reduced volume of the disk; internal structure changes and the heat coating would have dealt with that.


2 hours ago, leadeater said:

As my post above, it's super easy to exceed these consumer SSDs with writes for that, really easy.

Eh, a decent consumer one should do fine. An 850 EVO 120-240GB is rated for 75TB over 5 years, while an 870 EVO 240GB is rated for 150TB, which works out to 41GB and 82GB a day. Logs have to be hurting to write 40GB+ a day, and 80GB a day is 0.3 DWPD for a 240GB drive, which is more than you'd expect from an 860 DCT at 0.2 DWPD.

Edited by GDRRiley
added context
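To sanity-check those figures, here is a small sketch converting a TBW rating over a warranty period into a daily write budget and an equivalent DWPD; the TBW ratings and capacities are the ones quoted above, and the 5-year period matches the rating window mentioned there.

```python
# Convert a TBW rating over a warranty period into GB/day and an equivalent DWPD.
def tbw_to_daily(tbw_tb: float, years: float, capacity_gb: float):
    gb_per_day = tbw_tb * 1000 / (years * 365)
    return gb_per_day, gb_per_day / capacity_gb  # (GB/day, DWPD)

for name, tbw, cap in [("850 EVO 120GB", 75, 120), ("870 EVO 240GB", 150, 240)]:
    gb_day, dwpd = tbw_to_daily(tbw, 5, cap)
    print(f"{name}: {gb_day:.0f}GB/day, ~{dwpd:.2f} DWPD")
# 75TB/5yr ~ 41GB/day; 150TB/5yr ~ 82GB/day ~ 0.34 DWPD on a 240GB drive.
```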

Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between 9 to 3 PST

NightHawk 3.0: R7 5700x @, B550A vision D, H105, 2x32gb Oloy 3600, Sapphire RX 6700XT  Nitro+, Corsair RM750X, 500 gb 850 evo, 2tb rocket and 5tb Toshiba x300, 2x 6TB WD Black W10 all in a 750D airflow.
GF PC: (nighthawk 2.0): R7 2700x, B450m vision D, 4x8gb Geli 2933, Strix GTX970, CX650M RGB, Obsidian 350D

Skunkworks: R5 3500U, 16gb, 500gb Adata XPG 6000 lite, Vega 8. HP probook G455R G6 Ubuntu 20. LTS

Condor (MC server): 6600K, z170m plus, 16gb corsair vengeance LPX, samsung 750 evo, EVGA BR 450.

Spirt  (NAS) ASUS Z9PR-D12, 2x E5 2620V2, 8x4gb, 24 3tb HDD. F80 800gb cache, trueNAS, 2x12disk raid Z3 stripped

PSU Tier List      Motherboard Tier List     SSD Tier List     How to get PC parts cheap    HP probook 445R G6 review

 

"Stupidity is like trying to find a limit of a constant. You are never truly smart in something, just less stupid."

Camera Gear: X-S10, 16-80 F4, 60D, 24-105 F4, 50mm F1.4, Helios44-m, 2 Cos-11D lavs


41 minutes ago, GDRRiley said:

Eh, a decent consumer one should do fine. An 850 EVO 120-240GB is rated for 75TB over 5 years, while an 870 EVO 240GB is rated for 150TB, which works out to 41GB and 82GB a day. Logs have to be hurting to write 40GB+ a day, and 80GB a day is 0.3 DWPD, which is more than you'd expect from an 860 DCT at 0.2 DWPD.

Yep, but they didn't use decent ones 🙃

 

Samsung EVOs and Pros are actually great budget server SSDs; they just lack power-loss protection but are otherwise identical to their server counterparts component-wise.


1 minute ago, leadeater said:

Yep, but they didn't use decent ones 🙃

 

Samsung EVO's and Pro's are actually great budget server SSDs, they just lack power loss protection but otherwise identical to their servers ones component wise.

fair

 

yeah, I love Samsung drives as recording media for cameras.

Good luck, Have fun, Build PC, and have a last gen console for use once a year. I should answer most of the time between 9 to 3 PST

NightHawk 3.0: R7 5700x @, B550A vision D, H105, 2x32gb Oloy 3600, Sapphire RX 6700XT  Nitro+, Corsair RM750X, 500 gb 850 evo, 2tb rocket and 5tb Toshiba x300, 2x 6TB WD Black W10 all in a 750D airflow.
GF PC: (nighthawk 2.0): R7 2700x, B450m vision D, 4x8gb Geli 2933, Strix GTX970, CX650M RGB, Obsidian 350D

Skunkworks: R5 3500U, 16gb, 500gb Adata XPG 6000 lite, Vega 8. HP probook G455R G6 Ubuntu 20. LTS

Condor (MC server): 6600K, z170m plus, 16gb corsair vengeance LPX, samsung 750 evo, EVGA BR 450.

Spirt  (NAS) ASUS Z9PR-D12, 2x E5 2620V2, 8x4gb, 24 3tb HDD. F80 800gb cache, trueNAS, 2x12disk raid Z3 stripped

PSU Tier List      Motherboard Tier List     SSD Tier List     How to get PC parts cheap    HP probook 445R G6 review

 

"Stupidity is like trying to find a limit of a constant. You are never truly smart in something, just less stupid."

Camera Gear: X-S10, 16-80 F4, 60D, 24-105 F4, 50mm F1.4, Helios44-m, 2 Cos-11D lavs

