Western Digital gets sued for sneaking SMR disks into its NAS channel

Pickles von Brine
5 hours ago, wanderingfool2 said:

The last calculator on the website is the correct one to use (which is the 69.1%).  Otherwise I could claim a horrible drive is only 0.001% slower (e.g. if I have a drive that rebuilds in 1 day, and another that rebuilds in 100,000 days I could say it is only 0.001% slower when in actuality it's 1,000,000% slower)

 

 

I must have missed something because I didn't hear him say they were slower by that percentage, but that the other was faster by that percentage. Now I can't find the specific result.

 

I have watched the videos too many times now and I am starting to confuse which results apply to which device, let alone which HDD.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


8 hours ago, wanderingfool2 said:

I had watched the 90-95% full rebuild, and the write test. In the full rebuild test they actually compared mixed (single SMR) and pure SMR.  I didn't watch the remaining videos, as I really don't like his testing methodology (on top of using the wrong calculation for the metrics, and yes I know I said 8 hours when it's closer to 7).

Do you actually honestly expect that a pure CMR 90%+ full array rebuild is going to go significantly faster than the CMR+SMR, I'll use the QNAP, 8 hours and 18 minutes when the empty array rebuild using all CMR took 8 hours and 2 minutes? I think you just let your bias against this channel cloud your judgement here, the best case is 16 minutes difference and yet you are expecting a significant difference?

 

Like I said, when you have enough data you can simply call it and end there, because further tests may not be needed to come to a conclusion. This does mean you could miss something, but not here with the point you are trying to make; only the client load is missing.

 

Thing is, if rebuild time is, for whatever reason, your breaking point, then why are you not buying WD Red Pro, WD Gold, Seagate IronWolf (7200 RPM models), Seagate IronWolf Pro, or Seagate Exos? These all have much faster rebuild times. Why is it that for you 10 hours is fine and 18 hours is not? Do you not have backups of your important data? Are you running single parity? Why are you not concerned with a failure of the NAS itself?

 

P.S. If I use the QNAP times it's 12 hours vs 8 hours, so I can't see how that is make or break on a better-resourced NAS than the Synology. Client load is the make or break, but if close of business is 6 PM then you wouldn't need to resume full I/O load until 6 AM anyway.

 

Just so you know I trust STH far more but they did not do any tests other than with ZFS or single disk.

 

8 hours ago, wanderingfool2 said:

I've seen 4 bay ones filled to nearly 100% before in the wild.  Actually, now I think of it...I've actually seen a rebuild before matching that (albeit 2TB drives at the time).  I'll agree, SMR's can fit into a NAS situation (they can be cost effective, especially if you don't have to worry about rebuilds in the future).

Load is not how full it is, it's how much actual I/O load you put on the NAS and also how much change rate you have. If, like the great majority of these, they do not have high I/O loads, rebuild times are not going to blow out. If you do happen to have high I/O loads or do a lot of large file transfers in a single session (media content), it's going to need to be in the area of about 100GB to 200GB on a 4 bay NAS to fill the CMR zone, even more on high bay count NAS's. And because you are likely connected to the NAS at 1Gbps, that transfer will need to be even larger than that, because your data rate in is 100MB/s, the SMR write rate per disk is 20MB/s-40MB/s, and disk utilisation is not going to be 100%. All the major NAS brands today, and for a decent while now, have SSD write-back caching and tiering options with either internal M.2 slots or dedicated 2.5" bays; use those if you need performance.
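To put rough numbers on that, here is a minimal back-of-envelope sketch using the figures quoted above (100-200GB of CMR-managed cache on a 4 bay NAS, ~100MB/s inbound over 1Gbps, 20-40MB/s sustained shingled writes per disk); the cache size, write rates and disk count are assumptions for illustration, not measured specs of any particular drive:

```python
# Back-of-envelope estimate of how long a sustained 1Gbps write burst can run
# before the CMR-managed cache region of a DM-SMR array fills up.
# All numbers are assumptions taken from the post above, not drive specs.

cache_gb = 150                    # assumed CMR cache region across the NAS (100-200GB quoted)
inbound_mb_s = 100                # ~1Gbps client link, saturated
smr_drain_mb_s_per_disk = 30      # assumed 20-40MB/s sustained shingled write rate
data_disks = 3                    # e.g. 4 bay RAID5: 3 data disks absorb the writes

drain_mb_s = smr_drain_mb_s_per_disk * data_disks   # what the shingled zones can absorb
net_fill_mb_s = inbound_mb_s - drain_mb_s           # rate the cache actually fills at

if net_fill_mb_s <= 0:
    print("The shingled zones keep up; the cache never fills at this load")
else:
    hours = cache_gb * 1024 / net_fill_mb_s / 3600
    print(f"~{hours:.1f} h of continuous 1Gbps writes before the CMR zone is full")
```

With those assumed numbers the cache only fills after several hours of non-stop full-rate writes, which is the point being made about typical SMB loads.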

 

I just think a lot of the rhetoric is blown way out unnecessarily and people are making bad or questionable points that under analysis don't actually hold up to scrutiny.


7 hours ago, mr moose said:

I must have missed something because I didn't hear him say they were slower by that percentage, but that the other was faster by that percentage. Now I can't find the specific result.

Can't remember the timestamp in which he says anything about it, the results he wrote though are at timestamp 16:51 on the rebuild full video.

 

7 hours ago, leadeater said:

Do you actually honestly expect that a pure CMR 90%+ full array rebuild is going to go significantly faster than the CMR+SMR, I'll use the QNAP, 8 hours and 18 minutes when the empty array rebuild using all CMR took 8 hours and 2 minutes? I think you just let your bias against this channel cloud your judgement here, the best case is 16 minutes difference and yet you are expecting a significant difference?

 

On 5/29/2020 at 10:47 PM, leadeater said:

Well the thing is, for the one person so far to actually do proper testing to see how this was affected, it actually wasn't by much.

It isn't about bias against the channel...it is that it wasn't proper testing and it was by a significant amount.

It was a note about how the channel's testing methodology isn't sound: for some tests running pure CMR, and in different batches running CMR+SMR, vs SMR.  It isn't a bias against the channel, it's that I am saying you cannot use the rebuild times as an example of SMR not having an effect when it clearly does affect it by a considerable amount.  7 hours is a lot, in a test that didn't introduce a factor that could have made it a lot worse.

 

7 hours ago, leadeater said:

 

Thing is, if rebuild time is, for whatever reason, your breaking point, then why are you not buying WD Red Pro, WD Gold, Seagate IronWolf (7200 RPM models), Seagate IronWolf Pro, or Seagate Exos? These all have much faster rebuild times. Why is it that for you 10 hours is fine and 18 hours is not? Do you not have backups of your important data? Are you running single parity? Why are you not concerned with a failure of the NAS itself?

The whole point of this is that SMR wasn't made known, so you take the choice away, and for many NAS users that could be an important factor.  10 hours and 18 hours is a huge difference in time frames; for a business it would mean the difference between the rebuild hitting working hours or not (let's say it is an SMB that people frequently poll/write data to)...on SMR drives, it is likely that that 18 hour figure could balloon far higher (because that would be a weakness of SMR).

 

To be clear, I am saying SMR in NAS -needs- to be specified to the purchaser as it does have downsides to some important metrics in NAS builds. 

 

Examples of what considerations might go into picking an HDD for a NAS:

Rebuild times (affected by SMR when there is a lot of data)

Overall write speeds (probably faster random I/O speeds, but slower if writing massive amounts, like terabytes, of data)

Read speeds

Failure rates

Cost

 

Just because you have never experienced or seen use cases where a 7-8 hour delta could have negatives doesn't mean they don't exist.  Everything is a balancing act, and those questions you asked are part of it...but WD literally submarined one of the performance metrics by switching technology and not telling anyone.  If I were comparing drives, and two were close candidates but one used SMR and the other used CMR, I would go CMR.  Even Seagate's philosophy is that SMR doesn't belong in its NAS lineup drives (which clearly means Seagate feels it negatively impacts NAS performance in enough regards) [and yes, things like rebuilds are weighed into performance].

 

8 hours ago, leadeater said:

Load is not how full it is, it's how much actual I/O load you put on the NAS and also how much change rate you have.

I never said load was how full it is.  I am not stupid.  I said I saw a nearly 100% full NAS before doing a rebuild...and btw it was active during that time.

 

3735928559 - Beware of the dead beef


3 hours ago, wanderingfool2 said:

It isn't about bias against the channel...it is that it wasn't proper testing and it was by a significant amount.

So just because he didn't also do a test with client load, everything else is invalid? What? I think you'll find what was tested is valid.

 

Unless you are prepared to dedicate around 2 weeks of your own time, I don't see much basis to complain here. You didn't get to see some specific tests you wanted to see, but there are other ones and they are valid, and when it comes to day to day usage nothing presented showed much difference in performance (I could, however, do a test to show it).

 

3 hours ago, wanderingfool2 said:

For some tests running pure CMR, and in different batches running CMR+SMR, vs SMR

Even when this was the case, the tests that were done were common to each NAS and configuration. Not doing a pure CMR full NAS rebuild does not invalidate write usage testing results in a different video looking at a different thing for a different reason. If SMR were a problem in day to day usage it would show up very readily, and it basically did not. So the fact a full array CMR-only rebuild test was not carried out does not change that, and neither would it have changed anything, like I said (16 minutes best case difference).

 

People are saying any usage of DM-SMR would cause the whole thing to break and cannot possibly work; that's what is being tested, not whether a CMR disk is faster than DM-SMR. The complaints and accusations are that you can't use them in a NAS; testing everything under the sun just is not required to answer that question.

 

P.S. STH as far as I know did not have any client load during their rebuild test; is everything they did also invalid?

 

3 hours ago, wanderingfool2 said:

The whole point of this is that SMR wasn't made known, so you take the choice away, and for many NAS users that could be an important factor.  10 hours and 18 hours is a huge difference in time frames; for a business it would mean the difference between the rebuild hitting working hours or not (let's say it is an SMB that people frequently poll/write data to)...on SMR drives, it is likely that that 18 hour figure could balloon far higher (because that would be a weakness of SMR).

I'm addressing the point that somehow 10 hours is acceptable yet 18 hours is not. Why? When? How? Because, again, as I have pointed out, I/O on that NAS actually needs to be high enough to actually blow out the rebuild like you are saying. Yes this is a thing, I even gave an example of it: 3 days for a 3TB disk, and that is sure as heck longer than 18 hours and crossed into working hours multiple times.

 

This is just a theoretical point; unless someone actually tests it, it's just speculation. I also don't need someone running a parallel 64k QD32 20/80 load on the NAS to show that it can blow it out, that's a totally unrealistic load here. Client load testing is just fraught with issues and potential arguments over what is and is not a realistic load for an SMB NAS. This is not a performance test, it's a measure to find whether a problem exists or when it can exist, so realistic, sensible load profiles actually matter.

 

3 hours ago, wanderingfool2 said:

Just because you have never experienced or seen use cases where a 7-8 hour delta could have negatives doesn't mean they don't exist.

Really? Haven't I? As someone who works in the IT industry and has been an IT contractor, I bet you I have, but increased response times from a storage array during a disk rebuild that prioritises client load don't always cripple performance. The only danger worth worrying about, if performance is not degraded beyond usable, is further/cascading disk failure during the rebuild, and backups address that; plus 10 hours vs 18 hours just is not a big enough difference for that, that's getting way into the weeds of statistical probability arguments.

 

Basically what I am pointing to here is the fact that these rebuilds just are not as impactful as you're trying to make out; sure they can be, but that's few and far between rather than the norm. What would be the point of storage arrays and NAS's if rebuilds made them unusable and no measures were put in place to ensure service to client load over rebuild I/O?

 

Why is everyone so eager to die on the hill of rebuild times for an event that you might have to do once every 3 years (for this market sector of NAS's), while ignoring the existence of weekends or a multitude of other factors?

 

Remember, the entire point was to find out if the claims being made about DM-SMR are actually as bad as made out, not specifically whether there is a difference. Can DM-SMR disks be used in a NAS? Does their use make the NAS unusable?  Is it possible to rebuild a failed array in an acceptable time frame when that event arises? (These are more nicely worded questions than those screamed over on Reddit.)


The interesting part of this debate is that people are happy to accept STH's video, where he takes a drive with a known ZFS issue and does a whole video leveraging that one issue to pontificate about how bad the drive is, but when a video is presented showing that those same drives work within normal and acceptable parameters everywhere else and are nowhere near as bad as portrayed, then all of a sudden the guy doing the tests "didn't do enough", or "did the wrong tests", or "failed to adequately account for other things". That is even though the more thorough testing included copious qualifiers and explanations of where the results can change and which results we must make allowances for.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


1 hour ago, mr moose said:

The interesting part of this debate is that people are happy to accept STH's video, where he takes a drive with a known ZFS issue and does a whole video leveraging that one issue to pontificate about how bad the drive is

That's a little unfair to STH; they only focus on the server/enterprise market and so would only be inclined to test something like ZFS. His main point was more that he expected a difference, just not nearly as large as it was, and he rightly said don't use these with ZFS. Before they did their article/video there wasn't a trusted source that had done a clean, documented test with ZFS to check the claims of rebuild failures and drives getting kicked out of arrays and to see just how bad DM-SMR disks actually are.

 

The more people that look into different areas and aspects, the more information we will get. That's actually one of the problem points about not disclosing the recording technology change, because disclosure could have prompted the storage reviewers to acquire the disks and run them through testing to investigate how they are different.


4 minutes ago, leadeater said:

That's a little unfair to STH; they only focus on the server/enterprise market and so would only be inclined to test something like ZFS. His main point was more that he expected a difference, just not nearly as large as it was, and he rightly said don't use these with ZFS. Before they did their video there wasn't a trusted source that had done a clean, documented test with ZFS to check the claims of rebuild failures and drives getting kicked out of arrays and to see just how bad DM-SMR disks actually are.

 

The more people that look into different areas and aspects, the more information we will get. That's actually one of the problem points about not disclosing the recording technology change, because disclosure could have prompted the storage reviewers to acquire the disks and run them through testing to investigate how they are different.

It probably is unfair to STH, but if people across the net keep posting it as if it is the sole evidence the drives are no good everywhere, then I see it a bit like the Linus Torvalds finger to NVIDIA: basically it is used out of context to infer an issue exists that isn't entirely applicable to the larger situation.

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 hours ago, leadeater said:

So just because he didn't also do a test with client load, everything else is invalid? What? I think you'll find what was tested is valid.

I never said it was invalid.  It's that you said it was proper testing showing it wasn't affected by much.  If it creates a 7-8 hour delta in a 10-hour vs 18-hour test, that isn't minimal, and again, it didn't test a scenario where load gets put on the NAS during the rebuild (which would likely happen in a real world scenario and in theory be worse on SMR than CMR).

 

The additional problem being, a client workload during rebuild is a likely scenario; so it is still speculation on how SMR affects rebuilds (but it clearly shows that in optimal rebuild conditions SMR is having an impact on times)

 

5 hours ago, leadeater said:

Basically what I am pointing to here is the fact that these rebuilds just are not as impactful as you're trying to make out; sure they can be, but that's few and far between rather than the norm

Again, everything is a balancing act (price, performance etc.) where rebuilds are a valid factor; and it isn't as non-impactful to rebuilds as you are making it out to be.  Would you be happy to use SMR for all NAS applications?

 

For the rebuild, that 7 hour delta would move to 14 hours if using 8 drives and 21 hours if it was 6TB drives.

3735928559 - Beware of the dead beef


On 5/31/2020 at 11:53 AM, wanderingfool2 said:

The last calculator on the website is the correct one to use (which is the 69.1%).  Otherwise I could claim a horrible drive is only 0.001% slower (e.g. if I have a drive that rebuilds in 1 day, and another that rebuilds in 100,000 days I could say it is only 0.001% slower when in actuality it's 1,000,000% slower)

99,999/100,000 is not 0.001%, so you couldn't claim that either way (the difference - the numerator - is always 99,999 days; you are only changing the denominator).

 

Leaving aside your math mistake, you are basically debating about up and down in space: there is no "correct" denominator when computing the difference between numbers. It is correct to say that 6 is 40% smaller than 10. It is also correct to say that 10 is ~66.67% bigger than 6. The absolute difference 10-6=4 is immutable; the percentage difference is a relative measure, so of course it is mutable (mutable with respect to the reference point, which you can pick and choose). Which reference point to use is a matter of what you find most intuitive or useful to wrap your head around a magnitude, and therefore subjective. There is definitely no right or wrong denominator.

The only rule is to be consistent when comparing multiple variables for the same units. For example, if I want to compare price and performance of two CPUs, I can calculate PriceA/PriceB and PerformanceA/PerformanceB, or I can calculate PriceB/PriceA and PerformanceB/PerformanceA. Each pair will deliver different % figures and both are correct; however, using the pair PriceA/PriceB and PerformanceB/PerformanceA would definitely be wrong and misleading.
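To make that concrete, a tiny sketch of the 6-vs-10 example above; both percentages describe the same absolute difference, just against different reference points:

```python
a, b = 10, 6

smaller_than_a = (a - b) / a   # 6 is 40% smaller than 10
bigger_than_b = (a - b) / b    # 10 is ~66.67% bigger than 6

print(f"{b} is {smaller_than_a:.0%} smaller than {a}")   # 40%
print(f"{a} is {bigger_than_b:.2%} bigger than {b}")     # 66.67%
# The absolute difference (4) never changes; only the reference point does.
```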

 


39 minutes ago, SpaceGhostC2C said:

99,999/100,000 is not 0.001%, so you couldn't claim that either way (the difference -numerator- is always 99,999 days, you are only changing the denominator)

1/100,000 = 0.00001 (or 0.001%), which is how the video was calculating "slower" [which is wrong] (yes, in my example I was off by a small %...typed it too quickly, but it is a minor mistake vs using the wrong reference).  The term "A is x% slower than B" uses b as the reference frame (and thus the formula becomes (a - b) / b).  i.e. the video said SMR was "59.14972%" slower than mixed [but they used Mixed/SMR speed, instead of (SMR - Mixed)/Mixed].
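Plugging the 1-day vs 100,000-day example from earlier in the thread into both calculations shows how far apart they are (the numbers are the made-up ones from that example):

```python
fast, slow = 1, 100_000   # rebuild times in days, made-up numbers from earlier in the thread

slower_than_fast = (slow - fast) / fast   # "slow is x% slower than fast", fast as reference
ratio = fast / slow                        # simple ratio of the two times

print(f"(slow - fast) / fast = {slower_than_fast:,.0%}")   # 9,999,900% slower
print(f"fast / slow          = {ratio:.3%}")               # 0.001%, a ratio, not a 'slower by' figure
```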

3735928559 - Beware of the dead beef


2 minutes ago, wanderingfool2 said:

The term "A is x% slower than B" uses b as the reference frame (and thus the formula becomes (a - b) / b).

This is where you are confused. (a-b)/b and (a-b)/a are both correct ways to calculate a difference in relative terms - relative to what is a choice, basically a choice of units. Saying that it's "the wrong reference" is like saying that using kilograms is correct and using grams or pounds is wrong.

The difference is X units of "b" and it's also Y units of "a".


5 hours ago, wanderingfool2 said:

I never said it was invalid.  It's that you said it was proper testing showing it wasn't affected by much.  If it creates a 7-8 hour delta in a 10-hour vs 18-hour test, that isn't minimal, and again

If you were reading what I was writing then you would know I'm questioning the claim that this difference in time significantly matters in the real world, not as a number on a test. In what way is 18 hours not OK but 10 hours is? It's not a big enough difference to make the disk not usable or no longer applicable for use in a NAS.

 

Thing is, there is a difference between statements that actually imply different things and use different reasoning:

  1. "Why would I not buy a disk of the same capacity with similar performance that has a 40%-60% lower rebuild time for the same price"
  2. "I cannot purchase this disk because it has 40%-60% longer rebuild times"

The first statement is a well-rounded, well-justified statement without any additional factors that need to be justified or could be challenged. Why wouldn't you buy that other disk, i.e. a Seagate IronWolf?

 

5 hours ago, wanderingfool2 said:

which would likely happen in a real world scenario and in theory be worse on SMR than CMR

And I've asked what these are, because you've only given theoretical or pondering statements around this. When I actually apply some critical thinking to such a statement to assess its validity and applicability, I'm coming up with few real world situations where the usage load on an SMB NAS would actually blow this out. Would a legal firm working almost exclusively with text documents cause this? Very unlikely. Would an architectural firm with a couple of workers using standard design software cause this? Unlikely, as once a file is open, even if working directly off the NAS, it's in memory and only writes to the NAS when you save. A smaller sized school with student home drives on the NAS? Also not likely, not that schools use these NAS's for that nowadays, or much in the past either.

 

What are these real world situations with demanding I/O loads that are going to blow out rebuild times? You're not going to be doing video editing directly off a 4-5 bay NAS with 5400 RPM disks of any kind over a 1Gb connection; you will be editing locally and copying completed work over. You're also not going to be hosting an ESXi cluster on one using 5400 RPM disks of any kind, nor an SQL database.

 

As I stated, if you are in a situation where rebuild times are actually important you'd be opting for 7200 RPM disks, not 5400 RPM disks (of any kind), and probably not using a low end NAS either, but that's by the by as many people do dumb things and I can't stop them.

 

5 hours ago, wanderingfool2 said:

Would you be happy to use SMR for all NAS applications?

That's not the issue; the issue I'm pointing at is the statement you are presenting that 18 hours is so significantly different from 10 hours that it makes the disk no longer possible or appropriate to use. When? How? Why? Justify.

 

If strictly comparing the two different WD Red revisions, the newer one is actually faster in actual usage for most workload profiles, and as long as you are not constantly pushing write loads to the system you will get better performance out of the new revision; even double the rebuild time would not inherently offset that to make the previous revision drive better. You would just be giving up extra usable performance for a once in a ~3 year event because 10 hours might be critically better than 18 hours.

 

But to give a real world example where I would never use an SMR disk of any of the 3 types: video surveillance. For this usage SMR will never be an appropriate storage device unless write performance significantly improves or it is combined with other technologies or storage tiering. Strictly speaking, SMR is not suitable for that workload.

 

5 hours ago, wanderingfool2 said:

For the rebuild, that 7 hour delta would move to 14 hours if using 8 drives and 21 hours if it was 6TB drives.

No it would not; it would increase if you used larger disks, not if you had more disks in the system. In fact more disks in the system would more likely reduce the rebuild time, as there is more I/O capability in the system to deal with additional client load during the rebuild. The number of disks does not increase the data size required for the rebuild: you are still rebuilding the same disk of the same size, calculating the same number of parity chunks, just spread over more disks, which can actually keep the rebuilding disk fed faster, reducing rebuild time, not increasing it.

 

Spoiler: [screenshots of RAID rebuild time estimates from https://www.memset.com/support/resources/raid-calculator/]

 

If it worked the way you just said it did then our storage arrays with hundreds of disks would be untenable as rebuild times would be in months to years not hours or days.
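A rough model of that point, purely for illustration (the disk size and sustained rebuild write rate are assumptions, and real rebuilds also depend on the controller, client load and the drive itself):

```python
# Illustration only: rebuild time tracks the size of the disk being rebuilt,
# not the number of members in the array.

def rebuild_hours(disk_tb: float, write_mb_s: float) -> float:
    """Hours to rewrite one replaced disk at an assumed sustained rate."""
    return disk_tb * 1_000_000 / write_mb_s / 3600

for members in (4, 8, 16):
    # More members spread the parity reads wider; the replaced disk is still the
    # write bottleneck, so this estimate does not grow with member count.
    print(f"{members}-disk array, 4TB member: ~{rebuild_hours(4, 150):.0f} h")

print(f"4-disk array, 6TB member: ~{rebuild_hours(6, 150):.0f} h")  # bigger disk -> longer rebuild
```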

 


3 hours ago, SpaceGhostC2C said:

This is where you are confused. (a-b)/b and (a-b)/a are both correct ways to calculate a difference in relative terms - relative to what is a choice, basically a choice of units. Saying that it's "the wrong reference" is like saying that using kilograms is correct and using grams or pounds is wrong.

The difference is X units of "b" and it's also Y units of "a".

Both are valid ways to calculate relative terms, but the phrasing takes away an option.  Example: "A is 50% slower than B"; semantically, B is the item being compared against.

(Also, the way they were calculating it, the slower the slow rebuild went the better it would look, as the "% slower" would go down.)  [Actually they did use the correct calculation for the second set of numbers, using the same wording.]

31 minutes ago, leadeater said:

What are these real world situations with demanding I/O loads that are going to blow out rebuild times? You're not going to be doing video editing directly off a 4-5 bay NAS with 5400 RPM disks of any kind over a 1Gb connection; you will be editing locally and copying completed work over. You're also not going to be hosting an ESXi cluster on one using 5400 RPM disks of any kind, nor an SQL database.

You are assuming you need high load to blow up the rebuild times though.  Have, say, 10 users accessing even something like Excel files.  You don't need high I/O to totally mess up a hard drive's caching system.  My point being that even a simulated light workload could potentially make SMR drives behave a lot worse during rebuilds (which is how I would design a test).  It's not a contrived example either, because I have seen work environments that have had similar things going on.  The rebuild would be pushing the cache to the extreme already, and adding random I/O into the mix will have impacts (yes it has impacts on CMR as well, but SMR relies a lot more on caching).
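If I were sketching that kind of test, something as simple as this against the mounted share would approximate it (the mount point, user count, file sizes and pacing are all made-up assumptions, just a light "office" load to run alongside the rebuild):

```python
import os
import random
import time

SHARE = "/mnt/nas_share/rebuild_test"   # hypothetical mount point of the NAS share
USERS = 10                              # rough stand-in for ~10 office users
FILE_KB = 64                            # small, spreadsheet-sized writes
DURATION_S = 8 * 3600                   # run for roughly the length of the rebuild

os.makedirs(SHARE, exist_ok=True)
end = time.time() + DURATION_S
while time.time() < end:
    for user in range(USERS):
        path = os.path.join(SHARE, f"user{user}_{random.randint(0, 50)}.dat")
        with open(path, "wb") as f:          # occasional small write
            f.write(os.urandom(FILE_KB * 1024))
        if random.random() < 0.7:            # mostly reads, some writes
            with open(path, "rb") as f:
                f.read()
    time.sleep(random.uniform(2, 10))        # users are idle most of the time
```

The interesting comparison would then be rebuild time and client latency with and without this running, on CMR vs SMR.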

 

16 minutes ago, leadeater said:

No it would not; it would increase if you used larger disks, not if you had more disks in the system. In fact more disks in the system would more likely reduce the rebuild time, as there is more I/O capability in the system to deal with additional client load during the rebuild. The number of disks does not increase the data size required for the rebuild: you are still rebuilding the same disk of the same size, calculating the same number of parity chunks, just spread over more disks, which can actually keep the rebuilding disk fed faster, reducing rebuild time, not increasing it.

I really need to stop writing after just waking up.  It might not scale linearly (like I wrote), but assuming all SMR disks there might be more penalties.

3735928559 - Beware of the dead beef


On 5/30/2020 at 6:32 AM, mr moose said:

Given that, just like Shell can't control how Hyundai tune their cars, WD can't control how freenas tunes ZFS.

However Shell would test and tweak the fuel to work for all vehicles before releasing it.

 

For WD to ignore freenas and its claims to being "the world’s most popular open source storage operating system" would be like Shell releasing a fuel that would not work on GM vehicles since freenas uses ZFS.

 

That said, if they can use Seagate's words, videos showing poor performance with ZFS, and prove that ZFS is a decent chunk of NAS systems, they should have a good chance at winning.

 

they are also being sued in Canada for it

https://ca.topclassactions.com/lawsuit-settlements/electronics/western-digital-facing-class-action-lawsuit-over-hard-disc-drives/

 

It's going to be interesting no matter which way it goes however 😁


6 hours ago, Egg-Roll said:

However Shell would test and tweak the fuel to work for all vehicles before releasing it.

 

For WD to ignore freenas and its claims to being "the world’s most popular open source storage operating system" would be like Shell releasing a fuel that would not work on GM vehicles since freenas uses ZFS.

That's assuming ZFS is used in any serious market share.  As far as I know the bulk of people with freenas are us enthusiasts, and we make up a scarily small percentage of the market.

https://www.smartprofile.io/analytics-papers/synology-keeps-gaining-ground-storage-solutions/

 

6 hours ago, Egg-Roll said:

That said, if they can use Seagate's words, videos showing poor performance with ZFS, and prove that ZFS is a decent chunk of NAS systems, they should have a good chance at winning.

If they can prove Seagate works on freenas (assuming Seagate SMR) then all they prove is that Seagate works on freenas; then they prove it is not SMR that is the cause.  SMR itself has not been linked to the problem with ZFS, and so right now people are trying to pin SMR as a failure in NAS where the issue is more easily explained as a ZFS issue with WD firmware.  We need more evidence and testing before we can go further with this.  I'll grant the issue is a bit coincidental with the rise of SMR drives, however it will not be the first time we have seen coincidences like this raised as issues only to find out they were not linked.

 

6 hours ago, Egg-Roll said:

they are also being sued in Canada for it

https://ca.topclassactions.com/lawsuit-settlements/electronics/western-digital-facing-class-action-lawsuit-over-hard-disc-drives/

 

It's going to be interesting no matter which way it goes however 😁

What concerns me is not the case itself; if they are guilty of something anti-consumer then they should be found guilty and punished.  What concerns me is the response from the general tech community; it's like they have lost all sense and reason in order to pitchfork-hunt a company based on repeating what someone else said.

 

It seems logical that at the moment they are going to have to prove WD drives don't work as advertised on many other more prominent NAS products before Freenas becomes anything other than a tidbit in the case.

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


1 hour ago, mr moose said:

That's assuming ZFS is used in any serious market share.  As far as I know the bulk of people with freenas are us enthusiasts, and we make up a scarily small percentage of the market.

https://www.smartprofile.io/analytics-papers/synology-keeps-gaining-ground-storage-solutions/

Nobody cares that the degree of difference being shown was for ZFS, or about the literal direct comment by the author that it does not apply to all other RAID types. I mean, just look at the comments in any YouTube video that talks about it, or posts in other places: everyone repeating the same thing with zero care as to the applicability of the data source.

 

Obviously DM-SMR should be and is going to be slower than CMR drives in a number of situations, and slapping a huge cache onto the disk isn't going to prevent that. But those that want to be better informed shouldn't ignore how small ZFS usage really is in the target market for these disks, because nobody is using WD Reds in anybody's ZFS implementation or product out on the market currently. The Venn diagram of WD Reds and ZFS is home DIY and small businesses willing to build their own and support their own.

 

For any large vendor, i.e. HPE/EMC/NetApp etc., you can only put in their disks, sold by them, with their firmware on them, and none of those are OEM WD Red, so you can write that entire market segment out; not that these companies sell 8 bay or smaller NAS systems in large volume anyway.

 

Then you have the Synologys and QNAPs etc. who do use these disks, OEM or user installed. QNAP does have ZFS NAS's in their product portfolio, marketed and sold as 'Enterprise ZFS NAS' with bay counts greater than 8, and those are not sold with WD Reds. QNAP is however introducing QuTS hero for slightly lower end NAS's; these are rackmount and the smallest is 4 3.5" bays + 5 2.5" bays for SSDs. These are still vastly more expensive than the typical SOHO/SMB NAS but still applicable to the medium sized part of SMB, so ZFS usage is making its way down the market in some ways. (Thing is, these actual medium sized businesses buy more from the large vendors, and it's something I myself advise doing also.)

 

Going around repeating this 9 day rebuild time as if it applies to everything only hurts your own intelligence and is literally destroying the ability to use the technology in the market ever even where it would make sense. It's like Brawndo in the movie Idiocracy: technically being correct about a particular detail doesn't make what you are saying about it correct; plants need water, not Brawndo.


3 hours ago, mr moose said:

That's assuming ZFS is used in any serious market share.  As far as I know the bulk of people with freenas are us enthusiasts, and we make up a scarily small percentage of the market.

https://www.smartprofile.io/analytics-papers/synology-keeps-gaining-ground-storage-solutions/

OK, but how many of those companies are buying NAS drives over enterprise? It's very likely they are buying enterprise, and that needs to be taken into account. Where are most NAS drive sales, and more specifically where is most of the NAS 5400 RPM market? Large deployments or small scale home units?

 

3 hours ago, mr moose said:

If they can prove Seagate works on freenas (assuming Seagate SMR) then all they prove is that Seagate works on freenas; then they prove it is not SMR that is the cause.  SMR itself has not been linked to the problem with ZFS, and so right now people are trying to pin SMR as a failure in NAS where the issue is more easily explained as a ZFS issue with WD firmware.  We need more evidence and testing before we can go further with this.  I'll grant the issue is a bit coincidental with the rise of SMR drives, however it will not be the first time we have seen coincidences like this raised as issues only to find out they were not linked.

 

Obviously Seagate will work with freenas, as their NAS drives are CMR and will therefore rebuild and function like drives did before SMR; what I was referring to was Seagate stating SMR had no business being in NAS drives in the first place.

https://arstechnica.com/information-technology/2020/04/seagate-says-network-attached-storage-and-smr-dont-mix/

 

One can blame ZFS for the issues, however until someone tries a normal SMR drive in a ZFS setup with similar specs to see if the WD Red and that SMR drive produce different results, one can't go out and blame the firmware of the drive either. It's a clash between SMR and ZFS, not firmware, imo, unless you have proof it is firmware. Fact is, SMR where it should matter is clearly slower and worse than CMR. So to me, in order to win all they need to prove is that their competition has stated SMR shouldn't be in NAS drives, that rebuild times (aka the last time you want something to take forever) take forever, and the gravy on top would be being able to prove a decent chunk of sales of 5400 RPM Reds end up in freenas setups. Get all 3 and the class action should win, or 2 with the jury's favour on your side.

 

4 hours ago, mr moose said:

What concerns me is not the case itself; if they are guilty of something anti-consumer then they should be found guilty and punished.  What concerns me is the response from the general tech community; it's like they have lost all sense and reason in order to pitchfork-hunt a company based on repeating what someone else said.

 

It seems logical that at the moment they are going to have to prove WD drives don't work as advertised on many other more prominent NAS products before Freenas becomes anything other than a tidbit in the case.

 

I didn't read everything about the suits so there could be something in there. Not to mention if the people win it means big changes will have to come to the industry which will be a benefit to all. I don't think if WD lost tomorrow they would suffer too large of a loss, because Reds are a small portion of their business. It'll be something along the lines of the CRT refund we had last year or so.

 

The tech community can do all the stupid crap they want; you only need a few smart people taking the facts, proving them in court and convincing the jury/judge in your favour. In this case there are 2 unknowns: the number of Reds sold for ZFS use compared to everything else, and how reliable the speed loss claims are and whether they can be replicated. For the speed loss claims it could be as easy as seeing whether they can be replicated with, say, a WD Blue/Seagate Barracuda 6TB, and whether either performs no worse than a WD Red.

 

Like I said no they don't, not if they can prove most of those use cases you have provided are using Pro or better drives. This case is solely around Reds and the SMR units, what 99% of the world does doesn't matter.


3 hours ago, leadeater said:

Going around repeating this 9 day rebuild time as if it applies to everything only hurts your own intelligence and is literally destroying the ability to use the technology in the market ever even where it would make sense.

This is what happened to WP: MS released a phone and the internet shit on it flat out and recommended everyone stay away from it without giving it a fair chance, then when it did die (doesn't take a Sherlock to know why) they all patted themselves on the back for understanding a product they never bothered to actually take any interest in, other than to follow the hate trend.

 

 

55 minutes ago, Egg-Roll said:

OK, but how many of those companies are buying NAS drives over enterprise? It's very likely they are buying enterprise, and that needs to be taken into account. Where are most NAS drive sales, and more specifically where is most of the NAS 5400 RPM market? Large deployments or small scale home units?

 

I'd say most of them. If you need to replace a drive in your NAS, you go to the computer shop and buy a drive recommended for NAS, or whatever came in the original one, which for many of them would be Reds or IronWolfs.  Most of these units are used in home offices and small to mid size companies.  Large firms with large networks aren't likely to have a small QNAP or Synology unit in the corner.

55 minutes ago, Egg-Roll said:

Obviously Seagate will work with freenas, as their NAS drives are CMR and will therefore rebuild and function like drives did before SMR; what I was referring to was Seagate stating SMR had no business being in NAS drives in the first place.

https://arstechnica.com/information-technology/2020/04/seagate-says-network-attached-storage-and-smr-dont-mix/

That's great and all, but WD have proven you can have SMR in a NAS and it works.   Just because Seagate don't want to doesn't mean it can't be done.  So again it comes down to: does SMR actually have an impact big enough to warrant the claims being made? From all the testing presented so far it appears not.

 

 

 

55 minutes ago, Egg-Roll said:

One can blame ZFS for the issues, however until someone tries a normal SMR drive in a ZFS setup with similar specs to see if the WD Red and that SMR drive produce different results, one can't go out and blame the firmware of the drive either. It's a clash between SMR and ZFS, not firmware, imo, unless you have proof it is firmware. Fact is, SMR where it should matter is clearly slower and worse than CMR. So to me, in order to win all they need to prove is that their competition has stated SMR shouldn't be in NAS drives, that rebuild times (aka the last time you want something to take forever) take forever, and the gravy on top would be being able to prove a decent chunk of sales of 5400 RPM Reds end up in freenas setups. Get all 3 and the class action should win, or 2 with the jury's favour on your side.

You are still assuming the clash is SMR specific,  we have no evidence to that end,  we have no evidence to make any claim about why the drives are so terrible with ZFS, all we know is ZFS is the common denominator.

 

55 minutes ago, Egg-Roll said:

I didn't read everything about the suits so there could be something in there. Not to mention if the people win it means big changes will have to come to the industry which will be a benefit to all. I don't think if WD lost tomorrow they would suffer too large of a loss, because Reds are a small portion of their business. It'll be something along the lines of the CRT refund we had last year or so.

 

The tech community can do all the stupid crap they want; you only need a few smart people taking the facts, proving them in court and convincing the jury/judge in your favour. In this case there are 2 unknowns: the number of Reds sold for ZFS use compared to everything else, and how reliable the speed loss claims are and whether they can be replicated. For the speed loss claims it could be as easy as seeing whether they can be replicated with, say, a WD Blue/Seagate Barracuda 6TB, and whether either performs no worse than a WD Red.

 

Like I said no they don't, not if they can prove most of those use cases you have provided are using Pro or better drives. This case is solely around Reds and the SMR units, what 99% of the world does doesn't matter.

It's easy to get a court to find in favor of something even in the face of evidence.  That's why this is an issue: if enough people get angry about something a lawyer will take up the charge, and the judge will find based on whose lawyer makes the best argument, not on the facts of the case.

 

It seems enough people got angry about it; unfortunately many don't understand the actual issue and instead are angry because they have been told an injustice has occurred.  This is evidenced by the sheer number of YouTube videos popping up where they test the Reds on a ZFS NAS like they couldn't predict the result.  If those guys were interested in being fair and reasonable they would test the Reds on as many different NAS systems as they can find. Alas, telling people that we tested 6 different NAS systems and the drives were marginally worse compared to previous drives does not bring in as much ad revenue and doesn't get shared.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


26 minutes ago, mr moose said:

You are still assuming the clash is SMR specific,  we have no evidence to that end,  we have no evidence to make any claim about why the drives are so terrible with ZFS, all we know is ZFS is the common denominator.

Generally speaking we know why it is a problem, as in it's the way ZFS interacts with storage devices and the type of I/O it puts on the storage device (pattern, block sizes, data placement etc). ZFS is quite advanced at what it does, so it's important for it to know what type of storage device it is dealing with so it can handle it best. These DM-SMR disks report features that other disks do not, like TRIM, so it's currently possible to identify whether a disk is or is not DM-SMR by what the disk reports to the OS and storage controller. So it's not out of the question that a fix is in the pipeline, or can be put into the pipeline, to at least get the rebuild times to more acceptable time frames by making the necessary adjustments to how the disks are handled. Additionally, firmware optimizations and fixes may also be required from WD; I don't think it's as simple as WD alone creating a firmware fix for this.
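As a rough illustration of the "these disks report features other disks do not" point: on Linux, a spinning disk that also advertises discard/TRIM support is a hint (not proof) that it may be drive-managed SMR. The device name and the heuristic itself are assumptions for illustration, not how ZFS or any vendor actually detects these drives:

```python
from pathlib import Path

def looks_like_dm_smr(dev: str = "sda") -> bool:
    """Heuristic only: a rotational disk that also advertises discard/TRIM
    support is a hint it may be drive-managed SMR (per the observation above)."""
    queue = Path(f"/sys/block/{dev}/queue")
    rotational = (queue / "rotational").read_text().strip() == "1"
    supports_discard = int((queue / "discard_granularity").read_text().strip()) > 0
    return rotational and supports_discard

print(looks_like_dm_smr("sda"))   # hypothetical device name
```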

 

However, while it's fair to say WD should have detailed the change on the product sheet, or really created a new product line like WD Orange (not quite Red 😉), coming at it from the angle that WD should not have released a disk that doesn't work with ZFS isn't exactly a good way to look at it. Computer systems are complex, and it isn't necessarily up to WD to fix an issue their product change causes in somebody else's product, unless they feel it directly affects them and it benefits them competitively within the market to do so, e.g. AMD providing optimizations for the Zen architecture in the Linux kernel.

 

So it's on WD to make people aware of changes and also on others to check how these changes impact their product/software or systems they are responsible for. While I do think it was also short sighted for WD to have not tested the new revision under ZFS (this is just my suspicion) because it would have given them advanced warning of this problem I don't actually think they would not have gone ahead with the change. If ZFS was an important enough market to them for this particular model of disk to me it does not seem reasonable that it would not have been tested and thus been identified.


29 minutes ago, leadeater said:

So it's on WD to make people aware of changes and also on others to check how these changes impact their product/software or systems they are responsible for. While I do think it was also short sighted for WD to have not tested the new revision under ZFS (this is just my suspicion) because it would have given them advanced warning of this problem I don't actually think they would not have gone ahead with the change. If ZFS was an important enough market to them for this particular model of disk to me it does not seem reasonable that it would not have been tested and thus been identified.

I think that's because ZFS is probably just not on WD's radar at the moment.   Should they have done something? Maybe; sure, it wouldn't have hurt to at least get the intern to throw their drive into a ZFS system and see what happens, but if 99% of their drives are going into Synology and QNAP systems then it isn't really a good business move to try and account for every other one-percent device out there. I mean, where do they draw the line on what they stop validating and developing for?

 

 

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


22 hours ago, mr moose said:

It's easy to get a court to find in favor of something even in the face of evidence.  That's why this is an issue: if enough people get angry about something a lawyer will take up the charge, and the judge will find based on whose lawyer makes the best argument, not on the facts of the case.

 

Not always the case, and all you need in that situation is a jury, at which point instead of one person you have to convince at least a good number (I know in criminal cases it's all 12, not sure about these situations). In a jury situation all you need is one tech person who knows tech to some degree. A good movie for you to watch is 12 Angry Men; while yes, I know it reflects how the law was back in the day (an all white, all male jury), there is a '97 version, but I never saw it even though it was out when I saw the original.

 

On 6/2/2020 at 11:05 PM, mr moose said:

telling people that we tested 6 different NAS systems and the drives were marginally worse compared to previous drives does not bring in as much ad revenue and doesn't get shared.

For the most part SMR drives are OK for most NAS setups, that I agree with, but it still comes down to market share for Reds. If even 25% of them are sold for freenas use then there is a case to be had; however, even if only 1% of all Reds are used in freenas there technically still is a case. For no case to be had on the ZFS front, 0% of the Reds would have to be used in ZFS, which let's be real is not a real number.

Plus, the fact they introduced a new tech, in a sector where many would not expect it, without notice brings up several other issues, where while that 1% is insignificant, their actions can easily sway the courts in favour of those who are (and technically rightfully should be) pissed at the move. Here is how I see it: if WD had stated all the FAX drives from 2-6TB are SMR, consumers would have been able to make informed purchase decisions; the fact they failed to do so could ultimately result in their failure in court. The court could literally ignore everything WD and their thousand lawyers have to say due to that one simple fact: they hid information, preventing customers from making informed (and potentially biased) choices. That is 100% on them, along with everyone else who makes drives; however, they were the first and currently the only ones who have put them in anything beyond consumer grade drives.

 

On ad revenue, while true, the video titles of that one person have not been very misleading.

 

  

21 hours ago, mr moose said:

I think that's because ZFS is probably just not on WD's radar at the moment. 

...
but if 99% of their drives are going into Synology and QNAP systems then it isn't really a good business move to try and account for every other one-percent device out there.

It really should be as it is used by freenas at least.

 

Once again, while correct, it can easily also be incorrect, because 99% of their sold drives are not Reds, especially once you consider all the externals as well, which are either Blues or 5400 RPM versions of the Ultrastar line. If they failed to do market research to find out where Reds are going, then that is on them. It really comes down to numbers: how many Reds sold go into freenas? If it is any decent chunk they should have tested it. Not how many are sold and where most go.

 

All in all SMR isn't bad, however after a quick search I just found out QNAP also uses ZFS, so it isn't just freenas. The biggest issue shouldn't be CMR mixed with SMR, it's when it's a full bay of SMR drives, at least for QNAP, which then becomes slower than the normal CMR variants.

  

Simply put, SMR in its current state, and with far more vendors using ZFS than I originally thought, shouldn't be in NAS setups, unless you like living on the edge. Like I said though, in "most" cases using every other file system it works no differently; however, I myself am unsure as to exactly how many use ZFS now. Also QNAP still says the SMR drives are OK to use even though they could take double the time in a pure SMR rebuild, which is unacceptable.

 

Sure, more testing should be done, however it feels like it was never done on WD's end. Rebuilds are rare, but to seemingly never test for them? Pure stupidity. One could throw in 6 SMR drives with 6 CMR drives and not know the difference until it takes an idiotic amount of time to rebuild, because the 5-6 SMR drives plus the new potentially SMR drive all shit bricks because you bought a QNAP system using ZFS. Now, knowing that more than freenas is using ZFS, it brings up bigger issues: how many NAS setups are actually using ZFS, how many have actually tested SMR rebuilds within their own ecosystem before suggesting these drives, and more importantly how many petabytes are at risk of a second drive failure because of SMR rebuild times?


3 hours ago, Egg-Roll said:

Not always the case, and all you need in that situation is a jury, at which point instead of one person you have to convince at least a good number (I know in criminal cases it's all 12, not sure about these situations). In a jury situation all you need is one tech person who knows tech to some degree. A good movie for you to watch is 12 Angry Men; while yes, I know it reflects how the law was back in the day (an all white, all male jury), there is a '97 version, but I never saw it even though it was out when I saw the original.

Except for the Samsung v Apple case, where the jury took just 3 days (long enough to enjoy the hotel room and meals) to conclude a very lengthy and in-depth IP case that required a lot of debate regarding technical aspects of law.  The case was not simple, and I have read a few commentaries by IP lawyers who found it to be too quick and nowhere near satisfactorily explained.

https://techcrunch.com/2012/08/24/apple-wins-patent-ruling-as-jury-finds-samsung-infringes/

Also there was that case that Samsung won which the president overturned because it was Apple. So no, justice is bought by the highest bidder or best argument in most cases; I've already mentioned the Monsanto trial and how ridiculous the courts look with regard to that.

3 hours ago, Egg-Roll said:

For the most part SMR drives are OK for most NAS setups, that I agree with, but it still comes down to market share for Reds. If even 25% of them are sold for freenas use then there is a case to be had; however, even if only 1% of all Reds are used in freenas there technically still is a case. For no case to be had on the ZFS front, 0% of the Reds would have to be used in ZFS, which let's be real is not a real number.

It doesn't have to be zero.  It has to be proved that the problem is with the drive and not with freenas, given that freenas is where the issues appear.

3 hours ago, Egg-Roll said:

 

Plus, the fact they introduced a new tech, in a sector where many would not expect it, without notice brings up several other issues, where while that 1% is insignificant, their actions can easily sway the courts in favour of those who are (and technically rightfully should be) pissed at the move.

Why did anyone expect it wouldn't be released into that sector? I really don't think the assumptions made by consumers about which tech works and which doesn't are grounds for a lawsuit; if that were the case Windows should be illegal, because plenty of consumers didn't expect it to become what it has.  And many will argue it is much larger than it needs to be, its update system is broken, etc.   (People don't need to actually argue this in this thread, even though doing so would prove my point.)  But in the end, Windows does what it says it does, doesn't work properly on all systems, and has some issues with bricking systems after updates or even on install.  It is by no means perfect, but it is not the target of a lawsuit for very obvious reasons.  Now applying this to the Reds: the Reds work in the majority of NAS's, and they have some issues in a small number of freenas systems, but they are not entirely broken.  It seems to me to be exactly the same degree of problem.  A product not performing in the absolute terms the consumer wants is not grounds for a lawsuit, because the product works as advertised; a product having conflicts with another product is also not grounds for a lawsuit unless they promised it would work to a specific degree of performance with said product.

 

 

3 hours ago, Egg-Roll said:

Here is how I see it: if WD had stated that all FAX drives from 2-6TB are SMR, consumers would have been able to make informed purchase decisions, and the fact that they failed to do so could ultimately result in their failure in court. The court could ignore everything WD and their thousand lawyers have to say because of that one simple fact: they hid information, preventing customers from making informed (and potentially biased) choices. That is 100% on them, along with everyone else who makes drives; however, they were the first, and are currently the only ones, to have put SMR in anything beyond consumer-grade drives.

I think that's a very big if. They would still have to prove that SMR is the root cause, and not an issue between ZFS and the firmware.

 

3 hours ago, Egg-Roll said:

While the ad-revenue point is true, that one person's video titles have not been very misleading.

The point was that all these people doing videos testing Reds only on ZFS are not being very helpful, because they are only feeding the echo chambers.

3 hours ago, Egg-Roll said:

  

It really should be, as it is used by FreeNAS at least.

And compared with other NAS systems, so consumers can see what is what.

 

3 hours ago, Egg-Roll said:

Once again, while correct, it can easily also be incorrect, because 99% of their drives sold are not Reds, especially once you consider all the externals as well, which are either Blues or 5400 RPM versions of the Ultrastar line. If they failed to do the market research to find out where Reds are going, that is on them. It really comes down to numbers: how many Reds sold go into FreeNAS? If it is any decent chunk, they should have tested it; not how many drives sold overall and where most of those go.

We are only talking about the Reds with DM-SMR. From a cursory look it appears FreeNAS makes up a very small part of the market, so small it does not warrant thought, let alone validation time in R&D. How much time do any HDD makers put into testing their drive firmware with XP or 98? They probably don't, but there are still machines out there running those OSes.

 

3 hours ago, Egg-Roll said:

All in all SMR isn't bad; however, after a quick search I just found out QNAP also uses ZFS, so it isn't just FreeNAS. The biggest issue shouldn't be CMR mixed with SMR; it's when the array is entirely SMR, at least for QNAP, which then becomes slower than the normal CMR variants.

  

If the Reds work in QNAP with ZFS (which I don't think has been tried just yet, because it is only new this year), then that will prove it is a FreeNAS issue and not a ZFS or Red issue.

3 hours ago, Egg-Roll said:

Simply put, with SMR in its current state and with far more vendors using ZFS than I originally thought, these drives shouldn't be in NAS setups unless you like living on the edge. Like I said, though, in "most" cases with every other file system it works no differently; however, I'm unsure exactly how many setups use ZFS now. Also, QNAP still says the SMR drives are OK to use even though they could take double the time in a pure-SMR rebuild, which is unacceptable.

 

Sure, more testing should be done; however, it feels like it was never done on WD's end. Rebuilds are rare, but to seemingly never test for them? Pure stupidity. One could throw in 6 SMR drives with 6 CMR drives and not know the difference until a rebuild takes an idiotic amount of time, because the 5-6 SMR drives plus the new, potentially SMR, replacement all shit bricks because you bought a QNAP system using ZFS. Now, knowing that more than just FreeNAS is using ZFS, it brings up bigger issues: how many NAS setups are actually using ZFS, how many vendors have actually tested SMR rebuilds within their own ecosystem before recommending these drives, and, more importantly, how many petabytes are at risk of a second drive failure because of SMR rebuild times?

I think they did test rebuild times. Given that it really is the only area that lets the drive down, I can see why they ran with it anyway. WD had a choice; they likely aren't making much more money by developing and selling DM-SMR beyond the small percentage of extra sales from the appeal of the drive capacity. I really don't think this is as bad as people are making out: rebuild times, sure, but everything else is within spitting distance for a new product or product change.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  


5 hours ago, Egg-Roll said:

All in all SMR isn't bad; however, after a quick search I just found out QNAP also uses ZFS, so it isn't just FreeNAS.

For QNAP, ZFS usage has been limited to their enterprise products; they are currently trialing a newer version of their OS that can use ZFS, but only on higher-end models, and it isn't their main OS yet. The desktop/non-rackmount NASes all run the older OS, which doesn't use ZFS. The smallest-bay-count QuTS-eligible product I know of is the rackmount 4x 3.5" + 5x 2.5" NAS. The most likely disks used, or OEM'd into disks-included sales, would be Seagate Exos; you're already paying thousands for the NAS, so saving $10-20 per disk doesn't make a lot of sense, and you probably need the slight bit of extra performance anyway. The models above this have more than 8 3.5" bays, so WD Red doesn't apply, as it is rated for 8 bays or fewer.

 

https://www.qnap.com/static/landing/2020/en-quts-hero/index.html#:~:text=QNAP QuTS hero&text=QuTS hero operating system combines,needs of business-critical applications.

 

QNAP also showing Ryzen some love 🙂


22 hours ago, mr moose said:

It doesn't have to be zero. It has to be proved that the problem is with the drive and not with FreeNAS, given that FreeNAS is the system that has the issues.

The problem isn't FreeNAS or ZFS if previous-generation drives worked equally well across all platforms. Technically it could be an issue with either, but as long as the old drives worked equally well where the new ones don't, it is seen as a hardware issue, caused by shortcomings or incompatibilities that should have been tested for to ensure the drives would work.

 

22 hours ago, mr moose said:

Except for the Samsung v. Apple case, where the jury took just 3 days (long enough to enjoy the hotel room and meals) to conclude a very lengthy and in-depth IP case that required a lot of debate on technical aspects of law. The case was not simple, and I have read a few commentaries by IP lawyers who found the verdict too quick and nowhere near satisfactorily explained.

https://techcrunch.com/2012/08/24/apple-wins-patent-ruling-as-jury-finds-samsung-infringes/

Also, there was that case that Samsung won which the president overturned because it was Apple. So no, justice is bought by the highest bidder or best argument in most cases; I've already mentioned the Monsanto trial and how ridiculous the courts look with regard to that.

Except that is company v. company; we are talking about consumer v. company. Totally different ballgame. While yes, I do agree that 3 days is stupidly short, I didn't pay much attention to that case, nor am I going to look into it. Those IP lawyers likely were not there throughout the whole case, and reading about court hearings second-hand is hardly trustworthy.

 

22 hours ago, mr moose said:

The point was that all these people doing videos testing Reds only on ZFS are not being very helpful, because they are only feeding the echo chambers.

Do tell me where all these "videos" are. When I searched "wd red smr raid rebuild" or "wd red smr raid test", all I kept getting were videos already posted here, apart from one video (maybe) that used a non-NAS drive. I think you are confusing all the hate topics on Reddit for videos.

 

22 hours ago, mr moose said:

We are only talking about the Reds with DM-SMR. From a cursory look it appears FreeNAS makes up a very small part of the market, so small it does not warrant thought, let alone validation time in R&D. How much time do any HDD makers put into testing their drive firmware with XP or 98? They probably don't, but there are still machines out there running those OSes.

 

We are not talking about overall market share; you still have not proven that FreeNAS doesn't account for more or less of WD Red (non-Pro) sales. Obviously FreeNAS isn't making up most of WD's market share, and nor are the Reds. However, Reds are more along the lines of consumer-grade drives built for a specific purpose. So when WD goes out and says "most of our network-attached drives go into these ecosystems", while not wrong, most of those drives are enterprise grade, and in saying so they have failed to research the actual market for Reds. You're using OSes that have been known to have very few issues with SATA drives (in XP basically none), and a dying breed at that, as a fair comparison? Reds have their own market share, and FreeNAS is part of that market share. How big or small, you likely don't know, and nor do I. It could be a large share or a small share, but a share is a share no matter how big, especially considering they did not kill the product the way Windows killed those OSes. What WD did with their SMR Reds, combined with your attitude towards ZFS/FreeNAS, would be at least the equivalent of Windows breaking all computers using PCIe sound cards and/or video capture cards and blaming it on the hardware.

 

23 hours ago, mr moose said:

I think they did test rebuild times. Given that it really is the only area that lets the drive down, I can see why they ran with it anyway. WD had a choice; they likely aren't making much more money by developing and selling DM-SMR beyond the small percentage of extra sales from the appeal of the drive capacity. I really don't think this is as bad as people are making out: rebuild times, sure, but everything else is within spitting distance for a new product or product change.

The issue is rebuild times, the last place you want things to take longer. The longer a rebuild takes, the worse it gets: if you have a weak drive and you don't have two-drive redundancy, you are literally fucked, all data lost. Even if you do have two-drive redundancy, a second failure while the first rebuild is still running puts more stress on the remaining drives. That's the problem: the issue isn't day-to-day use, it's what happens when something bad occurs, not if it will happen.
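To put rough numbers on that exposure, here's a minimal back-of-envelope sketch in Python (my own illustration; the AFR, drive count and rebuild windows below are assumptions, not figures from any of the videos or tests discussed in this thread). The takeaway is only that the chance of a second drive dropping out scales roughly linearly with how long the rebuild window stays open:

# Back-of-envelope sketch (illustrative assumptions only): chance that at least
# one surviving drive fails while a rebuild is still running.
AFR = 0.02               # assumed 2% annualized failure rate per drive
SURVIVING_DRIVES = 3     # e.g. a 4-bay array rebuilding onto one replacement
HOURS_PER_YEAR = 24 * 365

def p_second_failure(rebuild_hours: float) -> float:
    """Probability of >= 1 failure among the surviving drives during the
    rebuild, treating failures as independent and spread evenly over a year."""
    p_single = AFR * (rebuild_hours / HOURS_PER_YEAR)
    return 1 - (1 - p_single) ** SURVIVING_DRIVES

for hours in (8, 18, 48):  # assumed CMR-ish vs SMR-ish rebuild windows
    print(f"{hours:>2} h rebuild -> {p_second_failure(hours) * 100:.4f}% "
          f"chance of a second drive failure before it finishes")

The absolute numbers come out tiny for healthy drives, but doubling the rebuild window doubles the exposure, and an ageing or heavily loaded array makes the assumed AFR optimistic.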

 

23 hours ago, mr moose said:

If the Reds work in QNAP with ZFS (which I don't think has been tried just yet, because it is only new this year), then that will prove it is a FreeNAS issue and not a ZFS or Red issue.

 

22 hours ago, leadeater said:

For QNAP, ZFS usage has been limited to their enterprise products; they are currently trialing a newer version of their OS that can use ZFS, but only on higher-end models, and it isn't their main OS yet. The desktop/non-rackmount NASes all run the older OS, which doesn't use ZFS. The smallest-bay-count QuTS-eligible product I know of is the rackmount 4x 3.5" + 5x 2.5" NAS. The most likely disks used, or OEM'd into disks-included sales, would be Seagate Exos; you're already paying thousands for the NAS, so saving $10-20 per disk doesn't make a lot of sense, and you probably need the slight bit of extra performance anyway. The models above this have more than 8 3.5" bays, so WD Red doesn't apply, as it is rated for 8 bays or fewer.

 

https://www.qnap.com/static/landing/2020/en-quts-hero/index.html#:~:text=QNAP QuTS hero&text=QuTS hero operating system combines,needs of business-critical applications.

 

QNAP also showing Ryzen some love 🙂

I would like to point out something to both of you: QNAP isn't exactly the best example to use in any kind of defense of SMR drives 🤣

https://www.qnap.com/en-us/compatibility/?device_category=3.5 hdd&brand=wd They literally state it is OK to put Reds in a TS-2888X, which is a 28-bay layout; I guess that is OK if all bays apart from 8 are SSDs. However, I did find one model that has 4 middle drives surrounded by 8 other drives, which according to them is OK.

 

I seriously doubt QNAP will test anything themselves rather than requiring others to test it for them. Also, if QNAP performance naturally degrades without ZFS, how badly will it do with ZFS? If QNAP was using ZFS during that test, which by the sounds of it they may not have been (they never stated), then there could have been an argument that the main issue is FreeNAS and how it implements ZFS.

https://forum.qnap.com/viewtopic.php?p=750101&sid=57da03f5c5f965b8726fde31a1cfa822#p749937

 

Someone buying a QNAP or Synology for home use is not likely very techy and will go for the "cheapest" NAS drive on a supported list without doing any research. This week 4TB WD Reds are on sale at Canada Computers, and like QNAP they like to be lazy and not update their website, so one could be getting a CMR drive at a great price or an SMR drive at a bad price (Seagate's NAS drives are $10 cheaper). A person buying a QNAP will see both listed as OK and not understand the potential risks of SMR, let alone possibly even know about SMR.


23 minutes ago, Egg-Roll said:

The problem isn't FreeNAS or ZFS if previous-generation drives worked equally well across all platforms. Technically it could be an issue with either, but as long as the old drives worked equally well where the new ones don't, it is seen as a hardware issue, caused by shortcomings or incompatibilities that should have been tested for to ensure the drives would work.

You can't claim what it is or isn't without evidence. The Reds work on everything else, just not ZFS in FreeNAS. Regardless of what worked before, you still haven't shown that DM-SMR is the issue (which is the core of the lawsuit). It could well be just the firmware.

 

Quote

Except that is company v. company; we are talking about consumer v. company. Totally different ballgame. While yes, I do agree that 3 days is stupidly short, I didn't pay much attention to that case, nor am I going to look into it. Those IP lawyers likely were not there throughout the whole case, and reading about court hearings second-hand is hardly trustworthy.

 

Nope, it's exactly the same regardless of who the plaintiff is; besides, the plaintiffs in the Monsanto case were consumers.

 

Quote

Do tell me where all these "videos" are. When I searched "wd red smr raid rebuild" or "wd red smr raid test", all I kept getting were videos already posted here, apart from one video (maybe) that used a non-NAS drive. I think you are confusing all the hate topics on Reddit for videos.

 

We are not talking about overall market share; you still have not proven that FreeNAS doesn't account for more or less of WD Red (non-Pro) sales.

I linked to an article that showed the basic market share of NAS systems; FreeNAS doesn't even register on the small-business side. It would be a large assumption to think any significant portion of WD's Red sales go to home-built NAS systems when such a large commercial market (one projected to grow immensely) already exists.

Quote

Obviously FreeNAS isn't making up most of WD's market share, and nor are the Reds. However, Reds are more along the lines of consumer-grade drives built for a specific purpose. So when WD goes out and says "most of our network-attached drives go into these ecosystems", while not wrong, most of those drives are enterprise grade, and in saying so they have failed to research the actual market for Reds. You're using OSes that have been known to have very few issues with SATA drives (in XP basically none), and a dying breed at that, as a fair comparison?

WD advertises them as NAS drives that work in all commercial NAS products. What failed is the DIY option, which they likely didn't even look at. This is not false advertising. And I am using an OS as an example of a product that is advertised to work on all machines but actually doesn't, for various reasons. MS can't be held legally liable for that, just as WD can't be held legally liable for not ensuring the drive works in a device they have no control over.

 

 

Quote

 

Reds have their own market share, and FreeNAS is part of that market share. How big or small, you likely don't know, and nor do I. It could be a large share or a small share, but a share is a share no matter how big, especially considering they did not kill the product the way Windows killed those OSes. What WD did with their SMR Reds, combined with your attitude towards ZFS/FreeNAS, would be at least the equivalent of Windows breaking all computers using PCIe sound cards and/or video capture cards and blaming it on the hardware.

 

Given the stats I linked before, it is very likely to be less than a few percent at best. Windows updates fail for a myriad of unexpected reasons; that's the point. WD failed on FreeNAS for an unexpected reason, not because they misled the consumer.

 

Quote

The issue is rebuild times, the last place you want things to take longer. The longer a rebuild takes, the worse it gets: if you have a weak drive and you don't have two-drive redundancy, you are literally fucked, all data lost. Even if you do have two-drive redundancy, a second failure while the first rebuild is still running puts more stress on the remaining drives. That's the problem: the issue isn't day-to-day use, it's what happens when something bad occurs, not if it will happen.

Objective issue: cheaper with larger capacity versus longer rebuild times once or twice a year. @Leadeater has gone over the specifics of that already.

Quote

 

I would like to point out something to both of you: QNAP isn't exactly the best example to use in any kind of defense of SMR drives 🤣

https://www.qnap.com/en-us/compatibility/?device_category=3.5 hdd&brand=wd They literally state it is OK to put Reds in a TS-2888X, which is a 28-bay layout; I guess that is OK if all bays apart from 8 are SSDs. However, I did find one model that has 4 middle drives surrounded by 8 other drives, which according to them is OK.

 

I seriously doubt QNAP will test anything themselves rather than requiring others to test it for them. Also, if QNAP performance naturally degrades without ZFS, how badly will it do with ZFS? If QNAP was using ZFS during that test, which by the sounds of it they may not have been (they never stated), then there could have been an argument that the main issue is FreeNAS and how it implements ZFS.

https://forum.qnap.com/viewtopic.php?p=750101&sid=57da03f5c5f965b8726fde31a1cfa822#p749937

 

Someone buying a QNAP or Synology for home use is not likely very techy and will go for the "cheapest" NAS drive on a supported list without doing any research. This week 4TB WD Reds are on sale at Canada Computers, and like QNAP they like to be lazy and not update their website, so one could be getting a CMR drive at a great price or an SMR drive at a bad price (Seagate's NAS drives are $10 cheaper). A person buying a QNAP will see both listed as OK and not understand the potential risks of SMR, let alone possibly even know about SMR.

That doesn't really change the reality of the situation. We have a series of drives that do as they say in (what we know to be) every product except FreeNAS, and we still don't know whether SMR is the absolute cause of the issue with FreeNAS. It seems to me that is a crucial bit of information if you are intending to sue WD for misleading the consumer, because if an update to either the firmware or FreeNAS comes out that fixes the issue, then all you have is a drive that takes longer to rebuild but works as intended and advertised in every other way. I think that should fail in the courts, but as I pointed out before, the courts don't find in favor of facts; they find in favor of the best argument or feeling.

Grammar and spelling is not indicative of intelligence/knowledge.  Not having the same opinion does not always mean lack of understanding.  

