RAID Card, Drive Choice and MOBO Appropriateness

cdawe

@LinusTech (or whoever can clear up the confusion for me :unsure: ...)

 

I'm going to build my home server soon. I purchased Windows Home Server 2011 from NCIX a few weeks back, and I'm in the process of getting the other gear pieced together.

 

I'm now looking into getting a good RAID card to do RAID 5 or 6 across maybe five 3TB WD Red drives (or REs).

 

First of all, I haven't been able to discern for sure, based on the varied ideas across the net, whether there's any major advantage of the "RE - Raid Enabled" drives over the new Red drives. What's the community take on what to do here?

 

Linus, I know you like the Seagate drives for this kinda thing, but I've had such bad experiences with Seagate that I just want to avoid them altogether.

 

Secondly, I would like to use my current Gigabyte EP45-UD3LR board with an Intel Core 2 Quad (Q6600) 2.40 GHz CPU as the main guts for the thing. At the same time, I've heard that Gigabyte boards have major issues with RAID cards, especially LSI RAID cards. What's the deal for real?

 

Linus, I know you used a much newer board on your Mega-RAID build on YouTube, but you did seem to like the LSI card? I know it can have issues with UEFI boards, or so I gather. Anyway, if LSI is OK with UEFI... how would this apparent BIOS upgrade need to be done to get it to work (assuming it will)?

 

As you can tell, I'm confused. I always thought that Adaptec was the only way to go with RAID... Actually, the MaximumPC guys did a comparison of RAID cards back in 2008 (I know, it's a while ago) and said at the end of the article to avoid LSI altogether. http://www.maximumpc...mpared?page=0,4 Some guys on hardwarecanucks say that these days RAID is passé altogether??? http://www.hardwarec...580-post17.html

 

Whatever the solution, I'm using an Antec case for now, so I really need a full-height card.

 

I'm baffled. HELP! LOL!


RAID 10 would be much faster, but you only get the storage capacity of half of your drives. As for the motherboard compatibility, I think the problem is just plain lack of support, as they weren't expecting people to run server parts in their motherboards.
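For anyone weighing the options, here's a quick capacity sketch for the five 3TB drives the OP mentioned (back-of-the-envelope only; RAID 10 wants an even drive count, so the fifth disk would sit idle or act as a hot spare):

```python
# Usable capacity of five 3 TB drives under the RAID levels discussed.
# Back-of-the-envelope; formatted capacity is always somewhat lower.

drives, size_tb = 5, 3

raid5  = (drives - 1) * size_tb   # one drive's worth of parity -> 12 TB
raid6  = (drives - 2) * size_tb   # two drives' worth of parity -> 9 TB
raid10 = (drives // 2) * size_tb  # mirrored pairs; the odd drive sits out -> 6 TB

print(f"RAID 5: {raid5} TB, RAID 6: {raid6} TB, RAID 10: {raid10} TB")
```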


I'm now looking into getting a good RAID card to do RAID 5 or 6 across maybe five 3TB WD Red drives (or REs).

Summoning @wpirobotbuilder and @Vitalius :)

 

First of all, I haven't been able to discern for sure, based on the varied ideas across the net, whether there's any major advantage of the "RE - Raid Enabled" drives over the new Red drives. What's the community take on what to do here?

As a normal consumer: In most cases, the RE drives aren't really worth the additional money they'll cost you. They do have some additional features, but under most normal scenarios (NAS, home servers, desktop PCs), those won't really be relevant. I have some WD RE4 drives (the predecessor of the current RE drives, which are technically speaking the RE5 line, but they dropped the digit from the name with the new generation), and while they're very nice drives, I'd probably have bought Reds if they'd been out when I bought them.

I have a few Reds as well, and while they're not quite as fast, they are quieter and run cooler, so unless you have some very specific requirements that call for an enterprise-hardened drive, the Reds should work fine.

 

Secondly, I would like to use my current Gigabyte EP45-UD3LR board with an Intel Core 2 Quad (Q6600) 2.40 GHz CPU as the main guts for the thing. At the same time, I've heard that Gigabyte boards have major issues with RAID cards, especially LSI RAID cards. What's the deal for real?

Consumer motherboards and RAID cards are basically trial and error. Sometimes it'll work, other times not, but there isn't really a way to predict which way things will go before trying it out, at least none that I've heard of.

 

As you can tell, I'm confused. I always thought that Adaptec was the only way to go with RAID... Actually, the MaximumPC guys did a comparison of RAID cards back in 2008 (I know, it's a while ago) and said at the end of the article to avoid LSI altogether. http://www.maximumpc...mpared?page=0,4 Some guys on hardwarecanucks say that these days RAID is passé altogether??? http://www.hardwarec...580-post17.html

2008 is a very long time ago in computer terms. I used to run an Adaptec SCSI card, which worked very well for years. Now I'm using LSI host bus adapters, so far with no issues. LSI is pretty successful in the enterprise market; if there were really something wrong with their products, they wouldn't be.

You can always run into issues (especially when running a mix of consumer and enterprise components), but from what I've heard and read, LSI is pretty solid. Not that I have heard any bad things about Adaptec. ;)


Summoning @wpirobotbuilder and @Vitalius :)

Thanks for summoning me, but I'm out of my depth here. :P I only know FreeNAS and ZFS stuff. For general home use with WHS and such, I've got nothing. 


Thanks for summoning me, but I'm out of my depth here. :P I only know FreeNAS and ZFS stuff. For general home use with WHS and such, I've got nothing.

alpenwasser used summoning on Vitalius. It was not very effective. :P


As others have said, the Reds should do you fine. I too have previously had a bad experience with Seagate, whilst every drive I've ever had from WD has been solid.

No experience with hardware RAID controllers though, someone else will have to help with that :)


With WHS, you're going to have to go either with onboard chipset RAID or a RAID card.

 

If you have to, the LSI 9260-4i or 9260-8i are good cards. They have onboard cache to accelerate random operations, which is useful for mechanical RAID.

 

If you do, I seriously recommend NOT going with WD Reds, or Vitalius will have my head. If you end up with a drive failure with a Red RAID, it is likely that you'll have an unrecoverable read error during the rebuild. My recommendation instead is the WD Se series, which has a much better bit error rate and falls in between the Red and the RE series in terms of price.

 

EDIT: Don't worry, Reds will do :)
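For a rough sense of why people worry about UREs during parity rebuilds, here's a back-of-the-envelope sketch. It takes the Red's spec-sheet rating at face value as a hard per-bit probability, which real drives usually beat by a wide margin, so treat the output as a worst case, not a prediction (TLER changes the picture too, as discussed below):

```python
# Back-of-the-envelope: odds of at least one URE while rebuilding a
# five-drive RAID 5 of 3 TB disks rated at <1 URE per 10^14 bits read.
# Sketch only: it treats the spec-sheet rating as a hard per-bit
# probability, which real drives usually beat by a wide margin.
import math

surviving_drives = 4                       # one of five drives has failed
bits_read = surviving_drives * 3e12 * 8    # 12 TB read during the rebuild
p_per_bit = 1e-14                          # WD Red class URE rating

expected_ures = bits_read * p_per_bit            # ~0.96
p_at_least_one = 1 - math.exp(-expected_ures)    # Poisson approximation

print(f"expected UREs during rebuild: {expected_ures:.2f}")
print(f"P(at least one URE): {p_at_least_one:.0%}")  # ~62%
```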


If you do, I seriously recommend NOT going with WD Reds, or Vitalius will have my head. If you end up with a drive failure with a Red RAID, it is likely that you'll have an unrecoverable read error during the rebuild. My recommendation instead is the WD Se series, which has a much better bit error rate and falls in between the Red and the RE series in terms of price.

I got curious and looked up the WD drives in 4 TB.

Non-recoverable read-errors per bits read:

Looking at that list primarily raises one question: is "10 in 10^15" equivalent to "1 in 10^14"? I know that it looks like it would be, but I also remember the mathematics of probability being rather counter-intuitive quite often, and since I'm very rusty in that field I think I'd rather ask than just assume.

If it is, then it looks to me like the Se series isn't really better when it comes to UREs (<10 in 10^15 as opposed to the Reds' <1 in 10^14); the only series that would have a clear advantage would be the RE (which, even despite that, I'm not sure I'd really recommend for a home server considering the price tag).
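For what it's worth, a quick numerical check of the equivalence question (a sketch that treats the ratings as exact per-bit probabilities under the usual independence assumption): as a rate, 10 per 10^15 bits and 1 per 10^14 bits are exactly the same number, and the "at least one URE over a full-drive read" probability comes out identical too, since it depends only on the rate times the bits read.

```python
# Sanity check: is "10 in 10^15" the same as "1 in 10^14"?
# Sketch only; treats the spec-sheet rating as an exact per-bit probability.
import math

bits = 4e12 * 8  # bits read in one full pass over a 4 TB drive

for label, p in [("1 in 10^14", 1 / 1e14), ("10 in 10^15", 10 / 1e15)]:
    p_any = 1 - math.exp(-bits * p)  # Poisson approximation for P(>=1 URE)
    print(f"{label}: {p:.1e} per bit -> P(>=1 URE per full read) = {p_any:.1%}")

# Both lines print the same numbers: as a rate the two specs are identical.
```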

@Vitalius, you're welcome to chime in too, obviously. ;)

EDIT: Side note: @cdawe: If you're interested in the feature sets of the different disk series in comparison to each other, the PDFs I've linked have some info on that as well, just FYI.

Edited by alpenwasser


-snip-

-snip-

I think you guys missed something regarding what I've said about Parity RAID and UREs. 

TLER (Time Limited Error Recovery) fixes the problem that UREs introduce to Parity RAID pretty much entirely, as it prevents the drive from spending so long trying to get a good read that the controller says it failed.

WD Reds, SE, RE and Seagate NAS drives all have TLER as a feature, so you don't gain an advantage with regard to UREs when it comes to RAID lifespan and protection. The URE rate is kind of irrelevant in this case. The only thing it might change is how often your file is corrupted due to a URE when you have no backups or something similar, but that's literally 1 file every year or two or 1-2 files every RAID rebuild at most.

So the worst case scenario is you lose a few files, but that's why RAID isn't backup anyway.


I think you guys missed something regarding what I've said about Parity RAID and UREs. 

TLER (Time Limited Error Recovery) fixes the problem that UREs introduce to Parity RAID pretty much entirely, as it prevents the drive from spending so long trying to get a good read that the controller says it failed.

OK, I shall seize the opportunity to ask a question: assume a parity RAID setup.

=> Drive tries to read some data.

=> One bit fails (for w/e reason). EDIT: And keeps failing should the drive retry. /EDIT

=> Then what? Controller fixes this via parity? Otherwise I don't see how the URE could be mitigated into irrelevance.

 

The only thing it might change is how often your file is corrupted due to a URE when you have no backups or something similar, but that's literally 1 file every year or two or 1-2 files every RAID rebuild at most.

I was actually primarily thinking of an array rebuild due to failed drive(s). Just to make sure I haven't been thinking incorrectly about this one: if you have a drive failure, replace the drive, and during the array rebuild process one of the other drives has a URE, in that case you would suffer data loss, right? 1-2 files might actually be significant for some people though. ;)

 

So the worst case scenario is you lose a few files, but that's why RAID isn't backup anyway.

No it is most certainly not. :D

EDIT: This time the summoning was very effective. :D

Edited by alpenwasser


OK, I shall seize the opportunity to ask a question: assume a parity RAID setup.

=> Drive tries to read some data.

=> One bit fails (for w/e reason).

=> Then what? Controller fixes this via parity? Otherwise I don't see how the URE could be mitigated into irrelevance.

 

I was actually primarily thinking of an array rebuild due to failed drive(s). Just to make sure I haven't been thinking incorrectly about this one: if you have a drive failure, replace the drive, and during the array rebuild process one of the other drives has a URE, in that case you would suffer data loss, right? 1-2 files might actually be significant for some people though. ;)

 

No it is most certainly not. :D

EDIT: This time the summoning was very effective. :D

Haha very. 

Well, it's not really mitigated to irrelevance. The problem UREs cause is that when a consumer drive without TLER tries to read a sector but can't (the URE), it will keep trying for an extended period of time. My company saw this with our servers when we tried to rebuild our RAID 5 array which had consumer drives in it. 

Watching the I/Os in the Task Manager of our server, it was literally doing 0 MB/s. Effectively nothing, even though it was trying to rebuild the array. And that's where the problem lies: from the Controller's perspective, if a drive does not respond within (on average) 8 seconds, it is considered failed. This means that having a single bad sector on a drive without TLER effectively makes the drive dead for RAID purposes.

However, if said drive were alone in a machine, it would eventually time out (around 30 to 60 seconds) and continue reading the rest of the data, leaving that sector's data lost (corrupt), because the drive is fine aside from those bad sectors.

So UREs are dangerous in that, if you don't have TLER, they can cause a RAID array to fail even though the drives are 99% fine with only a few bad sectors. 

So compare losing all of your data because the RAID controller says the RAID has failed (even though it hasn't really), to losing 1-2 corrupt files because you had TLER, which effectively kept the Controller from lying to the OS.

They aren't mitigated to irrelevance, but losing 1-2 files is a lot better than losing all the files. However, if your drives live, you can rebuild the data lost to UREs from parity data. You can't do that if the drives are considered failed by the Controller due to UREs and timing out.
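A toy model of that interaction, using the rough timings from the post above (8 s of controller patience, 30-60 s of retrying for a desktop drive, ~7 s for a TLER drive; illustrative figures, not exact firmware values):

```python
# Toy model of why TLER matters to a hardware RAID controller.
# Timings are the rough figures from the post above, not exact firmware values.

CONTROLLER_TIMEOUT_S = 8  # controller drops a drive that stays silent this long

def controller_verdict(error_recovery_s: float) -> str:
    """What the controller concludes when a drive hits a bad sector."""
    if error_recovery_s > CONTROLLER_TIMEOUT_S:
        return "drive marked as FAILED (array degraded, rebuild stalls)"
    return "read error reported in time; sector rebuilt from parity"

print("desktop drive, ~45 s of retries:", controller_verdict(45))
print("TLER drive, gives up after ~7 s:", controller_verdict(7))
```

Same per-sector outcome on the disk either way; completely different outcome at the array level.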


Excellent, thanks for clearing that up. :)

They aren't mitigated to irrelevance, but losing 1-2 files is a lot better than losing all the files.

That it is.


Excellent, thanks for clearing that up. :)

That it is.

You're welcome. As long as your RAID array lives, you shouldn't lose any files unless the parity data and the data it protects are both corrupt. And you'd have to be uber unlucky for that to happen.

It's a complex issue to explain. 


You're welcome. As long as your RAID array lives, you shouldn't lose any files unless the parity data and the data it protects are both corrupt. And you'd have to be uber unlucky for that to happen.

It's a complex issue to explain. 

So TLER prevents a URE from ruining an array. Where does the BER come into play (or does it)?

 

EDIT: The BER is the URE rate specified (1 sector per 10^14 bits). So that is good given that the array will likely be able to repair that sector.
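Single parity is just XOR across the stripe, so any one unreadable block can be recomputed from the others. A minimal sketch with toy blocks (real controllers do this per stripe, with rotating parity):

```python
# Minimal illustration of single-parity reconstruction (RAID 5 style).
# Toy blocks, no rotating parity; real controllers do this per stripe.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length blocks together byte by byte."""
    out = bytearray(blocks[0])
    for block in blocks[1:]:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"  # data blocks in one stripe
parity = xor_blocks(d0, d1, d2)         # written to the parity drive

# d1 suffers a URE; recompute it from the surviving blocks plus parity:
rebuilt = xor_blocks(d0, d2, parity)
assert rebuilt == d1
```

This is also why a URE during normal operation is recoverable (the parity is intact) while a URE during a rebuild is not: the redundancy that would have covered it is the very thing being rebuilt.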


So TLER prevents a URE from ruining an array. Where does the BER come into play (or does it)?

Yes. It prevents the Controller from believing that the HDD has failed when it hasn't. 

BER (Bit Error Rate) is basically how often a URE (Uncorrectable Read Error) happens. BER means you got a 1 instead of a 0, and it can apply to things other than an HDD reading a sector [just read the executive summary] (e.g. noise/interference messing with a signal over a cable). They're related but not the same thing.

The whole <1 in 10^14 error rate is basically how often a URE will occur when an HDD reads a sector, and is that particular thing's BER.

So the BER is relevant, as it is how often a URE will likely happen, but it's not nearly as important for an average home user as having TLER on a drive is. Only a business should buy drives based on BERs, because they have the money and because enterprise-level stuff shouldn't have concessions (i.e. more UREs for lower cost per drive).

Having a higher BER means the drive will slow down faster over time: even though UREs are given a much stricter time limit in an enterprise environment, 4-7 seconds of the drive doing nothing is a large performance hit regardless, and if UREs start to build up over time, things get worse.

There are other things to consider, but that's the major one. BER shouldn't mean much to normal consumers though. Even the ones who build RAID 5 media arrays and such. It's not worth the monetary investment imo.


Yes. It prevents the Controller from believing that the HDD has failed when it hasn't. 

BER (Bit Error Rate) is basically how often a URE (Uncorrectable Read Error) happens. BER means you got a 1 instead of a 0, and it can apply to things other than an HDD reading a sector [just read the executive summary] (e.g. noise/interference messing with a signal over a cable). They're related but not the same thing.

The whole <1 in 10^14 error rate is basically how often a URE will occur when an HDD reads a sector, and is that particular thing's BER.

So the BER is relevant, as it is how often a URE will likely happen, but it's not nearly as important for an average home user as having TLER on a drive is. Only a business should buy drives based on BERs, because they have the money and because enterprise-level stuff shouldn't have concessions (i.e. more UREs for lower cost per drive).

Having a higher BER means the drive will slow down faster over time: even though UREs are given a much stricter time limit in an enterprise environment, 4-7 seconds of the drive doing nothing is a large performance hit regardless, and if UREs start to build up over time, things get worse.

There are other things to consider, but that's the major one. BER shouldn't mean much to normal consumers though. Even the ones who build RAID 5 media arrays and such. It's not worth the monetary investment imo.

Got it.

 

I wonder if consumer drives will eventually get BERs of 1 in 10^15 and up. The WD XE series has 1 in 10^17 BER.


Got it.

 

I wonder if consumer drives will eventually get BERs of 1 in 10^15 and up. The WD XE series has 1 in 10^17 BER.

Yeah, that'd be nice. I believe this is why OS installations slow down over time, and why reinstalling makes them faster. As the sectors that the OS is on get worse, it takes longer to get a correct read on them, but re-installing moves the OS to new sectors, or fixes the problem with the old ones by overwriting them.

Which actually makes me wonder: is there an OS that could actively move its files around the HDD as time goes on? I bet that would fix a lot of performance issues with them over time, though SSDs fix this already, albeit at a cost (permanent failure vs. slow degradation).


WD Reds, SE, RE and Seagate NAS drives all have TLER as a feature, so you don't gain an advantage with regard to UREs when it comes to RAID lifespan and protection. The URE rate is kind of irrelevant in this case. The only thing it might change is how often your file is corrupted due to a URE when you have no backups or something similar, but that's literally 1 file every year or two or 1-2 files every RAID rebuild at most.

So the worst case scenario is you lose a few files, but that's why RAID isn't backup anyway.

You think it's worth it at all to get the SE series over the Red?

 

Restoring from backups runs a similar risk of a URE as a rebuild, whether it's a RAID or not.


You think it's worth it at all to get the SE series over the Red?

 

Restoring from backups runs a similar risk of a URE as a rebuild, whether it's a RAID or not.

If it's really <10 in 10^15, then no, since that's <1 in 10^14, so the BER isn't different unless that's a typo. Both have TLER, so they are effectively the same drive except for the branding and whatever performance differences there are between them.

True. The major difference is that without RAID there are fewer disks, and fewer disks mean a lower chance of a URE.
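Roughly quantified (a sketch using the same face-value spec numbers as before): the risk tracks how much data gets read, which is what fewer disks buys you.

```python
# Rough comparison: URE odds while rebuilding a degraded 5x3TB RAID 5
# (reading 12 TB off the four surviving drives) vs. restoring 3 TB from
# a single backup drive. Same face-value 1-in-10^14 rating assumed; the
# difference is simply how much data gets read.
import math

P_URE_PER_BIT = 1e-14

def p_at_least_one_ure(tb_read: float) -> float:
    bits = tb_read * 1e12 * 8
    return 1 - math.exp(-bits * P_URE_PER_BIT)  # Poisson approximation

print(f"RAID 5 rebuild, 12 TB read: {p_at_least_one_ure(12):.0%}")  # ~62%
print(f"single-drive restore, 3 TB: {p_at_least_one_ure(3):.0%}")   # ~21%
```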


If it's really <10 in 10^15, then no, since that's <1 in 10^14, so the BER isn't different unless that's a typo. Both have TLER, so they are effectively the same drive except for the branding and whatever performance differences there are between them.

I didn't even notice that. If it was 1 in 10^15, opinion changed or no?

 

Looks like it might be 1 in 10^15, according to arstechnica.


I didn't even notice that. If it was 1 in 10^15, opinion changed or no?

 

Looks like it might be 1 in 10^15, according to arstechnica.

Well, they might have just made the same mistake you made, reading the 10 as a 1?

It is a bit strange though; for example, the Purple drives are 10 in 10^14, which is 1 in 10^13, so they're the worst of all by a factor of 10, if the spec sheet is right.

Also, the RE drives use a "10" as well instead of a "1".

Hmmm...


It is a bit strange though; for example, the Purple drives are 10 in 10^14, which is 1 in 10^13, so they're the worst of all by a factor of 10, if the spec sheet is right.

Also, the RE drives use a "10" as well instead of a "1".

Hmmm...

Trickery!

 

I posted on the WD community forum, hopefully someone can answer the question.

 

EDIT: They came back and said to call customer support. Ugh.


Trickery!

 

I posted on the WD community forum, hopefully someone can answer the question.

Trickery indeed!

Nice thinking, much better than us cluelessly arguing back and forth about it for ages. :D


I didn't even notice that. If it was 1 in 10^15, opinion changed or no?

 

Looks like it might be 1 in 10^15, according to arstechnica.

Eh, kinda. It's more of a personal choice, as you have to consider whether the price difference is worth the performance degradation (which can be fixed with a reformat/rewrite) and/or the potential file loss (on the order of 1-3 files every 1-3 years).

For a business or enterprise environment, yes, it's absolutely worth it.


alpenwasser, Vitalius: it turns out that the <10 in 10^X representation of BER is WD's standard for datacenter-class drives. Strange, but it means there's little reason to choose the SE drives over the Red drives unless you want the warranty and performance increase.

 

Straight from WD staff

