Reducing Single Points of Failure (SPoF) in Redundant Storage


That could work. Whether you have removed the controllers as a SPoF depends on which drives (i.e. which SATA ports) you are using. For example, if all the drives you used for a RAID 6 array were attached to the LSI controller, that controller would be a SPoF. Plan your RAID volume carefully and you will avoid SPoFs.
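To make that concrete, here is a tiny sketch (the controller names and drive counts are made-up examples) that checks whether a double-parity array like RAID 6 / RAID Z2 can survive any single controller failure:

```python
# RAID 6 / RAID Z2 tolerates losing at most two drives at once. A
# controller failure takes down every drive attached to it, so the
# volume survives only if no single controller hosts more than two
# of the volume's drives.

def survives_any_controller_failure(layout, parity=2):
    """layout maps a controller name to how many of the volume's
    drives are attached to it."""
    return all(count <= parity for count in layout.values())

# All six drives on the one LSI card: the card is a SPoF.
print(survives_any_controller_failure({"LSI": 6}))  # False

# Two drives per controller: any single controller can die.
print(survives_any_controller_failure({"LSI": 2, "chipset": 2, "ASMedia": 2}))  # True
```

The same check works for any parity level by changing the `parity` argument.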

 

If you want to see what can be done with two controllers (like you have), go back and look at the configurations involving exactly two controllers; those are your possible options. In terms of ease of expansion, I highly recommend RAID 10. You only get 50% of the total drive space, but you only need two controllers and can throw in another RAID 1 pair to expand very quickly. You could expand up to twelve drives with that, getting six drives' worth of usable capacity.

 

If you anticipate needing more than 18TB of storage, then you will likely need more than one LSI controller to keep avoiding SPoFs, regardless of the RAID configuration you choose.
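For a quick sanity check of those numbers, here is the arithmetic as a sketch (the 3 TB drive size is a hypothetical example, not from the thread):

```python
# RAID 10 usable capacity: every drive is mirrored, so you keep half
# the raw space no matter how many mirror pairs you stripe together.
def raid10_usable_tb(n_drives, drive_tb):
    assert n_drives % 2 == 0, "RAID 10 needs whole mirror pairs"
    return (n_drives // 2) * drive_tb

print(raid10_usable_tb(12, 3))  # 18 -> twelve 3 TB drives give 18 TB usable
print(raid10_usable_tb(6, 3))   # 9
```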

 

P.S. RAID controllers need to be flashed from IR to IT mode to be fully compatible with ZFS/FreeNAS. See alpenwasser's tutorial on how to do that for the LSI 9211-8i (linked in my signature). By the way, the 9211-8i is also the most commonly recommended controller to use, along with the 9211-4i.

 

If you need more help, don't hesitate to ask.

 

If I use the LSI RAID card with software RAID (ZFS/FreeNAS), and somehow the RAID card fails and I can't get a similar card, will I be screwed? Or will it not be a problem?


MB: MSI Z77a G45 | Proc: i5 3570K (Stock) | HSF: CM 212X Turbo | RAM: Corsair Vengeance 8GB (2x4GB) | VGA: MSI GTX 660 Twin Frozr | PSU: Corsair GS600 | Case: CM Storm Enforcer | Storage: OCZ Vector 128GB, WD Blue 500GB, Samsung 840 Evo 120GB, WD Blue 1TB

Posted by the original poster:


If you have eliminated the card as a SPoF, you will still have all your data, and your RAID will continue to operate. However, you will have to order a new controller, hook the drives from the dead controller up to it, and rebuild the array ASAP; another failure could doom your array.

 

It doesn't have to be the same controller, as long as it is operating in IT mode (or it could just be a SATA HBA).


I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage, Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking



 

Oh, I see. I understand now.

Thank you, dude.



Posted by the original poster:


You're very welcome.




First xD

but seriously this is a great read, very informative

And how are there no replies after 2 days 0.o

 

Very informative, but nothing to really discuss. It's a great resource for information, but not much more than a "thanks, that helped me big time" will come out of it.

 

Additionally, it's something most regular users here will already know about, and I'd say people who seek out this knowledge will come here via Google, meaning they'd have to register on the forums before posting anything. Nevertheless, it will help more people than you'd initially think. Well written, short and visually appealing.


My builds:
'Baldur' - Data Server - Build Log
'Hlin' - UTM Gateway Server - Build Log

Posted by the original poster:

Very informative, but nothing to really discuss. It's a great resource for information, but not much more than a "thanks, that helped me big time" will come out of it.

I'm not looking for praise; I write stuff because it's fun. During the process, I often learn something of use to me and/or others.

 

Additionally, it's something most regular users here will already know about, and I'd say people who seek out this knowledge will come here via Google, meaning they'd have to register on the forums before posting anything.

Maybe some people know about it, but judging from the number of builds you see on the forums with a RAID card, there is very little awareness of balancing arrays across multiple controllers. Then again, most people here aren't looking for such high-availability systems, but every now and then the knowledge proves useful, as it did here.

 

well written, short and visually appealing.

Thank you :)




holy cr*p this is great!! (very late.. it must have been lost in this part of the forums)

i do not understand most of the terms (especially the add-in cards?), but yeah, very informative.

 

never knew there was a RAID 50, 60, 70.. i thought 1+0 was already great..

 

i finally know what to do with those extra expansion slots that keep gathering dust.. lol

Posted by the original poster:
never knew there was a RAID 50, 60, 70.. i thought 1+0 was already great.

Most add-in RAID cards support RAID 50 and 60, and FreeNAS supports striped Z1 (RAID 50 equivalent), striped Z2 (RAID 60 equivalent) and striped Z3 (striped triple-parity RAID). There are no controllers available that support RAID 70 to my knowledge, but it was easier to say RAID 70 than "striped triple-parity RAID".
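The capacity trade-off between those striped RAID-Z levels is simple arithmetic; a quick sketch (the vdev counts and widths are just example numbers):

```python
# Each RAID-Z vdev of `width` drives loses `parity` drives to parity;
# striping vdevs together adds their usable drives.
def striped_raidz_usable_drives(n_vdevs, width, parity):
    return n_vdevs * (width - parity)

print(striped_raidz_usable_drives(2, 6, 1))  # 10 -- striped Z1 ("RAID 50")
print(striped_raidz_usable_drives(2, 6, 2))  # 8  -- striped Z2 ("RAID 60")
print(striped_raidz_usable_drives(2, 6, 3))  # 6  -- striped Z3 ("RAID 70")
```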

 

 

holy cr*p this is great!! (very late to have been lost in this part of the forums)

i do not understand most of the terms (especially the add-in cards?), but yeah, very informative.

You mean the HBA? It stands for Host Bus Adapter, and all it does is let you attach hard drives (which normally run over a SATA bus) to a PCI-E bus. Essentially it's a PCI-E to SATA adapter on a "host" computer, hence "Host Bus Adapter" :)





 

the nested raid you've shown was easy to understand, though it's quite an expensive setup to build. lol. right now i don't have the need or the means for 6 or more drives for raid.

 

I did a bit of research on these add-in cards; apparently the only ones available near here are old cards. Marvell, i think.. they can only handle two drives on SATA 2. not a good buy, right? good thing i also read that if i only plan to raid two drives, then the built-in mobo should do fine. i have a gigabyte z97m-d3h and the website says:

 

Chipset:

  1. 6 x SATA 6Gb/s connectors
  2. Support for RAID 0, RAID 1, RAID 5, and RAID 10

according to your samples and post, it would be best to have two controllers, right? (to avoid one being a SPoF)

the site doesn't say anything about the controller on my board, so i assume there's only one (intel?), but raid 1 should work, right?

 

one more question: would two raid 1s work on this board? (though i am interested in trying actual controllers next time and following your guide here)

 

this can be a fun thing to do.lol

thanks

Posted by the original poster:


Ideally you would purchase two separate HBAs and install them in the PCI-E slots, then run the drives off of those. The onboard chipset is usually second choice to an add-in HBA in enterprise environments for various reasons.

 

If you really want to make this work, I would recommend buying a 4-port HBA (make sure it has no integrated RAID functionality at all). However, most inexpensive 4-port HBAs are relatively flaky under FreeNAS, and the FreeNAS community recommends buying an actual RAID controller and flashing it to HBA (IT) mode. That process is tricky (there is a link to alpenwasser's tutorial in my signature) and definitely not something I recommend if you're looking for something simple.

 

I think it's a lot of fun, but be prepared for lots of research, debugging, and headaches. I'm going through that right now with my system, but once you engineer a solution that works you feel great.

 

And yes, you will be able to run two separate RAID 1s on the integrated chipset.





 

ok great, thanks.. i will be trying out the onboard first.. dont worry, no enterprise here, just working on something for the fam. lol

most have the thinking "i need a backup for my backup's backup".. before we know it, lots of hard drives with the same d*mn files/movies..

 

i also read i should be looking for those LSI cards.. there's just none available here (perhaps i'd look elsewhere)

it's great that i can use my other pci-e slots, but that would leave the sata ports unused if i went with this..


Some of those diagrams look like a lot of controllers.... That can increase cost a lot if you're using nice gear. 

 

I have trouble understanding why one would choose that over a dual controller near line SAS setup if all you're trying to do is add fault tolerance by reducing single points of failure. 

Posted by the original poster:

Some of those diagrams look like a lot of controllers.... That can increase cost a lot if you're using nice gear.

Agreed, but nice controllers are overkill for what these setups are designed to do -- if you're buying a $500+ controller, it's probably not for use as an HBA; you want it for the hardware RAID capabilities, cache/BBU, etc. By using software RAID, you can throw in a bunch of $200-300 HBAs (no hardware RAID) and get the drive ports you need.

 

 

I have trouble understanding why one would choose that over a dual controller near line SAS setup if all you're trying to do is add fault tolerance by reducing single points of failure. 

NL-SAS is still pretty new, and the vast majority of users on this forum would say "what?" when asked about the setup you described. Moreover, there are tons of SATA drives out there, tons of cheap SATA HBAs (the IBM M1015 and LSI 9211-8i), and tons of instructional guides on flashing their firmware -- FreeNAS 9.3 ships with the firmware update tool for LSI HBAs compiled and included in the release.





 

NL-SAS drives aren't exactly new; they have been around for many years -- even longer if you look at why they are called Near Line SAS. Near Line is just a name given to hard disks that operate at slower RPM (typically 7200 RPM) and lower performance but still have the server-grade disk controller and connector. Low-RPM server disks have been around since parallel SCSI and fibre-attached SCSI/ATA. Near Line disks exist for very specific reasons; no user here would need them, or hit issues at home that require them.

 

One key point relating to your SPoF topic: NL-SAS disks support dual paths to the disk on a SAS HBA or RAID card, but only a single path on a SATA HBA or RAID card. SATA disks support only a single path in all cases.

 

I agree with your sentiment about most users on this forum not understanding what Near Line, SAS, SCSI, etc. mean, so assuming they don't and catering for it like you have is the best way.

 

You also didn't mention RAID configurations such as RAID 0+1, 51 and 61, which would reduce the number of controllers required for no SPoF. RAID 51 and 61 clearly have a higher cost per GB, but controllers aren't free, and depending on the computer there's a limit to how many you can fit.

 

Like all things, however, just because you can do something doesn't mean you should. Over-engineering solutions in IT is actually a rather big and annoying problem. Always remember that the room the computer sits in is a SPoF. Fires, earthquakes, etc. happen, so spending less, accepting a SPoF, and putting the unspent money into backups is still safer. Good hardware lasts for many years without failures. Operator error is far more common than hardware failure, and a no-SPoF hardware design can be undone by a simple mistake in a configuration change.

 

Spending a little more money on a better motherboard (workstation or server grade), a cheap Xeon and ECC RAM, along with a reliable LSI RAID card and NAS/server-rated disks, will always beat a cheap software RAID/ZFS build in terms of mean time between failures. The same applies to software RAID/ZFS with HBAs. Storage is cheap; data is expensive, and data loss is very expensive.



 

Also, an IBM M1015 or LSI 9211-8i in IT mode would be considered a SAS HBA, not SATA. Just nitpicking :P

Posted by the original poster:
You also didn't mention possible RAID configurations such as RAID 0 + 1, 51 and 61 which would reduce the number of controllers required for no SPoF. RAID 51 and 61 configurations have a higher cost per GB clearly but controllers aren't free and depending on the computer limit the number you can have.

 

To my knowledge, the two most popular forms of software RAID on this forum (FreeNAS/ZFS and FlexRAID) don't support mirrored configurations that are, underneath, parity RAID configurations.

 

Like all things however, because you can do something doesn't mean you should. Over engineering solutions in IT is actually a rather big and annoying problem. Always remember the room the computer is in is a SPoF. Fires, earthquakes etc happen so spending less and having a SPoF but using the unspent money on backups is still safer. Good hardware lasts for many years without failures. Operator error is far more common than hardware failures, a no SPoF hardware design can be undone by a simple mistake in a configuration change.

 

All true. Having a robust hardware setup only solves some of the problems.

 

If you really want robustness, AWS is the way to go. Unfortunately, a badass NAS based on cloud computing is impractical :)




It is not the physical controller which handles the RAID array. The controllers are merely providing the SATA ports for drives which the controlling software (FreeNAS, ZFS on Linux, FlexRAID, etc) uses to build its RAID volumes.

 

The idea is that a RAID volume, controlled by this software, should be spread out over multiple physical 'controllers' (which are just providing SATA ports) such that, if one fails, the loss will not destroy any RAID volumes.
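That rule is easy to check mechanically for a pool of mirrors. A sketch (the device and controller names are made up for illustration): a layout avoids a controller SPoF when the two halves of every mirror live on different controllers.

```python
# A pool of mirrors survives a single controller failure as long as
# the two halves of every mirror sit on different controllers.
def no_controller_spof(mirror_pairs, controller_of):
    return all(controller_of[a] != controller_of[b] for a, b in mirror_pairs)

controller_of = {"da0": "LSI-1", "da1": "LSI-2", "da2": "LSI-1", "da3": "LSI-2"}

# Each mirror spans both controllers: either card can die.
print(no_controller_spof([("da0", "da1"), ("da2", "da3")], controller_of))  # True

# Each mirror lives on one controller: both cards are SPoFs.
print(no_controller_spof([("da0", "da2"), ("da1", "da3")], controller_of))  # False
```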

But what happens if the whole system, other than the hard drives, fails?

Can you reinstall everything and get the software RAID working again? Would you have to buy the same controllers, or can you just use anything with enough SATA ports?

 

And how do you see that a hard drive has failed if you're not using software like FreeNAS (just software RAID in Windows)? Will it give an error somewhere, or is third-party software required to check the drives?

Posted by the original poster:


I don't know about situations other than FreeNAS. I run a ZFS mirror (RAID 1) personally, and I've reinstalled the OS and moved the drives from the onboard SATA controller to an LSI controller. I told FreeNAS to import volumes when I reinstalled, and it looked through all the hard disks it could see to find old volumes. It worked for me twice: once when moving from the onboard controller to the LSI controller, and once when moving from an SSD install of FreeNAS to a virtual machine instance of FreeNAS.

 

I can't speak for larger configurations, like a bunch of RAID Z2 in a striped configuration, but I imagine it'd work just the same.




To my knowledge, the two most popular forms of software RAID on this forum (FreeNAS/ZFS and FlexRAID) don't support mirrored configurations that, underneath, are parity RAID configurations.

 

Not true; ZFS can stripe/mirror any vdevs you've created, given that they are the same size.

 

Personally, I often do RAID Z2s with 6 drives each and then stripe them together, although in some situations I've used RAID Z3s when the number of drives is 60-120.

Posted by the original poster:


FreeNAS specifically can do a mirror where the underlying vdevs are RAID Z1/2/3? That's cool.




A well-thought-out presentation. It shows just how fragile storage systems are, and how likely you are to lose more data than you can imagine unless you do a lot of management of the drives and other components and spend a lot of money.

 

I consider the very basic notions of NFS to be 'noble', but impractical for 'normal' users. The method it uses is guaranteed to wear out a hard drive far faster than its normal lifetime. Perhaps we can come back to NFS once we have storage systems that can handle this design.

 

I would prefer hearing of a better design, NOT intended to kill hard drives.

 

Best regards, AraiBob


FreeNAS specifically can do a mirror where the underlying vdevs are a RAID Z1/2/3? That's cool.

Yep, it's basic ZFS functionality
Posted by the original poster:


I've never seen one before; do you know if it's a common practice? I would think that doing a RAID Z2 of mirrors would be better -- you'd get faster resilver times because you're rebuilding mirrors instead of a whole RAID Z2 in the event of a drive failure.
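A simplified way to see the resilver argument (ignoring fill level and I/O load, which matter a lot in practice): count how many surviving drives must be read to rebuild one failed drive.

```python
# Rebuilding a failed drive in a two-way mirror reads one surviving
# partner; rebuilding inside a RAID Z2 vdev reads every other member.
def drives_read_during_resilver(vdev_type, width):
    if vdev_type == "mirror":
        return 1
    if vdev_type == "raidz2":
        return width - 1
    raise ValueError("unknown vdev type: " + vdev_type)

print(drives_read_during_resilver("mirror", 2))  # 1
print(drives_read_during_resilver("raidz2", 6))  # 5
```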



Posted by the original poster:

A well thought out presentation. And it shows just how fragile storage systems are. And how likely you are to lose more data than you can imagine. Unless, you do a lot of management of the drives and other components and spend a lot of money.

Thanks.

 

Unless there's a huge need for on-site redundant storage, I would always recommend cloud storage. At those scales, the software that sets up storage partitions is written not to trust the hardware, so you get better reliability without the hassle of managing hardware.

 

I would also take an office full of low-power desktops over a thin-client setup, unless the scaling needs were truly enormous. No matter how durable that storage setup is, it's an unnecessary lynchpin.



