wpirobotbuilder

Reducing Single Points of Failure (SPoF) in Redundant Storage


Posted · Original Poster (OP)

Reducing Single Points of Failure in Redundant Storage

 
In lots of the storage builds that exist here on the forum, the primary method of data protection is RAID, sometimes coupled with a backup solution (or the storage build is the backup solution). In the storage industry, there are loads of systems that use RAID to provide redundancy for customers' data. One key aspect of a good storage solution is resistance not only to drive failures (which happen a LOT), but also to failures of other components. The goal is to have no single point of failure.
 
First, let's ask:
 
What is a Single Point of Failure?
 
A single point of failure is exactly what it sounds like. Pick a component inside the storage system and imagine that it breaks or is removed. Do you lose any data as a result? If so, that component is a single point of failure.
 
By the way, from this point forward a single point of failure will be abbreviated as: SPoF
 
Let's pick on looney again, using his system given here.
 
Looney's build contains a FlexRAID software RAID array, which comprises drives on two separate hardware RAID cards running as Host Bus Adapters, plus a handful of iSCSI-targeted drives. We'll focus on the RAID arrays for now, since that seems to be where he would store data he wants to protect. His two arrays are on two separate hardware RAID cards, which provide redundancy against drive failures. As long as he replaces drives as they fail, his array is unlikely to go down.
 
Now let's be mean and remove his IBM card, effectively removing 8 of his 14 drives. Since he only has two drives' worth of parity, is his system still online? No; we have exceeded the RAID array's ability to recover from drive loss. If he only had this system, that would make his IBM card a SPoF, and likewise his RocketRAID card.
 
However, he has a cloud backup service, which is very reliable in terms of keeping data intact. In addition, being from the Kingdom of the Netherlands, he has fantastic 100/100 internet service, making the process of recovering from a total system loss much easier.
 
See why RAID doesn't constitute backup? It doesn't protect you from a catastrophic event.
 
In professional environments, lots of storage is done over specialized networks, where multiple systems can replicate data to keep it safe in the event of a single system loss. In addition, systems may have multiple storage controllers (not the same as RAID controllers), which allow a single system to keep operating in the event of a controller failure. These systems also run RAID to protect against drive loss.
 
In systems running the Z File System (ZFS), like FreeNAS or Ubuntu with ZFS installed, DIY users can eliminate SPoFs by using multiple storage controllers and planning their volumes to reduce the risk of data loss. Something similar can be done (I believe) with FlexRAID. This article provides examples of theoretical configurations, along with some practical real-life examples. It also outlines the high (sometimes unreasonably high) cost of eliminating SPoFs for certain configurations, and aims to identify more efficient and practical ones.
 
Please note: there is no hardware RAID control going on here; it is all software RAID. When 'controllers' are mentioned, I am referring to the Intel or 3rd-party SATA chipsets on a motherboard, an add-in SATA controller (Host Bus Adapter), or an add-in RAID card running without RAID configured. The controllers only provide the computer with more SATA ports; the software itself controls the RAID array.
 
First, let's start with hypothetical situations. We have a user with some drives who wants to eliminate SPoFs in his system. Since we can't remove the risk of a catastrophic failure (such as a CPU, motherboard or RAM failure), we'll ignore those for now. We can, however, reduce the risk of downtime due to a controller failure. This might be a 3rd-party chipset, a RAID card (not configured for RAID) or another HBA which connects drives to the system.
 
RAID 0 will not be considered, since there is no redundancy.
 
Note: For clarification, RAID 5 represents single-parity RAID, or RAID Z1 (ZFS). RAID 6 represents dual-parity RAID, or RAID Z2. RAID 7 represents triple-parity RAID, or RAID Z3.
 
Note: FlexRAID doesn't support nested RAID levels.
 
[spoiler=Our user has two drives.]
Given this, the only viable configuration is RAID 1. In a typical situation, we might hook both drives up to the same controller and call it a day. But now that controller is a SPoF!
 
To get around this, we'll use two controllers, and set up the configuration as shown:
 
[Diagram: RAID 1 with one drive on each of two controllers]
 
Now, if we remove a controller, there is still an active drive that keeps the data alive! This configuration has eliminated the controllers as a SPoF.
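In ZFS terms, this is just a two-way mirror whose members happen to sit on different controllers. A minimal sketch (device names are hypothetical, FreeBSD-style; ada0 on the chipset, da0 on the HBA):
[code]
# Hypothetical device names: ada0 = chipset SATA port, da0 = HBA port.
# The software doesn't care that the disks sit on different controllers;
# the mirror survives if either disk (or the controller behind it) vanishes.
zpool create tank mirror ada0 da0
[/code]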


[spoiler=Our user has three drives.]
With three drives, we can do either a 3-way RAID 1 mirror, or a RAID 5 configuration. Let's start with RAID 1:
 
Remembering that we want to have at least 2 controllers, we can set up the RAID 1 in one of two ways, shown below:
 
[Diagram: two ways to split a 3-way RAID 1 mirror across two controllers]
 
In this instance, we could lose any controller, and the array would still be alive. Now let's go to RAID 5:
 
In RAID 5, the loss of more than 1 drive will kill the array. Therefore, there must be at least 3 controllers to prevent any one of them from becoming a SPoF, as shown below:
 
[Diagram: RAID 5 with one drive on each of three controllers]
 
Notice that in this situation, we are using a lot of controllers given the number of drives we have. Note also that the more drives a RAID 5 contains, the more controllers we will need. We'll see this shortly.
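For the ZFS folks, here's a hedged sketch of both three-drive layouts (hypothetical names: ada0/ada1 on the first controller, ada4 on a second chipset, da0 on an HBA):
[code]
# 3-way mirror over two controllers (two disks on one, one on the other):
zpool create tank mirror ada0 ada1 da0

# OR: RAID Z1 (single parity) with one disk on each of three controllers:
zpool create tank raidz1 ada0 ada4 da0
[/code]
The two commands are alternatives; a pool named tank can only be created once.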

 

[spoiler=Our user has four drives.]
We'll stop using RAID 1 at this point, since adding ever more mirrors is very costly. This time, our options are RAID 5, RAID 6 and RAID 10. We'll start with RAID 5, for the last time.
 
Remembering the insight we developed last time, we'll need 4 controllers for 4 drives:
 
[Diagram: RAID 5 with one drive on each of four controllers]
 
This really starts to get expensive, unless you are already using 4 controllers in your system (we'll talk about this during the practical examples later on). Now on to RAID 6:
 
Since RAID 6 can sustain two drive losses, we can put two drives on each controller, so we need 2 controllers to meet our requirements:
 
[Diagram: RAID 6 with two drives on each of two controllers]
 
In this situation, the loss of a controller will down two drives, which the array can endure. Last is RAID 10:
 
Using RAID 10 with four drives gives us this minimum configuration:
 
[Diagram: RAID 10 with one drive from each mirror on each of two controllers]
 
Notice that for RAID 10, we can put one drive from each RAID 1 stripe on a single controller. As we'll see later on, this allows us to create massive RAID 10 arrays with a relatively small number of controllers. In addition, RAID 10 gives us the same usable space as RAID 6, but with weaker worst-case redundancy. Given four drives, the best choices look like RAID 6 and RAID 10, with the trade-off being redundancy (RAID 6 is better) versus speed (RAID 10 is better).
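As ZFS sketches (same hypothetical naming, two controllers):
[code]
# RAID Z2 (dual parity), two disks per controller; a controller failure
# costs two disks, which raidz2 tolerates:
zpool create tank raidz2 ada0 ada1 da0 da1

# OR: striped mirrors (RAID 10); each mirror vdev is split across the
# controllers, so a controller failure costs at most one disk per mirror:
zpool create tank mirror ada0 da0 mirror ada1 da1
[/code]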

 

[spoiler=Our user has five drives.]
For this case, we can't go with RAID 5, since it would require 5 controllers, and we can't do RAID 10 with an odd number of drives. However, we do have RAID 6 and RAID 7. We'll start with RAID 6:
 
Here we need at least 3 controllers, but one controller is underutilized:
 
[Diagram: five-drive RAID 6 across three controllers]
 
For RAID 7, we get 3 drives' worth of redundancy, so we can put 3 drives on each controller:
 
[Diagram: five-drive RAID 7 across two controllers]
 
In this case, we need two controllers, with one being underutilized.
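A hedged ZFS sketch of the RAID 7 case (hypothetical names; three disks on one controller, two on the other):
[code]
# RAID Z3 (triple parity) over two controllers; losing either controller
# kills at most three disks, which raidz3 tolerates:
zpool create tank raidz3 ada0 ada1 ada2 da0 da1
[/code]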


[spoiler=Our user has six drives.]
We can now start doing some more advanced nested RAID levels. In this case, we can create RAID 10, RAID 6, RAID 7, and RAID 50 (striped RAID 5).
 
RAID 10 follows the logical progression from the four drive configuration:
 
[Diagram: six-drive RAID 10 across two controllers]
 
RAID 6 becomes as efficient as possible, fully utilizing all controllers:
 
[Diagram: six-drive RAID 6 with two drives on each of three controllers]
 
RAID 7 also becomes as efficient as possible, fully utilizing both controllers:
 
[Diagram: six-drive RAID 7 with three drives on each of two controllers]
 
RAID 50 is possible by creating two RAID 5 volumes and striping them together as a RAID 0:
 
[Diagram: six-drive RAID 50, one drive from each stripe on each of three controllers]
 
Notice that we have reduced the number of controllers needed for a single-parity solution, since we can put one drive from each stripe onto a single controller. This progression will occur again later, when we start looking at RAID 60 and RAID 70.
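In ZFS, the RAID 50 analogue is simply two raidz1 vdevs in one pool; the pool stripes across them automatically. A hedged sketch (hypothetical names: ada0/ada1 on controller 1, ada4/ada5 on controller 2, da0/da1 on controller 3):
[code]
# Each controller holds exactly one disk from each raidz1 stripe, so a
# controller failure costs each stripe only one disk:
zpool create tank raidz1 ada0 ada4 da0 raidz1 ada1 ada5 da1
[/code]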



[spoiler=Our user has seven drives.]
With seven drives, we can only do a RAID 6 or a RAID 7, and both leave a controller underutilized.
 
[Diagram: seven-drive RAID 6 across four controllers]
 
[Diagram: seven-drive RAID 7 across three controllers]
 
From now on, we'll skip RAID 6 setups, since they require a large number of controllers.






Whatever solution you choose depends on how many drives and controllers you are prepared to pay for. A motherboard with multiple controllers can support the aforementioned configurations that require only two controllers, with the limitation being the number of ports each controller provides. Some possible examples are examined below:
 
Here is a hypothetical solution which contains two volumes, one RAID 7 and one RAID 6. Notice that using multiple volumes allows more drives to be used on the same number of controllers. As long as the volumes are properly planned, no volume will be at risk from a SPoF:
 
[Diagram: hypothetical layout with one RAID 7 volume and one RAID 6 volume spread across shared controllers]
 
Here is a practical solution, involving a motherboard with an Intel chipset (with 4 ports available), an ASMedia chipset (with 4 ports), and an LSI 9211-4i:
 
[Diagram: drive layout across the Intel chipset, ASMedia chipset, and LSI 9211-4i]
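In ZFS terms, that build might be laid out like this; a hedged sketch, with hypothetical port assignments and pool names:
[code]
# Hypothetical port map: Intel = ada0-ada3, ASMedia = ada4-ada7, LSI = da0-da3.

# RAID 7 volume (raidz3): no controller holds more than three of its disks.
zpool create vault raidz3 ada0 ada1 ada4 ada5 da0 da1

# RAID 10 volume: three mirrors, each split across two different controllers,
# so a single controller failure leaves every mirror with one live disk.
zpool create backups mirror ada2 ada6 mirror ada3 da2 mirror ada7 da3
[/code]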
 

 

Here we consider the three possibilities for a controller failure. We'll start by clarifying our setup, specifying which drives belong to each stripe in the RAID 10 (indicated by color). As long as there is at least one drive of each color, the RAID 10 will continue to operate.

 

[Diagram: full drive layout, with each RAID 10 stripe color-coded]

 

First, let's kill the LSI controller:

 

[Diagram: layout with the LSI controller's drives dead]

 

Since RAID 7 can sustain 3 drive failures, it lives; and because there is still one drive from each stripe (one of each color) in the RAID 10, the RAID 10 also lives.

 

Now let's kill the onboard ASMedia chipset:

 

[Diagram: layout with the ASMedia chipset's drives dead]

 

A similar situation, but with different dead drives. The same pattern appears when we kill off the Intel controller:

 

[Diagram: layout with the Intel controller's drives dead]

 

However, if the Intel controller died, you could lose your whole system if your OS was installed on a drive connected to it. In the case of a ZFS-based system, the OS would likely be on a flash drive, so it would remain alive. If you were running a ZFS Intent Log and/or an L2ARC on the Intel controller, your performance might tank, depending on how much RAM you have installed, but the system would live.

This system might store priceless memories (the RAID 7 volume) and nightly backups of every PC in the house (the RAID 10 volume). Notice that the failure of any one controller would not cause a loss of data.
 
Even though the ASMedia chipset is on the motherboard, this scenario presumes that the ASMedia chipset could fail without the whole board failing. If the whole motherboard failed, that would probably be considered a catastrophic failure, and the data would only survive if backups to another system were being made.
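On a ZFS system, a controller loss that the redundancy absorbs shows up as a DEGRADED pool rather than an offline one. Checking takes one command (vault is the hypothetical pool name from the sketch above):
[code]
zpool status -x      # lists only pools that are not healthy
zpool status vault   # full detail: which disks are UNAVAIL, which vdevs are DEGRADED
[/code]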

 

Note: If a controller dies, you may not lose data, but you should replace the controller ASAP to prevent other failures from destroying data! If a 3rd party SATA chipset fails, you will probably have to get an add-in SATA card or RAID card to replace it, unless you have another motherboard lying around.
 
Something important to note is that rebuilding an array takes longer as the amount of parity increases. For extremely large arrays (dozens of drives, many TB), RAID 10 becomes much more appealing in the event of a drive failure.
 
That's all I have on this, I'll probably edit it as things come up. If you see anything wrong with this doc please let me know.


[spoiler=Our user has ten drives.]
We can obviously do a RAID 10, but now we can also venture into the realm of RAID 70 (striped RAID 7), in addition to RAID 60.
 
Our RAID 60 is slightly underutilized:
 
[Diagram: ten-drive RAID 60 across three controllers]
 
Finally, we have our RAID 70:
 
[Diagram: ten-drive RAID 70 across two controllers]
 
Our RAID 70 is relatively inefficient (using 12 drives would be better, with six drives per controller). However, it allows for increased performance in addition to providing a huge amount of redundancy for this setup. This is definitely not recommended for anything other than mission-critical data, or perhaps priceless memories (wedding videos/photos, important docs, etc.).
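A hedged ZFS sketch of the ten-drive RAID 70 (hypothetical names; ada* on one controller, da* on the other, with no more than three disks of either stripe per controller):
[code]
# Two raidz3 vdevs striped in one pool. Controller 1 holds 3 disks of
# stripe A and 2 of stripe B; controller 2 holds the reverse.
zpool create tank raidz3 ada0 ada1 ada2 da0 da1 raidz3 ada3 ada4 da2 da3 da4
[/code]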

[spoiler=Our user has nine drives.]
In this case, we can do either a RAID 50 or RAID 7, both of which require 3 controllers:
 
For RAID 50, we have 3 stripes of RAID 5:
 
[Diagram: nine-drive RAID 50, one drive from each stripe on each of three controllers]
 
For RAID 7, we have 3 controllers which are all fully utilized:
 
[Diagram: nine-drive RAID 7 with three drives on each of three controllers]
 
Both configurations give us 6 drives' worth of space on 3 controllers, with the tradeoff, once again, being worst-case redundancy (RAID 7 is better) versus speed (RAID 50 is better).
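ZFS sketches of both nine-drive layouts (hypothetical names: ada0-ada2 on controller 1, ada4-ada6 on controller 2, da0-da2 on controller 3):
[code]
# RAID 50 analogue: three raidz1 stripes, one disk from each stripe per
# controller:
zpool create tank raidz1 ada0 ada4 da0 raidz1 ada1 ada5 da1 raidz1 ada2 ada6 da2

# OR: RAID Z3 with three disks per controller, fully utilizing all three:
zpool create tank raidz3 ada0 ada1 ada2 ada4 ada5 ada6 da0 da1 da2
[/code]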
 
For configurations with more drives, we aren't going to look at RAID 7, since we'll require more controllers.

[spoiler=Our user has eight drives.]
With eight drives, we can jump into the realm of RAID 60 (striped RAID 6 volumes). We also have the options of RAID 10, RAID 50, and RAID 7.
 
RAID 10 looks just like it did before:
 
[Diagram: eight-drive RAID 10 across two controllers]
 
RAID 50 looks similar, but now we need more controllers since each volume has more drives:
 
[Diagram: eight-drive RAID 50, one drive from each stripe on each of four controllers]
 
RAID 7 looks pretty similar too:
[Diagram: eight-drive RAID 7 across three controllers]
 
RAID 60 looks pretty efficient with this number of drives. Logically progressing from RAID 50, we can have two drives per stripe on one controller, for 4 drives per controller. This lets us use only two controllers:
 
[Diagram: eight-drive RAID 60, two drives from each stripe on each of two controllers]
 
As with four drives, we again have a tradeoff between worst-case redundancy and speed, this time between RAID 60 and RAID 10.
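The eight-drive RAID 60 as a hedged ZFS sketch (hypothetical names, two controllers):
[code]
# Two raidz2 vdevs striped; each controller holds two disks from each
# stripe, so a controller failure costs each raidz2 only two disks:
zpool create tank raidz2 ada0 ada1 da0 da1 raidz2 ada2 ada3 da2 da3
[/code]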


First xD

but seriously this is a great read, very informative

And how are there no replies after 2 days 0.o


Posted · Original Poster (OP)

First xD

but seriously this is a great read, very informative

And how are there no replies after 2 days 0.o

 

Not as interesting or exciting as discussions on CPUs or graphics cards.



I didn't read the entire guide line by line, more like 3/4 of it, but it is super informative and I enjoyed reading it.

 

very well done



Just use unRAID: no need to worry about any controller failure, it can survive one hard drive failure, and even multiple hard drive failures are relatively easy to recover from.

Posted · Original Poster (OP)

It says RAID 5. 

Just saying.

 


Fixed.



OK, I've held back a bit after first reading and am re-reading for some other thing (cough cough @Vitalius).

 

Which controller allows crossing RAID arrays?



OK, I've held back a bit after first reading and am re-reading for some other thing (cough cough @Vitalius).

 

Which controller allows crossing RAID arrays?

No idea if a controller allows it (unless you can connect a controller to a controller). But I assume you could have one side be hardware and the other be software. So use the controllers for the first level, then software for the second level (i.e. RAID 1 on the controllers, and RAID 0 in the software).

However, software RAID (specifically FreeNAS, and FlexRAID [I assume]) would easily allow it.

Just create two RAID 1 arrays out of two different sets of drives in a zpool, for example, and RAID 0 them together as a single vdev (if that's how it works; not sure if they'd be 3 separate vdevs or a single vdev).

Not sure how you would do it for RAID 5, 6, or 7 (probably more complicated), but with RAID 1 and 0, it should be relatively easy.

SCSI controllers might allow you to do that, but again, I am guessing (because it's enterprise-level stuff) that it has such a feature.

Also, thanks Idea. :)



No idea if a controller allows it (unless you can connect a controller to a controller). But I assume you could have one side be hardware and the other be software. So use the controllers for the first level, then software for the second level (i.e. RAID 1 on the controllers, and RAID 0 in the software).

However, software RAID (specifically FreeNAS, and FlexRAID [I assume]) would easily allow it.

Just create two RAID 1 arrays out of two different sets of drives in a zpool, for example, and RAID 0 them together as a single vdev (if that's how it works; not sure if they'd be 3 separate vdevs or a single vdev).

Not sure how you would do it for RAID 5, 6, or 7 (probably more complicated), but with RAID 1 and 0, it should be relatively easy.

SCSI controllers might allow you to do that, but again, I am guessing (because it's enterprise-level stuff) that it has such a feature.

Also, thanks Idea. :)

 

Yes, software RAID can, but with all this talk of SPoF you throw in the "Intel chipset (with 4 ports available), an ASMedia chipset (with 4 ports), and an LSI 9211-4i" as a viable set of controllers to mix <insert mother of ghod meme> :D . That sure is reducing the SPoF, but it's adding a cluster @#$@ of controllers that will most likely belly up any RAID array. Good in theory, but in reality I'd run away from that system faster than an overclocked liquid-nitrogen-cooled CPU.

 

Best to keep any discussion of SPoF to trusty, tested, and known controllers and drives, if not just Controller X. The LSI I'd keep in the mix, but maybe two of them, and then maybe look for others depending on budget; for FreeNAS or FlexRAID (basically ZFS), an HBA is best.

 

There might be a RAID controller out there that can do the spanning, but it probably has to have identical sibling controllers in the span, and for a nice price.

 

The theory is good (@wpirobotbuilder), but I think most on LTT need to see it converted into more examples, as most are just getting a feel for RAID on its own. One good add-on would be an example build, so they can see it laid out in terms of actual components. Just a suggestion, not an arm twist.



-snip-

Ah, well for my personal project, I would use two (identical) RAID controllers and never use the onboard SATA ports if possible (except maybe for the boot drive). 

I'd imagine using that many different controllers wouldn't matter until the actual RAID broke. Then, if the failure was one of the controllers, like a motherboard one, you'd be ... close to SOL. It might work; I don't have enough experience with RAID to say, but I would assume it would. And admittedly, he did say the motherboard failing would probably mean catastrophic failure, and that a prior backup would be needed.

wpirobotbuilder, I agree with IdeaStormer. I don't understand how you would set up the actual RAIDs themselves, such as in the last picture with the controller examples. I'm assuming all examples are software RAID (except where hardware RAID is actually possible), because unless you can somehow span RAID arrays across controllers like that via hardware, there is no other option. And software RAID could do that fairly easily.

I could be wrong, though. There are still connectors on the motherboard, and certain cards that I don't recognize, that could somehow synchronize controllers. That would be sweet but weird, and I question how much I would trust something like that with RAID and recovery.


Posted · Original Poster (OP)

OK, I've held back a bit after first reading and am re-reading for some other thing (cough cough @Vitalius).

 

Which controller allows crossing RAID arrays?

It is not the physical controller which handles the RAID array. The controllers merely provide the SATA ports for the drives, which the controlling software (FreeNAS, ZFS on Linux, FlexRAID, etc.) uses to build its RAID volumes.

 

The idea is that a RAID volume, controlled by this software, should be spread out over multiple physical 'controllers' (which are just providing SATA ports) such that, if one fails, the loss will not destroy any RAID volumes.


Posted · Original Poster (OP)

Updated to clarify that all RAID here is software RAID, and explain that controllers only provide SATA ports.

 

Changed some punctuation and wording.

 

Added tags to the post.

 

Also consolidated individual sections into spoilers. Saves a lot of TL;DR.


Posted · Original Poster (OP)

Yes, software RAID can, but with all this talk of SPoF you throw in the "Intel chipset (with 4 ports available), an ASMedia chipset (with 4 ports), and an LSI 9211-4i" as a viable set of controllers to mix <insert mother of ghod meme> :D . That sure is reducing the SPoF, but it's adding a cluster @#$@ of controllers that will most likely belly up any RAID array.

Not quite, since the loss of a single controller will not damage data in any way. Also, there are some very efficient configurations like RAID 10 which only ever require two controllers.

 

With that said, you WILL have to replace a dead controller ASAP to protect against further data loss.



Not quite, since the loss of a single controller will not damage data in any way. Also, there are some very efficient configurations like RAID 10 which only ever require two controllers.

 

With that said, you WILL have to replace a dead controller ASAP to protect against further data loss.

So when you say it won't damage data in any way, does that mean the system can run as if nothing happened (albeit with some performance loss) while some drives are down due to either drive failure or controller loss, so long as it isn't the scenario that causes data loss? 

i.e. I have a RAID 10 array. One drive in each RAID 1 dies. Will it act like nothing happened and keep running? I'd assume so. I ask this question both for data corruption, and uninterrupted access related reasons.


Posted · Original Poster (OP)

So when you say it won't damage data in any way, does that mean the system can run as if nothing happened (albeit with some performance loss) while some drives are down due to either drive failure or controller loss, so long as it isn't the scenario that causes data loss? 

i.e. I have a RAID 10 array. One drive in each RAID 1 dies. Will it act like nothing happened and keep running? I'd assume so. I ask this question both for data corruption, and uninterrupted access related reasons.

Unless I'm missing something big, that is the case. However you'd better replace the drives quickly.



Unless I'm missing something big, that is the case. However you'd better replace the drives quickly.

Awesome, and understood. I just wanted to know what my options would be if a server like that did have drives that failed and we were in the middle of something fairly important.



Not quite, since the loss of a single controller will not damage data in any way. Also, there are some very efficient configurations like RAID 10 which only ever require two controllers.

 

With that said, you WILL have to replace a dead controller ASAP to protect against further data loss.

 

I think you misunderstood my post; I'm not saying the theory of multiple controllers will ruin the RAID, but that those specific different controller brands used together might.


Posted · Original Poster (OP)

I think you misunderstood my post; I'm not saying the theory of multiple controllers will ruin the RAID, but that those specific different controller brands used together might.

I don't think the controllers matter that much, since they just provide SATA ports and don't communicate with each other. I could be wrong, though.

 

If one bought a really crappy controller, that might break the RAID array quickly, but I don't think a specific combination of controllers spells D-O-O-M for the array.



I don't think the controllers matter that much, since they just provide SATA ports and don't communicate with each other. I could be wrong, though.

 

If one bought a really crappy controller, that might break the RAID array quickly, but I don't think a specific combination of controllers spells D-O-O-M for the array.

 

So have you used that combination of controllers? Just curious on your results with those if so.


Posted · Original Poster (OP)

So have you used that combination of controllers? Just curious on your results with those if so.

I haven't, but I know that this build uses the Intel chipset, an LSI card, and (I believe) a 3rd party onboard controller.

 

It's got some performance numbers as well.


Posted · Original Poster (OP)

Added some supplementary pictures for the example configuration towards the end. They outline exactly which drives would die for every controller that might die, and show what would be left to keep the volumes running.



Very informative and helpful.

 

But if I may ask.

Let's say my MB has 6 SATA ports and I want to add more, but I don't want to have a SPoF (single point of failure), and I want to expand to 7 drives or more.

So I should get a RAID controller (a RAID card like an LSI), but not set up RAID on it, and instead use only software RAID?

 

*sorry for the noob question my english is not good.


Posted · Original Poster (OP)

Very informative and helpful.

 

But if I may ask.

Let's say my MB has 6 SATA ports and I want to add more, but I don't want to have a SPoF (single point of failure), and I want to expand to 7 drives or more.

So I should get a RAID controller (a RAID card like an LSI), but not set up RAID on it, and instead use only software RAID?

 

*sorry for the noob question my english is not good.

That could work. Removing the controllers as a SPoF is dependent on which drives (i.e. which SATA ports) you are using. For example, if all the drives you used for a RAID 6 array were attached to the LSI controller, that controller would be a SPoF. If you plan your RAID volume correctly, then you will avoid SPoFs.

 

If you want to see what can be done with two controllers (like you have), then go back and look at the configurations involving exactly two controllers. Those RAID configurations will be your possible options. In terms of ease of expansion, I highly recommend RAID 10. While you only get 50% of the total drive space, you only need two controllers and can throw in another RAID 1 to expand very quickly. You could expand up to twelve drives that way, getting six drives' worth of usable capacity.
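For the ZFS case, growing a pool of striped mirrors is a single command; a hedged sketch, assuming an existing pool named tank and one free port on each controller (hypothetical device names):
[code]
# Add a third mirror vdev, one new disk per controller, preserving the
# no-SPoF layout:
zpool add tank mirror ada2 da2
[/code]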

 

If you anticipate needing more than 18TB of storage, then you will likely need more than one LSI controller to keep avoiding SPoFs, regardless of the RAID configuration you choose.

 

P.S. RAID controllers need to be flashed from IR to IT mode to be fully compatible with ZFS/FreeNAS. See alpenwasser's tutorial on how to do that for the LSI 9211-8i here. BTW, that is the most recommended controller to use, along with the 9211-4i.

 

If you need more help, don't hesitate to ask.


