
Need a RAID card

rufee

Hey guys,

 

I recently acquired 16TB (8x 2TB) worth of drives and I need a decent RAID card.

My plan is to set up a RAID 50 or 60 (if needed); the array will be used as storage for a hypervisor in a small business.

I've been looking around and the LSI 9260-8i really catches my eye, though I would like to spend a bit less if possible; I would be happy with a "last gen" card if it offers similar features.

LSI's naming scheme and all the rebrands by Dell, HP, etc. really make everything confusing.

I like the IBM ServeRAID M1015, but I don't think you can attach a BBU to it; also, having pins for HDD activity LEDs would be a plus.

 

I don't want to go with FreeNAS, because I really don't like its stability and there would be no need for a second server.

 

Any suggestions?


First thing, I recommend against going with RAID 50 on such a large dataset. The rebuild time would be HORRIBLE. I strongly recommend using several smaller RAID 5 pools or RAID 10. That said, do as you like.

 

As for a good card, I've had success with LSI products:

 

This card supports up to 128 SATA drives, and comes with the breakout cables for connecting 8 SATA or SAS drives. 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118194

 

This is the BBU for the RAID card above (proof of compatibility can be found here: http://www.lsi.com/products/raid-controllers/pages/cache-protection.aspx#tab/tab2)

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118162&cm_re=LSIiBBU09-_-16-118-162-_-Product

 

...If you're going to spend the money for a RAID card, BUY THE BATTERY BACKUP UNIT! They are worth their weight in fine hardwood.

 

 

The 9260-8i is probably acceptable as well; it uses a different BBU, however. Not a big problem, you'll just need to find it yourself. Spending $500-800 on a good RAID card + BBU is totally worth it if you really need the performance/redundancy.



The 9260-8i is "last gen" at this point. You won't get much less expensive than that. Since you're using it for a hypervisor, I wouldn't recommend cheaping out on the RAID card -- performance will really stink without an on-board cache.

 

I'd recommend it, or the one that asquirrel recommended.



I thought that RAID 50 does exactly that, i.e. splits the drives into smaller RAID 5 pools: it divides them into two RAID 5 sets (each consisting of 4 disks in the OP's example), which are then striped together via RAID 0. And as long as you lose only one disk, only the affected RAID 5 should need to be rebuilt; the other RAID 5 and the overlaying RAID 0 should not be affected at all, and a 4x2TB RAID 5 isn't all that terrible to rebuild... or did I get something completely wrong?
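
To put rough numbers on that, here is a quick sketch in Python (purely illustrative, assuming the OP's 8x 2TB split into two 4-drive RAID 5 groups striped together):

# Rough model of an 8x 2TB RAID 50: two RAID 5 groups of 4 drives, striped together (RAID 0).
# Purely illustrative; a real controller handles all of this internally.
DRIVE_TB = 2
groups = [["d0", "d1", "d2", "d3"], ["d4", "d5", "d6", "d7"]]  # two RAID 5 sub-arrays

# Each RAID 5 group gives (n - 1) drives of usable space.
usable_tb = sum((len(g) - 1) * DRIVE_TB for g in groups)
print(f"Usable capacity: {usable_tb} TB")  # 12 TB out of 16 TB raw

# If one drive dies, only its own RAID 5 group has to be rebuilt.
failed = "d5"
affected = next(g for g in groups if failed in g)
print(f"Rebuild touches only {affected} ({len(affected)} drives), not all 8")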



Depends on the RAID card. LSIs are pretty good, so they should only rebuild the corresponding RAID 5.

It's still a lot of space to rebuild. RAID 10 is by far the best for rebuild times. @rufee, if you don't need more than 8TB, I would go with RAID 10.



Yes, I'd never go for RAID50 or 60 without a good RAID card anyway (also had LSI in mind). :)

 

Of course, all parity-based RAID arrays (5, 6, 50, or 60) will take a lot longer to rebuild than mirror/stripe arrays (1 or 10). What I personally don't like about RAID 10 is that you lose the space of half of your drives. OK, two out of four is still kinda acceptable, but four out of eight (and the OP already has 8 drives, if I understood that correctly)... that would be a little too much waste for me personally. With 8-10 drives I'd probably prefer going RAID 50/60 as well. (Especially if I really was correct above and "only" one of the sub-arrays would have to be rebuilt...)
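
Just to put numbers on the space trade-off for 8x 2TB (a back-of-the-envelope Python sketch, idealised and ignoring filesystem overhead):

# Usable capacity of 8x 2TB under the layouts being discussed.
drives, size_tb = 8, 2
raw = drives * size_tb

raid10 = raw / 2                 # two-way mirrors: half the raw space
raid50 = (drives - 2) * size_tb  # two RAID 5 groups, one drive of parity each
raid60 = (drives - 4) * size_tb  # two RAID 6 groups, two drives of parity each

for name, tb in [("RAID 10", raid10), ("RAID 50", raid50), ("RAID 60", raid60)]:
    print(f"{name}: {tb:.0f} TB usable of {raw} TB raw ({tb / raw:.0%})")
# RAID 10: 8 TB (50%), RAID 50: 12 TB (75%), RAID 60: 8 TB (50%)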



It's true, there is much less efficiency. However, your overall fault tolerance increases as you add drives to a RAID 10. In addition, rebuilds for very large RAID arrays become impractical with parity RAID.
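
One quick way to see the fault-tolerance point (a toy calculation, assuming two-way mirrors and a second failure hitting a random surviving drive):

# Chance that a second drive failure kills an n-drive RAID 10 (two-way mirrors):
# it only happens if the second failure hits the mirror partner of the already-failed drive.
for n in (4, 8, 12, 16):
    p_fatal = 1 / (n - 1)
    print(f"{n} drives: {p_fatal:.1%} chance the second failure is fatal")
# 4 drives: 33.3%, 8 drives: 14.3%, 12 drives: 9.1%, 16 drives: 6.7%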



Yes, that's why I'd only go for RAID 50 (still under the assumption made above) and certainly not RAID 5 for the whole array. :) Well, of course also because of the improved performance over RAID 5 (let alone RAID 6). :D And (still under the assumption from above) I probably wouldn't necessarily call a 4x2TB RAID 5 "very large", would you? If so, may I ask what size/configuration you'd still consider acceptable in a parity RAID setup?



No, that's a very reasonable number of drives and amount of storage.

 

Most of what follows comes from my experience working in enterprise storage. For a RAID 5, I wouldn't ever use more than seven drives (a pretty standard server configuration), or more than about 15TB in total (including the parity drive). You can theoretically use up to 32 drives in a RAID 5, but more drives means more of a chance that one of them fails, and more TB means a longer rebuild during which a second failure can take out the array. So I wouldn't use more than 7x 2TB, 5x 3TB, or 4x 4TB for a consumer NAS. If you were to purchase enterprise-grade drives, on the other hand, then it's not nearly as much of an issue.

For RAID 6, I would cap it at 11 drives or 30TB. If you want to go past the 30TB mark, I really recommend RAID 10, because rebuilds will take forever.

 

If you go with a ZFS-based RAID, then you can do RAID Z3 which I would cap at 15 drives, or 45TB. I wouldn't recommend this at all, though.

 

Personally, I would go RAID 6 if I needed more than 6 drives, RAID Z3 if I needed more than 9 drives. After 12 drives, I would just go RAID 10 if space efficiency wasn't an issue. If it was, I would probably go RAID 60 or striped RAID Z3.
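
Roughly, that rule of thumb could be sketched like this (the thresholds are just the ones from this post, not a universal recommendation):

def suggest_raid(drive_count: int, space_matters: bool = True) -> str:
    """Very rough sketch of the rule of thumb above."""
    if drive_count <= 6:
        return "RAID 5 or RAID 6"
    if drive_count <= 9:
        return "RAID 6"
    if drive_count <= 12:
        return "RAID Z3 (ZFS)"
    # past ~12 drives, parity rebuilds get painful
    return "RAID 60 / striped RAID Z3" if space_matters else "RAID 10"

print(suggest_raid(8))                         # RAID 6
print(suggest_raid(16))                        # RAID 60 / striped RAID Z3
print(suggest_raid(16, space_matters=False))   # RAID 10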


When determining how many disks to use in a RAIDZ, the following configurations provide optimal performance. Array sizes beyond 12 disks are not recommended.

• Start a RAIDZ1 at 3, 5, or 9 disks.

• Start a RAIDZ2 at 4, 6, or 10 disks.

• Start a RAIDZ3 at 5, 7, or 11 disks.

The recommended number of disks per group is between 3 and 9. If you have more disks, use multiple groups.
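
For what it's worth, those recommended widths are just "a power of two of data disks plus parity". A quick Python check (my own sketch, not an official FreeNAS/ZFS tool):

def good_raidz_width(disks: int, parity: int) -> bool:
    # True when the number of data disks (total minus parity) is a power of two.
    data = disks - parity
    return data >= 2 and (data & (data - 1)) == 0

for parity, label in [(1, "RAIDZ1"), (2, "RAIDZ2"), (3, "RAIDZ3")]:
    print(label, [n for n in range(3, 13) if good_raidz_width(n, parity)])
# RAIDZ1 [3, 5, 9], RAIDZ2 [4, 6, 10], RAIDZ3 [5, 7, 11]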

 

 

It's also recommended that if you are using ZFS, you have the SATA/SAS controller in HBA mode and not RAID mode.



I'd prefer more storage space.

I run a few RAID 5s at work; it takes overnight to rebuild on 4TB of storage, which is not so bad.

If the rebuild time for RAID 50 on 16TB is 3-4 days, then that is acceptable for me.
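
For a rough sanity check on those times (crude math in Python; the 100 MB/s rebuild rate is just an assumption, real rates depend heavily on the controller and load):

# Crude estimate: rebuild time ~= data that has to be regenerated / sustained rebuild rate.
def rebuild_hours(tb_to_rebuild: float, rate_mb_per_s: float = 100) -> float:
    return tb_to_rebuild * 1_000_000 / rate_mb_per_s / 3600

print(f"{rebuild_hours(2):.0f} h")   # one 2 TB drive in a RAID 5 group: ~6 h
print(f"{rebuild_hours(4):.0f} h")   # 4 TB: ~11 h, i.e. roughly an overnight job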



Thanks! That was a very detailed posting.

