LSI 9266-8i vs LSI 9271-8i

shms

Hey guys, I'm about to move from software RAID (mdadm) to hardware RAID, and these two cards seemed like good candidates. They're awfully similar on paper; the only difference I can see is that the 9266-8i runs on PCIe 2.0 while the 9271-8i runs on PCIe 3.0. Is that really the only difference between them? The price gap is quite steep, so is the more expensive one worth the extra dollars?

And lastly, are these cards a good option for a hardware RAID array serving storage to a Linux guest on an ESXi host, or are there other viable options out there for my needs?

 

Also, does anyone have experience with LSI's CacheVault technology, which is supposedly better than a regular BBU?


Best regards and thanks in advance!


Both of those LSI cards are listed as supported under ESXi 4.0 and higher, so that should be fine. How many drives do you plan to attach to the card now and in the future? Do you use a SAS expander? The 9266-8i should be fine in an x8 PCIe 2.0 slot: that's 4 GB/s in each direction, and you'd be hard-pressed to saturate it even if you loaded the card with 8 SSDs.
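Here's the back-of-the-envelope math behind that claim (the per-lane and per-drive figures are rough assumptions on my part, not measurements):

```python
# Rough math: can 8 SATA SSDs saturate a PCIe 2.0 x8 link?
# Assumed figures: ~500 MB/s usable per PCIe 2.0 lane (5 GT/s with
# 8b/10b encoding) and ~550 MB/s best-case sequential per SATA SSD.

PCIE2_LANE_MBPS = 500   # usable throughput per PCIe 2.0 lane
LANES = 8               # x8 slot
SSD_SEQ_MBPS = 550      # optimistic sequential rate for one SATA SSD
HDD_SEQ_MBPS = 180      # typical 7200 RPM spinner, for comparison

link = PCIE2_LANE_MBPS * LANES   # 4000 MB/s per direction
ssds = SSD_SEQ_MBPS * 8          # 4400 MB/s theoretical aggregate
hdds = HDD_SEQ_MBPS * 8          # 1440 MB/s aggregate

print(f"PCIe 2.0 x8 link : {link} MB/s per direction")
print(f"8 x SATA SSD     : {ssds} MB/s (sequential best case)")
print(f"8 x 7200 RPM HDD : {hdds} MB/s")
```

So only an all-SSD array running flat-out sequential transfers even gets near the link, and 8 spinners don't come close to half of it.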

 

Yes, LSI makes nice cards that play well under Linux, but since you're running ESXi what you specifically need is VMware compatibility (which these cards have). Your guest being a Linux OS really doesn't matter unless you're trying to do some kind of pass-through with VMware DirectPath. You can use VMware's compatibility guide to verify for yourself. You could also consider Dell's PERC controllers, but they are typically rebranded LSI adapters. You might also want to look at the LSI 9361-8i and see whether it's in the same price range. CacheVault mostly removes the need to replace the battery every couple of years, and it might be worth it depending on your budget and whether you really need that level of protection.




Both will be just fine for hardware RAID. Is there a reason you're moving from software to hardware RAID?

 

If your board supports it, you can pass an HBA directly through to the OS and continue to run software RAID even on a virtualized guest. I'm doing that right now with FreeNAS (passing an ASUS PIKE 2008 to the guest), and it works like a charm. You could purchase an LSI 9207-8i, which is supported by VMware and, according to LSI's product page, is a plain HBA with no RAID functionality, so it would be a good fit for software RAID.
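If you do stick with mdadm on a passed-through HBA, creating the array in the guest looks the same as on bare metal. A minimal sketch (the device names /dev/sdb through /dev/sdi are placeholders; check lsblk for whatever your HBA actually exposes, and note this destroys existing data on those disks):

```python
#!/usr/bin/env python3
"""Create an mdadm RAID 6 array across 8 disks on a passed-through HBA.
Run as root inside the guest. Device names are hypothetical examples."""
import subprocess

ARRAY = "/dev/md0"
DISKS = [f"/dev/sd{c}" for c in "bcdefghi"]  # 8 assumed member disks

# RAID 6 survives two simultaneous drive failures, a reasonable
# choice for an 8-drive array of large disks.
subprocess.run(
    ["mdadm", "--create", ARRAY,
     "--level=6", f"--raid-devices={len(DISKS)}", *DISKS],
    check=True,
)

# /proc/mdstat shows the initial resync progress after creation.
print(open("/proc/mdstat").read())
```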




Right now I would attach about 8 drives to the card without an expander, and I don't think I'll need to add any more drives for a while (years). It needs to work under Linux since I'm going to pass it through via VMware DirectPath; my current motherboard/CPU supports VT-d and VT-x.
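For what it's worth, once DirectPath is enabled I plan to confirm the guest actually sees the card by looking for LSI's PCI vendor ID (0x1000) in sysfs. A minimal sketch, assuming a Linux guest:

```python
#!/usr/bin/env python3
"""List PCI devices in the guest that carry the LSI Logic / Broadcom
vendor ID (0x1000), to confirm the passed-through controller shows up."""
from pathlib import Path

LSI_VENDOR_ID = "0x1000"

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    vendor = (dev / "vendor").read_text().strip()
    if vendor == LSI_VENDOR_ID:
        device_id = (dev / "device").read_text().strip()
        print(f"{dev.name}: LSI controller (device id {device_id})")
```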



Right now I have an IBM M1015 passed through and am running an mdadm software RAID. My only reason for moving to hardware RAID is that I want to try something new, and this is the perfect time since I'm going to buy a whole new set of drives.

