
Looking for good SAS RAID controller for use in ESXi 6.7

benny_r_t_2

Hi,

 

I'm building a vSphere ESXi 6.7 server for my homelab. I'm new to ESXi and new to setting up RAID, and I'm looking to purchase a few RAID controller cards.

 

Questions I have:

- Can multiple SAS controllers be used to make a single array?

- Can some drives be excluded from RAID but still be connected to the same controller that has other drives in an array?

- Any recommendations on a reliable 8i or 16i SAS RAID controller for ESXi? The VMware Compatibility Guide lists a ton.

 

 

Backplane:

The chassis is a Norco 4224 and has 6 backplanes (direct-wire, not expander backplanes). Each of the 6 backplanes accepts an internal mini SAS SFF-8087 cable, 4 drives per backplane.

 

Motherboard:

We have a Supermicro board with an onboard Intel controller and three SAS connectors (good for 3 of the 6 backplanes).

 

ESXi doesn't like the Intel RST software RAID via the onboard SAS connectors. If I didn't want RAID, that would be fine, but if I want to go with RAID 10 I will need to purchase a SAS controller card.

 

To feed all 6 backplanes without using a SAS expander, I think I need three SAS 8i cards (two mini SAS connectors per card, so eight internal drives per card). Or would you recommend two 16i cards (that would give me two spare connectors after connecting all six backplanes)? Or perhaps a 16i + 8i.
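The connector math above can be sketched quickly (a back-of-the-envelope Python calc; it assumes each internal mini SAS SFF-8087 connector carries 4 lanes and each direct-wire backplane uses one lane per drive):

```python
import math

LANES_PER_SFF8087 = 4       # each internal mini SAS connector carries 4 lanes
BACKPLANES = 6
DRIVES_PER_BACKPLANE = 4    # direct-wire backplane: one lane per drive

total_drives = BACKPLANES * DRIVES_PER_BACKPLANE                 # 24 drives
connectors_needed = math.ceil(total_drives / LANES_PER_SFF8087)  # 6 connectors

# Connectors supplied by each candidate card mix
options = {"three 8i": 3 * 2, "two 16i": 2 * 4, "16i + 8i": 4 + 2}
for name, have in options.items():
    print(f"{name}: {have} connectors, {have - connectors_needed} spare")
```

So three 8i cards or a 16i + 8i land exactly on the six connectors needed, while two 16i cards leave two connectors spare.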

 

Thanks!

 

-Benny


1. Yes, but the requirements for it are pretty strict. If I remember right, for LSI you needed a SAS expander backplane (or to have your six direct-wire backplanes plugged into a SAS expander), the same model of RAID card, and SAS hard drives / SSDs.

 

2. If you buy a pure RAID card, there's sadly no way to get it to do passthrough. You would need an HBA card instead (passthrough, but usually only basic RAID 0/1/10; good for software RAID in the OS, such as FreeNAS).

 

3. The LSI 9361-8i (get the CacheVault module too) is a good SAS 12Gb/s RAID card. The more budget-friendly option is the previous-gen 9261 or another 92xx card above xx40 (you'll also want the battery / CacheVault module for these) for SAS 6Gb/s.

 

Personally, if this array isn't going to be used for some extreme bandwidth, you can get away with an 8i card and a 36-lane SAS expander (two ports in, six ports out to the backplanes), because three 8i RAID cards aren't exactly cheap (nor are two 16i cards).


On 1/4/2019 at 7:28 AM, benny_r_t_2 said:

To feed all 6 backplanes without using a SAS expander, I think I need three SAS 8i cards (two mini SAS connectors per card, so eight internal drives per card). Or would you recommend two 16i cards (that would give me two spare connectors after connecting all six backplanes)? Or perhaps a 16i + 8i.

Honestly, I would advise using an expander card, or keeping the RAID controllers separate and just having more than one datastore. I've never done a multi-card spanned array before, so I have no idea how well it works or how safe it is; I'm not even sure how common it is, and I've personally never seen anyone do it.

 

My choice would actually be a single LSI 9361-8i with CacheVault, dedicated to a primary datastore hosting the important stuff that needs to come up first. Then also get some cheap true HBAs, or RAID cards flashed into HBA mode, and pass those through to a storage VM, creating the larger single array there. From that storage VM you can present the storage back to the ESXi host using either NFS or iSCSI, and you can also use the storage VM for things not related to ESXi at all, like network shares.

 

The above is how I do it: I have some 10k RPM SAS disks in an array and place important things on it, like my storage VM, Domain Controller, vCenter, etc. The storage VM is set to auto-start, and the VMs that live on datastores provided by that VM are on a delayed auto-start, set long enough that the datastores are marked back online by the time ESXi goes to start those VMs.

 

The simplest but most expensive option would be three LSI 9361-8i (or three LSI 926x-8i) cards and three datastores.

 

On 1/4/2019 at 7:28 AM, benny_r_t_2 said:

Can some drives be excluded from RAID but still be connected to the same controller that has other drives in an array?

Depends what you want to do and on the RAID card. RAID cards support multiple arrays, so disks can be allocated however you like across different arrays/drive groups. Newer RAID cards also support putting disks into JBOD mode, but the usefulness of that is rather limited for ESXi because it's not easy to pass through a single disk to a VM; more typically you have to pass through the entire PCIe device, and then you can no longer use it for anything ESXi-host related, as it belongs to the VM exclusively.

 

It may actually be better to list out what you want to achieve with the server and your requirements, so we can advise based on those rather than on specific technical questions. For example: how much storage do you require under a single datastore? That would determine whether using multiple datastores fits your requirements or not.


Hi, thank you both for the informative replies (@leadeater and @scottyseng). This ESXi build is for learning at home, development, proof of concepts, etc., so I'm not under pressure to meet a project deadline.

 

I'll do my best to explain my current setup and what I'm hoping to achieve by building an ESXi server, and I hope I make sense. In my existing setup I have a few older boxes running RHEL and Oracle Linux. They run Oracle Apps (and all the services that entails) and their corresponding databases, plus a Subversion repository, Plex Media Server, etc., and none of it is virtualized.

 

A high-level overview of how I've set up one of the boxes (the main box housing most of the Oracle Apps environments): it has six Oracle base homes (six Oracle Apps environments, each with its own corresponding Oracle database), each base home comprising two tiers, the Oracle Application tier and its corresponding Oracle Database tier. Each of the six Oracle homes is roughly 500GB - 600GB (including both the apps tier and the database tier). With these running on bare metal on a single machine I run out of resources pretty fast and can generally run only one Oracle home environment at a time (I can get away with running two environments at once, but it's not ideal).

 

I want to build a decent ESXi host to virtualize each Oracle tier and have all of my Oracle environments, plus the Subversion repository/services, Plex Media Server, etc., running in VMs instead of on bare metal. And instead of having the pairing of application tier and its corresponding database tier in one guest, I'd like to separate the tiers into individual guests, and also experiment with creating multiple application nodes for the same application tier (a cluster), with each node in its own guest VM. I need a server with the resources to accommodate all my CPU/RAM-hungry Oracle tiers/nodes, many of which I want to run concurrently.
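Just to put the storage side of that in aggregate terms (a rough sketch from the per-environment numbers above; snapshot and growth overhead not included):

```python
ENVIRONMENTS = 6
PER_ENV_GB = (500, 600)   # each Oracle home: apps tier + database tier combined

low_tb = ENVIRONMENTS * PER_ENV_GB[0] / 1000
high_tb = ENVIRONMENTS * PER_ENV_GB[1] / 1000
print(f"Oracle homes alone: {low_tb:.1f}-{high_tb:.1f} TB of datastore capacity")
```

So the Oracle environments alone want roughly 3-3.6 TB before the Subversion, Plex, and VM overhead are even counted.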

 

Regarding running some guests as JBOD: I was thinking of Plex Media Server, where I don't necessarily want to stripe media across an array and have multiple drives spinning just to watch a single movie. It's not a critical requirement, but I was curious whether it's possible.

 

In the beginning I think I'll just run JBOD until I get my feet wet with ESXi; I'll keep it very simple at first. I currently have a wide mix of drives, spindle speeds, and capacities: SSDs mixed with SATA HDDs (none are SAS). I think it's smart (maybe even a requirement?) for a RAID 10 datastore to be made up of same/like drives, so I may hold off on the RAID 10 idea until I can switch over to more 883 DCT enterprise SSDs instead of the mix of odd drives I have now.
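On the same/like drives point: matched drives aren't strictly required, but RAID 10 mirrors pairs and then stripes across the pairs, so usable capacity is half the drive count and every member is effectively truncated to the smallest drive in the array. A quick illustration with hypothetical sizes:

```python
def raid10_usable_gb(drive_sizes_gb):
    """RAID 10 stripes across mirrored pairs; with mixed sizes each
    drive effectively contributes only the smallest drive's capacity."""
    n = len(drive_sizes_gb)
    if n < 4 or n % 2:
        raise ValueError("RAID 10 needs an even number of drives, at least 4")
    return min(drive_sizes_gb) * n // 2

# Matched drives: eight 1TB drives -> 4000 GB usable
print(raid10_usable_gb([1000] * 8))
# Mixed drives: one 500GB unit drags the whole array down to 1000 GB usable
print(raid10_usable_gb([1000, 1000, 1000, 500]))
```

That capacity penalty (plus mismatched spindle speeds dragging down performance) is why mixed-drive RAID 10 is usually avoided.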

 

And I liked the idea of setting up multiple datastores. Perhaps one datastore per Oracle environment (comprising its multiple app nodes and the database tier). I'll have to give that more thought, and I may have a better idea after I get into this more.

 

Thanks

 

* If you want to follow where I'm at now in this build, I'm on ServeTheHome: https://forums.servethehome.com/index.php?threads/lga3647-esxi-build-to-host-my-oracle-apps-databases.22870/

 

 

 


On 1/4/2019 at 9:16 PM, scottyseng said:

1. Yes, but the requirements for it are pretty strict. If I remember right, for LSI you needed a SAS expander backplane (or to have your six direct-wire backplanes plugged into a SAS expander), the same model of RAID card, and SAS hard drives / SSDs.

 

2. If you buy a pure RAID card, there's sadly no way to get it to do passthrough. You would need an HBA card instead (passthrough, but usually only basic RAID 0/1/10; good for software RAID in the OS, such as FreeNAS).

 

3. The LSI 9361-8i (get the CacheVault module too) is a good SAS 12Gb/s RAID card. The more budget-friendly option is the previous-gen 9261 or another 92xx card above xx40 (you'll also want the battery / CacheVault module for these) for SAS 6Gb/s.

 

Personally, if this array isn't going to be used for some extreme bandwidth, you can get away with an 8i card and a 36-lane SAS expander (two ports in, six ports out to the backplanes), because three 8i RAID cards aren't exactly cheap (nor are two 16i cards).

Hi @scottyseng, great info, thanks. Regarding the CacheVault and battery backup: do they make cards that come with those included, or is that an additional purchase?

 

I was looking at this RAID controller:   https://www.amazon.com/LSI-Logic-MegaRAID-Controller-LSI00416/dp/B00GTDTCTM

and then also this for the cache and battery:  https://www.newegg.com/Product/Product.aspx?Item=N82E16816118232

 

Can anyone point me to a good 36-lane SAS expander?

 

Regarding cost: if an 8i RAID controller + CacheVault/BBU + expander(s) approaches $1,000 USD total (new), would it make more sense to simply go with one 24i RAID controller such as this: https://www.amazon.com/LSI-Controller-05-25699-00-9305-24i-PCI-Express/dp/B01BDZWLV6 for about $700 and then buy a CacheVault + BBU for it? That brings the total to about the same, but with no worry about compatibility between controllers and expanders, etc.

 

*edit: oh, never mind about that 24i controller above; it's an HBA (no RAID). Here is a 24i RAID card for about $1,100 USD: https://www.cdw.com/product/broadcom-megaraid-sas-9361-24i-storage-controller-raid-sata-sas-12g/4675177   This 24i has CacheVault, but does not appear to have a lithium battery.

 

Thanks again

 

Benny

 

 


They do sell them in kits with the CacheVault, but I can't remember the model numbers. Intel makes a good SAS expander, but a SAS 12Gb/s one is not going to be cheap: https://www.amazon.com/Intel-Storage-Controller-Upgrade-RES3FV288/dp/B00NBL30R0/ref=sr_1_1?ie=UTF8&qid=1547135030&sr=8-1&keywords=intel+sas+expander

I can't seem to find the 36-lane 12Gb/s version (not at my PC at the moment).

 

Oh, just a general FYI: be sure to keep track of SAS 12Gb/s versus SAS 6Gb/s connectors. They are different, and you may need to adapt as needed.

 

Yeah, I don't know too much about Oracle or the high-performance virtualization side of ESXi; leadeater is better suited to answer that. I'm learning ESXi myself for a home storage server.


Thanks scotty.

 

My Oracle VMs do not require a lot of super fast disk I/O. I'd like RAID 10 more for the redundancy/mirroring, with the performance from striping as a bonus. Even then, I don't really require mirroring, since this isn't a production environment that needs immediate failover. Because I don't want to blow a ton of money on RAID hardware and new drives for the array(s) before I understand it all fully, I'm going to simply use JBOD initially and learn more about RAID as I go along and study up on it. For now I plan to use the three motherboard SAS connectors to feed my 10-12 drives: no array, just a bunch of drives. The onboard controller goes through the PCH chipset though, so it's not as ideal as using an HBA or SAS controller in a PCIe slot.

 

I did end up finding some HP SAS expanders with 9 SAS connectors (more than enough for my 6 backplanes).

 

The 3Gb/s versions are very inexpensive at under $20 USD on eBay. *I probably don't want 3Gb/s expanders, but even so, one would probably be sufficient for my Oracle VMs, which don't require super fast disk I/O. However, when I have multiple VMs running concurrently, the bandwidth could perhaps get choked with only a 3Gb/s expander.

https://www.ebay.com/bhp/sas-expander-card

 

The 12Gb/s versions are between $300 (used) and $550 (new) and would be more ideal:

https://www.ebay.com/sch/i.html?_from=R40&_trksid=m570.l1313&_nkw=sas+expander+12GB&_sacat=56091
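The choke concern can be put in rough numbers. A sketch, assuming the expander hangs off a single x4 uplink and that 8b/10b encoding on 3Gb/s and 6Gb/s SAS leaves one payload byte per 10 line bits:

```python
def per_drive_share_mb_s(uplink_lanes, gbit_per_lane, active_drives):
    """Rough usable MB/s each drive gets when all drives stream at once
    through one expander uplink (8b/10b: 10 line bits per payload byte)."""
    uplink_mb_s = uplink_lanes * gbit_per_lane * 1000 / 10
    return uplink_mb_s / active_drives

# 20 HDDs behind one x4 uplink, all streaming simultaneously
print(per_drive_share_mb_s(4, 3, 20))   # 3Gb/s lanes -> 60 MB/s per drive
print(per_drive_share_mb_s(4, 6, 20))   # 6Gb/s lanes -> 120 MB/s per drive
```

That worst case only happens when every drive streams sequentially at full tilt; typical mixed VM I/O is far below it, which is why the expander route is usually fine for this kind of workload.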


Another question just occurred to me.

 

If I had a SAS 8i controller, would I be able to plug both SAS connectors from the controller into the expander for greater bandwidth, compared to plugging just one connector from the controller into the expander?

 

 

for example,  two connectors from the controller...

 

[attached image: the controller's two mini SAS connectors]

...into two of the nine connectors on the expander...

 

[attached image: the expander's nine mini SAS connectors]

 

...for greater bandwidth compared to connecting only one cable from the controller? If not, then would I be better off just buying a 4i controller?
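Generally yes: two x4 links from the same controller to the same expander can typically be negotiated into a single x8 wide port, though it depends on the controller/expander pairing, so treat these numbers as a sketch (8b/10b encoding assumed, one payload byte per 10 line bits):

```python
def port_payload_gb_s(lanes, gbit_per_lane):
    """Payload bandwidth of a SAS port: 8b/10b coding on 3/6Gb SAS
    delivers one byte per 10 line bits."""
    return lanes * gbit_per_lane / 10

print(port_payload_gb_s(4, 6))   # one connector (x4 narrow port): 2.4 GB/s
print(port_payload_gb_s(8, 6))   # both connectors (x8 wide port): 4.8 GB/s
```

So cabling both connectors to the expander roughly doubles the uplink, while a 4i controller would cap you at the single x4 figure.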

 

*edit: actually, thinking on it some more, I could use an 8i controller with one connector going from the controller to an expander to feed 20 drives, and the other connector on the 8i controller going directly to 4 SSDs (bypassing the expander). A fast datastore on the 4 SSDs, and the 20 HDDs fed from the expander.

 

Good thing I've not purchased any RAID hardware yet; I'm coming up with new questions all the time. I could go with an LSI 9261-8i (6Gb/s) and a cheap $20 HP 3Gb/s expander, using one mini SAS connector on the 9261 for a single 4-drive backplane, and the other 9261 connector to the expander for the remaining 5 backplanes (20 drives). It's mind-boggling. I think I'd better step away from RAID for now and work on simply getting the rest of my system up and through POST first.


On 1/11/2019 at 5:58 AM, benny_r_t_2 said:

The 3Gb/s versions are very inexpensive at under $20 USD on eBay. *I probably don't want 3Gb/s expanders.

Be careful with 3Gb/s SAS: the expander may only support drives up to 2TB. Stick to SAS 6Gb/s.

 

Just as a note, you don't actually need 12Gb/s. SAS is multi-lane, and the bandwidth per port, even going through an expander that splits the bandwidth, is still very high. You'd only have to worry about it for an all-SSD array, and then only if you care about getting more than 3GB/s out of the array through a single SAS 6Gb/s port.

 

On 1/11/2019 at 7:20 AM, benny_r_t_2 said:

*edit: actually, thinking on it some more, I could use an 8i controller with one connector going from the controller to an expander to feed 20 drives, and the other connector on the 8i controller going directly to 4 SSDs (bypassing the expander). A fast datastore on the 4 SSDs, and the 20 HDDs fed from the expander.

This would be the best method and would be how I'd do it.

