ESXi server; looking for PCIe hardware RAID and advice

Ericarthurc
Solved by Jarsky

Hey, so I am rebuilding our ESXi server with some upgraded hardware. I'm going with an AMD EPYC 7301; our current server is a Ryzen 1600X with non-ECC memory. I did that build for $1000 as a temporary solution while budgets readjusted, and it's been going strong for 1.5 years! But as demand grows, I think it's wise to do it right with server-grade hardware.

 

So I have a couple of questions regarding hardware RAID controllers. I know ESXi doesn't support software RAID, which is totally fine! I just don't have any experience with hardware RAID controllers and was wondering what the best option for me would be.

I run about 8 different VMs on our ESXi server, and maybe more in the future.

 

Two questions:

1) What are some good PCIe SATA RAID cards? (I don't really have a price limit.)

2) Do you recommend that each VM have its own set of RAIDed drives? Or would it be fine if, for example, I had four 1TB SSDs in RAID 5 and all the VMs shared the storage? I know each one having its own would be safer, because if the RAID drops, there go all the VMs; that's just expensive and probably not possible with PCIe lane limitations on the motherboard. Is there a more standard solution for this? Like using RAID 1 or RAID 10? Or maybe 4 VMs on one RAID and 4 on another?

 

Thanks in advance!


3 hours ago, Phoinix said:

 

1) What are some good PCIe SATA RAID cards? (I don't really have a price limit.)

 

Stick with the LSI chipset controllers; they're the industry norm and are supported by ESXi.

You'll probably want to do RAID6 for your datastores. If you want maximum speed, go with their newer 9300 series with the SAS3008 chipset, which supports 12Gbps SAS3.

In particular, you'll want the 9361-8i or 9361-16i for internal drives. If you're going to use a hot-swap case, keep in mind that you'll either need an expensive case with SFF-8643 connectors, or you can use SFF-8643 to SFF-8087 cables to connect to older SFF-8087 standard backplanes, which will still max out SATA drives anyway.

 

3 hours ago, Phoinix said:

2) Do you recommend that each VM have its own set of RAIDed drives? Or would it be fine if, for example, I had four 1TB SSDs in RAID 5 and all the VMs shared the storage? I know each one having its own would be safer, because if the RAID drops, there go all the VMs; that's just expensive and probably not possible with PCIe lane limitations on the motherboard. Is there a more standard solution for this? Like using RAID 1 or RAID 10? Or maybe 4 VMs on one RAID and 4 on another?

 

Thanks in advance!

 

It's generally better to have a more robust RAID, e.g. a single RAID6 or RAID10, rather than multiple RAID5s, as it adds more resiliency.

Often, though, if you have the disk space, it can be safer to have two RAID6s with smaller ESXi datastores than a single large RAID6.

 

As for RAID levels, RAID6 is slower than RAID10 due to the extra parity calculations, but it's far safer, as it can survive ANY two disk failures.

With RAID10, if the wrong disks fail (both disks of the same mirror pair), it can be catastrophic.
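
To make that capacity/safety trade-off concrete, here's a rough Python sketch (purely illustrative; it assumes eight equal-size 1TB disks, which may not match your setup):

# Rough comparison of usable capacity and fault tolerance for equal-size disks.
def raid6(disks, size_tb):
    # Two disks' worth of parity; the array survives ANY two disk failures.
    return {"usable_tb": (disks - 2) * size_tb, "survives": "any 2 failed disks"}

def raid10(disks, size_tb):
    # Mirrored pairs, striped; half the raw capacity.
    # Survives one failure per mirror, but losing both disks of one pair kills the array.
    return {"usable_tb": disks / 2 * size_tb, "survives": "1 failure per mirror pair"}

print("RAID6 :", raid6(8, 1.0))   # {'usable_tb': 6.0, 'survives': 'any 2 failed disks'}
print("RAID10:", raid10(8, 1.0))  # {'usable_tb': 4.0, 'survives': '1 failure per mirror pair'}

With the same eight disks, RAID6 gives you more usable space and guaranteed two-disk tolerance, while RAID10 gives you better write performance but only survives a second failure if it lands in a different mirror pair.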

 

PCIe lanes don't really come into it when we're talking about hardware RAID controllers; all the drive communication is handled by the card's RAID-on-Chip (RoC). With the 12Gbps generation, each channel runs at 12Gbps and there are 4 channels per port, so 48Gbps of aggregate bandwidth per port.
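
As a quick sanity check on those numbers, here's a back-of-the-envelope Python sketch (I'm assuming the 9361-8i's two internal SFF-8643 ports and its PCIe 3.0 x8 host link; adjust for your card):

SAS3_LANE_GBPS = 12        # 12Gbps per SAS3 channel
LANES_PER_PORT = 4         # one SFF-8643 port carries 4 channels
PCIE3_LANE_MBS = 985       # ~985 MB/s usable per PCIe 3.0 lane (128b/130b encoding)

port_gbps = SAS3_LANE_GBPS * LANES_PER_PORT      # 48 Gbps per port
card_gbps = port_gbps * 2                        # 8i card: two internal ports -> 96 Gbps
pcie_x8_gbps = PCIE3_LANE_MBS * 8 * 8 / 1000     # ~63 Gbps over the x8 slot

print(f"Per SFF-8643 port: {port_gbps} Gbps")
print(f"Whole 8i card:     {card_gbps} Gbps")
print(f"PCIe 3.0 x8 link: ~{pcie_x8_gbps:.0f} Gbps")
# Eight SATA/SAS drives rarely push anywhere near these figures in practice,
# so a single x8 slot is plenty and motherboard lane count isn't the limit.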

 

RAID6 or RAID10 is generally the standard solution for a single ESXi host with local storage. You typically shouldn't be using RAID5.

If you have multiple ESXi hosts, you can also use an HBA instead (or a RAID card in passthrough/IT mode) and configure it at a software level in VMware vSAN. 

Of course, you can also have your storage completely separate in a SAN/NAS and create an iSCSI LUN to share it with the ESXi host(s).

 

 


 


38 minutes ago, Jarsky said:

 


Oh my gosh, thank you! That was extremely informative. 

Thank you so much!


In regard to going 12Gbps, just make sure that you actually get SAS 12Gbps SSDs. It doesn't make much sense to spend the money on a 12Gbps backplane and controller if you will be using 6Gbps end devices. Intel's Data Center SSDs are among the most robust and popular, but AFAIK they only make SATA-based SSDs, not SAS, so those would run at 6Gbps. And we all know that HDDs can't reach anything near even 3Gbps speeds...
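
For a rough sense of how those link speeds map to real throughput, here's an illustrative Python snippet (it assumes 8b/10b encoding for SATA/SAS links; the drive figures are ballpark, not measured):

def usable_mbs(link_gbps):
    # 8b/10b encoding: 10 bits on the wire per byte of data.
    return link_gbps * 1000 / 10

print(f"SATA 6Gbps : ~{usable_mbs(6):.0f} MB/s")    # ~600 MB/s
print(f"SAS 12Gbps : ~{usable_mbs(12):.0f} MB/s")   # ~1200 MB/s
# A typical 7200rpm HDD manages roughly 150-250 MB/s sequential, so even a
# 3Gbps link isn't the bottleneck for spinning disks; only SAS SSDs really
# benefit from the 12Gbps generation.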

 

If you go the HDD route, make sure to get a controller with a decent battery-backed flash cache; otherwise your 4K IOPS will be pretty bad. If you go for SSDs, then yes, go for 12Gbps if you go the SAS way, or a 6Gbps controller if you go the SATA way. Nevertheless, PCIe controllers with NVMe backplanes are just around the corner, so SATA/SAS will probably be phased out "soon".

 

And yes, use RAID6 or RAID10; however, just know that you aren't protected from bitrot issues (these are rare, but they do occur and can corrupt your entire RAID) the way you would be with ZFS or Btrfs (since they use copy-on-write and are self-healing). You also have no compression, deduplication, or datastore replication options when running the disks locally. What is popular today is to run a VM appliance that hosts the storage on something like ZFS or Btrfs; this is what most hyperconverged solutions do. Proxmox (another hypervisor vendor) supports this kind of distributed storage through Ceph (it also supports ZFS root), but since you are using VMware and just one host, this probably doesn't apply to you.

