
Help to solve an ESXi storage problem.

My company currently uses two HP 10th-gen DL380 servers with enough RAM and cores to virtualize our servers using ESXi. The problem is they are running low on local storage for the VMs, and new HP disks, particularly the SSDs, are pricey.

We’re thinking of building a server to house some third-party SSDs to act as storage for the VMs. It would be connected to the hosts over 10Gb LAN, but my experience with builds like this is limited, so I want to hear what people can suggest for the following:

The disks will be 6x 7.2 TB NVMe PCIe 4.0 SSDs.

RAID: 1 for safety or RAID 5 for space? There will be at least one hot spare, but could you make a bank of, say, 5x 7.2 TB SSDs in RAID 5 with one hot spare? Or is it better to do RAID 1 as 2x2 with two hot spares?

This is for storing the VMs, so I need to know if one RAID level will make a big enough difference to choose it over the rest. (They currently run on 4x 960 GB SATA SSDs in RAID 1 (~2 TB total) or 12x 1.2 TB 10k SAS disks in RAID 1 (7.2 TB) on each server, so they aren't handling large volumes of data per day.)

What is the best OS for the server when it needs to act as storage for ESXi: Unraid, TrueNAS, or something else?

I hope you guys can give me some suggestions and comments to help me make the best selections. Nothing in the project is really set so far, so any input will be appreciated.


38 minutes ago, Razerian said:

We’re thinking of building a server to house some third-party SSDs to act as storage for the VMs. It would be connected to the hosts over 10Gb LAN, but my experience with builds like this is limited, so I want to hear what people can suggest for the following:

The disks will be 6x 7.2 TB NVMe PCIe 4.0 SSDs.

If your plan is to build a SAN server and you're only connecting it to the host server over 10Gig, I don't see the value in using anything NVMe, especially Gen4, unless you plan to host the VMs on this server itself. You would want to investigate 40Gig or higher: aggregate multiple 40Gig links, or even consider 100Gig with redundant failover. The sheer bandwidth of Gen4 NVMe would be wasted considerably on 10Gig.
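To put rough numbers on that (a sketch only; the ~7 GB/s per-drive figure is a typical Gen4 NVMe sequential-read spec, not something from this thread, and protocol overhead is ignored):

```python
def link_gbps_to_gbytes(gbps: float) -> float:
    """Convert a link speed in gigabits/s to gigabytes/s (decimal units)."""
    return gbps / 8

NVME_GEN4_PER_DRIVE = 7.0  # GB/s, typical sequential read for one Gen4 drive

for link in (10, 40, 100):
    gbs = link_gbps_to_gbytes(link)
    print(f"{link} GbE ~= {gbs:.2f} GB/s "
          f"({gbs / NVME_GEN4_PER_DRIVE:.0%} of a single Gen4 drive)")
```

A 10GbE link tops out around 1.25 GB/s, so a single Gen4 drive, let alone six, would saturate it many times over; even 40GbE can't keep up with one drive's sequential throughput.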


I would opt for SATA-based SSD storage and/or stick with spinning rust. The entry cost is generally lower, it comes in equally high capacities, and it would be just as reliable. Look into Intel's D3 series or Micron PRO series SSDs for high-write-endurance, high-availability drives. They're built for datacenter applications without going straight to SAS, though SAS options are available.


38 minutes ago, Razerian said:

RAID: 1 for safety or RAID 5 for space? There will be at least one hot spare, but could you make a bank of, say, 5x 7.2 TB SSDs in RAID 5 with one hot spare? Or is it better to do RAID 1 as 2x2 with two hot spares?

Personally I would go with a single RAID 6 array. This comes with a fair bit of parity overhead on the CPU, but you can lose any two disks before you lose everything, providing ample time to replace the failed drive(s). Your workload demands will determine whether this arrangement is OK or whether you need something that offers more performance.
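The capacity trade-off between the layouts discussed here is simple arithmetic (a hypothetical helper for illustration; it ignores filesystem and formatting overhead):

```python
def usable_tb(drives: int, size_tb: float, level: str, hot_spares: int = 0) -> float:
    """Usable capacity for simple RAID levels; hot spares sit idle."""
    data = drives - hot_spares
    if level == "raid1":       # mirrored pairs
        return (data // 2) * size_tb
    if level == "raid5":       # one drive's worth of parity
        return (data - 1) * size_tb
    if level == "raid6":       # two drives' worth of parity
        return (data - 2) * size_tb
    raise ValueError(f"unknown level: {level}")

# The three layouts from this thread, all with 6x 7.2 TB drives:
print(usable_tb(6, 7.2, "raid1", hot_spares=2))  # 2x2 mirror + 2 spares
print(usable_tb(6, 7.2, "raid5", hot_spares=1))  # 5-wide RAID 5 + 1 spare
print(usable_tb(6, 7.2, "raid6", hot_spares=0))  # 6-wide RAID 6, no spare
```

Notably, 6-wide RAID 6 gives the same usable space (28.8 TB) as 5-wide RAID 5 plus a hot spare, but it tolerates any two simultaneous failures rather than one failure followed by a rebuild onto the spare.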


42 minutes ago, Razerian said:

What is the best OS for the server when it needs to act as storage for ESXi: Unraid, TrueNAS, or something else?

To be honest, in most cases it's all a matter of how you want to interface with the system. Performance over services like SMB, NFS, or iSCSI is pretty comparable no matter which OS you choose. If you want the full performance of the RAID array, though, I would stay away from Unraid; as the name implies, it doesn't use traditional RAID, so you don't get the striping performance bump.


If you want something with a pretty front-end, you could use TrueNAS. Personally I used FreeBSD, which is what TrueNAS is based on; it's just CLI-only. For a business environment, Windows Server might be suggested since it receives regular security updates and offers an easy-to-use GUI, or a more professional Linux distribution like Red Hat Enterprise Linux might be preferred over prosumer OSes like TrueNAS.
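If you do go the FreeBSD/ZFS route, a minimal sketch of exporting a RAID-Z2 pool to ESXi over iSCSI might look like the following. This is a config sketch, not a tested recipe: the disk names, pool name, zvol size, and IQN are all made up, and the ctld configuration is abbreviated (see the FreeBSD Handbook for the full portal-group and auth setup).

```shell
# Double-parity pool across six NVMe drives (illustrative device names)
zpool create tank raidz2 nvd0 nvd1 nvd2 nvd3 nvd4 nvd5

# Block device (zvol) to hand to ESXi as an iSCSI LUN
zfs create -V 20T -o volblocksize=64K tank/esxi-ds1

# /etc/ctl.conf -- FreeBSD's native iSCSI target daemon (abbreviated):
#   target iqn.2023-01.lan.example:esxi-ds1 {
#       portal-group pg0
#       lun 0 { path /dev/zvol/tank/esxi-ds1 }
#   }

# Enable and start the target, then add it as a dynamic iSCSI target in ESXi
sysrc ctld_enable=YES
service ctld start
```

RAID-Z2 in ZFS gives you the same two-disk fault tolerance as the RAID 6 suggestion above, with scrubbing and checksumming on top.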


It all depends on how professional the setup of this server has to be: whether your company has strict guidelines for security and OS choice, or whether you really have the freedom to use whatever you want.


NVMe over Fabrics (NVMe-oF) is really becoming a thing for data-center storage and is getting more popular every day. It will likely become the standard for this type of storage; however, it's still a bit proprietary.


For a more mainstream host for your datastores (NAS/SAN), it depends on the limits of your NVMe host. This type of storage server doesn't exactly grow on trees.


My conservative approach would be to suggest the SATA-based storage mentioned above. VMware has tech notes for TrueNAS.

