
Have some questions about ESXi

Hey, right now I have an HP ProLiant server and the CPU isn't cutting it for my needs anymore. I'm planning to upgrade to an i5 or something similar for some more performance. I figured this would be a good time to play with VM stuff on a server.


My current server has 4x4TB drives in a ZFS RAID, with the OS running on a 32GB SSD. If I wanted to recreate that in a VM, how should I go about doing it?
From what I've pieced together, it seems I should install ESXi on a USB/SD card, create the VM, give it space from the SSD and the 4x4TB drives, then create a ZFS RAID with the 4TB drives? (I also read something about just passing the controller through to the VM?)


Basically I just want to let this VM have access to a big 12TB mount for movies, TV shows, and such. It doesn't need to be ZFS; it's just what I've worked with. Or should I do RAID 5 from the motherboard and assign the VM that one partition instead of doing software RAID in the VM?


This is the hardware I'm planning on using:
https://pcpartpicker.com/list/gwmJGf (cheaper variant)
https://pcpartpicker.com/list/ZBGNZ8
Thanks!


@12jdlovins

You'll want to pass the disks through entirely to the ZFS VM, as that is the recommended way to do it. To do that you'll need a cheap SAS HBA card, which can be found on eBay for around $70-$120. You pass the controller through to the VM, which means that VM has direct control of it and the disks attached to it.

 

Example SAS HBA:

http://www.ebay.com/itm/New-IT-Mode-LSI-9210-8i-SAS-SATA-8-port-PCI-E-6Gb-RAID-Controller-Card-/121752761447?hash=item1c5907b467:g:oE8AAOSw0JpV7IAy


ESXi can be installed on pretty much anything. A USB drive or SD card is fine, but I'd personally install it onto an SSD, because I prefer the reliability of those over removable storage.

 

You can even just partition out a small part of the SSD for ESXi.

 

After that, you can pass some hardware directly to VMs. I would partition out a portion (or all) of the SSD for VM OS drives. ESXi can take a partition (or whole drive) and use it as a "datastore", which is basically just virtual HDD space that can be assigned as needed and turned into one or many virtual HDDs.

 

At this point, create a Linux or FreeNAS VM (or whatever) for your file server. If you're using a separate RAID card/HBA, you can pass that directly to the VM and it'll have direct disk access, just as it would natively. I don't know if you can pass individual SATA ports directly to a VM - possibly? But I assume you'd have to pass the entire SATA controller to the VM, which could cause other complications, so I would personally avoid this. Picking up a cheap HBA and passing that to the VM would be better.

 

I would avoid the complications of motherboard RAID 5. If you want to use ZFS, that's totally fine. You could even use Windows Storage Spaces via a Windows Server VM in a similar fashion.
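Whichever route you take (RAID 5 or ZFS RAID-Z1), single parity across four 4 TB drives leaves roughly 12 TB usable before filesystem overhead. A quick back-of-envelope sketch in plain Python - illustrative arithmetic only, not anything ESXi reports:

```python
# Usable capacity under single-parity layouts (RAID 5, ZFS RAID-Z1):
# one drive's worth of space holds parity, the rest stores data.
def usable_tb(drive_count: int, drive_size_tb: float) -> float:
    """Space left after single parity; needs at least 3 drives."""
    if drive_count < 3:
        raise ValueError("single parity needs at least 3 drives")
    return (drive_count - 1) * drive_size_tb

print(usable_tb(4, 4.0))  # 4x4TB -> 12.0 TB usable
```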

For Sale: Meraki Bundle

 

iPhone Xr 128 GB Product Red - HP Spectre x360 13" (i5 - 8 GB RAM - 256 GB SSD) - HP ZBook 15v G5 15" (i7-8850H - 16 GB RAM - 512 GB SSD - NVIDIA Quadro P600)

 


5 minutes ago, leadeater said:

-snip-

5 minutes ago, dalekphalm said:

-snip-

Yikes, that adds to the budget quite a bit. I see, though - that would make a lot more sense. I guess the only issue is that right now the SSD is on a PCIe card, and the motherboards that fit in the case I want only have one PCIe slot.

 

Assuming I went the HBA/SAS card route, I would need another hard drive for OS stuff, since the 4 drives connected to the HBA would basically be owned by the file server, correct?

 

I have this old SSD (https://www.newegg.com/Product/Product.aspx?Item=N82E16820148348) that I bought back in 2011. Would I be able to use it for the OS partitions of the guest operating systems, or should I avoid it since it's like 6 years old now? It has been sitting in my closet for about 2-3 years and I'm trying to find a use for it.


1 minute ago, 12jdlovins said:

 

 

-snip-

Indeed it can add a lot to the cost.

 

With that in mind, direct passthrough is only necessary for storage systems that want raw disk access, like ZFS and Storage Spaces.

 

If you were willing to ditch ZFS, you could do motherboard RAID 5, and then just "assign" that storage to the file server VM (running whatever - Linux, Windows, etc.) through the ESXi settings. An HBA would not be required in this scenario.

 

But if you want ZFS or Storage Spaces and their advantages, you'll need an HBA to pass through to the VM. That SSD you linked is probably fine. I would check the drive stats via something like CrystalDiskInfo - mostly the total read/write stats, but also the general SMART data and drive health.

 

If the SMART stats look good, and if the read/write stats aren't too high (write stats in particular) for the kind of NAND flash that drive uses (TLC, MLC, etc), then the drive should be totally fine to use for 2-4 more years.
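If you want to turn those wear numbers into a rough remaining-life estimate, the arithmetic is straightforward. A sketch with made-up placeholder figures - substitute the TBW rating from your drive's spec sheet and the host-writes total that CrystalDiskInfo reports:

```python
def remaining_endurance(rated_tbw: float, host_writes_tb: float) -> float:
    """Fraction of the drive's rated write endurance still unused."""
    used = min(host_writes_tb / rated_tbw, 1.0)  # clamp at fully worn
    return 1.0 - used

# Hypothetical example: a drive rated for 72 TBW that has seen
# 9 TB of host writes still has 87.5% of its rated endurance left.
print(remaining_endurance(72.0, 9.0))
```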


 


1 minute ago, dalekphalm said:

-snip-

 

Alright, I'll have to check the read/write stats on the SSD. I don't really need ZFS; it's just the first solution that was recommended to me for making a big 12TB pool, and my current server happened to have ECC memory, so it was a pretty nice fit. The new server won't have ECC since server mobos are way more expensive, but I'm not too worried about that. What would you recommend if, at the end of the day, I'm just trying to turn 4x4TB drives into 1 big mount point: ZFS or RAID 5?

 

Also, the HBA linked in the second post says 8 ports, but I only see 2 on it. Am I missing something?


Just now, 12jdlovins said:

 

-snip-

Honestly, ZFS vs motherboard RAID 5 is a personal choice. Both will work fine. ZFS gives you a higher level of protection and usually comes with better monitoring and reporting features (I'm running FreeNAS on my server currently), so if a drive fails, I get an email notification instantly.

 

ZFS also protects against bitrot, which motherboard RAID will not (bitrot is random bit flips on the HDD over time - a bit flip is when a bit randomly switches from a 0 to a 1, or vice versa, corrupting that bit and, potentially, the file the bit is part of).

 

ZFS (along with enterprise-grade RAID solutions and some other software RAID solutions) performs a "scrub", where it scans the entire pool and compares all data against its checksums and parity. If anything is out of whack, it rewrites the file from the parity information - thus actively finding and fixing corrupted files. Motherboard RAID generally does not have these kinds of features.
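To illustrate the scrub-and-repair idea, here's a toy XOR-parity sketch in Python. This is the RAID 5 flavour of parity; real ZFS uses per-block checksums to pinpoint which copy is bad, but the rebuild-from-redundancy principle is the same:

```python
from functools import reduce

def parity(blocks):
    """XOR parity across equal-length data blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks plus their parity block, as on a 4-drive stripe.
data = [b"movie", b"shows", b"music"]
p = parity(data)

# Simulate bitrot: one bit flips in the block on "drive" 1.
data[1] = bytes([data[1][0] ^ 0x01]) + data[1][1:]

# A scrub detects the mismatch (parity no longer checks out)...
assert parity(data) != p

# ...and, once the bad block is identified, rebuilds it from the
# surviving blocks plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt)  # b'shows'
```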

 

As for the HBA, those ports are 4-channel "multi-lane" connectors, called SFF-8087. Each connector contains 4 SAS or SATA "channels" (a channel being the bandwidth equivalent of a single port).

 

You'll need to buy an SFF-8087 to SATA breakout cable, which has a single SFF-8087 connector on one end and 4 regular SATA connectors on the other. They look like this:

[Image: SFF-8087 to SATA breakout cable]
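The port math, spelled out: each SFF-8087 connector carries 4 channels, which is why a card with 2 physical connectors (like that 9210-8i) is sold as "8 port". A trivial sketch:

```python
# One SFF-8087 "mini-SAS" connector fans out to 4 SAS/SATA channels
# via a breakout cable, so connectors-on-card != drives supported.
SFF_8087_CHANNELS = 4

def drives_supported(connectors: int) -> int:
    return connectors * SFF_8087_CHANNELS

print(drives_supported(2))  # 2-connector card -> 8 drives
```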


 


2 minutes ago, dalekphalm said:

-snip-

 

Ah, of course, more cost :P Luckily it's only $13 or so.

 

So essentially my choices are as follows:

1. Use RAID 5 to combine all the drives into 1 mount point I can assign to the guest.

2. Get the HBA, hook up the 4 drives, and pass the HBA controller through to the guest to use ZFS.

 

I had absolutely no idea SAS HBAs existed, so I'm glad I posted here. I was going to try to assign like 4 separate datastore chunks to the guest and then use ZFS on top of them - or at least attempt to, and probably meet failure.

 

Mostly all the drives are going to host is media files like TV shows, movies, etc., so I'm not entirely sure I need to worry about bitrot for that, but I suppose it wouldn't be a bad thing to protect against in case I decide to expand beyond that stuff.


7 hours ago, 12jdlovins said:

I had absolutely no idea SAS HBAs existed, so I'm glad I posted here. I was going to try to assign like 4 separate datastore chunks to the guest and then use ZFS on top of them - or at least attempt to, and probably meet failure.

Sure, that would actually work, but it's highly recommended not to do that, unless you just want to test ZFS with different disk configurations and simulate failures/disk swaps, etc.


If you watch out for it when buying your mainboard, you can get away without buying a separate HBA.

In ESXi you can pass through the mainboard's own storage controller. When the board only has one, that doesn't make much sense - but a lot of server-grade boards do have an additional controller.

 

My X9SRi-F, for example, has 2 controllers, and I am able to pass them through to VMs.

[Image: sku.JPG]

 

Red: 6-port controller on the mainboard

Blue: Dell PERC H200 in IT mode

Black: 4-port controller on the mainboard

 

On the board it looks like this (this is the 3F version, that's why there are more):

[Image: sku2.JPG]

 


7 hours ago, leadeater said:

Sure, that would actually work, but it's highly recommended not to do that, unless you just want to test ZFS with different disk configurations and simulate failures/disk swaps, etc.

Well, yeah, it would work, but I would be using it like all the time, which would probably be a bad idea. I'm definitely mulling over the HBA/SAS thing. They're only like $65 on eBay, which isn't too bad.

 

3 hours ago, TapfererToaster said:

If you watch out for it when buying your mainboard, you can get away without buying a separate HBA.

In ESXi you can pass through the mainboard's own storage controller. When the board only has one, that doesn't make much sense - but a lot of server-grade boards do have an additional controller.

If only server boards weren't a good chunk more expensive than regular boards. Although I guess once you take into consideration that the price of the HBA/SAS card is built into the motherboard, it's not too much more. Thanks for the tip - I'll have to look into doing that, since then I could get ECC memory as well.

 

edit: is there a way to find out beforehand if the motherboard has 2 separate controllers?


1 hour ago, 12jdlovins said:

-snip-

edit: is there a way to find out beforehand if the motherboard has 2 separate controllers?

The specs of the board should generally indicate whether there's a separate 2nd controller. It's usually found on boards with a lot of SATA ports as well (10+). Sorry I can't be more specific; maybe someone else can.


 


On 20.3.2017 at 6:29 PM, 12jdlovins said:

edit: is there a way to find out beforehand if the motherboard has 2 separate controllers?

Supermicro calls the standard controller just "SATA", and if the board has an additional one, it's listed under "SCU" (Storage Control Unit) in the specification list.

 

[Image: sku.JPG - Supermicro spec list showing the SATA and SCU entries]

