
Best OS to use?

lgice

I'm currently running MS Server with Hyper-V on a 6-core AMD CPU with 32GB of memory and 6x 2TB drives, plus a 250GB SSD for the OS.

 

I'm using software RAID 1+0 from the Windows OS.

 

Currently running 3 VMs:

Ubuntu VM for Plex

OpenMediaVault for file sharing

Ubuntu Server for game servers, mainly Minecraft

 

I'm just wondering what would be a better option for a host OS instead of Windows Server with Hyper-V.

 

Thanks,

LG


Hyper-V isn't bad, but I find Windows has always had unnecessary overhead as a hypervisor if you need the VMs for compute applications.

 

For cases like this I like Proxmox or Ubuntu + QEMU.

 

A similar hypervisor that's popular in the enterprise is VMware ESXi, but it only supports hardware RAID.


1 hour ago, Windows7ge said:

A similar hypervisor that's popular in the enterprise is VMware ESXi, but it only supports hardware RAID.

I am currently using ESXi and it is good overall. I feel it is a little bit easier to use than Proxmox (but that might just be because it was my first hypervisor). And it was probably 5+ years ago that I last played with Proxmox, so I am sure it has gotten much better by now.

 

There are three things I don't like about ESXi:

 

  • The free version doesn't really have a good backup solution. There are really cool products like Veeam Community Edition that are designed to back up virtual machines, but they can't access the free ESXi because VMware has locked down the API on the free version. In other words, you need vSphere.
  • No local software RAID support.
  • Hardware support can be a bit limited. It has gotten better, but you definitely want to make sure your network cards and any RAID cards are supported.

Now, that said, I have found a way around these problems, but it is very clunky and a pain if you reboot your server.

I basically set up ESXi, then create a VM and install FreeNAS inside it. Using hardware passthrough, I give the FreeNAS VM full access to my local HBA cards that all of my hard drives are connected to.

 

Using FreeNAS I set up software RAID, then I create iSCSI shares and present them back to ESXi. ESXi sees the iSCSI targets as new drives, and I put all of my VMs on them. FreeNAS has native backup support to other FreeNAS boxes, so I have some backups (it's not nearly as nice as Veeam would be, but oh well).

 

I am sure there are performance losses doing this weird loop-around, but I haven't really noticed them. There are a few more hardware requirements for doing it this way that I haven't gotten into, but this is basically the overview.

 

*Edit: Oh, I forgot to mention the biggest downside of my setup. Because FreeNAS is sharing all of the data to ESXi over iSCSI, when you reboot the ESXi server, ESXi comes up expecting to see the FreeNAS iSCSI shares, but FreeNAS hasn't booted yet because it is a VM on ESXi, so ESXi just gives up searching for the iSCSI shares. Because of this, once FreeNAS is up and running you need to log in to ESXi and manually rescan and find the iSCSI targets. Then all of your VMs on the iSCSI shares will reappear and you can manually start them.
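If you ever want to script that rescan instead of clicking through the UI, something along these lines from the ESXi shell should do it. This is just a sketch: the adapter name (vmhba64) and the VM ID are placeholders for whatever your host actually reports.

# rediscover iSCSI targets once the FreeNAS VM is back up (vmhba64 is an example name)
esxcli iscsi adapter discovery rediscover --adapter=vmhba64
# rescan all storage adapters so the iSCSI-backed datastores reappear
esxcli storage core adapter rescan --all
# list registered VMs to find their IDs, then power on the ones sitting on the iSCSI datastore
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/power.on <vmid>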

 

 

I think if I were to start fresh today, I would give Proxmox or Unraid a closer look.

@Catsrules ESXi is a hypervisor I've been meaning to experiment with but I currently don't have a box available to put it on.

 

Your solution to having a means of backup without paying licensing fees is unorthodox, but I kind of like it from an experimental perspective. If ESXi supports VirtIO you might be able to drop the latency to negligible levels. If not, a peer-to-peer connection between a spare Ethernet or SFP+ port and one passed to the FreeNAS VM would probably not be much worse, with enough bandwidth to support quite a few active VMs.


2 minutes ago, Windows7ge said:

@Catsrules ESXi is a hypervisor I've been meaning to experiment with but I currently don't have a box available to put it on.

 

Your solution to having a means of backup without paying licensing fees is unorthodox, but I kind of like it from an experimental perspective. If ESXi supports VirtIO you might be able to drop the latency to negligible levels. If not, a peer-to-peer connection between a spare Ethernet or SFP+ port and one passed to the FreeNAS VM would probably not be much worse, with enough bandwidth to support quite a few active VMs.

Yes, it does support VirtIO; that is how I am passing the HBA card to FreeNAS so it has direct access to the card and all of the hard drives on it. It is actually fairly responsive, and I have been happy with the performance. I haven't run any drive latency tests on it, but just pulling up Task Manager on a random Windows 10 VM says drive latency is anywhere from 0.5ms to 40ms. I think this particular VM is running on a striped RAIDZ array (aka RAID50) across 8 hard drives. Unfortunately I don't have a quick way to test latency on any of my VMs running on my SSD pool; I would assume their latency would be a little better.

But I probably should run a latency tester at some point.

 

My networking between FreeNAS and ESXi is all virtual. The virtual NICs are both 10Gb; I figured why not, it doesn't cost me anything being virtual :). I am guessing the virtual networking is all handled by the CPU. It would be cool to pass an entire NIC to FreeNAS, but unfortunately the server is out of PCIe slots :(.

 

I am at about 12 active VMs and I haven't really seen any major performance problems; it has been going strong for the last 5-ish years. I've had a few hard drive failures, but apart from that it has been smooth sailing (knock on wood).


2 minutes ago, Catsrules said:

Yes, it does support VirtIO; that is how I am passing the HBA card to FreeNAS so it has direct access to the card and all of the hard drives on it. It is actually fairly responsive, and I have been happy with the performance. I haven't run any drive latency tests on it, but just pulling up Task Manager on a random Windows 10 VM says drive latency is anywhere from 0.5ms to 40ms. I think this particular VM is running on a striped RAIDZ array (aka RAID50) across 8 hard drives. Unfortunately I don't have a quick way to test latency on any of my VMs running on my SSD pool; I would assume their latency would be a little better.

But I probably should run a latency tester at some point.

 

My networking between FreeNAS and ESXi is all virtual. The virtual NICs are both 10Gb; I figured why not, it doesn't cost me anything being virtual :). I am guessing the virtual networking is all handled by the CPU. It would be cool to pass an entire NIC to FreeNAS, but unfortunately the server is out of PCIe slots :(.

 

I am at about 12 active VMs and I haven't really seen any major performance problems; it has been going strong for the last 5-ish years. I've had a few hard drive failures, but apart from that it has been smooth sailing (knock on wood).

VirtIO for the network should produce lower latency and even higher speeds than going through a NIC. If you just bridge FreeNAS to the same NIC the host is using and set it up with paravirtualization, it should be great when connecting to the iSCSI host, if that isn't what you've done already.


2 minutes ago, Windows7ge said:

VirtIO for the network should produce lower latency and even higher speeds than going through a NIC. If you just bridge FreeNAS to the same NIC the host is using and set it up with paravirtualization, it should be great when connecting to the iSCSI host, if that isn't what you've done already.

Ah, interesting, I will look into that. The iSCSI portion is not attached to any physical NICs; it is all virtual networking. I will have to dig a little deeper into how ESXi handles networking.


3 minutes ago, Catsrules said:

Ah, interesting, I will look into that. The iSCSI portion is not attached to any physical NICs; it is all virtual networking. I will have to dig a little deeper into how ESXi handles networking.

I can't guarantee that it's possible on ESXi; I've never used it before. But if it is, the bandwidth will be insane, well in excess of 10Gbit.


Cool, thanks guys for the advice. I'm just not sure what to do other than going with hardware RAID, which I don't have a controller for, since I'm using an older 6-core PC as my server hardware.


I use VirtualBox on my main PC, which is running Pop!_OS (it's basically Ubuntu). It's not the best option, and I want to build a server and install XCP-ng/Xen Orchestra.


2 hours ago, Windows7ge said:

I can't guarantee that it's possible on ESXi; I've never used it before. But if it is, the bandwidth will be insane, well in excess of 10Gbit.

For vSphere it's VMXNET3. Same idea as VirtIO.

 

For iSCSI, just create your vSwitch, add your physical adapters, then create your VMkernel adapters for MPIO connections and assign them to the switch.
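From the ESXi shell that works out to roughly the following. Treat it as a sketch: the vSwitch and port group names, the vmnics, the IP address and the vmhba64 adapter name are all examples to adapt to your own environment.

# create a vSwitch and attach the physical uplinks
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic2
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic3
# one port group and one VMkernel adapter per iSCSI path
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name=iSCSI-A
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI-A
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=10.10.10.11 --netmask=255.255.255.0 --type=static
# bind the VMkernel adapter to the software iSCSI adapter so MPIO has a path to use
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk1

Repeat the port group, vmk and binding steps for the second path and the storage stack handles the multipathing from there.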


28 minutes ago, lgice said:

Cool, thanks guys for the advice. I'm just not sure what to do other than going with hardware RAID, which I don't have a controller for, since I'm using an older 6-core PC as my server hardware.

Whether hardware RAID or software RAID is better really depends on your application for the server (sometimes applications can benefit from both at the same time).

 

For what you're doing, software RAID would work perfectly fine and wouldn't cost you anything. You wouldn't be sacrificing performance for your use case by doing so, either.


So would Proxmox handle the software RAID as well as the virtualization portion that I'm wanting? I've never really used Proxmox before.


The definition of "best" is a personal matter. You simply can't ask a question like that and get a single true answer.

What is best depends on your knowledge of the system you choose to use. Everything else is relative.


On 3/5/2020 at 9:07 AM, lgice said:

So would Proxmox handle the software RAID as well as the virtualization portion that I'm wanting? I've never really used Proxmox before.

You can let FreeNAS/ZFS/Ubuntu Server + ZoL manage the RAID. It could even just be a VM: https://www.reddit.com/r/homelab/comments/4jx8u3/recommended_running_freenas_in_vm/

 

Since you have Windows Server, is there any specific reason you wouldn't want to go with Microsoft Storage Spaces as your software RAID?


No, I have nothing against Windows Server and using Storage Spaces to manage the RAID. I was just wondering what others would recommend as a good, viable alternative, since I'm only using the free version of Windows Server and not the full-blown version.


On 3/4/2020 at 11:07 AM, lgice said:

I'm currently running MS Server with Hyper-V on a 6-core AMD CPU with 32GB of memory and 6x 2TB drives, plus a 250GB SSD for the OS.

I'm using software RAID 1+0 from the Windows OS.

Currently running 3 VMs:

Ubuntu VM for Plex

OpenMediaVault for file sharing

Ubuntu Server for game servers, mainly Minecraft

I'm just wondering what would be a better option for a host OS instead of Windows Server with Hyper-V.

Thanks,

LG

 

Why are you using MS Server + Hyper-V for the host and then Linux for the rest? Why not just use Linux on the host?

Then you can thin it out a bit with containers (like LXC/LXD/Docker).

Instead of the full overhead of a VM (with a separate kernel + dozens of processes), you'll just have the processes you need.

I use Debian testing as my base, with docker/plex, docker/postgres, docker/splunk, and docker/pihole.

Docker is nice. Have a look here (I'm not affiliated with them in any way): https://docker-curriculum.com/
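As a taste of what that looks like, here's Plex as a single container using Plex's official plexinc/pms-docker image. The timezone and the host paths are just example values for wherever your config and media actually live:

# run Plex in a container; host networking is the simplest option for discovery
docker run -d --name plex --network=host \
  -e TZ="America/Denver" \
  -v /srv/docker/plex/config:/config \
  -v /srv/media:/data \
  plexinc/pms-docker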

 


So with just Linux and containers, how would you handle making an array of my drives totaling 6TB at that point? I'm genuinely asking, because I don't know of anything other than hardware RAID.


6 hours ago, lgice said:

So with just Linux and containers, how would you handle making an array of my drives totaling 6TB at that point? I'm genuinely asking, because I don't know of anything other than hardware RAID.

Containers/Docker are irrelevant when it comes to making arrays. You just create mount points for them on whatever storage you want that data to live on.

As far as building RAID, there's Linux RAID (mdadm) and ZoL (ZFS on Linux), which are the most popular Linux software solutions. You can also use a hardware RAID card if you really want; the older LSI 9260-8i and its OEM variants can be had fairly cheap these days, and you can configure them with their Linux MSM manager or the WebBIOS.
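For a 6-drive RAID 1+0 like the one in the original post, either software route is only a couple of commands. This is just a sketch: /dev/sd[b-g], the mount point and the pool name are placeholders for your actual disks.

# mdadm: 6-disk RAID10, then format and mount it
sudo mdadm --create /dev/md0 --level=10 --raid-devices=6 /dev/sd[bcdefg]
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/storage
sudo mount /dev/md0 /mnt/storage

# or ZFS on Linux: three mirrored pairs striped together, the same RAID 1+0 idea
sudo zpool create tank mirror sdb sdc mirror sdd sde mirror sdf sdg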


If you already have Ubuntu VMs running and know your way around (CLI) Linux, drop all this nice, fancy GUI stuff and just use Ubuntu Server.

Then install KVM + QEMU, install Docker (maybe in a VM), and you're golden for all your services.
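On Ubuntu Server that's only a handful of packages. A rough sketch, with package names as they are on 18.04/20.04:

# KVM/QEMU with libvirt for managing the VMs
sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients virtinst
sudo adduser $USER libvirt
# Docker from the Ubuntu repos is the quick way in
sudo apt install docker.io
sudo adduser $USER docker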

 

Also, please, no hardware RAID. I don't know you, but I can assure you with 99.99999% certainty that you don't need a hardware controller. On the contrary, it probably causes more issues than it's worth, and any software solution (mdadm, ZFS, Btrfs) is equally good for you and has less overhead.

 

Cheers!

