
What is your preferred Server-Hypervisor and why


44 members have voted

  1. What is your preferred Server-Hypervisor?

    • VMware ESXi
      27
    • Microsoft Hyper-V
      8
    • Proxmox
      3
    • XenServer
      3
    • KVM (what most Linux distributions use)
      2
    • Other (please specify in a reply)
      1


What is your preferred Server-Hypervisor and why

 

I prefer VMware ESXi because its footprint is very small (around 400 MB for ESXi 5.1), it is very good for both Linux guests AND Windows guests, and it works very well in an HA setup.

 

Even though I'm a Windows (Server) guy, I still prefer VMware ESXi over MS Hyper-V ...


VMware ESXi is my favorite compared to UNRAID by Lime Technology. They are both awesome, but VMware ESXi is mostly free to begin with. If you are looking for direct resource sharing, I would go with UNRAID.

   / | / /__  _________/ / /_____ _/ (_) /___  __
  /  |/ / _ \/ ___/ __  / __/ __ `/ / / __/ / / /
 / /|  /  __/ /  / /_/ / /_/ /_/ / / / /_/ /_/ / 
/_/ |_/\___/_/   \__,_/\__/\__,_/_/_/\__/\__, /  
                                        /____/

--------------------------------------------------------------------------------

 

Hi, 「Neͥrdͣtͫality」noice to meet you... :3

 


Honestly, anything but Hyper-V. The setup for Hyper-V is absolute garbage, and frankly it's not all that good.

QUOTE ME OR I PROBABLY WON'T SEE YOUR RESPONSE 

My Setup:

 

Desktop

Spoiler

CPU: Ryzen 9 3900X  CPU Cooler: Noctua NH-D15  Motherboard: Asus Prime X370-PRO  RAM: 32GB Corsair Vengeance LPX DDR4 @3200MHz  GPU: EVGA RTX 2080 FTW3 ULTRA (+50 core +400 memory)  Storage: 1050GB Crucial MX300, 1TB Crucial MX500  PSU: EVGA Supernova 750 P2  Chassis: NZXT Noctis 450 White/Blue OS: Windows 10 Professional  Displays: Asus MG279Q FreeSync OC, LG 27GL850-B

 

Main Laptop:

Spoiler

Laptop: Sager NP 8678-S  CPU: Intel Core i7 6820HK @ 2.7GHz  RAM: 32GB DDR4 @ 2133MHz  GPU: GTX 980m 8GB  Storage: 250GB Samsung 850 EVO M.2 + 1TB Samsung 850 Pro + 1TB 7200RPM HGST HDD  OS: Windows 10 Pro  Chassis: Clevo P670RG  Audio: HyperX Cloud II Gunmetal, Audio Technica ATH-M50s, JBL Creature II

 

Thinkpad T420:

Spoiler

CPU: i5 2520M  RAM: 8GB DDR3  Storage: 275GB Crucial MX300

 


VMware. We need to manage over 1,000 VMs across multiple sites with DR and have good integration with NetApp storage and Commvault backups. The only one that truly meets all these requirements is VMware.

 

Hyper-V in Server 2016 has made some very important improvements, but System Center Virtual Machine Manager still falls way short when compared to vCenter and the rest of the VMware stack.


Sorry, completely forgot UNRAID ...

 

But UNRAID uses KVM from Linux.

So if someone is using UNRAID and prefers it, the correct answer would be KVM ;)


3 minutes ago, leadeater said:

VMware. We need to manage over 1,000 VMs across multiple sites with DR and have good integration with NetApp storage and Commvault backups. The only one that truly meets all these requirements is VMware.

 

Hyper-V in Server 2016 has made some very important improvements, but System Center Virtual Machine Manager still falls way short when compared to vCenter and the rest of the VMware stack.

...and don't forget: in VMware vCenter you can adjust permissions much more granularly, down to single VMs or groups of VMs ... System Center Virtual Machine Manager is not really good at that...


1 minute ago, pat-e said:

...and don't forget: in VMware vCenter you can adjust permissions much more granularly, down to single VMs or groups of VMs ... System Center Virtual Machine Manager is not really good at that...

Well for that issue you need to hop on the Microsoft hype train for Azure Stack..... lol.


Well, depending on your business model, you can't always give up your own datacenter and move into the "cloud" (in Germany, for example, we have laws requiring banks to use their own datacenters)...


I would have said ESXi a few years ago, as nothing was close to it in terms of what it can do, but lately the other L1 hypervisors have been catching up to it. I see a lot of clients moving from ESXi to Hyper-V just to save money on licensing costs, and to be fair, Hyper-V has come a long way in a few years and is pretty much on par with ESXi in terms of functionality - plus it's free! Microsoft are always late to the party, but when they arrive they take out the competition. For example, if you remember back in the days of Windows Server 2003, it was laughable how bad it was compared to the Linux variants... Then 2008 came out and people started to realize it was very, VERY good... Mark my words, Hyper-V will (maybe in a few years) be the dominant hypervisor. I can only hope they all put some effort into containers, as that's the future of the virtualization side of IT.

 

Just don't attempt to use XenServer; it's awful, and the UI is awful and breaks all the time.

Intel I9-9900k (5Ghz) Asus ROG Maximus XI Formula | Corsair Vengeance 16GB DDR4-4133mhz | ASUS ROG Strix 2080Ti | EVGA Supernova G2 1050w 80+Gold | Samsung 950 Pro M.2 (512GB) + (1TB) | Full EK custom water loop |IN-WIN S-Frame (No. 263/500)


5 minutes ago, Altecice said:

I would have said ESXi a few years ago, as nothing was close to it in terms of what it can do, but lately the other L1 hypervisors have been catching up to it. I see a lot of clients moving from ESXi to Hyper-V just to save money on licensing costs, and to be fair, Hyper-V has come a long way in a few years and is pretty much on par with ESXi in terms of functionality - plus it's free!

 

Just don't attempt to use XenServer; it's awful, and the UI is awful and breaks all the time.

I agree if your setup only contains Windows OS.

 

But if you have a mixed environment like Windows, Red Hat Linux, Sun Solaris and Oracle OS, you can't really go Hyper-V (especially when you have some older Sun Solaris installations where you can't easily upgrade the OS to suit Hyper-V).

 

And the "free" comes with a different price-tag: Hyper-V is made for Windows OS as Guests ... and those needs licensing....

 

AND... Hyper-V needs a local disk to be installed to and can't boot from USB or CF (like ESXi can). My setup contains multiple hosts without any local disk, only CF with ESXi, and the rest of the storage is FC ... I know "boot from FC" is possible, but it's a complete pain-in-the-a**.

 

BUT

everyone can use whatever they want ... this thread is not meant to "convert" people to different software ...

I just like to hear the pros and cons of each.


Just now, pat-e said:

I agree if your setup only contains Windows OS.

 

But if you have a mixed environment like Windows, Red Hat Linux, Sun Solaris and Oracle OS, you can't really go Hyper-V (especially when you have some older Sun Solaris installations where you can't easily upgrade the OS to suit Hyper-V).

 

And the "free" comes with a different price-tag: Hyper-V is made for Windows OS as Guests ... and those needs licensing....

 

I agree, that's why I said give it a few years... Personally I think it's a MUCH better setup than any other hypervisor, because its microkernelized architecture means you don't (in theory) have to wait for Hyper-V support to use whatever product you want on it:

 

 

[Attached image: Hyper-V.png]

ESXi licensing is still vastly more costly than buying a Windows Server license, not to mention that once Azure gets off the ground it will have the best cloud support (something VMware has not been able to get a grip on yet).

Intel I9-9900k (5Ghz) Asus ROG Maximus XI Formula | Corsair Vengeance 16GB DDR4-4133mhz | ASUS ROG Strix 2080Ti | EVGA Supernova G2 1050w 80+Gold | Samsung 950 Pro M.2 (512GB) + (1TB) | Full EK custom water loop |IN-WIN S-Frame (No. 263/500)


I'm using Hyper-V right now because when I first set up my network I had free Server 2012R2 licenses from my college, and now that it's set up and I really don't have spare hardware to play around with, I haven't tried anything else. In the near future I should have a server ready to play with other hypervisors and maybe make a complete swap after testing. My issue is that my VM host is also my NAS, using Storage Spaces, and I am also pretty much tied to that. unRAID or an ESXi vSAN look like the possible contenders for my setup.

Looking to buy GTX690, other multi-GPU cards, or single-slot graphics cards: 

 


If I need Windows and I own the hardware, I'll go with ESXi.

If I need Windows and I'm going to rent the hardware/resources, I'll go with KVM.

If I don't need Windows, I go with OpenVZ.

If I'm selling the resources, I go with OpenVZ.

-KuJoe


XenServer, because not only is it free, it is also more feature-rich than the free version of ESXi.

Plus, there is a free Linux-based management utility (it doesn't require a Windows license to manage the host).

 

ESXi SysAdmin

I have more cores/threads than you...and I use them all


I haven't ventured out to other hypervisors, mainly working with ESXi. I really like the networking interface in the desktop client. If they would make the web client a little more responsive and more similar to the desktop client I would be content - but since about 5.5 they've been pushing the web client pretty hard.

 

My only other complaint with ESXi right now is network storage. I have tried NFS and iSCSI and I'm not seeing the performance I'd like. I don't think CIFS should be double or triple the speed of iSCSI/NFS. So I have been thinking of trying out Hyper-V or KVM.

 

Pros: Networking is a breeze to set up and understand. You can look at anybody's networking within ESXi and get a pretty good understanding very quickly.

Cons: Network-attached storage underperforms; it could be me, but I have found others with the same complaints and better hardware.

 

Hypervisor hardware: Dell C1100 - 2x Xeon 5520s, 32GB of RAM... IBM x3650v2 - 2x 5520, 32GB of RAM.


I used to use VMware (vSphere) but have since moved my datacenter equipment over to XenServer.

 

pat-e, the GUI now is vastly improved and MUCH easier to use than VMware's. I got sick of paying through the ass for VMware products just to get slightly-above-basic abilities. With XenServer I've been able to do the same thing yet cut my costs heavily.


15 hours ago, Mikensan said:

I haven't ventured out to other hypervisors, mainly working with ESXi. I really like the networking interface in the desktop client. If they would make the web client a little more responsive and more similar to the desktop client I would be content - but since about 5.5 they've been pushing the web client pretty hard.

 

My only other complaint with ESXi right now is network storage. I have tried NFS and iSCSI and I'm not seeing the performance I'd like. I don't think CIFS should be double or triple the speed of iSCSI/NFS. So I have been thinking of trying out Hyper-V or KVM.

 

Pros: Networking is a breeze to set up and understand. You can look at anybody's networking within ESXi and get a pretty good understanding very quickly.

Cons: Network-attached storage underperforms; it could be me, but I have found others with the same complaints and better hardware.

 

Hypervisor hardware: Dell C1100 - 2x Xeon 5520s, 32GB of RAM... IBM x3650v2 - 2x 5520, 32GB of RAM.

We run a thousand servers off NetApp NFS datastores perfectly fine. There are client tools you need to install on the ESXi hosts to add hardware acceleration support, so that may be something you need to look into for whatever storage you are trying to use.

 

NFS has some very nice advantages over all other storage options: the datastores auto-grow if you expand the underlying storage, any dedup results in actual visible extra storage, and, most important for backups, VM snapshots do not affect the datastore as a whole at all.

 

I also run iSCSI datastores on my ESXi servers at home with no performance problems.


11 hours ago, Kazar said:

I used to use VMware (vSphere) but have since moved my datacenter equipment over to XenServer.

 

pat-e, the GUI now is vastly improved and MUCH easier to use than VMware's. I got sick of paying through the ass for VMware products just to get slightly-above-basic abilities. With XenServer I've been able to do the same thing yet cut my costs heavily.

I've seen the 2014 GUI from XenServer, and changing the storage layout later was a complete pain in the a**. But if that has changed, I'll give it a try.

 

My problem was: I had worked so much with VMware that I knew you can easily set up and change a datastore (or even expand a datastore by adding another disk to it). I was not able to do that in XenServer (and googling at the time, I only found some shell commands I would have to enter)...

 

Sorry, I'm a Windows guy and have only basic knowledge of Linux (still learning that stuff). And changing the storage layout in XenServer was (or still is, I don't know) not working for me, even with the (at that time new) GUI that came with XenServer.

 

If XenServer is better now, I will give it a try... but I would only use it for private use. For business use, I'll still stick with VMware.

 

PS: I really like seeing this nice (and high-quality) conversation about different hypervisors.

 

And again: YOU can choose whatever you want; always make your own decisions about what you want to use (but let others help you with making them).


I don't do a lot of virtualization any longer, because I'm running FreeBSD and jails now.  But when I was heavily invested in a hypervisor: KVM.  All the way.  I chucked ESXi right into /dev/null (where it belongs) a while ago because I got sick of:

  • The absolutely atrocious UI written in... God help us all: .Net
  • The ridiculous hardware requirements

ESXi, for support reasons, made it harder and harder to install it on commodity hardware.  Don't have the right NICs?  It's not going to install.  It doesn't recognize a piece of hardware in your rig?  Nope: won't install.

 

I can easily edit KVM's XML files by hand in literally no time at all, and have a new VM up and running in seconds.  GUIs are for... well... I'll be nice and just say they're not for me.
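For anyone curious what that workflow looks like in practice, here is a rough sketch using the libvirt Python bindings rather than raw virsh. The VM name, memory size and disk path below are made up, and it assumes libvirt-python is installed and a qcow2 image already exists at that path:

import libvirt

# Hypothetical minimal domain definition; adjust name, memory, and disk path to taste.
DOMAIN_XML = """
<domain type='kvm'>
  <name>demo-vm</name>
  <memory unit='MiB'>2048</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/demo-vm.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")    # connect to the local KVM/QEMU hypervisor
try:
    dom = conn.defineXML(DOMAIN_XML)     # register a persistent domain from the XML
    dom.create()                         # boot it
    print(dom.name(), "running:", bool(dom.isActive()))
finally:
    conn.close()

The same XML can just as easily be fed to virsh define / virsh start; the point is that the whole definition fits in a couple of dozen readable lines.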

Editing Rig: Mac Pro 7,1

System Specs: 3.2GHz 16-core Xeon | 96GB ECC DDR4 | AMD Radeon Pro W6800X Duo | Lots of SSD and NVMe storage |

Audio: Universal Audio Apollo Thunderbolt-3 Interface |

Displays: 3 x LG 32UL950-W displays |

 

Gaming Rig: PC

System Specs:  Asus ROG Crosshair X670E Extreme | AMD 7800X3D | 64GB G.Skill Trident Z5 NEO 6000MHz RAM | NVidia 4090 FE card (OC'd) | Corsair AX1500i power supply | CaseLabs Magnum THW10 case (RIP CaseLabs ) |

Audio:  Sound Blaster AE-9 card | Mackie DL32R Mixer | Sennheiser HDV820 amp | Sennheiser HD820 phones | Rode Broadcaster mic |

Display: Asus PG32UQX 4K/144Hz display | BenQ EW3280U display

Cooling:  2 x EK 140 Revo D5 Pump/Res | EK Quantum Magnitude CPU block | EK 4090FE waterblock | AlphaCool 480mm x 60mm rad | AlphaCool 560mm x 60mm rad | 13 x Noctua 120mm fans | 8 x Noctua 140mm fans | 2 x Aquaero 6XT fan controllers |


8 hours ago, leadeater said:

We run a thousand servers off NetApp NFS datastores perfectly fine. There are client tools you need to install on the ESXi hosts to add hardware acceleration support, so that may be something you need to look into for whatever storage you are trying to use.

 

NFS has some very nice advantages over all other storage options: the datastores auto-grow if you expand the underlying storage, any dedup results in actual visible extra storage, and, most important for backups, VM snapshots do not affect the datastore as a whole at all.

 

I also run iSCSI datastores on my ESXi servers at home with no performance problems.

When I set up an NFS-based datastore I got fantastic writes but pretty bad reads, dipping down to 10-20 Mbps. Writes were maxing out the gigabit connection, but that is probably because of my SLOG. Internally on the NAS I see 400-500 MB/s, and with iperf/CIFS I can saturate my gigabit network. It's only an issue when pairing ESXi with NFS or iSCSI. At work we use direct-attached storage over SFF, so I don't have anything to personally compare settings with; we only run 20 servers. What's strange is that I can mount the same iSCSI target in Windows 8, format it with NTFS, and saturate my gigabit network. It's just something within ESXi I'm doing wrong, or maybe the hardware.

 

A lot of people successfully use NFS/iSCSI with ESXi, but it hasn't been exactly turn-key for me.
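Side note in case it helps anyone chasing the same thing: before blaming ESXi itself, a crude sequential read/write check against a file on the datastore in question can at least separate the storage path from the guest workload. This is only a sketch - the mount path and sizes are made up, it covers a single access pattern, and the read pass will come from cache unless the file is larger than RAM:

import os
import time

TEST_FILE = "/mnt/nfs-datastore/throughput.tmp"   # hypothetical mount point
BLOCK = 1024 * 1024                                # 1 MiB per write/read
COUNT = 1024                                       # ~1 GiB total

def write_test():
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(TEST_FILE, "wb") as f:
        for _ in range(COUNT):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())                       # force the data out to storage
    return BLOCK * COUNT / (time.time() - start) / 1e6

def read_test():
    total = 0
    start = time.time()
    with open(TEST_FILE, "rb") as f:
        chunk = f.read(BLOCK)
        while chunk:
            total += len(chunk)
            chunk = f.read(BLOCK)
    return total / (time.time() - start) / 1e6

print("write: %.0f MB/s" % write_test())
print("read:  %.0f MB/s" % read_test())
os.remove(TEST_FILE)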


Hyper-V, because it is more economical. Running so many VMs on a datacenter cluster has made it very cost-effective for me.


17 hours ago, jasonvp said:

I don't do a lot of virtualization any longer, because I'm running FreeBSD and jails now.  But when I was heavily invested in a hypervisor: KVM.  All the way.  I chucked ESXi right into /dev/null (where it belongs) a while ago because I got sick of:

  • The absolutely atrocious UI written in... God help us all: .Net
  • The ridiculous hardware requirements

ESXi, for support reasons, made it harder and harder to install it on commodity hardware.  Don't have the right NICs?  It's not going to install.  It doesn't recognize a piece of hardware in your rig?  Nope: won't install.

 

I can easily edit KVM's XML files by hand in literally no time at all, and have a new VM up and running in seconds.  GUIs are for... well... I'll be nice and just say they're not for me.

You shouldn't really have to edit XML files to make things install; I agree KVM is a great product, though. VMware products aren't always the best fit for everyone, but VMware is still the market leader when it comes to large corporate usage. Installing ESXi on commodity hardware was never supposed to be done anyway and is not their target market; they strictly support server chipsets, server NICs and other specific server-oriented parts, and this will never change.

 

KVM is the core behind some really good alternative products on the market; the Nutanix hypervisor is based on KVM. We have Nutanix clusters at work, but we run ESXi on them to more easily integrate with our existing VM management.

 

As for GUIs, if you have a large team of people with different skill sets who interface with storage, network, backup and other products, this is by far and away the best and easiest way to manage things. The command line is there for specific diagnostics and troubleshooting; only the most die-hard people actually want to use it for day-to-day tasks. Also, I happen to like the GUI layout and the way it works; the new web interface is still annoying but getting better.


5 hours ago, leadeater said:

You shouldn't really have to edit XML files to make things install

Why not?  While I'll agree that XML isn't the most human-friendly format to edit (quite frankly: it sucks), it beats pointing, clicking, and droooooling your way through a VM configuration.  Why?  Because it teaches you way more.  Vastly more than a GUI will.

Quote

 

VMware products aren't always the best fit for everyone, but VMware is still the market leader when it comes to large corporate usage.

 

All fine and dandy.  They preyed on the cluelessness of "corporate" IT and engineering people for years.  People who are doing real production stuff at great scale (think: GOOG, Amazon, any other cloud provider, etc) are not using ESXi.  They're not because of A) the ridiculous cost for it, B) the ridiculous cost of hardware for it, and C) well, quite frankly: it sucks at scale.

 

"Corporate scale" =/= scale.

5 hours ago, leadeater said:

Installing ESXi on commodity hardware was never supposed to be done anyway and is not their target market; they strictly support server chipsets, server NICs and other specific server-oriented parts, and this will never change.

And this is what will bring the end of ESXi and VMware in the mid to long run.  For the time being, enterprise customers will continue to lean towards them.  But budgets are a fickle thing, and they generally shrink rather than grow, even for very successful companies.  When that happens, VMware will find themselves on the losing end of that proposition.  Count on it.

Quote

KVM is the core behind some really good alternative products on the market; the Nutanix hypervisor is based on KVM. We have Nutanix clusters at work, but we run ESXi on them to more easily integrate with our existing VM management.

Libvirt-based hypervisors such as KVM, as well as others like Xen, are what make up the vast majority of cloud infrastructures today.  They're not "alternative", to use your word.  They're very much at the forefront.  Ignore that at your (employment) peril.

Quote

The command line is there for specific diagnostics and troubleshooting; only the most die-hard people actually want to use it for day-to-day tasks.

It's responses like this that make me weep for the future of the computing industry.

 

Editing Rig: Mac Pro 7,1

System Specs: 3.2GHz 16-core Xeon | 96GB ECC DDR4 | AMD Radeon Pro W6800X Duo | Lots of SSD and NVMe storage |

Audio: Universal Audio Apollo Thunderbolt-3 Interface |

Displays: 3 x LG 32UL950-W displays |

 

Gaming Rig: PC

System Specs:  Asus ROG Crosshair X670E Extreme | AMD 7800X3D | 64GB G.Skill Trident Z5 NEO 6000MHz RAM | NVidia 4090 FE card (OC'd) | Corsair AX1500i power supply | CaseLabs Magnum THW10 case (RIP CaseLabs ) |

Audio:  Sound Blaster AE-9 card | Mackie DL32R Mixer | Sennheiser HDV820 amp | Sennheiser HD820 phones | Rode Broadcaster mic |

Display: Asus PG32UQX 4K/144Hz display | BenQ EW3280U display

Cooling:  2 x EK 140 Revo D5 Pump/Res | EK Quantum Magnitude CPU block | EK 4090FE waterblock | AlphaCool 480mm x 60mm rad | AlphaCool 560mm x 60mm rad | 13 x Noctua 120mm fans | 8 x Noctua 140mm fans | 2 x Aquaero 6XT fan controllers |


+1 for ESX, pretty much the same reasons as above.  For me it's free and easy to put on a home server.  Really useful for development sandboxes and whatnot.


ESXi, because I had experience with it from when I worked at Dell. If you run the paid version, the features are incredibly flexible. A cluster of three ESXi nodes that have VSAN enabled can have an entire host go offline, and you lose none of your VMs or storage. You can enable flash caching at the hypervisor level to give every VM a performance boost.

 

If I didn't have knowledge about how to use ESXi, I would probably go for a Linux-based approach, since the VMs I run are Linux. The free version of ESXi misses out on a lot of the good stuff. On the other hand, ESXi is great because it supports many different networking and storage setups out of the box.

I do not feel obliged to believe that the same God who has endowed us with sense, reason and intellect has intended us to forgo their use, and by some other means to give us knowledge which we can attain by them. - Galileo Galilei
Build Logs: Tophat (in progress), DNAF | Useful Links: How To: Choosing Your Storage Devices and Configuration, Case Study: RAID Tolerance to Failure, Reducing Single Points of Failure in Redundant Storage , Why Choose an SSD?, ZFS From A to Z (Eric1024), Advanced RAID: Survival Rates, Flashing LSI RAID Cards (alpenwasser), SAN and Storage Networking

Link to comment
Share on other sites

Link to post
Share on other sites
