
General Virtualization Discussion Thread

Welcome to what is intended to be a discussion thread for anything and everything virtualization.

 

Regardless of your choice of hypervisor:

VMware ESXi, Proxmox, Hyper-V, VMware Workstation, QEMU/KVM w/ virt-manager, QEMU/KVM w/ Cockpit, VMware vSphere, UnRAID, Oracle VM VirtualBox, Docker, Citrix, or any others that I missed. :P

 

Regardless of your choice of hardware:

Intel Core i3 | i5 | i7 | i9 | Xeon E3 | E5 | E7 | Bronze | Silver | Gold | Platinum

AMD Opteron | EPYC | Ryzen 3/5/7/9 | Ryzen Threadripper

 

Regardless of the application:

Education, Web hosting, File Sharing, Pen-testing, Gaming, Game hosting, VPN services, Home-entertainment, etc.

 

Feel welcome to ask for help or discuss current projects. :D

 

I have a couple of tutorials up for anyone who wants to try something new or who has never played with virtualization before.

I'll be here to answer questions or share my experiences.


Nice! ESXi user here, certainly always nice to know there is a good thread out there if I ever need help with things!



2 minutes ago, LIGISTX said:

Nice! ESXi user here, certainly always nice to know there is a good thread out there if I ever need help with things!

That's the plan if we can get enough like-minded people to join the thread.

 

Like @leadeater maybe? My memory is failing me but did you say you manage some VM servers where you work? Anything you'd like to chat about?


21 minutes ago, Windows7ge said:

Like @leadeater maybe? My memory is failing me but did you say you manage some VM servers where you work? Anything you'd like to chat about?

Yeah, something like 100 ESXi hosts with all the extra stuff like multi-site vCenter, SRM, etc. I look out for posts as it is; not many actually ask about VM hosting here, though.


3 minutes ago, leadeater said:

Yeah, something like 100 ESXi hosts with all the extra stuff like multi-site vCenter, SRM, etc. I look out for posts as it is; not many actually ask about VM hosting here, though.

Nice. I've been meaning to at least give ESXi a go but QEMU/KVM has had me covered for pretty much everything.

 

I've been on the forum for 4 years now, and you about 4 and a half months longer. I may not be, nor have I ever been, a mod, but even as a regular member for all this time I can say your statement is incredibly accurate: this forum is not ideal for a thread like this, but I'm giving it a go anyway. There is a small community of established virtualization aficionados here; it's just a matter of giving them a place to congregate. We do get new people asking for help with virtualization, but it's maybe once a month. There was one today, if that means anything.

 

I know I'd be better off over on the Level1Techs forum, but I started here and I like it here, so I'd rather not hop forums if I can get something going.


Joining to learn! I'm building a VM host project right now-- EVGA SR-2, 4x GPU, idea being to pass through the GPUs for 4x gaming VMs as a LAN-in-a-Box for friends. Currently in the building phase, been planning this for a while now.


21 minutes ago, bimmerman said:

Joining to learn! I'm building a VM host project right now-- EVGA SR-2, 4x GPU, idea being to pass through the GPUs for 4x gaming VMs as a LAN-in-a-Box for friends. Currently in the building phase, been planning this for a while now.

What GPUs are you planning on using?


37 minutes ago, Windows7ge said:

What GPUs are you planning on using?

TBD at the moment. I have 2x R9 290, 1x HD 5870, and 1x 1080 Ti available... but am leaning towards buying something like four 1060s. Ideally watercooled to convert them to single-slot, in order to fit an HBA and get around the chipset bottleneck of pushing 6x SATA through the equivalent of one PCIe lane. I have 7x PCIe slots available, but 4x dual-slot cards will cover all of them.


2 hours ago, Windows7ge said:

Nice. I've been meaning to at least give ESXi a go but QEMU/KVM has had me covered for pretty much everything.

 

You can set up a vCenter lab on KVM, btw. That's how I have mine running.

My UnRAID box has 2x ESXi guests and an OpenFiler server with vdisks mounted to create iSCSI LUNs for the VMware datastores. I then have a vCenter running on the cluster. Keep in mind that to set up an entire vCenter, the VCSA/PSC installer requires 10GB of memory allocated, so you need 12GB assigned to the ESXi host during install. It also requires ~250GB of space, but you can thin provision it (mine's using about 17GB).
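One gotcha worth mentioning if anyone wants to replicate this: for the nested ESXi guests to run their own VMs, the KVM host needs nested virtualization switched on and the guests need a host-passthrough CPU model. A rough sketch for an Intel host (the kvm_amd module has the same parameter on AMD):

# Check whether nested virt is already on (Y or 1 means enabled)
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module with no VMs running
echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
sudo modprobe -r kvm_intel
sudo modprobe kvm_intel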

 




3 hours ago, Windows7ge said:

I may not be, nor have I ever been, a mod, but even as a regular member for all this time I can say your statement is incredibly accurate: this forum is not ideal for a thread like this, but I'm giving it a go anyway.

Well, what I meant was those asking beyond just general curiosity. Virtualization questions tend to stop at "I need 2 VMs", which is great, but at that point almost anything will do, so the scope of advice and questions is much smaller. The point really was, I'd like more of the bigger questions/posts.


6 hours ago, leadeater said:

Well, what I meant was those asking beyond just general curiosity. Virtualization questions tend to stop at "I need 2 VMs", which is great, but at that point almost anything will do, so the scope of advice and questions is much smaller. The point really was, I'd like more of the bigger questions/posts.

Ah, I misinterpreted your statement. If you're looking for the data-center-sized projects then yeah, this isn't the best place. I can't recall the last time anyone had a question about a big VM cluster. In general, the Level1Techs forum has a bigger server community; you might find more people with bigger projects over there.

 

Just don't forget that those of us over here still exist. :3


8 hours ago, bimmerman said:

TBD at the moment. I have 2x R9 290, 1x HD 5870, and 1x 1080 Ti available... but am leaning towards buying something like four 1060s. Ideally watercooled to convert them to single-slot, in order to fit an HBA and get around the chipset bottleneck of pushing 6x SATA through the equivalent of one PCIe lane. I have 7x PCIe slots available, but 4x dual-slot cards will cover all of them.

Are you aware of NVIDIA error code 43? Do you have a plan to get around it?

 

8 hours ago, Jarsky said:

You can set up a vCenter lab on KVM, btw. That's how I have mine running.

My UnRAID box has 2x ESXi guests and an OpenFiler server with vdisks mounted to create iSCSI LUNs for the VMware datastores. I then have a vCenter running on the cluster. Keep in mind that to set up an entire vCenter, the VCSA/PSC installer requires 10GB of memory allocated, so you need 12GB assigned to the ESXi host during install. It also requires ~250GB of space, but you can thin provision it (mine's using about 17GB).

I was thinking of virtualizing it just so I can give it a go. There's a server I want to buy that I was thinking of installing ESXi on but I have nowhere to put the server.

 

Something I'm arguing with myself about: I plan on changing the OS on my primary server after I populate all of its bays with drives, and I'm having a hard time deciding whether to go with Proxmox or Ubuntu Server + QEMU/KVM + a desktop environment + virt-manager.

 

Both have pros, both have cons.

 

Also, question. Do you have any experience with LXC Containers?


14 minutes ago, Windows7ge said:

Something I'm arguing with myself about: I plan on changing the OS on my primary server after I populate all of its bays with drives, and I'm having a hard time deciding whether to go with Proxmox or Ubuntu Server + QEMU/KVM + a desktop environment + virt-manager.

I recently tried out quite a few setups, which included the above options.

Virt-manager is OK, but a lot of manual command-line work is still needed for managing VMs. Cockpit is just not ready to be standalone, and I found it horribly disjointed.

 

Honestly, my favorite configuration with the most flexibility was a Debian 10 (Buster) install with Proxmox VE and zfsutils.
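If anyone wants to go the same route, the rough shape of it (assuming a clean Debian Buster install, the Proxmox no-subscription repo, and running as root) is something like:

# Add the Proxmox VE package repository (import the Proxmox release key first)
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install.list

# Install Proxmox VE plus the ZFS userland tools
apt update
apt install proxmox-ve zfsutils-linux

After that you manage everything through the normal Proxmox web UI on port 8006.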

 

Quote

Also, question. Do you have any experience with LXC Containers?

Not really, just haven't had a need to use LXC containers. I'm a big fan of the flexibility and ease of Docker. 



12 minutes ago, Jarsky said:

Not really, just haven't had a need to use LXC containers. I'm a big fan of the flexibility and ease of Docker.

I, on the other hand, use LXC-containers all the time via LXD and actually find Docker horribly limited and clunky in comparison. Any time I try Docker, I get frustrated almost immediately with how stupid the whole mess is.



12 minutes ago, Jarsky said:

I recently tried out quite a few setups, which included the above options.

Virt-manager is OK, but a lot of manual command-line work is still needed for managing VMs. Cockpit is just not ready to be standalone, and I found it horribly disjointed.

 

Honestly, my favorite configuration with the most flexibility was a Debian 10 (Buster) install with Proxmox VE and zfsutils.

+ zfsutils. Forgot that part, but that's part of why I'm a little conflicted. The server has 0.5TB of RAM, so I'd be a fool not to use it for a lot of virtualization. For me, Proxmox has the easiest interface, but as you said, with virt-manager you have virsh edit, where you can modify a VM's XML file to do a lot of performance optimization, and that's really appealing for some of the demanding projects I have in mind. The server has IPMI so I can remote into a GUI, so I'm really teetering back and forth in terms of what I want more.
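For what it's worth, some of that tuning doesn't even need a manual XML edit; virsh can apply it directly. A rough sketch (the domain name win10-gaming and the core numbers are just made-up examples):

# Pin guest vCPUs to specific host cores (vCPU 0 -> core 2, vCPU 1 -> core 3); add --config to persist
virsh vcpupin win10-gaming 0 2
virsh vcpupin win10-gaming 1 3

# Keep the QEMU emulator threads off the pinned cores
virsh emulatorpin win10-gaming 0-1

# Or open the full domain XML (<cputune>, hugepages, CPU topology, etc.)
virsh edit win10-gaming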

 

21 minutes ago, Jarsky said:

Not really, just haven't had a need to use LXC containers. I'm a big fan of the flexibility and ease of Docker. 

3 minutes ago, WereCatf said:

I, on the other hand, use LXC-containers all the time via LXD and actually find Docker horribly limited and clunky in comparison. Any time I try Docker, I get frustrated almost immediately with how stupid the whole mess is.

Ah, I'm trying to understand why something I was doing wasn't working.

 

I had a project that required Docker and I tried to install it in an LXC container. Docker didn't like it; it refused to run. Any idea why? Would it be an issue revolving around nesting VMs? I'm thinking it's something else, since a container isn't a VM.


8 minutes ago, Windows7ge said:

I had a project that required Docker and I tried to install it in an LXC container. Docker didn't like it; it refused to run. Any idea why? Would it be an issue revolving around nesting VMs? I'm thinking it's something else, since a container isn't a VM.

Containers are actually technically VMs, even if they don't pretend to be full computers. And yes, you need to set security.nesting to true in order to allow nesting inside LXC-containers -- nesting is required to run LXC-containers inside LXC-containers, or Docker, or KVM, so yes, you were actually correct.
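With LXD that's a one-liner, e.g. for a hypothetical container called docker-host:

lxc config set docker-host security.nesting true
lxc restart docker-host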



13 minutes ago, WereCatf said:

Containers are actually technically VMs, even if they don't pretend to be full computers. And yes, you need to set security.nesting to true in order to allow nesting inside LXC-containers.

If only this information had come to me sooner, I could have done this instead of making a full VM and passing through an HBA.

 

I see Proxmox does have a feature option for LXC containers labeled "Nesting"; I can assume this is it. I may test it, but I don't know how interested I am in doing my config all over again. If the VM/HBA pass-through ever screws up and kills the VM, I'll give this a shot again.
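If I do test it, it looks like the CLI equivalent of that checkbox is just a features flag on the container; e.g. for a hypothetical container ID 101 (keyctl is apparently also usually needed for Docker in an unprivileged container):

pct set 101 --features nesting=1,keyctl=1
pct stop 101 && pct start 101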


1 minute ago, Windows7ge said:

If only this information had come to me sooner, I could have done this instead of making a full VM and passing through an HBA

Oof, that's quite a bit of completely unneeded overhead just to use Docker. But oh well, if it works for you..



2 minutes ago, WereCatf said:

Oof, that's quite a bit of completely unneeded overhead just to use Docker. But oh well, if it works for you..

Yeah, it's not up to me; it's the application. I can't dedicate a box to run it natively. If a Docker container itself could run directly inside an LXC container without installing Docker, that'd be splendid, but I have no idea.

 

It doesn't matter though. This is for a storage application not a compute application. I'm limited by my Internet up/down speed so the overhead really isn't hurting the program.


Just the topic for me. I don't need any help at the moment, though, but now I'll at least get notifications so my old brain doesn't forget.


2 hours ago, Windows7ge said:

Are you aware of NVIDIA error code 43? Do you have a plan to get around it?

I've heard about it and done very light research into solutions. From my limited understanding, you can either BIOS-mod the card, or some hosts (e.g., Unraid) have built-in solutions for getting around it.
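From the little I've read, on QEMU/KVM-based hosts the usual trick is hiding the hypervisor from the NVIDIA driver in the guest's libvirt XML; a rough sketch I've seen floating around (assuming a libvirt guest named gaming-vm):

virsh edit gaming-vm
# then inside <features> add something along the lines of:
#   <hyperv>
#     <vendor_id state='on' value='whatever'/>
#   </hyperv>
#   <kvm>
#     <hidden state='on'/>
#   </kvm>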

 

I had a brainwave last night that using a dual-GPU card (e.g., R9 295x2) could free up PCIe slots for other devices (e.g., HBA, 10GbE, USB, etc.)... starting to look into whether that's feasible or not. There are posts online about people trying with NVIDIA dual-GPU cards, but not much on the 295x2. Those are so cheap nowadays that I'm curious... once I figure out how to get a single 290 to work, I might try it with the dual.

 

I'm very much in the tinkering frame of mind on this project. Between the weird nf200 chips on the board and the rando GPUs, who knows if it'll work!


4 minutes ago, bimmerman said:

I've heard about it and done very light research into solutions. From my limited understanding, you can either BIOS-mod the card, or some hosts (e.g., Unraid) have built-in solutions for getting around it.

 

I had a brainwave last night that using a dual-GPU card (e.g., R9 295x2) could free up PCIe slots for other devices (e.g., HBA, 10GbE, USB, etc.)... starting to look into whether that's feasible or not. There are posts online about people trying with NVIDIA dual-GPU cards, but not much on the 295x2. Those are so cheap nowadays that I'm curious... once I figure out how to get a single 290 to work, I might try it with the dual.

 

I'm very much in the tinkering frame of mind on this project. Between the weird nf200 chips on the board and the rando GPUs, who knows if it'll work!

Yep. I don't know what UnRAID's workaround is but it does seem to have one if that's the route you plan on going.

 

I think this won't work. Both GPUs will likely show up as components within the same device. Even if they didn't, they would likely appear in the same IOMMU group. This means your options are to give the whole device to one VM or... actually, I think that is your only option.
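Once the card is in hand it's easy to check rather than guess; booted into any Linux with IOMMU enabled, a little loop like this dumps the groups:

#!/bin/bash
# List every IOMMU group and the PCI devices inside it
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done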

 

If you want a single GPU to service multiple VMs you'll need to look into NVIDIA vGPU or AMD SR-IOV.

 

Alternatively, if the board supports PCIe bifurcation, you might be able to use a single slot to service 2 or 4 VMs by running riser adapters to split it x8/x8 for two GPUs, or x4/x4/x4/x4 for four. I'm hearing some NVIDIA GPUs don't like being used in x4 slots, though.


7 minutes ago, Windows7ge said:

Yep. I don't know what UnRAID's workaround is but it does seem to have one if that's the route you plan on going.

 

I think this won't work. Both GPUs will likely show up as components within the same device. Even if they didn't, they would likely appear in the same IOMMU group. This means your options are to give the whole device to one VM or... actually, I think that is your only option.

 

If you want a single GPU to service multiple VMs you'll need to look into NVIDIA vGPU or AMD SR-IOV.

 

Alternatively, if the board supports PCIe bifurcation, you might be able to use a single slot to service 2 or 4 VMs by running riser adapters to split it x8/x8 for two GPUs, or x4/x4/x4/x4 for four. I'm hearing some NVIDIA GPUs don't like being used in x4 slots, though.

I'm not sure if the board supports bifurcation, I'll take a look.

 

I did see one post where someone shared the IOMMU groups of a 295x2, and it appears to have each GPU in its own group with its own audio device (link: https://forums.unraid.net/topic/57568-amd-gpu-passthrough-woes/ ). I'm still learning about what that all means, but it appears to be viable from a hardware perspective.


5 hours ago, bimmerman said:

I'm not sure if the board supports bifurcation, I'll take a look.

 

I did see one post where someone shared the IOMMU groups of a 295x2, and it appears to have each GPU in its own group with its own audio device (link: https://forums.unraid.net/topic/57568-amd-gpu-passthrough-woes/ ). I'm still learning about what that all means, but it appears to be viable from a hardware perspective.

Not only do they appear under different IOMMU groups, but also different device addresses... OK, it may very well be possible. If you do go that route, let me know how it goes.


37 minutes ago, Windows7ge said:

Not only do they appear under different IOMMU groups, but also different device addresses... OK, it may very well be possible. If you do go that route, let me know how it goes.

I found a guy on Reddit who was discussing passthrough for the R9 295x2 and said it specifically wouldn't work, as all the display outputs are associated with only one of the GPUs. Back to the drawing board...

 

or, yolo and try anyway.

