
2 Gamers, 1 CPU - Virtualized Gaming Build Log

I can't say for certain with KVM, as I've never used it, but XenServer / ESXi use a 'fake' (emulated) video driver.  In the case of ESXi (since that's what I run), you can use the vSphere client to access the VM console (which is what you use to create the VM / install the guest OS anyway), and then set the guest OS up for remote access (RDP, Chrome Remote Desktop, TeamViewer, Splashtop, VNC, etc.).  No monitor / graphics card required.

In fact, this is exactly how my WHS2011 / Plex Media Server / Sonarr / sabnzbd VM works.

 

:)

 

With KVM, a key differentiator is the ability to disable the emulated graphics device entirely when you are passing through a physical graphics device.  In that case, when you install the OS on your VM, you actually do so on the monitor connected to the GPU you've assigned.  That said, you could opt NOT to pass through physical graphics and just use the emulated graphics, in which case you can connect to the VM through our webGui (noVNC) or with any VNC client software on another device.
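
For anyone hand-rolling this on plain KVM/libvirt (unRAID generates equivalent configuration through its webGui), the two modes look roughly like the sketch below. This is a minimal, hypothetical fragment of a libvirt domain XML, not anything from the video; the PCI address is a placeholder you'd replace with your own card's address from lspci.

```xml
<!-- Option 1: pass through a physical GPU and disable emulated video.
     The 02:00.0 address is a placeholder; use your card's address
     as reported by lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
<video>
  <model type='none'/>
</video>

<!-- Option 2: no passthrough; keep emulated video and expose it over VNC
     (this is the sort of display a noVNC webGui or VNC client attaches to). -->
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
<video>
  <model type='qxl'/>
</video>
```

With option 1, the guest's display only exists on the physically connected monitor; with option 2, any VNC client can attach over the network.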


I am really looking forward to trying this out but was curious if this could be turned off...

So could I use this dual virtualization when I have friends come over to share my PC, and then turn it off and gain all my performance back for when I'm alone?


I am really looking forward to trying this out but was curious if this could be turned off...

So could I use this dual virtualization when I have friends come over to share my PC, and then turn it off and gain all my performance back for when I'm alone?

 

You can shut down each VM individually. You can also reallocate all the CPU cores to a specific VM really easily (you just need to stop / restart the VM).
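
Under the hood, that reallocation is just an edit to the VM's definition. As a sketch (assuming a libvirt-style setup like the one unRAID drives; the core numbers here are purely illustrative), the vCPU count and pinning live in the domain XML, so "reallocate and restart" amounts to changing a few lines:

```xml
<!-- Give this VM 4 vCPUs, pinned to host cores 4-7; numbers are
     illustrative. Edit these values, then stop and restart the VM
     for the change to take effect. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='5'/>
  <vcpupin vcpu='2' cpuset='6'/>
  <vcpupin vcpu='3' cpuset='7'/>
</cputune>
```

To give all cores back to one VM when you're alone, you'd raise its `<vcpu>` count (and adjust or drop the pins), then restart just that VM.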


Cool concept but the money and effort to do it is way too much.

Very interesting, though, good watch.


You certainly don't need the hardware Linus used in the video.  Obviously, the newer the hardware, *generally* the better performance.  That said, I'm rocking a VM with a Radeon HD 6970 hosted within an HP Z800 with two Xeon X5677's and 32GB of DDR3 under ESXi.  

The Z800 (motherboard, power supply, chassis, and one CPU fan (no CPUs)) cost me $375 on eBay.  I used 16GB of RAM from my i7-2600k, and picked up another 16GB on eBay (same model / part number as the 16GB I already had, just to make sure it all matched).  I purchased a Dell R710 at a local auction for $35.  Why?  It had the two X5677's in it, and they're compatible with the Z800.  I started with some HDDs I had lying around for the datastore, and later upgraded to three 240GB SSDs (one of which came from the i7-2600k, two of which I purchased on Amazon when they were on sale at around $90 apiece).  I also use a special 50' HDMI cable and a USB -> Cat5 extender so I can have the Z800 housed in a closet in a bedroom, but keep the monitor / keyboard / mouse in my living room.  If you don't mind the noise of whatever you put together, this is most certainly *not* a requirement.  :)

 

For the two games I play (Star Trek Online, and very occasionally, Champions Online), I get 40+ FPS at 1920x1200, which is roughly what I got on the i7-2600k.  I suspect those numbers would improve with a newer / more powerful graphics card (which I can easily support, since the Z800 comes with an 1100w power supply).

So, the cash I have involved in the essential parts of this:

 

$375 - Z800

$82 - extra 16GB RAM, to match the 16GB I started with

$35 - Dell R710 (so I could harvest its Xeon X5677's)

$180 - two 240GB SSD's (which were added to the 240GB SSD I already had)

$48 - second CPU heatsink (the Z800 only came with one installed)

---

$715

 

I got my monitor for free, though I did have to spend a bit to buy a new T-Con board for it (before I replaced the T-Con board, the monitor would turn on, and either show a very garbled picture, or just a solid white image).  As I'm not at home, I couldn't tell you the model, but I can tell you it's a Dell 24" LCD.  The 6 1TB drives I'm using for my storage pool I carried over from my previous "NAS" (which was a WHS2011 system with StableBit's DrivePool, which I've since P2V'd, and it now lives on the Z800).  I'm also using the keyboard and mouse I had from my i7-2600k.

 

With a setup like this, you're not going to get ~80+ FPS in the current / newest games, but you're also not going to spend $5k on the system, either, while still being able to play said games with decent performance (for a system from 2009).  You can easily start with a couple of video cards in the $200 range and upgrade a piece at a time (which is how I built up this Z800, actually -- I didn't buy all of the stuff I have in it up front).

 

The nice thing about starting off the way I did is that you can migrate your VMs to newer / faster hardware as time and funding become available.  Starting off with this type of hardware also lets you get some experience with this type of setup, so you'll be better prepared for when you *can* spend over $5k on a rig...  :P

As to the effort -- well, it was a practical matter for me, in addition to being an interesting topic for my mind to tinker with.  I had 4 physical PC's running in the house, each providing different services.  Even though I live alone, it was tiring / annoying having to take care of all those systems.  Condensing them down into a single system 1) gave me something to keep my mind busy and 2) actually lowered my electricity usage (though I'm on 'budget billing' or 'level billing', so I won't notice the difference until somewhere around March of next year).  Also, I find working with this stuff fun, so there was that to look forward to.  :)

 

Cool concept but the money and effort to do it is way too much.

Very interesting, though, good watch.


Alright, I just need to know one thing: can I set up a 7970 and a GTX 780, one for each desktop, and would a 4790k be good enough for this? If so, I'm stoked to set this up.

The 4790k is working fine for me; I have two VMs running, with a GTX 980 Ti and a GTX 660.


The third card is needed (it could be an integrated GPU, though).  The card Linus puts in the top slot (hence why he says how important it is to use that location) is the one the physical computer uses to boot.  If you remove it, it will use one of the nVidia cards, and once that happens, they will be unavailable for the virtual machines to use.

 

I am just wondering if the third GPU is really needed?


Does this mean the 1 discrete GPU can be shared by 2 VMs? For gaming too?
Or does this mean the 1 onboard video will be used for the 1 VM (with awful performance)?

I have 1 980, playing 2 people even at ~50 fps would be awesome.

 

No, the whole device* is passed through, no matter how many HDMI, DVI, etc. ports a card has. You can only pass a single discrete GPU through to one guest VM at a time.

 

*A physical card may, however, contain more than one device; in the video (8:40) you can see that two of the cards have audio devices too:

card one (the host unRAID OS card, no audio device):
01:00.0 VGA compatible controller: NVIDIA Corporation G96 [GeForce 9500 GS] (rev a1)
card two:
02:00.0 VGA compatible controller: NVIDIA Corporation GM200 [GeForce GTX TITAN X] (rev a1)
02:00.1 Audio device: NVIDIA Corporation Device 0fb0 (rev a1)
card three:
04:00.0 VGA compatible controller: NVIDIA Corporation Device 17c8 (rev a1)
04:00.1 Audio device: NVIDIA Corporation Device 0fb0 (rev a1)
 
The xx:xx.0 video and xx:xx.1 audio devices are passed through separately (at 13:00 in the video).
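
A quick way to pair those functions up from a shell: the sketch below runs the matching logic over a copy of the lspci lines quoted above (embedded as sample text so it works anywhere); on a real box you'd feed it live `lspci` output instead.

```shell
# Sample lspci output from the video, embedded so the sketch is
# self-contained; on a real system, replace this with: lspci
lspci_sample='01:00.0 VGA compatible controller: NVIDIA Corporation G96 [GeForce 9500 GS] (rev a1)
02:00.0 VGA compatible controller: NVIDIA Corporation GM200 [GeForce GTX TITAN X] (rev a1)
02:00.1 Audio device: NVIDIA Corporation Device 0fb0 (rev a1)
04:00.0 VGA compatible controller: NVIDIA Corporation Device 17c8 (rev a1)
04:00.1 Audio device: NVIDIA Corporation Device 0fb0 (rev a1)'

# For each VGA function (xx:yy.0), report the companion .1 audio function
# on the same card -- both must be assigned to the same VM.
pairs=$(printf '%s\n' "$lspci_sample" | awk '
  /VGA compatible controller/ { split($1, a, "."); vga[a[1]] = $1 }
  /Audio device/              { split($1, a, "."); aud[a[1]] = $1 }
  END { for (c in vga) {
          line = "GPU " vga[c]
          if (c in aud) line = line " + audio " aud[c]
          print line } }' | sort)
printf '%s\n' "$pairs"
```

The output lists each GPU with its companion audio function, e.g. `GPU 02:00.0 + audio 02:00.1`, which tells you exactly which pairs of devices to hand to each guest.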

I don't know about XenServer, but in ESXi, this is not the case.  Whatever card the system uses by default for the boot process *can* be assigned (via PCI passthrough) to a VM.  The only configuration one needs to do at the local console in ESXi is to set the networking.  Once that's done, you shouldn't need to do it again, which normally isn't a problem. :)

 

 

The third card is needed (it could be an integrated GPU, though).  The card Linus puts in the top slot (hence why he says how important it is to use that location) is the one the physical computer uses to boot.  If you remove it, it will use one of the nVidia cards, and once that happens, they will be unavailable for the virtual machines to use.

 



I don't know about XenServer, but in ESXi, this is not the case.  Whatever card the system uses by default for the boot process *can* be assigned (via PCI passthrough) to a VM.  The only configuration one needs to do at the local console in ESXi is to set the networking.  Once that's done, you shouldn't need to do it again, which normally isn't a problem. :)

 

GPUs are a special breed.  General PCI devices work the same way in ESXi as they do with KVM/unRAID, but GPUs are unique because of VGA.  Some GPUs don't like to be initialized by one OS/driver and then reinitialized by another.  This isn't universally true of all GPUs, either: some will work when assigned to a VM even after they've been initialized by the host.  It comes down to the combination of GPU and motherboard, and no, there is no easy way to predict it just yet.  That's why we recommend planning around using the primary GPU slot for unRAID and the other PCIe slots for your guests.
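
On a stock KVM box (unRAID exposes this through its own configuration), the usual way to keep the host from initializing the guest GPUs at all is to bind them to the vfio-pci stub driver at boot. A sketch, assuming the "Device 17c8" card and its audio function from the lspci listing earlier in the thread; your vendor:device IDs will differ, so check with `lspci -nn`:

```shell
# /etc/modprobe.d/vfio.conf -- have the vfio-pci stub driver claim
# these vendor:device IDs before the NVIDIA driver can grab them.
# 10de:17c8 / 10de:0fb0 match the "Device 17c8" card from the listing
# above; find your own IDs with: lspci -nn
options vfio-pci ids=10de:17c8,10de:0fb0

# Depending on the distro, vfio-pci may also need to be loaded from the
# initramfs, and the IOMMU enabled on the kernel command line, e.g.:
#   intel_iommu=on   (Intel)  or  amd_iommu=on  (AMD)
```

The host then never touches those GPUs, which sidesteps the reinitialization problem described above for cards that are sensitive to it.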

 

You don't feel this with ESXi because your VMs always have emulated graphics provided by ESXi in addition to the GPU-based graphics.  That's a different situation from one where the GPU itself needs to act as primary graphics.


Love the new workshop set.

 


 


This is a bit of a long shot, but are there any experiences with audio latency using ASIO4All drivers (or ALSA if you're into Linux) + dedicated soundcards (USB or otherwise)? This configuration would be extremely useful for live audio production jams.

Another question, which is less of a long shot... any experiences with network latency between VMs?


So I have a question, it may sound silly but I am noobish. 

 

I have two identical Gigabyte R9 390s. Can I use one in each VM?

 

Also, another question, could I use a GT610 as the primary GPU just to boot into (or whatever the correct terminology is)?


So I have a question, it may sound silly but I am noobish. 

 

I have two identical Gigabyte R9 390s. Can I use one in each VM?

 

Also, another question, could I use a GT610 as the primary GPU just to boot into (or whatever the correct terminology is)?

1) Most likely

2) Yes.


This is a bit of a long shot, but are there any experiences with audio latency using ASIO4All drivers (or ALSA if you're into Linux) + dedicated soundcards (USB or otherwise)? This configuration would be extremely useful for live audio production jams.

Another question, which is less of a long shot... any experiences with network latency between VMs?

 

So I know what you're talking about only because of the Windows 8/10 audio issue with microphones (where if you try to output your mic back through your headset, there's a huge delay).  I did some research on this at one point and everything pointed to ASIO4All, but no matter what I did, I couldn't solve this issue on Windows 8.1 or 10.  This was even with a physical Windows install (no VM).  That said, I'm not an audio expert and really didn't know what I was doing.  As such, if you could give me a test to perform (step by step instructions on what to do), I would be more than willing to do that and make sure this works properly for your needs.


You'll need at least some type of graphics to power the OS itself (could be onboard integrated graphics or any cheap old GPU you have lying around).

Assuming no onboard graphics, is it required to have the O/S GPU in the first PCIe slot as mentioned in the video? Or is there a way to configure the software to use a specific GPU? Thanks.


It's not the software -- it's whichever video card your BIOS/EFI picks for POST.  You may need to move your cards around a little to find out which slot your mainboard / BIOS/EFI picks first.  ;)

 

 

Assuming no onboard graphics, is it required to have the O/S GPU in the first PCIe slot as mentioned in the video? Or is there a way to configure the software to use a specific GPU? Thanks.


you should use your 12 core Xeon CPU lol

For gaming, games generally prefer fewer, faster cores over a large number of slower cores.



Assuming no onboard graphics, is it required to have the O/S GPU in the first PCIe slot as mentioned in the video? Or is there a way to configure the software to use a specific GPU? Thanks.

The motherboard doesn't have a GPU "on board"; it depends on the CPU installed in that mobo. The motherboard only supports (or doesn't support) an "on board" GPU, but doesn't have a GPU chipset of its own. Check your CPU's specs to see if it has one; for example, the i7 4790k has an integrated GPU.

But to answer your question: yes, you still need a dedicated GPU for boot-up. If you have GPUs dedicated to both virtual systems connected to PCIe, the BIOS will automatically disable the integrated GPU. And usually the BIOS posts on the first GPU connected to the PCIe slot closest to the CPU; if I'm not mistaken, it's called PCIe #0.


Assuming no onboard graphics, is it required to have the O/S GPU in the first PCIe slot as mentioned in the video? Or is there a way to configure the software to use a specific GPU? Thanks.

 

That's motherboard BIOS specific.  We've found that good motherboards provide a control for this in the BIOS so the user can choose between integrated graphics and discrete (PCIe) graphics (and really good ones let you pick which specific GPU if multiple are present).  Bad motherboards don't provide a control option and just force onboard graphics unless a GPU is detected in a PCIe slot, in which case they auto-toggle to that.

 

 

The motherboard doesn't have a GPU "on board"; it depends on the CPU installed in that mobo. The motherboard only supports (or doesn't support) an "on board" GPU, but doesn't have a GPU chipset of its own. Check your CPU's specs to see if it has one; for example, the i7 4790k has an integrated GPU.

 

Actually, some motherboards do have "on board" GPUs.  Specifically, AMD motherboards don't use a "shared GPU/CPU" concept, but rather tend to have a Radeon GPU soldered onto the motherboard itself.  Some SuperMicro motherboards, like the one we use in our server product (out of stock right now, unfortunately), use an ASPEED on-board graphics adapter, which isn't part of the CPU and doesn't use the CPU for graphics.  But with Intel, you're right: it's really a feature of the CPU, though the motherboard still provides the output port (VGA, DVI, HDMI, DisplayPort, etc.).


For gaming, games generally prefer fewer, faster cores over a large number of slower cores.

 

QFT!

 

I did a benchmark analysis with 3DMark on my primary workstation: I took a bare-metal install of Windows 10, benchmarked it, then compared that to 3 different VM configurations of Windows 10 with GPU pass-through, assigning a different vCPU count for each test (all 8, then 6, then just 4).  The performance differences between the 8-core, 6-core, and 4-core tests were pretty minimal, showing that after 4 cores, the law of diminishing returns definitely kicks in.  I even did a side-by-side video comparison of each benchmark:

 

 

You'll notice very little difference between each, but here were the actual scores:

 

Here's a link to the full blog post on it.


Could one use virtualization to have an editing rig and gaming rig in one tower instead of two gaming rigs?


[...]the Windows 8/10 audio issue with microphones (where if you try to output your mic back through your headset, there's a huge delay). I did some research on this at one point and everything pointed to ASIO4All, but no matter what I did, I couldn't solve this issue on Windows 8.1 or 10. This was even with a physical Windows install (no VM). [...]

Unfortunately, I'm assuming that's because Windows ships with a really generic implementation for audio stuff, and most audio drivers don't expose buffer rate settings. This means the background settings (buffer rate and sample size) which affect the amount of delay (latency) are set so the 'common denominator' of use cases won't be negatively impacted. What's worse is that (I think) the lack of exposed settings to the user also applies at the Windows kernel level (which could potentially request different settings from the driver). For the record, I'm not really sure about 3/4 of what I just wrote, so take it with a grain of salt. :D

[...]As such, if you could give me a test to perform (step by step instructions on what to do)[...]

Thanks very much, I really appreciate it! The following steps should work...

Requirements:

1. A Windows install on the VM

2. Optionally, a native Windows install using the same hardware (this would be really nice for comparison!)

3. 1/8th inch male headphone jack to 1/8th inch male cable (or 1/4" depending on ports available on audio card)

4. Download and install ASIO4All:

http://www.asio4all.com/

5. Download the Latency Test Utility:

http://www.centrance.com/downloads/ltu/

Latency Test Process:

1. Identify the audio cards used by OS, and locate the associated line-in (or microphone) and audio out ports. If none are available, the test cannot be completed.

2. Plug one jack of the cable into a microphone port, and the terminating jack into an audio out port. For accurate results, use ports driven by the same device driver.

3. Launch the Test Utility.

4. Choose ASIO4ALL as the ASIO driver in the dropdown list. This should open the ASIO4All Config Panel. If not, the Config Panel can be launched by clicking the icon in the system icons portion of the taskbar (by the clock). The icon should be either a blue square, green play, or yellow pause icon.

5. In the ASIO4All config panel window, click the wrench button (advanced options) in the bottom right-hand corner.

6. Click the 'power' buttons on any unused audio devices.

7. Expand the audio device you hooked the cables up to, and only 'power on' the input/outputs used by the cables.

8. In the Test Utility window, set the buffer size to 64 samples, and sample rate to 44100 Hz and click 'Measure!'. Record the ms result.

9. Optionally, repeat step 8 with buffer sizes of 128 and 256 samples.

10. Optionally, repeat in different environments (native vs. VM).

No worries if you don't get a chance to, but it would be nice to know! Thanks again.


Could one use virtualization to have an editing rig and gaming rig in one tower instead of two gaming rigs?

 

That's quite a good idea!

