
General Virtualization Discussion Thread

50 minutes ago, bimmerman said:

I found a guy on reddit who was discussing passthrough for the R9 295x2 and said it specifically wouldn't work, as all the display outputs are associated with only one of the GPUs. Back to the drawing board...

 

or, yolo and try anyway.

If the system will still run with both GPUs assigned to different VMs, Looking Glass may let you view the output of each with near-native performance.


8 hours ago, bimmerman said:

I've heard about it and done very light research into solutions. From my limited understanding, you can either do a BIOS mod, or some hosts (e.g., Unraid) have built-in solutions for getting around it.

I've already mentioned it in other cases in this subforum, but to get around Error 43 you have to use a modded VBIOS, and then also edit the libvirt XML to enable multifunction on the device and ensure they're on the same bus & slot with just different function numbers, since the driver expects the devices to be on the same physical bus given they're a single card.

 

So it ends up looking like below (this is from my UnRAID):

 


    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0f' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <rom file='/storage/ISO/VBIOS/EVGA.GTX1070.8192.161103.rom'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0f' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
    </hostdev>

 



16 hours ago, Windows7ge said:

If the system will still run with both GPUs assigned to different VMs, Looking Glass may let you view the output of each with near-native performance.

Ooo I like this idea, though I wonder how that would work with separate input devices.

 

From reading up on Looking Glass, my understanding is that it lets you view the graphics from one VM in a window, and that you can move your input device between the windowed VM and its host VM/OS. As a hypothetical: say I'm running the 295x2 dual GPU with two monitors plugged into the card, one gaming VM per GPU, all run by the non-graphics-enabled hypervisor (Unraid/ESXi/whatever), and a keyboard/mouse passed through to each VM. Would/could it be as simple as setting the 'host' VM to be the one on the display-connected GPU, with its game output full screen on one of the two monitors, and the 'guest' VM (on the non-display-connected GPU in the dual card) full screen windowed on the other monitor via Looking Glass? Would the full-screen gaming VM have any issues with, say, mouse and keyboard getting confused between VMs?

 

15 hours ago, Jarsky said:

I've already mentioned it in other cases in this subforum, but to get around Error 43 you have to use a modded VBIOS, and then also edit the libvirt XML to enable multifunction on the device and ensure they're on the same bus & slot with just different function numbers, since the driver expects the devices to be on the same physical bus given they're a single card.

 

So it ends up looking like below (this is from my UnRAID):

 

 

Interesting, there's no way to get around the error without a vbios mod? Thanks for the code, that'll come in handy as a starting point to tailor to my setup!


9 minutes ago, bimmerman said:

Ooo I like this idea, though I wonder how that would work with separate input devices.

 

From reading up on Looking Glass, my understanding is that it lets you view the graphics from one VM in a window, and that you can move your input device between the windowed VM and its host VM/OS. As a hypothetical: say I'm running the 295x2 dual GPU with two monitors plugged into the card, one gaming VM per GPU, all run by the non-graphics-enabled hypervisor (Unraid/ESXi/whatever), and a keyboard/mouse passed through to each VM. Would/could it be as simple as setting the 'host' VM to be the one on the display-connected GPU, with its game output full screen on one of the two monitors, and the 'guest' VM (on the non-display-connected GPU in the dual card) full screen windowed on the other monitor via Looking Glass? Would the full-screen gaming VM have any issues with, say, mouse and keyboard getting confused between VMs?

I had to think about it a little bit more, but I actually have not verified that it's possible to run multiple simultaneous instances of Looking Glass. It may be possible if you create multiple differently named shared memory files and point each instance at a different one; I have never tried it. As far as using the output of one GPU and then using Looking Glass for the other, internal GPU, I really have no idea. This is beyond anything I've ever tested.

 

12 minutes ago, bimmerman said:

Interesting, there's no way to get around the error without a vbios mod? Thanks for the code, that'll come in handy as a starting point to tailor to my setup!

I'm curious as to whether this is an UnRAID-specific hack. The workarounds I've heard of that have worked for people on QEMU/KVM have not involved modding the GPU BIOS. Before you go potentially bricking your GPU, I'd consider using QEMU/KVM before UnRAID if a vBIOS mod is a requirement.
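For reference, the workaround I usually see on plain QEMU/KVM is hiding the hypervisor from the NVIDIA driver in the libvirt XML rather than touching the VBIOS. A minimal sketch of the relevant <features> block (the vendor_id value is arbitrary, anything up to 12 characters):

  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='0123456789ab'/>  <!-- any custom string up to 12 characters -->
    </hyperv>
    <kvm>
      <hidden state='on'/>  <!-- hide the KVM signature from the guest -->
    </kvm>
  </features>

Whether that covers every case I can't say, but it has worked for a lot of people without modifying the card's BIOS.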


38 minutes ago, Windows7ge said:

I had to think about it a little bit more, but I actually have not verified that it's possible to run multiple simultaneous instances of Looking Glass. It may be possible if you create multiple differently named shared memory files and point each instance at a different one; I have never tried it. As far as using the output of one GPU and then using Looking Glass for the other, internal GPU, I really have no idea. This is beyond anything I've ever tested.

 

I'm curious as to whether this is an UnRAID-specific hack. The workarounds I've heard of that have worked for people on QEMU/KVM have not involved modding the GPU BIOS. Before you go potentially bricking your GPU, I'd consider using QEMU/KVM before UnRAID if a vBIOS mod is a requirement.

Thanks, I figured it's an edge case within edge cases of all edge cases. I've asked on the Level1techs Looking Glass forum as well. Good point on the vbios, I'd muuuuuuuuuch rather not mod the cards if I can avoid it.


9 minutes ago, bimmerman said:

Thanks, I figured it's an edge case within edge cases of all edge cases. I've asked on the Level1techs Looking Glass forum as well. Good point on the vbios, I'd muuuuuuuuuch rather not mod the cards if I can avoid it.

I'd love to play with something like that to see if it's possible. In Step 8.1 of my guide, where you create /dev/shm/looking-glass, I do wonder whether the software is hard-coded to use the filename looking-glass. You could potentially create looking-glass2, looking-glass3 and point each VM at those files. In theory this would enable multiple Looking Glass instances.
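To sketch what I mean (untested assumption on my part): each VM would get its own ivshmem device in the libvirt XML with a different name, and each name should show up as its own file under /dev/shm:

  <!-- VM #1 -->
  <shmem name='looking-glass1'>
    <model type='ivshmem-plain'/>
    <size unit='M'>32</size>  <!-- size depends on guest resolution -->
  </shmem>

  <!-- VM #2 -->
  <shmem name='looking-glass2'>
    <model type='ivshmem-plain'/>
    <size unit='M'>32</size>
  </shmem>

Each looking-glass-client instance would then need to be pointed at its own file (I believe the client takes the shared memory file path as an argument, e.g. -f /dev/shm/looking-glass2), but I haven't verified any of this.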

 

As for keyboard/mouse, I'd pass through a USB controller to each VM instead of relying on the SPICE client, at least with that type of setup.

 

Did/do you have a particular reason for wanting to go UnRAID?


1 hour ago, Windows7ge said:

I'd love to play with something like that to see if it's possible. In Step 8.1 of my guide, where you create /dev/shm/looking-glass, I do wonder whether the software is hard-coded to use the filename looking-glass. You could potentially create looking-glass2, looking-glass3 and point each VM at those files. In theory this would enable multiple Looking Glass instances.

 

As for keyboard/mouse, I'd pass through a USB controller to each VM instead of relying on the SPICE client, at least with that type of setup.

 

Did/do you have a particular reason for wanting to go UnRAID?

Hm, I might just have to get the card and try the Looking Glass idea. Worst case, won't lose money on it on ebay.

 

For USB, planning on exactly that-- a USB add-in card that has one controller per port so that I can pass through each port to its own VM, and plug in USB hubs that way (thus solving input devices and potentially audio). Apparently some cards allow you to do this and can be hot-plugged-- this is why I'd need some way to free up a PCIe slot. My mobo has a bunch of 2.0 connectivity but I don't know how it's set up IOMMU-wise (board is in its shipping box still). It has 2x 3.0 ports....and on my other X58 rig the add-in cards are way faster and more reliable than the onboard anyway. This card is the one I've heard is a solid option: https://www.amazon.com/dp/B015CQ8DCS/?coliid=I1D63LHIOCEHM0&colid=124RBD3B0BD1T&ref_=lv_ov_lig_dp_it&th=1 -- check the reviews by M.I. and Connor, comes up as the 1st and 5th review for me.

 

As for Unraid, it's only for the convenience factor-- I have never messed with Linux before so having GUI most-the-things is appealing. But, no better way to learn than to dive in, so am not tied to it.


24 minutes ago, bimmerman said:

Hm, I might just have to get the card and try the Looking Glass idea. Worst case, won't lose money on it on ebay.

 

For USB, planning on exactly that-- a USB add-in card that has one controller per port so that I can pass through each port to its own VM, and plug in USB hubs that way (thus solving input devices and potentially audio). Apparently some cards allow you to do this and can be hot-plugged-- this is why I'd need some way to free up a PCIe slot. My mobo has a bunch of 2.0 connectivity but I don't know how it's set up IOMMU-wise (board is in its shipping box still). It has 2x 3.0 ports....and on my other X58 rig the add-in cards are way faster and more reliable than the onboard anyway. This card is the one I've heard is a solid option: https://www.amazon.com/dp/B015CQ8DCS/?coliid=I1D63LHIOCEHM0&colid=124RBD3B0BD1T&ref_=lv_ov_lig_dp_it&th=1 -- check the reviews by M.I. and Connor, comes up as the 1st and 5th review for me.

 

As for Unraid, it's only for the convenience factor-- I have never messed with Linux before so having GUI most-the-things is appealing. But, no better way to learn than to dive in, so am not tied to it.

I will gladly support you remotely. I'm very curious as to whether this is possible.

 

If you enable IOMMU both on the hardware (BIOS/UEFI) and within the OS, then run the IOMMU script:

#!/bin/bash
# Print every PCI device alongside the IOMMU group it belongs to.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}  # group number from the sysfs path
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"                # describe the device at that PCI address
done

This will show you how many independent controllers there are and what group they're in. Usually if multiple PCIe slots appear in the same IOMMU group, you can expect that they're going through the chipset. You'll want each GPU to have its own dedicated link to the CPU(s).
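On the "within the OS" half of that, enabling the IOMMU is usually just a kernel boot parameter. A rough sketch for a GRUB-based distro, assuming an Intel board (AMD platforms generally use amd_iommu=on instead):

# /etc/default/grub -- keep whatever options are already present and append:
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# then regenerate the GRUB config and reboot, e.g.:
sudo grub-mkconfig -o /boot/grub/grub.cfg

iommu=pt is optional but commonly recommended on passthrough hosts.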

 

I don't know if multiple controllers on a single slot can be passed to different VMs. You'll have to give it a go and report back. Personally I have enough misc controllers on my motherboard that I can just give my VM a set of USB ports on the rear. Hardware can also be passed through on a per-device basis, so you don't necessarily have to go the USB controller route if it doesn't work.
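For the per-device route, libvirt can hand a single USB device to a VM by vendor/product ID instead of a whole controller. A minimal sketch -- the IDs below are placeholders for whatever lsusb reports for your actual keyboard/mouse:

  <hostdev mode='subsystem' type='usb' managed='yes'>
    <source>
      <vendor id='0x046d'/>   <!-- placeholder vendor ID from lsusb -->
      <product id='0xc52b'/>  <!-- placeholder product ID from lsusb -->
    </source>
  </hostdev>

The usual trade-off versus passing a whole controller is hot-plug behaviour: anything plugged into a passed-through controller just works, whereas per-device passthrough is tied to that specific device.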

 

If you would like to give something else a try before spending money on UnRAID and modding your GPU BIOS, try following the VFIO guide I have in the original post. Everything related to Looking Glass you can ignore unless you want to try going that route.


23 hours ago, Windows7ge said:

I will gladly support you remotely. I'm very curious as to whether this is possible.

 

If you enable IOMMU both on the hardware (BIOS/UEFI) and within the OS, then run the IOMMU script:


#!/bin/bash
# Print every PCI device alongside the IOMMU group it belongs to.
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}  # group number from the sysfs path
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"                # describe the device at that PCI address
done

This will show you how many independent controllers there are and what group they're in. Usually if multiple PCIe slots appear in the same IOMMU group, you can expect that they're going through the chipset. You'll want each GPU to have its own dedicated link to the CPU(s).

 

I don't know if multiple controllers on a single slot can be passed to different VMs. You'll have to give it a go and report back. Personally I have enough misc controllers on my motherboard that I can just give my VM a set of USB ports on the rear. Hardware can also be passed through on a per-device basis, so you don't necessarily have to go the USB controller route if it doesn't work.

 

If you would like to give something else a try before spending money on UnRAID and modding your GPU BIOS, try following the VFIO guide I have in the original post. Everything related to Looking Glass you can ignore unless you want to try going that route.

Thanks, I'll definitely reach out as I get going. The Looking Glass developer suggested it won't work since Windows won't create a desktop without a monitor plugged into a display output. I'm not sure how to work around that. My board does apparently have a bunch of USB 2.0 ports and headers but only two "2.0/3.0" so....yea thinking I'll need to get a card to try. Hooray amazon.....

 

I think my first step is going to be figuring out how to do the GPU passthrough with the R9 290s that I have before trying to get a 295x2 to work-- am reading through your guide! The air coolers for my board just arrived so that I can start assembling and tinkering this weekend.


5 minutes ago, bimmerman said:

Thanks, I'll definitely reach out as I get going. The Looking Glass developer suggested it won't work since Windows won't create a desktop without a monitor plugged into a display output. I'm not sure how to work around that. My board does apparently have a bunch of USB 2.0 ports and headers but only two "2.0/3.0" so....yea thinking I'll need to get a card to try. Hooray amazon.....

 

I think my first step is going to be figuring out how to do the GPU passthrough with the R9 290s that I have before trying to get a 295x2 to work-- am reading through your guide! The air coolers for my board just arrived so that I can start assembling and tinkering this weekend.

Look for HDMI dummy plugs; they trick the GPU into thinking a display is connected.


Just now, Windows7ge said:

Look for HDMI dummy plugs; they trick the GPU into thinking a display is connected.

Yep-- that'd work for the GPU that has outputs; the issue is the second GPU on the 295x2 has no direct connection to the outputs despite being its own IOMMU device/group. The issue might be as simple (ha!) as figuring out how to spoof an output for the VM to load.


1 minute ago, bimmerman said:

Yep-- that'd work for the GPU that has outputs; the issue is the second GPU on the 295x2 has no direct connection to the outputs despite being its own IOMMU device/group. The issue might be as simple (ha!) as figuring out how to spoof an output for the VM to load.

I didn't think of this. When CFX is configured, no display needs to be connected to the second GPU, but what you'd be doing is attempting to operate them independently.

 

There may be a software workaround, but all I can think of is a hardware mod that I don't even know is possible.

 

Quite the conundrum.


3 hours ago, Windows7ge said:

I didn't think of this. When CFX is configured, no display needs to be connected to the second GPU, but what you'd be doing is attempting to operate them independently.

 

There may be a software workaround, but all I can think of is a hardware mod that I don't even know is possible.

 

Quite the conundrum.

Whelp, as an added wrinkle, I found this video: 

 

 

In it, @GabenJr managed to get a display-output-free GPU to work and play games with a negligible enough performance penalty, using the onboard iGPU as the output device and some sketchy drivers.

 

Makes me wonder though whether a similar thing can be done if you pass some single-slot potato GPU and the second GPU of the R9 295x2 into a VM, plug a monitor into the potato, and have the second GPU work as the renderer....hm. Wouldn't necessarily need to run crazy drivers either, since the system sees that second GPU as a bog-standard R9 290X. Might be a way around the no-monitor = no-desktop-rendering issue with passing through just the second of the dual GPUs....


2 hours ago, bimmerman said:

Whelp, as an added wrinkle, I found this video: 

 

In it, @GabenJr managed to get a display-output-free GPU to work and play games with a negligible enough performance penalty, using the onboard iGPU as the output device and some sketchy drivers.

 

Makes me wonder though whether a similar thing can be done if you pass some single-slot potato GPU and the second GPU of the R9 295x2 into a VM, plug a monitor into the potato, and have the second GPU work as the renderer....hm. Wouldn't necessarily need to run crazy drivers either, since the system sees that second GPU as a bog-standard R9 290X. Might be a way around the no-monitor = no-desktop-rendering issue with passing through just the second of the dual GPUs....

The only problem I see here for your situation is that the whole point of the dual-GPU plan was to cut the number of used slots in half. If you have to plug in a dummy GPU to act as an intermediary, you've just filled four GPU slots, at which point you may as well have bought four proper independent GPUs.

 

What I would probably do here is buy one of those cryptominer rack-mountable chassis, then run the GPUs off riser cables. This will free up the slots between the GPUs for anything else.


1 hour ago, Windows7ge said:

The only problem I see here for your situation is that the whole point of the dual-GPU plan was to cut the number of used slots in half. If you have to plug in a dummy GPU to act as an intermediary, you've just filled four GPU slots, at which point you may as well have bought four proper independent GPUs.

 

What I would probably do here is buy one of those cryptominer rack-mountable chassis, then run the GPUs off riser cables. This will free up the slots between the GPUs for anything else.

True, and agreed on crypto rigs--been looking at those. With the dummy GPUs it would only make sense after watercooling the 295s, as they become single-slot at that point. I'd use up 4 PCIe slots for 2x 295 and 2x dummy but would still have 3 unblocked slots (which is all I need). Not ideal though; a crypto miner or just biting the bullet on 4 single-slot converted cards makes more sense I think than the 295 shenanigans, as fun as that'd be. More thinking to do....


9 hours ago, bimmerman said:

True, and agreed on crypto rigs--been looking at those. With the dummy GPUs it would only make sense after watercooling the 295s, as they become single-slot at that point. I'd use up 4 PCIe slots for 2x 295 and 2x dummy but would still have 3 unblocked slots (which is all I need). Not ideal though; a crypto miner or just biting the bullet on 4 single-slot converted cards makes more sense I think than the 295 shenanigans, as fun as that'd be. More thinking to do....

You'd have to find a single-slot PCIe bracket that fits the card.

 

What motherboard are you using? The SR-X?


2 hours ago, Windows7ge said:

You'd have to find a single-slot PCIe bracket that fits the card.

 

What motherboard are you using? The SR-X?

I think the Bykski blocks have single-slot brackets; I know the EK ones did when they were available. Similarly the 1080ti blocks have a single-slot bracket. If I can get away with only needing to add a single USB or SAS card, I could just put a waterblock on my existing 1080ti and then it's a moot point for the rest-- I can get much cheaper cards that have dual-slot brackets and be fine.

 

I'm using the SR-2 motherboard, the LGA1366 stuff. Should have all my parts today to start diving into your guide this weekend, with the aim to set up at least two VMs if the hardware works.


@bimmerman I've updated the VFIO tutorial with various improvements, including something that may prove useful to your setup: 10.3 - Hyperv


@Windows7ge Thanks! I had some troubleshooting to do in order to get the board and CPUs up and running so didn't get to installing anything (long story: the CPUs didn't communicate with each other or RAM, and the sockets needed some pin fixing and decontaminating). I should have some time this week/end to start with the VFIO stuff-- I'll take a look at the guide additions!


I want to follow my own VFIO tutorial, but I've got both my GPUs cranking for Einstein@home and the AMD drivers on 19.04 are kind of broken. Gonna have to wait a while, then I can set up my own dual-OS desktop.

 

In the meantime I'm slowly adding information to the Looking Glass wiki page. If you guys need any extra information you may find it there.


@2FA


 

We need to make the ultimate desktop. Windows, GNU/Linux, & MacOS with one keyboard & mouse. :D


On 3/1/2020 at 6:18 PM, Windows7ge said:

I want to follow my own VFIO tutorial, but I've got both my GPUs cranking for Einstein@home and the AMD drivers on 19.04 are kind of broken

Navi working with F@H yet or still pending?


3 hours ago, leadeater said:

Navi working with F@H yet or still pending?

Latest AMD GPUs? (I'm still on 2x R9 290X :D) Last I heard the drivers weren't optimized, but that was months ago. Also, if you recall, I'm on BOINC; I don't know anything about F@H, so you're probably better off asking over on the Folding Community Board sub-forum.


14 hours ago, leadeater said:

Navi working with F@H yet or still pending?

I haven't seen any info about OpenCL functionality on Navi other than when it was first reported it didn't work on Linux. I can test it as soon as F@H sends me my passkey.



16 hours ago, leadeater said:

Navi working with F@H yet or still pending?

 

1 hour ago, 2FA said:

I haven't seen any info about OpenCL functionality on Navi other than when it was first reported it didn't work on Linux. I can test it as soon as F@H sends me my passkey.

I could not get it working on Arch, which still has 19.30 in the AUR. I've seen possible mention of it working with 19.50 but I'm not sure on that. It works in Windows now that Core22 is out and the 20.xx drivers are available.

 

EDIT: Found this, so it's in beta support on Linux: https://foldingforum.org/viewtopic.php?t=31710&start=90


