
Attaching GPUs to Hyper-V VMs (BIOS / motherboard limitation?)

Hello everyone!

 

Hopefully I posted this in the correct forum; if not, please point me to the right one. :D

I am working on a hobby project where I want multiple VMs that each get their own dedicated GPU.


The parts used:
8x NVIDIA GTX 1070 Founders Edition
1x Intel Xeon E5-2695 v3 ES, 2.3 GHz, 14 cores / 28 threads, LGA2011-3
1x ASUS Rampage V Extreme/U3.1 motherboard
 

OS: Windows Server 2019 Datacenter with Hyper-V.

(As far as I know, Linus and his team used Red Hat KVM for their projects, but I don't have access to that OS.)

 

I am following this explanation: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/
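
For reference, the full sequence from that post that I am running looks roughly like this (the GPU friendly name and the VM name are placeholders for my setup, so treat it as a sketch rather than a verified script):

# Locate one of the GPUs and read its PCI location path
$gpu = Get-PnpDevice -FriendlyName "NVIDIA GeForce GTX 1070" | Select-Object -First 1
$locationPath = ($gpu | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host
Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false

# Dismount it from the host so it can be handed to a VM
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force -Verbose

# Attach the dismounted device to the target VM ("GPU-VM-1" is a placeholder)
Add-VMAssignableDevice -LocationPath $locationPath -VMName "GPU-VM-1"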

 

Unfortunately, this step stopped me:

Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force -Verbose

I am getting this error message:
"The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI"

 

I believe that my motherboard doesn't support this kind of operation, but I am not sure.

Does anybody know how to enable this?
If it is not possible with my motherboard: does anybody know of one that allows me to do this? :)

There is no practical reason why I want to do this... :D

I just thought about it the other day, and since then I have been trying to get it to work.

 

I am looking forward to your ideas!

(If you need pictures or more information, I will happily provide them!)

