
Is IOMMU/VT-d enabled in the BIOS and in GRUB on the host, and did you pass both 3:00.0 & 3:00.1 together? What CPU/motherboard are you using?

 

This process can be a little tricky but you seem to have the bulk of it figured out.

 

I assume you made it to the Windows 11 desktop? Does Device Manager report more than one Microsoft Basic Display Adapter? Did you disable the SPICE display? Do you get any output out of the GPU with or without the guest driver?
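
For reference, "enabled in GRUB" on an Intel host usually amounts to something like this (just a sketch; the config-regeneration command and existing kernel options vary by distro):

# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# rebuild the GRUB config (Arch-style path shown) and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg

# after the reboot, confirm the IOMMU actually came up
dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/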

 



I have VT-d enabled in the BIOS. I pass both. I have an i5-13600K and an ASRock Z790 PG Sonic.
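
Roughly what the two entries look like in the guest XML when both functions are passed (using the 03:00.x addresses mentioned above; virt-manager generates these when you add both PCI devices):

<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <!-- GPU function -->
    <address domain="0x0000" bus="0x03" slot="0x00" function="0x0"/>
  </source>
</hostdev>
<hostdev mode="subsystem" type="pci" managed="yes">
  <source>
    <!-- HDMI/DP audio function -->
    <address domain="0x0000" bus="0x03" slot="0x00" function="0x1"/>
  </source>
</hostdev>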


3 hours ago, Rad25 said:

Is this GPU just not supported for GPU passthrough yet, or?

I don't know. I haven't kept up with the bleeding edge of hardware and its compatibility with VFIO.

 

Do you have a different, older card on hand we could test with? Try swapping them and see if you get a picture. If you do, then yes, it may be a compatibility issue.


12 hours ago, Rad25 said:

I got this screen when I installed the AMD drivers (GPU passthrough and single-GPU passthrough):

This one may be related to the recurring GPU reset issues AMD has had, though I can't confirm that.
This is for the 7000 series, but it may be relevant here: https://forum.level1techs.com/t/the-state-of-amd-rx-7000-series-vfio-passthrough-april-2024/210242


Check if you have UEFI enabled on both.

 

I've had this issue before with Proxmox: I realised it was in legacy boot mode, not UEFI mode. Once I fixed that, GPU passthrough worked, although I'm using Nvidia, so your mileage may vary.
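
For a libvirt guest, UEFI means an OVMF loader in the <os> block, roughly like this (the firmware paths vary by distro, so treat them as placeholders):

<os>
  <type arch="x86_64" machine="q35">hvm</type>
  <!-- UEFI firmware instead of legacy SeaBIOS -->
  <loader readonly="yes" type="pflash">/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win11_VARS.fd</nvram>
</os>

On the host side, the directory /sys/firmware/efi only exists if the host itself booted in UEFI mode.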

I'm a jank tinkerer; if it works, it works.

Regardless of compatibility 🐧🖖


Threads merged, please don't repost threads.

 

Might need to install/build a newer kernel.

It's never a good idea to get the latest hardware for Linux use; give it 6-12 months, depending on the distro, for the kinks to be ironed out.



Fedora is still on Mesa 24, and you need Mesa 25, as stated here.

 

The only distro I'm aware of that is currently on Mesa 25 is Arch.

 

Ubuntu should get there with 25.04 towards the end of April.

Fedora 42 still ships Mesa 24.
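
If you want to double-check what a given install is actually running, glxinfo (from the mesa-utils / mesa-demos package, depending on the distro) reports the Mesa version:

glxinfo -B | grep -i "opengl version"
# e.g. OpenGL version string: 4.6 (Compatibility Profile) Mesa 24.3.4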

 

 

If your VM can't see your GPU, then you haven't passed it through. You need to provide more information about your KVM/libvirt setup and verify that vfio-pci is bound to your GPU. As for your previous attempts, have you looked at the GPU reset issue I linked to earlier to see if it resolves your problem?
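
On the host, something like this shows whether vfio-pci actually grabbed the card (using the 03:00 address from earlier in the thread):

lspci -nnk -s 03:00
# both functions should report "Kernel driver in use: vfio-pci",
# not amdgpu / snd_hda_intel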


On 3/21/2025 at 3:45 AM, Rad25 said:

Everything works in EndeavourOS, but I got that screen on DisplayPort (Windows 11 guest).

 

(screenshots attached)

I figured it out for my setup. I'm not in front of my PC right now, but I will try to share my configs, GRUB settings, driver rebinding steps, and so on in the next few days.

 

I had this exact same issue with my 9070 XT. Everything worked fine using an actual attached monitor (no dummy plug, SPICE, or RDP) as long as I stayed on the "Microsoft Basic Display Adapter" driver. The moment I installed the AMD Adrenalin driver in the Windows guest, it did exactly this weird test-screen-looking thing.

 

The secret sauce was in the opening post of the thread linked by @Nayr438. According to that post, the AMD driver detects that it is running in a virtual machine, disables the physical displays, and adds virtual ones. That's why it is necessary to "hide" the fact that it is a VM at all.

You could probably get away with ignoring most of it, since the reset bug doesn't seem to be happening for me (yet). I even left ROM BAR ticked and have Resizable BAR enabled. But the parts you want to add (only add; leave the existing ones as-is for now) to your XML are in the "features" section, along with the QEMU args at the bottom of the linked post regarding Resizable BAR, if you have that enabled on your motherboard.

 

<hyperv>
  <vendor_id state="on" value="0123456789ab"/>
</hyperv>
<kvm>
  <hidden state="on"/>
</kvm>
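
For context, a sketch of roughly how the "features" section might end up looking with those two elements slotted in (your existing hyperv entries will differ; keep whatever virt-manager already put there):

<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state="on"/>
    <vapic state="on"/>
    <spinlocks state="on" retries="8191"/>
    <vendor_id state="on" value="0123456789ab"/>  <!-- added -->
  </hyperv>
  <kvm>
    <hidden state="on"/>  <!-- added -->
  </kvm>
  <vmport state="off"/>
</features>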


On 3/27/2025 at 3:41 AM, Rad25 said:

Is it possible to use GPU Passthrough in Linux (Guest) - Linux (Host)?

Aye, the VM has "direct access" to the passed-through hardware, so as long as it has drivers available for said hardware it'll work; just make sure any other VM that uses the hardware is properly shut down first.

If the differing OSes try to load different GPU firmware when their drivers initialise, this can cause issues, but you can normalise this.

 

For performance purposes (read: gaming) it's better to pass an entire drive to a Windows VM, not just a qcow image.

An XML "whole drive" snippet:

<disk type="block" device="disk">
  <driver name="qemu" type="raw" cache="none" io="native" discard="unmap"/>
  <source dev="/dev/sda"/>
  <target dev="sda" bus="sata"/>
  <boot order="1"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>

It might work with <source dev="/dev/sda1"/> to pass a partition as a whole drive, but I've never tried it as a boot device, so...
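
Side note: if you do pass a whole drive, pointing the source at the stable /dev/disk/by-id/ path is a bit safer than /dev/sda, since sdX names can shuffle between boots (the ID below is just a placeholder):

<source dev="/dev/disk/by-id/ata-ExampleDrive_SERIAL1234"/>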

 

It's also worth noting you can share the same install drive (qcow or physical) and have a "non-passthrough instance" of your VM, so if you just want to access some Windows-only software but don't need the performance of a native GPU, you can run it up using SPICE and save the time/chores of the detachment/attachment process.

