
VFIO GPU Pass-through w/ Looking Glass KVM on Ubuntu 19.04

 Index

Spoiler

Introduction

1. Requirements

    1.1 - Hardware

        1.1.1 - CPU

            1.1.1.1 - Intel

            1.1.1.2 - AMD

            1.1.1.3 - Multiple NUMA Nodes (Multi-Socket/Multi-Die - AMD & Intel)

        1.1.2 - Motherboard

        1.1.3 - RAM

        1.1.4 - GPU

    1.2 - Software

        1.2.1 - The Operating System

        1.2.2 - The Hypervisor Back-End (QEMU)

        1.2.3 - The Hypervisor Front-End (virt-manager)

        1.2.4 - Looking Glass

        1.2.5 - GRUB2

        1.2.6 - lstopo (hwloc)

        1.2.7 - VirtIO

        1.2.8 - IVSHMEM Library

        1.2.9 - Git

        1.2.10 - Scream Audio

        1.2.11 - SPICE Guest Tools

2. Recovery

3. Enabling & Verifying both Virtualization & IOMMU Groups.

    3.1 - Virtualization & IOMMU Groups on Hardware

        3.1.1 - On Intel

        3.1.2 - On AMD

    3.2 - IOMMU Groups within Ubuntu 19.04

    3.3 - Verifying IOMMU Groups

4. Installing Applications & Downloading Drivers

    4.1 - The Hypervisor

    4.2 - Looking Glass

    4.3 - VirtIO

    4.4 - Git & Scream

5. Blocking the Kernel Driver(s)

    5.1 - Blacklist the Driver

    5.2 - Override Device Driver Based on Device ID

    5.3 - Override Device Driver Based on Device Address

6. Configuring Virt-manager

    6.1 - New Virtual Machine

    6.2 - Customizing Configuration

        6.2.1 - Overview

        6.2.2 - CPUs

        6.2.3 - Boot Options

        6.2.4 - SATA Disk 1

        6.2.5 - Adding Hardware

            6.2.5.1 - Storage

            6.2.5.2 - PCI Host Device

            6.2.5.3 - Looking Glass Components

7. Installing VirtIO Drivers

    7.1 - Installing VirtIO Driver for Storage

    7.2 - Installing VirtIO Driver for Network

        7.2.1 - Creating a Network Bridge for Virt-manager

        7.2.2 - Installing VirtIO driver in Windows

8. Looking Glass Configuration

    8.1 - Modify libvirt Config

    8.2 - Making AppArmor Exceptions

    8.3 - Installing the IVSHMEM Driver

    8.4 - Installing the SPICE Guest Tools

    8.5 - Installing the Looking Glass Host (on Windows)

        8.5.1 - Installing & Auto-Launching the Looking Glass Host

        8.5.2 - Disabling the Microsoft Basic Display Adapter

    8.6 - Launching Looking Glass (on Host)

9. Scream Audio

    9.1 - Windows Setup

    9.2 - Linux Setup

    9.3 - Making Things Easy

10. Performance Optimization

    10.1 - Hugepages

    10.2 - CPU Pinning w/ Multiple NUMA (Non-Uniform Memory Access) Nodes

    10.3 - Hyperv

11. Troubleshooting

    11.1 - NVIDIA Driver Won't Install: Device Manager Error Code 43

    11.2 - Looking Glass Issues

        11.2.1 - Desktop Doesn't Appear (AMD GPU)

        11.2.2 - Lock Screen Doesn't Load | Security Dialog Boxes Don't Load

        11.2.3 - Cursor on Guest does not stay aligned with cursor on Host

        11.2.4 - Scroll wheel on mouse goes the same direction regardless of spin

        11.2.5 - Keyboard/Mouse input (SPICE) stops working after installing SPICE Guest Tools

    11.3 - NUMA Nodes

        11.3.1 - lstopo (I Have Multiple Nodes but Only See One)

    11.4 - Error: Invalid Argument: Could not find capabilities for arch=x86_64 domaintype=kvm

        11.4.1 - memoryBacking/Hugepages | CPU pinning

    11.5 - Using an iGPU?

    11.6 - IOMMU Groups

        11.6.1 - My GPU appears in the same group as another/other device(s).

    11.7 - Audio

        11.7.1 - Static Noises

    11.8 - Misc Problems After Pass-through

 

Introduction

Spoiler

Dual-boot: a setup wherein a computer is installed with two (potentially more) Operating Systems, most often so that a user can have both Microsoft Windows and GNU/Linux (or other derivatives) at their disposal. There's one large caveat that comes with it, though: only one Operating System can run at a time. Solutions such as virtualization through applications like Oracle VM VirtualBox (available on both Windows & GNU/Linux) don't deliver the performance users would like to see from a native OS. Introducing QEMU.

 

win-linux-desktop.thumb.png.03604dbe3e0b179b7c01d22c84558ee5.png

A two monitor setup. Left monitor Windows 10, right monitor Ubuntu 19.04, both controlled with one keyboard & mouse.

 

With QEMU, a GNU/Linux distribution such as Ubuntu can be used as the host system while Microsoft Windows runs within a Virtual Machine. With the assistance of GPU pass-through and a program known as Looking Glass, near-native performance can be achieved, enabling the use of both Operating Systems simultaneously with one keyboard & mouse and very little performance compromise.

 

For the step by step process that will be discussed here I'm going to be using the following equipment:

  • Ryzen Threadripper 1950X
  • 32GB (4x8GB) 2400MHz Memory
  • ASUS Prime X399-A
  • 2x Sapphire R9 290X's
  • AX1200i PSU

 

 

1. Requirements

Spoiler

Your choice of hardware needs to meet a number of requirements, along with being compatible with an OS that supports the required software for this setup.

 

1.1 - Hardware

Spoiler

1.1.1 - CPU

Spoiler

1.1.1.1 - Intel

Spoiler

When using an Intel processor, be it an i3, i5, i7, i9, E3, E5, E7, etc., the CPU needs to support two functions:

  1. VT-x w/ EPT
    1. "Intel® Virtualization Technology (VT-x) allows one hardware platform to function as multiple “virtual” platforms." - Intel
    2. "Intel® VT-x with Extended Page Tables (EPT), also known as Second Level Address Translation (SLAT), provides acceleration for memory intensive virtualized applications." - Intel
  2. VT-d
    1. "Intel® Virtualization Technology for Directed I/O (VT-d) continues from the existing support for IA-32 (VT-x) and Itanium® processor (VT-i) virtualization adding new support for I/O-device virtualization." - Intel

NOTE: Some Intel processors have problems with their virtualization functionality, such as the popular retired server processor Intel Xeon E5-2670 v1. For that CPU, a stepping of C2 or later is required for functional VT-d support; earlier steppings will not work. Make sure to do your research before purchasing.

 

1.1.1.2 - AMD

Spoiler

When using an AMD processor, be it a Ryzen 5, 7, 9, Ryzen Threadripper, EPYC, etc., there are two to three settings that need to be supported, depending on the CPU.

  1. SVM (AMD-V)
    1. "AMD Virtualization (AMD-V™) technology is a set of unique on-chip features that enables AMD PRO-based clients to run multiple operating systems and applications on a single machine." - AMD
  2. IOMMU (Input Output Memory Management Unit)
    1. "In computing, an input–output memory management unit (IOMMU) is a memory management unit (MMU) that connects a direct-memory-access–capable (DMA-capable) I/O bus to the main memory." - Wikipedia
  3. Enumerate all IOMMU in IVRS
    1. Within multi-die processors this enables IOMMU on both/all dies.

 

1.1.1.3 - Multiple NUMA Nodes (Multi-Socket/Multi-Die - AMD & Intel)

Spoiler
  1. Memory Interleave
    1. A setting we will use to switch from a UMA operation mode to NUMA, where each node or die sends or fetches data to/from its own bank of RAM via its own memory controller(s). This increases performance under certain applications (such as virtualization).

 

 

1.1.2 - Motherboard

Spoiler
  • Multi-GPU support
  • Support for 16GB+ of RAM (Recommended)

 

1.1.3 - RAM

Spoiler
  • 16GB or more (Recommended)

 

1.1.4 - GPU

Spoiler

Two dedicated GPUs. This can be:

  1. Two identical AMD GPUs
  2. Two different AMD GPUs
  3. Two identical NVIDIA GPUs
  4. Two different NVIDIA GPUs
  5. One AMD GPU & One NVIDIA GPU

NOTE: For NVIDIA desktop series GPUs, NVIDIA built a component into their driver that checks whether the GPU is running within a Virtual Environment. When it detects this, the driver will refuse to install and Device Manager will report Error 43. There are workarounds, but results are not guaranteed. It should also be noted that not all GPUs pass through and work flawlessly; this impacts both AMD & NVIDIA. If poor results are experienced with one GPU, test another.

 

 

1.2 - Software

Spoiler

1.2.1 - The Operating System

Spoiler

For the purposes of this tutorial Ubuntu 19.04 will be used; however, other distributions such as:

  • Linux Mint
  • PopOS
  • Lubuntu

may be used with small alterations to the tutorial. However, only Ubuntu 19.04 will be covered in detail.

 

1.2.2 - The Hypervisor Back-End (QEMU)

Spoiler

QEMU, as stated on its website, is a generic and open source machine emulator and virtualizer. This is the back-end that will handle the Virtual Machine(s). Relevant to this tutorial, QEMU supports:

  1. Hardware Pass-though. This includes but is not limited to:
    1. GPUs
    2. Network Cards
    3. HBAs
    4. USB controllers
    5. Storage Drives (HDDs/SSDs)
    6. Audio devices
  2. CPU Pinning
    1. The process wherein a vCPU is pinned to a physical core or thread on the processor (a performance optimization).
  3. Huge Pages
    1. The process wherein the standard 4 KiB pages in System Memory (RAM) are replaced with dramatically larger pages (a performance optimization).

 

1.2.3 - The Hypervisor Front-End (virt-manager)

Spoiler

Virt-manager is the UI (User Interface) which we will be using for managing & manipulating QEMU. It's a relatively simple and user friendly interface for setting up and managing multiple Virtual Machines. This is going to be used in conjunction with OVMF which enables UEFI support for Virtual Machines.

 

virt-manager-main-windows.png.0127b9de7cc85db7fa21d815e0df9514.png

 

NOTE: Virt-manager is in the process of being superseded by a web UI solution called Cockpit, which is the newer option should you want to be on the cutting edge. However, at this time Cockpit is still missing much of the UI that this tutorial relies on, and for that reason it will not be covered.

 

1.2.4 - Looking Glass

Spoiler

Looking Glass is a KVM solution which enables a user to control, at near-native performance, a Virtual Machine which has had a GPU passed through to it.

 

looking-glass.thumb.png.f1cab2cf4e3cd01ab7afb6b32bf5f098.png

 

What Looking Glass does is copy the framebuffer from the guest GPU's memory at the time it was to be displayed. It then takes this copy and displays it in the Looking Glass window on the primary GPU. The delay in this is minimal. At the same time, Looking Glass doubles as a SPICE client, passing keyboard & mouse input through to the Virtual Machine.

 

1.2.5 - GRUB2

Spoiler

GRUB2, or GRand Unified Bootloader v2, is a bootloader that enables booting one of multiple Operating Systems installed on a PC, along with the ability to choose which Kernel version to boot with. GRUB2 also comes with enhanced scripting functionality compared to the original GRUB. (We need to make some kernel command-line changes, which is why we're going to use it over systemd-boot.)

 

1.2.6 - lstopo (hwloc)

Spoiler

Lstopo is a very simple utility that provides a diagram of the CPU(s) and NUMA Node groups and lists what devices are connected to each via Device ID.

 

lstopo.png.2fc623821789c0f4509192c616c4eed3.png

 

This program is going to play an important role later on which will help optimize the performance of the Virtual Machine(s).

 

1.2.7 - VirtIO

Spoiler

"Virtio is a virtualization standard for network and disk device drivers where just the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor." - libvirt

 

In terms of how this impacts the system, it allows features such as 10Gbit networking and high-performance storage devices (short of direct hardware pass-through, which would be even better).

 

1.2.8 - IVSHMEM Library

Spoiler

"The IVSHMEM library facilitates fast zero-copy data sharing among virtual machines (host-to-guest or guest-to-guest) by means of QEMU’s IVSHMEM mechanism." - DPDK

 

1.2.9 - Git

Spoiler

"Git is an open source distributed version control system..." - Ubuntu

 

1.2.10 - Scream Audio

Spoiler

Scream is a network audio application that enables the broadcasting of sound over a switched network via Multicast or Unicast Transmissions.

 

1.2.11 - SPICE Guest Tools

Spoiler

SPICE Guest Tools are a set of drivers and services that enable much of the convenient functionality that comes with interfacing with a VM through a SPICE Client, such as the Clipboard (copy/paste).

 

 

 

2. Recovery

Spoiler

Much of what we are going to be doing here has the potential to cause you to lose access to the desktop. If this occurs, having the SSH service installed will make recovery easier.

 

To install OpenSSH Server open a terminal and run:


sudo apt install -y openssh-server

To test if access is available go to another computer and run:


ssh user@ip-of-server

Where:

  • user = The installed user on the computer.
  • ip-of-server = The IP of the computer.

NOTE: Most people have DHCP enabled, where the computer gets an arbitrary IP address from the Router. If the IP noted down is not working after a restart, check the Router's web UI; an entry may be listed with the computer's name and the IP it was given. Alternatively, a static IP can be set on the computer until the setup is complete. If you do this, take care to pick an IP that won't be handed out by DHCP.
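
To note down the computer's current IP address before you start, it can be listed from the machine itself with the standard iproute2 tool:


ip -4 addr show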

 

3. Enabling & Verifying both Virtualization & IOMMU Groups.

Spoiler

Before setting up any of the primary applications, it's important to verify that the chosen hardware supports both Virtualization & IOMMU groups. As discussed in Sections 1.1.1.1 & 1.1.1.2, the Intel & AMD platforms require enabling very different settings. Enter your motherboard BIOS and enable the following features based on your CPU vendor.

 

3.1 - Virtualization & IOMMU Groups on Hardware

Spoiler

3.1.1 - On Intel

Spoiler

Two CPU features are required for both virtualization support and device pass-though:

  1. VT-x (Virtualization Support)
  2. VT-d (Device Pass-though Support)

 

3.1.2 - On AMD

Spoiler

Depending on the socket (AM4, TR4, SP3, etc) two to three features are required.

  1. Across all sockets the following apply:
    1. SVM (Virtualization Support)
    2. IOMMU (Device Pass-though Support)
      1. This can usually be found under the North Bridge settings.
  2. For TR4 and/or SP3
    1. Enumerate all IOMMU in IVRS (enables IOMMU on all dies)

 

 

3.2 - IOMMU Groups within Ubuntu 19.04

Spoiler

In order to make Ubuntu recognize the IOMMU Groups, the kernel needs to be told to look for them. To do that, install grub2 (if it isn't already installed) with:



sudo apt install -y grub2

When it's done restart the machine.

 

With the system restarted, the GRUB configuration needs to be edited to enable IOMMU. With different distributions the grub file may be located in different places or go by different names.

  1. On Ubuntu 19.04 the grub file can be found in /etc/default/grub
  2. On PopOS 19.04 the grub file can be found in /boot/efi/loader/entries/Pop_OS-current.conf
  3. On other distributions it may also vary.

To edit the grub file on Ubuntu 19.04 run:



sudo nano /etc/default/grub

The following content will appear within the nano text editor:

 

grub-file.png.b89025b68068955cabeb1583583da1cf.png

 

Only one line here needs to be edited and what you append to it depends on your platform.



GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"

This line needs to have one of the following appended, depending on your CPU vendor:

  1. intel_iommu=on
  2. amd_iommu=on
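
On an AMD system, for example, the full line would become (use intel_iommu=on instead on an Intel system):


GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on"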

It will look like this after the edit:

grub-edited.png.a4de6d60e9fb7837724a2bf022758b35.png

 

After this Ctrl+O to Save, then Ctrl+X to quit. Next run:



sudo update-grub

Then restart the system.

 

 

3.3 - Verifying IOMMU Groups

Spoiler

After logging back in open a terminal and run:



nano ls-iommu.sh

Now copy/paste the following code into the editor. This can be done with Ctrl+C then Ctrl+Shift+V.



#!/bin/bash
for d in /sys/kernel/iommu_groups/*/devices/*; do
  n=${d#*/iommu_groups/*}; n=${n%%/*}
  printf 'IOMMU Group %s ' "$n"
  lspci -nns "${d##*/}"
done

Now Save (Ctrl+O), and exit (Ctrl+X)

 

Next add the execute permission with:



chmod 744 ls-iommu.sh

Then run the script with:



./ls-iommu.sh

If IOMMU Groups were enabled successfully on all fronts the output should look similar to this:

 

list-iommu-groups.thumb.png.d0a985ce3bc27ca6654d0cbb845a21be.png
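
Each line of output pairs an IOMMU group number with an lspci entry. An illustrative line for the R9 290X used in this guide (your devices and group numbers will differ):


IOMMU Group 15 0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Hawaii XT [Radeon R9 290X] [1002:67b0]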

 

If nothing shows up or if there is an error then the configuration will need to be double-checked.

 

 

4. Installing Applications & Downloading Drivers

Spoiler

4.1 - The Hypervisor

Spoiler

To install the hypervisor front-end, back-end, and UEFI support for Virtual Machines, run the terminal command:



sudo apt install -y virt-manager ovmf

 

 

4.2 - Looking Glass

Spoiler

Looking Glass has a few more steps to it than the other packages. For starters it has a number of dependencies that must be installed first. To do this run:



sudo apt install -y binutils-dev cmake fonts-freefont-ttf libsdl2-dev libsdl2-ttf-dev libspice-protocol-dev libfontconfig1-dev libx11-dev nettle-dev

Next the application needs to be built from source code. Download the .ZIP file from the website under Official Releases; the download button is the Source/Source Code button on the right (left of the Windows download).

 

Once this is downloaded extract it to a location such as your home folder. After extracting it open a Terminal within the folder and run the commands:



mkdir client/build
cd client/build
cmake ../
make

 

 

4.3 - VirtIO

Spoiler

The VirtIO drivers that we are going to need can be downloaded here.

  1. Scroll all the way down to the Direct Downloads section.
  2. Download the applicable VirtIO driver (Recommend: Stable virtio-win iso)

We will come back to this.

 

4.4 - Git & Scream

Spoiler

To download Scream we're going to use Git. Start by downloading Git:



sudo apt install git

Then download scream:



git clone https://github.com/duncanthrax/scream.git

If the download was successful you should see this:

 

scream-git-download.png.6e7d8e1b2624f9db0a6dbf5e774b6a48.png

 

We will come back to Scream later.

 

 

5. Blocking the Kernel Driver(s)

Spoiler

With IOMMU Groups working, there's only one more thing stopping a hardware device from being passed through to a VM: the Kernel has already loaded a driver for the device:

 

kernel-drivers.png.672ad5bd9d5a1538f831ac8a9d9bf846.png

 

There are three methods of making the Kernel let go, and each method works under different circumstances.

 

5.1 - Blacklist the Driver

Spoiler

Blacklisting a driver is the process wherein a driver is stopped from being loaded by the Kernel at system startup, system-wide. This is useful here if there are multiple hardware devices that all use the same driver and the host doesn't need any of them.

 

NOTE: It's important to understand that if the Primary GPU that Ubuntu will hold on to uses the same driver as the GPU to be passed through, this method will not work.
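
To check which kernel driver each GPU is currently using (and therefore whether the host GPU shares a driver with the GPU to be passed through), lspci can show the driver in use:


lspci -k | grep -E -A 3 'VGA|3D|Display'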

 

To blacklist a driver run:



sudo nano /etc/modprobe.d/blacklist.conf

This will bring up the blacklist configuration file:

 

blacklist-config-file.png.36439d4e1066733bf1ab0e73506ae53d.png

 

To the end of the file, append the drivers used by each function of the PCIe device, one per line, each starting with the word "blacklist". For the GPU being passed through in this example it would appear as the following:



blacklist radeon
blacklist amdgpu
blacklist snd_hda_intel

Lines starting with "#" are commented out. This is good for identifying why the drivers have been blacklisted. After adding the drivers the file will look like the following:

 

blacklisted-drivers-added.png.658b1020ceeef23cb5cd2db1a6d172df.png

 

Ctrl+O, Ctrl+X, then restart the computer.

 

Both Methods 5.2 & 5.3 rely on the vfio-pci driver. This driver may not load by itself on Ubuntu 19.04. This can be checked by running:


lsinitramfs /boot/initrd.img-5.0.0-38-generic | grep vfio

 

NOTE: The Kernel version on your system may not be the same as the Kernel version shown here. Tab can be used for completing commands. Generally the latest version on the system is the one being used by initramfs.

 

If there is no output, the drivers need to be loaded manually. If the following lines are output then everything is good to go:

 

vfio-drivers-loaded.png.2cf5e2fb5ec57f8b6d0bf51ad4663e1c.png

 

To load the drivers manually run:


sudo nano /etc/initramfs-tools/modules

To the file append the lines:


vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd

Save the file (Ctrl+O), and Exit (Ctrl+X). Now update initramfs:


sudo update-initramfs -u

Now re-run:


lsinitramfs /boot/initrd.img-5.0.0-38-generic | grep vfio

To verify the drivers will be loaded.

 

5.2 - Override Device Driver Based on Device ID

Spoiler

The Device ID is a hardware identifier assigned to a device by the vendor. This usually appears as a string of eight characters separated in the middle by a colon.

 

device-id.thumb.png.560650548b71e57357b49097ac0bc1fd.png

 

NOTE: In the event of using two identical GPUs (as is the case here) overriding the device driver via Device ID will not work.

 

First to find the hardware Device ID run:



lspci -nn

This will list every device on the system and its Device ID. GPUs (regardless of vendor, AMD or NVIDIA) generally start with "VGA compatible controller". It's important when passing through a device to pass through all of the functions belonging to the device. The following picture is an example:

 

device-functions.thumb.png.98ae634093bd79d37be25220cc8f21f9.png

 

The device has two functions: the GPU itself and an audio device. These need to go together. Additional functions that are part of the GPU can be identified by the Device Address; in the above example that would be 0a:00. GPUs can have numerous additional functions, identified by the .1, .2, .3, etc. after the VGA controller itself, and all of them must be grouped together when passed through. As shown above, this means the Device IDs 1002:67b0 & 1002:aac8 need to be written down.
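
To narrow the lspci output down, you can filter for VGA controllers and then list every function at the GPU's address (0a:00 here; substitute your own):


lspci -nn | grep -i vga
lspci -nn -s 0a:00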

 

To perform the vfio-pci driver override a .conf file needs to be added in the modprobe.d directory:



sudo nano /etc/modprobe.d/vfio-driver-override.conf

Add the following line to the file, replacing 1002:67b0,1002:aac8 with the Device IDs associated with the GPU to be passed through.



options vfio-pci ids=1002:67b0,1002:aac8

Now:



sudo update-initramfs -u

And restart.

 

5.3 - Override Device Driver Based on Device Address

Spoiler

The last option, which works with identical GPUs when one is for the host and one is for the VM, is assigning the vfio-pci driver based on Device Address. The Device Address is the first set of characters of an entry when running the lspci command. For the GPU being used in this example this is 0a:00.0 & 0a:00.1.

 

To assign the vfio-pci driver this way, create a file using the command:



sudo nano /etc/initramfs-tools/scripts/init-top/vfio-driver-override.sh

Within this file, copy the following lines, replacing 0a:00.0 & 0a:00.1 with the Device Addresses associated with your GPU.



#!/bin/sh
PREREQS=""
DEVS="0000:0a:00.0 0000:0a:00.1"
for DEV in $DEVS;
  do echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override
done

modprobe -i vfio-pci

NOTE: For your hardware it's important to use the correct prefix. The prefix is the 0000: coming before the Device Addresses in the above example. Across different platform configurations the prefix can be 0001:, or 000a:, etc. You can check what prefix your hardware uses by running the command:



ls -l /sys/bus/pci/devices

After verifying and applying the correct prefix save and exit the file. Then run the commands:



sudo chmod 755 /etc/initramfs-tools/scripts/init-top/vfio-driver-override.sh
sudo chown root:root /etc/initramfs-tools/scripts/init-top/vfio-driver-override.sh
sudo update-initramfs -u 

Now, to verify that the script will run at system start, run the command:



lsinitramfs /boot/initrd.img-5.0.0-13-generic | grep vfio

If the script will load at startup it will show up as an entry underneath our vfio drivers:

 

script-appended.png.c6d401cafd0ef1ea2ef1519fffb0500a.png

 

Now restart the computer.

 

NOTE: If a display is connected to the GPU that just had its driver overridden, once the system has fully rebooted the display should no longer register a signal.

 

If either the 5.2 or 5.3 methods were used the vfio-pci driver can be verified by running:


lspci -vnn

Locate the GPU and its sub-functions. Their kernel driver in use should be vfio-pci.
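
For a quick check that only shows the driver lines for the example GPU's address (substitute your own Device Address):


lspci -vnn -s 0a:00 | grep -i 'kernel driver'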

 

vfio-driver-installed.thumb.png.234373d92f5909945818943c08e9d16d.png

 

 

6. Configuring Virt-manager

Spoiler

Virt-manager is what we will be using to manage our Virtual Machine(s). It is a fairly simple UI for setting up most of the virtual components we will need, though some of the later performance optimization will require us to go back into the Console.

 

virt-manager-main-windows.png.0127b9de7cc85db7fa21d815e0df9514.png

 

6.1 - New Virtual Machine

Spoiler

Start by creating a new Virtual Machine by pressing the icon in the upper left or by going to File -> New Virtual Machine.

 

vmm-make-new.png.1aa52703c51f78fa55976260173e3903.png

 

Step 1. Choose an install option. This includes ISO/CDROM, a Network Install, Network Boot, or importing an existing disk image. For the purposes of this tutorial we will go with a .ISO file.

 

vmm-choosing-boot-method.png.fe697d835e42ddd8d0d0a3bffe420be0.png

 

Step 2.  Locate the .ISO file or CDROM media by clicking Browse.

 

vmm-cdrom-browse.png.f487642233ee5d3326b0cd08731af8b2.png

 

Click Browse Local in the bottom-right of the next window.

 

vmm-iso-browse-local.png.26cc10a354742d321ccdbc05855e2d08.png

 

Locate your .ISO file and click Open. With the .ISO loaded the "Choose the operating system...." option should auto-populate. If it does not, un-select "Automatically detect..." and scroll for it manually. Afterwards click Forward.

 

vmm-win10-iso-forward.png.5bfd3f5d9fe964ce82d0d92869502e16.png

 

Step 3. Select how much RAM and number of CPU cores/threads you want to allocate to the Virtual Machine then click Forward.

 

vmm-mem-cpu-forward.png.ec8991ec3047c738e469d3eabf58deeb.png

 

Step 4. Create storage for the VM. This can be done by creating a new virtual disk or by selecting an existing virtual disk. When you're done press Forward.

 

vmm-new-disk-forward.png.c328cd0371833307d4aa4f432799602c.png

 

Step 5. Finalize by naming your virtual machine. I recommend making it something simple because we will need to address it later from the Console. After that, check the box "Customize configuration before install", then press Finish.

 

vmm-finalizing.png.b1170e84d56e0ac0f4fb9c1eaeae0213.png

 

 

6.2 - Customizing Configuration

Spoiler

Within the Window that follows we have a series of variables we will want to edit before starting our Virtual Machine.

 

adv-config-overview.png.db4fe583f0c81a3d4a7237119e82ee78.png

 

6.2.1 - Overview

Spoiler

Two things need to be edited here.

  • Change Chipset to Q35.
    • This chipset comes with a virtual PCIe bus. This is what our physical GPU will interface with.
  • Change Firmware to UEFI.
    • Allows for the use of newer hardware/software functions & features for our VM.

adv-config-overview2.png.5180ababdb7e9e20c7ba0d826d619c43.png
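
For reference, after these two changes the <os> section of the VM's XML (viewable later with virsh edit) should look roughly like the sketch below; the exact machine type and OVMF paths depend on your QEMU and OVMF versions:


<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/win-10-pro-x64_VARS.fd</nvram>
</os>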

 

 

6.2.2 - CPUs

Spoiler

We need to tell our Guest (the VM) the topology of the CPU we are giving to it. This includes parameters such as:

  • CPUs
    • How many vCPUs to be assigned to the VM.
  • Configuration
    • Tells the VM that the CPU manufacturer & series is the same as the host.
      • NOTE: Newer series like Ryzen may not be recognized and can report as EPYC or another model.
  • Topology
    • Sockets
      • How many physical sockets the VM will think the hardware has.
        • NOTE: Windows 10 Home supports only 1 physical socket and Pro only 2.
    • Cores
      • How many cores does the vCPU have?
    • Threads
      • How many threads does each vCPU core have?

 

adv-config-cpus.png.94caff18921aee5d6ec2f30c738ccd7f.png
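
As a rough guide, for the 8-core/16-thread allocation used later in this tutorial these settings end up in the VM XML looking something like the following sketch (adjust the numbers to your own allocation):


<vcpu placement='static'>16</vcpu>
<cpu mode='host-passthrough' check='none'>
  <topology sockets='1' cores='8' threads='2'/>
</cpu>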

 

 

6.2.3 - Boot Options

Spoiler

Boot Options gives us a series of basic parameters on how or when to start the VM:

  • Autostart
    • Starts the VM automatically when the host is started.
  • Boot device order
    • Set the order in which bootable devices will be attempted.

adv-config-boot-options.png.3ff8a7817c3e417395e4e0e0f39e89c7.png

 

 

6.2.4 - SATA Disk 1

Spoiler

There are multiple disk bus options for bootable devices within virt-manager, including SATA, SCSI, USB, VirtIO, and direct drive pass-through. For the purposes of this tutorial we are going to use VirtIO; this offers the best performance outside of passing through an entire storage device.

 

To do this click the arrow right of Disk bus and select VirtIO.

 

adv-config-sata-disk-1.png.5cc012e8c6d3053ff84fd349bb562cfb.png

 

 

6.2.5 - Adding Hardware

Spoiler

There are a number of components that we need to add to the configuration, including our VirtIO .iso file, our GPU and its sub-functions, some changes required by Looking Glass, and a bridged network adapter (we can add this after Windows is installed).

 

To add hardware to the configuration click on Add Hardware in the bottom left corner. The following menu will be brought up.

 

add-new-hardware.png.388378a982c0fa0afc673029f53f09ac.png

 

 

 

6.2.5.1 - Storage

Spoiler

On this menu we want to add a virtual CD drive.

  1. Change the Device Type to CDROM Drive.
  2. Click Manage. A new window will pop up.
  3. Click Browse Local. Search for where you saved the virtio.iso file.
  4. Click Open.
  5. Make sure Bus type is set to SATA.

When you're done it will look like this:

 

new-cdrom.png.56912dc39189c69d38c986d30b8b2405.png

 

Press finish. The new CD drive will be added to the configuration.

 

6.2.5.2 - PCI Host Device

Spoiler

Click Add Hardware again. This time scroll down to PCI Host Device. Within the right-side list you will need to find the GPU and its accompanying functions, then click Finish. You will have to repeat this for each function.

 

pci-host-device.png.9c68d71a01b06c4022694790dbfdf5b6.png
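
For reference, each function added this way ends up in the VM XML as a <hostdev> entry roughly like the one below (the address shown matches the example GPU at 0a:00.0; yours will differ):


<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
</hostdev>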

 

 

6.2.5.3 - Looking Glass Components

Spoiler

Looking Glass itself requires both the addition and removal of a number of components. Start by adding a QXL Video Device if one does not exist already (though currently the default video device should already be QXL).

 

qxl-video-device.png.3dbd94f2a554ab1b83f4b64926763cea.png

 

Now remove the Tablet within the hardware list.

 

virtual-tablet.png.67dd0e38c9d73d9b83061ea9a6a6c4e1.png

 

Now go back to Add Hardware and add both a PS/2 Mouse & a VirtIO Keyboard. (PS/2 mouse should add automatically after pressing Begin Installation)

 

virtio-keyboard.png.100f7c6c5ebd082ef9bfac228e1b25fc.png

 

Make sure that a Spice Display is connected/configured.

 

spice-display.png.6eee249c36657a629a9a1529c2b267d9.png 

 

The last item we need to add is a Channel Device. This is going to enable the Clipboard (copy/paste) ability between the two OSes.

 

spice-channel-cut-paste.png.8c5f5a9a01b229f730018d4a6b98c6f3.png

 

 

Once everything is done being added the configuration should look similar to this.

 

final-config.png.52d8c815a74c0d3a11272c0c22ae3d45.png

 

Go ahead and click Begin Installation. Virt-manager will create the new VM and boot it. If everything related to IOMMU and the driver override worked, the VM will start without error.

 

 

 

7. Installing VirtIO Drivers

Spoiler

Two VirtIO drivers can be installed: one for our storage and one for our Network adapter(s).

 

7.1 - Installing VirtIO Driver for Storage

Spoiler

During the Windows install, when you reach the point where you choose whether to Upgrade or do a Custom install, choose Custom. You will see that there are no disks available. Here, click on Load Driver. The following window will pop up.

 

installing-virtio-storage.png.bd3e53a96d904f46cc08aa5edac3c74a.png

 

Click Browse. This will bring up a menu of available sources to search. One of the available sources will be the virtio.iso file we mounted as a CD. Click this and go to the directory E:/amd64/w10. Highlight the directory and click OK.

 

finding-vfio-storage-file.png.bd6a008634fe3d40f198b806cbf7bb7a.png

 

This will load the compatible drivers and ask which to install. There should only be one. Click Next.

 

loading-vfio-driver-file.png.c3b0fde4eabda4dc20b9d1fdf82b1c78.png

 

When it is done it will bring you back to the Custom Install menu where you can click New to create partitions and continue the Windows install.

 

 

7.2 - Installing VirtIO Driver for Network

Spoiler

By default the virtual network adapter that Virt-manager connects to the VM automatically is an emulated 1Gbit e1000 that traverses NAT.

 

nic-page.png.e604b242226ba8cbe24acc905a84d574.png

 

Although traversing NAT is fine for most traffic, it isn't good for numerous applications, such as connecting to local servers, some gaming services, and our Scream audio setup. If 10Gbit networking is also desired, we need to use something else. A network card can be passed through if the slots are available, but if not we can create a Network Bridge off an existing host adapter; if the adapter is 10Gbit capable, the VirtIO driver will enable 10Gbit connectivity for the VM.

 

7.2.1 - Creating a Network Bridge for Virt-manager

Spoiler

Within our host we need to make changes to the current Network Adapter Settings. Navigate to Settings -> Network and edit the configuration of the adapter you would like to create a bridge on.

 

settings-network-menu.png.b6ce6755e24d3fc48b80ae518fef88d5.png

 

From here click Remove Connection Profile. We no longer require the default Ubuntu configuration.

NOTE: This will cause a temporary loss of Internet access.

 

remove-connection-profile.png.357f6c94f4bcc6c7706fc55dfcc3b42d.png

 

Now open a Terminal and run the command:




nm-connection-editor

This will bring up a menu called Network Connections.

 

network-connections-main.png.de4175e9bc44e2df550d7430309390c8.png

 

From here click the plus sign in the bottom left corner. A Window will pop up. Scroll down the options menus until you see Bridge (underneath Virtual) then press create. From the following setup page click Add on the right.

 

new-bridge-add.png.9d0e471786896a4ec29048bd85d92995.png

 

Leave the Connection Type as Ethernet and click Create. On the next pop-up page, click the drop-down to the right of Device, locate your physical Network Adapter, then click Save.

 

editing-bridge0-slave.png.1fa104d3ca4af87d1c71c3b2dc9cf936.png
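
If you prefer the command line, an equivalent bridge can be created with nmcli; a sketch, assuming the physical adapter is named enp5s0 (substitute your own device name):


nmcli connection add type bridge ifname bridge0 con-name bridge0
nmcli connection add type bridge-slave ifname enp5s0 master bridge0
nmcli connection up bridge0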

 

Now exit out of Network Connections and run the command:




ip ad

This will list all of your network adapters and their assigned IP addresses. What you should see is your network adapter with no IP configuration but the addition of a bridge (the one we just created). This should have an IP. If you run a ping you should have Internet access.

 

With your VM turned off, highlight it on the main virt-manager page and click Open. From there click the blue lowercase "i" icon. This will bring up the configuration menu for our VM. Select the NIC in the left column list. It has an option called Network source; click the drop-down and our bridge should be there. Also be sure to change the Device model from e1000 to VirtIO.

 

nic-edit.png.dffc4f77c1985924e2a0100c60f02a03.png

 

Click apply and start the VM again. Next-up installing the VirtIO driver.

 

7.2.2 - Installing VirtIO driver in Windows

Spoiler

After starting the VM you'll find that you have no internet access. This is because Windows has no VirtIO network driver. Go to Device Manager; this can be done by right-clicking on Start (the Windows icon) and selecting Device Manager. Immediately upon entering you will see an Ethernet Controller under Other devices.

 

device-manager-other-devices.thumb.png.22e04dd2939d8e980677d0a922c2b4fc.png

 

Now:

  1. Right-Click it.
  2. Update Driver
  3. Browse my computer for driver software
  4. Scroll down to the virtio-win CD. Click on it. Then press OK. Then Next.
  5. Windows Security: Press Install

With it installed Windows should recognize that there's a new active network connection. With the install done you can click Close. After this open CMD and run: 




ipconfig /all

You should see your local Default Gateway, have a local IP, and have the ability to communicate with local devices on the network.

 

win-cmd.thumb.png.e1ecb97e83699158b21745035c5423df.png

 

 

 

 

8. Looking Glass Configuration

Spoiler

Looking Glass is the special sauce that will let us avoid dedicating a monitor to the VM's GPU while at the same time allowing our Keyboard & Mouse to control Windows without any extra hoops to jump through. We've already built it on the host; we still need to configure it and install it on the Guest.

 

8.1 - Modify libvirt Config

Spoiler

Here is where we go back into the Console. Each VM managed by libvirt has an .XML definition file. This file comes with many additional tunables that cannot be addressed through the Virt-manager UI. We can start editing it by using the virsh edit command followed by the name of our Virtual Machine.



virsh edit win-10-pro-x64

You will be prompted for your preference of command-line editor; nano (#1) would be the easiest. Scroll down toward the end of the file; you need to add four lines in between

</memballoon> and </devices>, as follows:



</memballoon>
<shmem name='looking-glass'>
  <model type='ivshmem-plain'/>
  <size unit='M'>32</size>
</shmem>
</devices>

NOTE: A unit size of 32M is for a display resolution of 1920x1080. For larger displays this will have to be adjusted using the following formula:



width x height x 4 x 2 = total bytes
(total bytes / 1024 / 1024) + 2 = total megabytes

NOTE: This value must be rounded up to the nearest power of 2.
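
As a worked example for 1920x1080:


1920 x 1080 x 4 x 2 = 16,588,800 bytes
16,588,800 / 1024 / 1024 ≈ 15.8 MB
15.8 MB + 2 ≈ 17.8 MB, rounded up to the next power of two = 32 MB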

 

It will look like this when completed:

 

libvirt-edited.png.d6e75dcdd3a2072940eb19e12f95e9c3.png

 

Save, & Exit.

 

Now we need to create the shared memory file and assign the proper owner and permissions. This can be accomplished with the following command (replace vfio with the user that will run the Looking Glass client):



sudo touch /dev/shm/looking-glass && sudo chown vfio:kvm /dev/shm/looking-glass && sudo chmod 660 /dev/shm/looking-glass

 

 

8.2 - Making AppArmor Exceptions

Spoiler

In order for Looking Glass to mediate a connection between the Guest and our Host, we have to make some AppArmor security exceptions. This can be done by editing the libvirt-qemu abstraction:



sudo nano /etc/apparmor.d/abstractions/libvirt-qemu

Hold Ctrl and hit W, then type "usb access" and hit Enter. This will take you directly to where you need to make the edits. Modify the libvirt-qemu file to reflect the following snippet.



# for usb access
   /dev/bus/usb/** rw,
   /etc/udev/udev.conf r,
   /sys/bus/ r,
   /sys/class/ r,
   /run/udev/data/* rw,
   /dev/input/* rw,

# Looking Glass
   /dev/shm/looking-glass rw,

Save, Exit, and restart AppArmor with:



sudo systemctl restart apparmor

 

 

8.3 - Installing the IVSHMEM Driver

Spoiler

Within our Windows VM, start by downloading the driver from here. We need version 0.1.161 or later. Now extract the Win10 folder from the .zip archive.

 

To perform the installation open Device Manager and find PCI standard RAM Controller under System devices.

 

ivshmem-driver-install.png.a5f1ef01ffa8c9c0c4d31c00273febd9.png

Now:

  1. Right Click -> Update Driver
  2. Browse my computer for driver software -> Browse
  3. Find the 0.1.161 or later Win10 folder, click it. -> OK -> Next
  4. If you are asked if you'd like to install say yes.

You should now see an IVSHMEM Device.

 

ivshmem-driver-installed.png.570f1d3ca43c8c2b3b5921c863f8a523.png

 

 

8.4 - Installing the SPICE Guest Tools

Spoiler

In order to achieve clipboard functionality, along with solving some other issues, we need to install the SPICE Guest Tools. They can be downloaded from here; the specific download we want is located under Windows binaries and is called spice-guest-tools. It can be installed normally like any other application. Afterwards, restart the VM, then test the clipboard by copy/pasting something through Looking Glass to the Linux Host and vice versa.

 

8.5 - Installing the Looking Glass Host (on Windows)

Spoiler

Before we can launch Looking Glass on our host we have to do two things:

  1. Install the launcher on our Guest.
  2. Disable the Microsoft Basic Display Adapter

8.5.1 - Installing & Auto-Launching the Looking Glass Host

Spoiler

Start by downloading the Windows host application from here. The download comes from the Windows logo on the far right of the page.

 

NOTE: If you went with an Official Release when you downloaded the Source Code, be sure to use the accompanying download for the Windows Host.

 

With this downloaded we need it to automatically run at each system startup. This can be done by creating a Windows Task. To create the Task launch CMD as an Administrator.

 

NOTE: You must have admin privileges here or else it will not work.

 

Modify the below command to suit your configuration then run it:




SCHTASKS /Create /TN "Looking Glass" /SC  ONLOGON /RL HIGHEST /TR C:\Users\<YourUserName>\<YourPath>\looking-glass-host.exe

Now whenever the VM is started Windows should automatically launch the looking-glass-host.exe file.

 

8.5.2 - Disabling the Microsoft Basic Display Adapter

Spoiler

Because it interferes with capture, we have to disable the Microsoft Basic Display Adapter (the VM's viewing adapter) before Looking Glass can operate correctly.

 

NOTE: This will disable our viewing window within virt-manager. At this time you may want to plug a display into the passed-through GPU if you haven't already. The Virt-manager window can still act as keyboard/mouse pass-through, though.

 

Go into Device Manager and look under Display adapters:

 

microsoft-basic-display-adapter.png.04e4d19097340156903a08e12774b547.png

 

  1. Right-Click
  2. Disable device
  3. Yes
  4. Now restart the VM

 

 

 

8.6 - Launching Looking Glass (on Host)

Spoiler

Referring back to Step 8.1 when we ran this command:



sudo touch /dev/shm/looking-glass && sudo chown vfio:kvm /dev/shm/looking-glass && sudo chmod 660 /dev/shm/looking-glass

Part of this needs to be run each time Looking Glass is executed on the Host. To do this we're going to write a very simple script. I'm going to create mine in my home folder, but you can make yours anywhere.

 

Start by naming it something recognizable.



nano looking-glass-launcher.sh

Now within the editor paste the following lines:



sudo chown vfio:kvm /dev/shm/looking-glass
sudo chmod 660 /dev/shm/looking-glass

NOTE: Don't forget to replace vfio in vfio:kvm with the user you plan to use for executing the looking-glass-client.

 

Before we append the launcher we need to specify an argument. All available arguments can be viewed by running



./looking-glass-client -h

from the build directory. However for the time being we're only interested in one:



win:size=widthxheight

This forces the correct window size when displaying Looking Glass. Adding the argument when launching the client looks like this:



./LookingGlass-Release-B1/client/build/looking-glass-client win:size=1920x1080

Then to put the script all together:



sudo chown vfio:kvm /dev/shm/looking-glass
sudo chmod 660 /dev/shm/looking-glass
./LookingGlass-Release-B1/client/build/looking-glass-client win:size=1920x1080

Save, Exit. Now to make the script executable:



chmod 744 looking-glass-launcher.sh

Now simply run it.



./looking-glass-launcher.sh

If all is well you will be prompted for your password, then Looking Glass will launch and you will see your Windows desktop. To full-screen the window, press Print Screen + F. You should now be able to freely cross between Windows and Linux with your Keyboard & Mouse.

 

 

9. Scream Audio

Spoiler

Scream is a network audio streaming application that works very well for transmitting audio from our Windows VM to our Linux host via Multicast or Unicast transmissions, enabling the use of one set of headphones/speakers regardless of whether you're in GNU/Linux or Windows.

 

9.1 - Windows Setup

Spoiler

Start by visiting the Github project page and downloading the latest version of the .ZIP file: https://github.com/duncanthrax/scream/releases

 

Open the .ZIP file and extract the install folder to your desktop. Now open the folder and run Install.bat as an administrator. 

 

scream-windows-install.thumb.png.62cb818e27951bd90a1e1048bcb326ee.png

 

Accept the pop-up messages and continue. Once it's done exit everything and restart the VM.

 

After Windows finishes starting up look at your audio devices in the bottom right corner of the desktop. You need to change the default audio output device to the Scream audio device.

 

Now go back to your Linux desktop (leave the VM running)

 

9.2 - Linux Setup

Spoiler

Scream should already be downloaded from earlier. Before we can use it on the host, we have to build it using make. Find the scream folder, navigate to scream/Receivers/pulseaudio, and run the make command:



make
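
NOTE: If make fails complaining about missing headers, the build tools and PulseAudio development files likely need to be installed first; a hedged sketch, assuming Ubuntu package names:


sudo apt install -y build-essential libpulse-dev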

You should see the following output:

 

scream-make.png.4bec698dcb7bca1a75ad0b8214c4a358.png

 

Scream comes with a few arguments that can be appended to the executable:

 

scream-help.png.7d32ac603851b3d45852c6be8a2e0e12.png

 

For our basic application, however, all we need to pay attention to is -i. This lets us specify the interface (or bridge) we want Scream to listen on. If the default name of bridge0 was kept while setting up your bridge, run Scream with the following command; if not, change it to reflect the name of your network bridge:



./scream-pulse -i bridge0

If the execution was successful and Scream found the stream from the VM you should see the following output.

 

scream-pulse-running.png.a09aa2a4a0c2f4a3902a2089a31b9fd4.png

 

Without any additional configuration Scream supports stereo audio. If you want 5.1 or 7.1 surround, Scream may require additional setup. Now go back to your Windows VM and play something with sound. If you can hear it, Scream is working.

 

9.3 - Making Things Easy

Spoiler

We already have our script for executing Looking Glass. Now we want Scream to run with it so we don't have to launch both independently. To do this we will need to append an ampersand to the end of the last line of the script then add a line that points to the scream-pulse executable with our arguments:



./LookingGlass-Release-B1/client/build/looking-glass-client win:size=1920x1080 &
./scream/Receivers/pulseaudio/scream-pulse -i bridge0

The full file should look like this when you're done:

 

scream-appended-to-script.png.a5b10af4523023d8328fad00fe15babe.png

 

Now both Looking Glass and Scream will launch together.

 

 

10. Performance Optimization

Spoiler

10.1 - Hugepages

Spoiler

Hugepages is a feature of system memory (RAM) management that can greatly increase the performance of Virtual Machine guests.

 

NOTE: Enabling hugepages permanently takes the memory away from the host. This means the host will no longer be able to use it, regardless of whether the VM is running or not.

 

First off, let's check whether it's already enabled by running:



cat /proc/meminfo | grep Huge

Some distributions enable this by default.

 

Unfortunately with Ubuntu 19.04 here this does not seem to be the case:

 

hugepages-disabled.png.21bbb80af86127197df22e5c07b77de8.png

 

To enable Hugepages start by looking inside sysctl.conf:



sudo nano /etc/sysctl.conf

for the following entries:



vm.nr_hugepages=8192
vm.hugetlb_shm_group=48

In our case these entries don't exist, so we will have to add them. The above values are appropriate for passing about 16GB of RAM to a VM. If your VM needs less or more you will have to adjust them: with the default 2MB hugepage size you need roughly one hugepage for every 2MB of RAM given to the VM. Once you are done, Save and Exit.
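
For example, with the default 2MB hugepage size:


16 GB for the VM = 16384 MB
16384 MB / 2 MB per page = 8192 pages -> vm.nr_hugepages=8192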

 

It's recommended by Red Hat to disable transparent hugepages after doing this.



echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/defrag
echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled

 

Now we need to enable our VM to use hugepages by editing the .XML file again:



virsh edit name-of-vm

By adding the following lines:



<memoryBacking>
    <hugepages/>
  </memoryBacking>

Beneath:



<memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>

 

It will look like this when complete:

 

hugepages-xml-appended.png.aa72d23a11c36d03cf43797046588435.png

 

Save, Exit, then reboot the machine. When it's done rerun:



cat /proc/meminfo | grep Huge

You should now see zeros for AnonHugePages & ShmemHugePages then values higher than zero for HugePages_Total, HugePages_Free, & Hugetlb.

 

hugepages-enabled.png.c73ab4fabc168459f39bbc2f1d84d1c8.png

 

You should also see that the amount of RAM the system is currently utilizing is higher than the RAM you gave to the VM even when the VM isn't running. When you do start the VM RAM utilization should not increase by much at all. This means hugepages are enabled and working.

 

10.2 - CPU Pinning w/ Multiple NUMA (Non-Uniform Memory Access) Nodes

Spoiler

CPU pinning is a process wherein a vCPU is tied to a physical CPU core or thread. On systems with multiple NUMA nodes (often the case with multi-socket systems or multi-die processors) this prevents requests to/from memory (RAM) from having to cross Intel's QPI link or AMD's Infinity Fabric. The result is a noticeable performance boost under certain applications.

 

NOTE: This is not the case for all systems/processors; in many instances the system will utilize UMA, where there is only one CPU and one bank of memory. Do your research on your hardware. Pinning the vCPUs on UMA systems won't hurt anything, and if there are still gains to be had it may be worth doing.

 

Getting started, I am using a Ryzen Threadripper 1950X. This is a two-die, 16-core processor with two NUMA nodes. Although the hardware knows this, BIOS manufacturers will frequently set a feature known as Memory Interleave to Auto. This creates a layer of abstraction which makes the NUMA nodes operate like a single UMA node. It's important to change this from Auto to Channel.

 

Running the program lstopo displays the topology of which PCIe devices (by Device ID) are connected to which CPU or die and the accompanying CPU cores/threads. To launch lstopo run the console command:



lstopo

 

lstopo.png.2fc623821789c0f4509192c616c4eed3.png 

 

NOTE: When using identical GPUs the command lspci -vnn can be used to display which NUMA Node the GPU is connected to. Just search for the Device ID or Device Address. It will list the NUMA Node.
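
The CPU-to-node mapping can also be checked with lscpu, which lists which threads belong to each NUMA node:


lscpu | grep -i numa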

 

Having verified that our GPU is connected to NUMA Node 0, we need to pin our vCPUs to cores 0-7, which correspond to threads 0-7 and 16-23. For my configuration, underneath the vcpu line I will be appending:



<vcpu placement='static'>16</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <vcpupin vcpu='8' cpuset='16'/>
    <vcpupin vcpu='9' cpuset='17'/>
    <vcpupin vcpu='10' cpuset='18'/>
    <vcpupin vcpu='11' cpuset='19'/>
    <vcpupin vcpu='12' cpuset='20'/>
    <vcpupin vcpu='13' cpuset='21'/>
    <vcpupin vcpu='14' cpuset='22'/>
    <vcpupin vcpu='15' cpuset='23'/>
    <emulatorpin cpuset='8-15,24-31'/>
    <iothreadpin iothread='1' cpuset='8-15'/>
    <iothreadpin iothread='2' cpuset='24-31'/>
  </cputune>
  • vcpupin = Pins the vCPU to a physical CPU thread
  • emulatorpin = Helps with cache locality.
  • iothreadpin = Helps the performance of virtio devices.

It should look like this when you're done:

 

vcpupin-xml-edit.png.104fba44b7c1da9151c5aeba6bd6c02e.png

 

After saving and exiting, running the command:



virsh vcpupin name-of-vm

should show you the CPU affinity you assigned:

 

cpu-affinity.png.ef2f615233fae66a1a1057f71b8caf40.png

 

When the VM runs all of the vCPUs will now be locked to these threads.

 

10.3 - Hyperv

Spoiler

This section is less of a performance enhancer and more of a resource optimizer. By making a few small changes to the hyperv section of the .XML file you can save some resources on the host when the VM is sitting idle. This can be helpful with scaling.

 

Add the following lines to the end of the hyperv section of the .XML file:



<vpindex state='on'/>
<runtime state='on'/>
<synic state='on'/>
<stimer state='on'/>

It will look like this when you're done:

 

hyperv-edted.png.744eacee6b11f9d162bb68d5b1021b3e.png

 

Save, then Exit.

 

 

11. Troubleshooting

Spoiler

This is a progressively growing list to help people with various issues. The more issues we solve, the more solutions will be appended to this section to help others in the future.

 

11.1 - NVIDIA Driver Won't Install: Device Manager Error Code 43

Spoiler

An issue that has plagued NVIDIA desktop series GPUs is the notorious driver error code 43 in Device Manager during driver installation. The reason is that since driver version 337.88, NVIDIA has included a component within their driver software that detects whether or not the GPU is being used within a Virtual Environment. If it detects that it is, and that the GPU is not a Quadro, Tesla, or other workstation/server/compute series card, it will refuse to install the driver and Device Manager will report error 43.

 

Workarounds for this are not guaranteed, but some have helped. The easiest solution is to pass through an AMD GPU, but one workaround for NVIDIA is editing the VM .XML file with:



<features>
	<hyperv>
		...
		<vendor_id state='on' value='linustech'/>
		...
	</hyperv>
	...
	<kvm>
	<hidden state='on'/>
	</kvm>
</features>

NOTE: The "value" parameter can be anything so long as it does not exceed 12 characters in length.

 

11.2 - Looking Glass Issues

Spoiler

11.2.1 - Desktop Doesn't Appear (AMD GPU)

Spoiler

Any AMD GPU made after the Radeon HD series has been designed in such a way that the GPU effectively goes to sleep when a display is not connected to it. You can do two things to fix this:

  1. Use a spare port on your monitor.
    1. This can double as a backup in the event Looking Glass fails.
  2. Buy one of the cheap HDMI dongles used by cryptocurrency miners.
    1. They're surprisingly handy for headless operations such as this.

 

 

11.2.2 - Lock Screen Doesn't Load | Security Dialog Boxes Don't Load

Spoiler

Due to the way Windows has set up its security, screen capture can't run while secure pop-ups are displayed. This also applies to the login screen. This prevents malware or other viruses from reading security dialogs and gaining unauthorized access. Right now the only workaround I have is to disable password login and set your account to not prompt you with security dialogs.

 

It may be possible to disable the security function that locks out screen capture during Secure Pop-ups but I currently have not researched it.

 

11.2.3 - Cursor on Guest does not stay aligned with cursor on Host

Spoiler

This is caused by Windows mouse acceleration. This can be disabled in Windows by going to:




Start > Control Panel > Hardware and Sound > Mouse > Pointer Options > Enhance pointer precision

Untick the box, then calibrate the cursor by moving it to all four corners of the display. This will greatly help with the cursor seemingly hitting invisible walls due to the Host/Guest cursor misalignment.

 

11.2.4 - Scroll wheel on mouse goes the same direction regardless of spin

Spoiler

This should be fixed by installing the SPICE Guest Tools.

 

11.2.5 - Keyboard/Mouse input (SPICE) stops working after installing SPICE Guest Tools

Spoiler

This can be caused by the host session running on Wayland. If you don't know whether it is, look at the Terminal you used to launch Looking Glass; the output will have messages similar to the following:




[I] main.c:1015 | run | Wayland detected
[I] main.c:1024 | run | SDL_VIDEODRIVER has been set to wayland
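
The session type can also be checked directly from any terminal; it will print either wayland or x11:


echo $XDG_SESSION_TYPE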

If this is the case, restart the host. When you reach the login screen check the login options available (on Ubuntu 19.04 it's a Gear next to the login button). Make sure this isn't set to Wayland. Then login and start the VM.

 

NOTE: It may be necessary to uninstall and reinstall the SPICE Guest Tools, then restart the VM, before the clipboard will start working.

 

 

11.3 - NUMA Nodes

Spoiler

11.3.1 - lstopo (I Have Multiple Nodes but Only See One)

Spoiler

If you have verified that your system does use multiple NUMA nodes, be sure that you've set Memory Interleave from Auto to Channel within your BIOS.
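
You can also double-check how many NUMA nodes the kernel actually exposes, independent of lstopo (assuming numactl is installed; sudo apt install numactl if it isn't):

lscpu | grep -i numa
numactl --hardware

If these still report a single node after changing the BIOS setting, the board is presenting all memory as one node.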

 

 

11.4 - Error: Invalid Argument: Could not find capabilities for arch=x86_64 domaintype=kvm

Spoiler

11.4.1 - memoryBacking/Hugepages | CPU pinning

Spoiler

This looks to be a glitch in the software as far as I have seen. A very strange "solution" I found was to create a new VM, edit its config with the same parameters, and save it. Then go back to the original VM and perform the same edit again. That seems to work. Afterwards you can just delete the second VM if you have no use for it.
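
For reference, the edit in question is typically just the <memoryBacking> hugepages block from the Hugepages section (a minimal sketch; the rest of the domain XML is omitted here):

<domain type='kvm'>
	...
	<memoryBacking>
		<hugepages/>
	</memoryBacking>
	...
</domain>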

 

 

11.5 - Using an iGPU?

Spoiler

I unfortunately have no way of testing whether passing through an iGPU is possible. Theoretically, if the iGPU appears as a PCIe device and it sits within its own IOMMU group after running ls-iommu.sh, I see no reason why it couldn't work. Your alternative option is to pass through your dGPU and run your host off the iGPU.
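
If you want to test the theory on your own system, a quick way to check whether the iGPU shows up as its own PCIe device and which IOMMU group it lands in (assuming ls-iommu.sh from section 3.3 is in your working directory and prints device descriptions):

lspci -nn | grep -Ei 'vga|display'
./ls-iommu.sh | grep -Ei 'vga|display'

If the iGPU is listed and sits alone in its group, it is at least a candidate.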

 

11.6 - IOMMU Groups

Spoiler

11.6.1 - My GPU appears in the same group as another/other device(s).

Spoiler

From what I have heard/learned it is possible to break up IOMMU groups, but the process is tedious and/or complicated. How to do this is something I have not researched.

 

The easiest/best option is to try installing the GPU in a different slot or to pass through the other GPU that does appear in its own IOMMU group.
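
If you just want to see exactly which devices share the group without re-running the full script, the kernel exposes this under sysfs (the group number 15 below is only an example; use the group number reported for your GPU):

ls /sys/kernel/iommu_groups/15/devices/

Devices in the same group generally can't be split between host and guest, which is why moving the card to another slot can help.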

 

 

11.7 - Audio

Spoiler

11.7.1 - Static Noises

Spoiler

If you're experiencing static, one possible solution is to change from the default multicast transmission to unicast. This can be done by appending the -u argument:




./scream-pulse -i bridge0 -u

 

 

 

11.8 - Misc Problems After Pass-though

Spoiler

One issue that I currently don't have a specific error message/sign for is that not all GPUs play nicely with GPU pass-through. This is independent of NVIDIA's Error Code 43. In the event you are having any sort of crashing, freezing, black screens, etc., you may want to try a different GPU.

 

 


Damn, I kinda wish I had a second GPU now, this seems like it would be an amazing learning journey for advanced Linux stuff.


2 minutes ago, Master Disaster said:

Damn, I kinda wish I had a second GPU now, this seems like it would be an amazing learning journey for advanced Linux stuff.

Quote

11.5 - Using an iGPU?

If you meet this circumstance it may be possible. I have not tested it though, as I have no Ryzen or Intel iGPU-equipped CPU at my disposal.


5 hours ago, Windows7ge said:

For NVIDIA desktop series GPUs NVIDIA built a component into their driver that checks if the GPU is running within a Virtual Environment.

Yeah, this just makes me resent my GTX 1080 even more. Not only do Pascal GPUs suffer from a specific blackscreen issue on X79 systems with VT-D enabled, NVIDIA also wants to make extra sure that even if you do enable it anyway you can't really use it.

 

I swear if it didn't draw double the power I'd have swapped it for a Vega64 a long time ago. Most Vega64 owners I know totally wouldn't mind exchanging video cards (I wonder why that might be...) :/


5 minutes ago, silentdragon95 said:

Yeah, this just makes me resent my GTX 1080 even more. Not only do Pascal GPUs suffer from a specific blackscreen issue on X79 systems with VT-D enabled, NVIDIA also wants to make extra sure that even if you do enable it anyway you can't really use it.

 

I swear if it didn't draw double the power I'd have swapped it for a Vega64 a long time ago. Most Vega64 owners I know totally wouldn't mind exchanging video cards (I wonder why that might be...) :/

Quote

11.1 - NVIDIA Driver Won't Install: Device Manager Error Code 43

You may have some luck with this section. Can't guarantee the results though.


Is there any way to create a gaming VM with just one GPU? Adding a GPU means reducing available PCIe lanes, which I'd prefer to avoid... and an extra cost.


19 hours ago, Windows7ge said:

You may have some luck with this section. Can't guarantee the results though.

That solution worked with my own 1080 with '1234567890ab'  as the vendor id.

 

37 minutes ago, GotCritified said:

Is there any way to create a gaming VM with just one GPU? Adding a GPU means reducing available PCIe lanes, which I'd prefer to avoid... and an extra cost.

If you mean using the same GPU for both host and guest in a single-GPU configuration, the effective answer is no. If you have multiple GPUs and you want to use one of them on both the guest and host, you could potentially reset the GPU to rebind it to either the host or the guest without rebooting, but it depends on the exact card. Navi currently does not support resetting, for example.
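
For cards that do handle resets, the rebinding itself is usually just a pair of virsh commands rather than a reboot (a rough sketch; pci_0000_01_00_0 is a placeholder address, substitute the one lspci reports for your GPU, and don't do this while a VM is using the card):

# Detach the GPU from its host driver so a VM can take it
virsh nodedev-detach pci_0000_01_00_0

# Hand it back to the host once the VM has shut down
virsh nodedev-reattach pci_0000_01_00_0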


36 minutes ago, GotCritified said:

Is there any way to create a gaming VM with just one GPU? Adding a GPU means reducing available PCIe lanes, which I'd prefer to avoid... and an extra cost.

If you have an AMD card that supports SR-IOV or an NVIDIA GPU that supports vGPU, it is possible to have one GPU run multiple VMs. I don't know about having said GPU run the host & a VM.

 

Does your system have an iGPU? This could be a potential option. I just don't have the resources to check it.


4 minutes ago, 2FA said:

That solution worked with my own 1080 with '1234567890ab'  as the vendor id.

That's good to hear.


Does this CPU pinning make sense for the topology? Ryzen is a bit different than Threadripper.

Spoiler

[attached screenshot]

 

Spoiler

[attached screenshot]

 


16 minutes ago, 2FA said:

Does this CPU pinning make sense for the topology? Ryzen is a bit different than Threadripper.

I don't know if the 3900X falls under the same rules as NUMA with what I believe is 2 dies on one substrate.

 

I'm reading that AMD has sped up the buses that connect the two but if you can force the vCPUs to stay on one die (the one closest to your GPU) it will yield the best results even if the performance increase isn't significant.

 

I don't know if the AM4 platform has a Memory Interleave setting in your BIOS though. If it's not an option then you can't force the vCPUs to either die.

 

Although there still may be a benefit to forcing a vCPU to stay on one thread/core so it doesn't have to constantly go back and forth to cache/memory when a process moves to a different core/thread.

 

From what I read unless you're using virtio devices you shouldn't need <iothreadpin>.

 

<emulatorpin> is supposed to help with which cores/threads are used for actually managing the emulation/the hypervisor. That way the cores Windows is set to use aren't used by QEMU (at least that's what I read).
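
For anyone reading along, all of these tags live under <cputune> in the domain XML. A rough sketch only; the cpuset numbers are placeholders and should come from your own lstopo output:

<cputune>
	<vcpupin vcpu='0' cpuset='2'/>
	<vcpupin vcpu='1' cpuset='3'/>
	<vcpupin vcpu='2' cpuset='4'/>
	<vcpupin vcpu='3' cpuset='5'/>
	<emulatorpin cpuset='0-1'/>
</cputune>

<iothreadpin> would go in the same block, but it also needs an <iothreads> element defined and only matters if you're using VirtIO devices.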


3 minutes ago, Windows7ge said:

I don't know if the 3900X falls under the same rules as NUMA with what I believe is 2 dies on one substrate.

 

I'm reading that AMD has sped up the buses that connect the two but if you can force the vCPUs to stay on one die (the one closest to your GPU) it will yield the best results even if the performance increase isn't significant.

 

I don't know if the AM4 platform has a Memory Interleave setting in your BIOS though. If it's not an option then you can't force the vCPUs to either die.

 

Although there still may be a benefit to forcing a vCPU to stay on one thread/core so it doesn't have to constantly go back and forth to cache/memory when a process moves to a different core/thread.

 

From what I read unless you're using virtio devices you shouldn't need <iothreadpin>.

 

<emulatorpin> is supposed to help with which cores/threads are used for actually managing the emulation/the hypervisor. That way the cores Windows is set to use aren't used by QEMU (at least that's what I read).

3900X is a single chip with two CCDs, one NUMA node. I checked and Memory Interleaving is enabled by default.


19 minutes ago, 2FA said:

3900X is a single chip with two CCDs, one NUMA node. I checked and Memory Interleaving is enabled by default.

Memory Interleave has four or five different settings (that may just be for TR and multi-socket systems though). Is it set to channel?

 

And that's what I was reading. I'm not certain what a CCD is, but it sounds like it uses UMA where the two dies share one bus to one bank of memory, so CPU pinning won't have as profound an impact on the performance of the VM for you. I would still do it anyway just so the VM doesn't have its tasks thrown around a pool of random cores. Forcing jobs to stay on assigned cores could still have some performance benefit.

 

If you'd be willing to run a couple of benchmarks both with and without pinning for me, I could use that as a future point of reference as to whether there's any point at all on the AM4 Ryzen 3XXX series CPUs.


1 minute ago, Windows7ge said:

Memory Interleave has four or five different settings (that may just be for TR and multi-socket systems though). Is it set to channel?

 

And that's what I was reading. I'm not certain what a CCD is, but it sounds like it uses UMA where the two dies share one bus to one bank of memory, so CPU pinning won't have as profound an impact on the performance of the VM for you. I would still do it anyway just so the VM doesn't have its tasks thrown around a pool of random cores. Forcing jobs to stay on assigned cores could still have some performance benefit.

 

If you'd be willing to run a couple of benchmarks both with and without pinning for me, I could use that as a future point of reference as to whether there's any point at all on the AM4 Ryzen 3XXX series CPUs.

It only had the option of Auto and Disabled, probably since it's a single node.

 

CCDs are the building blocks of Zen/Zen+/Zen2. Each CCD contains two CCX with a CCX being comprised of several cores sharing L3 cache. Zen/Zen+ maxed out at one CCD per chip but Zen2 increased that to two CCDs per chip.

Spoiler

[attached image: "The King is Dead, Long Live the King! - Ryzen 3900X ..."]

Looking at that topology, I'm not sure if it's better to use one thread per core or all on the same CCD. I'll have to test that.


Oof, using half my 3900X threads crushes even a 3700X in Cinebench R20, let alone the 3600X.

3900X (12t) - 5272

3700X (16t) - 4760

3600X (12t) - 3714

 

EDIT: Single core ended up being 496, which isn't far behind bare metal.


30 minutes ago, 2FA said:

It only had the option of Auto and Disabled, probably since it's a single node.

 

CCDs are the building blocks of Zen/Zen+/Zen2. Each CCD contains two CCX with a CCX being comprised of several cores sharing L3 cache. Zen/Zen+ maxed out at one CCD per chip but Zen2 increased that to two CCDs per chip.

 

Looking at that topology, I'm not sure if it's better to use one thread per core or all on the same CCD. I'll have to test that.

The problem here is that if it doesn't use NUMA node technology, lstopo can't show you which threads are part of which CCD, or CCX for that matter. You'll have to use another program to identify which threads belong to which.
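
One option that doesn't rely on NUMA information is the cache column of lscpu (the util-linux version that ships with Ubuntu supports this); threads that share the same L3 index belong to the same CCX:

lscpu --extended=CPU,CORE,SOCKET,CACHE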

 

16 minutes ago, 2FA said:

Oof, using half my 3900X threads crushes even a 3700X in Cinebench R20, let alone the 3600X.

3900X (12t) - 5272

3700X (16t) - 4760

3600X (12t) - 3714

 

EDIT: Single core ended up being 496, which isn't far behind bare metal.

Hugepages are probably the bigger factor in that but something very CPU compute heavy should see a perk from CPU pinning if any is to be had.


1 hour ago, Windows7ge said:

Problem here is if it doesn't use a NUMA node technology lstopo can't show you which threads are a part of which CCD, or CCX for that matter. You'll have to use another program to identify which threads belong to who.

 

Hugepages are probably the bigger factor in that but something very CPU compute heavy should see a perk from CPU pinning if any is to be had.

I just ran it again, granted the emulator pinning wasn't really correct in both cases, but using one thread per core is looking to be a lot better than keeping it on two CCXs. Using dedicated cores, to no one's surprise, is better than SMT.


4 minutes ago, 2FA said:

I just ran it again, granted the emulator pinning wasn't really correct in both cases, but using one thread per core is looking to be a lot better than keeping it on two CCXs. Using dedicated cores, to no one's surprise, is better than SMT.

If you're going to tinker with the .XML file for performance optimization beyond what I outlined above consider posting it here when you think you're done. It may help others with similar series processors.


EDIT: Man I suck at remembering what I did each run.

 

3900X 12c/24t CPU

12 threads assigned to Windows 10 VM

Testing done via Cinebench R20

 

Topology:

Spoiler

[attached screenshot: topology]

 

No thread pinning:

Spoiler

[attached screenshot: Cinebench R20 score]

Keeping all threads on two CCXs (and I think one CCD):

Spoiler

[attached screenshots: Cinebench R20 score and pinning config]

Pinning one thread per core:

Spoiler

[attached screenshots: Cinebench R20 score and pinning config]

 

Emulator pinning could have been better optimized but stayed consistent, except in the case of no pinning at all. In conclusion, pin one thread per core for maximum performance.


Personally, I use virtio-scsi instead of virtio-blk and set both "Discard mode" and "Detect zeroes" to "unmap". This way, the HDD image on the hypervisor only ever uses as much space as the data inside it (as long as the hypervisor's filesystem supports it, e.g. Btrfs or ZFS). Without these options, if you e.g. installed some 20GB application inside the VM and later deleted it, the image would not shrink by the aforementioned 20GB; any blocks the VM allocated would remain allocated, even if the data inside them was considered deleted by the guest OS.
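
For anyone wanting to replicate this, the relevant bit of the domain XML (under <devices>) ends up looking roughly like the following (a sketch only; the image path and qcow2 format are just examples, adjust them to your own setup):

<controller type='scsi' model='virtio-scsi'/>
<disk type='file' device='disk'>
	<driver name='qemu' type='qcow2' discard='unmap' detect_zeroes='unmap'/>
	<source file='/var/lib/libvirt/images/win10.qcow2'/>
	<target dev='sda' bus='scsi'/>
</disk>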

 

#The image for one of my VMs, after having gone through multiple large Windows-updates, all sorts of temporary software-installations and all, still only consumes 13GB of space on the hypervisor's SSD.

root@visor:/var/lib/libvirt/images# dir win20
-rw------- 1 root root 80G helmi  17 17:02 win20
root@visor:/var/lib/libvirt/images# du -hs win20
13G     win20
root@visor:/var/lib/libvirt/images#

 


Another optimization I do to my Windows 10 VMs is to add the following lines to the config in the <hyperv> section:

      <vpindex state='on'/>
      <runtime state='on'/>
      <synic state='on'/>
      <stimer state='on'/>

Without these, even when the VM is completely idle, it's consuming 100% - 119% CPU, but with these the CPU-usage drops down to 5%-7% range when idle. Quite a big improvement for little work.

 


@WereCatf Should I find the time to test these on one of my own VMs I'll add them as sections under Performance Optimization.


You may want to consider adding a note about Scream, which works with ALSA or PulseAudio to pass audio from the VM whilst using Looking Glass. I followed this guide and it is working perfectly so far.


3 hours ago, 2FA said:

You may want to consider adding a note about Scream, which works with ALSA or PulseAudio to pass audio from the VM whilst using Looking Glass. I followed this guide and it is working perfectly so far.

Audio is something worth researching and that's a great lead. Current thoughts were just USB audio. I also need to research the clipboard so people can copy/paste information between Windows & GNU/Linux.


On 2/18/2020 at 1:50 AM, WereCatf said:

Another optimization I do to my Windows 10 VMs is to add the following lines to the config in the <hyperv> section:


      <vpindex state='on'/>
      <runtime state='on'/>
      <synic state='on'/>
      <stimer state='on'/>

Without these, even when the VM is completely idle, it's consuming 100% - 119% CPU, but with these the CPU-usage drops down to 5%-7% range when idle. Quite a big improvement for little work.

 

I would like to ask: is this a utilization reduction that Windows (the VM) reports, or does it reduce the Linux host's utilization for keeping the VM running?
