Virtual machines with PCI passthrough on Ubuntu 20.04, straightforward guide for gaming on a virtual machine


Preamble

The direct way to a PCI passthrough virtual machine on Ubuntu 20.04 LTS. I try to keep changes to the host operating system to a minimum, while providing enough detail that even Linux rookies are able to participate.

The final system will run Xubuntu 20.04 as the host operating system (OS), and Windows 10 (version 2004) as the guest OS. Gaming is the main use case of the guest system.

Unfortunately, the setup process can be pretty complex. It consists of fixed base settings, some variable settings and several optional (mostly performance-related) settings. In order to keep this post readable, and because I aim to use the virtual machine for gaming only, I minimized the variable parts in favour of latency optimization. The variable topics themselves are covered in linked articles – I hope this makes sense. 🙂

The article is based on my former guides for Ubuntu 18.04 and 16.04 host systems.

Introduction to VFIO, PCI passthrough and IOMMU

Virtual Function I/O (or VFIO) allows a virtual machine (VM) direct access to a PCI hardware resource, such as a graphics processing unit (GPU). Virtual machines with GPU passthrough set up can gain close to bare-metal performance, which makes running games in a Windows virtual machine possible.

Let me make the following simplifications, in order to fulfill my claim of beginner friendliness for this guide:

PCI devices are organized in so-called IOMMU groups. In order to pass a device over to the virtual machine, we have to pass all the devices of the same IOMMU group as well. In a perfect world each device has its own IOMMU group — unfortunately that’s not the case.

Passed-through devices are isolated and thus no longer available to the host system. Furthermore, it is only possible to isolate all devices of one IOMMU group at the same time.

This means that a device which is an “IOMMU group sibling” of a passed-through device can no longer be used on the host, even when it is not used in the VM.
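
For illustration, once IOMMU is enabled (see the grub section below), sysfs tells you which group a given device belongs to. The bus address 0000:0c:00.0 is just an example from my system – adjust it to the device you are interested in:

# Print the IOMMU group of a single PCI device (example address from my system)
readlink /sys/bus/pci/devices/0000:0c:00.0/iommu_group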

Requirements

Hardware

In order to successfully follow this guide, it is mandatory that the hardware used supports virtualization and has properly separated IOMMU groups.
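
As a quick sanity check on a stock Ubuntu installation, the following commands show whether the CPU exposes its virtualization extensions and – once the BIOS flags below are set – whether KVM can be used. kvm-ok comes from the cpu-checker package:

# A non-zero count means SVM (AMD) or VT-x (Intel) is advertised by the CPU
egrep -c '(svm|vmx)' /proc/cpuinfo

# Optional: the cpu-checker package provides kvm-ok
sudo apt install cpu-checker
kvm-ok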

The used hardware

  • AMD Ryzen 7 1800X (CPU)
  • ASUS Prime X370-Pro (mainboard)
  • 2x 16 GB DDR4-3200 running at 2400 MHz (RAM)
  • Nvidia GeForce GTX 1050 Ti (host GPU; PCIe slot 1)
  • Nvidia GeForce GTX 1060 (guest GPU; PCIe slot 2)
  • 500 GB NVMe SSD (guest OS; M.2 slot)
  • 500 GB SSD (host OS; SATA)
  • 750 W PSU

When composing the system’s hardware, I was eager to avoid the necessity of kernel patching. Thus, the ACS override patch is not required for this combination of mainboard and CPU.

If your mainboard has no proper IOMMU separation, you can try to solve this by using a pre-patched kernel, or by patching the kernel on your own.

BIOS settings

Enable the following flags in your BIOS:

  • Advanced \ CPU config - SVM Module -> enable
  • Advanced \ AMD CBS - IOMMU -> enable

Attention!

The ASUS Prime X370/X470/X570-Pro BIOS versions for AMD Ryzen 3000-series support (v. 4602 – 5220) will break a PCI passthrough setup.

Error: “Unknown PCI header type ‘127’”.

BIOS versions up to (and including) 4406 (2019/03/11) are working.

BIOS versions from (and including) 5406 (2019/11/25) are working.

I used version 4207 (2018/12/08).

Host operating system settings

I have installed Xubuntu 20.04 x64 (UEFI) from here.

Ubuntu 20.04 LTS ships with kernel version 5.4, which works well for VFIO purposes – check via: uname -r

Attention!

Any kernel starting from version 4.15 works for a Ryzen passthrough setup.

The exceptions are kernel versions 5.1, 5.2 and 5.3, including all of their subversions.

Before continuing make sure that your kernel plays nice in a VFIO environment.

Breaking changes for older passthrough setups in Ubuntu 20.04

Kernel version

Starting with kernel version 5.4, the “vfio-pci” driver is no longer a kernel module, but built into the kernel. We therefore have to find a different way than the initramfs-tools/modules config file I recommended in e.g. the 18.04 guide.
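
You can verify this yourself via the kernel config file Ubuntu ships in /boot; “=y” means vfio-pci is built in, “=m” means it is still a loadable module:

grep 'CONFIG_VFIO_PCI=' /boot/config-$(uname -r)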

QEMU version

Ubuntu 20.04 ships with QEMU version 4.2, which introduces audio fixes. This needs attention if you want to reuse a virtual machine definition from an older version.

If you want to use a newer version of QEMU, you can build it on your own.

Attention!

QEMU versions 5.0.0 – 5.0.0-6 should not be used for a passthrough setup due to stability issues.

Bootmanager

This is not a problem with Ubuntu 20.04 itself, but it affects at least one popular distro based on Ubuntu – Pop!_OS 20.04.

I think starting with version 19.04, Pop!_OS uses systemd-boot as its boot manager instead of GRUB. This means passing kernel parameters works differently in Pop!_OS.

Install the required software

Install QEMU, Libvirt, the virtualization manager and related software via:

sudo apt install qemu-kvm qemu-utils libvirt-daemon-system libvirt-clients bridge-utils virt-manager ovmf
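
Afterwards it is worth a quick check that everything is in place. The group names below are the defaults created by the Ubuntu packages; adding your user to them is optional but avoids running virt-manager as root (log out and back in for the change to take effect):

# Verify the installed QEMU version (Ubuntu 20.04 ships 4.2)
qemu-system-x86_64 --version

# Make sure the libvirt daemon is up and running
systemctl status libvirtd

# Optional: allow your user to manage VMs without root
sudo usermod -aG libvirt,kvm $USER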

Setting up the PCI passthrough

We are going to passthrough the following devices to the VM:

  • 1x GPU: Nvidia GeForce 1060 GTX
  • 1x USB host controller
  • 1x SSD: 500 GB NVME M.2

Enabling IOMMU feature

Enable the IOMMU feature via your grub config. On a system with AMD Ryzen CPU run:

sudo nano /etc/default/grub

Edit the line which starts with GRUB_CMDLINE_LINUX_DEFAULT to match:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt"

In case you are using an Intel CPU the line should read:

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Once you’re done editing, save the changes and exit the editor (CTRL+x, then y, then ENTER).

Afterwards run:

sudo update-grub

→ Reboot the system when the command has finished.

Verify that IOMMU is enabled by running the following after the reboot:

dmesg | grep AMD-Vi

dmesg output

[ 0.792691] AMD-Vi: IOMMU performance counters supported 
[ 0.794428] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40 
[ 0.794429] AMD-Vi: Extended features (0xf77ef22294ada): 
[ 0.794434] AMD-Vi: Interrupt remapping enabled 
[ 0.794436] AMD-Vi: virtual APIC enabled 
[ 0.794688] AMD-Vi: Lazy IO/TLB flushing enabled

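On an Intel system the equivalent messages are tagged “DMAR” instead of “AMD-Vi”. Independent of the vendor, a populated /sys/kernel/iommu_groups/ directory is another sign that the IOMMU is active:

dmesg | grep -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/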

For systemd boot manager as used in Pop!_OS

On operating systems booting via systemd-boot, one can use the kernelstub module to provide kernel boot parameters. Use it like so:

sudo kernelstub -o "amd_iommu=on iommu=pt"

Identification of the guest GPU

Attention!

After the upcoming steps, the guest GPU will be ignored by the host OS. You have to have a second GPU for the host OS now!

In this chapter we want to identify and isolate the devices before we pass them over to the virtual machine. We are looking for a GPU and a USB controller in suitable IOMMU groups. This means either both devices have their own group, or they share one group.

The game plan is to apply the vfio-pci driver to the GPU we want to pass through, before the regular graphics driver can take control of it.

This is the most crucial step in the process. In case your mainboard does not support proper IOMMU grouping, you can still try patching your kernel with the ACS override patch.

One can use a bash script like this in order to determine devices and their grouping:

#!/bin/bash
# Lists all PCI devices sorted by IOMMU group.
# Increase the 999 if your system has more IOMMU groups.
shopt -s nullglob
for d in /sys/kernel/iommu_groups/{0..999}/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}   # extract the group number from the path
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"                 # print vendor/device info for the PCI address
done

source: wiki.archlinux.org + added sorting for the first 999 IOMMU groups
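
Save the script under any name you like (iommu.sh is just an example), make it executable, and run it:

chmod +x iommu.sh
./iommu.sh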

script output


IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 1 00:01.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 2 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 3 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 4 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 5 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 6 00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]
IOMMU Group 7 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 8 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 9 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 10 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-1fh) PCIe Dummy Host Bridge [1022:1452]
IOMMU Group 11 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]
IOMMU Group 12 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 12 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 13 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 13 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 13 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 13 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 13 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 13 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 13 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 6 [1022:1466]
IOMMU Group 13 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 14 01:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
IOMMU Group 15 02:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset USB 3.1 xHCI Controller [1022:43b9] (rev 02)
IOMMU Group 15 02:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset SATA Controller [1022:43b5] (rev 02)
IOMMU Group 15 02:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] X370 Series Chipset PCIe Upstream Port [1022:43b0] (rev 02)
IOMMU Group 15 03:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 03:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 15 07:00.0 USB controller [0c03]: ASMedia Technology Inc. ASM1143 USB 3.1 Host Controller [1b21:1343]
IOMMU Group 15 08:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 15 09:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)
IOMMU Group 15 0a:04.0 Multimedia audio controller [0401]: C-Media Electronics Inc CMI8788 [Oxygen HD Audio] [13f6:8788]
IOMMU Group 16 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
IOMMU Group 16 0b:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
IOMMU Group 17 0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1)
IOMMU Group 17 0c:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
IOMMU Group 18 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Raven/Raven2 PCIe Dummy Function [1022:145a]
IOMMU Group 19 0d:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]
IOMMU Group 20 0d:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
IOMMU Group 21 0e:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Zeppelin/Renoir PCIe Dummy Function [1022:1455]
IOMMU Group 22 0e:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)
IOMMU Group 23 0e:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]

[collapse]

The syntax of the resulting output reads like this:

Figure1: IOMMU script output description

The interesting bits are the PCI bus id (marked dark red in figure1) and the device identification (marked orange in figure1).

These are the devices of interest:

selected devices for isolation


NVME M.2 
========
IOMMU group 14
        01:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03)
                Driver: nvme

Guest GPU - GTX1060 
=================
IOMMU group 17
        0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1)
                Driver: nvidia
        0c:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)
                Driver: snd_hda_intel

USB host
=======
IOMMU group 20
        0d:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
                Driver: xhci_hcd

[collapse]
Figure2: IOMMU groups for virtual machine passthrough on the ASUS Prime X370-Pro

We will isolate the GTX 1060 (PCI bus 0c:00.0 and 0c:00.1; device IDs 10de:1b83 and 10de:10f0).

The USB controller (PCI bus 0d:00.3; device ID 1022:145c) is used later on.

The NVMe SSD can be passed through without noting its identification numbers. It is crucial though that it has its own IOMMU group.
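
If you want to double-check which driver is currently bound to a device (as listed in figure2), lspci can show that as well – the bus addresses below are from my system:

# Show the kernel driver currently in use for the guest GPU and its audio function
lspci -nnk -s 0c:00.0
lspci -nnk -s 0c:00.1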

Isolation of the guest GPU

In order to isolate the GPU we have two options: select the devices by PCI bus address, or by device ID. Both options have pros and cons.

Apply the vfio-pci driver by device ID (via boot manager)

This option should only be used if the graphics cards in the system are not exactly the same model.

Update the grub configuration again, and add the PCI device IDs via the vfio-pci.ids parameter.

Run sudo nano /etc/default/grub and update the GRUB_CMDLINE_LINUX_DEFAULT line again:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm.ignore_msrs=1 vfio-pci.ids=10de:1b83,10de:10f0"

Remark! The kvm.ignore_msrs=1 parameter is only necessary for Windows 10 versions newer than 1803 (it prevents a BSOD).

Save and close the file. Afterwards run:

sudo update-grub

Attention!

After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

→ Reboot the system.
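
After the reboot you can confirm that the kernel actually picked up the new parameters:

cat /proc/cmdline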

Again, for a system using the systemd-boot manager (e.g. Pop!_OS):

sudo kernelstub --add-options "kvm.ignore_msrs=1 vfio-pci.ids=10de:1b83,10de:10f0"

Apply the vfio-pci driver by PCI bus ID (via script)

This method works even if you want to isolate one of two identical cards. Be aware though that the PCI bus IDs can change when PCI hardware is added to or removed from the system (and sometimes after BIOS updates).

Create a script file via sudo nano /etc/initramfs-tools/scripts/init-top/vfio.sh and add the following lines:

#!/bin/sh
# initramfs init-top hook: binds the guest GPU to the vfio-pci driver
# before the regular graphics driver can claim it.

PREREQ=""

prereqs()
{
   echo "$PREREQ"
}

case $1 in
prereqs)
   prereqs
   exit 0
   ;;
esac

# PCI bus addresses of the devices to isolate (GPU and its audio function)
for dev in 0000:0c:00.0 0000:0c:00.1
do
 echo "vfio-pci" > /sys/bus/pci/devices/$dev/driver_override
 echo "$dev" > /sys/bus/pci/drivers/vfio-pci/bind
done

exit 0

Thanks to /u/nazar-pc on reddit. Make sure the line

for dev in 0000:0c:00.0 0000:0c:00.1

has the correct PCI bus IDs for the GPU you want to isolate. Now save and close the file.

Make the script executable via:

sudo chmod +x /etc/initramfs-tools/scripts/init-top/vfio.sh

Then open the modules file via sudo nano /etc/initramfs-tools/modules

and add the following line (the file expects a module name followed by its parameters):

kvm ignore_msrs=1

Save and close the file.

When all is done run:

sudo update-initramfs -u -k all
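
Optionally, you can confirm that the script really ended up inside the rebuilt initramfs (the image name follows the usual Ubuntu naming scheme):

lsinitramfs /boot/initrd.img-$(uname -r) | grep vfio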

Attention!

After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

→ Reboot the system.

Verify the isolation

In order to verify a proper isolation of the device, run:

lspci -nnv

Find the line “Kernel driver in use” for the GPU and its audio part. It should state vfio-pci.

output

0c:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: ASUSTeK Computer Inc. GP104 [1043:8655]
	Flags: fast devsel, IRQ 44
	Memory at f4000000 (32-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Memory at d0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at d000 [size=128]
	Expansion ROM at f5000000 [disabled] [size=512K]
	Capabilities: <access denied>
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia


Congratulations, the hardest part is done! 🙂

Creating the Windows virtual machine

The virtualization is done via an open-source machine emulator and virtualizer called QEMU. One can either run QEMU directly, or use a GUI called virt-manager in order to set up and run a virtual machine.

I prefer the GUI. Unfortunately, not every setting is supported in virt-manager. Thus, I define the basic settings in the UI, do a quick VM start, and “force stop” it right after I see that the GPU is passed over correctly. Afterwards one can edit the missing bits into the VM config via virsh edit.

Make sure you have your Windows ISO file, as well as the virtIO Windows drivers, downloaded and ready for the installation.
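
At the time of writing, the stable virtIO driver ISO can be fetched from the Fedora project, for example like this (check the linked driver page for the current URL):

wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso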

Pre-configuration steps

As I said, lots of variable parts can add complexity to a passthrough guide. Before we continue, we have to make a decision about the storage type of the virtual machine.

Creating an image container (if needed)

In this guide I pass my NVMe M.2 SSD to the virtual machine. Another viable solution is to use a raw image container; see the storage post for further information.

Creating an Ethernet Bridge

We will use a bridged connection for the virtual machine. This requires a wired connection to the computer.

I simply followed the great guide from Heiko here.

See the ethernet setup post for further information.

Create a new virtual machine

As said before we use the virtual machine manager GUI to create the virtual machine with basic settings.

In order to do so start up the manager and click the “Create a new virtual machine” button.

Step 1

Select “Local install media” and proceed forward (see figure3).

Figure3: Create a virtual machine step 1 – select the installation type.

Step 2

Now we have to select the Windows ISO file we want to use for the installation (see figure4). Also check the automatic system detection. Hint: Use the “Browse local” button (one of the buttons on the right side) to browse to the ISO location.

Figure4: Create a virtual machine step 2 – select the Windows ISO file.

Step 3

Put in the amount of RAM and the number of CPU cores you want to pass through and continue with the wizard. I want to use 8 cores (16 is the maximum; the screenshot shows 12 by mistake!) and 16384 MiB of RAM in my VM.

Figure5: Create a virtual machine step 3 – Memory and CPU settings.

Step 4

In case you use a storage file, select your previously created storage file and continue. I uncheck the “Enable storage for this virtual machine” check-box and add my device later on.

Figure6: Create a virtual machine step 4 – Select the previously created storage.

Step 5

The last step requires a few more clicks.

Put in a meaningful name for the virtual machine. This becomes the name of the XML config file; I would not use anything with spaces in it. It might work without problems, but I was never brave enough to try.

Furthermore, make sure you check “Customize configuration before install”.

For the “Network selection” pick “Specify shared device name” and type in the name of the network bridge we created previously. You can use ifconfig in a terminal to show your Ethernet devices. In my case that is “bridge0”.

Figure7: Create a virtual machine step 5 – Before installation.
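
If you are unsure about the bridge name, the following commands list the bridge interfaces on the host (brctl comes from the bridge-utils package we installed earlier):

ip link show type bridge
brctl show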

Remark! The CPU count in Figure7 is wrong. It should say 8 if you followed the guide.

First configuration

Once you have pressed “Finish”, the virtual machine configuration window opens. The left column displays all hardware devices this VM uses. By left-clicking a device, you see its options on the right side. You can remove hardware via right click, and add more hardware via the button below. Make sure to hit “Apply” after every change.

The following screenshots may vary slightly from your GUI (as I have added and removed some hardware devices).

Overview

On the “Overview” entry in the list, make sure that “Firmware” is set to UEFI x86_64 [...] OVMF [...]. “Chipset” should be Q35 (see figure8).

Figure8: Virtual machine configuration – Overview configuration

CPUs

For “Model:”, click into the drop-down as if it were a text field, and type in:

host-passthrough

This will pass all CPU information to the guest. You can read the CPU Model Information chapter in the performance guide for further information.

For “Topology” check “Manually set CPU topology” with the following values:

  • Sockets: 1
  • Cores: 4
  • Threads: 2
Figure9: Virtual machine configuration – CPU configuration (outdated screenshot)
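
If you are unsure which topology matches your host, lscpu shows the socket/core/thread layout; the per-CPU view is also handy later for CPU pinning:

lscpu | egrep 'Socket|Core|Thread'
lscpu -e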

Disk 1

Setup with raw image container

When you first enter this section it will say “IDE Disk 1”. We have to change the “Disk bus:” value to VirtIO.

Figure10: Virtual machine configuration – Disk configuration
Setup with passed-through drive

Find out the correct device name of the drive via lsblk.
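
A column selection like the following makes it easier to spot the right drive; the exact output depends on your hardware:

lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT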

When editing the VM configuration for the first time, add the following block in the <devices> section (one line after </emulator> is a fitting spot).

<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native' discard='unmap'/>
  <source dev='/dev/nvme0n1'/>
  <target dev='sdb' bus='sata'/>
  <boot order='1'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>

Edit the line

<source dev='/dev/nvme0n1'/>

to match your drive name.
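
Optionally, if you prefer a device path that survives renumbering (e.g. after adding another drive), you can look up the corresponding /dev/disk/by-id/ link and use that as the source instead:

ls -l /dev/disk/by-id/ | grep -i nvme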

VirtIO Driver

Next we have to add the virtIO driver ISO, so it can be used during the Windows installation. Otherwise, the installer cannot recognize the storage volume we just changed from IDE to virtIO.

In order to add the driver, press “Add Hardware”, select “Storage”, and select the downloaded image file.

For “Device type:” select “CDROM device”. For “Bus type:” select IDE, otherwise Windows will not find the CD-ROM either 😛 (see figure11).

Figure11: Virtual machine configuration – Adding the virtIO driver CD-ROM.

The GPU passthrough

Finally! In order to complete the GPU passthrough, we have to add our guest GPU and the USB controller to the virtual machine. Click “Add Hardware”, select “PCI Host Device”, and find the device by its ID. Do this three times:

  • 0000:0c:00.0 for GeForce GTX 1060
  • 0000:0c:00.1 for GeForce GTX 1060 Audio
  • 0000:0d:00.3 for the USB controller
Figure12: Virtual machine configuration – Adding PCI devices (screenshot still shows old hardware).

Remark: In case you later add or remove hardware (e.g. another PCIe device), these IDs might change. If you change the hardware, simply redo this step with the updated IDs.

This should be it. Plug a second mouse and keyboard into the USB ports of the passed-through controller (see figure2).

Hit “Begin installation”; a TianoCore logo should appear on the monitor connected to the GTX 1060. If a funny white and yellow shell pops up, you can type exit in order to leave it.

When nothing happens, make sure you have both CD-ROM devices (one for the Windows 10 ISO and one for the virtIO driver ISO) in your list. Also check the “Boot Options” entry.
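
A couple of virsh commands help with troubleshooting at this point; replace win10 with the name you gave your virtual machine:

# List all defined virtual machines and their state
virsh list --all

# Show the block devices attached to the VM (both CD-ROMs should appear here)
virsh domblklist win10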

Once you see the Windows installation, use “force off” from virt-manager to stop the VM.

Final configuration and optional steps

In order to edit the virtual machine configuration use: virsh edit your-windows-vm-name

Once you are done editing, save and exit the editor (CTRL+x, then y, then ENTER).

I have added the following changes to my configuration:

AMD Ryzen CPU optimizations

I moved this section in a separate article – see the CPU pinning part of the performance optimization article.

Hugepages for better RAM performance

This step is optional and requires previous setup: see the hugepages post for details.

Find the line which ends with </currentMemory> and add the following block after it:

  <memoryBacking>   
    <hugepages/> 
  </memoryBacking>

Remark: Make sure <memoryBacking> and <currentMemory> have the same indent.
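
Whether the host actually has hugepages available can be checked at any time via /proc/meminfo:

grep -i huge /proc/meminfo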

Performance tuning

This article describes performance optimizations for gaming on a virtual machine (VM) with GPU passthrough.

Troubleshooting

Removing Error 43 for Nvidia cards

This guide uses an Nvidia card as guest GPU. Unfortunately, the Nvidia driver throws Error 43 if it recognizes that the GPU is being passed through to a virtual machine.

I rewrote this section and moved it into a separate article.

Getting audio to work

After some sleepless nights I wrote a separate article about that – see chapter Pulse Audio with QEMU 4.2 (and above).

Removing stutter on Guest

There are quite a few software and hardware version combinations which will result in bad guest performance.

I have created a separate article on known issues and common errors.

My final virtual machine libvirt XML configuration

This is my final XML file



to be continued…


Sources

The glorious Arch wiki

heiko-sieger.info: Really comprehensive guide

Great post by user “MichealS” on level1techs.com forum

Wendell’s draft post on level1techs.com

Updates

  • 2020-06-18 …. initial creation of the 20.04 guide

Comments

  1. BunnieofDoom says:

    Good day Mathias,

    Do you know if there is a way to fix the ACPI tables? A game I am currently playing seems to use that check to flag that you’re playing from inside a VM and disconnect you.

    1. Mathias Hueber says:

      Sorry, no idea. Which game is it though?

  2. Using this configuration, does the video output of the guest OS appear on the HDMI port of the guest GPU? And is the second keyboard/mouse required all the time, or only during installation?

    I am planning to attach the HDMI output to a spare port on my monitor, but I don’t want a separate keyboard and mouse.

    1. Mathias Hueber says:

      I use a kvm USB switch for mouse and keyboard. Depending on your use case (latency) there are other options too.

      The video output for the guest always comes directly from the guest GPU.
      My monitor uses one HDMI input for the host and a DisplayPort input for the guest. I change input channels on the monitor directly to switch between host and guest. Depending on your monitor and GPU outputs, you can choose which signals you use.
