Beginner-friendly guide to Windows virtual machines with GPU passthrough on Ubuntu 18.04; or how to play competitive games in a virtual machine.

Preamble

The intent of this document is to provide a complete, step-by-step guide on how to set up a virtual machine (VM) with graphics card (GPU) passthrough – detailed enough that even Linux rookies are able to participate.

The final system will run Xubuntu 18.04 as the host operating system (OS), and Windows 10 as the guest OS, with gaming as the main use case of the guest.

The article is based on my guide from last year, which used Ubuntu 16.04 as the host system. I updated the former guide regularly while optimizing performance and hardware 😉

I am still very happy with my distro choice (Xubuntu), but I have to emphasize that (X)Ubuntu (or any Debian-based distro) is not the easiest distribution for virtual machine passthrough. Most of the guides I found online target either Fedora or Arch as the host operating system. Fedora 26 in particular is said to be easy to set up for passthrough (as recommended by level1techs).

Introduction to VFIO and PCI passthrough

Virtual Function I/O (VFIO) allows a virtual machine (VM) direct access to a PCI hardware resource, such as a graphics processing unit (GPU). Virtual machines with GPU passthrough set up can achieve close to bare-metal performance, which makes running games in a Windows virtual machine possible.

Unfortunately, the setup process can be pretty complex. It consists of fixed base settings, some variable settings and several optional (mostly performance-related) settings. To keep this post readable, and because I aim to use the virtual machine for gaming only, I minimized the variable parts towards latency optimization. The variable topics themselves are covered in linked articles – I hope this makes sense. 🙂

Requirements

Hardware

In order to successfully follow this guide, it is mandatory that the used hardware supports virtualization and IOMMU groups.
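A quick way to check the first requirement from a running Linux system, before even entering the BIOS – this is a generic sanity check, not specific to this board:

```shell
# Count CPU threads advertising hardware virtualization support
# (the "svm" flag on AMD, "vmx" on Intel); a count of 0 means the
# feature is missing or disabled in the BIOS.
flags=$(grep -E -c '(svm|vmx)' /proc/cpuinfo || true)
echo "virtualization-capable threads: $flags"
```

IOMMU support itself can only be verified after the BIOS flags below are set (see the dmesg check further down).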

When composing the system's hardware, I was eager to avoid the necessity of kernel patching. The ACS patch is not required for the given combination of processor and mainboard. The Nested Page Tables (NPT) bug has been fixed in kernel versions >4.15rc1 (Dec. 2017).

The setup used for this guide is:

  • Ryzen 7 1800X
  • Asus Prime X370-Pro
  • 32GB RAM DDR4-3200 running at 2800MHz (2x 16GB G.Skill RipJaws V black, CL16 Dual Kit)
  • Nvidia GeForce GTX 1050 (host GPU, PCIe slot 1)
  • Nvidia GeForce GTX 1060 (guest GPU, PCIe slot 2)
  • 750W PSU
  • 220GB SSD for the host system
  • 2x 120GB SSD for the guest image

BIOS settings

Make sure your BIOS is up to date.

I used Version: 4207 (8th Dec 2018)

Enable the following flags in the bios menu:

  • Advanced \ CPU config – SVM Module -> enable
  • Advanced \ AMD CBS – IOMMU -> enable

Operating System

I installed Xubuntu 18.04 x64 (UEFI) from here.

I used the 4.19.5 kernel, installed via ukuu (note: since version 19.01, ukuu requires a paid license). Any kernel version from 4.15 onwards should work for a Ryzen passthrough.

In Ubuntu 18.04, Xorg is still the default display server – I use it with the latest Nvidia driver (version 415) in order to have proper graphics support on the host.

So before continuing make sure your:

  • used kernel is at least 4.15 (check via uname -r)
  • used Nvidia driver is at least 415 (you can check via “additional drivers” and install e.g. like this)
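Both checks can be scripted; a minimal sketch of the kernel check (the 4.15 threshold is from this guide, and the sort -V trick does the version comparison):

```shell
# Compare the running kernel against the 4.15 minimum; with a
# version sort, the smaller of the two versions comes out first.
required="4.15"
current="$(uname -r | cut -d- -f1)"   # e.g. "4.19.5"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "kernel $current is recent enough"
else
    echo "kernel $current is too old" >&2
fi
```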

The required software

Before we start, install the virtualization manager and related software via:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager ovmf

(On Ubuntu 18.04 the former libvirt-bin package has been split into libvirt-daemon-system and libvirt-clients.)

Optional step – use the latest QEMU version on Ubuntu 18.04

Additionally, I like to use QEMU in version 3.1 or higher. Unfortunately, Ubuntu 18.04 ships with version 2.11. If you want to use a newer version (3.1 in my case) you have to build it first. This is basically copy and paste from the excellent reddit post by liquify. It was suggested to use checkinstall in order to create a .deb file and install it via aptitude (this would, for example, automatically take care of the apparmor configuration). Unfortunately, I was unable to get this working. Thus I stuck with the original post by liquify. At least the built version will not interfere with the distro-managed version.

How to build and use QEMU 3.1

  1. Enable source apt list (check Source Code in “Software & Updates”, “Ubuntu Software” tab.)
  2. Download the build dependencies for qemu via:

sudo apt-get build-dep qemu

3. Download the latest qemu source code (3.1 in my case)

wget https://download.qemu.org/qemu-3.1.0.tar.xz
tar xvJf qemu-3.1.0.tar.xz
cd qemu-3.1.0 
./configure --target-list=x86_64-softmmu --audio-drv-list=alsa,pa
make

4. When this is done we can move the qemu binary and the BIOS directory to a place where they can be used

sudo cp x86_64-softmmu/qemu-system-x86_64 /usr/local/bin/qemu3.1-system-x86_64
sudo cp -r pc-bios /usr/local/share/qemu

5. Add the first apparmor additions

sudo nano /etc/apparmor.d/abstractions/libvirt-qemu

and paste this block at the end of the file

  # Custom QEMU binary rules
  /usr/local/bin/qemu3.1-system-x86_64 rmix,
  /usr/local/share/qemu/** r,

Make sure the indent matches the other lines. Save and close the file.

6. Add the second apparmor additions

sudo nano /etc/apparmor.d/usr.sbin.libvirtd

and paste this block at the end of the file

  # Custom QEMU binary rule
  /usr/local/bin/qemu3.1-system-x86_64 PUx,

Save and close the file. Now reload the apparmor service, or reboot

sudo service apparmor reload

When this is done you can later use:

<os>
  ...
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  ...
</os>

and

<devices>
  ...
  <emulator>/usr/local/bin/qemu3.1-system-x86_64</emulator>
  ...
</devices>

in the virtual machine XML file – see the last chapter of this guide. Sources: great reddit post by liquify, and askubuntu.com answer by N0rbert

[collapse]

Setting up the VFIO Ryzen passthrough

In order to fulfill my claim of beginner friendliness for this guide, let me make the following simplifications:

Devices connected to the mainboard are members of (IOMMU) groups, depending on where and how they are connected. It is possible to pass devices into a virtual machine. Passed-through devices have nearly bare-metal performance when used inside the VM.

On the downside, passed-through devices are isolated and thus no longer available to the host system. Furthermore, it is only possible to isolate all devices of one IOMMU group at the same time. This means that if a device is an IOMMU-group sibling of a passed-through device, it can not be used on the host system – even when it is not used in the VM.

Enabling IOMMU feature

Modify the GRUB config: open it via sudo nano /etc/default/grub and edit the following line to match:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm_amd.npt=1"

Afterwards run sudo update-grub and reboot your system.

Afterwards one can verify that IOMMU is enabled:

dmesg |grep AMD-Vi

dmesg output

[ 0.792691] AMD-Vi: IOMMU performance counters supported
[ 0.794428] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[ 0.794429] AMD-Vi: Extended features (0xf77ef22294ada):
[ 0.794434] AMD-Vi: Interrupt remapping enabled
[ 0.794436] AMD-Vi: virtual APIC enabled
[ 0.794688] AMD-Vi: Lazy IO/TLB flushing enabled

[collapse]

Identification of the guest GPU

Attention: After following the upcoming steps, the guest GPU will be ignored by the host OS. You have to use a second GPU for the host OS.

In order to activate the hardware passthrough for virtual machines, we have to make sure the Nvidia driver does not take ownership of the PCIe devices; we have to isolate them before we can hand them over.

This is done by binding the vfio-pci driver to the guest GPU during system startup.

Depending on the PCIe slot a device is installed in, it has a different IOMMU group affiliation. One can use a bash script like the following in order to determine the devices and their grouping:

#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done

source: wiki.archlinux.org

script output

IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 10 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]

IOMMU Group 11 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)

IOMMU Group 11 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)

IOMMU Group 12 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]

IOMMU Group 12 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]

IOMMU Group 12 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]

IOMMU Group 12 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]

IOMMU Group 12 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]

IOMMU Group 12 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]

IOMMU Group 12 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6 [1022:1466]

IOMMU Group 12 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]

IOMMU Group 13 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02)

IOMMU Group 13 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02)

IOMMU Group 13 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b0] (rev 02)

IOMMU Group 13 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 02:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 02:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 02:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 02:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)

IOMMU Group 13 06:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:1343]

IOMMU Group 13 07:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)

IOMMU Group 13 08:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)

IOMMU Group 13 09:04.0 Multimedia audio controller [0401]: C-Media Electronics Inc CMI8788 [Oxygen HD Audio] [13f6:8788]

IOMMU Group 14 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)

IOMMU Group 14 0a:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)

IOMMU Group 15 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b83] (rev a1)

IOMMU Group 15 0b:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

IOMMU Group 16 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]

IOMMU Group 17 0c:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]

IOMMU Group 18 0c:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]

IOMMU Group 19 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]

IOMMU Group 1 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 20 0d:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)

IOMMU Group 21 0d:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]

IOMMU Group 2 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 3 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 4 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 5 00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 6 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 7 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 8 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]

IOMMU Group 9 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

[collapse]

We are looking for the device IDs of the guest GPU and a suitable USB controller for isolation. Keep in mind that the GPU usually comes combined with an audio device.

We will isolate the GPU in PCIe slot 2, and the USB controller from group 18 (see figure 1).

Figure1: IOMMU groups for passthrough, on ASUS Prime x370-pro (BIOS version 3402)

 

selected devices for isolation

IOMMU Group 14 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)

IOMMU Group 14 0a:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)

IOMMU Group 15 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b83] (rev a1)

IOMMU Group 15 0b:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

[collapse]

For the next step only the GPU IDs are needed.

We have to isolate 10de:1b83 and 10de:10f0. The USB controller ID (1022:145c) will be used later.
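Instead of reading the IDs out of the script output, they can also be extracted directly – a hedged one-liner, where 0b:00 is the PCI address of my guest GPU (substitute your own):

```shell
# Print the [vendor:device] IDs of all functions at PCI address 0b:00
# (the guest GPU and its HDMI audio function on my system).
lspci -nn -s 0b:00 | grep -oP '\[\K[0-9a-f]{4}:[0-9a-f]{4}(?=\])' || true
```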

Isolation of the guest GPU

In order to isolate the graphics card, modify /etc/initramfs-tools/modules via:

sudo nano /etc/initramfs-tools/modules and add:

vfio
vfio_iommu_type1
vfio_virqfd
vfio_pci ids=10de:1b83,10de:10f0

Modify /etc/modules as well via sudo nano /etc/modules and add:

vfio
vfio_iommu_type1
vfio_pci ids=10de:1b83,10de:10f0

These changes pass the device IDs to the vfio_pci module, in order to reserve these devices for the passthrough. It is crucial that the vfio_pci module claims the GPU before the actual driver (in this case the Nvidia graphics driver) loads; otherwise it is not possible to isolate the GPU. Make sure your cards are using the Nvidia driver (not the nouveau one).

In order to alter the load sequence in favour of vfio_pci over the nvidia driver, create a file in the modprobe.d folder via sudo nano /etc/modprobe.d/nvidia.conf and add the following lines:

softdep nouveau pre: vfio-pci 
softdep nvidia pre: vfio-pci 
softdep nvidia* pre: vfio-pci

save and close the file.

Create another file via sudo nano /etc/modprobe.d/vfio.conf and add the following line:

options vfio-pci ids=10de:1b83,10de:10f0

Obviously, the ids have to be the same we have added before to the modules file. Now save and close the file.

Since Windows 10 update 1803 the following additional entry needs to be set (otherwise you will get a BSOD). Create the kvm.conf file via sudo nano /etc/modprobe.d/kvm.conf and add the following line:

options kvm ignore_msrs=1

Save and close the file.

When all is done, run: sudo update-initramfs -u -k all
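Before rebooting, one can optionally confirm that the vfio modules actually made it into the freshly generated initramfs (lsinitramfs ships with initramfs-tools on Ubuntu):

```shell
# List the contents of the current initramfs and count the vfio
# entries; expect vfio, vfio_iommu_type1, vfio_virqfd and vfio-pci
# to show up.
if command -v lsinitramfs >/dev/null 2>&1; then
    lsinitramfs "/boot/initrd.img-$(uname -r)" | grep -c vfio || true
fi
```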

Attention: After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

-> reboot the system.

Verify the isolation

In order to verify a proper isolation of the device, run:

lspci -nnv

find the line "Kernel driver in use" for the GPU and its audio part. It should state vfio-pci.

output

0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: CardExpert Technology GM204 [GeForce GTX 970] [10b0:13c2]
        Flags: bus master, fast devsel, latency 0, IRQ 44
        Memory at f4000000 (32-bit, non-prefetchable) [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at e0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at c000 [size=128]
        Expansion ROM at f5000000 [disabled] [size=512K]
        Capabilities: <access denied>
        Kernel driver in use: vfio-pci
        Kernel modules: nvidiafb, nouveau

[collapse]
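The same information is available in sysfs, which is handy for scripting; a small sketch, where the 0000:0b:00.x addresses are from my system – substitute your own:

```shell
# Resolve the driver symlink for each isolated function; after a
# successful isolation it points at vfio-pci.
for dev in 0000:0b:00.0 0000:0b:00.1; do
    link="/sys/bus/pci/devices/$dev/driver"
    if [ -L "$link" ]; then
        printf '%s -> %s\n' "$dev" "$(basename "$(readlink "$link")")"
    else
        printf '%s -> no driver bound\n' "$dev"
    fi
done
```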

Congratulations, the hardest part is done! 🙂

Creating the Windows virtual machine

The virtualization is done via an open source machine emulator and virtualizer called QEMU. One can either run QEMU directly, or use a GUI called virt-manager in order to set up and run a virtual machine. I prefer using the GUI. Unfortunately, not every setting is supported in virt-manager. Thus, I define the basic settings in the UI, do a quick VM start, and force-stop it right after I see the GPU is passed over correctly. Afterwards one can edit the missing bits into the VM config via virsh.

Make sure you have your Windows ISO file, as well as the virtio Windows drivers, downloaded and ready for the installation.

Preconfiguration steps

As I said, lots of variable parts can add complexity to a passthrough guide. Before we can continue we have to make a decision about the storage type of the virtual machine.

Creating image container

In this guide I use a raw image container, see the storage post for further information.

fallocate -l 111G /media/vm/win10.img

The size of 111G was chosen to maximize the image file while still fitting on the 120GB SSD.
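Unlike a sparse file, fallocate preallocates the blocks up front. To see what that means, a small demo (using a hypothetical /tmp path instead of the real 111G image):

```shell
# Create a small demo image the same way and inspect it; the real
# image uses 111G and lives at /media/vm/win10.img.
img="/tmp/win10-demo.img"
fallocate -l 64M "$img"
stat -c 'apparent size: %s bytes' "$img"
du -h "$img"   # blocks are actually allocated, unlike a sparse file
rm "$img"
```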

Creating an Ethernet Bridge

We will use a bridged connection for the virtual machine. This requires a wired connection to the computer.

I simply followed the great guide from heiko here.

See the ethernet setup post for further information.

Create a new virtual machine

As said before we use the virtual machine manager GUI to create the virtual machine with basic settings.

In order to do so start up the manager and click the “Create a new virtual machine” button.

Step 1

Select “Local install media” and proceed forward (see figure 2).

Figure2: Create a virtual machine step 1 – select local installation medium

Step 2

Now we have to select the Windows ISO file we want to use for the installation (see figure 3). Also check the automatic system detection. Hint: Use the button “browse local” (one of the buttons on the right side) to browse to the ISO location.

Figure3: Create a virtual machine step 2 – select the windows iso file.

Step 3

Put in the amount of RAM and the number of CPU cores you want to pass through and continue with the wizard. I want to use 12 cores (16 is the maximum) and 16384 MiB of RAM in my VM.

Figure4: Create a virtual machine step 3 – Memory and CPU settings

Step 4

Here we have to choose our previously created storage file and continue.

Figure5: Create a virtual machine step 4 – Select the previous created storage.

Step 5

The last step requires a few more clicks.

Put in a meaningful name for the virtual machine. This becomes the name of the XML config file, so I would not use anything with spaces in it. It might work without a problem, but I wasn’t brave enough to try in the past.

Furthermore make sure you check “Customize configuration before install”.

For the “network selection” pick “Specify shared device name” and type in the name of the network bridge we created previously. You can use ifconfig in a terminal to show your ethernet devices. In my case that is “bridge0”.

Figure6: Create a virtual machine step 5 – Before installation.

First configuration

Once you have pressed “finish” the virtual machine configuration window opens. The left column displays all hardware devices which this VM uses. By left clicking on them, you see the options for the device on the right side. You can remove hardware via right click. You can add more hardware via the button below. Make sure to hit apply after every change.

The following screenshots may vary slightly from your GUI (as I have added and removed some hardware devices).

Overview

On the Overview entry in the list, make sure that for “Firmware” UEFI x86_64 [...] OVMF [...] is selected, and that “Chipset” is set to i440FX (see figure 7).

Figure7: Virtual machine configuration – Overview configuration

CPUs

For “Model:”, click into the drop-down as if it were a text field, and type in host-passthrough.

For “Topology” check “Manually set CPU topology” with the following values:

  • Sockets: 1
  • Cores: 6
  • Threads: 2

Figure8: Virtual machine configuration – CPU configuration

Disk 1

When you first enter this section it will say “IDE Disk 1”. We have to change the “Disk bus:” value to VirtIO.

Figure9: Virtual machine configuration – Disk configuration

VirtIO Driver

Next we have to add the virtIO driver ISO, so it can be used during the Windows installation. Otherwise the installer can not recognize the storage volume we just changed from IDE to VirtIO.

In order to add the driver press “Add Hardware”, select “Storage”, and select the downloaded virtIO driver image file.

For “Device type:” select CDROM device. For “Bus type:” select IDE, otherwise Windows will not find the CDROM either 😛 (see figure 10).

Figure10: Virtual machine configuration – Adding virtIO driver CDROM.

The GPU passthrough

Finally! In order to fulfill the GPU passthrough, we have to add our guest GPU and the USB controller to the virtual machine. Click “Add Hardware”, select “PCI Host Device” and find the device by its ID. Do this three times:

  • 0000:0a:00.0 for Geforce GTX 970
  • 0000:0a:00.1 for Geforce GTX 970 Audio
  • 0000:0b:00.3 for the USB controller

Figure11: Virtual machine configuration – Adding PCI devices (screenshot is still with old hardware).

Remark: In case you later add further hardware (e.g. another PCIe device), these IDs might change – if you change the hardware, just redo this step with updated IDs (see Update 2).

 

This should be it. Plug a second mouse and keyboard into the USB ports of the passed-through controller (see figure 1).

Hit “Begin installation”; a TianoCore logo should appear on the monitor connected to the GTX 970. If a funny white and yellow shell pops up, you can use exit to leave it.

When nothing happens, make sure you have both CDROM devices (one for each ISO: Windows 10 and the virtIO drivers) in your list. Also check the “boot options” entry.

Once you see the Windows installation, use “force off” from the virtual machine manager to stop the VM.

Final configuration and optional steps

In order to edit the virtual machine configuration, run:

cd /etc/libvirt/qemu

sudo virsh define windows10.xml (change this according to your virtual machine name)

sudo virsh edit windows10

Once you're done with the edits and have saved the config, re-run:

sudo virsh define windows10.xml

I have added the following changes to my configuration:

AMD Ryzen CPU optimizations

I rewrote this section in a separate article – see here.

Hugepages for better RAM performance

This step is optional and requires previous setup: see the hugepages post for details.

Find the line which ends with </currentMemory> and add the following block after it:

  <memoryBacking>   
    <hugepages/> 
  </memoryBacking>

Attention: Make sure <memoryBacking> and <currentMemory> have the same indent.
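For reference, with the default 2 MiB hugepage size, the 16 GiB of guest RAM used in this guide needs 8192 pages. A hedged sketch of the sizing check (the persistent allocation itself is covered in the linked hugepages post):

```shell
# Hugepages needed = guest RAM / hugepage size (both in MiB here);
# 16384 MiB of guest RAM with 2 MiB pages -> 8192 pages.
guest_mib=16384
page_mib=2
echo "pages needed: $(( guest_mib / page_mib ))"
# Current page size and allocation on the host:
grep -E 'Hugepagesize|HugePages_Total' /proc/meminfo || true
```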

Troubleshooting

Removing Error 43 for Nvidia cards

This guide uses an Nvidia card as guest GPU. Unfortunately, the Nvidia driver throws Error 43 if it recognizes that the GPU is being passed through to a virtual machine.

I rewrote this section and moved it into a separate article.

Getting audio to work

After some sleepless nights I wrote a separate article on that matter.

Removing stutter on Guest

  • Set <ioapic driver='kvm'/> under <features> tag
  • Enable MSI interrupts for the passed-through GPU
    • Do this in windows with the MSI tool (run as admin)
  • Consider updating to the latest QEMU version

 

to be continued…

Final virtual machine XML file configuration


This is my final XML file

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>windows10</name>
  <uuid>5ebdbc6c-a9e4-4376-886a-fc826244111b</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='12'/>
    <vcpupin vcpu='5' cpuset='13'/>
    <vcpupin vcpu='6' cpuset='14'/>
    <vcpupin vcpu='7' cpuset='15'/>
    <emulatorpin cpuset='0-1'/>
    <iothreadpin iothread='1' cpuset='0-1'/>
    <iothreadpin iothread='2' cpuset='2-3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-3.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/windows10_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
    <cache level='3' mode='emulate'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/local/bin/qemu3.1-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/media/vm/win10.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/media/vm2/win1803.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='scsi' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:54:95:1f'/>
      <source bridge='bridge0'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </sound>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='1'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'/>
  <seclabel type='dynamic' model='dac' relabel='yes'/>
  <qemu:commandline>
    <qemu:arg value='-cpu'/>
    <qemu:arg value='host,hv_time,kvm=off,hv_vendor_id=null'/>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SAMPLES' value='8192'/>
    <qemu:env name='QEMU_AUDIO_TIMER_PERIOD' value='99'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>

[collapse]


Sources

  • The glorious Arch wiki
  • heiko-sieger.info: Really comprehensive guide
  • Great post by user “MichealS” on level1techs.com forum
  • Wendels draft post on Level1techs.com

Updates

2019-02-08 – Fixed typo and added a remark about paid licenses for ukuu.

2019-08-14 – Updated SEO settings and added table of contents

16 Comments

  1. gman says:

    Mathias,

    Thank you for this. I’m a newbie to this topic and have a question regarding IOMMU groups: when I run the bash script, the NVIDIA card I hope to isolate and pass through seems to be grouped with many other USB and SATA controllers… is this problematic? My MB is a Gigabyte AB350M-Gaming3.

    I would appreciate your advice on how to proceed with this. Thank you.

    See some sample output below:

    Group 12 (which contains GPU for Isolation)

    IOMMU Group 12 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset USB 3.1 xHCI Controller [1022:43bb] (rev 02)
    IOMMU Group 12 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset SATA Controller [1022:43b7] (rev 02)
    IOMMU Group 12 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b2] (rev 02)
    IOMMU Group 12 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0c)
    IOMMU Group 12 05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106 [GeForce GTX 650 Ti] [10de:11c6] (rev a1)
    IOMMU Group 12 05:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1)

    Group 13 (only contains primary Host video card)
    IOMMU Group 13 06:00.0 VGA compatible controller [0300]: NVIDIA Corporation G86 [GeForce 8400 GS] [10de:0422] (rev a1)

    I see no other groups for USB ports that aren’t already accounted for in Group 12.

    1. Mathias Hueber says:

      Well, if you want to pass through Group 12, you have to pass all of its devices to the VM. Considering your output, I would try to pass Group 13 to the VM, if that is possible. I have no experience with the Gigabyte AB350M-Gaming3; have you tried looking for other passthrough success stories with said MB?
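For reference, the IOMMU listing script mentioned in the comment above is usually a small loop over /sys/kernel/iommu_groups. A sketch of a common variant (output formatting depends on your `lspci` version):

```shell
#!/bin/bash
# List every IOMMU group together with the devices it contains.
# A device can only be passed through along with everything in its group.
list_iommu_groups() {
    local root="${1:-/sys/kernel/iommu_groups}"
    shopt -s nullglob
    for g in "$root"/*; do
        echo "IOMMU Group ${g##*/}:"
        for d in "$g"/devices/*; do
            # Fall back to the raw PCI address if lspci is unavailable.
            desc=$(lspci -nns "${d##*/}" 2>/dev/null)
            printf '\t%s\n' "${desc:-${d##*/}}"
        done
    done
}

list_iommu_groups
```

If the output is empty, IOMMU support is most likely not enabled in the BIOS/UEFI or via the kernel command line (`amd_iommu=on` or `intel_iommu=on`).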

  2. Pat says:

    Great tutorial, but I think in “selected devices for isolation” you’ve left the IDs from your 16.04 tutorial, which don’t match the IDs in the script output, so it gets a bit confusing, as 10de:0fbb doesn’t seem to exist.

    1. Mathias Hueber says:

      Indeed, you are correct. I had updated the first ID, but forgot to change the audio controller ID as well. Good catch – thank you!

  3. Ben says:

    I have a 1700X, and whenever I use 2 threads it tells me “qemu-system-x86_64: warning: This family of AMD CPU doesn’t support hyperthreading(2)”. I’m using QEMU 3.1. Anything I need to do?

    1. Mathias Hueber says:

      Hello,
      sorry for the delayed reply… I am lazy with these comments. Does the problem still exist?
      If so, have you used “kvm.ignore_msrs=1”?
      You can email me your VM xml if you want, so I can have look at it.

      Cheers m.
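For reference, `ignore_msrs` is a module option of the kvm kernel module; a common way to set it persistently is a modprobe config (the filename below is an example; any `*.conf` in the directory is read):

```
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
```

After adding the file, reboot (or reload the kvm modules); `cat /sys/module/kvm/parameters/ignore_msrs` should then print `Y`.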

  4. Jo says:

    You should not use a newer BIOS than 4207 (8th Dec 2018)!
    (I tried to in April 2019)

    After upgrading to the newest version, passthrough didn’t work anymore with the separated x16 slots (unknown PCIe header message in the VMs).
    Downgrading back to 4207 was a pain because downgrading does not seem to be supported by the Asus tools (but it is possible).

  5. simon says:

    Thanks for sharing this post, it is a very helpful article.

    1. Mathias Hueber says:

      You’re welcome

  6. James Sevener says:

    Hello Mathias,

    Thank you a ton for your guide. The part I was missing was that the VM needs to be UEFI, and that you have to stop the VM from starting the first time so you can make the changes to the XML.

    Have you had issues with Windows updates failing to install with error code 0xc1900101?
    It seems to be driver related; I wiped my VM and started over, and I still can’t get Windows update 1803 to install.

    1. Mathias Hueber says:

      Have you enabled kvm.ignore_msrs=1? Related post can be found here: https://old.reddit.com/r/VFIO/comments/901ioi/win10_1803_installs_failing_in_kvm_on_amd_hardware/

  7. datapacket says:

    Hi Mathias, do you think it’s possible to do the same as in your tutorial with two cards with the same ID?

    Because I have 2x RTX 2080, so I have the same card ID, but the PCI IDs are different. I am searching for a solution to do a proper PCI passthrough with only one (obviously); I have tested a lot of tutorials and nothing seems to work (I will try your tutorial when I’m at home).

    I’m on an Intel i9, 2x RTX 2080, Xubuntu 18.04

  8. OneSphere says:

    Do you need to have two graphics cards for this? I have one RTX 2080 and a Ryzen 7 2700.

    1. Mathias Hueber says:

      Well, in case you run the host headless, you would only need one graphics card, for the guest. But this would render my use case pointless, as the host would have no desktop 🙂
      I think only the Ryzen G series has integrated graphics. The “regular” ones (including the 2700) do not have integrated graphics. Thus, two cards would be required.

  9. Beautiful article, thank you!

