Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04; or how to play competitive games in a virtual machine.

Preamble

The intent of this document is to provide a complete, step-by-step guide on how to set up a virtual machine (VM) with graphics card (GPU) passthrough – detailed enough that even Linux rookies are able to participate.

The final system will run Xubuntu 18.04 as host operating system (OS) and Windows 10 as guest OS, with gaming as the main use case of the guest.

The article is based on my last year's guide, which used Ubuntu 16.04 as host system. I updated the former guide regularly while optimizing performance and hardware 😉

Update:

A newer version of this article, for Ubuntu 20.04, exists here.

I am still very happy with my distro choice (Xubuntu), but I have to emphasize that (X)Ubuntu (or any Debian-based distro) is not the easiest distribution on which to set up virtual machine passthrough. Most of the guides I found online target either Fedora or Arch as the host operating system. Fedora 26 in particular is supposed to be easy to set up for passthrough (as recommended by level1techs).

Introduction to VFIO and PCI passthrough

Virtual Function I/O (VFIO) allows a virtual machine (VM) direct access to a PCI hardware resource, such as a graphics processing unit (GPU). Virtual machines with GPU passthrough set up can achieve close to bare-metal performance, which makes running games in a Windows virtual machine possible.

Unfortunately, the setup process can be pretty complex. It consists of fixed base settings, some variable settings, and several optional (mostly performance-related) settings. In order to sustain the readability of this post, and because I aim to use the virtual machine for gaming only, I minimized the variable parts for latency optimization. The variable topics themselves are covered in linked articles – I hope this makes sense. 🙂

Requirements

Hardware

In order to successfully follow this guide, it is mandatory that the hardware used supports virtualization and IOMMU groups.

When composing the system's hardware, I was eager to avoid the necessity of kernel patching. The ACS patch is not required for the given combination of processor and mainboard. The Nested Page Tables (NPT) bug has been fixed in kernel versions >4.15rc1 (Dec. 2017).

The setup used for this guide is:

  • Ryzen7 1800x
  • Asus Prime-x370 pro
  • 32GB RAM DDR4-3200 running at 2800MHz (2x 16GB G.Skill RipJaws V black, CL16 Dual Kit)
  • Nvidia GeForce GTX 1050 (host GPU, PCIe slot 1)
  • Nvidia GeForce GTX 1060 (guest GPU, PCIe slot 2)
  • 750W PSU
  • 220GB SSD for host system
  • 2x 120GB SSD for guest image

BIOS settings

Make sure your BIOS is up to date.

Attention! The ASUS Prime x370/x470/x570 Pro BIOS versions for AMD Ryzen 3000-series support (version 4602 to version 5220) will break a PCI passthrough setup with the error “Unknown PCI header type ‘127’“.

BIOS versions up to (and including) 4406, 2019/03/11 are working.

BIOS versions from (and including) 5406, 2019/11/25 are working.

I used Version: 4207 (8th Dec 2018)

Enable the following flags in the BIOS menu:

  • Advanced \ CPU config – SVM Module -> enable
  • Advanced \ AMD CBS – IOMMU -> enable

Operating System

I installed Xubuntu 18.04 x64 (UEFI) from here.

I used the 4.19.5 kernel, installed via ukuu.

Update – since version 19.01, ukuu requires a paid license.

Attention! Any kernel version 4.15 or higher should work for a Ryzen passthrough (except versions 5.1 and 5.2, including all subversions).

In Ubuntu 18.04, Xorg is still the default display server – I use it with the latest Nvidia driver (415) in order to have proper graphics support on the host.

So before continuing make sure your:

  • used kernel is at least 4.15 (check via uname -r)
  • used Nvidia driver is at least 415 (you can check via “additional drivers” and install e.g. like this)
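The kernel check can be scripted; a minimal sketch (the helper name version_at_least is mine, and the comparison relies on GNU sort -V):

```shell
#!/bin/sh
# Succeeds if version $1 is at least version $2 (relies on GNU `sort -V`).
version_at_least() {
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Strip the local suffix (e.g. "-generic") before comparing.
kernel="$(uname -r | cut -d- -f1)"
if version_at_least "$kernel" "4.15"; then
    echo "kernel $kernel is recent enough for a Ryzen passthrough"
else
    echo "kernel $kernel is too old, update first"
fi
```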

The required software

Before we start, install the virtualization manager and related software via:

sudo apt-get install libvirt-bin bridge-utils virt-manager qemu-kvm ovmf

Optional step – update QEMU version on Ubuntu 18.04

Ubuntu 18.04 ships with QEMU version 2.11. If you want to use a newer version, you can build it on your own.

In the course of this article I will use QEMU version 4.1. I have created a separate article on the update process under Ubuntu.

Setting up the vfio ryzen passthrough

Let me make the following simplifications, in order to fulfill my claim of beginner friendliness for this guide:

Devices connected to the mainboard are members of (IOMMU) groups – depending on where and how they are connected. It is possible to pass devices into a virtual machine. Passed-through devices have nearly bare-metal performance when used inside the VM.

On the downside, passed-through devices are isolated and thus no longer available to the host system. Furthermore, it is only possible to isolate all devices of one IOMMU group at the same time. This means that if a device is an IOMMU-group sibling of a passed-through device, it cannot be used on the host system – even when it is not used in the VM.

Enabling IOMMU feature

To enable the IOMMU feature on an AMD Ryzen system, modify your GRUB config. Run sudo nano /etc/default/grub and edit the line which starts with GRUB_CMDLINE_LINUX_DEFAULT to match:

GRUB_CMDLINE_LINUX_DEFAULT="amd_iommu=on iommu=pt kvm_amd.npt=1 kvm_amd.avic=1"

In case you are using an Intel CPU, the line should read:

GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"

Once you're done editing, press CTRL+X, then Y, then ENTER to exit the editor and save the changes.

Afterwards run:

sudo update-grub

Reboot your system when the command has finished.

After a reboot, one can verify that IOMMU is enabled by running:

dmesg | grep AMD-Vi

dmesg output

[ 0.792691] AMD-Vi: IOMMU performance counters supported 
[ 0.794428] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40 
[ 0.794429] AMD-Vi: Extended features (0xf77ef22294ada): 
[ 0.794434] AMD-Vi: Interrupt remapping enabled 
[ 0.794436] AMD-Vi: virtual APIC enabled 
[ 0.794688] AMD-Vi: Lazy IO/TLB flushing enabled


Identification of the guest GPU

Attention! After the upcoming steps, the guest GPU will be ignored by the host OS. You have to have a second GPU for the host OS now!

In order to activate the hardware passthrough for virtual machines, we have to make sure the Nvidia driver does not take ownership of the PCIe devices; the guest GPU has to be isolated before we can hand it over.

This is done by binding the vfio-pci driver to the guest GPU during system startup.

Depending on the PCIe slot it is installed in, the hardware has a different IOMMU group affiliation. One can use a bash script like this in order to determine the devices and their grouping:

#!/bin/bash
shopt -s nullglob
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU Group %s ' "$n"
    lspci -nns "${d##*/}"
done;

source: wiki.archlinux.org

script output


IOMMU Group 0 00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 10 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]

IOMMU Group 11 00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 59)
IOMMU Group 11 00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU Group 12 00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0 [1022:1460]
IOMMU Group 12 00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1 [1022:1461]
IOMMU Group 12 00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2 [1022:1462]
IOMMU Group 12 00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3 [1022:1463]
IOMMU Group 12 00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4 [1022:1464]
IOMMU Group 12 00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5 [1022:1465]
IOMMU Group 12 00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6 [1022:1466]
IOMMU Group 12 00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7 [1022:1467]
IOMMU Group 13 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b9] (rev 02)
IOMMU Group 13 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b5] (rev 02)
IOMMU Group 13 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b0] (rev 02)
IOMMU Group 13 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 02:02.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 02:03.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 02:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 02:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
IOMMU Group 13 06:00.0 USB controller [0c03]: ASMedia Technology Inc. Device [1b21:1343]
IOMMU Group 13 07:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU Group 13 08:00.0 PCI bridge [0604]: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge [1b21:1080] (rev 04)

IOMMU Group 13 09:04.0 Multimedia audio controller [0401]: C-Media Electronics Inc CMI8788 [Oxygen HD Audio] [13f6:8788]

IOMMU Group 14 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)

IOMMU Group 14 0a:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)

IOMMU Group 15 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b83] (rev a1)

IOMMU Group 15 0b:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)

IOMMU Group 16 0c:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:145a]

IOMMU Group 17 0c:00.2 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor [1022:1456]

IOMMU Group 18 0c:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]

IOMMU Group 19 0d:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device [1022:1455]

IOMMU Group 1 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 20 0d:00.2 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] [1022:7901] (rev 51)

IOMMU Group 21 0d:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]

IOMMU Group 2 00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 3 00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 4 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 5 00:03.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

IOMMU Group 6 00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 7 00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]

IOMMU Group 8 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B [1022:1454]

IOMMU Group 9 00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]


We are looking for the device id of the guest GPU and a suitable USB controller for isolation. Keep in mind that the GPU usually comes combined with an audio device.

We will isolate the GPU in PCIe slot 2 and the USB controller from group 18, see figure 1.

Figure1: IOMMU groups for passthrough, on ASUS Prime x370-pro (BIOS version 3402)

selected devices for isolation

IOMMU Group 14 0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP107 [GeForce GTX 1050 Ti] [10de:1c82] (rev a1)
IOMMU Group 14 0a:00.1 Audio device [0403]: NVIDIA Corporation GP107GL High Definition Audio Controller [10de:0fb9] (rev a1)
IOMMU Group 15 0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b83] (rev a1)
IOMMU Group 15 0b:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0] (rev a1)


For the next step, only the GPU IDs are needed.

We have to isolate 10de:1b83 and 10de:10f0. The USB controller ID (1022:145c) will be used later.
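For reference, the vendor:device pair can also be cut out of an lspci -nn line with standard tools; a sketch run on a sample line from the output above (on a live system, you would feed it from lspci -nn | grep -i nvidia instead):

```shell
#!/bin/sh
# Extract the [vendor:device] ID pair from one `lspci -nn` output line.
line='0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [10de:1b83] (rev a1)'

# The ID pair is the last [xxxx:xxxx] bracket on the line; the class
# code (e.g. [0300]) contains no colon and is therefore not matched.
id=$(printf '%s\n' "$line" | grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]' | tail -n1 | tr -d '[]')
echo "$id"
```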

Isolation of the guest GPU

In order to isolate the graphics card, modify /etc/initramfs-tools/modules via sudo nano /etc/initramfs-tools/modules and add:

vfio
vfio_iommu_type1
vfio_virqfd
vfio_pci ids=10de:1b83,10de:10f0

Modify /etc/modules as well via sudo nano /etc/modules and add:

vfio
vfio_iommu_type1
vfio_pci ids=10de:1b83,10de:10f0

These changes pass the device IDs to the vfio_pci module, in order to reserve these devices for the passthrough. It is crucial that the vfio_pci module claims the GPU before the actual driver (in this case the Nvidia graphics card driver) loads; otherwise it is not possible to isolate the GPU. Make sure your cards are using the Nvidia driver (not the nouveau one).

In order to alter the load sequence in favour of vfio_pci over the Nvidia driver, create a file in the modprobe.d folder via sudo nano /etc/modprobe.d/nvidia.conf and add the following lines:

softdep nouveau pre: vfio-pci 
softdep nvidia pre: vfio-pci 
softdep nvidia* pre: vfio-pci

Save and close the file.

Create another file via sudo nano /etc/modprobe.d/vfio.conf and add the following line:

options vfio-pci ids=10de:1b83,10de:10f0

Obviously, the IDs have to be the same ones we added before to the modules files. Now save and close the file.

Since the Windows 10 update 1803, the following additional entry needs to be set (otherwise you will get a BSOD). Create the kvm.conf file via sudo nano /etc/modprobe.d/kvm.conf and add the following line:

options kvm ignore_msrs=1

Save and close the file.

When all is done, run: sudo update-initramfs -u -k all

Attention: After the following reboot the isolated GPU will be ignored by the host OS. You have to use the other GPU for the host OS NOW!

-> reboot the system.

Verify the isolation

In order to verify a proper isolation of the device, run:

lspci -nnv

Find the line “Kernel driver in use:” for the GPU and its audio part. It should state vfio-pci.

output

0b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1060 6GB] [10de:1b83] (rev a1) (prog-if 00 [VGA controller])
	Subsystem: ASUSTeK Computer Inc. GP104 [1043:8655]
	Flags: fast devsel, IRQ 44
	Memory at f4000000 (32-bit, non-prefetchable) [size=16M]
	Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Memory at d0000000 (64-bit, prefetchable) [size=32M]
	I/O ports at d000 [size=128]
	Expansion ROM at f5000000 [disabled] [size=512K]
	Capabilities: <access denied>
	Kernel driver in use: vfio-pci
	Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

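This check can also be scripted by parsing the driver line; a sketch working on the sample line above (on the live system you would feed it from lspci -nnk -d 10de:1b83 instead):

```shell
#!/bin/sh
# Sample line as printed by `lspci -nnk`; on a live system use:
#   lspci -nnk -d 10de:1b83
line='	Kernel driver in use: vfio-pci'

driver=$(printf '%s\n' "$line" | sed -n 's/^[[:space:]]*Kernel driver in use: //p')
if [ "$driver" = "vfio-pci" ]; then
    echo "GPU is isolated"
else
    echo "GPU is NOT isolated (driver in use: $driver)"
fi
```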

Congratulations, the hardest part is done! 🙂

Creating the Windows virtual machine

The virtualization is done via an open-source machine emulator and virtualizer called QEMU. One can either run QEMU directly, or use a GUI called virt-manager to set up and run a virtual machine. I prefer using the GUI. Unfortunately, not every setting is supported in the virtual machine manager. Thus, I define the basic settings in the UI, do a quick VM start, and force-stop it right after I see that the GPU is passed through correctly. Afterwards, one can edit the missing bits into the VM config via virsh.

Make sure you have your Windows ISO file, as well as the virtio Windows drivers, downloaded and ready for the installation.

Preconfiguration steps

As I said, lots of variable parts can add complexity to a passthrough guide. Before we can continue we have to make a decision about the storage type of the virtual machine.

Creating image container

In this guide I use a raw image container; see the storage post for further information.

fallocate -l 111G /media/vm/win10.img

The 111G was chosen in order to maximize the size of the image file while still fitting on the 120GB SSD.
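The 111G follows from the usual decimal/binary mismatch: a drive sold as “120 GB” holds 120 × 10⁹ bytes, which is only about 111 GiB, and the G suffix of fallocate is binary:

```shell
#!/bin/sh
# A "120 GB" SSD is labeled in decimal bytes; fallocate's "G" suffix means GiB.
drive_bytes=120000000000
gib=$(( drive_bytes / 1024 / 1024 / 1024 ))
echo "a 120 GB drive holds at most ${gib} GiB"
```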

Creating an Ethernet Bridge

We will use a bridged connection for the virtual machine. This requires a wired connection to the computer.

I simply followed the great guide from heiko here.

See the ethernet setup post for further information.

Create a new virtual machine

As said before, we use the virtual machine manager GUI to create the virtual machine with basic settings.

In order to do so, start up the manager and click the “Create a new virtual machine” button.

Step 1

Select “Local install media” and proceed forward (see figure 2).

Figure2: Create a virtual machine step 1 – select local installation medium

Step 2

Now we have to select the Windows ISO file we want to use for the installation (see figure 3). Also check the automatic system detection. Hint: Use the button “Browse local” (one of the buttons on the right side) to browse to the ISO location.

Figure3: Create a virtual machine step 2 – select the windows iso file.

Step 3

Put in the amount of RAM and the number of CPU cores you want to pass through and continue with the wizard. I want to use 12 cores (16 is the maximum) and 16384 MiB of RAM in my VM.

Figure4: Create a virtual machine step 3 – Memory and CPU settings

Step 4

Here we have to choose our previously created storage file and continue.

Figure5: Create a virtual machine step 4 – Select the previously created storage.

Step 5

The last step requires slightly more clicks.

Put in a meaningful name for the virtual machine. This becomes the name of the XML config file, so I would avoid anything with spaces in it. It might work without a problem, but I wasn't brave enough to try it in the past.

Furthermore make sure you check “Customize configuration before install”.

For the “Network selection” pick “Specify shared device name” and type in the name of the network bridge we created previously. You can use ifconfig in a terminal to show your ethernet devices. In my case that is “bridge0”.

Figure6: Create a virtual machine step 5 – Before installation.

First configuration

Once you have pressed “Finish”, the virtual machine configuration window opens. The left column displays all hardware devices this VM uses. By left-clicking on them, you see the options for the device on the right side. You can remove hardware via right-click. You can add more hardware via the button below. Make sure to hit “Apply” after every change.

The following screenshots may vary slightly from your GUI (as I have added and removed some hardware devices).

Overview

On the “Overview” entry in the list, make sure that for “Firmware” UEFI x86_64 [...] OVMF [...] is selected. “Chipset” should be i440FX, see figure 7.

Figure7: Virtual machine configuration – Overview configuration

CPUs

For the “Model:” click into the drop-down, as if it were a text field, and type in host-passthrough. This will pass all CPU information to the guest.

In case of an AMD Ryzen processor it is recommended to use “EPYC” from the model drop down, especially for QEMU versions below 4.0. You can read the CPU Model Information chapter in the performance guide for further information.

For “Topology” check “Manually set CPU topology” with the following values:

  • Sockets: 1
  • Cores: 4
  • Threads: 2
Figure8: Virtual machine configuration – CPU configuration (outdated screenshot)
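The topology values must multiply out to the number of vCPUs the guest is given (sockets × cores × threads); with the values above, the guest sees 8 logical CPUs, matching the vcpu count in the XML configuration at the end of this article:

```shell
#!/bin/sh
# sockets * cores * threads = number of logical CPUs the guest sees
sockets=1
cores=4
threads=2
vcpus=$(( sockets * cores * threads ))
echo "guest sees $vcpus logical CPUs"
```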

Disk 1

When you first enter this section it will say “IDE Disk 1”. We have to change the “Disk bus:” value to VirtIO.

Figure9: Virtual machine configuration – Disk configuration

VirtIO Driver

Next we have to add the virtIO driver ISO, so it can be used during the Windows installation. Otherwise the installer cannot recognize the storage volume we have just changed from IDE to VirtIO.

In order to add the driver press “Add Hardware”, select “Storage”, and select the downloaded image file.

For “Device type:” select CDROM device. For “Bus type:” select IDE, otherwise Windows will not find the CDROM either 😛 (see figure 10).

Figure10: Virtual machine configuration – Adding virtIO driver CDROM.

The GPU passthrough

Finally! In order to complete the GPU passthrough, we have to add our guest GPU and the USB controller to the virtual machine. Click “Add Hardware”, select “PCI Host Device” and find the device by its ID. Do this three times:

  • 0000:0b:00.0 for the GeForce GTX 1060
  • 0000:0b:00.1 for the GeForce GTX 1060 audio device
  • 0000:0c:00.3 for the USB controller
Figure11: Virtual machine configuration – Adding PCI devices (screenshot is still with old hardware).

Remark: In case you later add further hardware (e.g. another PCIe device), these IDs might change – if you change the hardware, just redo this step with updated IDs (see Update 2).

That should be it. Plug a second mouse and keyboard into the USB ports of the passed-through controller (see figure 1).

Hit “Begin installation”; a TianoCore logo should appear on the monitor connected to the guest GPU (the GTX 1060). If a funny white and yellow shell pops up, you can type exit to leave it.

If nothing happens, make sure you have both CDROM devices (one for each ISO: Windows 10 and the virtIO drivers) in your list. Also check the “Boot Options” entry.

Once you see the Windows installation, use “Force Off” from the virtual machine manager to stop the VM.

Final configuration and optional steps

In order to edit the virtual machine's configuration use: virsh edit your-windows-vm-name

Once you're done editing, press CTRL+X, then Y, then ENTER to exit the editor and save the changes (assuming nano is your default editor).

I have added the following changes to my configuration:

AMD Ryzen CPU optimizations

I moved this section in a separate article – see the CPU pinning part of the performance optimization article.

Hugepages for better RAM performance

This step is optional and requires previous setup: see the hugepages post for details.

Find the line which ends with </currentMemory> and add the following block after it:

  <memoryBacking>   
    <hugepages/> 
  </memoryBacking>

Remark: Make sure <memoryBacking> and <currentMemory> have the same indentation.
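Note that the host must have enough hugepages reserved to back the entire guest RAM, or the VM will fail to start. A sketch of the arithmetic, assuming the default 2 MiB hugepage size on x86_64 and the 16384 MiB guest RAM configured earlier:

```shell
#!/bin/sh
# Number of 2 MiB hugepages required to back the guest memory.
guest_mem_mib=16384
hugepage_kib=2048   # default hugepage size on x86_64
pages=$(( guest_mem_mib * 1024 / hugepage_kib ))
echo "reserve at least $pages hugepages (e.g. via vm.nr_hugepages)"
```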

Performance tuning

This article describes performance optimizations for gaming on a virtual machine (VM) with GPU passthrough.

Troubleshooting

Removing Error 43 for Nvidia cards

This guide uses an Nvidia card as guest GPU. Unfortunately, the Nvidia driver throws Error 43 if it recognizes that the GPU is being passed through to a virtual machine.

I rewrote this section and moved it into a separate article.

Getting audio to work

After some sleepless nights, I wrote a separate article on that matter.

Removing stutter on Guest

There are quite a few software and hardware version combinations which will result in poor guest performance.

I have created a separate article on known issues and common errors.

My final virtual machine libvirt XML configuration

This is my final XML file

<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <name>win10-i440fx-lg</name>
  <uuid>073f2a4e-5ab2-4bc7-99c2-2ac006adc87e</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <iothreads>2</iothreads>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='12'/>
    <vcpupin vcpu='5' cpuset='13'/>
    <vcpupin vcpu='6' cpuset='14'/>
    <vcpupin vcpu='7' cpuset='15'/>
    <emulatorpin cpuset='0-1'/>
    <iothreadpin iothread='1' cpuset='0-1'/>
    <iothreadpin iothread='2' cpuset='2-3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram>/var/lib/libvirt/qemu/nvram/win10-i440fx-lg_VARS.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vpindex state='on'/>
      <synic state='on'/>
      <stimer state='on'/>
      <reset state='on'/>
      <vendor_id state='on' value='1234567890ab'/>
      <frequencies state='on'/>
    </hyperv>
    <kvm>
      <hidden state='on'/>
    </kvm>
    <vmport state='off'/>
    <ioapic driver='kvm'/>
  </features>
  <cpu mode='custom' match='exact' check='none'>
    <model fallback='allow'>EPYC</model>
    <topology sockets='1' cores='4' threads='2'/>
    <feature policy='require' name='topoext'/>
    <feature policy='require' name='svm'/>
    <feature policy='require' name='apic'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='invtsc'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' present='no' tickpolicy='catchup'/>
    <timer name='pit' present='no' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
    <timer name='kvmclock' present='no'/>
    <timer name='hypervclock' present='yes'/>
    <timer name='tsc' present='yes' mode='native'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <pm>
    <suspend-to-mem enabled='no'/>
    <suspend-to-disk enabled='no'/>
  </pm>
  <devices>
    <emulator>/usr/local/bin/qemu4.1-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/media/vm/win10.img'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/media/vm2/win10_drive_d.img'/>
      <target dev='vdb' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0d' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <controller type='scsi' index='0' model='lsilogic'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:54:95:1f'/>
      <source bridge='bridge0'/>
      <model type='rtl8139'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='spice' autoport='yes'>
      <listen type='address'/>
      <gl enable='no' rendernode='/dev/dri/by-path/pci-0000:0a:00.0-render'/>
    </graphics>
    <sound model='ich6'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </sound>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0c' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </hostdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='1'/>
    </redirdev>
    <redirdev bus='usb' type='spicevmc'>
      <address type='usb' bus='0' port='2'/>
    </redirdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </memballoon>
    <shmem name='looking-glass'>
      <model type='ivshmem-plain'/>
      <size unit='M'>64</size>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0c' function='0x0'/>
    </shmem>
  </devices>
  <seclabel type='dynamic' model='apparmor' relabel='yes'/>
  <seclabel type='dynamic' model='dac' relabel='yes'/>
  <qemu:commandline>
    <qemu:env name='QEMU_AUDIO_DRV' value='pa'/>
    <qemu:env name='QEMU_PA_SAMPLES' value='8192'/>
    <qemu:env name='QEMU_AUDIO_TIMER_PERIOD' value='99'/>
    <qemu:env name='QEMU_PA_SERVER' value='/run/user/1000/pulse/native'/>
  </qemu:commandline>
</domain>


to be continued…


Sources

The glorious Arch wiki

heiko-sieger.info: Really comprehensive guide

Great post by user “MichealS” on the level1techs.com forum

Wendel’s draft post on level1techs.com

Updates

  • 2019-02-08 – Fixed typo and added a remark about paid licenses for ukuu.
  • 2019-08-14 – Updated SEO settings and added table of contents
  • 2019-11-09 – Added further remarks and reminder to the article

67 comments on “Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04; or how to play competitive games in a virtual machine.”

  1. gman

    Mathias,

    Thank you for this, I’m a newbie to this topic and have a question regarding Groups, when I run the bash script, the NVIDIA card I hope to isolate and passthrough seems to be grouped with many other USB and SATA controllers…is this problematic? My MB is a Gigabyte AB350M-Gaming3.

    I would appreciate your advice on how to proceed with this. Thank you.

    See some sample output below:

    Group 12 (which contains GPU for Isolation)

    IOMMU Group 12 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset USB 3.1 xHCI Controller [1022:43bb] (rev 02)
    IOMMU Group 12 01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset SATA Controller [1022:43b7] (rev 02)
    IOMMU Group 12 01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Device [1022:43b2] (rev 02)
    IOMMU Group 12 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port [1022:43b4] (rev 02)
    IOMMU Group 12 03:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller [10ec:8168] (rev 0c)
    IOMMU Group 12 05:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK106 [GeForce GTX 650 Ti] [10de:11c6] (rev a1)
    IOMMU Group 12 05:00.1 Audio device [0403]: NVIDIA Corporation GK106 HDMI Audio Controller [10de:0e0b] (rev a1)

    Group 13 (only contains primary Host video card)
    IOMMU Group 13 06:00.0 VGA compatible controller [0300]: NVIDIA Corporation G86 [GeForce 8400 GS] [10de:0422] (rev a1)

    I see no other groups for USB ports that aren’t already accounted for in Group 12.

    Reply
    1. Mathias Hueber

      Well, if you want to pass through Group 12, you have to pass all devices to the VM. Considering your output, I would try to pass Group 13 to the VM if that is possible. I have no experience with the Gigabyte AB350M-Gaming3 – have you tried looking for other passthrough success stories with said MB?

      Reply
  2. Pat

    Great tutorial, but I think in “selected devices for isolation” you’ve left the IDs from your 16.04 tutorial, which don’t match the IDs in the script output, so it gets a bit confusing, as 10de:0fbb doesn’t seem to exist.

    Reply
    1. Mathias Hueber

      Indeed, you are correct. I had updated the first ID but forgot to change the Definition Audio Controller ID as well. Good catch – thank you!

      Reply
  3. Ben

    I have a 1700X, and whenever I use 2 threads it tells me “qemu3.1-system-x86_64: warning: This family of AMD CPU doesn’t support hyperthreading(2)”. I’m using QEMU 3.1 – is there anything I need to do?

    Reply
    1. Mathias Hueber

      Hello,
      sorry for the delayed reply… I am lazy with these comments. Does the problem still exist?
      If so, have you used “kvm ignore_msrs=1”?
      You can email me your VM XML if you want, so I can have a look at it.

      Cheers m.

      Reply
  4. Jo

    You should not use a newer BIOS than 4207 (8th Dec 2018)!
    (I tried to in April 2019)

    After upgrading to the newest version, passing-through didn’t work anymore with the separated x16-slots (unknown PCIe header message in VMs).
    Downgrading back to 4207 was a pain, because downgrading seems not to be supported by the Asus tools (but it is possible).

    Reply
    1. Chuck R

      You have to apply a kernel patch from https://clbin.com/VCiYJ
      I banged my head against a wall for a while due to this issue, but using this patch got me past the Unknown PCIe Header 127 (or 0x7F). Then, I just had to fix the Error 43.

      Reply
      1. Mathias Hueber

        Thanks for the input. What BIOS version and CPU combination are you running? I have read that the AGESA update for Ryzen 3000 series support broke quite some setups.

        Reply
  5. simon

    Thanks for sharing this post – it is a very helpful article.

    Reply
    1. Mathias Hueber

      You’re welcome

      Reply
  6. James Sevener

    Hello Mathias,

    Thank you a ton for your guide. The part I was missing is that the VM needs to be UEFI, and that you have to stop the VM from starting the first time and make the changes to the XML.

    Have you had issues with Windows updates failing to install with code 0xc1900101?
    It seems to be driver related; I wiped my VM and started over, and I still can’t get Windows update 1803 to install.

    Reply
    1. Mathias Hueber

      Have you enabled kvm.ignore_msrs=1? Related post can be found here: https://old.reddit.com/r/VFIO/comments/901ioi/win10_1803_installs_failing_in_kvm_on_amd_hardware/
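For readers hitting the same failing updates: the flag is usually made persistent via a modprobe configuration file (the file name below is a common convention, not a requirement):

```
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
```

Afterwards rebuild the initramfs with `sudo update-initramfs -u` and reboot; `cat /sys/module/kvm/parameters/ignore_msrs` should then report Y.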

      Reply
  7. datapacket

    Hi Mathias, do you think it’s possible to do the same as your tutorial with two cards that have the same card ID?

    Because I have 2x RTX 2080, so I have the same card ID, but the PCI IDs are different. I am searching for a solution to do a proper PCI passthrough with only one of them (obviously); I have tested lots of tutorials and nothing seems to work (I will try your tutorial when I’m at home).

    I’m on an Intel i9, 2x RTX 2080, Xubuntu 18.04

    Reply
  8. OneSphere

    Do you need to have two graphics cards for this? I have one RTX 2080 and a Ryzen 7 2700.

    Reply
    1. Mathias Hueber

      Well, in case you run the host headless, you would only need one gfx card for the guest. But this would render my use case pointless, as the host would have no desktop 🙂
      I think only the Ryzen G series have integrated graphics. The “regular” ones (including the 2700) do not have integrated graphics. Thus, two cards would be required.

      Reply
  9. Gareth Hauber

    Just wanted to say thanks

    Reply
  10. Wayne

    How could you switch between host & guest on a display quickly using the keyboard – if you’re using GPU passthrough (meaning the guest has a dedicated GPU)?

    Reply
  11. Jose Luis

    Hi buddy, you’re amazing – a very clear and straightforward tutorial, thank you so much. I have a question: I own a Dell G3 15 2019 laptop; the main differences from the last version are the processor (i7 ninth generation), the new 1660 Ti, and a new NVMe SSD that is supposedly faster than the fastest SSD available now. My plan is as follows:

    – I already tried to install video card passthrough, but it didn’t work; I was using Ubuntu 19.04, but some configuration issues made me desist.

    – I need this laptop as an ethical hacking lab, and the KVM solution is ideal for this purpose, due to the nearly bare-metal speed; but the graphics card is a waste if the laptop is only used to run virtual machines for the lab. On the other hand, full virtualization, including Hyper-V and VMware Workstation, is a pain in the neck: slow, not so easy to configure, etc.

    – So, I plan to install a Windows 10 virtual machine that uses the video card for gaming, design, or whatever; my wish is to not need a separate keyboard, mouse, and monitor.

    – I completely understand your guide, congratulations again, so:

    Do you have any advice on this setup in order to have a successful installation?

    Regards and continue with the great job.

    Reply
  12. Claus

    Very good tutorial on the topic. It all worked perfectly for me until it came to the point where the spinning wheel from Windows should show up. It did not on the passed-through monitor; I could only see it on virt-manager’s screen on the host, which is mirrored. Before that, the TianoCore logo and the virtual BIOS setup did show up on the redirected monitor. I tried all sorts of things for two days, but nothing helped. Do you have any idea where this problem is located? I would really appreciate any hint here, or email me.
    Thanks in advance, Claus

    Reply
  13. Claus

    By the way, I noticed an inconsistency in your xml config file.
    Here you address qemu 3.1 in the machine attribute:
    hvm
    while here you reference qemu 3.0:
    /usr/local/bin/qemu3.0-system-x86_64

    Reply
    1. Mathias Hueber

      Thank you for the input. I have updated the section with the latest QEMU version.

      Reply
  14. Boho

    Hey, Mathias
    I have zero experience with this, and I don’t even have the hardware yet, but just to be clear before I end up deadlocked with my purchase: there’s no way to have a shared PCIe system with Ryzen 3000 and a single GPU because of IOMMU, is there?
    Since I plan to purchase gigabyte.com/us/Motherboard/X570-I-AORUS-PRO-WIFI-rev-10 and it has a single PCIe slot, am I doomed to fail with that config? Is there any workaround?

    Reply
    1. Mathias Hueber

      Hello
      You need two GPUs. Ryzen CPUs do not have an integrated GPU; thus, you need two PCIe GPUs. It is possible to get it working with Intel integrated graphics, for example.

      Reply
  15. Smack2k

    Question,

    Can you set up multiple VMs (Windows 98 / Windows XP) using the same extra GPU, as long as you don’t run both VMs at the same time?

    Reply
    1. Smack2k

      I meant to also add: can you also have two guest GPUs in the system and set each one up for a VM? One for Windows 98 and one for Windows XP – they would be PCIe cards of differing varieties.

      Reply
      1. Mathias Hueber

        Yes, this should work, but the hardware you use has to support it as well. If both VMs should run at the same time, each GPU needs to be in its own IOMMU group, plus optionally a GPU for the host (not required).
        If only one of the VMs runs at a time, you could always pass through both cards and just decide inside the VM which one to use.

        Reply
    2. Mathias Hueber

      Yes that is possible.

      Reply
      1. Smack2k

        Thanks… this is going to be my next project, to see how close I can get things running for both 98 and XP.

        I guess my only issue would be sound for the systems, but I assume a sound card or onboard audio could take care of that…

        Reply
        1. Smack2k

          One other thing…

          Can the base OS setup and the created VMs run on older hardware that does have VFIO support? In terms of performance and experience, would something like an i7-3770 have enough oomph for it? Also, 16 GB RAM? It would be for the Linux OS and the 98 and XP VMs, which don’t have to be running at the same time.

          Reply
  16. Jake

    “Attention! After the upcoming steps, the guest GPU will be ignored by the host OS. You have to have a second GPU for the host OS now!”

    Can you recommend any way to share a GPU with both the host and the VM? I am building my first PC and hope to use a Windows VM with passthrough for gaming, but Ubuntu for software development and data science, as well as all other day to day internet / computing tasks.

    Reply
    1. Mathias Hueber

      There are ways to do this.

      1. In case your CPU has an integrated GPU, you can use it for the host.

      2. If you do not need the host while the gaming VM is on, you can leave it headless.

      Search reddit.com/r/vfio – the question is quite common.

      Reply
  17. LinuxUser

    Hello, and I’d like to first thank you for making this guide.

    I intend to pass through an Nvidia GPU to the VM, and I’m getting the known Error 43. However, the fix does not seem to solve the issue for me. I’ve noticed you mention QEMU 3.0 or above in the fix post – does it not work if I use a lower version? I’m currently using the one Ubuntu 18 gets by default (2.11).

    Thank you!

    Reply
  18. Andreas

    Hi…

    first I would say huge “Thank you!” for this great guide

    initially… my CPU and software configuration:
    CPU: Ryzen 7 1700X
    OS: Linux Mint 19
    Kernel: 4.18 (using this kernel due to information in the URL below)
    QEMU 4.1.1

    I always get the message “qemu-system-x86_64: warning: This family of AMD CPU doesn’t support hyperthreading(2)”, and in Windows I have 12 virtual cores.

    I found that this was a known bug in mid-2019:
    https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=897054
    And if I understand it correctly (any comments/corrections are appreciated), it should not be present anymore when using QEMU 4.1.1 and a kernel greater than 4.16.

    But the situation is still present.
    Does anyone have experience with this, or am I missing something?

    Reply
    1. Andreas

      Update:
      the issue is not present anymore. Unfortunately, I don’t know which step was the actual solution, as mine was not really a scientific approach…

      But it seems to me that something was incorrect with the dependencies/programs/packages related to the QEMU version. Running “dpkg -l | grep libvirt” showed that several packages were installed from a PPA and not from the “ubuntu8.14” repo (if I read it correctly, the entry right next to the package indicates the origin of the respective package). I was puzzled, as I was not aware of having a PPA in my package sources.

      I removed everything originating from the PPA with apt-get purge [package-name] + qemu*, and ran “sudo apt-get install libvirt-bin bridge-utils virt-manager qemu-kvm ovmf” again. Afterwards I followed, step by step, the detailed guide for upgrading QEMU which is linked here in this guide.

      I don’t know whether I made a mistake in my first approach to updating QEMU or whether something really was wrong with the dependencies, but the emitted error/message is gone.

      Rather unfortunately, though, it is still not really running: it takes about 3 minutes until I see the TianoCore logo every time I start the VM.
      I think I will try another guide.

      But nevertheless, a huge thank you!
      It is a very well-written guide, and especially the combination with a guide for CPU pinning was great.

      Reply
      1. Mathias Hueber

        Hey Andreas,
        thank you for the kind words – glad the guides help.
        Maybe you can find some help in the subreddit /r/vfio;
        the community is pretty helpful.
        A good guide, especially for Mint users, is from Heiko Sieger.

        *Sorry for the late response, I am currently battling comment spam.

        Reply
  19. Yousof

    Hey,
    I followed this and it worked. But I needed to update my kernel (for some Bluetooth issues); then I had to run `sudo dpkg-reconfigure nvidia-dkms-440`, and after reboot the Nvidia driver gets loaded for the guest GPU. I ran all the commands here again, but it did not help. Do you know how I can load vfio-pci for the guest GPU again?
    Thanks

    Reply
    1. Mathias Hueber

      First make sure “softdep nvidia* pre: vfio-pci” is set correctly in your nvidia.conf.

      I think I had a similar problem to the one you described. I think I solved it by playing around with the “Additional Drivers” settings. It was something like:
      – install the new Nvidia driver, reboot
      – set both to the Nvidia driver, reboot
      – set the guest GPU to Nouveau and the host GPU to Nvidia, reboot

      …something like this, but it is a long time ago that I had that problem.

      Reply
  20. Graeme

    Could you show a worked example of passing through an old VGA device that uses BIOS, not UEFI?
    E.g. an old AMD HD 4650, or better still an S3 VGA card, ATI Rage XL, Cirrus GD5436, etc. – for legacy gaming.

    Reply
  21. Harry Tang

    THX for your tutorial!!

    I have a question: I want to set up two KVM guests, each with its own specific GPU, running applications simultaneously.
    So, is it possible for the host to use no GPU, with both GPUs used by the two VMs?

    Reply
  22. Lindsey

    Hi. So I followed your guide. It’s working well. I am using Windows 7 with SeaBIOS, and I didn’t have that error 43. Games run smoothly.

    The only issue is the sound: whenever I play music through the 750 Ti, there is a slight crackle. So how do I go about totally eliminating the crackling noise?

    Reply
    1. Mathias Hueber

      If you can run QEMU 4.2 this should be fixed. Check out the update on the audio article

      Reply
  23. Phoenix

    Ubuntu 20.04 has the vfio driver built into the kernel now, so some adjustments are required: https://www.reddit.com/r/VFIO/comments/g8vdd3/vfio_broke_on_ubuntu_2004_upgrade/

    Reply
    1. Mathias Hueber

      Yeah, I have written a new article for Ubuntu 20.04. It is linked in the first paragraph of this page.

      Reply
  24. Vergo

    I was wondering how it was known in advance that “BIOS versions from (and including) 5406, 2019/11/25 are working”, particularly for the Asus X370 Prime Pro board. Its 5601 BIOS got released less than a fortnight ago.

    Also, has anybody had success with BIOS 5601 on the Asus X370 Prime Pro, where success means no [Unknown PCI header type ‘127’] error?

    Reply
    1. Mathias Hueber

      It wasn’t known beforehand. It was written in tears on the vfio subreddit 🙂

      Reply
  25. kW

    Did you do any benchmarking on this? Really curious how it performs on games. 🙂

    Reply
    1. Mathias Hueber

      It performs very well. I created a related article about performance tuning in which I wrote about the benchmarking. More important than simple FPS is the input latency for gaming in a virtual machine.

      Reply
  26. S D

    I am very new to Ubuntu/Linux, and my issue is that when I run “lspci -nnv” to verify the install, it doesn’t change to vfio-pci. I am using an old Radeon HD 4870 to isolate for my VM. I think the problem has something to do with that, since I am not using an Nvidia GPU. I can try the same thing with my 1050 Ti, but I would like to keep that GPU for the host system if possible. Even if we can’t find the problem, I greatly appreciate any response.

    Reply
    1. x

      never mind, I have since fixed the issue

      Reply
      1. michael

        Oh my god, you can’t just say that and not put how you fixed the issue – I am having the exact same issue.

        Reply
  27. Davis

    Thanks for the excellent guides; you’ve solved my audio issues as well as losing vfio after an update. After updating Ubuntu 18.04.4 LTS, the kernel is now 5.4 – so for those having issues: if uname -r returns 5.4, follow the 20.04 guide.

    You may want to add a note to this page.

    Thanks again and keep up the great work.

    Reply
  28. guyrodge

    Many thanks for this, awesome!!!!
    Just for info:
    when adding hardware of type “USB host device” to be able to use the second mouse, I had an error: “Cannot add my mouse in QEMU/KVM: Vendor ID Cannot Be 0”.
    I had to edit the virsh XML and manually add the vendor ID and product ID of my 2nd mouse into the tags (easily found via dmesg after unplugging and replugging it).

    Reply
  29. C0D3 M4513R

    I also had to install linux-aws as a dependency, for the gpu isolation to work.

    Reply
  30. Torananlis

    One of the best set of instructions ever seen. Cheers

    Reply
    1. Mathias Hueber

      Glad it is helpful

      Reply
  31. Ofesad

    Hi Mathias! Greetings from Argentina!
    I am in the process of upgrading a PC, and your post seems to be the guide I need.
    However, my idea for it differs in some aspects. Let me tell you about it, and maybe you could give me some pointers on which steps are mandatory and which are not.
    First, the specs:
    AMD FX8350 + Asus M5A99X Evo R2 (latest BIOS, SVM on, IOMMU on) + 16 GB DDR3 RAM + 250 GB SSD + 6 TB HDD + AMD R9 270X in PCIe 1 + ATI All In Wonder X800XL in PCIe 2

    This PC is mainly for sharing the 6 TB drive over LAN (like a NAS, but nothing too complicated; it’s just to store video captures) and doing video capture using Windows XP with the ATI AIW card; those captures should be saved on the 6 TB drive.
    It would be great to run it on Linux (Ubuntu preferred) so I can use it for some other projects.

    I researched several ways to achieve the XP VM and PCIe passthrough: Oracle VirtualBox (got lots of errors during install), VMware (can’t afford it), Windows Server 2019 (pending testing), and other ideas waiting to be tested.
    But I found your guide to be quite helpful, and it seems to fit my needs better than the others.
    Now, a question that is kind of unclear to me:
    since the ATI AIW X800 is capable of video output (it’s a GPU + capture card), but that’s not what I want – I just want it to capture, and to be able to see everything in the VM window from the R9 270X, meaning using only one monitor – is this possible with this method? Or should I use a second monitor connected to the AIW?

    At this point, I managed to install XP SP3 and add the X800 following your steps. I even added the VirtIO HDD using the drivers (adding a diskette, because XP doesn’t read drivers from CDs). Installed XP, got to the desktop. So far, fine. Then I install the X800 drivers, reboot, and it freezes at the Windows logo loading screen (the blue moving bars stop moving entirely).
    I tried different configurations and setups. Changed the BIOS.
    Same results.
    I know it is a lot to ask, but could you give me a hand in solving this issue?
    Thank you so much for your hard work!

    Reply
  32. Ron Rebensdorf

    Hello Mathias,
    great article, but since I am setting up a GPU passthrough on CentOS 7, I don’t have these locations/files…
    “Isolation of the guest GPU”
    In order to isolate the gfx card modify /etc/initramfs-tools/modules via:
    sudo nano /etc/initramfs-tools/modules and add:
    vfio vfio_iommu_type1 vfio_virqfd vfio_pci ids=10de:1b83,10de:10f0

    modify /etc/modules as well via: sudo nano /etc/modules and add:
    vfio vfio_iommu_type1 vfio_pci ids=10de:1b83,10de:10f0

    Should I create them anyway?
    Any info would be helpful.
    Thanks in advance

    Reply
    1. Mathias Hueber

      Hey Ron,
      have you checked out the updated version of the article? I describe two possible solutions for the GPU isolation: one via a grub command – this can easily be used in case the GPU types are not identical – and the other via a script. Maybe this is helpful for your problem.

      Another thing: I am not sure how CentOS handles kernel versions, but in my short research for this reply I saw CentOS 7 runs kernel 3.XX, which can be problematic for VM performance, especially with Ryzen systems. Keep that in mind. In case you want to optimize the performance for gaming purposes, have a look at this article.

      I hope I was able to help you.

      cheers,
      M.
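As a rough sketch, the grub-based variant mentioned above amounts to a kernel command line entry like the following (device IDs taken from Ron's question and specific to his cards; the IOMMU flags shown are the AMD ones, and the surrounding options depend on your system):

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt vfio-pci.ids=10de:1b83,10de:10f0"
```

On Ubuntu this is followed by `sudo update-grub` and a reboot; on CentOS 7 the variable is typically `GRUB_CMDLINE_LINUX` and the config is regenerated with `grub2-mkconfig -o /boot/grub2/grub.cfg`.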

      Reply
  33. Windows VM on Linux with dedicated GPU and memory - Boot Panic

    […] You can find the details on doing so under Ubuntu in the article Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04. […]

    Reply
  34. Pop!_OS : setting kernel parameters (boot loader options) permanently is not working - Boot Panic

    […] experimenting GPU passthrough with QEMU (following the steps given here) on Pop!_OS and in the process I added kernel parameters. Now I want to remove the intel_iommu=on […]

    Reply
  35. vbox gpu passthrough - databaseen

    […] Beginner friendly guide to GPU passthrough on Ubuntu 18.04 […]

    Reply
  36. Windows VM on Linux with dedicated GPU and memory

    […] You can find the details on doing so under Ubuntu in the article Beginner friendly guide to windows virtual machines with GPU passthrough on Ubuntu 18.04. […]

    Reply
