Working a bit further on the setup:
@ASRock System: I don't know whether there is still an error in the BIOS ACPI and/or virtualization tables.
Ubuntu 20.04 LTS, kernel 5.4.0-40-generic, x86_64, BIOS 3.60H, Ryzen 2400G, 16 GB (2x 8 GB) RAM, NVMe + SATA SSDs
The integrated GPU has 1 GB of memory dedicated to it.
Please fix the following if possible (it may not be crucial):
BIOS Settings:
IOMMU = Enabled
SR-IOV = Enabled
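For reference, the kernel-side settings that usually go along with this; just a sketch assuming GRUB and the commonly used AMD IOMMU parameters, not something specific to this board:
Code:
# /etc/default/grub -- sketch, the parameter list is an assumption
# amd_iommu=on enables the IOMMU, iommu=pt keeps host-owned devices in passthrough mode
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on iommu=pt"

# apply the change and reboot afterwards
sudo update-grub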
Code:
dmesg | grep AMD-Vi
[ 0.722001] pci 0000:00:00.2: AMD-Vi: Unable to read/write to IOMMU perf counter.
[ 0.722111] pci 0000:00:00.2: can't derive routing for PCI INT A
[ 0.722111] pci 0000:00:00.2: PCI INT A: not connected
[ 0.722552] pci 0000:00:01.0: Adding to iommu group 0
[ 0.722729] pci 0000:00:01.1: Adding to iommu group 1
[ 0.722967] pci 0000:00:01.7: Adding to iommu group 2
[ 0.723144] pci 0000:00:08.0: Adding to iommu group 3
[ 0.723367] pci 0000:00:08.1: Adding to iommu group 4
[ 0.723588] pci 0000:00:08.2: Adding to iommu group 5
[ 0.723779] pci 0000:00:14.0: Adding to iommu group 6
[ 0.723801] pci 0000:00:14.3: Adding to iommu group 6
[ 0.724034] pci 0000:00:18.0: Adding to iommu group 7
[ 0.724056] pci 0000:00:18.1: Adding to iommu group 7
[ 0.724078] pci 0000:00:18.2: Adding to iommu group 7
[ 0.724098] pci 0000:00:18.3: Adding to iommu group 7
[ 0.724119] pci 0000:00:18.4: Adding to iommu group 7
[ 0.724140] pci 0000:00:18.5: Adding to iommu group 7
[ 0.724161] pci 0000:00:18.6: Adding to iommu group 7
[ 0.724181] pci 0000:00:18.7: Adding to iommu group 7
[ 0.724381] pci 0000:01:00.0: Adding to iommu group 8
[ 0.724604] pci 0000:02:00.0: Adding to iommu group 9
[ 0.724862] pci 0000:03:00.0: Adding to iommu group 10
[ 0.724973] pci 0000:03:00.0: Using iommu direct mapping
[ 0.725127] pci 0000:03:00.1: Adding to iommu group 11
[ 0.725167] pci 0000:03:00.2: Adding to iommu group 11
[ 0.725207] pci 0000:03:00.3: Adding to iommu group 11
[ 0.725246] pci 0000:03:00.4: Adding to iommu group 11
[ 0.725285] pci 0000:03:00.6: Adding to iommu group 11
[ 0.725501] pci 0000:04:00.0: Adding to iommu group 12
[ 0.725731] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.725732] pci 0000:00:00.2: AMD-Vi: Extended features (0x4f77ef22294ada):
[ 0.730860] AMD-Vi: Interrupt remapping enabled
[ 0.730860] AMD-Vi: Virtual APIC enabled
[ 0.730976] AMD-Vi: Lazy IO/TLB flushing enabled
[ 2.586081] AMD-Vi: AMD IOMMUv2 driver by Joerg Roedel <jroedel@suse.de>
The problematic part is the line about the IOMMU perf counter.
The correct output would be:
Code:
"AMD-Vi: IOMMU performance counters supported.
Here is what is found behind 0000:00:00.2 (hwinfo)
Code:
16: PCI 00.2: 0806 IOMMU
[Created at pci.386]
Unique ID: Z+fY.Il0DcxNZf41
SysFS ID: /devices/pci0000:00/0000:00:00.2
SysFS BusID: 0000:00:00.2
Hardware Class: unknown
Model: "AMD Raven/Raven2 IOMMU"
Vendor: pci 0x1022 "AMD"
Device: pci 0x15d1 "Raven/Raven2 IOMMU"
SubVendor: pci 0x1022 "AMD"
SubDevice: pci 0x15d1
IRQ: 25 (no events)
Module Alias: "pci:v00001022d000015D1sv00001022sd000015D1bc08sc06i00"
Config Status: cfg=new, avail=yes, need=no, active=unknown
Here is what is found behind 0000:00:00.2 (sudo lspci -nvv)
Code:
00:00.2 0806: 1022:15d1
Subsystem: 1022:15d1
Control: I/O- Mem- BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 25
Capabilities: [40] Secure device <?>
Capabilities: [64] MSI: Enable+ Count=1/4 Maskable- 64bit+
Address: 00000000fee04000 Data: 4021
Capabilities: [74] HyperTransport: MSI Mapping Enable+ Fixed+
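To double-check how the devices actually ended up grouped (and what shares a group with the GPU at 03:00.0), a small loop over sysfs helps; this is a generic snippet, nothing board-specific:
Code:
#!/bin/bash
# Print every IOMMU group and the devices it contains, read from sysfs
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -n "    "
        lspci -nns "${d##*/}"
    done
done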
Let me describe my goal:
My goal is to switch to Linux in the long run.
Currently I'm running a dual-boot setup (Windows on a 1 TB NVMe drive, Linux on a 500 GB SATA SSD).
If I manage to get virtualization working properly, I would like to switch to Linux and keep a Windows VM for gaming.
The workflow would look like this:
1. Boot into Linux, as usual
2. Work, do stuff
3. Start a script (see the sketch after this list), which
* unbinds the GPU from the host
* starts the VM with the bound GPU in its parameters
* lets me do my gaming/Windows stuff
* shuts down the Windows VM afterwards
* binds the GPU back to the Linux system
4. Continue to work
5. Profit
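For step 3, here is a rough sketch of what such a script could look like using libvirt/virsh. The PCI addresses, the VM name win10 and the surrounding handling are all assumptions; since the 2400G's iGPU is the only GPU in the system, stopping the display manager before detaching it would most likely be needed as well:
Code:
#!/bin/bash
# Sketch of the gaming-session script -- PCI addresses and VM name are assumptions
set -e

VM=win10

# detach the GPU and its audio function from the host and hand them to vfio-pci
virsh nodedev-detach pci_0000_03_00_0
virsh nodedev-detach pci_0000_03_00_1

# start the Windows VM (its XML must list the same devices as hostdevs)
virsh start "$VM"

# wait until the VM is shut down from inside Windows
while virsh domstate "$VM" | grep -q running; do
    sleep 5
done

# give the devices back to the host drivers
virsh nodedev-reattach pci_0000_03_00_1
virsh nodedev-reattach pci_0000_03_00_0

With managed hostdevs, libvirt can also do the detach/reattach on its own when the VM starts and stops, in which case the script shrinks to little more than the start command and the wait loop.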