IOMMU
#Proxmox #IOMMU Passing PCI devices through to VMs in Proxmox.
If you need to pass hardware devices through to a VM, you need IOMMU.
Some applications only work when a PCI device is passed through to them directly. This means two things: you can't use CTs, and you need IOMMU.
Devices that may require passing through to a virtual machine include network cards, disk and RAID controllers, and other PCI or USB components. Some network cards need more options to be stable; I have a sample below.
Passing through graphics cards requires additional options that depend on your setup. As I do not use GPU passthrough myself, I lack the necessary experience with those options.
Make sure your server's CPU implements Intel's VT-x and VT-d or AMD's AMD-V and AMD-Vi (find your CPU with lscpu
and then check the specs). Then IOMMU (I/O Memory Management Unit) interrupt remapping is supported.
Whether it actually works also depends on the quality of your motherboard.
With AMD it is usually on automatically, but Intel needs some work.
Check for IOMMU support
grep -E --color 'vmx|svm' /proc/cpuinfo
(vmx is Intel VT-x, svm is AMD-V.)
Motherboard requirements
Your motherboard needs to support IOMMU. Note that, as of writing, the compatibility lists referenced below are incomplete and very out of date; most newer motherboards support IOMMU.
Update files and enable IOMMU
Start a terminal and check whether you are booting with GRUB or EFI: efibootmgr -v
- Output: EFI variables are not supported on this system → you boot with GRUB; use Update GRUB
- Output: a listing of EFI boot entries → you boot with EFI; use Update EFI
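The decision above can be sketched as a small helper; classify_boot is a hypothetical function for illustration, not part of Proxmox:

```shell
# Hypothetical helper: decide which update path to follow
# based on the text efibootmgr prints.
classify_boot() {
  case "$1" in
    *"EFI variables are not supported"*) echo "grub" ;;
    *) echo "efi" ;;
  esac
}

classify_boot "EFI variables are not supported on this system"   # grub
classify_boot "BootCurrent: 0004"                                # efi
```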
Add the needed modules and update
Edit /etc/modules; these modules need to load at boot:
vi /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
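The four lines above can also be appended non-interactively. A sketch, using a temporary file as a stand-in for /etc/modules so nothing on the host is touched:

```shell
# Sketch: append the vfio modules if not already present.
# A temp file stands in for /etc/modules here; point f at the
# real file on your host instead.
f=$(mktemp)
for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
  grep -qx "$m" "$f" || echo "$m" >> "$f"
done
cat "$f"
```

The grep guard makes the loop safe to re-run; modules already listed are not duplicated.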
Update initramfs
This might take some time to execute, especially if you have many old kernels
update-initramfs -u -k all
Update GRUB
vi /etc/default/grub
Add the following lines, uncommenting as needed (for GPUs you need more options)
GRUB_CMDLINE_LINUX="intel_iommu=on iommu=pt"
#GRUB_CMDLINE_LINUX="intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"
Save the file and exit the editor
Uncomment to get a beep at grub start
GRUB_INIT_TUNE="480 440 1"
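Editing by hand works fine; as an alternative, the parameters can be appended with sed. A sketch that works on a temporary copy (the quiet value is just an example of pre-existing content):

```shell
# Sketch: append IOMMU parameters to GRUB_CMDLINE_LINUX.
# Works on a temp copy here; only run this against
# /etc/default/grub after checking the result.
f=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet"' > "$f"
sed -i 's/^GRUB_CMDLINE_LINUX="\(.*\)"/GRUB_CMDLINE_LINUX="\1 intel_iommu=on iommu=pt"/' "$f"
cat "$f"   # GRUB_CMDLINE_LINUX="quiet intel_iommu=on iommu=pt"
```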
Apply the changes to the GRUB configuration
update-grub
Update EFI
If you boot ZFS on EFI, PVE uses systemd-boot instead of GRUB,
which means that running update-grub does not work.
Edit the file /etc/kernel/cmdline
vi /etc/kernel/cmdline
and append to the end of the (single) line: intel_iommu=on iommu=pt
You can also try other options, e.g. rootdelay=10 intel_iommu=on; the rootdelay solves some issues with longer RAID-card cables used with expansion boxes.
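The result is a single line; on a default ZFS install it would look something like this (the root dataset name is the PVE default and may differ on your system):

```
root=ZFS=rpool/ROOT/pve-1 boot=zfs intel_iommu=on iommu=pt
```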
Finally, refresh the boot entries:
pve-efiboot-tool refresh
This might take some time to execute, especially if you have many old kernels
Reboot the system now
After the reboot, check for success
dmesg | grep -e IOMMU
Check for iommu groups
dmesg | grep -i -e DMAR -e IOMMU
Check that the previously added kernel modules load correctly
lsmod | grep vfio
Find all your devices (on enterprise-grade servers the list is pages long) and use grep to filter
lspci -nnk
For NICs the command would be
lspci | grep Ethernet
For RAID controllers type
lspci | grep SAS
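For passthrough you often need the device's vendor:device ID, which is the last bracketed pair in the lspci -nn output. A sketch parsing a sample line (the I350 line is just an example, not from any particular host):

```shell
# Sketch: pull the [vendor:device] ID out of an lspci -nn line.
line='01:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)'
id=$(echo "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "$id"   # 8086:1521
```

The class code ([0200]) does not match because it has no colon inside the brackets.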
Now you should be able to pass PCI devices to your VMs
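As a quick illustration (VM ID 100 and PCI address 01:00.0 are placeholders), a device can be attached from the Proxmox shell with qm, or via the GUI under Hardware → Add → PCI Device:

```
qm set 100 -hostpci0 0000:01:00.0,pcie=1
```

The pcie=1 option requires the q35 machine type; omit it for the default i440fx.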
Test for interrupt remapping
dmesg | grep 'remapping'
If it fails, try
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/iommu_unsafe_interrupts.conf
and reboot afterwards so the option takes effect.
For more info, consult your hardware and CPU vendor's documentation and read the Proxmox documentation.
I lost all my settings while upgrading Proxmox. Make notes!
Or, even better, make a script (bash, Ansible, ...).
References
Documentation [1]. Wikis related to the use of IOMMU: PCI [2], PCIe [3], Disk [4], Tutorial [5]. About motherboards [6]. Use a USB to chain-boot from an NVMe on old HW [7]
Proxmox Documentation ↩︎
PCI PCI Passthrough, getting started GitHub ↩︎
PCIe PCIe Passthrough ↩︎
Disk Disk Passtrough ↩︎
MB List Motherboard lists can be found on the Xen wiki and on Wikipedia ↩︎
Old (legacy) computers have a legacy BIOS that can only boot from drives such as HDD, CD-ROM, or USB-HDD. The Clover EFI bootloader makes it possible to boot from an NVMe drive too: GitHub, wiki pages ↩︎