IOMMU
#Proxmox #IOMMU How to pass PCI devices through to VMs in Proxmox. #Darkmode for Proxmox is easy on the eyes; I use it to spare mine.
If you need to pass hardware through to a VM, you need IOMMU.
Some applications need direct access to PCI devices to be functional. That means two things: you cannot use containers (CTs), and you need IOMMU.
Devices you may want to pass through to a virtual machine include network cards, disk and RAID controllers, and other PCI or USB components. Some network cards need extra options to be stable; there is a sample below.
Passing through a graphics card requires additional options that depend on your setup. Since I do not use GPU passthrough, I lack the necessary experience there.
Make sure your server's CPU implements Intel's VT-x and VT-d, or AMD's AMD-V and AMD-Vi. (Identify your CPU with lscpu
and then check its specifications.) If so, IOMMU (I/O Memory Management Unit) interrupt remapping is supported.
Whether it actually works also depends on the quality of your motherboard.
With AMD it should be enabled automatically, but Intel needs some work.
Check for IOMMU support
grep -E --color 'vmx|svm' /proc/cpuinfo
(vmx indicates Intel VT-x, svm indicates AMD-V)
Motherboard requirements
Your motherboard also needs to support IOMMU. Note that, as of writing, both of the referenced lists are incomplete and quite out of date; most newer motherboards support IOMMU.
Update and enable IOMMU
Start a terminal and check which bootloader you are running: efibootmgr -v
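As a rough sketch, the efibootmgr output can be checked for systemd-boot automatically. This detection heuristic is my own assumption, so verify the efibootmgr -v output yourself:

```shell
# Heuristic sketch (assumption): if the EFI boot entries mention systemd-boot,
# PVE boots via systemd-boot; otherwise assume GRUB (or legacy BIOS).
if efibootmgr -v 2>/dev/null | grep -qi 'systemd-boot'; then
  BOOTLOADER="systemd-boot"   # kernel options go in /etc/kernel/cmdline
else
  BOOTLOADER="grub"           # kernel options go in /etc/default/grub
fi
echo "Detected bootloader: $BOOTLOADER"
```

The result decides which of the two update paths below applies to your host.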
Add the needed modules and update
Edit /etc/modules; you need these modules to load:
nano /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
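The four lines above can also be appended non-interactively instead of using nano. The sketch below writes to a temporary copy for demonstration; on a real PVE host you would point MODULES_FILE at /etc/modules (as root):

```shell
# Demonstration on a temporary copy; on a real host set MODULES_FILE=/etc/modules.
MODULES_FILE=$(mktemp)
for m in vfio vfio_iommu_type1 vfio_pci vfio_virqfd; do
  # grep -qxF: only append the module name if that exact line is not already present
  grep -qxF "$m" "$MODULES_FILE" || printf '%s\n' "$m" >> "$MODULES_FILE"
done
cat "$MODULES_FILE"
```

Because of the grep guard, running it twice does not duplicate entries.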
Update initramfs
update-initramfs -u -k all
Update GRUB
nano /etc/default/grub
Add the line and uncomment if needed (for GPUs you need more options)
GRUB_CMDLINE_LINUX="intel_iommu=on"
#GRUB_CMDLINE_LINUX="intel_iommu=on vfio_iommu_type1.allow_unsafe_interrupts=1"
Save the file and exit nano
Apply the change to the GRUB configuration
update-grub
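For scripting, the same edit can be done with sed instead of nano. The sketch below runs against a temporary stand-in file, since touching the real /etc/default/grub needs root and care:

```shell
# Sketch on a temporary copy of /etc/default/grub; adapt the path for a real host.
GRUB_FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX=""' > "$GRUB_FILE"    # stand-in for the real file
# Insert intel_iommu=on right after the opening quote of GRUB_CMDLINE_LINUX
sed -i 's/^GRUB_CMDLINE_LINUX="/&intel_iommu=on /' "$GRUB_FILE"
cat "$GRUB_FILE"
```

Remember to run update-grub afterwards on a real host.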
Update EFI
If you boot ZFS on EFI, PVE uses systemd-boot instead of GRUB,
which means that running update-grub has no effect.
Edit the file /etc/kernel/cmdline
and append to that line: intel_iommu=on
You can also try rootdelay=10 intel_iommu=on; it solves some issues with longer RAID-card cables used with expansion boxes.
Then execute pve-efiboot-tool refresh (on newer Proxmox versions the tool is called proxmox-boot-tool)
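The append step above can be sketched as follows. It runs on a temporary stand-in file, and the example root= line is an assumption; /etc/kernel/cmdline must stay a single line:

```shell
# Sketch on a temporary stand-in for /etc/kernel/cmdline (a single-line file).
CMDLINE_FILE=$(mktemp)
echo 'root=ZFS=rpool/ROOT/pve-1 boot=zfs' > "$CMDLINE_FILE"  # example content (assumption)
# Append intel_iommu=on to the one and only line, keeping everything on one line
sed -i '1 s/$/ intel_iommu=on/' "$CMDLINE_FILE"
cat "$CMDLINE_FILE"
```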
Reboot the system now
Check that IOMMU is enabled
dmesg | grep -i -e DMAR -e IOMMU
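The dmesg check above only confirms the feature is on; to actually see the IOMMU groups, a small helper can walk /sys. The function and its output format are my own sketch (the sysfs root is a parameter so the function can also be exercised on a fake tree):

```shell
# Sketch: print "group N: <PCI address>" for every device in an IOMMU group.
list_iommu_groups() {
  root="${1:-/sys/kernel/iommu_groups}"
  for d in "$root"/*/devices/*; do
    [ -e "$d" ] || continue               # no groups found -> IOMMU not active
    g=${d#"$root"/}; g=${g%%/*}           # group number taken from the path
    echo "group $g: $(basename "$d")"
  done
}
list_iommu_groups
```

Devices in the same group can only be passed through together, so this listing matters when planning passthrough.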
Check that the previously added kernel modules loaded correctly
lsmod | grep vfio
Find all your devices (on enterprise-grade servers the list is pages long); use grep to filter the output
lspci -nnk
For NICs the command would be
lspci | grep Ethernet
For RAID controllers type
lspci | grep SAS
Now you should be able to pass PCI devices through to your VMs
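Attaching a device to a VM is then done with qm on the host. The VM ID and PCI address below are placeholders (assumptions), so this sketch only assembles and prints the command rather than running it:

```shell
# Hypothetical values: VM 100 and device 0000:01:00.0 are placeholders.
VMID=100
PCI_ADDR="0000:01:00.0"
QM_CMD="qm set $VMID -hostpci0 $PCI_ADDR"
echo "$QM_CMD"   # run this command on the PVE host (as root)
```

The same mapping can also be added in the web UI under the VM's Hardware tab.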
For more info, consult your hardware and CPU vendors' documentation and read the Proxmox documentation.
Final Warning ⚠️
I lost my settings during an upgrade. Make notes!
References
Documentation [1], wikis related to the use of IOMMU: PCI [2], PCIe [3], Disk [4], Tutorial [5], about motherboards [6]
Proxmox Documentation ↩︎
PCI Passthrough, getting started (GitHub) ↩︎
PCIe Passthrough ↩︎
Disk Passthrough ↩︎
Motherboard lists: can be found on the Xen wiki and on Wikipedia ↩︎