# VIKINGS wiki


# VM Best Practices in Proxmox VE

Recommended settings for creating new VMs on Proxmox VE 9.x with modern guest operating systems.

## TL;DR / Quick Reference

| Component | Setting |
| --- | --- |
| Machine | q35 |
| BIOS | OVMF (UEFI) |
| SCSI Controller | VirtIO SCSI single |
| Discard | ✅ enabled |
| SSD Emulation | ✅ enabled |
| IO Thread | ✅ enabled |
| Cache | Default (no cache) |
| Async IO (Ceph) | native |
| Skip Replication | ❌ disabled |
| Network | VirtIO |
| CPU Type | host (homogeneous) / x86-64-v3 or x86-64-v2-AES (mixed) :!: see remarks below |
| Allow KSM | ✅ enabled |

## Details

### Machine & BIOS

| Setting | Value |
| --- | --- |
| Machine Type | q35 |
| BIOS | OVMF (UEFI) |
  • q35 is the modern chipset model, enabling PCIe devices, Secure Boot and better hardware emulation
  • OVMF replaces the legacy SeaBIOS
  • Important: Set the BIOS type before installation — changing it afterwards will make the VM unbootable
  • Confirm the auto-created EFI disk when prompted
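As a sketch, the resulting entries in the VM's config file (`/etc/pve/qemu-server/<vmid>.conf`) look roughly like this; VMID 100 and the storage name `local-lvm` are placeholders:

```
machine: q35
bios: ovmf
efidisk0: local-lvm:vm-100-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
```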

### Disk

| Setting | Value | Notes |
| --- | --- | --- |
| Bus/Device | SCSI | |
| Controller | VirtIO SCSI single | Required for IO Thread support |
| Discard | ✅ enabled | Enables TRIM/UNMAP, allowing thin-provisioned storage (ZFS, LVM-thin, Ceph) to actually reclaim freed space |
| SSD Emulation | ✅ enabled | Signals to the guest OS that the device behaves like an SSD; especially useful for Windows guests |
| IO Thread | ✅ enabled | Better I/O performance |
| Cache | Default (no cache) | Safe default; writeback gives more performance but risks data loss on power failure |
| Skip Replication | ❌ disabled | Leave off; only relevant for ZFS-based Proxmox replication jobs, meaningless on Ceph |
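In the VM config these disk settings end up as lines like the following sketch; VMID 100, storage `local-lvm` and the 32G size are placeholders:

```
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-1,discard=on,ssd=1,iothread=1,size=32G
```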

### Async IO (for Ceph/RBD)

| Mode | Description | Recommendation |
| --- | --- | --- |
| threads | Legacy thread pool, synchronous syscalls | ❌ Outdated, worst performance |
| io_uring | Modern Linux interface, PVE default | ✅ Good for ZFS, NFS, thin-LVM |
| native | Linux AIO (libaio) with O_DIRECT | ✅ Best choice for Ceph/RBD, iSCSI, NVMe |

`aio=native` requires the disk cache to be set to Default (no cache). This is already the standard for Ceph, so no extra steps are needed.
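For an RBD-backed disk this might look like the following config line; the storage name `ceph-rbd`, VMID and size are examples, and cache is left at its default (no cache):

```
scsi0: ceph-rbd:vm-100-disk-1,aio=native,discard=on,iothread=1,size=32G
```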

### Network

| Setting | Value |
| --- | --- |
| Model | VirtIO (paravirtualized) |

VirtIO delivers the highest network performance of all available models.
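In the config file the NIC then appears as a line like this sketch; the MAC address and the bridge `vmbr0` are examples:

```
net0: virtio=BC:24:11:12:34:56,bridge=vmbr0,firewall=1
```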

### CPU

| CPU Type | Compatibility | Additional Flags |
| --- | --- | --- |
| x86-64-v2-AES | Intel ≥ Westmere, AMD ≥ Opteron_G4 (~2010) | +aes |
| x86-64-v3 | Intel ≥ Haswell, AMD ≥ EPYC Zen 3 (~2015) | +avx, +avx2, +bmi1, +bmi2, +fma, +xsave |
| x86-64-v4 | Intel ≥ Skylake-SP/Ice Lake, AMD ≥ EPYC Zen 4 (Genoa, ~2022) | +avx512f, +avx512bw, +avx512cd, +avx512dq, +avx512vl |
| host | Identical hardware only | All host CPU features (passed through directly) |
  • x86-64-v2-AES is the Proxmox default — safest for mixed clusters with different CPU generations
  • x86-64-v3 is recommended if all hosts support it (most CPUs ≥ 2015 do) — notable gains in compression, cryptography and modern workloads via AVX2
  • x86-64-v4 requires AVX-512 support — only available on AMD EPYC Zen 4 (Genoa, ~2022) or newer and Intel Skylake-SP/Icelake server CPUs; not supported on AMD EPYC Zen 3 (Milan/7R13) or most Intel consumer CPUs (Alder Lake and later have AVX-512 disabled)
  • host gives maximum performance by passing all host CPU flags directly to the VM, but live migration is only possible between nodes with identical CPU models and microcode versions — adding a node with a different CPU generation will break live migration for affected VMs
  • Optional: manual CPU flags can be added, provided all nodes in the cluster support these flags

Recommendation:

  • Fully homogeneous cluster (same CPU model on all nodes) → host for maximum performance
  • Homogeneous cluster, but migration compatibility desired or new nodes with different CPUs expected → x86-64-v3
  • Mixed cluster with different CPU generations → x86-64-v2-AES
  • Changing from host to x86-64-v3 is non-destructive and only requires a VM reboot
  • :!: Do mind the remarks about Windows VMs below

#### Windows VMs

  • Avoid host with Windows 11 / Windows Server 2025: known issue in PVE 8.4+ where host causes severe performance degradation (CPU pegged at 100%, I/O stalls) due to interactions with Windows core isolation and Virtualization-Based Security (VBS/HVCI) features; use x86-64-v3 instead
  • x86-64-v3 is the recommended CPU type for Windows VMs: stable, performant, and avoids the above issues
  • Windows internalizes CPU features at first boot: once a Windows VM has booted with a richer CPU type (e.g. host on a Zen 4 node), migrating it to a node with fewer CPU features can cause boot failures or crashes, because Windows kernel structures are built around the CPU flags seen at first boot
  • Backup/restore vs. live migration: changing the CPU type between host and a generic type (e.g. x86-64-v3) and restoring from backup works fine for Windows VMs; live migration across different CPU types does not
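The CPU type can be set or changed from the CLI, e.g. switching a VM from host to x86-64-v3; VMID 100 is a placeholder, and the change takes effect at the next full VM power cycle:

```
qm set 100 --cpu x86-64-v3
```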

### Memory

| Setting | Value |
| --- | --- |
| Allow KSM | ✅ enabled (default) |

KSM (Kernel Samepage Merging) allows the host kernel to deduplicate identical memory pages across VMs, saving RAM when running multiple similar guests.

  • Safe to leave enabled on self-managed clusters where you control all VMs
  • Disable for individual sensitive VMs if needed: `qm set <vmid> --allow-ksm 0`
  • Only disable globally on hosting environments with untrusted tenant VMs (side-channel risk)
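On hosts with untrusted tenants, KSM can also be switched off globally. This sketch disables the ksmtuned daemon on a PVE node and tells the kernel to unmerge pages that are already shared:

```
systemctl disable --now ksmtuned
echo 2 > /sys/kernel/mm/ksm/run   # 2 = stop KSM and unmerge all shared pages
```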

## See also

virtualisation/proxmox_ve/vms_cts/vm_best_practices_modern_setup.txt · Last modified: by thum