Recommended settings for creating new VMs on Proxmox VE 9.x with modern guest operating systems.
| Component | Setting |
|---|---|
| Machine | q35 |
| BIOS | OVMF (UEFI) |
| SCSI Controller | VirtIO SCSI single |
| Discard | ✅ |
| SSD Emulation | ✅ |
| IO Thread | ✅ |
| Cache | Default (no cache) |
| Async IO (Ceph) | native |
| Skip Replication | ❌ |
| Network | VirtIO |
| CPU Type | host (identical hardware) / x86-64-v3 (homogeneous) / x86-64-v2-AES (mixed) |
| Allow KSM | ✅ |
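Pulled together, the table above maps onto a single `qm create` call. This is only a sketch: the vmid (100), VM name, storage name (`local-lvm`), disk size, and bridge (`vmbr0`) are placeholders for your environment.

```shell
# Create a VM with the recommended defaults from the table above
qm create 100 \
  --name demo-vm \
  --machine q35 \
  --bios ovmf \
  --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=1 \
  --scsihw virtio-scsi-single \
  --scsi0 local-lvm:32,discard=on,ssd=1,iothread=1 \
  --net0 virtio,bridge=vmbr0 \
  --cpu x86-64-v2-AES \
  --memory 4096 --cores 2
```

The `--efidisk0` line allocates the small EFI vars disk that OVMF needs; `pre-enrolled-keys=1` ships the Microsoft Secure Boot keys, which most modern guests expect.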
| Setting | Value |
|---|---|
| Machine Type | q35 |
| BIOS | OVMF (UEFI) |
q35 is the modern chipset model, enabling PCIe devices, Secure Boot and better hardware emulation. OVMF replaces the legacy SeaBIOS.

| Setting | Value | Notes |
|---|---|---|
| Bus/Device | SCSI | |
| Controller | VirtIO SCSI single | Required for IO Thread support |
| Discard | ✅ enabled | Enables TRIM/UNMAP — allows thin-provisioned storage (ZFS, LVM-thin, Ceph) to actually reclaim free space |
| SSD Emulation | ✅ enabled | Signals to the guest OS that the device behaves like an SSD; especially useful for Windows guests |
| IO Thread | ✅ enabled | Better I/O performance |
| Cache | Default (no cache) | Safe default. Write back gives more performance but risks data loss on power failure |
| Skip Replication | ❌ disabled | Leave off — only relevant for ZFS-based Proxmox replication jobs, meaningless on Ceph |
| Mode | Description | Recommendation |
|---|---|---|
| threads | Legacy thread pool, synchronous syscalls | ❌ Outdated, worst performance |
| io_uring | Modern Linux interface, PVE default | ✅ Good for ZFS, NFS, thin-LVM |
| native | Linux native AIO (libaio), requires O_DIRECT | ✅ Best choice for Ceph/RBD, iSCSI, NVMe |
aio=native requires disk cache to be set to Default (no cache).
This is already the standard for Ceph, so no extra steps needed.
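For an existing VM the async IO mode is set per disk. A hedged example (the vmid 100, the storage name `ceph-pool`, and the volume name are placeholders; adjust to your actual disk line from `qm config <vmid>`):

```shell
# aio=native requires cache=none, which is the "Default (no cache)" setting
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,aio=native,cache=none,discard=on,ssd=1,iothread=1
```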
| Setting | Value |
|---|---|
| Model | VirtIO (paravirtualized) |
VirtIO delivers the highest network performance of all available models.
| CPU Type | Compatibility | Additional Flags |
|---|---|---|
| x86-64-v2-AES | Intel ≥ Westmere, AMD ≥ Opteron_G4 (~2010) | +aes |
| x86-64-v3 | Intel ≥ Haswell, AMD ≥ EPYC Zen 3 (~2015) | +avx, +avx2, +bmi1, +bmi2, +fma, +xsave |
| x86-64-v4 | Intel ≥ Skylake-SP/Icelake, AMD ≥ EPYC Zen 4 / Genoa (~2022) | +avx512f, +avx512bw, +avx512cd, +avx512dq, +avx512vl |
| host | Identical hardware only | All host CPU features (passed through directly) |
x86-64-v2-AES is the Proxmox default — safest for mixed clusters with different CPU generations.

x86-64-v3 is recommended if all hosts support it (most CPUs ≥ 2015 do) — notable gains in compression, cryptography and modern workloads via AVX2.

x86-64-v4 requires AVX-512 support — only available on AMD EPYC Zen 4 (Genoa, ~2022) or newer and Intel Skylake-SP/Icelake server CPUs; not supported on AMD EPYC Zen 3 (Milan/7R13) or most Intel consumer CPUs (Alder Lake and later have AVX-512 disabled).

host gives maximum performance by passing all host CPU flags directly to the VM, but live migration is only possible between nodes with identical CPU models and microcode versions — adding a node with a different CPU generation will break live migration for affected VMs.

* Identical CPUs on all nodes and no mixed hardware planned → host for maximum performance
* Homogeneous cluster, migration compatibility desired, or new nodes with different CPUs expected → x86-64-v3
* Mixed cluster with different CPU generations → x86-64-v2-AES
* Changing from host to x86-64-v3 is non-destructive and only requires a VM reboot
* host with Windows 11 / Windows Server 2025: Known issue in PVE 8.4+ where host causes severe performance degradation (CPU pegged at 100%, I/O stalls) due to interactions with Windows core isolation and Virtualization-Based Security (VBS/HVCI) features — use x86-64-v3 instead [web:1][web:9]
* x86-64-v3 is the recommended CPU type for Windows VMs — stable, performant, and avoids the above issues [web:9]
* Windows internalizes CPU features at first boot: Once a Windows VM has booted with a richer CPU type (e.g. host on a Zen 4 node), migrating it to a node with fewer CPU features can cause boot failures or crashes — Windows kernel structures are built around the CPU flags seen at first boot [web:54]
* **Backup/restore vs. live migration:** Changing CPU type between host and a generic type (e.g. x86-64-v3) and restoring from backup works fine for Windows VMs; live migration across different CPU types does not [web:30]

| Setting | Value |
|---|---|
| Allow KSM | ✅ enabled (default) |
KSM (Kernel Samepage Merging) allows the host kernel to deduplicate identical memory pages across VMs, saving RAM when running multiple similar guests.
To opt a single VM out of KSM: `qm set <vmid> --allow-ksm 0`
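To see whether KSM is actually saving memory on a node, the kernel exposes counters in sysfs. The values are page counts (4 KiB pages on standard amd64 kernels), so the rough RAM saving is `pages_sharing * 4 KiB`:

```shell
# Number of deduplicated pages currently shared across guests
cat /sys/kernel/mm/ksm/pages_sharing
# Rough RAM saved in MiB (assumes 4 KiB pages)
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4096 / 1024 / 1024 )) MiB
```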