ASUS RS500A-E11-RS12U Tuning
BIOS/UEFI
In most cases the BIOS “Optimised Defaults” provide a good baseline and may work straight away without additional tuning, except that RAM speed often needs to be set manually.
BIOS/UEFI baseline settings
- F5 to load optimised defaults
- Disable IPv6 on the shared LAN for the BMC
- IOMMU: Enabled
- RAM speed: set manually
- cTDP Control: Auto
- Package Power Limit Control: Auto
RAM configuration and speed
Set RAM speed via: Advanced → AMD CBS → UMC Common Options → DDR4 RAM Common Options → DRAM Timing Configuration
Further optimisations (optional, YMMV)
PCIe Link Training Type (2 Step)
Chipset → PCIe Link Training Type → 2 Step
This is the recommended workaround for EPYC NVMe AER errors (BadTLP/DLLP), as “1 Step” training is often unstable on Gen4/Gen5 links and causes training failures.
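Whether a link is currently logging these errors can be checked from the running host; a rough sketch (reading `dmesg` may require root, in which case the count simply comes back as 0):

```shell
# Scan the kernel log for PCIe AER noise (BadTLP/BadDLLP) from NVMe links.
# No matches means no AER errors have been logged since boot.
aer_lines=$(dmesg 2>/dev/null | grep -ciE 'AER|BadTLP|BadDLLP' || true)
echo "AER-related log lines: ${aer_lines:-0}"
```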
Global C‑States (Disabled)
Advanced → AMD CBS → CPU Common Options → Global C‑State Control → Disabled
Disabling Global C‑States reduces VM wake‑up latency and is beneficial for responsive virtual machines.
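To see which idle states the kernel still exposes after the change, a sketch reading cpu0's cpuidle entries (with Global C‑States disabled, the deeper states should no longer appear):

```shell
# List the CPU idle states the kernel currently exposes for cpu0.
states=$(grep -h . /sys/devices/system/cpu/cpu0/cpuidle/state*/name 2>/dev/null || true)
echo "cpu0 idle states: ${states:-none found}"
```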
DF C‑States (Disabled only if needed)
Advanced → AMD CBS → NBIO Common Options → SMU Common Options → DF C‑States → Disabled
Only disable DF C‑States if you observe noticeable tail latency, jitter, RDP/audio/VDI issues, highly bursty I/O, or clear performance problems in tests. For standard Proxmox VM and CT hosts without strict real‑time requirements, Auto is the more sensible setting.
cTDP (Determinism control)
Advanced → AMD CBS → NBIO Common Options → SMU Common Options
- Determinism Control: Manual
- Determinism Slider: Performance
- cTDP Control: Manual
- cTDP: 280
Use this only if you intentionally want to run the CPU at its maximum TDP (280 W); otherwise, leave at default or Auto.
IOMMU (Enabled for passthrough)
Advanced → AMD CBS → NBIO Common Options → IOMMU: Enabled
The default is usually Auto. Set IOMMU to Enabled (not Auto) in the BIOS for reliable PCI passthrough in Proxmox on AMD EPYC.
Why “Enabled”:
- Auto may fail (BIOS sometimes does not detect IOMMU properly), resulting in “No IOMMU detected”.
- Enabled explicitly turns on AMD‑IOMMU and, combined with kernel parameters such as `amd_iommu=on iommu=pt pcie_acs_override=downstream`, improves group stability.
In Proxmox this is required for GPU/NIC passthrough; verify with: `dmesg | grep IOMMU` (you should see “AMD‑Vi: AMD IOMMUv2 functionality”).
For your RS500A‑E11, use Enabled in the BIOS plus the above kernel parameters, and verify by listing the groups under `/sys/kernel/iommu_groups/`.
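The verification steps can be sketched as follows, assuming a stock Proxmox install booting via GRUB (the GRUB path is an assumption; systemd‑boot installs differ):

```shell
# 1. Add the parameters to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub,
#    e.g. GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
# 2. Run `update-grub` and reboot.
# 3. List the IOMMU groups -- empty output means IOMMU is not active:
found=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue
    group=${dev#/sys/kernel/iommu_groups/}
    echo "group ${group%%/*}: $(basename "$dev")"
    found=1
done
[ "$found" -eq 1 ] || echo "no IOMMU groups found"
```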
Above 4G Decoding (Enabled or Disabled)
Due to AER errors with NVMe drives, it is recommended to try disabling this setting first. As you do not use GPUs, there is no benefit to enabling it anyway.
Advanced → PCI Subsystem Settings → Above 4G Decoding
Enabling this allows 64‑bit PCIe devices (GPUs, HBAs, NICs) to map memory above 4 GB, which is essential with more than 32 GB of RAM, VFIO passthrough, or Resizable BAR. Without it, devices are limited to legacy 32‑bit mappings and may cause crashes in VMs. For Proxmox and passthrough setups, enable it only in combination with IOMMU.
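To see which devices actually request 64‑bit prefetchable BARs (and would therefore be affected by Above 4G Decoding), a rough check with `lspci`:

```shell
# Count PCI BARs mapped as 64-bit prefetchable; large BARs placed above
# 4 GB depend on "Above 4G Decoding" being enabled in the BIOS.
bars=$(lspci -vv 2>/dev/null | grep -c '64-bit, prefetchable' || true)
echo "64-bit prefetchable BARs: ${bars:-0}"
```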
SMT Control (Enabled)
Default: Auto
Advanced → CPU Common Options → Performance → SMT Control → Enabled
Simultaneous Multithreading (SMT) increases vCPU density for VMs. Only disable it for very specific low‑latency workloads, such as certain NFV or trading use cases.
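Whether SMT is active can be read back from the running kernel, e.g.:

```shell
# Read the kernel's view of SMT; "on" means sibling threads are active.
# Falls back to "unknown" on kernels without the sysfs knob.
smt_state=$(cat /sys/devices/system/cpu/smt/control 2>/dev/null || echo unknown)
echo "SMT control: $smt_state"
```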
BIOS settings already optimal in defaults
Trusted Computing (Disabled)
Advanced → Trusted Computing → Security Device Support → Disabled
This aligns with the “Optimised Defaults” and is recommended for standard virtualisation hosts.
DRAM power‑down (Disabled by default)
Disabling DRAM power‑down is not generally recommended unless you require the last fraction of performance and accept higher power consumption and heat.
Advanced → AMD CBS → UMC Common Options → DDR4 RAM Common Options → DRAM Controller Configuration → DRAM Power Options → Power Down Enable
Proxmox community guidance generally advises against disabling DRAM power‑down; the focus is on C‑states and power‑supply idle settings instead. DRAM power management typically saves 2–5 W per DIMM at idle while remaining stable with ECC RAM.
Disabling it is usually only justified for latency‑critical workloads (e.g. HPC), where the performance gain is marginal but power/heat increase is noticeable.
Disabling BMC on shared LAN
The BMC should only listen on the dedicated BMC port.
- Best solution: Disable “Shared LAN” via the front‑panel jumper.
- Then: Server Mgmt → BMC Network Configuration → Shared LAN → Disable IPv6 support for Shared LAN.
- Secondary option: Disable Shared LAN in UEFI/BIOS:
- Server Mgmt → BMC Network Configuration → Shared LAN → Static
Set an unused IP (e.g. Station IP Address: 10.10.10.10, Subnet Mask: 255.255.255.255)
- Set “Configure IPv6 support” → “IPv6 Support” for Shared LAN → Disabled
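The same parking of the shared channel can be sketched from the OS side with `ipmitool`; the channel number 1 is an assumption and should be checked against your board's channel mapping first:

```shell
# Hypothetical sketch: set the shared-LAN channel to a static, unused
# address via ipmitool (channel 1 is an assumption -- verify with
# `ipmitool lan print` first).
if command -v ipmitool >/dev/null 2>&1; then
    ipmitool lan set 1 ipsrc static          # stop DHCP on the shared channel
    ipmitool lan set 1 ipaddr 10.10.10.10    # park it on an unused address
    ipmitool lan set 1 netmask 255.255.255.255
    ipmitool lan print 1
else
    echo "ipmitool not installed"
fi
bmc_done=1
```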
SVM Mode (Enabled)
Advanced → CPU Configuration → SVM Mode: Enabled
This is required for virtualisation and is already enabled by the optimised defaults.
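A quick sanity check from the running host, counting logical CPUs that advertise the `svm` flag (0 means SVM is disabled or the host is not AMD):

```shell
# Count logical CPUs advertising the AMD-V (svm) CPU flag.
svm_cpus=$(grep -cw svm /proc/cpuinfo 2>/dev/null || true)
echo "CPUs with svm flag: ${svm_cpus:-0}"
```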
CPPC (Auto)
Advanced → AMD CBS → NBIO Common Options → SMU Common Options → CPPC: Auto
Using Auto for CPPC (Collaborative Processor Performance Control) is usually optimal because it adjusts automatically based on the operating system (e.g. Proxmox/Linux via `amd-pstate`). This setting works well with VM scheduling and power management.
Only set it to Enabled if you want manual tuning (e.g. booting with `amd_pstate=active`) or if you encounter specific issues; check the scaling driver with `cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_driver` (expect `amd-pstate`). For KVM/QEMU‑oriented workloads, leave it at Auto and test with `pveperf`; switch to Enabled only if necessary.
SR‑IOV Support (Enabled)
Advanced → PCI Subsystem Settings → SR‑IOV Support: Enabled
This is relevant for network cards (NICs) that expose virtual functions (VFs), which can then be passed through to VMs.
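Once SR‑IOV is enabled in the BIOS and supported by the NIC driver, VFs can be created via sysfs; `eth0` and the VF count below are placeholders for your actual interface and requirements:

```shell
# "eth0" and the VF count are assumptions -- substitute your NIC's name
# (check with: lspci | grep -i ethernet, or: ip link).
NIC=eth0
vfs_file=/sys/class/net/$NIC/device/sriov_totalvfs
if [ -r "$vfs_file" ]; then
    echo "$NIC supports $(cat "$vfs_file") VFs"
    echo 4 > "/sys/class/net/$NIC/device/sriov_numvfs"   # create 4 VFs
else
    echo "$NIC does not expose SR-IOV (or the interface name differs)"
fi
```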
PCIe ARI Support (Auto)
Advanced → AMD CBS → NBIO Common Options → PCIe ARI → Auto
Using Auto for PCIe ARI is typically okay and often optimal, as the firmware enables it automatically for compatible devices (e.g. GPUs/NICs). Only force Enabled if you encounter group‑related IOMMU‑VFIO problems; otherwise Auto is sufficient and carries no disadvantages.
NUMA and RAM behaviour under Proxmox
NUMA Nodes per Socket: All
Proxmox utilises NUMA effectively and can expose vNUMA to VMs when configured appropriately.
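To confirm how many NUMA nodes the host actually exposes, a sketch using `numactl` (an assumed extra package; on Debian/Proxmox: `apt install numactl`) with a sysfs fallback:

```shell
# Show the NUMA topology as the kernel sees it.
if command -v numactl >/dev/null 2>&1; then
    numactl --hardware
else
    nodes=$(ls -d /sys/devices/system/node/node* 2>/dev/null | wc -l)
    echo "NUMA nodes visible to the kernel: $nodes"
fi
numa_checked=1
```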
RAM‑tuning recommendations
- DDR4 Frequency: 3200 MHz (or the highest stable speed across all 8 memory channels).
- Memory interleaving: Channel interleaving enabled.
- Rank interleaving: Enabled (boosts memory bandwidth for VM workloads).
- Power Down Enable: Disabled (to avoid latency spikes caused by DRAM power‑saving states).
PCIe/NIC settings for passthrough and storage
- PCIe ASPM Support: Disabled
This is already the default since BIOS version 1301 and is well suited for passthrough of NICs/GPUs in Proxmox.
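The effective ASPM policy can be read back from the running kernel; the active policy is shown in square brackets:

```shell
# Print the kernel's current ASPM policy, e.g. "[default] performance powersave".
cat /sys/module/pcie_aspm/parameters/policy 2>/dev/null || echo "pcie_aspm parameters not exposed"
aspm_checked=1
```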
Hardware Optimisations
./.
