Known Bugs and Quirks in Proxmox VE ≥ 9
Changes Proxmox VE 8 to Proxmox VE 9
Please note the official upgrade guide: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
Proxmox VE 9: VM RAM Usage Displayed Higher Than Actual
Background
- Official documentation: Proxmox Upgrade 8 to 9 – VM Memory Consumption Shown is Higher
- Confirmed in forum: Memory usage wrong on FreeBSD guests – 9.0.3
What Changed in Proxmox 9
In PVE 9, VMs that do not report memory usage via the ballooning mechanism — as well as *BSD VMs due to an incomplete balloon driver implementation — now display the host-side RAM allocation rather than the actual usage reported by the guest.
This is a deliberate change from PVE 8, where the host-side RSS of the kvm process was shown instead.
Clarification: FreeBSD is a Special Case
The issue for FreeBSD is not that the balloon driver may be missing. Even if the vtballoon driver is present and active (verifiable via sysctl dev.vtballoon), the problem is more specific:
The vtballoon driver does not implement VIRTIO_BALLOON_F_STATS_VQ — the virtio feature that reports guest memory usage statistics back to the hypervisor. The driver can respond to balloon inflate/deflate requests from the host, but it never actively reports memory usage data.
This is an open bug in FreeBSD:
- FreeBSD Bug #292570 — “virtio-balloon does not balloon correctly” (still open as of 2026)
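A rough way to see whether a guest actually reports memory statistics is the `ballooninfo` section of `qm status <vmid> --verbose` on the Proxmox node: guests that negotiate VIRTIO_BALLOON_F_STATS_VQ populate fields such as `free_mem` and `total_mem`, while FreeBSD guests typically expose only the balloon size itself. The sketch below filters a hand-written sample; the field names and figures are illustrative assumptions, not captured output — on a real node you would pipe `qm status 100 --verbose` into the same filter.

```shell
# Illustrative ballooninfo sample (not real output); a FreeBSD guest
# usually yields no free_mem/total_mem lines, so the fallback fires.
ballooninfo_sample='ballooninfo:
	actual: 8589934592
	max_mem: 8589934592'

printf '%s\n' "$ballooninfo_sample" \
  | grep -E 'free_mem|total_mem' \
  || echo "no guest memory stats reported (STATS_VQ missing)"
```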
Windows
This differs from Windows without the VirtIO BalloonService, where the driver is entirely absent. On FreeBSD, the driver exists but is functionally incomplete with respect to what Proxmox VE expects.
Affected VM Types
| Guest OS | Reason | Driver Status |
|---|---|---|
| FreeBSD / OPNsense / pfSense | VIRTIO_BALLOON_F_STATS_VQ not implemented | Driver present, stats reporting missing |
| Windows (without VirtIO BalloonService) | Balloon driver not installed | Driver absent |
| Any VM with balloon: 0 | Ballooning explicitly disabled | N/A |
Practical Impact
- No impact on stability or VM operation
- RAM usage in the Proxmox VE GUI and API is misleading for affected VMs
- Dynamic memory reclamation (balloon deflation) by the host does not work reliably for FreeBSD guests
- Risk of unexpected OOM conditions on the host if memory pressure is high and FreeBSD VMs cannot be deflated
What to do?
- Disable ballooning for FreeBSD VMs (since it does not function correctly anyway): qm set <vmid> --balloon 0
- Use proxmox-rmem — a community tool that reads guest-reported memory and surfaces it correctly: proxmox-rmem on GitHub
- Monitor via external tooling such as LibreNMS, Prometheus + node_exporter or Zabbix, deployed inside the FreeBSD VM
- Read actual guest memory usage via the QEMU Guest Agent (see next chapter):
FreeBSD VM Memory Analysis via QEMU Guest Agent
Background
When running FreeBSD as a Proxmox/QEMU guest with virtio-balloon enabled, the RAM usage reported in the Proxmox VE GUI is often misleading. This chapter documents how to query actual memory usage directly from the guest using qm guest exec.
Querying Memory Stats
Run the following on the Proxmox node where the VM resides. As an example we use an OPNsense 14.3 VM (VM ID 100) on Proxmox VE 9.1.6 (node 'n3'):
qm guest exec 100 -- sh -c "sysctl -n \
  vm.stats.vm.v_page_count \
  vm.stats.vm.v_active_count \
  vm.stats.vm.v_inactive_count \
  vm.stats.vm.v_wire_count \
  vm.stats.vm.v_free_count \
  vm.stats.vm.v_cache_count \
  vm.stats.vm.v_page_size"
Important: Run this command on the correct node. If the VM lives on node n3, you must be logged into n3. Otherwise you will get:
Configuration file 'nodes/n3/qemu-server/100.conf' does not exist
Example Output & Interpretation
Example raw output (out-data):
2018895\n15672\n64009\n123634\n1812399\n0\n4096
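The newline-separated out-data can be converted to MB with a small awk sketch. It uses the sample figures above; the last field is v_page_size (4096 bytes), so pages × 4096 / 1024² simplifies to pages / 256:

```shell
# Convert the sample out-data (one sysctl value per line) into MB figures.
printf '2018895\n15672\n64009\n123634\n1812399\n0\n4096\n' | awk '
  { v[NR] = $1 }
  END {
    ps = v[7]  # v_page_size in bytes
    printf "total_mb=%.0f\n",    v[1] * ps / 1048576
    printf "active_mb=%.0f\n",   v[2] * ps / 1048576
    printf "inactive_mb=%.0f\n", v[3] * ps / 1048576
    printf "wired_mb=%.0f\n",    v[4] * ps / 1048576
    printf "free_mb=%.0f\n",     v[5] * ps / 1048576
  }'
```

The printed values (7886, 61, 250, 483, 7080) match the MB column of the table below.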
| sysctl Key | Pages | MB | % | Notes |
|---|---|---|---|---|
| v_page_count (Total) | 2,018,895 | 7,886 MB | 100% | Total RAM assigned to VM |
| v_active_count | 15,672 | 61 MB | 0.8% | Currently in active use |
| v_inactive_count | 64,009 | 250 MB | 3.2% | Reclaimable, not active |
| v_wire_count | 123,634 | 483 MB | 6.1% | Kernel/wired, non-reclaimable |
| v_free_count | 1,812,399 | 7,080 MB | 89.8% | Truly free pages |
| v_cache_count | 0 | 0 MB | 0% | ⚠️ See note below |
Calculation formulas:
- Total MB = v_page_count × v_page_size / 1024²
- Effectively used MB = (v_active_count + v_wire_count) × v_page_size / 1024² → ~544 MB (6.9%)
- Available MB = (v_free_count + v_inactive_count + v_cache_count) × v_page_size / 1024² → ~7,330 MB (92.9%)
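The formulas above can be verified against the sample page counts with a one-shot awk computation (sample figures from this page; page size 4096 bytes):

```shell
# Recompute "effectively used" and "available" from the sample page counts.
awk 'BEGIN {
  ps = 4096; mb = 1048576
  total = 2018895; active = 15672; inactive = 64009
  wired = 123634;  free_p = 1812399; cache = 0
  used  = active + wired            # pages in active use or wired by kernel
  avail = free_p + inactive + cache # pages reclaimable or truly free
  printf "used_mb=%.0f (%.1f%%)\n",  used  * ps / mb, 100 * used  / total
  printf "avail_mb=%.0f (%.1f%%)\n", avail * ps / mb, 100 * avail / total
}'
```

This reproduces the ~544 MB (6.9%) used and ~7,330 MB (92.9%) available stated above.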
The Ballooning Problem
v_cache_count = 0
Normally FreeBSD fills the page cache aggressively. A value of 0 for v_cache_count indicates that the virtio-balloon driver is not properly returning unused pages to the hypervisor. This is a known FreeBSD bug (FreeBSD Bug #277473).
Why Proxmox Shows Wrong RAM Usage
Proxmox derives the displayed RAM usage from the balloon driver (virtio-balloon), not from the guest agent. The balloon driver reports how much RAM the VM currently “holds” (the actual balloon value), not real consumption. As a result, Proxmox can display RAM usage many times higher than what the guest actually needs.
Summary
| Metric | Value |
|---|---|
| Effectively used RAM | ~544 MB |
| Proxmox-reported RAM | ~7.7 GB |
| Discrepancy factor | ~14× |
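The discrepancy factor can be sanity-checked from the figures above (~7.7 GB reported by Proxmox versus ~544 MB effectively used):

```shell
# ~7.7 GB reported / ~544 MB effectively used, consistent with the ~14x above.
awk 'BEGIN { printf "factor=%.1f\n", (7.7 * 1024) / 544 }'
```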
