Please note the official upgrade documentation: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
In PVE 9, VMs that do not report memory usage via the ballooning mechanism — as well as *BSD VMs due to an incomplete balloon driver implementation — now display the host-side RAM allocation rather than the actual usage reported by the guest.
This is a deliberate change from PVE 8, where the host-side RSS of the kvm process was shown instead.
The issue on FreeBSD is not that the balloon driver is missing. Even when the vtballoon driver is present and active (verifiable via `sysctl dev.vtballoon`), the problem is more specific:
The vtballoon driver does not implement `VIRTIO_BALLOON_F_STATS_VQ`, the virtio feature that sends memory usage statistics back to the hypervisor. The driver responds to balloon inflate/deflate requests from the host, but never actively reports memory usage data.
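You can check from inside a FreeBSD guest whether the driver is attached at all; a minimal sketch (the `dev.vtballoon.0.%desc` OID is an assumption based on FreeBSD's usual per-device sysctl naming, and only exists on FreeBSD, so the fallback branch runs anywhere else):

```sh
#!/bin/sh
# Probe for an attached vtballoon device (FreeBSD-only sysctl OIDs).
# On a non-FreeBSD system, or if the driver is absent, the lookup
# fails and the else branch runs instead.
if sysctl dev.vtballoon.0.%desc >/dev/null 2>&1; then
    echo "vtballoon driver attached:"
    sysctl dev.vtballoon.0.%desc
else
    echo "no vtballoon device found (not FreeBSD, or driver missing)"
fi
```

Even when this reports the driver as attached, the stats virtqueue is still missing, which is exactly the gap described above.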
This is tracked as an open FreeBSD bug (FreeBSD Bug #277473).
This differs from Windows without the VirtIO BalloonService, where the driver is entirely absent. On FreeBSD, the driver exists but is functionally incomplete with respect to what Proxmox VE expects.
| Guest OS | Reason | Driver Status |
|---|---|---|
| FreeBSD / OPNsense / pfSense | VIRTIO_BALLOON_F_STATS_VQ not implemented | Driver present, stats reporting missing |
| Windows (without VirtIO BalloonService) | Balloon driver not installed | Driver absent |
| Any VM with balloon: 0 | Ballooning explicitly disabled | N/A |
When running FreeBSD as a Proxmox/QEMU guest with virtio-balloon enabled, the RAM usage reported in the Proxmox VE GUI is often misleading. This note documents how to query actual memory usage directly from the guest using qm guest exec.
Run the following on the Proxmox node where the VM resides. As an example we use an OPNsense 14.3 VM (VM ID 100) on Proxmox VE 9.1.6 (node 'n3'):
```sh
qm guest exec 100 -- sh -c "sysctl -n \
  vm.stats.vm.v_page_count \
  vm.stats.vm.v_active_count \
  vm.stats.vm.v_inactive_count \
  vm.stats.vm.v_wire_count \
  vm.stats.vm.v_free_count \
  vm.stats.vm.v_cache_count \
  vm.stats.vm.v_page_size"
```
Important: Run this command on the node that hosts the VM. If the VM lives on node n3, you must be logged into n3; otherwise you will get an error such as:

```
Configuration file 'nodes/n3/qemu-server/100.conf' does not exist
```
Example raw output (out-data), newline-separated:

```
2018895
15672
64009
123634
1812399
0
4096
```
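The raw out-data can be converted into the MB figures below with a small awk script; a local sketch using the sample values above, assuming the field order of the sysctl command (page_count, active, inactive, wire, free, cache, page_size):

```sh
#!/bin/sh
# Sample out-data from the example above, one value per line.
raw='2018895
15672
64009
123634
1812399
0
4096'

printf '%s\n' "$raw" | awk '
  { v[NR] = $1 }
  END {
    mb = v[7] / 1048576                  # MB per page (page_size / 1024^2)
    total = v[1] * mb
    used  = (v[2] + v[4]) * mb           # active + wired
    avail = (v[3] + v[5] + v[6]) * mb    # inactive + free + cache
    printf "total: %.0f MB\n", total
    printf "used:  %.0f MB (%.1f%%)\n", used,  100 * used  / total
    printf "avail: %.0f MB (%.1f%%)\n", avail, 100 * avail / total
  }'
```

For the sample data this prints total 7886 MB, used 544 MB (6.9%), and avail 7330 MB (92.9%), matching the table below.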
| sysctl Key | Pages | MB | % | Notes |
|---|---|---|---|---|
| v_page_count (Total) | 2,018,895 | 7,886 MB | 100% | Total RAM assigned to VM |
| v_active_count | 15,672 | 61 MB | 0.8% | Currently in active use |
| v_inactive_count | 64,009 | 250 MB | 3.2% | Reclaimable, not active |
| v_wire_count | 123,634 | 483 MB | 6.1% | Kernel/wired, non-reclaimable |
| v_free_count | 1,812,399 | 7,080 MB | 89.8% | Truly free pages |
| v_cache_count | 0 | 0 MB | 0% | ⚠️ See note below |
Calculation formulas:
- Total: v_page_count × v_page_size / 1024² → ~7,886 MB
- Used (active + wired): (v_active_count + v_wire_count) × v_page_size / 1024² → ~544 MB (6.9%)
- Available: (v_free_count + v_inactive_count + v_cache_count) × v_page_size / 1024² → ~7,330 MB (92.9%)
Normally FreeBSD fills the page cache aggressively. Note that v_cache_count reports 0 on modern FreeBSD releases, where the separate cache queue was merged into the inactive queue, so a zero here is not by itself evidence of misbehavior. The missing stats reporting itself is the known FreeBSD bug (FreeBSD Bug #277473).
Proxmox derives the displayed RAM usage from the balloon driver (virtio-balloon), not from the guest agent. Without stats reporting, the balloon driver only conveys how much RAM the VM currently "holds" (the balloon value), not real consumption. As a result, Proxmox can display RAM usage many times higher than what the guest actually needs.
| Metric | Value |
|---|---|
| Effectively used RAM | ~544 MB |
| Proxmox-reported RAM | ~7.7 GB |
| Discrepancy factor | ~14× |
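The discrepancy factor follows directly from the two figures in the table (~7,886 MB reported vs. ~544 MB effectively used); a one-line check:

```sh
#!/bin/sh
# Ratio of Proxmox-reported RAM (total allocation, ~7886 MB)
# to effectively used RAM (active + wired, ~544 MB).
# POSIX shell arithmetic is integer-only, so awk supplies the decimal.
awk 'BEGIN { printf "factor: ~%.1fx\n", 7886 / 544 }'
```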