Introduction
I started running Proxmox a little over a year ago.
Since day one, it’s been rock solid — stable, flexible, and fun to automate.
I migrated my lab and a few production-like workloads from VMware, and for months, everything ran perfectly.
Until it didn’t.
Backups started taking forever.
Some Ubuntu VMs hung after vzdump.
And the I/O pressure graphs looked like a cardiogram from a horror movie.
Everything still worked, but it wasn’t the smooth Proxmox experience I was used to.
That’s when I realized the problem wasn’t Proxmox — it was my migrated VMware VMs still carrying around legacy drivers and suboptimal configs.
So, I decided to fix them.
What followed was a weekend of tuning, scripting, and testing — and in the end, my cluster finally started to work reliably.
Why the Problems Started
VMware and Proxmox both run virtual machines, but they use very different hardware abstractions.
VMware loves vmxnet3 for network and LSI Logic for storage controllers.
Proxmox, on the other hand, is built for VirtIO — fast, paravirtualized drivers that talk directly to the hypervisor with minimal overhead.
When I migrated my VMs from VMware, they kept their old hardware settings.
That’s like moving to a new city but still driving on the wrong side of the road.
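You can spot the leftovers straight from the host. A quick check against a hypothetical imported VM (VMID 101, placeholder MAC) might show something like:

```bash
# Inspect the imported VM's hardware models (hypothetical VMID and output)
qm config 101 | grep -E 'net0|scsihw'
# net0: vmxnet3=BC:24:11:2E:6F:01,bridge=vmbr0
# scsihw: lsi
```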
The result:
- High I/O wait times
- Backup stalls
- Odd CPU scheduling
- One VM that refused to shut down unless I begged it
The First Fix: QEMU Guest Agent
The first major issue was that backups froze.
Turns out, none of my imported Ubuntu VMs had the QEMU Guest Agent installed.
Without it, Proxmox can’t freeze the filesystem or shut down a guest cleanly.
A quick fix inside each VM:
```bash
sudo apt install qemu-guest-agent
sudo systemctl start qemu-guest-agent
```
Then, on the Proxmox host:
```bash
qm set <VMID> --agent enabled=1
```
That alone solved half of my headaches, and backups stopped hanging.
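One detail worth knowing: enabling the agent attaches a new virtio-serial device, so the VM needs a full stop and start (not just a reboot) before it appears. Once the VM is back up, you can verify the agent responds from the host:

```bash
# Succeeds quietly when the guest agent is reachable
qm agent <VMID> ping
```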
The Hardware Tune-Up
Next up: replacing the VMware hardware relics.
Each VM got a proper VirtIO upgrade:
- Disk controller: `lsi` → `virtio-scsi-single`
- NIC: `vmxnet3` → `virtio`
- Disk cache: `writeback`
- I/O threads: enabled (`iothread=1`)
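On the host, the conversion boils down to a few `qm set` calls. A sketch, assuming VMID 101, a disk on `local-lvm` attached as `scsi0`, and a bridge `vmbr0` (the MAC is a placeholder; keep your VM's original one):

```bash
# Controller: swap LSI for VirtIO SCSI with a dedicated controller per disk
qm set 101 --scsihw virtio-scsi-single

# Disk: reattach with write-back cache and a dedicated I/O thread
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback,iothread=1

# NIC: replace vmxnet3 with VirtIO, keeping the VM's original MAC
qm set 101 --net0 virtio=BC:24:11:2E:6F:01,bridge=vmbr0
```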
After applying the changes, the hardware list turned orange (pending reboot), but once I restarted each VM — all white again, and noticeably faster.
CPU and Memory Tuning
During migration, VMware’s export had left my VMs configured with two sockets, one core each.
That’s technically fine, but Linux schedules tasks more efficiently with a single socket and multiple cores.
Here’s the fix:
```bash
qm set <VMID> --sockets 1 --cores 2 --cpu host
```
Then I disabled ballooning, since my workloads have stable memory footprints:
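```bash
# Pin the VM's memory; no balloon-driver reclaim under host pressure
qm set <VMID> --balloon 0
```

I also reduced swappiness on the host: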
echo "vm.swappiness=10" >> /etc/sysctl.conf
sysctl -p
No more swap usage while there’s plenty of free memory — one of those quiet optimizations that makes everything more predictable.
Storage and I/O Optimization
Backups are I/O-heavy by nature.
To make them faster and more consistent, I tuned each VM disk:
```bash
qm set <VMID> --virtio0 <storage>:vm-<VMID>-disk-0,cache=writeback,iothread=1
```
This enables write-back caching and assigns a separate thread to handle I/O, reducing contention during backup operations.
On the QNAP side, I optimized my NFS mounts for performance:
```
rw,async,noatime,nolock,vers=3,tcp,rsize=1048576,wsize=1048576
```
Those two small flags, async and noatime, shaved minutes off backup times.
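For reference, the full mount as an `/etc/fstab` entry might look like this (hypothetical QNAP address and paths):

```
# QNAP NFS export used as the backup target (hypothetical host and paths)
192.168.1.50:/Backups  /mnt/qnap-backups  nfs  rw,async,noatime,nolock,vers=3,tcp,rsize=1048576,wsize=1048576  0  0
```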
Tuning Overview
| Area | Parameter | Where | Recommended | Why |
|---|---|---|---|---|
| CPU | `--cpu` | Proxmox VM | `host` | Full CPU features, best perf. |
| CPU | `--sockets` / `--cores` | Proxmox VM | 1 socket, N cores | Cleaner topology, better scheduling. |
| Memory | Ballooning | Proxmox VM | disable (`--balloon 0`) | Avoid reclaim stalls during I/O. |
| Memory | `vm.swappiness` | Proxmox host (`/etc/sysctl.conf`) | 10 (ZFS hosts: 1) | Don't swap while RAM is free. |
| Disk bus | `--scsihw` | Proxmox VM | `virtio-scsi-single` | Multi-queue + iothread support. |
| Disk cache | `cache` | Proxmox VM disk | `writeback` | Faster I/O; host protects data. |
| Disk threads | `iothread` | Proxmox VM disk | `1` | Separate I/O thread per disk. |
| NIC model | `--netX` | Proxmox VM | `virtio=MAC,…` | Lower CPU, lower latency. |
| Guest agent | `--agent` + pkg | Proxmox VM + Ubuntu | enable + `qemu-guest-agent` | Clean backups, shutdown, IP info. |
| Filesystems | mount opts | Host/Guests | `noatime` | Fewer metadata writes. |
| Backups | schedule/limits | Proxmox job | off-peak + `bwlimit`/`ionice` | Reduce I/O spikes at midnight. |
| Time | chrony | All nodes | enable | Stable corosync + backups. |
Automation Script — Batch VM Optimization
Once I saw the improvements, I didn’t want to manually repeat it ten times.
So, I wrote a little script that converts multiple VMs in one go.
It updates the hardware, keeps MAC addresses, reboots each VM, and leaves everything ready for guest agent installation.
Run the script on the Proxmox host:
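A minimal sketch of what that batch conversion can look like, assuming disks attached as `scsi0`, NICs as `net0` on `vmbr0`, and the VMIDs listed at the top (adjust IDs, core counts, and bridge to your cluster):

```bash
#!/usr/bin/env bash
# Batch VirtIO conversion sketch (not a drop-in script; review before use).
set -euo pipefail

VMIDS=(101 102 103)   # VMs to convert; adjust to your cluster

for VMID in "${VMIDS[@]}"; do
  echo "== Converting VM ${VMID} =="

  # Keep the existing MAC: parse it out of the current net0 line
  MAC=$(qm config "${VMID}" | awk -F'[=,]' '/^net0:/ {print $2}')

  # Storage: switch the controller to VirtIO SCSI
  # (per-disk options like cache=writeback,iothread=1 follow the
  #  pattern shown earlier and depend on each disk's name)
  qm set "${VMID}" --scsihw virtio-scsi-single

  # NIC: re-create as VirtIO, preserving the original MAC address
  qm set "${VMID}" --net0 "virtio=${MAC},bridge=vmbr0"

  # CPU topology, host CPU type, and guest agent flag
  qm set "${VMID}" --sockets 1 --cores 2 --cpu host --agent enabled=1

  # Restart the VM so pending hardware changes take effect
  qm reboot "${VMID}"
done
```

After each VM comes back up, install `qemu-guest-agent` inside the guest as shown earlier.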
The Result
After the tuning:
- Backups finished in half the time.
- I/O pressure graphs went from chaos to calm.
- No more backup hangs or swap weirdness.
- CPU utilization smoothed out beautifully.
- Network latency dropped by roughly 30%.
In short, the cluster finally felt right again.
After more than a year with Proxmox, I still think it’s one of the best platforms to run and automate infrastructure on.
But migrating from VMware isn’t just about exporting and importing VMs — it’s about teaching them the Proxmox way to run efficiently.
Once I swapped the legacy drivers, tuned CPU and memory, and optimized I/O, the entire system woke up.
Now, backups run quietly, VMs boot fast, and I don’t get random hangs in the middle of the night.
