
From Hang-Ups to High Gear: How I Fixed My VMware VMs on Proxmox

#Proxmox · #VMware · #VirtIO · #Performance Tuning · #Migration · #QEMU
Introduction

I started running Proxmox a little over a year ago.
Since day one, it’s been rock solid — stable, flexible, and fun to automate.
I migrated my lab and a few production-like workloads from VMware, and for months, everything ran perfectly.

Until it didn’t.

Backups started taking forever.
Some Ubuntu VMs hung after vzdump.
And the I/O pressure graphs looked like a cardiogram from a horror movie.

Everything still worked, but it wasn’t the smooth Proxmox experience I was used to.
That’s when I realized the problem wasn’t Proxmox — it was my migrated VMware VMs still carrying around legacy drivers and suboptimal configs.

So, I decided to fix them.
What followed was a weekend of tuning, scripting, and testing — and in the end, my cluster finally started to work reliably.

Why the Problems Started

VMware and Proxmox both run virtual machines, but they use very different hardware abstractions.
VMware loves vmxnet3 for network and LSI Logic for storage controllers.
Proxmox, on the other hand, is built for VirtIO — fast, paravirtualized drivers that talk directly to the hypervisor with minimal overhead.
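
A quick way to see which side of that divide a guest is on is to list its PCI devices from inside the VM. A small diagnostic sketch, assuming a Linux guest with pciutils installed:

# Inside the guest: list the virtual NIC and storage controller
lspci | grep -Ei 'ethernet|scsi|virtio'
# Freshly migrated VMware guests typically show "VMXNET3" and "LSI Logic" here,
# while VirtIO guests show "Virtio network device" and "Virtio SCSI" instead.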

When I migrated my VMs from VMware, they kept their old hardware settings.
That’s like moving to a new city but still driving on the wrong side of the road.

The result:

  • High I/O wait times

  • Backup stalls

  • Odd CPU scheduling

  • And one VM that refused to shut down unless I begged it

The First Fix: QEMU Guest Agent

The first major issue was that backups froze.
Turns out, none of my imported Ubuntu VMs had the QEMU Guest Agent installed.
Without it, Proxmox can’t freeze the filesystem or shut down a guest cleanly.

A quick fix inside each VM:

sudo apt install qemu-guest-agent
sudo systemctl start qemu-guest-agent
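
The agent option also has to be switched on from the Proxmox side of each VM. A minimal sketch with qm, where the VM ID 101 is a placeholder:

# Enable the QEMU Guest Agent option for the VM (applies after a full stop/start)
qm set 101 --agent enabled=1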

The Hardware Tune-Up

Next up: replacing the VMware hardware relics.

Each VM got a proper VirtIO upgrade:

  • Disk Controller: from lsi → virtio-scsi-single

  • NIC: from vmxnet3 → virtio

  • Disk Cache: writeback

  • I/O Threads: enabled (iothread=1)
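
From the Proxmox host, the controller and NIC swap boils down to two qm commands; the per-disk cache and iothread options follow in the storage section. The sketch assumes VM ID 101, the default vmbr0 bridge, and a placeholder MAC (reuse the VM’s existing address):

# Use the VirtIO SCSI single controller so each disk can get its own I/O thread
qm set 101 --scsihw virtio-scsi-single

# Swap the vmxnet3 NIC for VirtIO, keeping the existing MAC address
qm set 101 --net0 virtio=BC:24:11:AA:BB:CC,bridge=vmbr0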

After applying the changes, the hardware list turned orange (pending reboot), but once I restarted each VM, the pending markers cleared and everything ran noticeably faster.

CPU and Memory Tuning

During migration, VMware’s export had left my VMs configured with two sockets, one core each.
That’s technically fine, but Linux schedules tasks more efficiently with a single socket and multiple cores.

Here’s the fix:
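
A sketch of the change with qm, following the recommendations in the tuning table below and assuming VM ID 101 with four vCPUs (the ID and counts are placeholders):

# One socket with four cores, and the host CPU type for full feature flags
qm set 101 --sockets 1 --cores 4 --cpu host

# Disable memory ballooning to avoid reclaim stalls during heavy I/O
qm set 101 --balloon 0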

Storage and I/O Optimization

Backups are I/O-heavy by nature.
To make them faster and more consistent, I tuned each VM disk:
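
As a sketch, re-attaching a disk with the recommended options and reading the result back, assuming VM ID 101 and a volume on local-lvm (names and sizes are placeholders):

# Writeback cache plus a dedicated I/O thread for the disk
qm set 101 --scsi0 local-lvm:vm-101-disk-0,cache=writeback,iothread=1

# Confirm the controller and disk options took hold
qm config 101 | grep -E 'scsihw|scsi0'
#   scsihw: virtio-scsi-single
#   scsi0: local-lvm:vm-101-disk-0,cache=writeback,iothread=1,size=32G
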

Tuning Overview
| Area | Parameter | Where | Recommended | Why |
| --- | --- | --- | --- | --- |
| CPU | --cpu | Proxmox VM | host | Full CPU features, best perf. |
| CPU | --sockets / --cores | Proxmox VM | 1 socket, N cores | Cleaner topology, better scheduling. |
| Memory | Ballooning | Proxmox VM | disable (--balloon 0) | Avoid reclaim stalls during I/O. |
| Memory | vm.swappiness | Proxmox host (/etc/sysctl.conf) | 10 (ZFS hosts: 1) | Don’t swap while RAM is free. |
| Disk bus | --scsihw | Proxmox VM | virtio-scsi-single | Multi-queue + iothread support. |
| Disk cache | cache | Proxmox VM disk | writeback | Faster I/O; host protects data. |
| Disk threads | iothread | Proxmox VM disk | 1 | Separate I/O thread per disk. |
| NIC model | --netX | Proxmox VM | virtio=MAC,… | Lower CPU, lower latency. |
| Guest agent | --agent + pkg | Proxmox VM + Ubuntu | enable + qemu-guest-agent | Clean backups, shutdown, IP info. |
| Filesystems | mount opts | Host/Guests | noatime | Fewer metadata writes. |
| Backups | schedule/limits | Proxmox job | off-peak + bwlimit/ionice | Reduce I/O spikes at midnight. |
| Time | chrony | All nodes | enable | Stable corosync + backups. |
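
The host-level rows translate into small, standard tweaks. A sketch for a Debian-based Proxmox host, with the fstab and vzdump values as illustrative examples:

# Lower swappiness so the host does not swap while RAM is still free (use 1 on ZFS hosts)
echo 'vm.swappiness = 10' >> /etc/sysctl.conf
sysctl -p

# Example noatime mount option in /etc/fstab (UUID is a placeholder)
#   UUID=xxxx-xxxx  /  ext4  defaults,noatime,errors=remount-ro  0  1

# Throttle backup I/O in /etc/vzdump.conf (values are illustrative)
#   bwlimit: 100000
#   ionice: 7

# Keep time in sync for corosync and backup scheduling
apt install -y chrony
systemctl enable --now chrony
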
Automation Script — Batch VM Optimization

Once I saw the improvements, I didn't want to manually repeat it ten times. So, I wrote a little script that converts multiple VMs in one go.

It updates the hardware, keeps MAC addresses, reboots each VM, and leaves everything ready for guest agent installation.

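A minimal sketch of what such a script can look like (not the original Gist); the VM IDs, the vmbr0 bridge, and a single net0 NIC per VM are assumptions:

#!/usr/bin/env bash
# Sketch of a post-migration batch tune-up; adjust VM IDs and options to your environment.
set -euo pipefail

VMIDS=(101 102 103)

for vmid in "${VMIDS[@]}"; do
  echo "Tuning VM ${vmid}..."

  # Read the current MAC address from net0 so the guest keeps its network identity
  mac=$(qm config "${vmid}" | awk -F'[=,]' '/^net0:/ {print $2}')

  # VirtIO SCSI controller with per-disk I/O thread support
  qm set "${vmid}" --scsihw virtio-scsi-single

  # VirtIO NIC, reusing the original MAC
  qm set "${vmid}" --net0 "virtio=${mac},bridge=vmbr0"

  # Collapse sockets into cores so the total vCPU count stays the same
  sockets=$(qm config "${vmid}" | awk '/^sockets:/ {print $2}')
  cores=$(qm config "${vmid}" | awk '/^cores:/ {print $2}')
  qm set "${vmid}" --sockets 1 --cores $(( ${sockets:-1} * ${cores:-1} )) --cpu host --balloon 0

  # Enable the guest agent option; the package still has to be installed inside the guest
  qm set "${vmid}" --agent enabled=1

  # Disk cache/iothread need the full volume spec; set those per disk as shown earlier

  # Reboot so the new virtual hardware takes effect
  qm reboot "${vmid}"
done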

Run the script on the Proxmox host:

root@pve-node02:~# ./proxmox_post_migration.sh

The Result

After the tuning:

  • Backups finished in half the time.

  • I/O pressure graphs went from chaos to calm.

  • No more backup hangs or swap weirdness.

  • CPU utilization smoothed out beautifully.

  • Network latency dropped by roughly 30%.

In short, the cluster finally felt right again.

After more than a year with Proxmox, I still think it’s one of the best platforms to run and automate infrastructure on.
But migrating from VMware isn’t just about exporting and importing VMs — it’s about teaching them the Proxmox way to run efficiently.

Once I swapped the legacy drivers, tuned CPU and memory, and optimized I/O, the entire system woke up.
Now, backups run quietly, VMs boot fast, and I don’t get random hangs in the middle of the night.


About the Author

Chris Beye

Network automation enthusiast and technology explorer sharing practical insights on Cisco technologies, infrastructure automation, and home lab experiments. Passionate about making complex networking concepts accessible and helping others build better systems.
