After hearing a lot of praise for Proxmox both online and in my courses (from fellow students and instructors alike), I decided to experiment with it in my homelab. I also wanted hands-on experience with it after hearing that many companies have been switching from VMware to Proxmox to cut costs, especially after Broadcom's acquisition of VMware and the significant price increases that followed. Historically, I've run my server services on bare metal, but separating services by virtual machine on a single server sounded like a far more efficient way to manage them. I found things about Proxmox that I liked and things I disliked. What started as just an experiment turned into moving all my services from three or four physical servers onto a single Proxmox server, but after a few weeks, I moved them back.
After experimenting with running a few of my server services in a VM, I decided to move my entire homelab to Proxmox. My homelab consisted of four separate servers (three mini-PCs and a Raspberry Pi 5) plus one dual-bay DAS. One of the biggest benefits of virtualization is that services I traditionally separated onto their own bare metal servers can all run on a single machine. Proxmox allowed me to combine the four physical servers into one, and the VMs can share a single USB port to access my dual-bay DAS via USB passthrough. In this sense, virtualization is simpler.
My primary physical server (called "jupiter") was configured to run five virtual machines, each running one to three services. Below is the documentation and notes for how I had the VMs separated (not included are the two other nodes, "mars" and "neptune", which had few, if any, VMs before I switched back). What's neat is that each VM gets its own IP address (though I believe this can be configured so that Proxmox handles NAT instead of the router), and I designed a basic IP scheme for the Proxmox server and VMs, assigning the addresses statically on my router's DHCP server. The physical server gets an IP ending in .10, while its VMs get .11, .12, and so on. When I experimented with a cluster, the other physical servers would get .20 and .30, and their VMs .21, .31, etc.
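To make that concrete, here's roughly how the addresses mapped out. The subnet and the exact pairing of node to range are illustrative rather than my real config; the pattern of .10/.20/.30 for hosts and .11/.21/.31 for their VMs is the part I actually used:

    jupiter (node 1)   192.168.1.10   VMs: .11, .12, .13, ...
    mars    (node 2)   192.168.1.20   VMs: .21, .22, ...
    neptune (node 3)   192.168.1.30   VMs: .31, .32, ...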
Granted, I'm far from a Proxmox pro, but I was regularly losing networking on the host server at random, and therefore on all of the guest VMs. Sometimes it happened every couple of hours, sometimes not for 48 hours. I had to install the app Keep It Up on my phone to send a ping every 5 minutes to make sure the server was still connected. I eventually solved this with a firmware update for the NIC, so it may not have been a Proxmox/Ubuntu issue at all, but a firmware issue. That said, I never had that problem when running the machine bare metal with Fedora 42.
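The check itself is trivial; here's a minimal Python sketch of what that 5-minute ping amounts to (the address is the hypothetical .10 host from the scheme above, not necessarily my real one):

    #!/usr/bin/env python3
    # Minimal reachability check, roughly what Keep It Up was doing for me:
    # ping the host every 5 minutes and log whenever it stops answering.
    import subprocess
    import time
    from datetime import datetime

    HOST = "192.168.1.10"   # hypothetical address of the Proxmox node
    INTERVAL = 300          # seconds between checks

    while True:
        # One ping with a two-second timeout; returncode 0 means the host answered.
        up = subprocess.run(
            ["ping", "-c", "1", "-W", "2", HOST],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0
        if not up:
            print(f"{datetime.now().isoformat()} {HOST} is unreachable")
        time.sleep(INTERVAL)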
I also had issues with unstable VMs, and the reasons seemed to differ. At least one cause was not giving a VM enough resources to run, specifically storage space. I also had issues with my media server VM (tachyon), which ran samba, jellyfin, and cloudflare: it would crash and restart, only to boot into rescue mode. By default, Fedora 42 disables the root account, which makes things extremely difficult when you're stuck in rescue mode. I never tracked down the cause, but I rebuilt the VM with the root account enabled and no longer had the crash-into-rescue-mode problem. However, that VM would still reboot at random; after the reboot it behaved normally, so I only noticed if I happened to be interacting with the server when the crash occurred.
I had also experimented with setting up a cluster and added two more nodes (the other two servers in my homelab). I migrated my Pihole VM from one node to another and it seemed to work fine, but the next day I noticed it was no longer functioning. The VM would not power on, and I was not able to migrate it back to the original node. I never tracked down the cause, but that was when I decided to go back to a "simple" bare metal server.
As I mentioned at the beginning, Proxmox allowed me to combine multiple physical servers into one, which does make things simpler in a way. On the other hand, instead of managing one server, you're managing many virtual servers, each needing its own updates, maintenance, and monitoring to remain stable. Admittedly, a lot of this is due to my own ignorance of Proxmox and virtualization, and I should also point out that some of these problems may have had something to do with running Proxmox on a mini-PC with limited resources. At the end of the day, I just don't have the time to troubleshoot Proxmox/Ubuntu issues or an unstable VM.
Over the course of a few weeks, I spent chunks of every single day managing and fixing issues, and over time, I just stopped having fun. I need a working server that I don't have to think much about so I can focus on other things. Even though I'm back on bare metal, this project did exactly what I wanted it to do: I learned something. I learned a lot by playing around with Proxmox, breaking both guests and the host, and picking up more Linux along the way. That was the point of the experiment, and I now have a better understanding of not just Proxmox, but virtualization in general.
That said, I've decided to return to bare metal for core services like jellyfin, samba, and syncthing, because if those services go down, family members are affected. In switching back, I've also simplified my earlier physical setup down to two servers: the ThinkCentre M920q and the Raspberry Pi 5. I've removed a switch, two mini-PCs, and countless cables.
This post is by no means a knock on Proxmox or on virtualization; if anything, it's a knock on myself for jumping to a virtualized environment too soon, before adequate testing. It's obvious to me that virtualization is essential to be familiar with, and as with any technical project, we should expect problems to arise. I may one day return to Proxmox and virtualization. If you have the inclination, it's definitely worth the time.
Thanks for reading. Feel free to send comments, questions, or recommendations to hey@chuck.is.