Hi all. I was curious about some of the pros and cons of using Proxmox in a home lab setup. It seems like overkill in most home lab setups, but I feel like there may be something I'm missing. Let's say I run my home lab on two or three different SBCs: the main server is an x86 i5 machine with 16 GB of memory, and the others are ARM devices with 8 GB. Ample storage on all. Wouldn't Proxmox be overkill here and eat up more system resources than just running base Ubuntu, Debian, or another server distro on them all and running the needed services from binaries or Docker? It seems like the extra memory needed to run the Proxmox software and then the containers would just kill available memory or CPU. Am I wrong in thinking that Proxmox is better suited to a machine with 32 GB or more of memory and a reasonably powerful baseline CPU?
It all depends on how you want to homelab.
I was into low-power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I'm an incessant tinkerer. I like to experiment with new tech all the time, and I'm always cloning various repos to try out new stuff. I was reaching a limit with how much I could achieve with Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.
I bought a pair of ex-enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs running (at a guess) around 60-70 different services across them.
I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.
Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.
Let's say there's a new contender that competes with Immich. They offer the promise of a really cool feature no one else has thought of in a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network. I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer, and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
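(For the curious, the manual equivalent of that spin-up is only a handful of commands. A rough sketch - the template filename, VMID and storage names are just examples:)

```
# fetch a Debian CT template (exact filename varies by release)
pveam update
pveam download local debian-12-standard_12.7-1_amd64.tar.zst

# create and start an unprivileged container from it
pct create 120 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
  --hostname immich-test --memory 2048 --cores 2 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1
pct start 120
```

From there the Ansible playbook and Portainer take over.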
I have a play with the competitor for a bit. If I don't like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shut down, maybe not - just in case I discover something I don't like about the new kid on the block. That's a simplified example, but hopefully it illustrates what I get out of using Proxmox the way I do.
The main con for me is cost: the initial cost of the hardware, and the cost of powering beefier kit like this. I'm about to invest in some decent centralised storage (I've been surviving with a couple of li'l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that's more cost again.
Proxmox is available for free. You pay for support and maybe other things with a license, but you can download it and give it a spin at no cost. I switched to Proxmox around a month ago when I restarted my homelab project after years on hiatus. I used to use ESXi before Broadcom bought VMware and decided to suck. I like it so far.
It might be overkill for your needs. I'm running it because I want to play with setting up and managing Windows Server (I only have experience managing existing Windows servers), so there's a distinct reason for me to be on Proxmox even though I'm a Mac and Linux person. I agree that it might be overkill for your i5 if you only plan to run one Ubuntu instance on it. However, a lot of homelabbing is about having an environment to try out and learn new skills. If that's interesting to you, it might be worthwhile.
Keep in mind that you could also run KVM for virtualization if you find reason for VMs. You’re not limited to Proxmox. And if you see no need for VMs, you already have three devices to do the things you bought them to do.
For stability you want the enterprise subscription, which is not free but is fairly reasonably priced.
You need Proxmox
Seriously though, it is nice to have.
I use Proxmox/virtualisation because I want to be able to run services within their own OS. I've got a VM dedicated to Docker both at home and in my colocation, since a lot of services I'm happy to just chuck on there, but there are others with more complex setups, and some services/systems where just running them in Docker isn't an option.
Pros:
Cons:
ahhh thanks, i needed that laugh. too true…
I'm currently running Proxmox on a 32 GB server with a Ryzen 5600G, and it's going fine. The containers don't actually use all that much RAM, and personally I'm actually seeing better benchmarks than I did when I just ran a bare-bones Ubuntu server. My biggest issue has actually been IO strain more than anything, because everything is a lot more IO-heavy now that it's all containerized. I think I could easily run it with less RAM; I would just have to turn off some of the more RAM-intensive items.
As for whether I regret changing: no way, Jose. I absolutely love having everything containerized, because I can set things up how I want, when I want. If I end up screwing something up configuration-wise, or decide I no longer need a service, I can just nuke the container without having to remember what I installed for that program or whether other programs need a given dependency to work. Plus, while I haven't tinkered as much in this area, you can hard-set what resources you want to allot to each instance. So if you have a program like, say, a Pi-hole that you know will never need more than x amount of resources to work properly, you can restrict what it can use, and if something does go wrong with it, it won't eat all of your system resources.
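Something like this, for example (the VMID and limits are illustrative):

```
# cap an existing CT's RAM, swap and CPU from the Proxmox host
pct set 110 --memory 512 --swap 256 --cores 1
```

The same knobs are available per VM/CT in the web UI.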
The biggest con is probably having to figure out the networking side, because every container is going to have a different IP address. I found that a web dashboard is my friend: I have Heimdall tell me where all my services are, and I just have to click an icon to get to the right IP address. It took a lot of work to figure out how it all operates and how to get it working, but the benefits have been amazing. Just make sure you have a spare disk to temporarily clone partitions to, because it's extremely difficult to reuse existing disks in the machine. I've been slowly going one disk at a time: copying the data over to an external drive, nuking the disk, reinitializing it as part of the Proxmox LVM, and then copying the data back over into the appropriate image file.
a simple cron job pointing to an update.sh with an apt update && apt upgrade -y does the trick.
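Something along these lines (a minimal sketch; the script path and schedule are arbitrary):

```
#!/bin/sh
# /usr/local/bin/update.sh -- refresh and upgrade packages non-interactively
apt-get update && DEBIAN_FRONTEND=noninteractive apt-get upgrade -y
```

```
# root's crontab: run it every Monday at 04:00 and keep a log
0 4 * * 1 /usr/local/bin/update.sh >> /var/log/auto-update.log 2>&1
```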
I wouldn't recommend completely automating it, though.
Debian has unattended-upgrades by default and generally takes care of itself.
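If you want to make sure it's actually installed and enabled (Debian/Ubuntu):

```
apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# preview what it would do without changing anything
unattended-upgrade --dry-run --debug
```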
I used Proxmox for a couple years and it’s good if you run a lot of VMs or LXCs, but I found that I’m not really the target audience. I ended up only running one Debian VM for my Docker containers. It was fine, but I eventually felt that Proxmox added no value for me, and the end result was sacrificing some memory and performance from using virtio emulations for CPU/GPU/RAM/filesystems. If your machines only have 8-16GB of RAM I don’t think it would be a good idea, as I’ve seen the rule of thumb is to dedicate 2GB for Proxmox’s usage, which is in addition to any guest OS’s requirements. Meanwhile I have a Debian install on a VPS that takes about 450MB of RAM.
I went exactly the same route. Years of Proxmox before realizing it is not KISS in any way for my use cases. Switched to NixOS on a ZFS root (so no bash installation scripts ;) ).
However, Docker does not have the same level of isolation and security as VMs. I am currently looking into gVisor for that.
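For anyone else looking at it: wiring gVisor into Docker is mostly just registering its runsc runtime. A sketch, assuming runsc is already installed at /usr/local/bin/runsc and you don't have an existing daemon.json to merge with:

```
# register the runsc runtime with Docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "runtimes": {
    "runsc": { "path": "/usr/local/bin/runsc" }
  }
}
EOF
systemctl restart docker

# opt individual containers into gVisor
docker run --rm --runtime=runsc alpine uname -a
```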
$550? For a homelab you should only need to pay €110/year. What am I missing here?
Yeah it’s €110/year here: https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-community
I remember evaluating the price a long time ago and thinking it was too much for disabling a pop-up, and when writing my post I navigated to their site, saw the standard subscription, and thought that was what I had looked at a few years back: https://shop.proxmox.com/index.php?rp=/store/proxmox-ve-standard
Ah yes, I did the same thing as you when I checked again, but from my PC.
I just paid the €110 myself after using it for 2 years. It's not for the popup, since I was using a script to remove that; it's more to get the production-ready updates. My server runs too many important things now, and I don't want less-tested changes. That, and to support the devs.
I run it on a 4GB Fujitsu Futro S920! 😆 All the RAM seems to be used by the 3 VMs. Some swap is being used, OK, but the Proxmox overhead doesn't seem that much.
Just wait until you get a few machines. You can live-migrate things and dynamically allocate resources.
C’mon just move to Incus: https://lemmy.world/comment/10896868 :P
No way! For just 1 reason: I will have to learn another new thing and replace it in about 6 servers. I value my time and for now Proxmox is fine.
P.S. Incus seems nice though! NO, stop tempting me!!! I'm already in the rabbit hole with a gazillion self-hosted services and dozens more piling up on the to-do list 🙈🙈🙈
Well, I understand your POV… but real software freedom instead of messages asking you to buy a license and a questionable kernel is always a good choice :P
Does Incus support things like Kernel Samepage Merging? How does it handle Windows VMs? Does the WebUI give a nice and easy novnc window that just works?
Yes, ksmtuned is your friend. For VMs it can be managed/enabled like on any other Linux kernel + QEMU/KVM setup running with KSM enabled. On LXC containers it may be a bit harder, as it depends a LOT; you'll get the best results if you're using systemd on both the host and in the containers. It may all work out of the box, or you may have to resort to ksm_wrapper in both the Incus executable and the stuff running inside your containers. Don't forget that:
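The raw kernel knobs, for reference (same on a Proxmox or Incus host; ksmtuned just manages these for you):

```
# is KSM on? (0 = off, 1 = on, 2 = off and unmerge everything)
cat /sys/kernel/mm/ksm/run

# enable it by hand
echo 1 > /sys/kernel/mm/ksm/run

# how many pages are currently being deduplicated
cat /sys/kernel/mm/ksm/pages_sharing
```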
As one would expect from QEMU… https://blog.simos.info/how-to-run-a-windows-virtual-machine-on-incus-on-linux/
Yes it works fine. https://youtu.be/wqEH_d8LC1k?feature=shared&t=508
Proxmox is based on KVM/QEMU and is very resource-conservative. There is virtually no impact on performance due to the hypervisor, even on older processors. CPU and hypervisor scheduling make running multiple VMs at the same time trivial as well. RAM and I/O bandwidth are the two things that can affect performance. Running out of RAM due to too many VMs will grind you to a halt, but so would running too many applications or containers on bare metal. Running everything off one spinning SATA disk will make it impossible, but again, same downfall on bare metal.
Those minimal impacts to performance are a minor nuisance compared to the ability to run experiments and learn on sandboxed VMs. Now that TrueNAS has better virtualization support, it has caught my eye as a better homelab solution, but I will always have a proxmox server running somewhere in my stack just due to the versatility it gives me.
I don't prefer Proxmox, but I will say that when you have even a machine with 8 or 16 GB of RAM, virtualizing a workload on it just makes sense. At that point the cost is ~12% of resources, and the benefits IMHO farrr outweigh that.
Virtualization has a 1-2% performance penalty
Using Proxmox has been extremely useful for me. It has allowed me to experiment with a lot more things than I ever did before: it is very easy to spin up a new VM to test things out.
I would recommend it to anyone running a home server.
Incus is way easier to work with than Proxmox, and it sits on your OS of choice instead of being the OS you must use. For home use it's way easier with the web UI, and it even has clustering if you want to go hard.
So you can install Incus when you want a VM/LXC container and not have to commit to a VM/LXC container OS from the start.
Also Proxmox free just had a bad update that björked some stuff if you updated when it was live. Proxmox free is rolling and apparently lacks basic sanity checks for updates.
I played around with Incus yesterday on one of my VPSs that I really don't care about, and I did find it really interesting. But I'm just wondering if it's still a bit too much for what I use my home lab for (running local services like Jellyfin, Gitea, etc.). I would prefer to containerize all of those, but unless I'm misunderstanding something somewhere (and I probably am), running Incus just to run another instance of Ubuntu 22.04 (or whatever) so I can set up Jellyfin or Gitea inside of it seems like a bit of overkill. However, as I'm trying to get Nextcloud set up and running, having it exposed to the internet means the spousal factor goes way up. Honestly, they are about to kill me for using Pi-hole, so making them turn on Tailscale to connect to Nextcloud sometimes feels like asking too much. So this is where running something in an isolated container would make me feel a bit more at ease. Ah, if only my spouse would just learn to turn on Tailscale when they need it, but I don't see that happening any time soon.
I do use it to hold internet-exposed things in LXC containers to sidestep having to figure out how to not run things as Docker root.
You do not need it for everything, but since it's not an OS that makes itself your everything, that's OK! Run Docker containers as you need, put internet-exposed ones in an LXC container, and put Home Assistant in a VM because it's special.
I remember updating (maybe a year ago now) and it making all my containers inaccessible.
Incus or Proxmox (e.g., should I shift to Incus LTS or something?)
If Incus works for you, use it. Proxmox locks you out of the option to choose your base server distro.
Ah, I was wondering which one you updated and it made your containers inaccessible!
Sorry, misunderstood. Proxmox Free broke my containers on updating a while ago.
These days I use Docker-style application containers, but I think LXC (the base technology powering Incus/LXD) is useful in a number of situations and perfectly viable. I find Incus-containerized applications easier to upgrade individually (normal software updates of your apps, with no need to recreate a container image), and they give a closer-to-native management experience. You do lose out on automated deployment of applications from widely available image sources like docker.io, but the convenience loss is minimal.
Good to know Proxmox’s bad updates are more pervasive than the latest bad update.
I have been able to install Docker inside the LXC containers and pull images with the normal commands. I do that container-in-container to get effectively rootless Docker containers for stuff that I couldn't figure out how to run rootless. So you don't even lose out on Docker if you're determined! And as you said, Incus goes on any OS, so you can use Docker just fine on the base OS of your choice and use Incus for specific things!
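In case it saves anyone a search, the one knob you need for Docker-in-Incus is nesting (the container and image names here are just examples):

```
# unprivileged container that's allowed to run nested containers
incus launch images:debian/12 dockerbox --config security.nesting=true

# install Docker inside it and run something
incus exec dockerbox -- apt-get update
incus exec dockerbox -- apt-get install -y docker.io
incus exec dockerbox -- docker run -d -p 8080:80 traefik/whoami
```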
I have tried a couple of Proxmox clusters, one with overkill specs and one with little mini PCs. Proxmox does eat up a fair amount of memory, but I have used it with Ceph for live migrations. It's really useful to be able to power off a machine, work on it, then bring it back up, with no interruption to my services. That said, my mini PCs always seemed to be hurting for RAM. So those are my pros and cons.
Proxmox doesn't have a lot of overhead. However, Ceph is a beast: it requires very powerful hardware and at least a dedicated 10G network between hosts for transfers, and you need 5 or more nodes for it to be reliable. I wouldn't recommend Ceph, as there isn't a lot of point to it; you can get similar functionality with NFS or ZFS replication.
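For comparison, ZFS replication is basically scheduled snapshot-and-send. A sketch (the dataset name follows Proxmox's CT disk convention, but yours will differ; Proxmox can also schedule this natively under Datacenter → Replication on ZFS storage):

```
# snapshot a guest disk on the source node...
zfs snapshot rpool/data/subvol-101-disk-0@repl1

# ...and ship it to a standby node over SSH
zfs send rpool/data/subvol-101-disk-0@repl1 | \
  ssh node2 zfs recv -F rpool/data/subvol-101-disk-0

# later runs only send the delta
zfs snapshot rpool/data/subvol-101-disk-0@repl2
zfs send -i @repl1 rpool/data/subvol-101-disk-0@repl2 | \
  ssh node2 zfs recv -F rpool/data/subvol-101-disk-0
```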
I have it working with LACP'd 4Gb networking for the transfers. Five nodes. I agree though, it's a beast on RAM.