I just built my own automation around their official documentation; it’s fantastic.
https://www.wireguard.com/#conceptual-overview
Vyatta and Vyatta-based systems (EdgeRouter, etc.) I would say are good enough for the average consumer. But if we’re deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs Tailscale, I think we’re certainly past accepting a budget consumer router as acceptably meeting these and other needs.
Also, you don’t need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on network SSID. At home, all traffic is routed locally; outside of my network, everything goes through DDNS and port forwarding.
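For the laptop side, a NetworkManager dispatcher script is one way to wire that up on Linux; a rough sketch (the SSID and profile names are placeholders):

```sh
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-wg-ssid  (root-owned, executable)
# NetworkManager passes the interface as $1 and the event as $2, and
# exposes the active connection name (the SSID, for Wi-Fi) as
# $CONNECTION_ID. "MyHomeSSID" and "wg-roaming" are made-up names.
[ "$2" = "up" ] || exit 0

if [ "$CONNECTION_ID" = "MyHomeSSID" ]; then
    wg-quick down wg-roaming 2>/dev/null || true  # at home: route locally
else
    wg-quick up wg-roaming 2>/dev/null || true    # away: tunnel via DDNS
fi
```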
If you’re really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don’t have to open anything directly, and internal traffic still routes when you don’t have an internet connection at home. It’s basically what Tailscale is, except in this case you control the keys, you have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
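A minimal sketch of that relay layout, assuming wg-quick and made-up keys/addresses; the VPS is the hub and every other node dials out to it, so nothing at home ever listens:

```
# VPS: /etc/wireguard/wg0.conf (hub). Also set net.ipv4.ip_forward=1
# so the VPS will relay traffic between peers.
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# homelab; AllowedIPs includes the home LAN so the VPS relays to it
PublicKey = <homelab-public-key>
AllowedIPs = 10.10.0.2/32, 192.168.1.0/24

[Peer]
# phone/laptop
PublicKey = <phone-public-key>
AllowedIPs = 10.10.0.3/32
```

Each client then sets its Endpoint to the VPS with a PersistentKeepalive, so every connection in the mesh is outbound-only.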
Fail2ban and containers can be tricky, because under the hood you’ll often have container policies automatically inserting themselves above host policies in iptables. The Docker documentation has a good write-up on how to solve it for their implementation:
https://docs.docker.com/engine/network/packet-filtering-firewalls/
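The short version of that write-up: Docker manages the FORWARD chain itself, but rules in the DOCKER-USER chain are evaluated first, so that’s where your own bans have to land. For example (hypothetical address):

```sh
# DOCKER-USER is evaluated before Docker's own FORWARD rules, so a
# drop here actually takes effect. Banning one address by hand:
iptables -I DOCKER-USER -s 203.0.113.45 -j DROP

# fail2ban can target the same chain, e.g. in jail.local:
#   banaction = iptables-multiport[chain=DOCKER-USER]
```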
For your use case specifically: if you’re using VMs only, you could run it within any VM that is exposing traffic, but for containers you’ll have to run fail2ban on the host itself. I’m not sure how LXC handles this, but I assume it’s probably similar to Docker.
The simplest solution would be to just put something between your hypervisor and the Internet physically (a Raspberry Pi based firewall, etc.).
You should consider reversing the roles. There’s no reason your homelab can’t be the client, with your VPS as the server. Once the WireGuard virtual network exists, traffic doesn’t really care which end was the client and which was the server. It saves you from opening a port to attackers on your home network.
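Concretely, the homelab config ends up looking like any roaming client; the one important ingredient is PersistentKeepalive, which keeps the NAT mapping open so traffic can still flow “inward” later. All values below are placeholders:

```
# Homelab side: it dials OUT to the VPS, so the home firewall never
# needs an inbound rule.
[Interface]
Address = 10.10.0.2/24
PrivateKey = <homelab-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.0.0/24
PersistentKeepalive = 25
```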
How has nobody in this thread said check_mk yet?
It’s free and you host it yourself. It’s built on Nagios, compatible with Nagios plugins, and supports SNMP or agent-based checks. It can email, SMS, Slack, or Discord you when something breaks, and you can write your own custom checks in any language that can output to a local console… I could never imagine even looking for something else.
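Custom checks really are that simple: a local check is any executable in the agent’s local/ directory that prints one line per service in the form `<status> <service-name> <metrics> <detail>`. A toy example (the path and 90% threshold are just for illustration):

```sh
#!/bin/sh
# Drop into e.g. /usr/lib/check_mk_agent/local/ and make executable.
# The leading number is the service state: 0=OK, 1=WARN, 2=CRIT.
USED=$(df -P / | awk 'NR==2 {sub("%",""); print $5}')

if [ "$USED" -gt 90 ]; then
    echo "2 Root_Filesystem used=$USED Root filesystem at ${USED}%"
else
    echo "0 Root_Filesystem used=$USED Root filesystem at ${USED}%"
fi
```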
Proxmox uses SCSI for disk images, which are single-access only.
SMB would add quite a lot of overhead, and it doesn’t natively support Linux filesystem permissions. You’ll also run into issues with any older programs that rely on file locks to operate. NFS would be a much more appropriate choice. That said, AppArmor profiles on containers will usually prevent you from mounting remote NFS shares without jumping through hoops (hoops that are in your way for a reason). You’ll be limited to doing that with virtual machines only, no OpenVZ/containerd.
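Inside a full VM there’s nothing special about it; it’s just a normal NFS mount (server and export names made up):

```sh
mount -t nfs nas.lan:/export/media /mnt/media

# or persistently, in /etc/fstab:
# nas.lan:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
```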
Fun fact: it was literally the problems of sharing media storage between multiple workflows that got me to stop using virtual machines in Proxmox and start building custom Docker containers instead.
There are things Proxmox definitely can’t do, but chances are that even if you know what they are, they still don’t apply to your workflows.
Most things are a tradeoff between extensibility and convenience. The next layer down is what I do: Debian with containerd + qemu-kvm + custom containers/VMs, automated by hand in a bunch of bash functions. I found Proxmox’s upgrade process to be a little on the scuffed side, and I didn’t like the way it handled domain timeouts. It seemed kind of inexcusable how long it would sometimes take to shut down, which is a real problem in a power event on a UPS. I also didn’t like that updates to the Proxmox core would clobber a lot of things under the hood that you might have configured by hand.
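For a sense of what “automated by hand in a bunch of bash functions” can look like, here’s the shape of it (specs and paths invented for illustration, not my literal setup):

```sh
# One small function per lifecycle action, kept in a file you source.
start_vm() {
    name="$1"
    qemu-system-x86_64 \
        -name "$name" \
        -enable-kvm -m 4096 -smp 2 \
        -drive file="/vms/$name.qcow2",if=virtio \
        -nic bridge,br=br0,model=virtio-net-pci \
        -display none -daemonize
}
```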
The main thing is just to think about what you want to do with it, and whether you value the learning that comes with working under the hood at the various tiers. My setup before this was Proxmox 6.0, and I was arguably doing just as much on it then as I am now. All I really have to show for going a level deeper is a better understanding of how things actually function and a skillset to apply at work. I will say, though, that my backups are a lot smaller now that I’m only backing up scripts, Dockerfiles, and specific persistent data. Knowing exactly how everything works lets you be a lot more agile, and a lot more confident, in backup and recovery.
The best way to learn is by doing!