• 0 Posts
  • 29 Comments
Joined 1Y ago
Cake day: Jun 20, 2023



Vyatta and Vyatta-based systems (EdgeRouter, etc.) are, I would say, good enough for the average consumer. If we're deep enough in the weeds to be arguing the pros and cons of raw WireGuard vs Tailscale, I think we're certainly past accepting a budget consumer router as acceptably meeting these and other needs.

Also, you don't need port forwarding and DDNS for internal routing. My phone and laptop both have automation in place for switching WireGuard profiles based on network SSID. At home, all traffic is routed locally; outside of my network, everything goes through DDNS/port forwarding.
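
On the laptop side, a NetworkManager dispatcher script is one way to do the switching; a rough sketch, where the SSID and profile names are just placeholders:

```sh
#!/bin/sh
# /etc/NetworkManager/dispatcher.d/50-wg-switch (example path)
# Bring a roaming WireGuard profile up or down based on the current SSID.
# "HomeSSID" and "wg-roaming" are placeholder names.
ACTION="$2"
[ "$ACTION" = "up" ] || exit 0

SSID="$(nmcli -t -f active,ssid dev wifi | awk -F: '$1 == "yes" {print $2}')"

if [ "$SSID" = "HomeSSID" ]; then
    wg-quick down wg-roaming 2>/dev/null || true   # at home: route locally, no tunnel
else
    wg-quick up wg-roaming 2>/dev/null || true     # away: tunnel home via DDNS/port forward
fi
```

On the phone, the official WireGuard app plus an automation app can do the same thing per SSID.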

If you're really paranoid about it, you could always skip the port-forward route and set up a WireGuard-based mesh yourself, using an external VPS as a relay. That way you don't have to open anything directly, and internal traffic still routes when you don't have an internet connection at home. It's basically what Tailscale is, except in this case you control the keys and have better insight into who is using them, and you reverse the authentication paradigm from external to internal.
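
A rough sketch of what the relay side can look like, with placeholder keys, addresses, and subnets (the VPS also needs IP forwarding enabled to pass traffic between peers):

```ini
# /etc/wireguard/wg0.conf on the VPS relay (all values are examples)
[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# homelab: also routes the home LAN through this peer
PublicKey = <homelab-public-key>
AllowedIPs = 10.10.0.2/32, 192.168.1.0/24

[Peer]
# phone/laptop
PublicKey = <phone-public-key>
AllowedIPs = 10.10.0.3/32
```

Each client then lists the VPS as its only peer, with AllowedIPs covering 10.10.0.0/24 (plus the home LAN for roaming devices), so everything meets in the middle without any ports open at home.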


Tailscale proper gives you an external dependency (and a lot of security risk), but the underlying technology (WireGuard) does not have the same limitation. You should just deploy WireGuard yourself; it's not as scary as it sounds.
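
A rough sketch of the do-it-yourself route on a Debian-ish server; the interface name and paths here are just the common defaults:

```sh
apt install wireguard
# generate the server keypair
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
# write /etc/wireguard/wg0.conf ([Interface] + one [Peer] block per device), then:
systemctl enable --now wg-quick@wg0
wg show    # confirm handshakes once a peer connects
```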


Fail2ban and containers can be tricky, because under the hood you'll often have container policies automatically inserting themselves above host policies in iptables. The Docker documentation has a good write-up on how to solve it for their implementation:

https://docs.docker.com/engine/network/packet-filtering-firewalls/

For your use case specifically: if you're using VMs only, you could run it within any VM that is exposing traffic, but for containers you'll have to run Fail2ban on the host itself. I'm not sure how LXC handles this, but I assume it's probably similar to Docker.

The simplest solution would be to just put something physically between your hypervisor and the internet (a Raspberry Pi based firewall, etc.).
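
For the Docker case, the short version of that write-up is the DOCKER-USER chain: rules you put there are evaluated before Docker's own forwarding rules, so bans actually apply to containerized services. For example (placeholder address):

```sh
iptables -I DOCKER-USER -s 203.0.113.50 -j DROP
# fail2ban's iptables-based ban actions take a chain parameter, so you can point
# them at DOCKER-USER instead of the default INPUT chain in your jail config.
```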


+1 for cmk. I've been using it at work for an entire data center plus thousands of endpoints, and I also use it for my 3-server homelab. It scales beautifully at any size.


You would expose a single (trunked) port to multiple VLANs, and then bind multiple addresses to that single physical connected interface. Each service would then bind itself to the appropriate address, rather than "*".
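
Roughly, with iproute2 (interface names, VLAN IDs, and subnets are examples):

```sh
# one physical NIC on a trunk port, one sub-interface + address per VLAN
ip link add link eth0 name eth0.10 type vlan id 10
ip link add link eth0 name eth0.20 type vlan id 20
ip addr add 10.0.10.2/24 dev eth0.10
ip addr add 10.0.20.2/24 dev eth0.20
ip link set eth0.10 up
ip link set eth0.20 up
# then each service binds to its VLAN's address instead of 0.0.0.0,
# e.g. an nginx server block with: listen 10.0.10.2:443;
```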


You should consider reversing the roles. There's no reason your homelab can't be the client and your VPS the server. Once the WireGuard virtual network exists, network traffic doesn't really care which side was the client and which was the server. That also saves you from opening a port to attackers on your home network.
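
A rough sketch of the homelab side dialing out (placeholder keys, names, and addresses); PersistentKeepalive is what keeps the outbound tunnel alive so the VPS can always reach back in:

```ini
# /etc/wireguard/wg0.conf on the homelab (all values are examples)
[Interface]
Address = 10.10.0.2/32
PrivateKey = <homelab-private-key>

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
AllowedIPs = 10.10.0.0/24
PersistentKeepalive = 25
```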


Sorry, I should have said "carbons and carbons-related QoL extensions".


Did you ever get carbons working properly? (As in, mobile and desktop clients of the same user both getting messages and marking them as read remotely between them.)


Especially true for when manufacturers stop supporting the console you invested in, stop making replacement parts, stop issuing security patches, etc. Having the ability to make, repair, and use copies of the games you purchase is critical to digital preservation.


There are also full suites like Rancher that will abstract away a lot of the complexity.


How has nobody in this thread said check_mk yet?

It's free and you host it yourself. It's built off of Nagios, compatible with Nagios plugins, and supports SNMP or agent-based checks. It can email, SMS, Slack, or Discord you when something breaks, and you can write your own custom checks in any language that can output to a local console… I could never imagine even looking for something else.
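
To give a sense of how low the bar for custom checks is, here's a rough sketch of a "local check" (the threshold and service name are made up; on Linux agents the local-check directory is usually /usr/lib/check_mk_agent/local/):

```bash
#!/bin/bash
# Local check output format: <status> <service_name> <perfdata> <summary>
# where status is 0=OK, 1=WARN, 2=CRIT, 3=UNKNOWN
count=$(ss -H -tun state established | wc -l)
if [ "$count" -gt 500 ]; then
    echo "1 Established_Connections count=$count $count established connections (high)"
else
    echo "0 Established_Connections count=$count $count established connections"
fi
```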


Has XMPP figured out carbons yet between multiple clients? Also, are there any good mobile clients?


If one doesn't exist, it would seem to be a fairly straightforward (if a smidge tedious) thing to implement. Ever thought about learning web development?


Are… hygienists famously known for not being smart? I don’t think that’s a thing lmao


A Raspberry Pi or Orange Pi could definitely run all of those things at very low power consumption.


Is that not the right answer, though? It can do timestamps or offsets for audio cropping without re-encoding. He's on Lemmy; the CLI can't be that scary.
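
Assuming the tool in question is ffmpeg (an assumption on my part), the no-re-encode crop is a one-liner; filenames and timestamps are examples:

```sh
# cut from 1:30 to 3:45 with stream copy, so nothing gets re-encoded
ffmpeg -i input.mp3 -ss 00:01:30 -to 00:03:45 -c copy output.mp3
```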


AppArmor will complain and block the NFS mount unless you disable AppArmor for the container. Then, in a lot of cases, the container won't be able to stop itself properly. At least that was my experience.
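
Assuming a Proxmox LXC container, the usual workarounds look like this (the container ID is an example):

```sh
# allow NFS mounts through the container's feature flags (privileged CTs):
pct set 101 --features mount=nfs
# ...or disable AppArmor for the container entirely, in /etc/pve/lxc/101.conf:
#   lxc.apparmor.profile: unconfined
```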


Nobody should run k8s/k3s without understanding how they work lol, that’s a recipe for lost data.


Proxmox uses SCSI for disk images, which are single-access only.

SMB would be quite a lot of overhead, and it doesn't natively support Linux filesystem permissions. You'll also run into issues with any older programs that rely on file locks to operate. NFS would be a much more appropriate choice. That said, AppArmor in container images will usually prevent you from mounting remote NFS shares without jumping through hoops (that are in your way for a reason). You'll be limited to doing that with virtual machines only, no OpenVZ/containerd.

Fun fact: it was literally the problems of sharing media storage between multiple workflows that got me to stop using virtual machines in Proxmox and start building custom Docker containers instead.
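
For example (paths and image names are placeholders), mounting the pool once on the host and bind-mounting it into each container that needs it looks like:

```sh
mount -t nfs nas.local:/export/media /mnt/media
docker run -d --name jellyfin -v /mnt/media:/media:ro jellyfin/jellyfin
docker run -d --name transcoder -v /mnt/media:/media example/transcoder
```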


There are things Proxmox definitely can't do, but chances are that even if you know what they are, they probably still don't apply to your workflows.

Most things are a tradeoff between extensibility and convenience. The next layer down is what I do: Debian with containerd + qemu-kvm + custom containers/VMs, automated by hand in a bunch of bash functions. I found Proxmox's upgrade process to be a little on the scuffed side, and I didn't like the way it handled domain timeouts. It seemed kind of inexcusable how long it would take to shut down sometimes, which is a real problem in a power event with a UPS. I also didn't like that updates to Proxmox core would clobber a lot of things under the hood that you might configure by hand.

The main thing is just to think about what you want to do with it, and whether you value the learning that comes with working under the hood at various tiers. My setup before this was Proxmox 6.0, and I was arguably doing just as much on that before as I am now. All I really have to show for going a level deeper is a better understanding of how things actually function and a skill set to apply at work. I will say, though, my backups are a lot smaller now that I'm only backing up scripts, Dockerfiles, and specific persistent data. Knowing exactly how everything works lets you be a lot more agile with backup and recovery confidence.
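
Purely illustrative (not my actual functions, and every name and path is a placeholder), but the general shape of those bash wrappers is something like:

```bash
rebuild() {
    local svc="$1"
    # Dockerfile and scripts live in git; only /srv/data needs backing up
    docker build -t "local/${svc}" "/srv/build/${svc}"
    docker rm -f "${svc}" 2>/dev/null || true
    docker run -d --name "${svc}" --restart unless-stopped \
        -v "/srv/data/${svc}:/data" "local/${svc}"
}
```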


Flash drives are notorious for spontaneous and ungraceful failures. At the very minimum, you want a proper hard drive or SSD. Generally, any reputable brand marketing a "NAS" drive is probably what you want. Nothing spectacularly fast, but designed for a lot of power-on hours.



I have a little 4-core / 8 GB RAM VM running my work instance that monitors over a thousand clients on 60-second check intervals; you may want to look into your config. I honestly have no idea what could cause 15 machines to cost that much computationally.


check_mk is what I use at home and at work. It's a fork of Nagios/Icinga, works with agents, Nagios plugins, or SNMP, and if you somehow can't find what you want to monitor, writing custom checks is as easy as writing a bash script.


Question if you know: does a Lemmy instance have to be publicly accessible to work? Like, if I make an instance on my homelab, can the instance "fetch" content and serve it faster locally? Could I reply to a post and have others see it? Etc.


Is I2P just a privacy guard to torrent over? Or does it actually help one find content as well?


How does one even use Usenet today? Sorry for the dumb question; I just don't know where to start.