
In my state (Vermont), the Secretary of State has an RSS feed that basically presents the results as an XML file. I’m using that to make some local results spreadsheets. Could be other states have similar things.


I’m not familiar with the Ben Eater series, but there are certainly a couple options to check out.

Mark Furneaux did a fantastic series on the workings of pfSense. It’s a little dated, but the core concepts are still sound and apply to networking generally.

There are also several sites that do in-depth networking topics with a focus on certifications. My favorite of the bunch is Viatto.

I also quite like The Network Berg, though his videos are specifically focused on Mikrotik.


The thing that immediately came to mind was mailpiler.org. It’s been on my list to stand up for a while, but I’ve never got around to it.


Awesome. I’m glad it helps. I’d be a little wary of using the same directory in multiple containers, though. File systems may or may not behave well with multiple machines writing to them. Not saying anything bad will happen, but do keep an eye out for issues.


I’m making some assumptions, namely that you’re using an unprivileged LXC container and the mount point is a bind mount.

Unprivileged LXC shift user ID numbers so that an escape won’t result in root access to the host. The root user (uid 0) in the container is actually uid 100000 from the perspective of the Proxmox host.

What I usually do is set ownership of my bind mounts to that high-numbered ID (so something like chown -R 100000:100000 /path/to/bind/mount) from Proxmox. Then the root user in the container will be able to set whatever permissions you need directly.
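
To make that concrete, here’s a minimal sketch; the container ID (101) and the paths are made up:

# On the Proxmox host: hand ownership of the bind mount source to the container's root
chown -R 100000:100000 /tank/shares

# Root inside container 101 (hypothetical ID) can now adjust permissions as usual
pct exec 101 -- chown -R www-data:www-data /mnt/shares

If the files should belong to a non-root user inside the container instead, add that uid to the offset (uid 1000 in the container shows up as 101000 on the host).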


Since you’re interested in this kind of DIY approach, I’d seriously consider thinking the whole process through and writing a simple script for this that runs from your desktop. That will make it trivial to do an automatic backup whenever you’re active on the network.

Instead of cron, look into systemd timers. With a monotonic setting like OnStartupSec=60 on a user timer, you can fire off your script after, say, one minute of being on your desktop; add OnUnitActiveSec= if you want it to repeat while you’re logged in.
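
A minimal pair of user units might look like this (the unit names, script path, and one-minute delay are all just examples):

# ~/.config/systemd/user/backup.service
[Unit]
Description=Pull backup from the server

[Service]
Type=oneshot
ExecStart=%h/bin/backup.sh

# ~/.config/systemd/user/backup.timer
[Unit]
Description=Run backup shortly after login

[Timer]
OnStartupSec=60

[Install]
WantedBy=timers.target

Enable it with systemctl --user enable --now backup.timer.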

Thinking through the script in pseudo code, it could look something like:

rsync -avzh $server_source $desktop_destination || curl -d "Backup failed" ntfy.sh/mytopic

This would pull the backup from your server to your desktop and, if the backup failed, use a service such as ntfy.sh to notify you of the problem.

I think that would pretty much take care of all of your requirements and if you ever decided to switch systems (like using zfs send/recv instead of rsync), it would be a matter of just altering that one script.
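
Fleshed out a bit, the script might look like this; the server, paths, and ntfy topic are all placeholders:

#!/usr/bin/env bash
set -euo pipefail

server_source="backupuser@homeserver:/srv/data/"    # hypothetical source
desktop_destination="/home/me/backups/homeserver/"  # hypothetical destination

if rsync -avzh "$server_source" "$desktop_destination"; then
    curl -d "Backup finished" ntfy.sh/mytopic
else
    curl -d "Backup failed" ntfy.sh/mytopic
fi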


I had never heard of this, but it sounds fascinating — thanks for sharing! Definitely going to try to set this up this weekend.


Dokuwiki (dokuwiki.org) is my usual go-to. It’s really simple and stores entries as plain text files, so you can get at them in a pinch even if the wiki is down. Here’s a life lesson: don’t host your documentation on the machine you’re going to be breaking! Learned that the hard way once or twice.

For reverse proxies, I’m a fan of HAProxy. It uses pretty straightforward config files and is incredibly robust.
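
For a taste of those config files, a bare-bones reverse proxy might look like this (the hostname and backend address are made up):

defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend http_in
    bind *:80
    acl is_wiki hdr(host) -i wiki.example.lan
    use_backend wiki if is_wiki

backend wiki
    server wiki1 192.168.1.50:80 check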


I can’t speak from direct experience here, but this is exactly the use case I’ve been meaning to spin up mailpiler for: https://www.mailpiler.org/. One of these days it will rise to the top of the priority list.


If you want an image, it doesn’t matter what the underlying file system is. You should be able to use a tool like Clonezilla and get a 1:1 copy. Depending on how you’ve set up partitioning, you could also use sgdisk to create the proper partitions on the new drive, use zfs send/recv for the data portion, and then install a boot loader. That’s probably the way I’d go in this instance.
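
Roughly, that second route could look like this (device names, pool names, and the snapshot name are hypothetical, and you’d still install the boot loader afterward):

# Copy the partition table from the old disk (sda) to the new one (sdb), then give it unique GUIDs
sgdisk --replicate=/dev/sdb /dev/sda
sgdisk --randomize-guids /dev/sdb

# Snapshot the pool recursively and replicate it to the pool on the new disk
zfs snapshot -r rpool@migrate
zfs send -R rpool@migrate | zfs recv -F newpool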


There was a recent conversation on the Practical ZFS discourse site about poor disk performance in Proxmox (https://discourse.practicalzfs.com/t/hard-drives-in-zfs-pool-constantly-seeking-every-second/1421/). Not sure if you’re seeing the same thing, but it could be that your VMs are running into the same too-small volblocksize that PVE uses when it makes zvols for VMs under ZFS.

If that’s the case, the solution is pretty easy. In your PVE datacenter view, go to storage and create a new ZFS storage pool. Point it to the same zpool/dataset as the one you’ve already got and set the block size to something like 32k or 64k. Once you’ve done that, move the VM’s disk to that new storage pool.
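
The resulting entry in /etc/pve/storage.cfg ends up looking something like this (the storage name and dataset are examples):

zfspool: tank-vm-32k
        pool tank/vmdata
        blocksize 32k
        content images,rootdir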

Like I said, not sure if you’re seeing the same issue, but it’s a simple thing to try.


My go-to for this is a plain Debian or Ubuntu container with Cockpit and the 45Drives file sharing plugin. It’s pretty straightforward and works pretty well.


You can set maintenance schedules in Uptime Kuma, and alerts won’t be sent out during those times. I use that for when my backup routines run each night. That seems like a decent cross-platform workaround.


I administer a handful of FreePBX systems that run pretty smoothly and are relatively friendly to use. Crosstalk Solutions on YouTube has a bunch of videos on the software if you want to get up to speed about how everything works.


Not sure how your stack works together, but sudo will let you run particular commands as a different user, and you can be pretty specific with the privileges. For example, you can have a script that’s only allowed to run docker compose -f /path/to/compose.yml restart containername as a user in the docker group. Maybe there’s some docker-specific approach, but this should work with traditional Unix tools and a little scripting.
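
A sketch of the sudoers side (the user names, compose path, and container name are all placeholders):

# /etc/sudoers.d/restart-app -- edit with visudo -f /etc/sudoers.d/restart-app
# deploy may run exactly this one command as dockeruser (a member of the docker group)
deploy ALL=(dockeruser) NOPASSWD: /usr/bin/docker compose -f /path/to/compose.yml restart containername

The script then just calls sudo -u dockeruser docker compose -f /path/to/compose.yml restart containername.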


Cool. That looks right. Have you checked that the bridge is set up properly and that the router doesn’t have anything silly going on for that subnet?

PVE’s network settings are in /etc/network/interfaces and that’s where you can see how the bridge is set up.
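
A typical bridge stanza looks something like this (the addresses and NIC name will differ on your system):

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0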

It might be beneficial to know more about your network. Is this the only subnet or do you have a bunch of VLANs? Can other devices on the subnet ping outbound? Have you looked at the firewall on PVE?


This really sounds like a problem with the default route. What’s the output of ip route? That should give us some hints about what’s up.


Depends on the seller. It’s pretty easy to drop the seller a line and ask for details (and if they’re unwilling to provide them that could be a red flag). I had two drives die during burn-in once. I try to pick reputable sellers and they were pretty quick to replace them.


I see a ton of price fluctuation in used drives. One way I’ve had some success is in seeking out drives sold in lots. Often I’ll also see SAS drives sell for less than a SATA drive of the same size.


My use of Mikrotik is somewhat limited, but in testing I’ve found routing between VLANs to be pretty performant. The key is to offload that routing to the hardware, which not all configurations allow. Check out The Network Berg’s YouTube channel and you should get a good idea.


I’ve not done much with podman, but my first thought is that port 53 is privileged and usually podman runs as a non-privileged user, right? Do you have some mechanism in place that would allow podman to use port 53?
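
If that’s the culprit, one common fix is lowering the unprivileged port floor; note this applies to the whole host, and the file name below is just a convention:

# Let unprivileged processes bind ports 53 and up
sudo sysctl net.ipv4.ip_unprivileged_port_start=53

# Persist it across reboots
echo "net.ipv4.ip_unprivileged_port_start=53" | sudo tee /etc/sysctl.d/99-unprivileged-dns.conf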


You’ve got some decent answers already, but since you’re getting interested in ZFS, I wanted to make sure you know about discourse.practicalzfs.com. It’s the successor to the ZFS subreddit and it’s a great place to get expert advice.


Is this urbackup-docker in a VM or an LXC? If the latter, you don’t need to add it in storage at all; you can bind mount the folder and use it directly. Here’s some info on that. If it’s in a VM and you want to use the directory directly (as in not just make a disk image inside the directory to pass as a block device) you’ll have to do some file sharing to the VM.


It sounds like you’ve got your solution already, but just in case someone stumbles on this later, I thought I’d mention autofs.

I’m coming to prefer it over fstab entries because it handles disconnections nicely and attempts to reconnect. Worth checking out for those who haven’t played with it.
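
For anyone who wants a starting point, a minimal NFS setup looks something like this (the server name and paths are placeholders):

# /etc/auto.master.d/media.autofs
/mnt/media  /etc/auto.media  --timeout=60

# /etc/auto.media
shows  -fstype=nfs4  fileserver:/export/shows

With that, /mnt/media/shows mounts on first access and unmounts after 60 seconds of inactivity.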


Could be. If that’s the case, it’s nothing I’ve noticed. I’ve got a 32GB VM and I’m running a bunch of LXC and docker containers on it without issue.


I’ve never heard anyone else mention them, but I’ve had really good luck with https://www.ssdnodes.com for the past several years. I don’t recall ever using their support, but I did have a policy question before buying when I first signed up and they were pretty quick to reply. I think I found them on LowEndBox.


I second mailcow. It’s what I’ve been using for years and it’s pretty great.

One thing I’ll add is before you take the plunge, make sure your VPS address isn’t on a block list somewhere. Pay a visit to mxtoolbox.com and you should find some resources there.


I’m a fan of the UniFi and Omada lines, but for your use case, I’d be looking for any AP that could run OpenWRT. That’s a super-powerful Linux-based router OS that meets all your needs and will present a nice web interface for each AP, no controller needed.

Check the project’s site for hardware compatibility, but I’ve had good luck with the GL.iNet travel routers and I bet some of their bigger models would do the trick for you.


I completely agree with this. Seems like a stellar use for either Cloudflare Tunnels or Tailscale’s similar Funnel feature.

Connect it only to the gramos deployment and that will be the only piece of your setup available publicly.


I have a couple older Minis in my Proxmox cluster. One’s a 2012 model and the other is a 2018. They both run great (and the 2018’s got 64GB of RAM and 10Gb Ethernet). I’m not sure I’d go looking for them for a homelab, but they’re great to repurpose.


A bind mount kind of shares a directory on the host with the container. To do it, unless something’s changed in the UI that I don’t remember, you have to edit the LXC config file and add something like:

mp0: /path/on/host,mp=/path/in/container

I usually make a sharing dataset and use that as the target.
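
You can also have Proxmox write that line for you instead of editing the file by hand (the container ID and paths here are examples):

pct set 101 -mp0 /tank/share,mp=/mnt/share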


From that prompt, type ls -l. That will show you a listing of the items in the /var/www/html directory and there will be columns for the user and group that own each file. It will most likely say www-data.


How about option 3: let Proxmox manage the storage and don’t set up anything that requires drive pass through.

TrueNAS and OMV are great, and I went that same VM NAS route when I first started setting things up many years ago. It’s totally robust and doable, but it’s also a pretty inefficient way to use storage.

Here’s how I’d do it in this situation: make your zpools in Proxmox, create datasets for the stuff you’ll use for VMs and the stuff you’ll use for file sharing, and then make an LXC container that runs Cockpit with 45Drives’ file sharing plugin. Bind mount the file-sharing dataset you made, and you’ll have the best of both worlds: incredibly flexible storage and a great UI for managing samba shares.
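
In command form, that layout might look like this (the pool, dataset names, and container ID are just examples):

zfs create tank/vmdata    # backs a PVE storage for VM disks
zfs create tank/shares    # file-sharing dataset for the Cockpit LXC
pct set 101 -mp0 /tank/shares,mp=/mnt/shares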


Not my reply, but I’ve also had mixed results playing with Netmaker. It’s a project I really want to like, but getting clients to work together is sometimes finicky. It’s a young project, so maybe the kinks will get worked out. I do like the admin UI.


If you’re looking for something more or less in the same footprint, I understand those cheap Wyze cameras can be used. There are alternative firmwares that can be flashed onto them to open up the RTSP stream to whatever self-hosted recorder you’d like. Haven’t tried it, but I’ve heard it mentioned on the Self Hosted podcast.


It’s been on my agenda for a while to set up a Matrix server with an iMessage bridge with the idea I could interact with all of my message protocols from one place. I haven’t gotten around to it, but it might be worth a look.


Who says you can only get one? Don’t let the perfect be the enemy of the good; just get one of the fun ones you already came up with and in the future if you need a different one get that too. That’s been my approach, anyway.


I’ve done something similar, though not with OpenWRT. There may be a decent way to do this on the firewall, but I ended up using the ACLs available from the Tailscale console.

I removed the default allow-all rule, made a group called admins that can access everything, and then added a set of routes that everyone on the tailnet can access.

I’ve only recently set this up, but in initial testing it’s working as hoped.
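
The policy ends up looking something like this (the email address and subnet are placeholders):

{
  "groups": {
    "group:admins": ["alice@example.com"],
  },
  "acls": [
    // admins can reach everything
    {"action": "accept", "src": ["group:admins"], "dst": ["*:*"]},
    // everyone else gets just the shared routes
    {"action": "accept", "src": ["autogroup:member"], "dst": ["192.168.10.0/24:*"]},
  ],
}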


Sorry to say I’ve never heard of Spaceship, but I wanted to make sure you know that Cloudflare now has a registrar service, so if you’re already using them for DNS, that might be worth a look for you.


This is the route I went as well. I have a couple MPU2016s at different sites. Like u/aodhsishaj indicated, they’re pretty cheap on the used market; just bear in mind that you’ll need a module for each machine. I think this makes sense if you have multiple machines, but I’m not so sure mine can power cycle connected machines (as in with ACPI controls). I can, however, reboot from the command line and interact with the BIOS, etc.