A PHP developer who, in his spare time, plays tabletop and video games; if the weather's nice I climb rocks, but mostly fall off indoor bouldering ones.

He/Him Blog Photos Keyoxide

  • 0 Posts
  • 22 Comments
Joined 1Y ago
Cake day: Nov 04, 2023


Hugo can be as simple as installing it, configuring a site with some YAML that points at a readily available theme, and writing your Markdown content.

It admittedly gets more complex if you want to write your own theme, though.

But I think that realistically applies to almost all static site generators.
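As a sketch of that install-and-write flow (the site name here is made up; Ananke is the example theme Hugo's own quick start uses):

```shell
# Scaffold a new site, pull in a pre-built theme, write a post.
hugo new site mysite
cd mysite
git init
git submodule add https://github.com/theNewDynamic/gohugo-theme-ananke themes/ananke
echo "theme = 'ananke'" >> hugo.toml
hugo new content posts/first-post.md
hugo server -D   # preview locally, including drafts
```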


It’s the multiple volumes that are throwing it.

You want to mount the drive as /media/HDD1:/media or something like that, and configure Radarr to use /media/movies and /media/downloads as its storage locations.

Hard links only work within the same filesystem, which those paths technically are, but with separate volume mounts the environment inside the container has no way of knowing that.
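A minimal sketch of that single-mount layout with `docker run` (the host path is just an example; the image and port are the common linuxserver.io defaults):

```shell
# One bind mount covering both movies and downloads, so hard links
# created inside the container stay on a single filesystem.
docker run -d \
  --name radarr \
  -p 7878:7878 \
  -v /media/HDD1:/media \
  lscr.io/linuxserver/radarr

# Then point Radarr's root folder at /media/movies and the download
# client at /media/downloads.
```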


I've not used Dockge, so it may be great, but at least in this case Portainer puts all the stack (docker-compose) files on disk. It's very easy to grab them if the app is unavailable.

I use a single Portainer service to manage 5 servers: 3 local and 2 VPS. I didn't have to relearn anything beyond my management tool of choice (Compose, Swarm, k8s, etc.).


Gonna go with… whoosh


We use them quite extensively. They work great.


Didn’t even think 4k80 was generally available yet?


There are a couple of caveats with it, but I don't think either is worse than your proposed flow.

  1. After putting things in an album you'll need to manually run the migration job to have Immich reorganise them into album folders.
  2. Images in multiple albums will only be migrated to the path of the newest album.

Immich does support folders?

https://immich.app/docs/administration/storage-template/

With this you can store your photos in whatever structure you want.
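For example (template variables as described on that docs page; I'm fairly sure of these names, but double-check against the page):

```
{{y}}/{{y}}-{{MM}}-{{dd}}/{{filename}}   →  2023/2023-11-04/IMG_0001.jpg
{{album}}/{{filename}}                   →  Holidays/IMG_0001.jpg
```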



Docker will only have exposed container ports if you told it to.

If you used -p 8080:80 (CLI) or - 8080:80 (docker-compose) then Docker will have dutifully NAT'd those ports through your firewall. Either don't do that for ports you don't want exposed, or, as @moonpiedumplings@programming.dev says below, ensure the port is only mapped to localhost (or an otherwise non-public IP).
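Concretely (nginx here is just a stand-in image):

```shell
# Published on all interfaces: NAT'd straight through the firewall.
docker run -d -p 8080:80 nginx

# Bound to loopback only: reachable from the host (e.g. behind a
# reverse proxy running there) but not from the internet.
docker run -d -p 127.0.0.1:8080:80 nginx
```

The compose equivalent of the second form is `- "127.0.0.1:8080:80"` under `ports:`.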


Documentation people don’t read

Too bad people don’t read that advice

Sure, I get it, this stuff should be accessible for all. Easy to use with sane defaults and all that. But at the end of the day anyone wanting to use this stuff is exposing potential/actual vulnerabilities to the internet (via the OS, the software stack, the configuration, … ad nauseam), and the management and ultimate responsibility for that falls on their shoulders.

If they’re not doing the absolute minimum of R’ingTFM for something as complex as Docker then what else has been missed?

People expect that, like most other services, Docker binds to ports/addresses behind the firewall

Unless you tell it otherwise that’s exactly what it does. If you don’t bind ports good luck accessing your NAT’d 172.17.0.x:3001 service from the internet. Podman has the exact same functionality.


But… You literally have ports rules in there. Rules that expose ports.

You don't get to grumble that Docker is doing something when you're telling it to do it.

Docker's manipulation of nftables is pretty well defined in their documentation. If you dig deep, everything is tagged and NAT'd through to the Docker internal networks.
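If you want to see those rules on your own host (commands need root; the chain names are the ones Docker's firewall documentation describes):

```shell
# Per-container DNAT rules for published ports:
sudo iptables -t nat -L DOCKER -n -v

# DOCKER-USER is the chain Docker reserves for your own filtering
# rules; it's evaluated before Docker's chains and never flushed.
sudo iptables -L DOCKER-USER -n -v

# On an nftables-native host the same rules show up here:
sudo nft list ruleset
```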

As to the usage of the docker socket that is widely advised against unless you really know what you’re doing.


Did someone manage to grab the Flatpak in a USB-installable format? It's no longer on Flathub, boo.


He only wins internet clout if you know who he is. I didn't; he was just that guy in the meme.

Now he's been named, at least two people who didn't know of his existence now know.

You've just Barbra Streisand'd this guy.


So to be clear, you want traffic coming out of your VPS to have a source address that is your home IP?

No, that's not how I read it at all. He wants his VPS to act as a NAT router for email, routing traffic through a WireGuard tunnel to the mail server on his home network. His mail server would act as if it were port-forwarded through his home router, except the public IP won't be his home IP; it'll be the VPS's.
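A rough sketch of that on the VPS (the interface names and the 10.0.0.2 peer address are made up; adjust for your tunnel):

```shell
# Allow the VPS to forward packets at all.
sysctl -w net.ipv4.ip_forward=1

# DNAT inbound SMTP on the public interface into the tunnel.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 25 \
  -j DNAT --to-destination 10.0.0.2:25

# Masquerade so the home server's replies route back via the VPS.
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
```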


Flash drive hidden under the carpet and connected via a USB extension, holding the decryption keys - threat model is a robber making off with the hard drives and gear, where the data just needs to be useless or inaccessible to others.

This is a pretty clever solution. Most thieves won't follow a cable that, for all intents and purposes, looks like a network cable, especially if it disappears into a wall plate or something.
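For anyone wanting to replicate it with LUKS, a sketch (the device and mount point are hypothetical):

```shell
# Generate a random keyfile on the hidden USB stick.
dd if=/dev/urandom of=/mnt/usbkey/hdd.key bs=512 count=8
chmod 0400 /mnt/usbkey/hdd.key

# Enrol it as an additional key for the encrypted drive.
cryptsetup luksAddKey /dev/sda2 /mnt/usbkey/hdd.key

# /etc/crypttab entry to unlock at boot with the keyfile:
#   data  /dev/sda2  /mnt/usbkey/hdd.key  luks
```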


If you’ve got a good network path NFS mounts work great. Don’t forget to also back up your compose files. Then bringing a machine back up is just a case of running them.
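Something like this (the NAS address and paths are examples):

```shell
# /etc/fstab entry for the share:
#   192.168.1.10:/export/media  /mnt/media  nfs  defaults,_netdev  0  0
sudo mount -a

# Keep a copy of the compose files alongside your other backups.
rsync -av ~/stacks/ /mnt/media/backups/stacks/

# Rebuilding a machine is then just restoring and running them:
docker compose -f ~/stacks/myapp/docker-compose.yml up -d
```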



It seems most of the torrents with a poor seeder count are in the 1.5TB+ range, and I simply don't have the storage for that. Almost everything in the 0-300GB range is pretty well covered.


Reads nice but your docs are 404’ing so I can’t investigate much :D

EDIT: Found it. You've got a '.com' instead of a '.io'.


Mastodon doesn't just use storage for local image uploads. It pulls, thumbnails, and saves images from any incoming posts, including the thumbnails you might see on website links (pulled from the OpenGraph data most websites implement).

It’s possible to set a pretty short timeout for that data though.
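On a stock install that cleanup is done with tootctl (the day counts here are just examples):

```shell
# Run as the mastodon user from the Mastodon directory (path may differ).
RAILS_ENV=production bin/tootctl media remove --days 7
RAILS_ENV=production bin/tootctl preview_cards remove --days 14
```

Cron those and cached remote media stops growing without bound.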


I looked into Proxmox briefly, but since 99% of my workload was going to be Docker containers and I'd need just a single VM for them, it made no sense to run it.

So that’s what I did. Ubuntu + Portainer and a shed load of stacks.