What about simply shelling out to ripgrep?
These devices have been recommended in the past, and it looks like they can run OpenWRT
https://www.amazon.com/GL-iNet-GL-SFT1200-Secure-Travel-Router/dp/B09N72FMH5
Bash and a dedicated user should work with very little effort. Basically, create a user on your VM (maybe called git), set up passwordless (and keyless) SSH for this user but force the command to be the git-shell. Next, a simple bash script which iterates over the directories in this user's home directory and runs git fetch --all. Set cron to run this script periodically (every hour?). To add a new repository, just SSH as your regular user and su to the git user, then clone the new repository into the home directory. To change the upstream, do the same but simply update the remote.
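A minimal sketch of what that update script might look like (the /home/git path, the cron line, and the bare-clone check are assumptions, not a finished implementation):

#!/bin/bash
# Hypothetical mirror-update script; run from cron, e.g.:
#   0 * * * * /home/git/update-mirrors.sh
set -eu
for repo in /home/git/*/; do
    # Handle both normal and bare clones
    if [ -d "$repo/.git" ] || [ -f "$repo/HEAD" ]; then
        echo "Updating $repo"
        git -C "$repo" fetch --all --prune
    fi
done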
This could probably be packaged as a dockerfile pretty easily, if you don’t mind either needing to specify the port, or losing the machine’s port 22.
EDIT: I found this after posting, might be the easiest way to serve the repositories, in combination with the update script. There’s a bunch more info in the Git Book too, the next section covers setting up HTTP…
I would probably use ntfy.sh for this purpose. It doesn’t quite meet all your requirements, but you could use a random channel name and get some amount of security…
You can self host it, or use the hosted version. (I know it’s technically not chat, but it works on a series of messages, it just happens to call them notifications.)
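A minimal sketch of how that looks with curl (the topic name is made up; anyone who guesses it can read or post, hence only "some amount" of security):

# Post a message to the topic:
curl -d "Hello from the server" https://ntfy.sh/some-random-channel-name
# Follow the stream of messages (one JSON object per message):
curl -s https://ntfy.sh/some-random-channel-name/json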
Yes, I have. I should probably test them again though, as it’s been a while, and Immich at least has had many potentially significant changes.
LVM snapshots are virtually instant, and there is no merge operation, so deleting the snapshot is also virtually instant. The way it works is by creating a new space where the differences from the main volume are written, so each time the application writes to the main volume the old block will be copied to the snapshot first. This does mean that disk performance will be somewhat lower than without snapshots, however I’ve not really noticed any practical implications. (I believe LVM typically places my snapshots on a different physical disk from the one where the main volume lives, though.)
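For illustration (the volume group and LV names here are just examples):

lvcreate --snapshot --size 10G --name photos-snapshot /dev/data/photos   # effectively instant
# ...back up from /dev/data/photos-snapshot...
lvremove --yes /dev/data/photos-snapshot                                 # also instant, no merge step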
You can find my backup script here.
For no. 1, that shouldn’t be dind; the container would be controlling the host Docker, wouldn’t it?
If so, keep in mind that this is the same as giving root SSH access to the host machine.
As far as security goes, anything that allows GitHub to cause your server to download (pull) and use a set of arbitrary Docker images with arbitrary configuration is remote code execution. It doesn’t really matter what you do to secure access to the machine, if someone compromises your GitHub account.
I would probably set up SSH with a key dedicated to GitHub, specifically for deploying. If SSH is configured to only allow keys for access, it’s not much of a security risk to open it up to the internet. I would then configure that key to only be able to run a single command, which I would make a very simple bash script which runs git fetch, and then git verify-commit origin/main (or whatever branch you deploy), before checking out the latest commit on that branch.
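As a rough sketch (the paths, key type, and compose step are assumptions, not a finished implementation), the restriction lives in authorized_keys on the server:

command="/usr/local/bin/deploy.sh",no-port-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... github-deploy

and the only script that key can run would be something like:

#!/bin/bash
set -eu
cd /srv/app                         # hypothetical checkout location
git fetch origin
git verify-commit origin/main       # refuse anything that isn't signed
git checkout --detach origin/main   # only advance once verification passes
docker compose up -d                # or whatever your deploy step is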
You can sign commits fairly easily using SSH keys now, which combined with the above allows you to store your data on GitHub without having to trust them to have RCE on your host.
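Roughly, the signing side looks like this (the key path is just an example; verification on the machine running verify-commit also needs an allowed-signers file):

git config --global gpg.format ssh
git config --global user.signingkey ~/.ssh/id_ed25519.pub
git config --global commit.gpgsign true
# On the server, tell git which keys are trusted for verify-commit:
echo "you@example.com $(cat ~/.ssh/id_ed25519.pub)" > ~/.ssh/allowed_signers
git config --global gpg.ssh.allowedSignersFile ~/.ssh/allowed_signers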
My recommendation would be to utilize LVM. Set up a PV on the new drive and create an LV filling the drive (with an FS), then move all the data off of one drive onto this new drive, reformat the first old drive as a second PV in the volume group, and expand the size of the LV. Repeat the process for the second old drive. Then, instead of extending the LV, set the parity option on the LV to 1. You can add further disks, increasing the LV size or adding parity or mirroring in the future, as needed. This also gives you the advantage that you can (once you have some free space) create another LV that has different mirroring or parity requirements.
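Sketched out with hypothetical device and VG/LV names (sdb is the new drive, sda an old one; adjust for your system):

pvcreate /dev/sdb                      # new drive becomes a PV
vgcreate data /dev/sdb                 # new volume group
lvcreate -l 100%FREE -n storage data   # LV filling the drive
mkfs.ext4 /dev/data/storage            # put a filesystem on it
# ...copy everything off the first old drive onto /dev/data/storage...
pvcreate /dev/sda                      # reformat the first old drive as a PV
vgextend data /dev/sda                 # add it to the volume group
lvextend -r -l +100%FREE data/storage  # grow the LV (and FS) into the new space
# Repeat for the second old drive, then add parity instead of growing, e.g.
# lvconvert --type raid5 data/storage   (may need intermediate steps; see lvmraid(7))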
See my other reply here.
See my other reply here.
See my other reply here.
I followed the guide found here, however with a few modifications.
Notably, I did not encrypt the borg repository, and heavily modified the backup script.
#!/bin/bash -ue
# The udev rule is not terribly accurate and may trigger our service before
# the kernel has finished probing partitions. Sleep for a bit to ensure
# the kernel is done.
#
# This can be avoided by using a more precise udev rule, e.g. matching
# a specific hardware path and partition.
sleep 5
#
# Script configuration
#
# The backup partition is mounted there
MOUNTPOINT=/mnt/external
# This is the location of the Borg repository
TARGET=$MOUNTPOINT/backups/backups.borg
# Archive name schema
DATE=$(date '+%Y-%m-%d-%H-%M-%S')-$(hostname)
# This is the file that will later contain UUIDs of registered backup drives
DISKS=/etc/backups/backup.disk
# Find whether the connected block device is a backup drive
for uuid in $(lsblk --noheadings --list --output uuid)
do
    if grep --quiet --fixed-strings $uuid $DISKS; then
        break
    fi
    uuid=
done
if [ ! $uuid ]; then
    echo "No backup disk found, exiting"
    exit 0
fi
echo "Disk $uuid is a backup disk"
partition_path=/dev/disk/by-uuid/$uuid
# Mount file system if not already done. This assumes that if something is already
# mounted at $MOUNTPOINT, it is the backup drive. It won't find the drive if
# it was mounted somewhere else.
(mount | grep $MOUNTPOINT) || mount $partition_path $MOUNTPOINT
drive=$(lsblk --inverse --noheadings --list --paths --output name $partition_path | head --lines 1)
echo "Drive path: $drive"
# Log Borg version
borg --version
echo "Starting backup for $DATE"
# Make sure all data is written before creating the snapshot
sync
# Options for borg create
BORG_OPTS="--stats --one-file-system --compression lz4 --checkpoint-interval 86400"
# No one can answer if Borg asks these questions, it is better to just fail quickly
# instead of hanging.
export BORG_RELOCATED_REPO_ACCESS_IS_OK=no
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=no
#
# Create backups
#
function backup () {
    local DISK="$1"
    local LABEL="$2"
    shift 2
    local SNAPSHOT="$DISK-snapshot"
    local SNAPSHOT_DIR="/mnt/snapshot/$DISK"
    local DIRS=""
    while (( "$#" )); do
        DIRS="$DIRS $SNAPSHOT_DIR/$1"
        shift
    done
    # Make and mount the snapshot volume
    mkdir -p $SNAPSHOT_DIR
    lvcreate --size 50G --snapshot --name $SNAPSHOT /dev/data/$DISK
    mount /dev/data/$SNAPSHOT $SNAPSHOT_DIR
    # Create the backup
    borg create $BORG_OPTS $TARGET::$DATE-$DISK $DIRS
    # Check the snapshot usage before removing it
    lvs
    umount $SNAPSHOT_DIR
    lvremove --yes /dev/data/$SNAPSHOT
}
# usage: backup <lvm volume> <label> <list of folders to backup>
backup photos immich immich
# Other backups listed here
echo "Completed backup for $DATE"
# Just to be completely paranoid
sync
if [ -f /etc/backups/autoeject ]; then
    umount $MOUNTPOINT
    udisksctl power-off -b $drive
fi
# Send a notification
curl -H 'Title: Backup Complete' -d "Server backup for $DATE finished" 'http://10.30.0.1:28080/backups'
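The script is started by a udev rule plus a oneshot systemd service, as in the guide; roughly like this (the match keys, unit name, and script path are approximations rather than something to copy verbatim):

# /etc/udev/rules.d/99-automatic-backup.rules
ACTION=="add", SUBSYSTEM=="block", ENV{ID_FS_USAGE}=="filesystem", TAG+="systemd", ENV{SYSTEMD_WANTS}="automatic-backup.service"

# /etc/systemd/system/automatic-backup.service
[Unit]
Description=Automatic backup to external drive

[Service]
Type=oneshot
ExecStart=/etc/backups/run.sh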
Most of my services are stored on individual LVM volumes, all mounted under /mnt, so immich is completely self-contained under /mnt/photos/immich/. The last line of my script sends a notification to my phone using ntfy.
Here are a few more details of my setup:
Components:
- The server (running Immich and WireGuard)
- My home router
- A dynamic DNS name
- A custom domain (custom.domain)

The home router has WireGuard port forwarded to the server, with no re-mapping (I’m using the default 51820). It’s also providing DHCP services to my home network, using the 192.168.1.0/24 network.
The server is running the dynamic DNS client (keeping the dynamic domain name updated to my public IP), and I have a CNAME record on vpn.custom.domain pointing to the dynamic DNS name (which is an awful random string of characters). I also have server.custom.domain with an A record pointing to 10.30.0.1. All my DNS records are in public DNS (so no need to change the DNS settings on the computer or phone, or use DNS overrides with WireGuard).
Immich config:
version: "3.8"
services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:release
    entrypoint: ["/bin/sh", "./start-server.sh"]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
    env_file:
      - .env
    ports:
      - target: 3001
        published: 2283
        host_ip: 10.30.0.1
    depends_on:
      - redis
      - database
    restart: always
    networks:
      - immich
WireGuard is configured using wg-quick (/etc/wireguard/wg0.conf):
[Interface]
Address = 10.30.0.1/16
PrivateKey = <server-private-key>
ListenPort = 51820
[Peer]
PublicKey = <phone-public-key>
AllowedIPs = 10.30.0.12/32
[Peer]
PublicKey = <laptop-public-key>
AllowedIPs = 10.30.0.11/32
Start WireGuard with systemctl enable --now wg-quick@wg0.
Phone WireGuard configuration (iOS):
[Interface]
Name = vpn.custom.domain
Private Key = <phone private key>
Public Key = <phone public key>
Addresses = 10.30.0.12/32
Listen port = <blank>
MTU = <blank>
DNS servers = <blank>
[Peer]
Public Key = <server public key>
Pre-shared key = <blank>
Endpoint = vpn.custom.domain:51820
Allowed IPs = 10.30.0.0/16
Persistent Keepalive = 25
[On Demand Activation]
Cellular = On
Wi-Fi = On
SSIDs = Any SSID
This connection is then left always enabled, and comes on whenever my phone has any kind of network connection.
My laptop (running Linux) is also using wg-quick (/etc/wireguard/wg0.conf):
[Interface]
Address = 10.30.0.14
PrivateKey = <laptop private key>
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.custom.domain:51820
AllowedIPs = 10.30.0.0/16
My wife’s Windows laptop is configured using the official WireGuard Windows app, with similar settings.
No matter where we are (at home, on a WiFi hotspot, or using cellular data) we access Immich over the VPN: http://server.custom.domain:2283/.
Let me know if you have any further questions.
If you want to change the name of the directory without breaking your volumes (or running services, etc.), you can specify the name of the project inside the compose file.
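Something like this, using the top-level name key from the Compose spec (the project and service names here are placeholders):

name: my-project   # volumes, networks, and containers get prefixed with this instead of the directory name
services:
  app:
    image: nginx:alpine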
TiddlyWiki might be a good option. Technically it’s a wiki, but it is a single HTML page with all functionality built in JavaScript. You could host it on GH Pages, though you wouldn’t be able to use its save feature there (you would have to save to your local machine and then deploy a new version). It stores text in little (or large) cards which can be given a title, tags, and other metadata, and it provides a full search system.
The peer range shouldn’t be your LAN, it should be a new network range, just for WireGuard. Make sure that the server running Immich is part of the WireGuard network.
My phone and laptop see three networks: the internet, the LAN (192.168.1.0/24, typically) and WireGuard (10.30.0.0/16). I can anonymize and share my WireGuard config if that would help.
I use WireGuard, it’s set to on-demand for any network or cellular data (so effectively always on), no DNS records (I just use public DNS providing private range IP addresses). It doesn’t make any sort of dent in my battery life. Also, only the WireGuard network’s traffic is routed through it, so if my server is down the phone/laptop’s internet continues to work. I borrowed my wife’s phone and laptop for 15 minutes to set it up, and now no one has to think about it.
If you’re seeing an OOM killer message, note that it doesn’t necessarily kill the problem process; by default the kernel hands out memory upon request, regardless of whether it has RAM to back the allocation. When a process then writes to the memory (at some later time) and the kernel determines that there is no physical RAM to store that write, it then invokes the OOM killer. This then selects a process and kills it. MySQL (and MariaDB) use large quantities of RAM for cache, and by default the kernel lies about how much is available, so they often end up using more than the system can handle.
If you have many databases in containers, set memory limits for those containers; that should make all the databases play nicer together. Additionally, you may want to disable overcommit in the kernel. This will cause the kernel to return out-of-memory errors to a process attempting to allocate RAM and stop lying about free RAM to processes that ask, often greatly increasing stability.
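For example (the limits and values here are illustrative, not tuned recommendations):

# Cap a database container's memory (compose: mem_limit, or on the CLI):
docker run --memory 512m --memory-swap 512m mariadb
# Disable overcommit so allocations fail immediately instead of invoking the OOM killer later:
sysctl vm.overcommit_memory=2
sysctl vm.overcommit_ratio=80   # percent of physical RAM (plus swap) that may be committed
# Make it persistent with a file in /etc/sysctl.d/, e.g. 99-overcommit.conf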
I back up to an external hard disk that I keep in a fireproof and water-resistant safe at home. Each service has its own LVM volume, which I snapshot and then back up the snapshots with borg, all into one repository. The backup is triggered by a udev rule so it happens automatically when I plug the drive in; the backup script uses ntfy.sh (running locally) to let me know when it is finished so I can put the drive back in the safe. I can share the script later, if anyone is interested.
It’s been added recently, in the form of External Libraries.
Something that LVM supports but ZFS and BTRFS don’t is the ability to reduce your storage. (That is, to empty and remove a drive from the array without having to completely destroy the storage array.) As a home user without sufficient storage to have complete duplicates of everything, I find this an important feature.
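Sketched out (the VG name and device are placeholders):

pvmove /dev/sdb          # migrate all extents off the drive (needs enough free space elsewhere)
vgreduce data /dev/sdb   # drop it from the volume group
pvremove /dev/sdb        # clear the PV label; the drive can now be removed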
You mean like this?