Initially some local registrar, then I transferred to GoDaddy, then to OVH (since GD is shit). One is still at Cloudflare (I tried to move there, but they don’t support all TLDs that I use, like “.eu”).
For DNS I use Cloudflare. With their proxy enabled they also provide a layer of privacy, i.e. your server IPs don’t get exposed directly.
I recently started using https://github.com/immich-app/immich
It’s basically a self-hosted Google Photos and it’s working really well. You can just mount your heap of photos into the container, declare it as an external library and you’re good to go (see the sketch below).
After a few hours/days of running face recognition, extracting metadata, generating thumbnails and possibly transcoding videos, you’ll have a very responsive and easily searchable timeline of ALL your pictures and videos.
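A minimal sketch of what I mean, assuming the official ghcr.io image (the real compose file from the Immich docs has more services like the database, Redis and the machine-learning container, and the host path here is just an example):

```
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:release
    volumes:
      - ./upload:/usr/src/app/upload   # Immich's own storage for uploads/thumbnails
      - /mnt/photos:/mnt/photos:ro     # your existing heap of photos, mounted read-only
```

After that you add /mnt/photos as an external library in the admin UI and let it scan.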
I mean, they kill services willy-nilly. Sure, Gmail will probably survive, but the rest drove me away (Reader, Music, …).
Regarding your Android purchases: At the time of my move I went through the list of apps I had bought and tallied up the ones I still used. It came to less than $50 in repurchases.
Don’t let those old purchases hold you back. Cut this old baggage loose.
Do NOT self-host email! In the long run you’ll forget a security patch, someone will breach your server and blast out spam, and your domain and server will end up on every blacklist imaginable.
Buy a domain, DON’T use GoDaddy, they are bastards. I’d suggest OVH for European domains or Cloudflare for international ones.
After you have your domain, register with “Microsoft 365” or “Google Workspace” (I’d avoid Google, they don’t have a stable offering) or any other email provider that allows custom domains.
Follow their instructions on how to connect your domain to their service (a few MX and TXT records usually suffice) and you’re done.
After that, you can spin up a VPS, try out new stuff and connect it to your domain as well (A and CNAME records).
Media Server? No content backup at all.
If you lose everything, just download new stuff you want to watch, or redownload a few TV series/movies.
Music? There are streaming services.
Only back up configurations and maybe application data, so that a reinstall will be easy. Those few kB/MB could sit anywhere. I’m using GitLab for this purpose.
Edit: Images! If you have your photos on there, back them up! They can’t be replaced!
Do you run your PiHole on top of Docker? There’s an issue with Docker and Raspberry Pis which makes the network crap out periodically. So if your PiHole becomes unavailable until you restart your Pi, it might be this:
https://github.com/raspberrypi/linux/issues/4092/
The solution is to add “denyinterfaces veth*” to /etc/dhcpcd.conf.
The thing is, it’s not really “documentation”, just a collection of configs.
I have organized my containers in groups like you did (“arrs”, web server, bitwarden, …) and then made a repository for each group.
Each repository contains at least a compose file and a GitLab CI file where a simple pipeline is defined, basically “compose pull” and “compose up”. There are also more complicated repositories where I build my own image.
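For reference, a minimal sketch of such a .gitlab-ci.yml (the job name is illustrative, and it assumes a runner that can talk to Docker on the target host):

```
deploy:
  stage: deploy
  script:
    - docker compose pull    # fetch newer images
    - docker compose up -d   # recreate only the containers whose config or image changed
```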
The whole Git management is really transparent, because GitLab lets you edit your files directly on the platform in a hosted VS Code environment (the Web IDE), and when you’re satisfied you just press commit. I don’t do weird stuff with branches, pushing and pulling at all. No need for local copies of the repository.
If you want to full-text search all your repos, I can recommend a “Sourcegraph” container, but use version 4.4.2, because starting with 4.5.0 they have limited the number of private repositories to 1. But this is something for later, when your infrastructure has grown.
I’m defining my service containers via GitLab and I deploy them via tagged and dockerized GitLab Runners.
If something fails, I change the runner tags for a service and it will be deployed on a different machine.
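Roughly like this — the tag decides which machine’s runner picks up the job (tag names are made up here):

```
deploy:
  tags:
    - docker-host-2   # was "docker-host-1" before the failover
  script:
    - docker compose pull
    - docker compose up -d
```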
In case of a critical failure, I just need to set up a Debian box, install Docker, load and run the GitLab Runner image, maybe change some pipelines and rerun the deployment jobs.
Some things aren’t documented well yet, like the VPN config…
Ah yes, my router is able to access GitLab as well and pull the list of static routes etc. from it.
I learned a lot from the tutorials of https://ibracorp.io/
You’ll find rather advanced things there, but they are easy to follow and well explained.
Weird! It was late. This is what I use: https://blog.zazu.berlin/software/a-almost-perfect-rsync-over-ssh-backup-script.html
I had UrBackup running for 6+ months. It wasn’t reliably backing things up, configuring it to be accessible via the Internet is almost impossible, adding clients is a hassle and the config isn’t very user-friendly.
Furthermore I got the impression that its backups aren’t reliable; restoring files without UrBackup might be impossible.
That’s why I’m now back to an incremental rsync backup script. It’s reliable, you can restore things just by copying them back via SSH, and it uses a lot less space (!!!) than the UrBackup backups.
Nah, use MeshCentral 2! It’s free, you can self-host it, and using a little agent you can connect to any machine via console or even a desktop interface without bothering with VNC etc.
I see. Sure, that’s a valid way to manage networking. I personally don’t like to do this manually anymore, just like I don’t drive stick shift anymore.
If you want to expose a service to the WWW, I’d recommend using a reverse proxy. E.g. I use Traefik 2; it gets the config it needs automatically from 5-6 labels per container and I don’t need to bother with IPs, certificates, NAT and what have you. It just creates the virtual hosts, procures a Let’s Encrypt certificate and directs the traffic to the target container completely on its own.
Spinning up a container and immediately trying it out on its own subdomain with a valid SSL certificate has never been easier. (I have a wildcard “*” DNS entry pointing to my Traefik server.)
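For illustration, a sketch of those labels on a throwaway container (router name, hostname and the “letsencrypt” certresolver are placeholders that have to match your Traefik setup):

```
services:
  whoami:
    image: traefik/whoami
    labels:
      - traefik.enable=true
      - traefik.http.routers.whoami.rule=Host(`whoami.example.com`)
      - traefik.http.routers.whoami.entrypoints=websecure
      - traefik.http.routers.whoami.tls.certresolver=letsencrypt
      - traefik.http.services.whoami.loadbalancer.server.port=80
```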
You could also try installing cloudflared and creating a Cloudflare Tunnel. That way you don’t even have to forward any ports on your router.
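A rough sketch of the connector container (the token placeholder is what the Cloudflare dashboard gives you when you create the tunnel):

```
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
    restart: unless-stopped
```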
Just some tips, if you want to explore new things :)
Wow, all of the author’s ideas are bad.
Font sizes in pixels are a bad idea, because people have different screen resolutions. For some, your text might be legible, and for others it will be text for ants. You don’t want that. Also, a piece of paper doesn’t have pixels, so you’ll get different results on printed pages. You don’t want that either.
And the line-height thing? No. Just no. Just because I have one larger letter somewhere in my text, I don’t want the line spacing of the whole paragraph/text screwed up.
Text size in px is objectively a bad idea.
Ahhhh… alright, I misunderstood. So either depends_on is your friend, or you could implement a rather dirty solution: write a little script for the NPM healthcheck that also checks if searxng is online. Then use autoheal.
But that would be my last resort, and only if searxng really depends that closely on the NPM container.
This will do exactly what you want: you have to configure a healthcheck for searxng that detects when it’s down. Maybe something with curl or whatever.
As soon as it’s down, autoheal will restart the container. It doesn’t matter why it’s down (update, dependency not running, …), autoheal will just restart it.
Either use depends_on, or think of a healthcheck and use willfarrell’s simple docker-autoheal container, which restarts containers when they become unhealthy (sketched below). https://github.com/willfarrell/docker-autoheal
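Something along these lines — the searxng image/port and the availability of wget inside it are assumptions, so adjust the healthcheck command to whatever works in your image:

```
services:
  searxng:
    image: searxng/searxng
    labels:
      - autoheal=true                      # mark this container for autoheal
    healthcheck:
      test: ["CMD-SHELL", "wget -q --spider http://localhost:8080/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3

  autoheal:
    image: willfarrell/autoheal
    restart: always
    environment:
      - AUTOHEAL_CONTAINER_LABEL=autoheal  # restart containers carrying this label
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
```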
Have a look at GitLab.
I’m doing the same thing you are doing, but automatically. I have a repo per app and a few GitLab Runners connected on my Raspis/servers. Every time I push a change, the shell runner runs the commands configured for the pipeline. I don’t have to lift a finger after changes.