I know that for data storage the best bet is a NAS and RAID1 or something in that vein, but what about all the docker containers you are running, carefully configured services on your rpi, installed *arr services on your PC, etc.?

Do you have a simple way to automate backups and re-installs of these as well or are you just resigned to having to eventually reconfigure them all when the SD card fails, your OS needs a reinstall or the disk dies?

Monkey With A Shell

Routine backups of the VMs and the RAID disks for the hypervisor running them. If the box hosting the backups went screwy there’d be a problem, but with something like 20 TB of space used, off-box copies are a bit cumbersome. To that end I just manually copy the irreplaceable stuff to separate external storage and wish the movies and such good luck.

It ends up in a situation where, to lose the VMs entirely, I’d have to lose both disks on the hypervisor and, on top of that, several disks on the NAS (12 disks in a ZFS pool, with each vdev being a mirror pair) or have the whole pool get corrupted. Depending on the day I might still lose up to a week of VM state, though, since they only do a full copy once a week.
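
For reference, a pool laid out like that, as striped mirror pairs, is built roughly as follows (a sketch only; pool and device names are placeholders):

    # Each "mirror" group becomes one vdev; data is striped across the vdevs.
    zpool create tank \
      mirror /dev/sda /dev/sdb \
      mirror /dev/sdc /dev/sdd \
      mirror /dev/sde /dev/sdf
    zpool status tank    # confirm the mirror-pair layout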

@ehrenschwan@feddit.de

I use duplicati for docker containers. You just host it in docker and attach all the persistent volumes from the other containers to it, then you can set up backup jobs for each.
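
A minimal sketch of that pattern (the image tag, port and volume names here are assumptions, not taken from the comment):

    # Mount the other containers' volumes read-only into Duplicati,
    # then point a backup job at /source/<name> in its web UI.
    docker run -d --name duplicati \
      -p 8200:8200 \
      -v duplicati-config:/data \
      -v nextcloud-data:/source/nextcloud:ro \
      -v jellyfin-config:/source/jellyfin:ro \
      duplicati/duplicati:latest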

@Appoxo@lemmy.dbzer0.com

My whole environment is in docker-compose, which is backed up to GitHub.
My config/system drive is backed up with Veeam to one drive.
That backup is then copied with rsync to another drive every week.

But: I only have a 1-drive NAS because I don’t have the space for a proper PC with drive caddies, and a commercial NAS (Synology, QNAP) isn’t my jam because I’d need a transcoding-capable GPU and those models are overpriced for what I need.
And with plain Debian I get system updates for the life of each distro release and learn Linux along the way.
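
The weekly rsync step can be as small as a cron entry along these lines (paths are made up for illustration):

    # /etc/cron.d/backup-mirror (hypothetical): mirror the backup drive
    # to a second drive every Sunday at 03:00.
    0 3 * * 0  root  rsync -aAX --delete /mnt/backup-primary/ /mnt/backup-secondary/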

I rsync my root filesystem and everything under it to a NAS, which will hopefully save my data. I wrote some scripts by hand to do that.
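
Presumably something along these lines (the destination and exclude list are assumptions; pseudo-filesystems have to be skipped when copying /):

    # Copy the root filesystem to the NAS, skipping virtual and removable mounts.
    rsync -aAXH --delete \
      --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"} \
      / user@nas:/backups/$(hostname)/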

I think the next best thing to do is to document your setup as much as possible. Whether it’s typed-up notes or Ansible/Packer/whatever, any documentation is better than nothing if you have to rebuild.

@foggy@lemmy.world

I have a 16tb USB HDD that syncs to my NAS whenever my workstation is idle for 20 minutes.

@darvocet@infosec.pub

I run history and then clean it up so I have a guide to follow on the next setup. It’s not even so much for drive failure as for moving to newer OS versions when they become available.

The ‘data’ is backed up by scripts that tar folders up and scp them off to another server.
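
A minimal sketch of that kind of script (the folder list and destination host are placeholders):

    #!/bin/bash
    # Tar up the data folders and ship the archive to another server.
    set -euo pipefail
    STAMP=$(date +%F)
    tar -czf "/tmp/appdata-$STAMP.tar.gz" /srv/appdata /etc/myservices
    scp "/tmp/appdata-$STAMP.tar.gz" backup@otherserver:/backups/
    rm "/tmp/appdata-$STAMP.tar.gz"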

Eskuero

My docker containers are all configured via docker compose, so I just tar the .yml files and the outside data volumes and back that up to an external drive.
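
If any of the data volumes are named Docker volumes rather than bind mounts, the tar step can run through a throwaway container (volume and mount names below are invented):

    # Stream the contents of a named volume into a tarball on the external drive.
    docker run --rm \
      -v nextcloud_data:/data:ro \
      -v /mnt/external/backups:/backup \
      alpine tar -czf /backup/nextcloud_data.tar.gz -C /data .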

For configs living in /etc you can back all of them up too, but I guess it’s harder to remember what you modified and where, which is why you document your setup step by step.

Something nice and easy I use for personal documentation is mdBook.

Kaldo
creator

Ahh, so the best docker practice is to always use outside data volumes and back those up separately; seems kinda obvious in retrospect. What about mounting them directly on the NAS (or even running docker from the NAS)? For local networks the performance is probably good enough, and that way I wouldn’t have to schedule regular syncs and transfers between “local” device storage and the NAS. Dunno if it would have a negative effect on drive longevity compared to just running a daily backup.

Adam

If you’ve got a good network path, NFS mounts work great. Don’t forget to also back up your compose files; then bringing a machine back up is just a case of running them.
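
As a sketch, that ends up being little more than an NFS mount plus a compose run (the server, export and paths are examples):

    # /etc/fstab entry (hypothetical export and mount point):
    #   nas.local:/export/appdata  /mnt/appdata  nfs  defaults,_netdev  0  0
    mount /mnt/appdata
    cd /opt/compose/jellyfin && docker compose up -d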

@ftbd@feddit.de

By using NixOS and tracking the config files with git
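
In practice that can be as small as this (assuming the default /etc/nixos location):

    # Version the system configuration, then rebuild straight from it.
    cd /etc/nixos
    sudo git init && sudo git add . && sudo git commit -m "initial config"
    sudo nixos-rebuild switch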

@Haystack@lemmy.world

For real, saves so much space that would be used for VM backups.

Aside from that, I have anything important backed up to my NAS, and Duplicati backs up from there to Backblaze B2.

@drkt@feddit.dk

Configs are backed up, so I can spin up a new container in minutes; I just accept the manual labor. It’s probably a good thing to clean out the spiders and skeletons every now and then.

adONis

Most of the docker services use mounted folders/files, which I usually store in the user’s home folder under /home/username/Docker/servicename.

Now, my personal habit of choice is to have the user folders on a separate drive and mount them into /home/username. Additionally, one can also mount /var/lib/docker this way. I spin up all of these services with Portainer. The benefit is that if the system breaks, I don’t care that much, since everything is on a separate drive. If I need to set everything up again, I just spin up Portainer again, which does the rest.
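
A sketch of that layout (the drive UUID and paths are placeholders):

    # /etc/fstab entries (hypothetical): one data drive, bind-mounted into place:
    #   UUID=xxxx-xxxx           /mnt/data        ext4  defaults  0  2
    #   /mnt/data/home/username  /home/username   none  bind      0  0
    #   /mnt/data/docker         /var/lib/docker  none  bind      0  0
    mount -a    # apply the fstab entries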

However, this is not a backup, which should be done separately one way or another. But it’s for sure safer than putting all your trust in one drive/SD card etc.

@Skies5394@lemmy.ml

On my main server: the SSD RAID1 (ZFS) holds snapshots of my container appdata, VM virtual disks and Docker images. That is also backed up in full once per night to the RAID10 array, then rsynced to the backup server, which in turn uploads it to the cloud.
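
A rough sketch of the snapshot-and-replicate part of such a chain (pool, dataset and host names are invented; a real job would send incrementals rather than full streams):

    #!/bin/bash
    # Nightly: snapshot the container appdata dataset, then replicate it off-box.
    set -euo pipefail
    SNAP="ssdpool/appdata@nightly-$(date +%F)"
    zfs snapshot "$SNAP"
    zfs send "$SNAP" | ssh backup@backupserver "zfs receive -F backuppool/appdata"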

The data on the RAID is backups, repos or media that I’ve deposited there for an extra copy or for serving via Plex/Jellyfin. I have extra copies of that data, so if I were to lose the array entirely I wouldn’t be pleased, but my personal pictures/videos wouldn’t be in danger.

I run two backup servers, which both upload to the cloud. One takes bare-metal images of all my computers (sans the servers’ bulk drives); the other takes live folders.

This is mostly for convenience, so that on either account I can pull a bare-metal image to restore a device, or easily go find a versioned file online if necessary.

As a wise man said, you can never have too many backups.

@namelivia@lemmy.world

I have all my configuration as Ansible and Terraform code, so everything can be destroyed and recreated with no effort.

When it comes to the data, I made some bash scripts to copy, compress, encrypt and upload it encrypted. Not sure if this is the best approach, but it’s how I’m dealing with it right now.
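
Presumably something in this spirit (paths and the GPG recipient are placeholders, and rclone stands in for whatever upload tool is actually used):

    #!/bin/bash
    # Copy, compress, encrypt, then upload the encrypted archive.
    set -euo pipefail
    STAMP=$(date +%F)
    OUT="/tmp/data-$STAMP.tar.gz.gpg"
    tar -czf - /srv/data | gpg --encrypt --recipient backup@example.com --output "$OUT"
    rclone copy "$OUT" remote:backups/
    rm "$OUT"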

rentar42

I’ve got a similar setup, but I use Kopia for backup, which does all that you describe and also handles deduplication of data very well.

For example, I’ve now added older, less structured backups to my “good” backup, and since there is a lot of duplication between a 4-year-old backup and a 5-year-old backup, it barely increased the storage usage.
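
For reference, the Kopia flow is roughly this (the repository location and source path are just examples):

    # One-time: create a repository on the backup target.
    kopia repository create filesystem --path /mnt/backup/kopia-repo
    # Recurring: snapshot the data; deduplication happens inside the repository,
    # which is why old, overlapping backups barely grow it.
    kopia snapshot create /srv/appdata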

Matt The Horwood

That sounds a lot like how I keep my stuff safe; I use Backblaze for my off-site backup.

HeartyBeast

“carefully configured services on your rpi”

I have a backup on a second SD card waiting for the day the current card fails. Slot it in and reboot.
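
Refreshing that spare card now and then can be a single (careful) dd run; the device names below are examples and must be verified with lsblk first:

    # Clone the Pi's running SD card onto the spare card in a USB reader.
    # /dev/mmcblk0 = internal SD slot, /dev/sda = spare card (check with lsblk!).
    sudo dd if=/dev/mmcblk0 of=/dev/sda bs=4M status=progress conv=fsync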

@desentizised@lemm.ee

I recently “upgraded” one of my Raspberry Pi SD cards to an industrial-grade one. It seems to me like those are a lot slower, but for that particular use case it doesn’t matter to me. What matters is that the card doesn’t die. It runs noticeably cooler when lots of data is being written to it, so I feel like I must be onto something there.

rentar42

There are lots of very good approaches in the comments.

But I’d like to play the devil’s advocate: how many of you have actually recovered from a disaster that way? Ideally as a test, of course.

A backup system that has never done a restore operation must be assumed to be broken. Similar logic should be applied to disaster recovery.

And no: I use a combined Ansible/Docker approach that I’m reasonably sure could recover most stuff quite easily, but I haven’t fully rebuilt from just that yet.

@deepdive@lemmy.world

While rsync is great, I only partially recovered from an outage with it… Containers with databases need special care: dump their databases…
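
For a Postgres container, that special care is roughly the following, run before the file-level backup (the container name and paths are placeholders):

    # Dump all databases to a plain SQL file so the backup never relies on
    # copying a live, possibly half-written data directory.
    docker exec -t postgres-container pg_dumpall -U postgres \
      > /srv/backups/postgres-$(date +%F).sql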

Lesson learned!

Dandroid

I restored from a backup when I swapped to a bigger SSD. Worked perfectly first try. I use rsnapshot for backups.
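
For anyone curious, rsnapshot is typically driven by cron entries like these once rsnapshot.conf lists the backup points (the interval names must match the retain lines in that config):

    # /etc/cron.d/rsnapshot (hypothetical)
    0 */4 * * *  root  /usr/bin/rsnapshot hourly
    30 3 * * *   root  /usr/bin/rsnapshot daily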

Kaldo
creator

I’m not sure what Ansible does that a simple Docker Compose doesn’t yet, but I will look into it more!

My real backup test run will come soon, I think - for now I’m moving from Windows to Docker, but eventually I want to get an older laptop, put Linux on it, move everything to Docker there instead and pretend it’s a server. The less “critical” stuff I have on my main PC, the less I’ll cry when I inevitably have to reinstall the OS or replace the drives.

rentar42

I just use Ansible to prepare the OS, set up a dedicated user, install and set up rootless Docker, and then sync all the docker compose files from the same repo to the appropriate server and launch/update as necessary. I also use it to centrally administer any cron jobs, like the ones for backups.

Basically, if I didn’t forget anything (which is always possible), I should be able to pick up a brand-new RPi with an SSD and replace one of mine with a single command.
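
That single command is typically just an ansible-playbook run (the playbook, inventory and host names here are made up):

    # Re-provision a replacement Pi from the repo: OS prep, rootless Docker,
    # compose files and cron jobs, exactly as the playbook describes them.
    ansible-playbook -i inventory.yml site.yml --limit new-rpi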

It also allows me to keep my entire setup “documented” and configured in a single git repository.

Human Crayon

I have (more than I’d like to admit) recovered entirely from backups.

I run Proxmox, and everything else in a VM. All VMs get backed up to three different places once a week, and backups are tested monthly on a spare Proxmox box to make sure they still work. I do like the backup system built into it; it serves my needs well.

Proxmox could die and it wouldn’t make much of a difference. I reinstall proxmox, restore the VMs and I’m good to go again.
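
A hedged sketch of that backup-and-test-restore cycle using Proxmox’s own tools (the VM ID, storage names and archive filename are examples):

    # Back up VM 100 to the configured backup storage...
    vzdump 100 --storage backup-nfs --mode snapshot --compress zstd
    # ...then, on the scratch box, restore that archive as VM 100.
    qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2024_01_01-03_00_00.vma.zst 100 --storage local-lvm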

@tetris11@lemmy.ml

Radical suggestion:

  • Once a year you buy a hard drive that can handle all of your data.
  • rsync everything to it
  • unplug it, put it back in cold storage

Once a… year? There’s a lot that can change in a year. Cloud storage can be pretty cheap these days. Back up to something like Backblaze, S3 or Glacier nightly instead.
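
The nightly cloud push can be a one-liner in cron (the bucket name is a placeholder, and the AWS CLI here is just one of several options):

    # Nightly: sync the local backup directory to S3 using a cold storage class.
    aws s3 sync /srv/backups s3://my-backup-bucket --storage-class GLACIER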

You can save to it periodically, like once a month, but keep one copy as a yearly backup.

I’ve had a complete drive failure twice within the last year (really old hardware), and my ansible + docker + backup setup made it really easy to recover. I got new hardware and was back up and running within a few hours.

All of your services’ setup should be automated (through docker-compose or ansible or whatever) and all of your configuration data should be backed up. This should make it easy to migrate services from one machine to another, and also to recover from a disaster.

I actually run everything in VMs and have two hypervisors that sync everything to each other constantly, so I have hot failover capability. They also back up their live VMs to each other every day or week depending on the criticality of the VM. That way I also have some protection against OS issues or a wonky update.

Probably overkill for a self hosted setup but I’d rather spend money than time fixing shit because I’m lazy.

HA is not a backup. It may protect from a drive failure, but it completely ignores data corruption issues.

I learned this the hard way when Cryptomator decided to corrupt some of my files; I noticed, but didn’t have backups.

rentar42

Yeah, there are a bunch of lessons that tend to only be learned the hard way, despite most guides mentioning them.

Similar to how RAID should not be treated as a backup.

That’s why I also do backups, as I mentioned.
