I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works - although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of “interesting” reading and training.
It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
It’s convenient. Can’t hurt to get used to it, for sure, in that it’s useful to not have to go through dependency hell installing things sometimes. It’s based on kernel features I don’t see Linus pulling out, so I think you’ll only see it more.
As someone who runs nix-only at home, I mostly use its underlying tech in the form of snaps/flatpaks, though. I use docker itself at work constantly, but at home, snaps/flatpaks tend to do the “minimize thinking about dependencies and building” bit but in a workflow more convenient for desktop applications.
Yeah. Docker for servers.
Flatpak for Desktops/Laptops.
Although I currently run my Docker stack on my desktop because my NAS server broke down. But it’s on servers that Docker really shines.
Learn it first.
I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, reverse proxy (I use caddy btw) etc.
Without docker, hosting everything was a mess. After a month I would forget about important things I did, and if I had to do that again, I would need to basically relearn what I found out then.
If you write a Dockerfile, every configuration you make is reflected either in a shell command or in files added from the project directory to the image. You can just look at the Dockerfile and see all the changes made to the base Debian image.
Additionally with docker-compose you can use multiple containers per project with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
Thanks to that it’s trivial to host multiple projects using for example different PHP versions for each of them.
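For example, a minimal compose file for one project might look like the sketch below (the image tags and names are just placeholders); the app container reaches the database simply by the hostname db, and a second project can pin a different PHP version the same way without any conflict:

```yaml
# docker-compose.yml - hypothetical project pinned to PHP 8.2
services:
  app:
    image: php:8.2-fpm
    volumes:
      - ./src:/var/www/html
    # from inside this container, the database is reachable as "db:3306"
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: changeme
```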
And I haven’t even mentioned yet the best thing about docker - if you’re a developer, you can be sure that the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image by using Dockerfile stages. You can develop a dev version with full debug/tooling support and then use a clean prod image on the server.
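A rough sketch of that stage pattern, with made-up package names and paths purely for illustration: a clean prod stage, and a dev stage that layers debug tooling on top of it.

```dockerfile
# Production stage: minimal, no debug tooling
FROM debian:bookworm-slim AS prod
RUN apt-get update && apt-get install -y --no-install-recommends php-cli \
    && rm -rf /var/lib/apt/lists/*
COPY src/ /app/
CMD ["php", "/app/main.php"]

# Development stage: extends prod with debugging tools
FROM prod AS dev
RUN apt-get update && apt-get install -y --no-install-recommends php-xdebug strace \
    && rm -rf /var/lib/apt/lists/*
```

You then pick the stage at build time, e.g. docker build --target dev -t myapp:dev . on your machine and --target prod for the server.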
It’s basically a vm without the drawbacks of a vm, why would you not? It’s hecking awesome
i use it for gitea, nextcloud, redis, postgres, and a few REST servers and love it! super easy
it can suck for things like homelab stablediffusion and things that require gpu or other hardware.
I never use it for databases. I find I don’t gain much from containerizing them, because the interesting and difficult bits of customizing and tailoring a database to your needs are on the data file system or in kernel parameters, not in the database binaries themselves. On most distributions it’s trivial to install the binaries for postgres/mariadb or whatnot.
Databases are usually fairly resource intensive too, so you’d want a separate VM for it anyway.
Very good points.
In my case I just need it for a couple of users with maybe a few dozen transactions a day; it’s far from being a bottleneck and there’s little point in optimizing it further.
Containerizing it also has the benefit of boiling all installation and configuration down into one very convenient docker-compose file… Actually two. I use one with all the config stuff that’s published to gitea and one that has sensitive data.
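If you’re curious how that split can work: compose can merge multiple files, so the public repo holds the main file and the secrets live in a second one that never leaves the box (file names here are just an example):

```sh
# docker-compose.yml:         shareable config, published to gitea
# docker-compose.secrets.yml: only the sensitive environment values
# Later files override/extend earlier ones when merged.
docker compose -f docker-compose.yml -f docker-compose.secrets.yml up -d
```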
As someone who does AI for a living: GPU+docker is easy and reliable. Especially if you compare it to VMs.
good to hear. maybe I should try again
Yeh, I’m not a system admin in any meaning of the word, but docker is so simple that even I got around to figuring it out and to me it just exists to save time and prevent headaches (dependency hell)
I’ve worked in enterprise and government as a software engineer, and docker has been the de facto standard everywhere for at least the last five years. It’s not going away soon.
I think it’s a good tool to have on your toolbelt, so it can’t hurt to look into it.
Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).
Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.
It just makes things easier and cleaner. When you remove a container, you know there are no leftovers except the mounted volumes. I like it.
It’s also way easier if you need to migrate to another machine for any reason.
I use LXC for all the reasons most people use Docker, it’s easy to spin up a new service, there are no leftovers when I remove a service, and everything stays separate. What I really like about LXC though is that you can treat containers like VMs, you start it up, attach and install all your software as if it were a real machine. No extra tech to learn.
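For anyone who hasn’t tried it, the plain-LXC flow is roughly this (container name and distro are placeholders):

```sh
# Create a Debian container from the download template, start it,
# then attach and install software as if it were a normal machine.
lxc-create -n web01 -t download -- -d debian -r bookworm -a amd64
lxc-start -n web01
lxc-attach -n web01
```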
Not completely true; you probably have to prune some images or volumes.
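That cleanup is only a command or two, though:

```sh
# Remove stopped containers, unused networks, dangling images and build cache.
docker system prune
# Also remove unused volumes - careful, this deletes data no container references.
docker system prune --volumes
```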
Learning docker is always a big plus. It’s not hard. If you are comfortable with cli commands, then it should be a breeze. Even if you are not comfortable, you should get used to it very fast.
As someone who is not a former sysadmin and only vaguely familiar with *nix, I’ve been able to turn my home NAS (bought strictly to hold photos and videos backed up from our phones) into a home media sever by installing Docker, learning how the yml files work, how containers network, etc, and it’s been awesome.
dude, im kinda you. i just jumped into docker over the summer… feel stupid not doing it sooner. there is just so much pre-created content, tutorials, you name it. its very mature.
i spent a weekend containerizing all my home services… totally worth it and easy as pi[hole] in a container!
As a guy who’s you before summer.
Can you explain why you think it is better now after you have ‘contained’ all your services? What advantages are there, that I can’t seem to figure out?
Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com
You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
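In practice the backup can be as simple as archiving the project directory; a rough sketch, with example paths:

```sh
# Stop the stack so the data files are consistent, then archive the whole
# project directory (compose file + bind-mounted data) and start it again.
docker compose down
tar czf myservice-backup.tar.gz /srv/myservice
docker compose up -d

# On the new machine: unpack, then bring the stack up (images are pulled as needed).
tar xzf myservice-backup.tar.gz -C /
cd /srv/myservice && docker compose up -d
```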
Modularity, compartmentalization, reliability, predictability.
One piece of software needs MySQL 5, another needs MariaDB 10. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7 - not 11.8, which is what everything in your package manager uses. A fifth service’s install was only tested on the latest Ubuntu, and now you need to figure out which rpm gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that exists on the desktop version of the distro, but not the server version. And then you’ve got that weird program that insists on admin access to the database so it can create its own user. Since I don’t trust it with that, it can just have its own database server running in docker and good riddance.
And so on and so forth… with docker not only is all this specified in excruciating details, it’s also the exact same setup on every install.
You don’t have it not working on Arch because the maintainer of a library there decided to inline a patch that supposedly doesn’t change anything, but somehow causes the program to segfault.
I can develop a service on Windows, test it, deploy it to my Kubernetes cluster, and I don’t even have to worry about which machine to deploy it on, it just runs it on a machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my macOS friend wants to try it out, then no problem. I can just give him a command, and it’s running on his laptop. No worries about the right runtime or setting up the environment or libraries and all that.
If you’re an old Linux admin… This is what utopia looks like.
Edit: And recreating a container is almost like reinstalling the OS and the program. Since the image is static, recreating the container removes all file system cruft too and starts up a pristine new copy (of course except for the specific files and folders you have chosen to persist between restarts)
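If you want to see that in action, recreating a stack from its images is one command; only the volumes and bind mounts you declared survive:

```sh
docker compose up -d --force-recreate
```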
It sounds very nice and clean to work with!
If I’m lucky enough to get the Raspberry Pi 5 at Christmas, I will try to set it up with docker for all my services!
Thanks for the explanation.
Just remember that the Raspberry Pi has an ARM CPU, which is a different architecture. Docker can cross-compile for it and build multiple image variants automatically. It takes more time and space though, as it runs an ARM emulator to build them.
https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ has some info about it.
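The gist of that guide, assuming buildx and QEMU/binfmt are already set up, is a single build command that produces both variants under one tag (the image name here is a placeholder):

```sh
# Build for x86_64 and the Pi's ARM64 CPU in one go and push both variants.
docker buildx build --platform linux/amd64,linux/arm64 -t youruser/yourapp:latest --push .
```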
No more dependency hell from one package needing
libsomething.so 5.3.1
and another service absolutely can only run with libsomething.so 4.2.0
That, and knowing that when I remove a container, it’s not leaving a bunch of cruft behind
Well, that wasn’t a huge investment :-) I’m in…
I understand I’ve got LOTS to learn. I think I’ll start by installing something new that I’m looking at with docker and get comfortable with something my users (family…) are not yet relying on.
If you are interested in a web interface for management check out portainer.
Forget docker run,
docker compose up -d
is the command you need on a server. Get familiar with a UI, it makes your life much easier at the beginning: portainer or yacht in the browser, lazydocker in the terminal.

I would suggest docker compose before a UI to someone who likes to work via the command line.
Many popular docker repositories also give docker run equivalents in compose format automatically, so the learning curve is not as steep as it used to be for picking up docker or docker compose commands.
There is even a tool to convert Docker Run commands to a Docker Compose file :)
Such as this one hosted by Opnxng:
https://it.opnxng.com/docker-run-to-docker-compose-converter
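As a rough illustration of what such a converter produces (the container here is just an example):

```yaml
# docker run -d --name whoami -p 8080:80 traefik/whoami
# becomes approximately:
services:
  whoami:
    image: traefik/whoami
    container_name: whoami
    ports:
      - "8080:80"
```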
Like just
docker run
by itself, it’s not the full command - you need a compose file: https://docs.docker.com/engine/reference/commandline/compose/

Basically it’s the same as docker run, but all the configuration is read from a file instead of being passed as command-line options, which makes it more easily reproducible; you just have to keep those files around. The important part is that the compose commands matter for selfhosting, where your containers are expected to run all the time.
RTFM: https://docs.docker.com/compose/
Yeah, I get it now. Just the way I read it the first time it sounded like you were saying that was a complete command and it was going to do something “magic” for me :-)
you need to create a docker-compose.yml file. I tend to put everything in one dir per container, so I just have to move that dir somewhere else if I want to move the container to a different machine. Here’s an example I use for picard, with examples of NFS mounts and local bind mounts using relative paths to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind-mount dirs in that same directory, adjust YOURPASS and the mounts/NFS shares, and it will keep working everywhere you move the directory, as long as the machine has docker and an image is available for its architecture.
```yaml
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```
Second this. Portainer + docker compose is so good that now I go out of my way to composerize everything so I don’t have to run docker containers from the cli.
dockge is amazing for people that see the value in a gui but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like portainer does. If you decide you don’t like dockge, you just go back to the cli and do your docker compose up -d --force-recreate.
Definitely not a fad. It’s used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it’s worth it.