I’m a retired Unix admin. It was my job from the early '90s until the mid '10s. I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home, even though I have a decent understanding of how it works: after I stopped being a sysadmin in the mid '10s I still worked for a technology company and did plenty of “interesting” reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

@marzhall@lemmy.world

It’s convenient. Can’t hurt to get used to it, for sure, in that it’s useful to not have to go through dependency hell installing things sometimes. It’s based on kernel features I don’t see Linus pulling out, so I think you’ll only see it more.

As someone who runs *nix-only at home, I mostly use its underlying tech in the form of snaps/flatpaks, though. I use Docker itself at work constantly, but at home, snaps/flatpaks tend to do the “minimize thinking about dependencies and building” bit in a workflow more convenient for desktop applications.
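
For what it’s worth, the desktop workflow is about as small as it gets; a quick sketch, using a real Flathub app ID as the example:

```
# Install and run a desktop app from Flathub
flatpak install flathub org.videolan.VLC
flatpak run org.videolan.VLC
```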

@zingo@lemmy.ca

Yeah. Docker for servers.

Flatpak for Desktops/Laptops.

Although I currently run my Docker stack on my desktop because my NAS server broke down. But it’s on servers that Docker really shines.

@gornius@lemmy.world

Learn it first.

I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, reverse proxies (I use Caddy, btw), etc.

Without Docker, hosting everything was a mess. After a month I would forget about important things I did, and if I had to do them again I would basically have to relearn what I found out then.

If you write a Dockerfile, every configuration you make is reflected either in a shell command or in files added from the project directory to the image. You can just look at the Dockerfile and see all the changes made to the base Debian image.
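
As a minimal sketch of what that looks like (a hypothetical service, not any particular project):

```Dockerfile
# Every change to the base Debian image is an explicit, reviewable instruction
FROM debian:bookworm-slim

# The same shell commands you would have run on a VM, recorded in the image
RUN apt-get update && apt-get install -y --no-install-recommends python3 \
    && rm -rf /var/lib/apt/lists/*

# Files added from the project directory to the image
COPY app/ /opt/app/

WORKDIR /opt/app
CMD ["python3", "server.py"]
```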

Additionally, with docker-compose you can use multiple containers per project, with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
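
A minimal compose sketch of that idea (service names and images are illustrative): the app container reaches the database simply as `db`, because compose puts both services on a shared network and resolves service names via DNS.

```yaml
services:
  app:
    build: .
    environment:
      DB_HOST: db          # resolved by compose's built-in DNS
    ports:
      - "8080:8080"
  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql

volumes:
  dbdata:
```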

Thanks to that, it’s trivial to host multiple projects that use, for example, different PHP versions.

And I haven’t even mentioned the best thing about Docker yet: if you’re a developer, you can be sure the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image using Dockerfile stages. You can develop a dev version with full debug/tooling support and then use a clean prod image on the server.
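
A rough sketch of that staged setup (stage and package names hypothetical): the dev stage extends prod with debug tooling, and `--target` picks which one to build.

```Dockerfile
FROM debian:bookworm-slim AS prod
COPY app/ /opt/app/
CMD ["/opt/app/run.sh"]

# Dev image extends the prod stage with debug tooling
FROM prod AS dev
RUN apt-get update && apt-get install -y --no-install-recommends gdb strace \
    && rm -rf /var/lib/apt/lists/*
```

Build locally with `docker build --target dev -t myapp:dev .` and for the server with `docker build --target prod -t myapp:prod .` (tags hypothetical).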

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

| Fewer Letters | More Letters |
| --- | --- |
| DNS | Domain Name Service/System |
| Git | Popular version control system, primarily for code |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| LXC | Linux Containers |
| NAS | Network-Attached Storage |
| PIA | Private Internet Access brand of VPN |
| Plex | Brand of media server package |
| RAID | Redundant Array of Independent Disks for mass storage |
| SMTP | Simple Mail Transfer Protocol |
| SSD | Solid State Drive mass storage |
| SSH | Secure Shell for remote terminal access |
| SSL | Secure Sockets Layer, for transparent encryption |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| k8s | Kubernetes container management package |
| nginx | Popular HTTP server |

15 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

@Boomkop3@reddthat.com

It’s basically a VM without the drawbacks of a VM, so why would you not? It’s hecking awesome.

@lefaucet@slrpnk.net

I use it for Gitea, Nextcloud, Redis, Postgres, and a few REST servers and love it! Super easy.

It can suck for things like homelab Stable Diffusion and other things that require a GPU or special hardware.

DefederateLemmyMl

> postgres

I never use it for databases. I find I don’t gain much from containerizing them, because the interesting and difficult bits of customizing and tailoring a database to your needs live in the data file system or in kernel parameters, not in the database binaries themselves. On most distributions it’s trivial to install the binaries for postgres/mariadb or whatnot.

Databases are usually fairly resource intensive too, so you’d want a separate VM for it anyway.

@lefaucet@slrpnk.net

Very good points.

In my case I just need it for a couple of users with maybe a few dozen transactions a day; it’s far from being a bottleneck and there’s little point in optimizing it further.

Containerizing it also has the benefit of boiling all installation and configuration down into one very convenient docker-compose file… Actually two: I use one with all the config stuff that’s published to Gitea, and one that holds the sensitive data.
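
Compose supports that kind of split natively: later `-f` files are merged over earlier ones, so the sensitive file can stay out of the public repo (file names hypothetical):

```
# docker-compose.yml is published; the private overlay holds secrets
docker compose -f docker-compose.yml -f docker-compose.private.yml up -d
```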

@Aiyub@feddit.de

As someone who does AI for a living: GPU+docker is easy and reliable. Especially if you compare it to VMs.
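
For reference, with the nvidia-container-toolkit installed on the host, handing the GPU to a container is a single flag (image tag just an example):

```
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```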

@lefaucet@slrpnk.net

Good to hear. Maybe I should try again.

Presi300

Yeh, I’m not a sysadmin in any meaning of the word, but Docker is so simple that even I got around to figuring it out. To me it just exists to save time and prevent headaches (dependency hell).

@Opeth@lemm.ee

I’ve worked in enterprise and government as a software engineer, and Docker has been the de facto standard everywhere for at least five years now. It’s not going away soon.

DefederateLemmyMl

I think it’s a good tool to have on your toolbelt, so it can’t hurt to look into it.

Whether you will like it, and whether you should move your existing stuff to it, is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late '90s myself).

Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.

@iso@lemy.lol

It just makes things easier and cleaner. When you remove a container, you know there are no leftovers except mounted volumes. I like it.

@AbidanYre@lemmy.world

It’s also way easier if you need to migrate to another machine for any reason.

Nik282000

I use LXC for all the reasons most people use Docker: it’s easy to spin up a new service, there are no leftovers when I remove a service, and everything stays separate. What I really like about LXC, though, is that you can treat containers like VMs: you start one up, attach, and install all your software as if it were a real machine. No extra tech to learn.
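
For comparison, that VM-like workflow with the LXD client looks something like this (container name hypothetical):

```
lxc launch ubuntu:22.04 mybox   # create and start a container from an image
lxc exec mybox -- bash          # attach and install software as on a real machine
lxc delete --force mybox        # remove it, leftovers and all
```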

@Auli@lemmy.ca

Not completely true: you probably have to prune some images or volumes.
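
The usual cleanup commands, for reference:

```
docker image prune      # remove dangling images
docker volume prune     # remove unused local volumes
docker system prune -a  # aggressive: all unused containers, images, networks, build cache
```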

alphacyberranger

Learning Docker is always a big plus. It’s not hard: if you are comfortable with CLI commands, it should be a breeze. Even if you are not, you should get used to it very fast.

jrbaconcheese

As someone who is not a former sysadmin and only vaguely familiar with *nix, I’ve been able to turn my home NAS (bought strictly to hold photos and videos backed up from our phones) into a home media server by installing Docker, learning how the yml files work, how containers network, etc., and it’s been awesome.

Dude, I’m kinda you. I just jumped into Docker over the summer… feel stupid for not doing it sooner. There is just so much pre-created content, tutorials, you name it. It’s very mature.

I spent a weekend containerizing all my home services… totally worth it, and easy as pi[hole] in a container!

TheMurphy

As a guy who’s you before the summer:

Can you explain why you think it is better now that you have ‘contained’ all your services? What advantages are there that I can’t seem to figure out?

Please teach me Mr. OriginalLucifer from the land of MoistCatSweat.Com

You can also back up your compose file and data directories, pull the backup from another computer, and as long as the architecture is compatible you can just restore it with no problem. So basically, your services are a whole lot more portable. I recently did this when dedipath went under. Pulled my latest backup to a new server at virmach, and I was up and running as soon as the DNS propagated.
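
A rough sketch of that kind of migration, assuming the compose file and its data directories live together under one directory (paths hypothetical):

```
# On the old server: archive the service directory (compose file + data)
tar czf myservice.tar.gz -C /srv myservice

# On the new server: restore and start
scp old-server:/srv/myservice.tar.gz .
tar xzf myservice.tar.gz -C /srv
cd /srv/myservice && docker compose up -d
```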

Terrasque

Modularity, compartmentalization, reliability, predictability.

One piece of software needs MySQL 5, another needs MariaDB 7. A third service needs PHP 7 while the distro-supported version is 8. A fourth service uses CUDA 11.7, not 11.8, which is what everything in your package manager uses. A fifth service’s install was only tested on the latest Ubuntu, and now you need to figure out which rpm gives the exact library it expects. A sixth service expects ODBC to be set up in a very specific way, but handwaves it in the installation docs. A seventh program expects a symlink at a specific place that exists on the desktop version of the distro, but not the server version. And then you’ve got that weird program that insists on admin access to the database so it can create its own user. Since I don’t trust it with that, it can just have its own database server running in Docker, and good riddance.

And so on and so forth… With Docker, not only is all of this specified in excruciating detail, it’s also the exact same setup on every install.

You don’t end up with it not working on Arch because the maintainer of a library there decided to inline a patch that supposedly doesn’t change anything, but somehow causes the program to segfault.

I can develop a service on Windows, test it, deploy it to my Kubernetes cluster, and I don’t even have to worry about which machine to deploy it on; it just runs on a machine. Probably an Ubuntu machine, but maybe on that Gentoo node instead. And if my macOS friend wants to try it out, no problem: I can just give him a command and it’s running on his laptop. No worries about the right runtime or setting up the environment and libraries.

If you’re an old Linux admin… This is what utopia looks like.

Edit: And recreating a container is almost like reinstalling the OS and the program. Since the image is static, recreating the container removes all file system cruft too and starts up a pristine new copy (except, of course, the specific files and folders you have chosen to keep between recreations).
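
Concretely, the pristine copy comes from recreating rather than merely restarting:

```
# Discard the container's writable layer and start fresh from the image;
# named volumes and bind mounts declared in the compose file survive
docker compose up -d --force-recreate
```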

TheMurphy

It sounds very nice and clean to work with!

If I’m lucky enough to get a Raspberry Pi 5 at Christmas, I will try to set it up with Docker for all my services!

Thanks for the explanation.

Terrasque

Just remember that the Raspberry Pi has an ARM CPU, which is a different architecture. Docker can cross-compile for it and build multi-platform images automatically. It takes more time and space though, as it runs an ARM emulator to build them.

https://www.docker.com/blog/faster-multi-platform-builds-dockerfile-cross-compilation-guide/ has some info about it.
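
The gist of that guide, if you just want the one-liner (tag hypothetical; assumes a buildx builder with QEMU emulation is set up):

```
docker buildx build --platform linux/amd64,linux/arm64 -t user/app:latest --push .
```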

@BeefPiano@lemmy.world

No more dependency hell from one package needing libsomething.so 5.3.1 while another service can only run with libsomething.so 4.2.0.

That, and knowing that when I remove a container, it’s not leaving a bunch of cruft behind.

Great Blue Heron
creator

Well, that wasn’t a huge investment :-) I’m in…

I understand I’ve got LOTS to learn. I think I’ll start by installing something new that I’m looking at with Docker, and get comfortable with something my users (family…) are not yet relying on.

If you are interested in a web interface for management, check out Portainer.
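
Portainer itself runs as a container; per its docs, the CE install is roughly:

```
docker volume create portainer_data
docker run -d -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```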

@infeeeee@lemm.ee

Forget `docker run`; `docker compose up -d` is the command you need on a server. Get familiar with a UI, it makes your life much easier at the beginning: Portainer or Yacht in the browser, lazydocker in the terminal.

@ChapulinColorado@lemmy.world

I would suggest docker compose before a UI to someone who likes to work via the command line.

Many popular Docker repositories also provide `docker run` equivalents in compose format, so the learning curve for the docker and docker compose commands is not as steep as it used to be.

@ARNiM@lemmy.world

There is even a tool to convert `docker run` commands to a Docker Compose file :)

Such as this one hosted by Opnxng:

https://it.opnxng.com/docker-run-to-docker-compose-converter
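
To illustrate what such a converter produces, a typical `docker run` command (image and paths just examples):

```
docker run -d --name web -p 8080:80 -v ./site:/usr/share/nginx/html:ro nginx:alpine
```

maps to a compose file like:

```yaml
services:
  web:
    image: nginx:alpine
    container_name: web
    ports:
      - "8080:80"
    volumes:
      - ./site:/usr/share/nginx/html:ro
```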

Great Blue Heron
creator
```
# docker compose up -d
no configuration file provided: not found
```
@infeeeee@lemm.ee

Like `docker run` by itself, it’s not the full command; you need a compose file: https://docs.docker.com/engine/reference/commandline/compose/

Basically it’s the same as `docker run`, but all the configuration is read from a file rather than passed on the command line, which makes it more easily reproducible; you just have to store those files. The compose commands are especially important for self-hosting, where your containers are expected to run all the time.

RTFM: https://docs.docker.com/compose/

Great Blue Heron
creator

Yeah, I get it now. It’s just that the way I read it the first time, it sounded like you were saying that was a complete command and it was going to do something “magic” for me :-)

You need to create a docker-compose.yml file. I tend to put everything in one directory per container, so I just have to move the directory somewhere else if I want to move that container to a different machine. Here’s an example I use for Picard, with examples of NFS mounts and local bind mounts using paths relative to the directory the docker-compose.yml is in. You basically just put this in a directory, create the local bind-mount dirs in that same directory, adjust YOURPASS and the mounts/NFS shares, and it will keep working wherever you move the directory, as long as the machine has Docker and an image available for the architecture of the system.

```yaml
version: '3'
services:
  picard:
    image: mikenye/picard:latest
    container_name: picard
    environment:
      KEEP_APP_RUNNING: 1
      VNC_PASSWORD: YOURPASS
      GROUP_ID: 100
      USER_ID: 1000
      TZ: "UTC"
    ports:
      - "5810:5800"
    volumes:
      - ./picard:/config:rw
      - dlbooks:/downloads:rw
      - cleanedaudiobooks:/cleaned:rw
    restart: always

volumes:
  dlbooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":NFSPATH"
  cleanedaudiobooks:
    driver_opts:
      type: "nfs"
      o: "addr=NFSSERVERIP,nolock,soft"
      device: ":OTHER NFSPATH"
```

Second this. Portainer + docker compose is so good that now I go out of my way to composerize everything so I don’t have to run Docker containers from the CLI.

Dockge is amazing for people who see the value in a GUI but want it to stay the hell out of the way. https://github.com/louislam/dockge lets you use compose without trapping your stuff in stacks like Portainer does. If you decide you don’t like Dockge, you just go back to the CLI and run `docker compose up -d --force-recreate`.

@P1r4nha@feddit.de

Definitely not a fad. It’s used all over the industry. It gives you a lot more control over the environment where your hosted apps run. There may be some overhead, but it’s worth it.
