I never understood how to use Docker. What makes it so special? I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of self-hosting different things.

I’m currently running these things without Docker:

  • Mumble server with a Discord bridge and a music bot
  • Maubot, a plugin-based Matrix bot
  • FTP server
  • Two Discord Music bots

All of these things are running as systemd services in the background. Should I change this? A lot of the things I’m hosting offer Docker images.

It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!

ZeldaFreak

Docker is amazing but not needed. You can compare it to a simpler VM: you can take a Docker container and run it on any machine. You get an environment that is separate from your host, and you and the container can only interact with it through defined points (volumes and ports).

Imagine you need to run a 2nd Mumble server. I’ve never set one up, but often a 2nd instance of something is not that easy. With Docker it is easy: the only difference is that you need to use different ports when you have only one network interface, or put a reverse proxy in front. You can create a 2nd instance to test stuff without interrupting your production system. It’s also a security benefit, because the container is isolated to some degree, and you can remove it easily.
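For example, something like this (an untested sketch; the image name and its internal port are what the Mumble project publishes, so check their docs):

```
# Production instance on the default Mumble port
docker run -d --name mumble-prod -p 64738:64738 -p 64738:64738/udp mumblevoip/mumble-server

# Second (test) instance, same image, mapped to a different host port
docker run -d --name mumble-test -p 64739:64738 -p 64739:64738/udp mumblevoip/mumble-server

# Done testing? Remove it without touching production:
docker rm -f mumble-test
```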

I started using it with MSSQL Server, because I hated how invasive it is on a Windows machine, especially since I only needed it temporarily. I’m not a Microsoft admin, and I know that servers from Microsoft are a different level. Docker allowed me to start, stop, and remove it very easily. After that I started using it for a lot of things and brought my NAS to the next level.

Also worth mentioning are Linux Containers (LXC). Proxmox uses them, though I have less experience there. LXC feels more like a full VM than Docker does but uses fewer resources. That is why containers in general are so popular: they are less resource-hungry than a full VM while still having benefits over running everything directly on one machine. LXC feels more like a full system than Docker. With Docker you rarely get into the container itself; you may execute some commands, like creating a user or running a one-time job, but you don’t usually access it via a shell from the inside (though it’s possible). With LXC, on the other hand, you do use the shell.


@turkishdelight@lemmy.ml

Docker makes sense if you are deploying thousands of machines in the cloud. I don’t think it makes as much sense if you have your own hardware.

Some services do have one-line installers with Docker, so those might be useful. But they usually have one-line non-Docker installers too.

Big P

Docker still makes sense on your own hardware, especially if you’re the type of person who tries out different programs often.

@marcos@lemmy.world

Try to run something that requires PHP 7 and something else that requires PHP 8 on the same web server, or Python 2 and Python 3.

You actually can, but it’s not pretty.

(The point about a declarative setup isn’t much of a differentiator; you can do that with any popular Linux distro.)
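With containers that turns into something like this (a rough sketch using the official PHP images; the directory names are placeholders):

```
# Legacy app stays on PHP 7.4, new app gets PHP 8.2; neither touches the host's PHP
docker run -d --name legacy-app -p 8074:80 -v "$PWD/legacy:/var/www/html" php:7.4-apache
docker run -d --name modern-app -p 8082:80 -v "$PWD/modern:/var/www/html" php:8.2-apache
```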

Doesn’t that mean that Docker containers use up many more resources, since you’re installing numerous instances and versions of each program, like PHP?

@marcos@lemmy.world

Oh, sure, the bloat on your images requires resources from the host.

There is the option of sharing things, but obviously that conflicts a bit with keeping your environments isolated.

Docker’s documentation is actually pretty good. I’d recommend taking a look at it, because it’s written really well and can even serve as a decent primer on learning to read documentation.

I would recommend learning Docker and containerization. For your use case you likely won’t see a big benefit; HOWEVER, it is a good technology to know.

As far as the “why” goes, there are too many reasons to list, but for your use case I’d argue the why is “just so you know how to do it”; you’ll come up with your own why along the way.

The simplest why, beyond “it’s a good technology to know”, is that updating an app is as simple as pulling a new image and relaunching the container.
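Roughly like this (the image name and tag here are made up for illustration):

```
# Plain Docker: grab the new version, then recreate the container
docker pull ghcr.io/example/someapp:1.5.0
docker stop someapp && docker rm someapp
docker run -d --name someapp -p 8080:8080 -v /srv/someapp:/data ghcr.io/example/someapp:1.5.0

# Or, if the service is described in a docker-compose.yml:
docker compose pull && docker compose up -d
```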

@kevincox@lemmy.ml

I feel that a lot of people here are missing the point. Docker is popular for selfhosted services for a few main reasons:

  1. It is one package that can be used on any distribution (or even OS with a Linux VM).
  2. The package contains all dependencies required to run the software so it is pretty reliable.
  3. It provides some basic sandboxing against non-malicious services. Basically the service can’t scribble all over your filesystem. It can only write to specific directories that you have given it access to (via volumes) other than by exploiting security vulnerabilities.
  4. The volume system also makes it very obvious what data is important and needs to be backed up: you have a short list (see the sketch after this list).
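As a sketch of point 4 (image name and paths invented for illustration), the bind-mounted paths are the only state you need to care about:

```
docker run -d --name someapp \
  -v /srv/someapp/config:/config \
  -v /srv/someapp/data:/data \
  ghcr.io/example/someapp:1.2.3

# Backing up the service is just archiving that short list of paths;
# the image itself can always be pulled again.
tar czf someapp-backup.tar.gz -C /srv someapp
```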

Docker also has lots of downsides. I would generally say that if your distribution packages software I would prefer the distribution’s package over the docker image. A good distribution package will also solve all of these problems. The main issue you will see with distribution packages is a longer delay before new versions are made available.

What Docker completely dominated were the previous cross-distribution packaging options, which typically took one of the following forms:

  1. Self-contained compiled tarball. Run the program inside as your user. It probably puts its data in the extracted directory, maybe. How do you upgrade? Extract and copy a data directory? Self-update? Code is mutable and mixed with data, gross.
  2. Install script. Probably runs as root. Makes who-knows-what changes to your system. Where is the data? Is the service running? Will it auto-start on boot? Hope that install script supports your distro.
  3. Source tarball. Figure out the dependencies. Hope they don’t conflict with the versions your distro has. Set up users and setup scripts yourself. Hope the build doesn’t take too long.

Sorry if I’m about 10 years behind Linux development, but how does Docker compare with the latest Flatpak trend in application distribution? The way you have described it sounds somewhat similar, aside from also getting segmented access to data and networks.

Docker is to servers as Flatpak is to desktop apps.
I would probably run away if I saw Flatpak on a headless server.

@matcha_addict@lemy.lol

Flatpak has better security features than Docker. While it’s true it’s not designed with server apps in mind, it is possible to use its underlying sandboxing tool, “bubblewrap”, to create isolated environments. Maybe in the future the tooling will improve and bridge the gap.
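As a very rough illustration of what bubblewrap can do (a minimal, untested flag set; a real server sandbox would need more thought):

```
# Run a shell with a read-only view of the host, a private /tmp,
# and its own network and PID namespaces
bwrap --ro-bind / / \
      --dev /dev --proc /proc --tmpfs /tmp \
      --unshare-net --unshare-pid \
      --die-with-parent \
      /bin/sh
```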

@kevincox@lemmy.ml

For desktop apps Flatpak is almost certainly a better option than Docker. Flatpak uses the same core concepts as Docker but Flatpak is more suited for distributing graphical apps.

  1. Built in support for sharing graphics drivers, display server connections, fonts and themes.
  2. Most Flatpaks use common base images. Not only does this save disk space if you have lots of, say, GNOME applications (they share the same base), it also means that security updates for common libraries can ship separately from application updates. (Pinned insecure libraries are still a problem in general; it is just less severe than in the Docker case.)
  3. Better desktop integration via the use of “portals” that allow requesting specific things (screenshot, open file, save file, …) without full access to the user’s system.
  4. Configuration UIs that are optimized for the desktop use case: graphical tools to install, uninstall, manage permissions, …

Generally I would still default to my distro’s packages where possible, but if they are unsuitable for whatever reason (not available, too old, …) then a Flatpak is a great option.

slazer2au

IMHO, with Docker and containerization in general, you are trading drive space for consistency and relative simplicity.

A hypothetical:
You set up your Mumble server and it requires the leftpad 3.7 package to run. You install it and everything is fine.
Now you install your FTP server, but it needs leftpad 5.5. What do you do? Hope the function that Mumble uses in 3.7 still exists in 5.5? Run each app in its own venv?

Docker and containerization resolve this by running each app in its own mini virtual machine. A container running Mumble and leftpad 3.7 can coexist on a host that also has a container running an FTP server with leftpad 5.5.

Here is a good video on the gap Docker and containerization aim to fill:
https://www.youtube.com/watch?v=Nm1tfmZDqo8

I would also add security, or at least accessible security. Containers provide a number of isolation features out of the box, or that are extremely easy to configure, which other systems require far more effort to achieve, or can’t achieve at all.

Ironically, after some conversation on the topic here on Lemmy I compiled a blog post about it.

@aksdb@lemmy.world

Tbf, systemd also makes it relatively easy to sandbox processes. But it’s opt-in, while for containers it’s opt-out.

Yeah, and it also requires quite a few options, some with harder-to-predict outcomes. For example, RootDirectory can be used to effectively chroot the process, but that carries implications such as the application no longer having access to CA certificates, which is generally a solved problem in containers.
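For reference, a hardening drop-in for an existing unit might look like this (a sketch; the unit name mumble-server.service is an assumption, the directives are standard systemd sandboxing options):

```
sudo mkdir -p /etc/systemd/system/mumble-server.service.d
sudo tee /etc/systemd/system/mumble-server.service.d/hardening.conf <<'EOF'
[Service]
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
NoNewPrivileges=yes
# writable paths must be whitelisted explicitly
ReadWritePaths=/var/lib/mumble-server
EOF
sudo systemctl daemon-reload && sudo systemctl restart mumble-server
```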

Doesn’t that mean that Docker containers use up many more resources, since you’re installing numerous instances and versions of each program, like Mumble and leftpad?

slazer2au

Kinda, but it depends on the size of the dependencies. With drive space being so cheap these days, do you really worry about 50 MB of storage being spent on 4 different versions of glib or leftpad?

@TCB13@lemmy.world

Docker and containerization resolve this by running each app in its own mini virtual machine

While what you’ve written is technically wrong, I get why you made the comparison that way. There are also tons of other containerization solutions that can do exactly what you’re describing without the dark side of Docker.

Riskable

Docker containers aren’t running in a virtual machine. They’re running what amounts to a fancy chroot jail… It’s just an isolated environment that takes advantage of several kernel security features to make software running inside the environment think everything is normal despite being locked down.

This is a very important distinction because it means that Docker containers are very lightweight compared to a VM. They use only a fraction of the resources a VM would and can be brought up and down in milliseconds, since there’s no hardware to emulate.
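Easy to see for yourself (once the image has been pulled at least once):

```
# Starts a container, runs `true`, and tears it down again, typically in well under a second
time docker run --rm alpine true
```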

@uzay@infosec.pub

To put it in simpler terms, I’d say that containers virtualise only the operating system rather than the whole underlying machine.

I guess not then.

Atemu

The operating system is explicitly not virtualised with containers.

What you’ve described is closer to paravirtualisation where it’s still a separate operating system in the guest but the hardware doesn’t pretend to be physical anymore and is explicitly a software interface.

Not exactly IMO, as containers themselves can simultaneously access devices and filesystems from the host system natively (such as VAAPI devices used for hardware encoding & decoding) or even the docker socket to control the host system’s Docker daemon.

They also can launch directly into a program you specify, bypassing any kind of init system requirement.

OC’s suggestion of a chroot jail is the closest explanation I can think of too, if things were to be simplified

@pztrn@bin.pztrn.name

It virtualises only parts of the operating system (namely process and network namespaces, with the ability to pass through devices and mount points). It still uses the host kernel, for example.

I wouldn’t say that namespaces are virtualization either. Containers don’t virtualize anything; namespaces are all inherited from the root namespaces and are therefore completely visible from the host (with the right privileges). It’s just a completely different technology.
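You can see that from the host; something like this ("somecontainer" is a placeholder name):

```
pid=$(docker inspect -f '{{.State.Pid}}' somecontainer)
sudo ls -l /proc/$pid/ns   # the container's mnt/net/pid/... namespaces
sudo lsns | grep "$pid"    # same view via util-linux's lsns
```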

The word you’re all looking for is sandboxing. That’s what containers are: sandboxes. And while they take a different approach from VMs, they do rely on some similar principles.

@pztrn@bin.pztrn.name

I never said that it is virtualization. But for ease of understanding I called the created namespaces “virtualized”; here I mean “virtualized” = “isolated”. systemd is able to do that with every process, by the way.

Also, some “smart individuals” have called containerization a “type 3 hypervisor”, which makes me laugh so hard :)

@notfromhere@lemmy.ml

FYI, Docker Engine can use different runtimes, and there are lightweight VM runtimes like Kata or Firecracker. I hope one day Docker will default to that technology, as it would be better for the overall security of containers.


Vs. LXD?

One of the main reasons why Docker and Kubernetes took off is that they standardized the deployment process. Say you have 20 services running on your servers. It’s much easier to maintain those 20 services as a set of YAML files that follow a certain standard than as 20 config files, each in a different format. If you only have a couple of services, the advantage is probably not apparent, but as you add more and more services, you’ll start to appreciate it.

Yep, I couldn’t run half of the services in my homelab if they weren’t containerized. Running random, complex installation scripts and maintaining multiple services installed side-by-side would be a nightmare.

@excitingburp@lemmy.world

For your use case, consider it to be a packaging format (like AppImage, Flatpak, Deb, RPM, etc.) that includes all the dependencies (including services, not just libraries) for the app in question.

Should I change this?

If it’s not broken don’t fix it.

Use Podman (my preference; the systemd approach is awesome), containerd, or Incus. Docker is a graveyard of half-finished pet projects that have no reason to exist. Podman has a Docker-compatible socket, so 100% of Docker tooling will work with it.
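A rough sketch of the systemd approach with Podman (the image and names are just examples; newer Podman releases also offer Quadlet files for the same job):

```
podman run -d --name uptime-kuma -p 3001:3001 docker.io/louislam/uptime-kuma:1
podman generate systemd --new --name uptime-kuma --files
mkdir -p ~/.config/systemd/user && mv container-uptime-kuma.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-uptime-kuma.service
# loginctl enable-linger $USER   # if it should keep running after you log out
```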

I can add that Podman was passed over in previous years at my day job because there were some reliability issues, either with GPU access or networking, I forget which. However, these issues have been resolved and we’re reintroducing it pretty much effortlessly.

@MashedTech@lemmy.world

Yep, we’re reconsidering it at work as well. It’s grown pretty nicely.

@matcha_addict@lemy.lol

This blog post explains it well:

https://cosmicbyt.es/posts/demistifying-containers-part-1/

Essentially, containers are means of creating environments in which you can run software, and those environments are:

  • isolated, which makes for a very controlled environment where it’s much harder to run into errors
  • reproducible: we have tools that reproduce the same container from an image file
  • easy to distribute: just have the container image.
  • little to no compromises on performance (at least on Linux)

It is essentially a way for you to run a program without having to worry how to set up the environment, why it didn’t work as expected, what dependencies you’re missing, etc.

@BCsven@lemmy.ca

Install Portainer; it helps you get used to managing Docker images and containers before going full command line.

RBG

I actually prefer Dockge. I only have a few containers, and it’s a lot simpler while still able to do all the basics of Docker management. Portainer was overkill for me.

@ikidd@lemmy.world

I have a pile of containers both for selfhosting and for dev builds, and still wouldn’t use Portainer.

Lazydocker, FTW

I learned on Portainer. I just wish it worked better. Dockge is a much better solution anyway.

@Fedegenerate@lemmynsfw.com

When I asked this question, this was one of the answers I got:

So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.

  • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
  • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and the tag (never use latest). You will never ask yourself again “What did I need to do to install this again? Run some random install.sh script off a GitHub URL?”
  • Networking with Docker is a bit hit-and-miss, but the big thing is that you can have software running on any port inside the container and expose it on a different port on the host. E.g. two apps both run on port 8080 natively, so one of them would fail to start because the port is taken. With Docker you can keep them running on their preferred ports inside the containers, but expose one on 18080 and the other on 19080 instead (see the sketch after this list).
  • You keep your host simple and empty of installed software and packages. This is less of a problem with apps that come packaged as native executables, but there are languages out there which require you to install a runtime to be able to start the app. Think .NET or Java, but there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I’m going into too much detail already).
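To illustrate the networking point above (the image names are placeholders):

```
# Both apps listen on 8080 inside their containers; the host mappings keep them apart
docker run -d --name app-one -p 18080:8080 example/app-one
docker run -d --name app-two -p 19080:8080 example/app-two
```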

I am also new to self hosting, check my bio and post history for a giggle at how new I am, but I have taken advantage of all these points. I do use “latest” though, looking forward to seeing how that burns me later on.

But to add one more: my system is robust, in that I can really break my containers (and I do), and recovering is a couple of clicks in Portainer. Then I can try again, no harm done.

@fury@lemmy.world

The thing with using the “latest” tag is you might get lucky and nothing bad happens (the apps are pretty stable, fault-tolerant, and/or backward compatible), but you also might get unlucky and a container update breaks something (think a 1.x going to 2.x one day). Without pinning the container to a specific version, you might suddenly have an outage because that container has become incompatible with one of your other applications. I’ve seen this happen a number of times. One example is a frontend (UI) container that updates to no longer be compatible with older versions of the backend and crashes as a result.

If all your apps are pretty much standalone and you trust them to update properly every time a new version of the container is downloaded, then you may never run into the problems that make people say “never use latest”. But just keep an eye out for something like that to happen at some point. You’ll save yourself some time if you have records of what versions are running when everything’s working, and take regular backups of all their data.
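In a compose file the difference is just the tag (image names made up for illustration):

```
services:
  frontend:
    image: ghcr.io/example/frontend:2.3.1   # pinned: it only changes when you edit this line
    # image: ghcr.io/example/frontend:latest  # floating: it changes whenever you pull
  backend:
    image: ghcr.io/example/backend:2.3.1
```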

@tburkhol@lemmy.world

I used to have this with Home Assistant and Z-Wave JS. Every time I’d pull a new Home Assistant image, the Z-Wave integration would fail because it required a newer version of Z-Wave JS. That taught me to build the chain of services into one docker-compose file, so they’d all update together. It’s become one of my rationales for using Docker: got a chain of dependent processes? Wrap them in one compose file so you’re working with (probably) the same dependencies as the devs.
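Something along these lines (image tags, paths, and the device node are only illustrative; check each image’s docs for the real ones):

```
services:
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./homeassistant:/config
    depends_on:
      - zwavejs
  zwavejs:
    image: zwavejs/zwave-js-ui   # pin a specific tag in practice
    volumes:
      - ./zwavejs:/usr/src/app/store
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0   # the Z-Wave USB stick
```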

My other rationale is just portability, and docker is just one of many solutions there. In my little home environment, where servers are either retired desktops or gee-that-seems-cool SBCs, it’s nice to be able to easily move stuff independent of architecture or OS.

@Fedegenerate@lemmynsfw.com

I guessed it was a “once bitten, twice shy” kind of thing. This is all a hobby to me, so the cost-benefit calculation is vastly different; nothing in my setup is critical. Keeping records of what version everything is on, when updates are available, what those updates do, and so on… sounds like a whole lot of effort when my energy is currently better spent in other areas.

In my arrogance I just installed Watchtower, and accepted it can all come crashing down. When that happens I’ll probably realise it’s not so much effort after all.

That said I’m currently learning, so if something is going to be breaking my stuff, it’s probably going to be me and not an update. Not to discredit your comment, it was informative and useful.

@hperrin@lemmy.world

One benefit that might be overlooked here is that as long as you don’t use any Docker Volumes (and instead bind mount a local directory) and you’re using Docker Compose, you can migrate a whole service, tech stack and everything, to a new machine super easily. I just did this with a Minecraft server that outgrew the machine it was on. Just tar the whole directory, copy it to the new host, untar, and docker compose up -d.
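Roughly like this (assuming the whole stack, docker-compose.yml plus the bind-mounted data, lives in ~/minecraft; stopping first keeps the data consistent):

```
# On the old host
cd ~/minecraft && docker compose down
tar czf ~/minecraft.tar.gz -C ~ minecraft
scp ~/minecraft.tar.gz newhost:~

# On the new host
tar xzf minecraft.tar.gz
cd minecraft && docker compose up -d
```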

@AlexPewMaster@lemmy.zip
creator

This docker compose up -d thing is something I don’t understand at all. What exactly does it do? A lot of README.md files from git repos include this command for Docker deployment. And another question: How can you automatically start the Docker container? Do you need a systemd service to run docker compose up -d?

You just need the docker and docker-compose packages. You make a docker-compose.yml file and there you define all settings for the container (image, ports, volumes, …). Then you run docker-compose up -d in the directory where that file is located and it will automatically create the docker container and run it with the settings you defined. If you make changes to the file and run the command again, it will update the container to use the new settings. In this command docker-compose is just the software that allows you to do all this with the docker-compose.yml file, up means it’s bringing the container up (which means starting it) and -d is for detached, so it does that in the background (it will still tell you in the terminal what it’s doing while creating the container).
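A minimal example, so you can see what such a file looks like (the image here is a tiny demo web service; any image works the same way):

```
# docker-compose.yml
services:
  whoami:
    image: traefik/whoami        # tiny web service that just echoes request info
    ports:
      - "8080:80"                # host port 8080 -> container port 80
    restart: unless-stopped
```

Then docker compose up -d in that directory starts it, docker compose logs -f shows its output, and docker compose down removes it again.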

@hperrin@lemmy.world

Docker Compose is basically designed to bring up a tech stack on one machine. So rather than having an Apache machine, a MySQL machine, and a Redis machine, you set up a Docker Compose file with all of those services. It’s easier than using individual Docker commands too. It sets up a network so they can all talk to each other, then opens the ports you tell it to. It’s isolated from other Docker Compose networks, so things won’t interfere with each other. So you can basically isolate a bunch of services with their own tech stacks all on the same machine. I’ve got my Jellyfin server running on the same machine as my Mastodon instance, thanks to Docker Compose.

As long as Docker is configured to run automatically at boot (which it usually is when you install it), it will bring containers back up that are set to be restarted. You can use the “always” or the “unless-stopped” values for the restart option, depending on your needs, then Docker will bring that container back up after a reboot.
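For example (the container and image names are placeholders):

```
docker run -d --restart unless-stopped --name someapp example/someapp
# or change the policy on a container that already exists:
docker update --restart unless-stopped someapp
```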

Docker Compose is also useful in this context, because you can define dependencies for services. So I can say that the Mastodon container depends on the Postgres container, and Docker Compose will always start the Postgres container first.
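In the compose file that looks something like this (image tags here are only illustrative):

```
services:
  mastodon:
    image: ghcr.io/mastodon/mastodon:v4.2.10
    depends_on:
      - postgres   # Compose starts postgres before mastodon
  postgres:
    image: postgres:16
```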

