I struggled with this for a long time, and then I just decided to use Synology Photos.
It has albums, tagging, geolocation, sharing. It has phone picture backup, it is inherently a backup as it’s on my NAS and I back that data up again.
I want to keep the thing I really care about the most friction-free, and also not too dependent on myself, so that I can still experiment.
I didn’t try PiGallery2 though, maybe I will have a look!
I really thought swarm was dead :)
To be honest, some kubernetes distributions make the cluster operations minimal (I use k0s managed via ansible)!
Either way, the moment you go from N containers on one box to N containers on M boxes, you need to start considering how to handle stateful applications, load balancing, etc. And that in general requires knowledge in a domain which is different from simply wrapping applications in containers locally.
Yeah, ultimately every container has its own veth interface, so you can do shaping using tc on those.
Edit: I had a look at docker-tc. It does what you want, BUT: unless your use case is complex, I would really think twice about running a tool written in bash which has access to the docker socket (i.e. trivial node escape) and runs with the NET_ADMIN capability.
That’s a lot of power to do something you can also do with a few lines of code executed after you start the container. Again, provided that your use case is not complex.
Cgroups have the ability to limit TCP and total network bandwidth. I don’t know off the top of my head whether this can be configured at runtime (i.e. via docker run), but you can specify at runtime the cgroup parent to use. This means you can pre-create the cgroup, set the limits and start the container with that parent cgroup.
You can also run a hook script that adds the PID to a cgroup every time the container is launched, or possibly use tc.
I am not aware of the ability to only limit uplink bandwidth, but I have not researched this.
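For the tc route, a minimal sketch of shaping a single container’s uplink (container name and rates are placeholders; assumes the default bridge network, iproute2 on the host, and root privileges):

# PID of the container's main process (name is a placeholder)
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
# Shape egress on eth0 inside the container's network namespace, i.e. its uplink.
# This uses the host's tc binary, so nothing is needed inside the image.
nsenter -t "$PID" -n tc qdisc add dev eth0 root tbf rate 10mbit burst 32kb latency 50ms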
You have a bunch of options:
kubectl run $NAME --image=$IMAGE
this just creates a pod running the specified image. If you kill the pod, or it terminates, it won’t be run again. In general though, you probably want to do some customization before running (maybe you need volumes, secrets, env, ports, labels, securityContext, etc.), and for that you can let kubectl generate the boilerplate YAML and then make some edits:
kubectl run $NAME --image=$IMAGE --dry-run=client -o yaml > mypod.yaml
# edit mypod.yaml
kubectl create -f mypod.yaml
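To give an idea, the edited mypod.yaml could end up looking something like this (image name, port and values here are just placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: registry.example.com/myapp:1.0
    ports:
    - containerPort: 8080
    env:
    - name: LOG_LEVEL
      value: info
    securityContext:
      runAsNonRoot: true
      readOnlyRootFilesystem: true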
You can do the same with a deployment or statefulset:
kubectl create deployment $NAME -n $NAMESPACE [...] --dry-run=client -o yaml > deployment.yaml
In case you don’t need anything fancy, the kubectl create subcommand allows you to create simple workloads, so probably that’s the answer to your question.
Docker can run rootless too, see https://docs.docker.com/engine/security/rootless/
I would say Docker. There is no substantial benefit in running podman, while docker is a widely adopted tool (which means more tooling in the ecosystem, easier to find answers to questions etc.). The difference is not huge tbh, and some time ago the biggest advantage for podman was being able to run rootless, while docker was stuck with a root daemon. This is not the case anymore (docker can run rootless), so I would say unless you have some specific argument to use podman, stick with docker.
If there is already another reverse proxy, doing this IMHO is worse than just running a container and adding one more rule in the proxy (if needed at all; with traefik, for example, it isn’t). I also build all my servers with IaC and a repeatable setup, so installing stuff manually breaks the model (I want to be able to migrate servers with minimal manual action, as I have already had to do twice…).
The job is simple either way, I would say it mostly depends on which ecosystem someone is buying into and what secondary requirements one has.
I would consider the lack of a shell a benefit in this scenario. You really don’t want the extra attack surface and tooling.
Considering you also manage the host, if you want to see what’s going on inside the container (which, for such a simple image, more likely needs to be done just once, while building it the first time), you can use nsenter to spawn a bash process in the container’s namespaces (e.g., nsenter -m -p […] -t PID bash, or something like this - I am going by memory).
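A rough sketch of what I mean (the container name is a placeholder; double check the flags against the nsenter man page):

# PID of the container's main process
PID=$(docker inspect -f '{{.State.Pid}}' mycontainer)
# Inspect the container's network namespace using the host's tooling
nsenter -t "$PID" -n ss -tlnp
# Or get a shell with the container's filesystem view; this needs a shell
# to actually exist in the image, so it won't work on scratch images
nsenter -t "$PID" -m -p -- /bin/sh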
It really depends: if your setup is docker-based (as OP’s seems to be), adding something outside of it is not a good solution. I am thinking for example of traefik, or caddy with its docker plugin.
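As a sketch, with Traefik’s docker provider a new site is picked up from labels alone, with no proxy config edits (hostnames, service and image names are placeholders; assumes a Traefik instance with the docker provider is already running):

services:
  blog:
    image: registry.example.com/blog:latest
    labels:
      - traefik.enable=true
      - traefik.http.routers.blog.rule=Host(`blog.example.com`)
      - traefik.http.services.blog.loadbalancer.server.port=8080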
By versioning I meant that when you do a push to master, you can have a release which produces a new image. This makes it IMHO simpler than having just git and local files.
I really don’t see the complexity added. I do gain isolation (sure, static sites have tiny attack surfaces), easy portability (if I want to move machines it’s one command), neat organization (no local fs paths to manage, essentially), and the overhead is a 3-line Dockerfile and a couple of MB needed to duplicate a webserver binary. Of course it is a matter of preference, but I don’t see the cons honestly.
Containers are a perfectly suitable use-case for serving static sites. You get isolation and versioning at the absolutely negligible cost of duplicating a binary (the webserver, which in the case of the one I linked in my comment is 5 MB). Also, you get autostart of the server if you use compose, which is equivalent to what you would do with a systemd unit, I suppose.
You can then use a reverse-proxy to simply route to the different containers.
I personally package the files in a scratch or distroless image and use https://github.com/static-web-server/static-web-server, which is a quite tiny Rust server. This is very similar to nginx or httpd, but the static nature of the binary removes clutter, reduces attack surface (because you can use smaller images) and reduces the size of the image.
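To illustrate, the whole image can be roughly this (the tag and the binary path inside the upstream image are from memory, so double check them against the project’s docs):

# Grab the static binary from the upstream image
FROM ghcr.io/static-web-server/static-web-server:2 AS sws

FROM scratch
COPY --from=sws /static-web-server /static-web-server
# Your generated site
COPY ./public /public
ENTRYPOINT ["/static-web-server", "--root", "/public", "--port", "8080"]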
I don’t think this is needed to implement censorship. It’s Italy, I know better than to think something is done out of malice when it can be the result of incompetence. I completely believe this is some idiotic implementation of what the football and TV economic powers wanted. Either way, this idea that everything bad is because of “old white men” is bs. We don’t even need Meloni, we can use the dear Iron Lady as an example… class and economic positions count way more than age and gender, ultimately.
I don’t think it’s you, it generally is a bad practice to have multiple processes inside a container. It usually defeats most of the isolation, introduces problems with handling zombie processes (therefore you need an init) and restarting tools when they crash (then you need something like supervisord, which I guess this image might use - I didn’t check). Each software adds dependencies, which can conflict (again defeating the idea of containers), and of course CVEs. Then you have a problem with users etc.
So yeah, containers are generally not meant to be used this way. The project might be cool but I would be very uncomfortable running it like this, especially if that’s going to be my primary email, with all the password resetting capabilities etc.
Some additional benefits are in the management of secrets. In compose you will shove them inside a .env file if not directly inside the compose file, while in Kubernetes you can use the Secret resource or even plug in Vault relatively easily. Stateful storage is also better handled. Named volumes are nasty to keep track of and back up, and it’s not possible to spread them across multiple devices (as in disks), while bind mounts are insecure in general. Kubernetes provides a storage abstraction which is easier to manage.
Obviously the big advantage comes when you want to run stuff on multiple devices to spread the load (or because the one box is saturated), since with compose you would need completely custom and independent setups.
Finally, I would say that running compose makes it much harder to have a monitoring stack supporting your services, since you will need to do all the plumbing for metrics endpoints yourself. And - very last - you can have admission controllers in Kubernetes that prevent certain configurations (e.g. Kyverno with a bunch of default policies), while with compose you need to manually vet every compose file and image (for example, to ensure it doesn’t run as root).
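For example, a stripped-down sketch of a Kyverno policy enforcing non-root pods could look like this (the real policies in the Kyverno policy library are more complete, e.g. they also check container-level securityContext):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root
spec:
  validationFailureAction: Enforce
  rules:
  - name: check-run-as-non-root
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      message: "Pods must set runAsNonRoot to true."
      pattern:
        spec:
          securityContext:
            runAsNonRoot: true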
That said, compose is perfect to get started and to run stuff on one machine.
I did exactly this in https://github.com/Sudneo/Home-ddns, and it has worked pretty well for me for more than a year.
OK, but how do you solve the problem? Trusting an image is not so different from downloading a random deb and installing it, which maybe configures a systemd unit as well. If not containers, you still have to run the application somehow.
Ultimately my point is that containers allow you to do things securely, exactly like other tools. You don’t even have to trust the image, you can build your own. In fact, for almost every tool I add to my lab, I end up opening a PR for a hardened image and a tighter Helm chart.
In any case, I would not expose such an application outside of a VPN, which is a blanket security practice that most selfhosters should follow for most of their services…
They are not as secure, because there are fewer controls for ENV variables. Anybody in the same PID namespace running as the same user (or as root) can cat /proc/PID/environ and read them. For files (say, a config file) you can use mount namespaces and the regular file permissions to restrict access.
Of course you can mess up a secrets implementation, but a chmod’d 600 file owned by another user requires some sort of arbitrary read vulnerability or privilege escalation (assuming another application on the same host is compromised, for example). If you get low-privileged access to the host, chances are you can dump the ENV of quite a few processes.
Security-wise, ENV variables are worse than just a mounted config file, for example.
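To see what I mean on any Linux host (1234 is a placeholder PID; this works for processes running as your user, or for everything if you are root):

tr '\0' '\n' < /proc/1234/environ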
The problem is in fact in the applications. If these support loading secrets from a file, then the problem does not exist. Even with the weak secrets implementation in kubernetes, it is still far better than ENV variables.
The disappointing thing is that in many “selfhost” apps, the credentials to specify are often either db credentials or some sort of initial password, which could totally be read from a file or be generated randomly at first run.
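For reference, this is roughly what mounting a Kubernetes secret as a file looks like, so the app never sees it in its environment (names and values are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  password: changeme
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: db-credentials
      mountPath: /run/secrets/db
      readOnly: true
  volumes:
  - name: db-credentials
    secret:
      secretName: db-credentials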
I agree that the issue is information disclosure, but the problem is that ENV variables are stored in memory, are accessible to many other processes on the same system, etc. They are just not a good way to store sensitive information.
In general, a mounted file would be better, because it is easier to restrict access to filesystem objects both via permissions and namespacing. It is also more future-proof, as the actual ideal solution is to use a secret manager like Vault (overkill for many hobbyists), which can render secrets to files (or to ENV, but the same security issue applies there).
The only thing that makes this case worse in docker is that more info is in ENV variables. The vulnerability has nothing to do with containers though, and using ENV variables to provide sensitive data is in general a bad decision, since they can be leaked to any process with /proc access.
Unfortunately, ENV is still a common way for people to pass data to applications inside containers, but it is not in any way a requirement imposed by the tech.
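Even plain compose can do file-based secrets, provided the application reads the file (paths and names here are placeholders); the secret ends up at /run/secrets/<name> instead of in the environment:

services:
  app:
    image: registry.example.com/app:latest
    secrets:
      - db_password
secrets:
  db_password:
    file: ./secrets/db_password.txt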
OK :)
So chroot has not been used for decades to confine processes to a restricted view of the filesystem (especially in combination with a restricted shell), the network namespace is not used to limit the impact of a compromise on the firewall, and the user namespace is not used to allow privileged processes to run de facto unprivileged.
Whatever you say
EDIT: Actually, if you are really convinced of what you are saying, we can do the following experiment: take an application and run it in a scratch container under an unprivileged user. Then we can compare the kind of impact that wrapping applications in containers has on the security of the system. My guess: even with a full RCE, you will not be able to escape the container.
Half-jokes aside, my stance is that isolation (namespacing and cgroups) lets you greatly reduce the attack surface and contain the blast radius of a compromise, which are security benefits. You can easily have a container with no shell, no binaries at all, no writable paths, a read-only filesystem, etc. You can do at least some of those things on a regular Linux box of course, but it is much more uncommon, much harder, much less convenient (for example, no writable /tmp is going to break a lot of stuff), much more error prone, etc.
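As a sketch of how cheap that is with containers (image name and UID are placeholders):

docker run -d \
  --read-only --tmpfs /tmp \
  --cap-drop=ALL \
  --security-opt no-new-privileges \
  --user 10001:10001 \
  registry.example.com/app:latest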
Your stance, i.e. “running things inside of a container does not provide any security benefits as opposed to outside of the container”, is way too absolute, imo.
Not really true: containers are based on namespaces, which have always also been a security feature. Chroot has been a common “system” technique, after all.
Containers help security if built properly, and it’s easier to build (and run) a container securely than to do proper systemd unit hardening.
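For comparison, approximating that with a plain systemd unit means hand-picking a pile of directives like these (all real systemd options, values purely illustrative), which very few people actually do:

[Service]
DynamicUser=yes
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
PrivateDevices=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
CapabilityBoundingSet=
SystemCallFilter=@system-service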
tl;dr, yes, it does.
Containers are nothing like VMs. Containers in Linux are basically a combination of two features: cgroups, which let you restrict the resources (like memory) available to a process or group of processes, and namespaces. Namespaces are a construct in which certain namespaced resources are separated from each other, and processes can only see those belonging to their own namespace. A simple example is the mount namespace: when you launch a container, you see a / directory which is not the root directory of your system.
Now, the problem is that not all resources are namespaced, so there is still quite a lot that processes within containers can do by interacting with the main system’s resources, especially if they run as root.
A root process within a container can generally do lots of things that an actual root process can do outside of it: for example, mounting parts of the filesystem (if you run with --privileged), loading kernel modules, etc. Podman can run rootless, in the sense that it also uses user namespaces, meaning that user 0 (root) inside a container is actually mapped to something else outside; docker nowadays can do the same.
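You can see that user-namespace mapping with a one-liner (needs unprivileged user namespaces enabled, which most modern distros have):

# Inside: we look like uid 0, but /proc/self/uid_map shows it is really our own UID
unshare --user --map-root-user bash -c 'id -u; cat /proc/self/uid_map'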
So yeah, in general, running applications with the least amount of privileges is a good idea and you should do it whenever you can. Even if you do need some privileges, you should add only the capabilities needed, not just go straight to root.
That can be fun. The benefit of kubernetes is flexibility in the orchestration and (sometimes) scaling. Also, the tooling in Kubernetes is more sophisticated compared to plain containers or manual services.
Kubernetes is basically just a finite-state machine that is able to manage a certain number of nodes as a pool of resources. This adds complexity compared to you managing the scheduling yourself (i.e., I install this service on this box and that one on this other box), but it also allows for much easier automation.
I was thinking of a similar initiative. To be honest, I am dreaming of a future where a network of small non-profits and co-ops can contribute to a large ecosystem of free software.
I have some devops experience but my main occupation has to do with security. I would be very happy to help especially in the security space.
Shared folders are an extremely rare use case for me (in fact, I generally don’t even use folders, as I always rely on search), but the way I achieve it is by creating collections for the people I share with. For example, my “sister” collection, in which I put stuff that I want to share with her and to which I give her access. Also, each user can have their own collections that they manage and can give other people access to (so far, this has never been the case for me).
Thanks (grazie?)! I was looking for something similar and Kanidm looks great feature-wise and simple to deploy!