• 3 Posts
  • 109 Comments
Joined 1Y ago
Cake day: Jun 21, 2023


Interesting solution! Thanks for the info. Seems like Nginx Proxy Manager doesn’t support the PROXY protocol. Lmao, the world seems to be constantly pushing me towards Traefik 🤣


I see. And the rest of your services are all exposed on localhost? Hmm, darn, it really looks like there’s no way to use user-defined networks.


I am guessing you’re not running Caddy itself in a container? Otherwise you’ll run into the same real IP issue.


I see! So I am assuming you had to configure Nginx specifically to support this? Problem is I love using Nginx Proxy Manager and I am not sure how to change that to use socket activation. Thanks for the info though!
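(For anyone finding this thread later: the idea behind socket activation here is that systemd owns the listening socket and hands the file descriptor to the container, so the client IP survives even rootless. A very rough sketch of the unit pair — names and image are made up, and the catch is that the server inside the container has to understand systemd’s LISTEN_FDS handoff, which stock NPM doesn’t, hence needing to configure Nginx specifically:)

```ini
# ~/.config/systemd/user/proxy.socket — systemd owns the port and passes the fd in
[Socket]
ListenStream=443

[Install]
WantedBy=sockets.target

# ~/.config/containers/systemd/proxy.container — Quadlet unit; image is a placeholder
[Container]
Image=docker.io/library/nginx:latest
```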

Man, I often wonder whether I should ditch docker-compose. The problem is there are just so many compose files out there, and it’s super convenient to use those instead of converting them into systemd unit files every time.
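(For what it’s worth, newer Podman’s Quadlet generator makes the unit-file side less painful than hand-writing `[Service]` blocks: you drop a small `.container` file under `~/.config/containers/systemd/` and systemd generates the service. A minimal sketch — the service name and image are made up:)

```ini
# ~/.config/containers/systemd/whoami.container — hypothetical example service
[Unit]
Description=Example container via Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, it starts like any other unit with `systemctl --user start whoami`.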


Yeah, I thought about exposing ports on localhost for all my services just to get around this issue as well, but I lose the network separation, which I find incredibly useful. Thanks for chiming in though!


Pasta is the default, so I am already using it. It seems like for bridge networks, rootlesskit is always used alongside pasta and that’s the source of the problem.


How do you guys handle reverse proxies in rootless containers?
I've been trying to migrate my services over to rootless Podman containers for a while now, and I keep running into weird issues that always make me go back to rootful. This past weekend I almost had it all working, until I realized that my reverse proxy (Nginx Proxy Manager) wasn't passing the real source IP of client requests down to my other containers. This meant that all my containers were seeing requests coming solely from the IP address of the reverse proxy container, which breaks things like Nextcloud brute force protection, etc. It's apparently due to this Podman bug: https://github.com/containers/podman/issues/8193

This is the last step before I can finally switch to rootless, so it makes me wonder what all you self-hosters out there are doing with your rootless setups. I can't be the only one running into this issue, right?

If anyone's curious, my setup consists of several docker-compose files, each handling a different service. Each service has its own dedicated Podman network, but only the proxy container connects to all of them to serve outside requests. This way each service is separated from the others, and the only ingress from the outside is via the proxy container. I can also easily have duplicate instances of the same service without having to worry about port collisions, etc. Not being able to see real client IPs really sucks in this situation.
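(In compose terms, the per-service-network layout I described looks roughly like this — the network and service names are just illustrative:)

```yaml
# proxy/docker-compose.yml — the only container attached to every service network
services:
  proxy:
    image: docker.io/jc21/nginx-proxy-manager:latest
    ports: ["80:80", "443:443", "81:81"]
    networks: [nextcloud_net, jellyfin_net]

networks:
  nextcloud_net:
    external: true    # created by the nextcloud stack
  jellyfin_net:
    external: true    # created by the jellyfin stack
```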

Man, I have GOT to try TrueNAS Scale one of these days. I see it recommended so often, but I was just too used to a standard Linux ecosystem to bother learning something new. I am assuming it gets you closer to the feel of a pre-built NAS for administration tasks compared to Cockpit and an SSH session lmao.

I think I am just always afraid of being locked into a specific way of doing things by a vendor. I feel like I would get annoyed if something I could do easily on standard Linux was harder to do on TrueNAS Scale.


I have zero trust in QNAP. QNAP knowingly sold several NASes with a known clock-drift defect in their Intel J1900 CPUs and then refused to provide any support. A bunch of community members had to figure out how to solder a resistor to temporarily revive their bricked NASes in order to retrieve their data. https://forum.qnap.com/viewtopic.php?t=135089

I had a TS-453 Pro and my friend had a TS-451. Both mine and his exhibited this issue and refused to boot. After this debacle and the extreme apathy from their support, I vowed to never buy a pre-built NAS.


You shouldn’t trust ANY brand’s pre-installed OS when it comes to your personal data to be honest.


The preloaded spyware OS

Nowhere in that video did it say this. I am all for DIY NAS, and I have an Arch-based one at home, but saying this while implying it’s what the linked video said is a bit disingenuous.

To be honest, nothing about this UGREEN is any different from the other off-the-shelf NAS solutions out there like QNAP, Synology, etc. If you don’t trust UGREEN’s pre-installed OS, you shouldn’t trust any of the others either. I am not saying you should, but my point is that this is pretty par for the course as far as pre-built NASes go.

Most companies do not provide support if you install a custom OS. That isn’t a sign of vendor lock-in, just a matter of keeping support feasible in the long-term, especially since they’re relatively new at this. If you want a custom OS, it is far easier and cheaper to just build your own.


I use podman with the podman-docker compatibility layer and native docker-compose. Podman + podman-docker is a drop-in replacement for actual docker. You can run all the regular docker commands and it will work. If you run it as rootful, it behaves in exactly the same way. Docker-compose will work right on top of it.

I prefer this over native Docker because I get the best of both worlds. All the tutorials and guides for Docker work just fine, but at the same time I can explore Podman’s rootless containers. Plus I enjoy its integration with Cockpit.
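(Concretely, the drop-in setup boils down to a few commands — package names here are from Fedora, so Debian-based distros will differ a bit:)

```shell
sudo dnf install podman podman-docker docker-compose
# podman-docker installs a 'docker' shim that forwards everything to podman
sudo docker run --rm docker.io/library/alpine:latest echo hello
# compose needs the docker-compatible API socket that podman provides:
sudo systemctl enable --now podman.socket
sudo docker-compose up -d
```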


Cockpit definitely has the ability to create bridge devices. I haven’t found a tutorial specifically for Cockpit, but you can follow something like this and apply the same principles to the “Add Bridge” dialog in Cockpit’s network settings.


Your containers show up in Cockpit under the “Podman containers” section and you can view logs, type commands into their consoles, etc. You can even start up containers, manage images, etc.

Are there any tutorials on how to do this from Cockpit?

I have not done this personally, but I would assume you need to create a bridge device in Network Manager or via Cockpit and then tell your VM to use that. Keep in mind, bridge devices only work over Ethernet.
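(Under the hood, Cockpit’s “Add Bridge” dialog just creates a NetworkManager bridge connection; the equivalent nmcli commands would be roughly the following — `br0` and `enp3s0` are assumed names for the bridge and your physical NIC:)

```shell
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type bridge-slave ifname enp3s0 master br0
nmcli con up br0
# then point the VM's network interface at br0 instead of the default NAT
```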


Yeah, I pay for Netflix, Hulu, and Amazon Prime and yet, I am still downloading shows that are on those services because their shitty DRM schemes limit me to 720p. It’s insanity.


I am using it as a migration tool tbh. I am trying to get to rootless, but some of the stuff I host just doesn’t work well in rootless yet, so I use rootful for those containers. Meanwhile, I use rootless for dev purposes or when testing out new services that I am unsure about.

Podman also has good integration into Cockpit, which is nice for monitoring purposes.


It isn’t that much better. I use it as a drop-in Docker replacement. It’s better integrated with things like Cockpit though, and the idea is that it’s easier to eventually migrate to rootless if you’re already in the Podman ecosystem.


podman-compose is different from docker-compose. It runs your containers in rootless mode. This may break certain containers if configured incorrectly. This is why I suggested podman-docker, which allows podman to emulate docker, and the native docker-compose tool. Then you use sudo docker-compose to run your compose files in rootful mode.


If you use firewalld, both docker and podman apply rules in a special zone separate from your main one.

That being said, podman is great. Podman in rootful mode, along with podman-docker and docker-compose, is basically a drop-in replacement for Docker.


Thanks! Yeah, I am already using an Nginx reverse proxy in a Docker container to expose my other Docker containers, so I was thinking two reverse proxies in a row might be too inefficient. Will definitely look into nftables. nftables rules are temporary though, right? What’s the correct way to automate running these rules on boot?
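(For anyone else wondering the same thing: yes, rules added at the `nft` command line vanish on reboot. The usual fix is to keep the ruleset in `/etc/nftables.conf` and enable the distro’s `nftables` systemd service, which runs `nft -f` on that file at boot. A sketch of the file format — written to `/tmp` here purely to illustrate, and the open ports are examples:)

```shell
# Sketch of a persistent ruleset file; on a real system this lives at
# /etc/nftables.conf (Debian/Arch) or /etc/sysconfig/nftables.conf (Fedora).
cat > /tmp/nftables.conf <<'EOF'
#!/usr/sbin/nft -f
flush ruleset
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    ct state established,related accept
    iif "lo" accept
    tcp dport { 22, 80, 443 } accept
  }
}
EOF

# Then, on the real system, as root:
#   cp /tmp/nftables.conf /etc/nftables.conf
#   systemctl enable --now nftables
echo "ruleset written: $(grep -c 'accept' /tmp/nftables.conf) accept rules"
```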


I was thinking the same thing regarding a VPS and Wireguard. I use Wireguard personally to VPN into my home network for remote management, but I still haven’t looked up how to set up a VPS as a proxy using it. I know they can join the same network and talk with each other, but what’s the best way to route ports 80 and 443 on the VPS to my server at home? iptables?
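(For anyone curious, the iptables approach would look roughly like this — `eth0`/`wg0` as interface names and `10.0.0.2` as the home server’s WireGuard address are assumptions. Note the MASQUERADE means the home server sees the VPS as the client, so the real client IP is lost unless the proxy on the VPS adds forwarding headers or the PROXY protocol:)

```shell
# On the VPS: forward inbound 80/443 down the WireGuard tunnel
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp -m multiport --dports 80,443 \
  -j DNAT --to-destination 10.0.0.2
iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE   # so return traffic routes back
```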


Not OP, but I’ve been looking into Cloudflare tunnels on my end as well and ended up not going with them because you’re forced to use their own certs so they can decrypt and see the data. I mean most likely they aren’t doing anything untoward, but it’s still a consideration with regards to data privacy.


Yep! I was surprised at how power efficient the build was myself. It really pays to go with an APU both because it doesn’t go ham with the core count and clocks and also because you don’t need an external GPU. As long as you’re just doing light to medium loads and not transcoding at maximum speed 24/7, your power usage will be fine.


The reader itself leaves a lot to be desired though. There’s literally no UI besides the arrow keys and no way to configure font rendering etc. It’s cool that the functionality is there, but it needs work.


It idles anywhere from 28-33W, but when it’s doing heavy processing it spikes up to the full power consumption of the CPU (the max I’ve seen is around 120W according to my UPS). I run it in the Balanced performance profile, so there’s essentially no limiter on power consumption. I figured that since I spent all this money on a CPU, I might as well take advantage of its processing power when I need it.

Lately I’ve been running a 24/7 Palworld server, and that is constantly running at 65%-85% CPU (out of a possible 1200%). My UPS reports 45W.

If Palworld isn’t running and someone watches movies off of my Jellyfin, usage is around 40W-50W when doing transcoding, and 35W when doing direct play.


I went with Arch Linux, mostly because I am the most familiar with it, I can keep it barebones, and rolling updates are generally easier for me to deal with than large break-the-world distro upgrades. All my services run in Podman containers, so they’re completely isolated from any library versioning issues.


DIY NAS all the way. I had a QNAP that had a known manufacturing defect in the Intel CPU and QNAP refused to provide any support or repair options despite knowing about the issue for a long time. I will never again bow down to silly corporate shenanigans when it comes to my data.

My DIY NAS is a bit…unconventional and definitely doesn’t fit in your budget requirement, but I’ll leave the parts list as an interesting thought experiment: https://pcpartpicker.com/list/Lm92Kp

…okay, look, I know it’s a bit crazy. No, it’s A LOT crazy. But I genuinely feel like it isn’t worth dealing with HDDs anymore when building a NAS. Back when I was using the QNAP, I had to replace each HDD at least twice, at $90-$100 per drive. An NVMe SSD can easily outlast two or three HDDs, and you can get the MSI Spatiums on sale for $180, so in the long term the costs even out. And the speed at which an NVMe array performs during scrubs and rebuilds blows a regular HDD array out of the water. Yes, it’s a higher up-front cost, but an immensely better experience. Plus, a PCIe bifurcation expansion card is a hell of a lot smaller than 4 HDDs, so it opens up your case selection for more compact builds.

I got the NZXT H1 because it was easy to build in, came with a cooler and PSU, and just made things simple. It also goes on sale for around $180. You can definitely go with something else entirely. My thought process was that if I ever wanted a compact PC, I could repurpose this case. This is just for me; it is not a hard recommendation.

I picked the Ryzen 5600G because it was relatively cheap, decently powerful, and has hardware H.265 and H.264 decode plus H.264 encode, which is basically what you need for Jellyfin or Plex. Just be aware that it only supports up to x4x4x8 PCIe bifurcation, so if you do go with an NVMe expansion card, you can only put 3 drives on it and will have to use a mobo slot for the 4th. That’s how mine is currently set up and it works great.

Yeah, it’s crazy, and I am sure some people here will scoff at the build, but after using it for 3 years, I just can’t go back to regular HDD performance. An NVMe array just makes all of the services you host fly.


I run Nextcloud and two Jellyfin instances behind Nginx Proxy Manager. I also run a Palworld server. All of them are running under Podman. I use Cockpit for checking container status, logs, and viewing the console for each container. I also use docker-compose to create all of my containers (using podman-docker, of course). Unfortunately, all of them run rootful instead of rootless, mostly because most proxies require root, and the rootless setup chores, like enabling low ports for regular users and allowing processes to run after logout, are a pain in the ass.


My DIY NAS runs Arch

  • LTS kernel
  • BTRFS snapshots on root fs
  • 4-drive NVMe array using ZFS raidz1
  • podman for my docker containers

It’s been working fantastically so far.
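(For reference, a 4-drive NVMe raidz1 pool like that is created along these lines — the pool name and device paths are assumptions:)

```shell
zpool create -o ashift=12 tank raidz1 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
zfs set compression=lz4 tank
```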


I felt the exact same way. So many comments online told me that running Arch as a home NAS was insane, but after the Jupiter Broadcasting guys did it without much issue, I decided to give it a go and was pleasantly surprised. I think if most of your stuff runs in Docker and you have BTRFS snapshots of your root filesystem, the system is pretty much bulletproof. The rolling updates also mean you’ll never have huge upgrade cycles that are a pain in the ass to migrate through. You’re always dealing with small, manageable fires instead of large, complicated ones, and that’s a plus.


Out of curiosity, what reverse proxy container do you use that can run rootless in Podman? My main issue, and feel free to correct me if I am wrong, is that most of them require root. And then it’s not possible to easily connect those containers to the same network as your rootless containers, so your other containers have to be root anyway. I don’t really want my other containers to be host-accessible; I want them accessible only from within the Podman network that the reverse proxy is on.

And then there are issues where you have to enable lingering processes for normal users and also let them bind ports below 1024, it makes using docker-compose a pain, etc. I haven’t really found a good solution for rootless, but I really want to move that way eventually.
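(For completeness, the two rootless workarounds I mentioned boil down to two commands — run as root, and the port number is whatever low port you need:)

```shell
# Allow unprivileged users to bind ports >= 80 (persist via a /etc/sysctl.d/ file)
sysctl net.ipv4.ip_unprivileged_port_start=80
# Let a user's services keep running after they log out
loginctl enable-linger "$USER"
```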


I use the docker compose file with apache, mariadb, and redis, and it is still a bit slow even on a DIY NAS with a Ryzen 5600G.



Lol, you know you’re getting old when other people start questioning cinema aspect ratios.


I started with Docker and then migrated to Podman for the integrated Cockpit dashboard support. All my docker-compose files work transparently on top of rootful Podman so the migration was relatively easy. Things get finicky when you try to go rootless though.

I say try both. Rootful podman is gonna be closest to the Docker experience.


My only issue with rootless is that SWAG doesn’t work with it, otherwise my other containers could be rootless. However, I heard connecting rootful and rootless containers is impossible so all my containers are rootful right now.





Thanks! The piKVM does look very interesting, and its open-source nature gives me more peace of mind too.



What are some KVM-over-IP or equivalent solutions you guys would recommend for guaranteed remote access and remote power cycle?
Currently, I have SSH, VNC, and Cockpit set up on my home NAS, but I have run into situations where I lose remote access because I did something stupid to the network connection or some update broke the boot process, leaving it stuck in the BIOS or bootloader. I am looking for a separate device that not only lets me access the NAS as if I had another keyboard, mouse, and monitor present, but also lets me power-cycle it in extreme situations (hard freeze, etc.). Some googling has turned up the term KVM-over-IP, but I was wondering if any of you have trustworthy recommendations.

[SOLVED] If I am using the SWAG proxy in front of a Nextcloud instance, is it safe to ignore some of the warnings in the admin page?
I am using [one of the official Nextcloud docker-compose files](https://github.com/nextcloud/docker/tree/master/.examples/docker-compose/insecure/mariadb/apache) to set up an instance behind a SWAG reverse proxy. SWAG is handling SSL and forwarding requests to Nextcloud on port 80 over a Docker network. Whenever I go to the Overview tab in the Admin settings, I see this security warning:

```
The "X-Robots-Tag" HTTP header is not set to "noindex, nofollow". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.
```

I have X-Robots-Tag set in SWAG. Is it safe to ignore this warning? I am assuming Nextcloud complains because it still thinks it's communicating over an unsecured port 80 and isn't aware that it's only talking via SWAG. Maybe I am wrong though. I wanted to double-check and see if there was anything else I needed to do to secure my instance.

**SOLVED:** Turns out Nextcloud is just picky about what's in X-Robots-Tag. I had set it to SWAG's recommended value of `noindex, nofollow, nosnippet, noarchive`, but Nextcloud expects `noindex, nofollow`.
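(For anyone hitting the same warning, the fix is just setting the exact header value Nextcloud expects in SWAG's nginx config; which proxy conf file the directive lives in depends on your SWAG setup:)

```nginx
# In the server/location block that proxies Nextcloud
add_header X-Robots-Tag "noindex, nofollow" always;
```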