Raspberry pi4 Docker:- gluetun(qBit, prowlarr, flaresolverr), tailscale(jellyfin, jellyseerr, mealie), rad/read/sonarr, pi-hole, unbound, portainer, watchtower.

Raspberry pi3 Docker:- pi-hole, unbound, portainer.

  • 1 Post
  • 29 Comments
Joined 1Y ago
Cake day: Jun 26, 2023


I remember Watchtower helpfully stopping Pi-hole before pulling the new image when I only had the one instance running… all while I was out at work with the fiancée on her day off. So many teaching moments in so little time.
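For anyone else who gets bitten: Watchtower has a documented opt-out label you can put on containers you'd rather update by hand. A minimal sketch (the rest of the flags are whatever you already run):

```
# Recreate Pi-hole with Watchtower's opt-out label so it never gets auto-updated.
docker run -d --name pihole \
  --label com.centurylinklabs.watchtower.enable=false \
  pihole/pihole:latest
```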


A good general suggestion. The WAF criteria I go by are 'reasonable' expense, reasonable form factor, and a physical investment. I floated the idea of a VPS and that's when I learned of the third criterion. It is what it is.

I just started on this 8TB HDD so it isn't very full right now; I could raise the ratio limits. But I worry about filling the HDD, and part of me worries about hundreds of torrents on an N100 that's doing other things. So I'm keeping the habit from my Pi 4 + 1TB days of deleting media once we're done with it and keeping the torrent count low.

I justify it as self-managing though: popular ISOs are on and then off my hard drive fairly quickly, but the ones that need me will sit and wait until they hit the ratio of 3, however long that takes. I would like to do "3 + (get that last seeder to 100%)" but I don't know how, or if, it's possible to automate that through Prowlarr.


I should probably keep sharing Linux ISOs longer than I do, but data hoarding has a low WAF. Instead I have Prowlarr set the ratio limit to 3 (one for me, one for a leecher, and one to add to the pool) to keep the data churning.


The Fire Stick is what I chose as the Jellyfin client for my TV, a 10-year-old LG. It works as intended, better really. One day I'll block the stick's internet connection and it'll be the almost perfect device, in that it plays almost anything natively. My server is an RPi 4, so anything I can do to stop transcoding, I do.


The Aoostar N100 2-bay NAS is what I'm currently thinking about. Or the same device rebadged under another name.

Pros: N100 for QuickSync. Two bays of HDD for media storage. Low power at idle. Cheap for a box with all the relevant codecs plus SATA storage. High WAF compared to other HTPCs.

Cons: Unknown brand for build quality and BIOS updates. General Chinese-hardware security anxieties. Idle power, while low, is higher than other N100 options. The fan isn't PWM. Personally, I don't like the aesthetics.


Favourite game - 1, it was the first one I played and the one I’m most familiar with.

Least favourite is 3; it was the first game I encountered with day-one DLC, so I didn't get any. It was the last ME game I bought too. Joke's on me, I guess, because I got the remaster instead.

I enjoyed KOTOR and KOTOR II before it and I was hoping for more of the same, more HK-47 really. No HK, but the loop is familiar: go to a planet, do some quests, X person wants to talk, and on to the next.

Femshep is the only shep for me.


I guessed it was a "once bitten, twice shy" kind of thing. This is all a hobby to me, so the cost-benefit calculation, I think, is vastly different; nothing in my setup is critical. Keeping all those records, staying on top of what version everything is on, when updates are available, what those updates do, and so on… sounds like a whole lot of effort when my efforts can currently be better spent in other areas.

In my arrogance I just installed Watchtower and accepted that it can all come crashing down. When that happens I'll probably realise it's not so much effort after all.

That said I’m currently learning, so if something is going to be breaking my stuff, it’s probably going to be me and not an update. Not to discredit your comment, it was informative and useful.


When I asked this question, this was the answer I got:

So there are many reasons, and this is something I nowadays almost always do. But keep in mind that some of us have used Docker for our applications at work for over half a decade now. Some of these points might be relevant to you, others might seem or be unimportant.

  • The first and most important thing you gain is a declarative way to describe the environment (OS, dependencies, environment variables, configuration).
  • Then there is the packaging format. Containers are a way to package an application with its dependencies and distribute it easily through Docker Hub (or other registries). Redeploying is a matter of running a script and specifying the image and the tag of the image (never use latest). You will never ask yourself again "What did I need to do to install this again? Run some random install.sh script off a GitHub URL?".
  • Networking with Docker is a bit hit and miss, but the big thing is that you can have whatever software running on any port inside the container and expose it on another port on the host. E.g. two apps both run on port 8080 natively, so one of them will fail to start because the port is taken. With containers you can keep them running on their preferred ports but expose one on 18080 and the other on 19080 instead (see the sketch after this list).
  • You keep your host simple and free of installed software and packages. This is less of a problem with apps that come packaged as native executables, but there are languages out there that require you to install a runtime to be able to start the app. Think .NET or Java, and there is also Python, which requires you to install it on the host and keep the versions compatible (there are virtual environments for that, but I'm going into too much detail already).
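To make the tag and port points concrete, a redeploy boils down to something like this. The image name and tag here are purely hypothetical, and the ports are just the 8080-clash example from above:

```
# Pull a specific, pinned tag rather than 'latest' so redeploys are reproducible.
docker pull ghcr.io/example/webapp:1.4.2   # hypothetical image and tag

# The app listens on 8080 inside the container, but something on the host
# already uses 8080, so publish it on 18080 instead. Config lives in a
# bind-mounted directory so it survives the container being replaced.
docker run -d --name webapp \
  -p 18080:8080 \
  -v /srv/webapp/config:/config \
  ghcr.io/example/webapp:1.4.2
```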

I am also new to self-hosting (check my bio and post history for a giggle at how new I am), but I have taken advantage of all these points. I do use "latest" though; looking forward to seeing how that burns me later on.

But to add one more: my system is robust, in that I can really break my containers (and I do), and recovering is a couple of clicks in Portainer. Then I can try again, no harm done.
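Outside Portainer, the equivalent of those clicks is a one-liner, assuming the stack came from a compose file (the service name here is just an example):

```
# Throw away the broken container and recreate it from the same image and config.
docker compose up -d --force-recreate pihole
```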


Finally got it set up and pointed Prowlarr at it, which synced to Sonarr and Radarr, but not Readarr or Lidarr. I couldn't manually point Readarr at it either without getting a

Query successful, but no results in the configured categories were returned from your indexer. This may be an issue with the indexer or your indexer category settings

which is a shame. Still a potentially powerful bit of kit regardless.


I use Mullvad and have qBit go through gluetun. I don't mind the lack of port forwarding, as I leave the Pi on 24/7 and I'm not under ratio constraints. Also, my system isn't secure enough for me to be messing with that stuff; next build I'll get everything off root, set proper permissions, route everything through a single port, etc., and then think about port forwarding. For now I'll hide behind my ISP's and Mullvad's security while I learn and make mistakes.
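For anyone setting up the same thing, the arrangement is roughly this shape. The provider variables come from gluetun's documentation, the key and address are placeholders, and the qBit image and paths are only examples:

```
# gluetun owns the VPN tunnel and the network namespace; qBit's web UI port
# is published on the gluetun container, not on qBittorrent itself.
docker run -d --name gluetun \
  --cap-add=NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=mullvad \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<your key> \
  -e WIREGUARD_ADDRESSES=<your address> \
  -p 8080:8080 \
  qmcgaw/gluetun

# qBittorrent shares gluetun's network stack, so all its traffic goes through the VPN.
docker run -d --name qbittorrent \
  --network=container:gluetun \
  -v /mnt/hdd/torrents:/data/torrents \
  lscr.io/linuxserver/qbittorrent
```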

Down is quick enough for me and Up is slow but constant.



My Unbound is on v1.13.1 (Raspbian) after an update/upgrade. I've read it lags behind the main release by a lot; should I trust the process that everything is fine?


Ah, I knew it was bypassing the Pi-hole; I thought it was IPv6. I think I made the mistake of changing more than one thing at once: what I did worked and I moved on to the next bit of functionality I was chasing. I'll try enabling IPv6 on the Pi-hole; at least I'll know that if I get ads with it on, it's not IPv6.


You have cleared up a lot of misconceptions for me. I have not been port forwarding; I haven't learned how yet. I think I'm good. I don't mind breaking functional stuff, and I have broken a lot already, but I really don't want to explain to my fiancée that the reason someone is in her bank account is because I wanted to watch Samurai Jack.

I have been keeping it as insular as possible for this reason, and the next thing I intend to learn is how to make it more insular by putting the Pi on a subnet of its own. Actually, thank you for writing that up. I have been actively resisting using people for IT support, as I know it takes time. I have been trying to find everything I can, but there isn't much, and what there is assumes knowledge I don't have.

There’s a comment with a list of stuff to do that I’ve saved. So I’ll probably start knocking that out one by one.


When it was active I was getting ads. I disabled the Pi-hole, registered an increase in traffic, and there were no more ads. I don't know why. It's working as it is and I'll tinker when I know more.



Both Pis have static IPs.

I asked the *arrs to talk to each other, and when they didn't work (and only when they didn't work) I "ufw allow"ed the relevant port.

I just want to patch up my firewall layer as best I can, and then start building security layers on top/below it as I learn how.

So I told Sonarr that qBit is at 192.168…:port. The test failed, "ufw allow port", then the test passed. Could I instead have told Sonarr that qBit is at 172.18…:port (the Docker network address) and then closed up the firewall? Or can I set them all to "ufw limit"? Or set the firewall to only allow local traffic… You get the idea: I know enough to be dangerous but not enough to ask the right questions.
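To make the 172.18… idea concrete, this is the kind of thing I mean (container names are just examples). As I understand it, containers on the same user-defined Docker network can reach each other by name with no host port or ufw rule involved, and ports published with -p are handled by Docker's own iptables rules, which can sidestep ufw entirely:

```
# Put Sonarr and qBittorrent on the same user-defined bridge network.
docker network create arr-net
docker network connect arr-net sonarr
docker network connect arr-net qbittorrent

# Then, in Sonarr's download client settings, use the container name instead
# of a LAN IP: host 'qbittorrent', port 8080. No ufw hole needed for that hop.
```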


I don't know, and what's more I don't know how to check. Whichever is most likely?

The ISP's plastic box didn't allow custom DNS, so I disabled its DHCP and IPv6. On the Pi-hole I enabled DHCP with IPv6 disabled.

I know, I know enough to be dangerous now, and I’m trying to get the system through my dangerous phase. I don’t think I know enough to ask intelligent questions yet…


ISP modem. I have a pi3 running pihole-dhcp-unbound, ufw and log2ram.

My system is a Pi 4 running the *arrs, qBit, fail2ban and Portainer in Docker, plus ufw for now. Use case: access the *arrs via mobile phone, let them do their thing, and manually play files via HDMI or move them via thumb drive. I was thinking of giving up the phone access to put them on their own network, but subnets are beyond my ken for now.

Hoping to increment my security, and then the system as my skills develop.

Edit: qBit and Prowlarr are behind gluetun set up for Mullvad. I'm in the UK so I had to put the indexer behind a VPN. UFW


Just trying to keep outside/malicious actors from getting into my stuff while also being able to use my stuff. More safer is more better, but I'm trying to balance that against my poor technical ability.

My priority list is free > easy > usable > safe. Using UFW seemed to fit, but you're right, punching holes in it defeats the purpose, which is why I wanted to allow only the local network and have only the necessary ports open. You have given me lots of terms to Google as a jumping-off point, so thank you.


Uncomplicated firewall rule set for a *arr stack.
I set up an *arr stack and made it work, and now I'm trying to make it safe - the objectively correct order. I installed Uncomplicated Firewall on the system to pretend to protect myself, and opened ports as and when I needed them. So I'm minded to fix my firewall rules, and my question is this: given that there's a more sensible ufw rule set, what is it? I have looked online and couldn't find any answers. Is it "limit 8080", "limit 9696", "limit …" etc., or "open", or "allow 192.168.0.0/16" - and would I have to allow my Docker subnet as well? To head off any "why didn't you <brilliant idea>?": it's because I'm dumb. Cheers in advance.
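For context, the kind of thing I mean by "a more sensible rule set" is default-deny plus allowing only the LAN to the handful of web UI ports. The subnet and ports below are assumptions (192.168.0.0/24 and the stock *arr/qBit ports), so adjust to taste:

```
# Default deny inbound, allow outbound.
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Only the local subnet may reach the web UIs (stock ports shown).
sudo ufw allow from 192.168.0.0/24 to any port 8080 proto tcp   # qBittorrent
sudo ufw allow from 192.168.0.0/24 to any port 9696 proto tcp   # Prowlarr
sudo ufw allow from 192.168.0.0/24 to any port 8989 proto tcp   # Sonarr
sudo ufw allow from 192.168.0.0/24 to any port 7878 proto tcp   # Radarr
sudo ufw allow from 192.168.0.0/24 to any port 22 proto tcp     # SSH, LAN only
sudo ufw enable
```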

Current obstacle: DockSTARTer qBittorrent immediately flips torrents to 'errored'. Edit with current progress: the bottom left shows "free space: unknown", so I think it's a storage issue. "sudo lsblk" has sdb1 mounted to /mnt/hdd correctly, I think. The "storage" volume in Portainer is set to /mnt/hdd, so I think that's correct. The storage path in qBit is set correctly as well, I think: /data/torrents. I think I've set permissions to allow things to happen to the HDD ("sudo chmod 777 /mnt/hdd" on the Pi's CLI). I don't know if I was supposed to give Docker those permissions somehow; I haven't been smart enough to find anything in any documents.
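The checks I'm working through, in case they help anyone else. The container name and the /mnt/hdd to /data mapping are my setup, so adjust for yours:

```
# Is the drive actually mounted and writable on the host?
ls -ld /mnt/hdd
touch /mnt/hdd/writetest && rm /mnt/hdd/writetest

# Does the container see the disk, or just an empty folder on the SD card?
docker exec qbittorrent df -h /data/torrents

# Do the owner/permissions on the download path match the PUID/PGID the container runs as?
docker exec qbittorrent ls -ldn /data/torrents
```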

Yay learning


Matches my experience. It doesn’t matter what guide I’m following, I seem to have to troubleshoot every other step. On the plus side, stumbling over every obstacle possible has been a great learning experience and I am primarily doing this as an exercise… Fuck me would I like something to just work though.



I am sorry, I am but a worm just starting Docker and I have two questions.

Say I set up pihole in a container. Then say I use Pihole’s web UI to change a setting, like setting the web UI to the midnight theme.

Do changes persist when the container updates?

I am under the impression that a container updating means the old one is deleted and a fresh install takes its place, so all the changes in settings vanish.

I understand that I am supposed to write files to define parameters of the install. How am I supposed to know what to write to define the changes I want?
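To make the question concrete, what I'm imagining is something like this, using the pihole/pihole image's documented config mounts (/etc/pihole and /etc/dnsmasq.d) with the tag left as a placeholder; presumably the bind-mounted directories are what survives an update:

```
# Config lives on the host; the container can be deleted and recreated
# from a newer image without losing the web UI settings (theme included).
docker run -d --name pihole \
  -v /srv/pihole/etc-pihole:/etc/pihole \
  -v /srv/pihole/etc-dnsmasq.d:/etc/dnsmasq.d \
  -p 53:53/tcp -p 53:53/udp -p 80:80/tcp \
  pihole/pihole:latest   # pin a real tag in practice
```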

Sorry to hijack, the question doesn’t seem big enough for its own post.


Music is my main media and I pay Deezer specifically to recommend me stuff.

For games, I’m on an emulation streak, so top X listicles. Mostly, I’m racking my brain for things I thought were interesting but never played/didn’t finish/want to experience again.

For TV and movies (and new games) I care a lot less, and I mean a lot less, so word of mouth. If all my colleagues, family, and friends are raving about the new hotness, I'll find out.


I have nothing to add, and an upvote isn’t enough. Truly, thank you for your time, there’s a lot to think about.

I think for this initial iteration I'm going to install directly, in the name of keeping it simple. Next go around I'll try containerising, just to learn if nothing else. If I outgrow the Pi 4 they'll be good skills to have.


Thanks. I already have Log2Ram running to prolong the life of the SD. My planned disaster relief is a spare SD, already set up and taped to the box, ready to swap and reboot in an emergency. SD cards are cheap, so chucking <£10 at the setup once in a while is no big thing. A fresh install on the new SD lets me improve on what I've already done (for example, the new SD will run DietPi instead of Raspbian) and reinforce skills. Less time efficient, but that's no matter when the box is working and it's a hobby. I can then keep the old SD card taped inside the case as a physical backup. Perhaps more expensive in the long run, but an SD card taped to the inside of the case with simple instructions is an easy sell to the fiancée.
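Making the spare is just a clone of the working card. A rough sketch with dd, assuming the source card is /dev/mmcblk0 and the spare sits in a USB reader at /dev/sda (check both with lsblk, and ideally image the card with the Pi shut down or from another machine so the filesystem isn't changing underneath you):

```
# Image the working card, then write that image to the spare.
sudo dd if=/dev/mmcblk0 of=/mnt/hdd/pi-backup.img bs=4M status=progress
sudo dd if=/mnt/hdd/pi-backup.img of=/dev/sda bs=4M status=progress
```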

My experience with guides has shaken my confidence quite a bit. Which is fine; I'll get over myself, and the point is to learn, so me hitting snags is a good thing. But until I have a functioning backup I'm not going to be fucking with it. Facebook cannot go down on account of my education.

But if I may, I have one question: a bunch of recommendations have the setup "segregated" (I don't know the right word) in Docker and Portainer, but I don't understand the rationale. I wasn't intending on doing this, instead opting to install Pi-hole, Log2Ram, UFW, and the… other… software directly on the OS for simplicity. Why would one set up Pi-hole et al. in containers instead of directly?

My current setup is Raspbian running Pi-hole as the ad, tracker, and malware block plus DHCP (the ISP router is a Sky2 box, so no IP or DNS customisation), Log2Ram, and Uncomplicated Firewall.


I went with a Pi running Pi-hole. I got it as a project where the tool is the project. But it's essential infrastructure now and I don't want to mess with it in case I break it. I'm an idiot with a poor history with Pi guides so far, so I will break it. It's running the ad blocking fine; I assume it's doing the tracker and malware blocking fine too.

Sadly, that's where I leave the project for now. I had intended to give it an HDD and some… other… software, but I really don't want to break it. I tried convincing the better half that I obviously need to N+1, but she wisely did not see reason.


I have the RG35XX + a 128GB SanDisk SD card + GarlicOS + the "Best Set Go" set. As a toe-in-the-water setup it's awesome: four stops to have damn near everything PS1 and earlier worth playing, all for under a hundred. Just add the few nostalgia hits that are missing and you're done. Like, Worms, hello?

I picked it over the Miyoo Mini Plus because it was cheaper on the day. They seem much of a muchness; for the most part the USPs of either are meh to me, though I did use the mini HDMI for some level/life shenanigans once… I think I would have been happy with whichever one I got.

It certainly has its quirks with the menu button. I never tried analogue-stick games with an external controller, because I've got tonnes of games to get through before I think about Ape Escape.