Captain’s note: This OC was originally posted on Reddit, but its quality makes me want to ensure a copy survives on Lemmy as well.
We will set up the following applications in this guide:
Once you are done, your dashboard will look something like this.
I started building my setup after reading this guide https://www.reddit.com/r/Piracy/comments/ma1hlm/the_complete_guide_to_building_your_own_personal/.
You don’t need powerful hardware for this setup. I use a decade-old computer with the following hardware; a Raspberry Pi works fine too.
I will be using Ubuntu Server in this guide. You can pick whichever Linux distro you prefer.
Download Ubuntu Server from https://ubuntu.com/download/server. Create a bootable USB drive using Rufus or any other software (I prefer Ventoy). Plug the USB drive into your computer, select it from the boot menu, and install Ubuntu Server. Follow the steps to install and configure Ubuntu, and make sure to check “Install OpenSSH server”. Don’t install Docker during the setup, as that would install the snap version.
Once the installation finishes, reboot and connect to your machine remotely over SSH.
ssh username@server-ip
# username is the one you selected during installation
# Type ip a on the server to find out its IP address. It will be listed against a device like **enp4s0**, prefixed with 192.168.
I keep all my media at ~/server/media. If you will be using multiple drives, you can look up how to mount them.
We will be using hardlinks: once a torrent finishes downloading, it is linked into the media directory as well as the torrents directory without using double the storage space. Read the TRaSH Guides for a better understanding.
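As a quick sanity check, here is a sketch of how hardlinks behave. The directory and file names are made up for the demonstration; it runs in a throwaway temp directory.

```shell
# Create a throwaway directory with a fake "downloaded" file.
demo=$(mktemp -d)
echo "fake episode" > "$demo/torrents-copy.mkv"

# Hardlink it (note: ln without -s) instead of copying.
ln "$demo/torrents-copy.mkv" "$demo/media-copy.mkv"

# Both names share one inode, so the data exists on disk only once.
stat -c 'links=%h inode=%i' "$demo/torrents-copy.mkv"
stat -c 'links=%h inode=%i' "$demo/media-copy.mkv"
```

This is also why the torrents and media directories must live on the same filesystem: hardlinks can’t cross filesystem boundaries.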
mkdir ~/server
mkdir ~/server/media # Media directory
mkdir ~/server/torrents # Torrents
# Creating the directories for torrents
cd ~/server/torrents
mkdir audiobooks books incomplete movies music tv
cd ~/server/media
mkdir audiobooks books movies music tv
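If you prefer, the same tree can be created in one go with mkdir -p, which creates parent directories as needed and is safe to re-run:

```shell
# Equivalent to the mkdir commands above, as a single idempotent block.
base=~/server
for d in audiobooks books movies music tv; do
  mkdir -p "$base/media/$d" "$base/torrents/$d"
done
mkdir -p "$base/torrents/incomplete"
```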
Docker https://docs.docker.com/engine/install/ubuntu/
# install packages to allow apt to use a repository over HTTPS
sudo apt-get update
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
# Add Docker’s official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Setup the repository
echo \
"deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Install Docker Engine
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
# Add user to the docker group to run docker commands without requiring root
sudo usermod -aG docker $(whoami)
Sign out by typing exit in the console, then SSH back in so the group change takes effect.
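Once you’ve logged back in, you can sanity-check that the group change took effect; a small sketch (the messages are made up):

```shell
# List the current user's groups and look for "docker".
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group active - docker commands work without sudo"
else
  echo "docker group not active yet - log out and back in, or run: newgrp docker"
fi
```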
Docker compose https://docs.docker.com/compose/install/
# Download the current stable release of Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Apply executable permissions to the binary
sudo chmod +x /usr/local/bin/docker-compose
First, set up AdGuard Home in a new compose file. Docker Compose uses a YAML file; every file in this guide contains a version string and a services object.
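Every compose file in this guide follows the same minimal skeleton (the service and image names here are placeholders):

```yaml
version: '3.3'          # compose file format version
services:               # one entry per container
  some-service:         # placeholder service name
    image: example/image
    restart: unless-stopped
```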
Create a directory for keeping the compose files.
mkdir ~/server/compose
mkdir ~/server/compose/adguard-home
vi ~/server/compose/adguard-home/docker-compose.yml
Save the following content to the docker-compose.yml file. You can see here what each port does.
version: '3.3'
services:
  run:
    container_name: adguardhome
    restart: unless-stopped
    volumes:
      - '/home/${USER}/server/configs/adguardhome/workdir:/opt/adguardhome/work'
      - '/home/${USER}/server/configs/adguardhome/confdir:/opt/adguardhome/conf'
    ports:
      - '53:53/tcp'
      - '53:53/udp'
      - '67:67/udp'
      - '68:68/udp'
      - '68:68/tcp'
      - '80:80/tcp'
      - '443:443/tcp'
      - '443:443/udp'
      - '3000:3000/tcp'
    image: adguard/adguardhome
Save the file and start the container using the following command.
docker-compose up -d
Open up the AdGuard Home setup at YOUR_SERVER_IP:3000.
Enable the default filter list from Filters → DNS blocklists. You can then add custom filters.
Jackett is where you define all your torrent indexers. All the *arr apps use the Torznab feeds provided by Jackett to search for torrents.
There is now an *arr app called Prowlarr that is meant to replace Jackett, but its FlareSolverr support (used for auto-solving captchas) was added only recently and doesn’t work as well as Jackett’s, so I am sticking with Jackett for the meantime. You can use Prowlarr instead if none of your indexers use captchas.
jackett:
  container_name: jackett
  image: linuxserver/jackett
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/jackett:/config'
    - '/home/${USER}/server/torrents:/downloads'
  ports:
    - '9117:9117'
  restart: unless-stopped
prowlarr:
  container_name: prowlarr
  image: 'hotio/prowlarr:testing'
  ports:
    - '9696:9696'
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/prowlarr:/config'
  restart: unless-stopped
Sonarr is a TV show scheduling and searching download program. It takes a list of shows you enjoy, searches for them via Jackett, and adds them to the qBittorrent download queue.
sonarr:
  container_name: sonarr
  image: linuxserver/sonarr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  ports:
    - '8989:8989'
  volumes:
    - '/home/${USER}/server/configs/sonarr:/config'
    - '/home/${USER}/server:/data'
  restart: unless-stopped
Sonarr but for movies.
radarr:
  container_name: radarr
  image: linuxserver/radarr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  ports:
    - '7878:7878'
  volumes:
    - '/home/${USER}/server/configs/radarr:/config'
    - '/home/${USER}/server:/data'
  restart: unless-stopped
lidarr:
  container_name: lidarr
  image: ghcr.io/linuxserver/lidarr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/lidarr:/config'
    - '/home/${USER}/server:/data'
  ports:
    - '8686:8686'
  restart: unless-stopped
# Notice the different port for the audiobook container
readarr:
  container_name: readarr
  image: 'hotio/readarr:nightly'
  ports:
    - '8787:8787'
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/readarr:/config'
    - '/home/${USER}/server:/data'
  restart: unless-stopped
readarr-audio-books:
  container_name: readarr-audio-books
  image: 'hotio/readarr:nightly'
  ports:
    - '8786:8787'
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/readarr-audio-books:/config'
    - '/home/${USER}/server:/data'
  restart: unless-stopped
bazarr:
  container_name: bazarr
  image: ghcr.io/linuxserver/bazarr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/bazarr:/config'
    - '/home/${USER}/server:/data'
  ports:
    - '6767:6767'
  restart: unless-stopped
I personally only use Jellyfin because it’s completely free. I still have Plex installed because Overseerr, which is used to request movies and TV shows, requires Plex. But that’s the only role Plex has in my setup.
I will talk about the devices section later on. For the media volume you only need to provide access to the /data/media directory instead of /data, as Jellyfin doesn’t need to know about the torrents.
jellyfin:
  container_name: jellyfin
  image: ghcr.io/linuxserver/jellyfin
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  ports:
    - '8096:8096'
  devices:
    - '/dev/dri/renderD128:/dev/dri/renderD128'
    - '/dev/dri/card0:/dev/dri/card0'
  volumes:
    - '/home/${USER}/server/configs/jellyfin:/config'
    - '/home/${USER}/server/media:/data/media'
  restart: unless-stopped
plex:
  container_name: plex
  image: ghcr.io/linuxserver/plex
  ports:
    - '32400:32400'
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
    - VERSION=docker
  volumes:
    - '/home/${USER}/server/configs/plex:/config'
    - '/home/${USER}/server/media:/data/media'
  devices:
    - '/dev/dri/renderD128:/dev/dri/renderD128'
    - '/dev/dri/card0:/dev/dri/card0'
  restart: unless-stopped
I use both. You can use Ombi alone if you don’t plan to install Plex.
ombi:
  container_name: ombi
  image: ghcr.io/linuxserver/ombi
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/ombi:/config'
  ports:
    - '3579:3579'
  restart: unless-stopped
overseerr:
  container_name: overseerr
  image: ghcr.io/linuxserver/overseerr
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/overseerr:/config'
  ports:
    - '5055:5055'
  restart: unless-stopped
I use the qflood container. Flood provides a nice UI, and this image automatically manages the connection between qBittorrent and Flood.
qBittorrent only needs access to the torrents directory, not the complete data directory.
qflood:
  container_name: qflood
  image: hotio/qflood
  ports:
    - '8080:8080'
    - '3005:3000'
  environment:
    - PUID=1000
    - PGID=1000
    - UMASK=002
    - TZ=Asia/Kolkata
    - FLOOD_AUTH=false
  volumes:
    - '/home/${USER}/server/configs/qflood:/config'
    - '/home/${USER}/server/torrents:/data/torrents'
  restart: unless-stopped
There are multiple dashboard applications but I use Heimdall.
heimdall:
  container_name: heimdall
  image: ghcr.io/linuxserver/heimdall
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  volumes:
    - '/home/${USER}/server/configs/heimdall:/config'
  ports:
    - '8090:80'
  restart: unless-stopped
If your indexers use captchas, you will need FlareSolverr for them.
flaresolverr:
  container_name: flaresolverr
  image: 'ghcr.io/flaresolverr/flaresolverr:latest'
  ports:
    - '8191:8191'
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Asia/Kolkata
  restart: unless-stopped
As I mentioned in the Jellyfin section, the compose file has a “devices” section. It is used for transcoding: if you don’t include it, any transcoding will only use the CPU. To utilise your GPU, the devices must be passed through to the container.
Read https://jellyfin.org/docs/general/administration/hardware-acceleration.html to set up hardware acceleration for your GPU.
Generally, the devices are the same for Intel GPU transcoding:
devices:
  - '/dev/dri/renderD128:/dev/dri/renderD128'
  - '/dev/dri/card0:/dev/dri/card0'
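Before adding the devices section, it’s worth checking that those render nodes actually exist on the host (this assumes an Intel or AMD GPU with a loaded kernel DRM driver):

```shell
# List DRM device nodes; if /dev/dri is missing, the kernel exposes no
# GPU render nodes and passing devices into the container won't help.
ls -l /dev/dri/ 2>/dev/null || echo "no /dev/dri - no GPU render nodes found"
```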
To monitor GPU usage, install intel-gpu-tools:
sudo apt install intel-gpu-tools
Now, create a compose file for media server.
mkdir ~/server/compose/media-server
vi ~/server/compose/media-server/docker-compose.yml
Copy all the containers you want to use under services. Remember to add the version string, just like in the AdGuard Home compose file.
Start the containers using the same command we used to start the adguard home container.
docker-compose up -d
Navigate to YOUR_SERVER_IP:9117. Add a few indexers to Jackett using the “add indexer” button. You can see the indexers I use in the image below.
Navigate to YOUR_SERVER_IP:8080. The default username is admin and the password is adminadmin. You can change them by going to Tools → Options → WebUI.
Change “Default Save Path” in the WebUI section to /data/torrents/ and “Keep incomplete torrents in” to /data/torrents/incomplete/.
Create categories by right-clicking on the sidebar under Category. Type TV as the category and tv as the path. The path needs to be the same as the folder you created to store your media. Similarly, for movies, type Movies as the category and movies as the path. This lets qBittorrent automatically move the media to its correct folder.
Navigate to YOUR_SERVER_IP:8989. Add qBittorrent as the download client: enter YOUR_SERVER_IP as the host, **8080** as the port, and the username and password you used for qBittorrent. In category, type TV (or whatever you selected as the category name (not path) on qBittorrent). Test the connection and then save. Use /data/media/tv as the root folder.
Repeat this process for Radarr, Lidarr and Readarr, using /data/media/movies as the root folder for Radarr, and so on.
The setup for Ombi/Overseerr is super simple. Just hit the URL and follow the on-screen instructions.
Navigate to YOUR_SERVER_IP:6767. Go to Settings → Sonarr. Enter YOUR_SERVER_IP as the host and 8989 as the port, and copy the API key from Sonarr’s Settings → General.
Similarly for Radarr: enter YOUR_SERVER_IP as the host and 7878 as the port, and copy the API key from Radarr’s Settings → General.
Go to YOUR_SERVER_IP:8096 and follow the setup wizard. Add a library for each media type, pointing it at the corresponding folder under /data/media; repeat this for movies, tv, music, books and audiobooks. To enable hardware transcoding, select VAAPI and enter the device as /dev/dri/renderD128.
Monitor GPU usage while playing content using
sudo intel_gpu_top
Navigate to YOUR_SERVER_IP:8090. Set up all the services you use so you don’t need to remember the ports, like I showed in the first screenshot.
With Docker Compose, updates are very easy. Navigate to ~/server/compose/media-server and run docker-compose pull to download the latest images, then docker-compose up -d to recreate the containers on them. Finally, docker system prune -a cleans up the old images.
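Those steps can be wrapped into a small helper; a sketch, where update_stack is a made-up name and the v1 docker-compose binary installed earlier is assumed to be on PATH:

```shell
# Update one compose stack: pull new images, recreate containers, prune.
update_stack() {
  # $1: directory containing the stack's docker-compose.yml
  cd "$1" || return 1
  docker-compose pull      # download the latest images
  docker-compose up -d     # recreate only containers whose image changed
  docker system prune -f   # remove old, now-dangling images
}
```

Then, for example, `update_stack ~/server/compose/media-server` updates the whole media stack in one shot.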
And now all of this, but in nixos 🤔
I’ve never used nixos but with nixos would it be possible to do all that with just the configuration file ?
Yes, without any docker, or with docker if you like
But really, the point is not to use Docker; you just write an additional configuration file for the service you want. It looks like docker-compose but shorter, and you already have everything preconfigured (db, users, storage, etc.)
Docker is not safe if not run rootless. With NixOS you can write a docker-compose-like file for the service to be docker/podman/bare-metal/VM/anything
And you can find all the parameters/env variables on https://search.nixos.org/options?channel=23.05&from=0&size=50&sort=relevance&type=packages&query=Nextcloud
This search is for Nextcloud: you can not only install the app and specify the login and password, but also specify things like installed apps, default files, themes, which reverse proxy to use, and whether to use some rules/headers/filtering
Like that, NixOS is the future, really
It’s going to be about as popular as Kubernetes for general use.
NixOS is great, but it really has its learning curve.
Docker Compose is more friendly because everyone uses it.
I use NixOS and it’s beautiful
Thank you for the guide. What are your thoughts on Jellyseerr instead of Overseerr? That way we could eliminate the need for Plex in the setup.
Any advice if I want to use a seedbox?
I don’t have much advice to give here, but a relevant question is: are you planning on running the services (except for the torrent client) locally, or on a dedicated server that’s also acting as the seedbox? I think running it all on the seedbox would be the easiest to set up; from my understanding, you can pretty much just rent a dedicated server from a provider you trust and follow this guide. Maybe you’ll need to change a few things regarding the routing.
Also, explore if any dedi server/seedbox providers provide 1-click solutions for Jellyfin + qBittorrent + *arr etc. That would be easier, if you don’t wanna fiddle widdit!
Absolutely incredible, thank you for this 🙇🏽♂️
Wow, this is so detailed.
I was looking into setting up some stuff because it seems like a fun project to me, but it was very daunting.
Having it all here definitely helps a lot and gives me motivation to start.
Bookmarked and will try to install on Saturday. Do I need a specific server or can I just do this on my desktop?
Anywhere you have docker. I run it on my NAS.
Amazing guide!
I use minidlna, qbittorrent and qbittorrent-nox on a very old Raspberry Pi. A 4 TB USB hard drive is attached via a powered hub. I can stream 4K Atmos using VLC installed on my “smart” TV. Can’t it be this simple? What’s the reason to dive into Docker and Plex?
Plex/Jellyfin is only needed if you need any of its features: remote access, ability to transcode (for reduced bandwidth when remote or when the client device doesn’t support the codec), showing the media as a “library”, search, last watched, ability to fetch information and subtitles about the media, per-user preferences and watch list etc.
You can also achieve some of these things with local apps like Kodi, Archos Player, BubbleUPnP etc. Or you can just do what you do and play files directly over a file share.
Docker helps you keep your main system much cleaner and helps with backups and recovering after failures/reinstall/upgrades.
With Docker the base OS is very basic indeed and just needs some essential things like SSH and Docker installed, so you can use a super reliable OS like Debian stable and not care that it doesn’t have super recent versions of various apps, because you install them from Docker images anyway.
The OS is not affected by what you install with Docker so it’s very easy to reinstall if needed, and you never have to mess with its packages to get what you want.
Docker also lets you separate the app files from the persistent files (like configs, preferences, databases etc.) so you can backup the latter separately and preserve them across reinstalls or upgrades of the app.
Docker also makes it very easy to do things like experiment with a new app, or a new version of an app, or run an app in an environment very unlike your base OS, or get back the exact same environment every time etc. All of these are classic problems when you run apps directly on the OS — if you’ve been doing that for a while you’ve probably run into some issues and know what I mean.
Docker eases the automated setup.
Yours surely does work, but Docker Compose is really nice if you want to run multiple instances of one thing on the same hardware (like two Sonarr/Radarr instances for 4K content) — simply impossible with regular installs.
Also, while it does complicate some things, it makes maintenance, especially updates, easier than anything else.
Remove the image and you are only left with the files you put in /path/to/folder.
Remove a conventional program and you’d need to hunt down the files it created somewhere in the file structure, like AppData or /opt and other folders.
Can you elaborate on why you’d need two instances of radarr/sonarr running at once?
the option to have two instances is nice for maintenance stuff, e.g.
another benefit of containers:
Literally in the comment…
But they can pull different quality profiles based on your list preferences right? I don’t see why you need one instance for downloading 4k and one for 1080p.
Depends on the setup. Maybe you run everything off a raspberry pi and can’t afford to transcode 4k, so you have a separate 4k library for local users only. I could also see wanting to separate the volumes when you have multiple servers attached to a single NAS.
IDK, I don’t personally bother with 4k, but I imagine it’s a little more to manage if you’re sharing your media out with friends/family.
That is a cool option I hadn’t thought of trying.
I don’t do it myself, but I read that the reasons for doing it were having both versions side by side without one trumping the other, and without doing a manual/automatic transcode with something like Plex or HandBrake.
Or you would set up a separate 4K library, so that when you share the library, the family only watches content that fits in the upstream pipe and doesn’t transcode, while you can watch crisp 4K content.
I run the *arr stack and a Jellyfin server on Windows 11. I also use Tailscale for remote access.
You can use Jellyseerr and remove Plex entirely. It’s a fork of Overseerr.
👏 👏 👏 👏 👏 👏 👏 👏 👏 👏 👏
everything goes somewhere, and i go everywhere.
This is a freaking great guide. I wish I had this wonderful resource when I started selfhosting. Thanks for this.
People might also want to have a look at Pi-hole as an alternative to AdGuard for ad blocking. It is awesome.
I prefer Homepage over Heimdall. It is more configurable, but less noob-friendly.
Jellyseerr is a fork of Overseerr that integrates very well with Jellyfin. Reiverr is promising for discovering and adding content.
Seconded.
I found Heimdall unreliable and not very lightweight. Considering I essentially just wanted bookmarks, it made more sense to switch to an app similar to Homepage.
The code base in Reiverr is beautiful and SvelteKit is amazing.
I want to echo thanks for this, because this is such an incredible resource and it’s finally motivated my lazy ass to get to work setting up my server.
Best community, thanks again for this excellent guide.
Is there a guide for using something other than Ubuntu? Is TrueNAS a suitable alternative?
You could do it, especially if you’re running TrueNAS Scale, since that’s Linux. On Core you could do it inside a VM (I have Jellyfin set up inside an Ubuntu VM with persistent Samba mounts to access my media).
On Scale the recommended way would probably be through Helm charts, though the config might look a bit different from the Docker Compose files here. There are charts for, I think, all the services mentioned: https://truecharts.org/charts/description_list
Personally I’m planning on waiting just a little bit longer for Scale to become more stable and then I’m going to migrate, rather than trying to set up all these services in a VM on my Core machine today.
Yeah this stuff just confuses the shit out of me but I definitely don’t want any telemetry on my server and I don’t trust canonical anymore. TrueNAS is the one I was planning to use but I’m hoping for something comprehensive.