• 0 Posts
  • 29 Comments
Joined 1Y ago
Cake day: Aug 07, 2023


A lot of people self host so they are in control. This is Plex taking away that control, plain and simple.

I don’t know how many people host completely legitimately acquired content in their libraries, but your reasoning is such a cop out. Are you gonna defend them if they start scanning libraries for potentially illegally obtained content and blocking that because it could “put them in legal hot water?”


For what it’s worth, you can convert the database to postgres if you want. I tried it out a few weeks ago and it went flawlessly.

https://docs.nextcloud.com/server/latest/admin_manual/configuration_database/db_conversion.html
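
If it helps, the conversion itself is a single occ command. Roughly what I ran, with the container name, db user, host, and db name as placeholders for whatever your setup uses:

docker exec -u www-data -it nextcloud \
  php occ db:convert-type --all-apps pgsql nextcloud_user postgres nextcloud_db

Run it interactively so it can prompt for the postgres password; it copies everything over and points config.php at the new database when it’s done.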


Yeah I’ve been using it for about a year and a half or so on my main devices and it’s been wonderful. I’m likely going to go down the list of supported providers from the gluetun docs and decide from there. Throwing my torrents and all that behind a VPN was the catalyst for signing up, so I’ll continue to look for that support first and everything else is secondary.


I’m pretty sure it’s entirely disabled. Their announcement post says it’s being removed and doesn’t call out any exceptions.

I run my clients through a gluetun container with forwarding set up and ever since their announced end of support date (July I think?) I have had 0B uploaded for any of my trackers.

E: realized you may be asking about proton, oops


Wow this is great. I’ve been having trouble getting exit nodes working properly with these two. Sad that Mullvad dropped port forwarding though, so I’m not sure if I’ll stay with them.


I thought about setting one up for my main server because every time the power went out I’d have to reconfigure the BIOS for boot order, virtualization, and a few other settings.

I’ve since added a UPS to the mix, but ultimately the fix was replacing the CMOS battery lol. Had I put one of these together it would be entirely unused these days.

It’s a neat concept and if you need remote BIOS access it’s great, but people usually overestimate how useful that really is.


Why do you think AdGuard is better than Pihole? I’m not upset with the job Pihole is doing but always looking for improvements.


Yeah I use different VMs to separate out the different containers into arbitrary groups I decided on.

I run my docker containers inside different Debian VMs that are on a couple different Proxmox hosts.


I can’t speak for everyone else, but I run about 6 different VMs solely to run different docker containers. They’re split out by use case, so super critical stuff on one VM, *arr stuff on another, etc. I did this so my tinkering didn’t take down Jellyfin and other services for my wife and kids.

Beyond that I also have two VMs for virtualized pihole running gravity sync on different hosts, and another I intend to use for virtualized opnsense.

Everything is managed via ansible with each docker project in its own forgejo repo.
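
The playbook side is nothing fancy either; per project it’s roughly this shape (repo URL, paths, and the project name are made up for the example):

- name: Check out the project repo from forgejo
  ansible.builtin.git:
    repo: https://forgejo.example.lan/me/jellyfin.git
    dest: /opt/docker/jellyfin

- name: Bring the stack up with compose
  community.docker.docker_compose_v2:
    project_src: /opt/docker/jellyfin
    state: present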


I like where your head’s at. I’m gonna go with the expecting more indictments angle paired with not being as rich/liquid as he’d like everyone to believe. What an unavoidable situation he’s found himself in.


That’s a valid point I hadn’t considered. Based on a cursory look at how bail bonds work, if you go with a bondsman you’re out a certain amount regardless of whether you show up to court.

So if he paid the 200k he’d get almost all of it back after court, minus whatever processing fees the court has. If he goes with the bondsman he forks over 10% and the bondsman covers the other 90%, but he would get nothing back after court. The bondsman gets the full refund and keeps it all.

I can’t imagine the return on that would move the needle much for someone as “rich” as he is. I don’t know though, and I’ll fully admit this is pure speculation.


Doesn’t this make her point even more? Had to use a bondsman for 20k but he’s super rich? Yiiikes


I’m assuming you installed it directly to the container vs running docker in there?

I have been debating making the jump from docker in a VM to a container, but I’ve been maintaining Nextcloud in docker the entire time I’ve been using it and haven’t had any issues. The interface can be a little slow at times but I’m usually not in there for long. I’m not sure it’s worth having to essentially rearchitect my setup for that.

All that aside, I also map an NFS share to my docker container that stores all my files on my NAS. This could be what causes the interface slowness I sometimes see, but last time I looked into it there wasn’t a non-hacky way to mount a share to an LXC container. Has that changed?
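
The NFS part on the docker side is just a named volume with NFS driver options, something along these lines (address and export path are examples):

volumes:
  nextcloud-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.50,rw
      device: ":/export/nextcloud"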


Yikes! I pay a couple bucks more for uncapped gigabit. I’m fortunate in that there are two competing providers in my area that aren’t in cahoots (that I can tell). I much prefer the more expensive one and was able to get them to match the other’s price.

My wife has been dropping hints she wants to move to another state though and I’m low key dreading dealing with a new ISP/losing my current plan.


I do a separate container for each service that requires a db. It’s pretty baked into my backup strategy at this point: the script I wrote references environment variables for the dumps, so I don’t have to update it for every new service I deploy.

If the container name ends in -dbm it’s MySQL, -dbp is postgres, and -dbs would be SQLite if it needed its own container. The suffix triggers the appropriate backup command, which pulls the user, password, and db name from environment variables in the container.
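
A stripped-down version of the suffix matching looks something like this (the env var names assume the stock MySQL/postgres images, and /backups is just an example path):

for name in $(docker ps --format '{{.Names}}'); do
  case "$name" in
    *-dbm)
      # creds come from the container's own environment, not the script
      docker exec "$name" sh -c 'exec mysqldump -u"$MYSQL_USER" -p"$MYSQL_PASSWORD" "$MYSQL_DATABASE"' > "/backups/${name}.sql" ;;
    *-dbp)
      docker exec "$name" sh -c 'exec pg_dump -U "$POSTGRES_USER" "$POSTGRES_DB"' > "/backups/${name}.sql" ;;
  esac
done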

I’m not too concerned about system overhead. I’ve debated doing a single container for each db type just to do it, but I also like not having a single point of failure for all my services (I even run different VMs to keep stable services from being impacted by me testing random stuff out).


It took a little bit of work but I rolled my own docker compose and it’s been pretty solid. I pin the specific nextcloud version in my compose file (I don’t like using :latest for things) and updating is as simple as incrementing the version, pulling the new image, and restarting the container. I’ve been running this way for a couple years now and I couldn’t be happier with it.
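
The relevant part of the compose file is just the pinned tag (the version here is only an example):

services:
  nextcloud:
    image: nextcloud:29.0.7

Bump the tag, docker compose pull, docker compose up -d, and that’s the whole update.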


I host forgejo internally and use that to sync changes. The .env and data directories are in .gitignore (they get backed up via a separate process).

All the files are part of my docker group so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart), and any user that needs to manipulate containers has the appropriate sudo access.
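
The units themselves are dead simple, roughly this shape (name and path are placeholders):

[Unit]
Description=jellyfin compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/opt/docker/jellyfin
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target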

It’s only me that does all this though; I set it up this way for funsies.


I’ve been running it behind Cloudflare with no issues. I’m also doing it a completely different way than the official docs and the ubergeek method. Mostly because I have a particular way I do my docker stuff.

Every time something has broken it’s been 100% on me. My favorite way to learn is by breaking things though, so I also have an account on a different instance in case I break mine and have to wait a bit to fix it 😅


As someone else already said, automated backups should be near the top of the priority list.

But also maybe try out self hosting Lemmy. It’s been a fun little journey and helped me flesh out my Caddy config more than I thought possible.
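
To give you an idea, the Lemmy-specific part of a Caddy config boils down to something like this (domain and upstream names are placeholders, and a real config needs a bit more than this):

lemmy.example.com {
    reverse_proxy /api/* lemmy:8536
    reverse_proxy /pictrs/* lemmy:8536

    @activitypub header Accept application/activity+json
    reverse_proxy @activitypub lemmy:8536

    reverse_proxy lemmy-ui:1234
}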


Agreed. I haven’t come across any instances I care to participate in that have that enabled though.


This is ultimately why I decided to roll my own instance. I’m keeping my backup here though in case I mess something up, but full control is nice to have.


@synae@lemmy.sdf.org is correct, you can pass the values through that part of the UI. I used to do it that way and had Portainer watching my main branch to auto pull/deploy updates but recently moved away from it because I don’t deploy everything to 1 server and linking Portainer instances together was hit or miss for me.

Edit: I just deployed it like this (I hit deploy after taking the screenshot) and confirmed both that the container sees everything and where Portainer drops the files on disk (it uses stack.env).

[Screenshots: stack settings, environment vars in the container, and the Portainer stack on disk]

I don’t know why I did all that, but do with it what you will lol



You can already do this. You can specify an env file or use the default .env file.

The compose file would look like this:

environment:
      PUBLIC_RADARR_API_KEY: ${PUBLIC_RADARR_API_KEY}
      PUBLIC_RADARR_BASE_URL: ${PUBLIC_RADARR_BASE_URL}
      PUBLIC_SONARR_API_KEY: ${PUBLIC_SONARR_API_KEY}
      PUBLIC_SONARR_BASE_URL: ${PUBLIC_SONARR_BASE_URL}
      PUBLIC_JELLYFIN_API_KEY: ${PUBLIC_JELLYFIN_API_KEY}
      PUBLIC_JELLYFIN_URL: ${PUBLIC_JELLYFIN_URL}

And your .env file would look like this:

PUBLIC_RADARR_API_KEY=yourapikeyhere
PUBLIC_RADARR_BASE_URL=http://127.0.0.1:7878
PUBLIC_SONARR_API_KEY=yourapikeyhere
PUBLIC_SONARR_BASE_URL=http://127.0.0.1:8989
PUBLIC_JELLYFIN_API_KEY=yourapikeyhere
PUBLIC_JELLYFIN_URL=http://127.0.0.1:8096

This is how I do all of my compose files; I add .env to .gitignore and push everything to a local forgejo instance.


Like others have asked, how exactly did you create these containers? If they were through Portainer did you use a compose file in a stack or did you use the GUI the entire way?

This will nuke them assuming you don’t have something recreating them.

docker ps -a # find your rogue container, copy the container id, my example is a0ff66a83c73
docker stop a0ff66a83c73
docker rm a0ff66a83c73

My suggestion is to go through the process you did to try to deploy them and clean it up from that direction.


I don’t do it all in one compose file out of preference, but as others have said Gluetun + your preferred torrent client with all networking going to Gluetun. I’ve been running this way with deluge for a while now and it’s been solid as a rock.
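
Shown as a single file here just to keep it short, the shape of it is roughly this (provider, key, and client image are examples, swap in your own):

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      VPN_SERVICE_PROVIDER: protonvpn
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: ${WIREGUARD_PRIVATE_KEY}
      VPN_PORT_FORWARDING: "on"
    ports:
      - 8112:8112   # deluge web UI gets exposed through gluetun

  deluge:
    image: lscr.io/linuxserver/deluge
    network_mode: "service:gluetun"   # all of deluge's traffic goes out the VPN
    depends_on:
      - gluetun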


Are these people you trust? I would do Jellyfin and expose it to them via tailscale. Might be annoying for them to have to run tailscale but no chance I’m serving media directly from my house.


Where do you have that data directory on disk? It’s likely not where portainer is looking. Your options are to move it to where portainer is expecting or to use the absolute path to the data directory to the left of the colon for the volume mapping.

For example, I put all my docker compose files in /opt/docker/vaultwarden so if my data were next to it I would use /opt/docker/vaultwarden/vaultwarden-rclone-data:/config/
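
In compose terms it’s the difference between these two mappings (only use one of them):

volumes:
  - ./vaultwarden-rclone-data:/config/                         # relative to wherever the compose file lives
  - /opt/docker/vaultwarden/vaultwarden-rclone-data:/config/   # absolute, works regardless of where it runs from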

I don’t recall the path where portainer looks but it’s going to be wherever you have its docker-compose. I can help you find it if that’s the route you’d like to take, but I won’t be able to help with that for a few hours.


I pretty much always leave stuff seeding once I get it these days. Ever since I bumped the disk space on my NAS it’s been a lot easier to leave stuff up instead of jockeying for space on disk.

My higher ratio items are all old shit like You Got Served lmao