• 11 Posts
  • 82 Comments
Joined 1Y ago
Cake day: Jun 17, 2023


You most likely won’t utilize these speeds in a home lab, but I understand why you want them. I do too. I settled for 2.5GBit because that was a sweet spot in terms of speed, cost and power draw. In total, I idle at about 60W for the following systems:

  • Lenovo M90q (i7 10700, 32GB, 3 x 1 TB SSD) running Proxmox, 15W idle
  • Custom NAS (Ryzen 2400G, 16GB, 4 x 12TB HDD) running TrueNAS (30W idle)
  • Firewall (N5105, 8GB) running OPNsense (8W idle)
  • FritzBox 6660 Cable, which functions as a glorified access point, 10W idle

I’d be very careful about publicly hosting Jellyfin. Even if it isn’t necessarily true, it basically advertises that you’re pirating content while also giving out your IP. Even if you rip your own media, this can still be illegal. Please be careful.

Maybe you can put it behind some authentication or, even better, a VPN.


From what I’ve found, Lemmy is much better in this regard. I’ve gotten lots of helpful answers here, so give it a go! There are also a ton of tutorials on YouTube; I recommend something like this for beginners.


Thank you for your offer, but these are too old for what I want to do with them. Cheers!


Proxmox eats consumer-grade SSDs (at least that’s what people say).



Hej. I need all of that data. And those movies too. But yeah, that seems to be the case. Weird that people buy those drives when 12TB ones aren’t that much more expensive. Well, here I am, but only because I had an old but okay 4TB drive lying around.


I’d be scared of being ripped off buying a lot. Do they show drive stats before the sale?


I’ve had great success with used drives so far; mind you, I only buy slightly used ones with lots of remaining warranty… Saved me tons.


There is quite a price difference, at least here in Germany. It can easily be double, if not more… I’d love to use SSDs, but I can’t afford them right now.


I didn’t even think to look at Amazon, but for 12TB, that is an okay-to-good price. Too bad the 4TB is disproportionately expensive…


Yeah, that seems to be the case. I’ll be on the lookout for official refurbished drives, thanks for your input!


What’s up with the prices of smaller used drives?
I'm in the market for a used 4TB drive for my offsite backup. As I've recently acquired four 12TB drives (about 10,000 hours and one to two years old) for 130€ each, I was optimistic. 30 to 40€, I thought. Easy. WRONG!

Used drive, failing SMART stats, 40€. Here is a new drive, no hours on it. Oh wait, it was cold storage and it's almost 8 years old. Price? 90€ (mind you, a new drive costs about 110€). Another drive has already failed, but someone wants 25€ for e-waste. No sir, it worked fine when I used Check-Disk, please buy. Most of the decent ones are 70 to 80€, way too close to the new price. I PAID 130 FOR 12TB. Those drives were almost new and under warranty. WHY DOES THIS NUMBNUT WANT 80 EURO FOR A USED 4TB DRIVE? And what sane person doesn't put SMART data in their listings??? I have to ask at least 50 percent of the time. Don't even get me started on those external hard drives, they were trash to begin with.

I'm SO CLOSE to buying a high-capacity drive, because in that segment people actually know what they are doing and understand what they have. Rant over. What gives? Did these people buy them when they were much more expensive? Does anyone know a good site that ships refurbished drives to Germany? Most of those I found are also rip-offs...

Let me know if you need any help with that. I’m still a beginner, but have used the last few months to learn about cyber security. It can be a daunting subject, but if you get the basics right, you’re probably good. I also hosted without a care for years and was never hacked, but it can/will happen. Here are some pointers!

Get or use a firewall. iptables, UFW and such are probably good enough. I myself use OPNsense. It can be integrated with CrowdSec, a popular intrusion prevention system. This can be quite a rabbit hole. In the end, you should be able to control who goes where in your network.
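
If you go the iptables/UFW route, a minimal default-deny sketch could look like this (the 192.168.1.0/24 subnet and the open ports are just assumed examples, adjust to your network):

```
sudo ufw default deny incoming     # block everything inbound by default
sudo ufw default allow outgoing    # the host may still reach the internet
sudo ufw allow from 192.168.1.0/24 to any port 22 proto tcp   # SSH only from the LAN
sudo ufw allow 80,443/tcp          # only if you actually host web services
sudo ufw enable
```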

Restrict SSH access or don’t allow it over the internet at all. Close port 22 and use a VPN if needed. Don’t allow root access via SSH; use sudo. Use keys with a passphrase for best security.
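
To make that concrete, a hedged sketch of the relevant lines in /etc/ssh/sshd_config might look like this (the user name is a placeholder, not a drop-in config):

```
# /etc/ssh/sshd_config — example hardening
PermitRootLogin no           # no direct root logins, use sudo instead
PasswordAuthentication no    # key-based logins only
PubkeyAuthentication yes
AllowUsers youruser          # placeholder: restrict which accounts may log in
```

Reload sshd afterwards and test the key login in a second session before closing the current one.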

Update your stuff regularly. Weekly or bi-weekly, if you can.

Use two-factor authentication where possible. It can be a bit annoying, but it improves things dramatically. Long passwords help too; I use random-word-other-word combinations.

If you haven’t, think of a backup strategy: 3 copies on 2 different media, with 1 off-site.


Cool idea. Just be aware that there are a lot of shady people out there. I’m not sure I would publicly host services which rely on tight security (like Vaultwarden). They will come and they will probe your system and its security!

You might also want to remove Dockge from Uptime Kuma, no need to broadcast that publicly.


Thanks, I’ll let you know once/if I figure it out!


I did what you suggested and reduced (1) the number of running services to a minimum and (2) the networks Traefik is a member of to a minimum. It didn’t change a thing. Then I opened a private browser window and saw much faster loading times. Great. I then set everything back and refreshed the private browser window: still fast. Okay. Guess it’s not Traefik after all. The final nail in the coffin for my theory: I use two Traefik instances. Homepage still loads its widgets left to right, top to bottom (the order from the YAML file). The order doesn’t correspond to the instances; it’s more or less random. So I’m assuming the slowdown has something to do with either (a) caching in Traefik or (b) the way Homepage handles the API request: http://IP:PORT (fast) or https://subdomain.domain.de (slow). Anyway, thanks for your help!
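
For reference, this is roughly what I mean by the two variants in Homepage’s services.yaml (host names, port and API key are placeholders):

```
- Media:
    - Jellyfin:
        href: https://jellyfin.local.domain.de        # link opened when clicking the tile
        widget:
          type: jellyfin
          url: http://192.168.1.20:8096               # direct IP:PORT — fast for me
          # url: https://jellyfin.local.domain.de     # via Traefik — slow in my case
          key: your-jellyfin-api-key                  # placeholder
```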


Thank you so much for your thorough answer, this is very much a topic that needs some reading/watching on my part. I’ve checked and I already use all of those headers. So in the end, from a security standpoint, not even having port 80 open would be best. Then no one could connect unencrypted. I’ll just have to drill it into my family to use HTTPS if they have any problems.

It was interesting to see how the whole process between browser and server works, thanks for clearing that up for me!


I didn’t even know that you could have a whole dynamic config directory; I just use one file. I’m guessing I can just as well put it there? And the dummy service simply acts as a placeholder?
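
For context, this is roughly how I understand the directory variant to look in the static config (the path is just an example):

```
# traefik.yml — file provider pointing at a directory instead of a single file
providers:
  file:
    directory: /etc/traefik/dynamic   # every config file in here gets loaded
    watch: true                       # pick up changes without restarting
```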


Thank you for your answer. If I do that, can I still connect via HTTP and have the browser redirect? I don’t think I’ll have a problem remembering HTTPS, but my family will…


That’s a great idea, I’ll give it a try tomorrow. The weird thing is, the web UIs load just fine; at least 90+% of the time it’s almost instant…


Each service stack (e.g. media, ISO downloading) has its own network and Traefik is in each of those networks as well. It works and separates the stacks from each other (I don’t want stack A to be able to access stack B, which would be the case with a single Traefik network, I think).
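
Roughly sketched, the Traefik service is simply attached to every stack network (the network names are just examples):

```
# traefik's compose file — joined to each stack's external network
services:
  traefik:
    image: traefik:v3.0
    networks:
      - media       # jellyfin, arr stack
      - cloud       # nextcloud stack
networks:
  media:
    external: true  # created by the media stack
  cloud:
    external: true  # created by the nextcloud stack
```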


Traefik Docker Labels: Common Practice
Hej everyone. My Traefik setup has been up and running for a few months now. I love it; it was a bit scary to switch at first, but I encourage you to look at it if you haven't. Middlewares are amazing: I mostly use them for CrowdSec and authentication. There are two things I could use some feedback on, though.

---

1. I mostly use Docker labels to set up routers in Traefik. Some people only define one router (HTTP), some define both (+ HTTPS); I did the latter.

```
labels:
  - traefik.enable=true
  - traefik.http.routers.jellyfin.entrypoints=web
  - traefik.http.routers.jellyfin.rule=Host(`jellyfin.local.domain.de`)
  - traefik.http.middlewares.jellyfin-https-redirect.redirectscheme.scheme=https
  - traefik.http.routers.jellyfin.middlewares=jellyfin-https-redirect
  - traefik.http.routers.jellyfin-secure.entrypoints=websecure
  - traefik.http.routers.jellyfin-secure.rule=Host(`jellyfin.local.domain.de`)
  - traefik.http.routers.jellyfin-secure.middlewares=local-whitelist@file,default-headers@file
  - traefik.http.routers.jellyfin-secure.tls=true
  - traefik.http.routers.jellyfin-secure.service=jellyfin
  - traefik.http.services.jellyfin.loadbalancer.server.port=8096
  - traefik.docker.network=media
```

So, I don't want to serve HTTP at all; everything will be redirected to HTTPS anyway. What I don't know is whether I can skip the HTTP part. Must I define the *web entrypoint* in order for the redirect to work? Or can I define it in traefik.yml as I did below?

```
entryPoints:
  ping:
    address: ':88'
  web:
    address: ":80"
    http:
      redirections:
        entryPoint:
          to: websecure
          scheme: https
  websecure:
    address: ":443"
```

---

2. I use homepage (from benphelps) as my dashboard and noticed that, when I refresh the page, all those widgets take a long time to load. They did not do that when I connected homepage to those services directly using IP:PORT. Now I use the URLs provided by Traefik, and it's slow. It's not really a problem, but I wonder if I made a mistake somewhere. I'm still a beginner when it comes to this, so any pointers in the right direction are appreciated. Thank you =)

Timing of Periodic Maintenance Tasks on TrueNAS Scale
EDIT: I found something looking through the source code on GitHub. I couldn't find anything at first, but then I searched for "periodic" and found something in `middlewared/main.py`. These tasks (see below) are executed at system start and re-run after `method._periodic.interval` seconds. Looking at the log in /var/log/middlewared.log I saw that the interval was 86400 seconds, exactly one day. So I'm assuming that the daily execution time is set at the last system start. I've rebooted and will report back in a day. Maybe somebody can find the file to set it manually, not in the source code. That is waaaay too advanced for me.

EDIT 2: I was correct, the tasks are executed 24 hours later. This gives at least a crude way to change their execution time: restart the machine.

---

Hej everyone, in the past few weeks I've been digging my hands into TrueNAS and have since set up a nice little NAS for all my backup needs. The drives spin down when not in use, as the instance only receives/sends backup data once a day. However, there are a few periodic tasks which wake my drives. Namely:

```
catalog.sync               Success  26796  12/03/2024 18:06:54  12/03/2024 18:06:54
catalog.sync_all           Success  26795  12/03/2024 18:06:54  12/03/2024 18:06:54
zfs.dataset.bulk_process   Success  26792  12/03/2024 18:06:43  12/03/2024 18:06:43
pool.dataset.sync_db_keys  Success  26791  12/03/2024 18:06:42  12/03/2024 18:06:43
certificate.renew_certs    Success  26790  12/03/2024 18:06:42  12/03/2024 18:06:43
dscache.refresh            Success  24991  12/03/2024 03:30:01  12/03/2024 03:30:01
update.download            Success  25027  12/03/2024 03:46:01  12/03/2024 03:46:02
```

I spent the last hour searching online, digging through files and checking cron. I found dscache.refresh and update.download. I can't find the first five. At least one of them wakes my drives. Does anyone have an idea? There used to be a periodic.conf, but I can't find it on my system. Thanks!


Great setup! Be careful with the SSD though, Proxmox likes to eat those for fun with all those small but numerous writes. A used, small capacity enterprise SSD can be had for cheap.


I tried this. I put a DNS override for google.com on one but not the other AdGuard instance. Then I did a DNS lookup and the answer (IP) changed randomly from the correct one to the one I used for the override. I’m assuming the same goes for the scenario with the public DNS as well. In any case, the response delay should be similar, since the local Pi-hole instance has to contact the upstream DNS server anyway.
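
In case it helps anyone reproduce the test, it was basically just pointing a lookup at each instance (the IPs are placeholders):

```
dig +short google.com @192.168.1.10   # instance with the DNS override
dig +short google.com @192.168.1.11   # instance without it
```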



Sounds like I’ll do just that, thanks. Should I move all public facing services to that DMZ or is it enough to just isolate Traefik?


Only Nextcloud is externally available so far; maybe I’ll add Vaultwarden in the future.

I would like to use a VPN, but my family is not tech literate enough for this to work reliably.

I want to protect these public facing services by using an isolated Traefik instance in conjunction with Cloudflare and Crowdsec.


Both public and local services. I have limited hardware for now, so I’m still using my ISP router as my WLAN AP. Not the best solution, I know, but it works and I can separate my Home-WLAN from my Guest-WLAN easily.

I want to use a dedicated AP at some point in the future, but I’d also need a managed switch as well as the AP itself. Unfortunately, that’s not in my budget for now.


Thank you so much for your kind words, very encouraging. I like to do some research along my tinkering, and I like to challenge myself. I don’t even work in the field, but I find it fascinating.

The ZTA is/was basically what I was aiming for. With all those replies, I’m not so sure it is really needed. I have a NAS with my private files and a Nextcloud with the same. The only really critical thing will be my Vaultwarden instance, to which I want to migrate from my current KeePass setup. And this got me thinking about how to secure things properly.

I mostly found it easy to learn things when it comes to networking if I block all traffic and then watch the OPNsense logs. Oh, my PC uses this and this port to print on this interface. Cool, I’ll add that. My server needs access to the SMB port on my NAS, added. I followed this logic through, which in total got me around 25-30 firewall rules making heavy use of aliases, plus a handful of floating rules.

My goal is to have the control for my networking on my OPNsense box. There, I can easily log in, watch the live log and figure out, what to allow and what not. And it’s damn satisfying to see things being blocked. No more unknown probes on my nextcloud instance (or much reduced).

The question I still haven’t answered to my satisfaction is whether I should build a strict ZTA or fall back to a more relaxed approach like you outlined with your VMs. You seem knowledgeable. What would you do for a basic homelab setup (Nextcloud, Jellyfin, Vaultwarden and such)?


This sounds promising. If I understand correctly, you have a ton of networks declared in your proxy, each for one service. So if I have Traefik as my proxy, I’d create traefik-nextcloud, traefik-jellyfin, traefik-portainer as my networks, make them externally available and assign each service their respective network. Did I get that right?


I’ve read about those two distinctions, but I’m simply lacking the number of ports on my little firewall box. I still only allow access to management from my PC, nothing else, so I feel good enough here. This is all more of a little project for me to tinker on, nothing serious.

Your explanation about trust makes sense. I will simply keep my current setup but put different VMs on different VLANs. Then I can separate my local services from my public services, as well as isolate any testing VMs.

I’ve read that one should use one proxy instance for local access and one for public services with internet access. Is it enough to just isolate that public proxy or must I also put the services behind that proxy into the DMZ?

Thank you for your good explanation.


Thanks for your input. Am I understanding correctly that all devices in one VLAN can communicate with each other without going through a firewall? Is that best practice? I’ve read so many different opinions that it’s hard to tell.


Ah, I did not know that. So I guess I will create several VLANs with different subnets. This works as I intended it: traffic coming from one VM has to go through OPNsense.

Now I just have to figure out if I’m being too paranoid. Should I simply group several devices together (e.g. 10=Servers, 20=PC, 30=IoT; this is what I mostly see being used) or should I sacrifice usability for more fine-grained segregation (each server gets its own VLAN)? Seems overkill, now that I think about it.


Network design
I started my homelab / selfhost journey about a year ago. Network design was the topic that scared me most. To challenge myself, and to learn about it, I bought a decent firewall box with 4 x 2.5G NICs. I installed OPNsense on it, following various guides. I set up my 3 LAN ports as a network bridge to connect my PC, NAS and server. I set the filtering to be applied between these different NICs, so as to learn more about the behavior of the different services. If I want to access anything on my server from my PC, there needs to be a rule allowing it. All other traffic is blocked. This setup works great so far and I'm really happy with it.

Here is where I ran into problems. I installed Proxmox on my server and am in the process of migrating all my services from my NAS over there. I thought that all traffic from a VM in Proxmox would go this route: first VM --> OPNsense --> other VM. Then, I could apply the appropriate firewall rules. This, however, doesn't seem to be the case. From what I've learned, VMs in Proxmox can communicate freely with each other by default. I don't want this. From my research, I found different ideas and opposing solutions. This is where I could use some guidance.

1. Use VLANs to segregate the VMs from each other. Each VLAN gets a different subnet.
2. Use the Proxmox firewall to prevent communication between VMs. I'd rather avoid this, so I don't have to apply firewall rules twice. I could also install another OPNsense VM and use that, but same thing.
3. Give up on filtering traffic between my PC, NAS and server. I trust all those devices, so it wouldn't be the end of the world. I just wanted the most secure setup I could manage with my current knowledge.

Is there any way to just force the VM traffic through my OPNsense firewall? I thought this would be easy, but couldn't find anything, or only very confusing ideas.

I also have a second question. I followed [TechnoTim](https://technotim.live/posts/traefik-portainer-ssl/) to set up Traefik and use my local DNS and wildcard certificates. Now I can reach my services using `service.local.example.com`, which I think is neat. However, in order to do this, it was suggested to use one Docker network called `proxy`. Each service would be assigned this network and Traefik uses labels to set up the routes. Wouldn't this allow all those services to communicate freely? Normally, each container has its own network and Docker uses iptables to isolate them from each other. Is this still the way to go?

I'm a bit overwhelmed by all those options. Is my setup overkill? I'd love to hear what you guys think! Thank you so much!

Nevermind, I am an idiot. Your comment got me thinking, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. THANKS!


Both machines are easily capable of reaching around 2.2 Gbps. I can’t reach the full 2.5 Gbps even with iperf. I tried some tuning but that didn’t help, so it’s fine for now. I used iperf3 -c xxx.xxx.xxx.xxx, nothing else.
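
For anyone wanting to repeat the comparison, these are the kinds of iperf3 invocations I mean (the IP is a placeholder); parallel streams can sometimes get closer to line rate:

```
iperf3 -c 192.168.1.20         # single stream, what I used
iperf3 -c 192.168.1.20 -P 4    # four parallel streams
iperf3 -c 192.168.1.20 -R      # reverse direction
```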

The slowdown MUST be related to ZFS, since LVM as a storage base can reach the “full” 2.2 Gbps when used as an SMB share.


It’s videos, pictures, music and other data as well. I’ll try playing around with compression today and see if disabling it helps at all. The CPU has 8C/16T and the container 2C/4T.
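
Toggling compression is just a dataset property, something like this (the pool/dataset name is a placeholder); note it only affects newly written data:

```
zfs get compression tank/share       # check the current setting
zfs set compression=off tank/share   # applies to data written from now on
```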


The disk is owned by the PVE host and then given to the container (not a VM) as a mount point. I could use PCIe passthrough, sure, but using a container seems to be the more efficient way.
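
For reference, the mount point is handed to the container roughly like this (container ID and paths are placeholders):

```
# bind-mount a host path into LXC container 101
pct set 101 -mp0 /mnt/datapool/share,mp=/srv/share
```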


I meant megabytes (I hope that’s correct, I always mix them up). I transferred large video files, both when the file system was ZFS and when it was LVM, yet got different transfer speeds. The files were between 500 MB and 1.5 GB in size.
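
As a unit sanity check (rough numbers, ignoring protocol overhead): the ~2.2 Gbit/s from iperf lines up with the ~270 MB/s I see on LVM.

```
2.2 Gbit/s ÷ 8 bit/byte ≈ 275 MB/s
```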


Proxmox SMB Share not reaching full 2.5Gbit speed
EDIT: SOLUTION: Nevermind, I am an idiot. As @ClickyMcTicker pointed out, it's the client side that is causing the trouble. His comment got me thinking, so I checked my testing procedure again. Turns out that, completely by accident, every time I copied files to the LVM-based NAS, I used the SSD in my PC as the source. In contrast, every time I copied to the ZFS-based NAS, I used my hard drive as the source. I did that about 10 times. Everything is fine now. Maybe this can help some other dumbass like me in the future. Thanks everyone!

Hello there. I'm trying to set up a NAS on Proxmox. For storage, I'm using a single Samsung Evo 870 with 2TB (backups will be done anyway, no need for RAID). In order to do this, I set up a Debian 12 container, installed Cockpit and the tools needed to share via SMB. I set everything up and transferred some files: about 150 MB/s with huge fluctuations. Not great, not terrible. Iperf reaches around 2.25 Gbit/s, so something is off. Let's do some testing. I started with the filesystem; this whole setup is for testing anyway.

1. Storage via creating a **directory with EXT4**, then adding a mount point to the container. This is what gave me the speeds mentioned above. Okay, not good. --> **150 MB/s**, speed fluctuates
2. a) Let's do **ZFS**, which I want to use anyway. I created a ZFS pool with ashift=12, atime=off, compression=lz4, xattr=sa and a 1MB record size. I did "some" research and this is what I came up with, please correct me. Mount to container, and go. --> **170 MB/s**, stable speed
2. b) Tried **OpenMediaVault** and used **EXT4 with ZFS as the base** for the VM drive. --> around **200 MB/s**
3. **LVM-Thin** using the Proxmox GUI, then mounted to the container. --> **270 MB/s**, which is pretty much what I'm reaching with Iperf.

So where is my mistake when using ZFS? Disable compression? A different record size? Any help would be appreciated.
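
In case anyone wants to recreate the ZFS variant of this test, the settings listed above translate roughly to the following (pool name, device and dataset are placeholders, not my actual layout):

```
# sketch of the ZFS settings from option 2a (names/devices are examples)
zpool create -o ashift=12 tank /dev/sdX
zfs create -o atime=off -o compression=lz4 -o xattr=sa -o recordsize=1M tank/share
```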

Proxmox: data storage via NAS/NFS or dedicated partition
Black Friday is almost upon us and I'm itching to get some good deals on missing hardware for my setup. My boot drive will also be VM storage and reside on two 1TB NVMe drives in a ZFS mirror. I plan on adding another SATA SSD for data storage. I can't add more storage right now, as my M90q can't be expanded easily. Now, how would I best set up my storage? I have two ideas and could use some guidance. I want some NAS storage for documents, files, videos, backups etc. I also need storage for my VMs, namely Nextcloud and Jellyfin. I don't want to waste NVMe space, so this would go on the SATA SSD as well.

1. Pass the SSD to a VM running some NAS OS (OpenMediaVault, TrueNAS, plain Samba). I'd then set up different NFS/Samba shares for my needs. Jellyfin or Nextcloud would rely on the NFS share for their storage needs. Is that even possible, and if so, a good idea? I could easily access all files if needed. I don't know if there would be a problem with permissions or diminished read/write speeds, especially since there are a lot of small files on my Nextcloud.
2. I split the SSD, pass one partition to my NAS and the other will be used by Proxmox to store virtual disks for my VMs. This is probably the cleanest, but I can't easily resize the partitions later.

What do you think? I'd love to hear your thoughts on this!

ZFS: Should I use NAS or Enterprise/Datacenter SSDs?
I've [posted](https://feddit.de/post/5136921) a few days ago, asking how to set up my storage for Proxmox on my Lenovo M90q, which I have since settled. Or so I thought. The Lenovo has space for two NVMe and one SATA SSD. There seems to be a general consensus that you shouldn't use consumer SSDs (even NAS SSDs like WD Red) for ZFS, since there will be lots of writes which in turn will wear out the SSD fast. Some conflicting information is out there, with some saying it's fine and a few GB of writes per day are okay, and others warning of several TBs of writes per day.

I plan on using Proxmox as a hypervisor for homelab use with one or two VMs running Docker, Nextcloud, Jellyfin, Arr-Stack, TubeArchivist, PiHole and such. All static data (files, videos, music) will not be stored on ZFS, just the VM images themselves. I did some research and found a few SSDs with good write endurance (see table below) and settled on **two WD Red SN700 2TB** in a ZFS mirror. Those drives have **2500TBW**. For file storage, I'll just use a **Samsung 870 EVO with 4TB** and **2400TBW**.

| SSD | Capacity | TBW | € |
|----|----|----|----|
| 980 PRO | 1TB | 600 | 68 |
| | 2TB | 1200 | 128 |
| SN 700 | 500GB | 1000 | 48 |
| | 1TB | 2000 | 70 |
| | 2TB | 2500 | 141 |
| 870 EVO | 2TB | 1200 | 117 |
| | 4TB | 2400 | 216 |
| SA 500 | 2TB | 1300 | 137 |
| | 4TB | 2500 | 325 |

Is that good enough? Would you rather recommend enterprise-grade SSDs? And if so, which ones would you recommend that are M.2 NVMe? Or should I just stick with ext4 as a file system, losing data security and the ability for snapshots? I'd love to hear your thoughts about this, thanks!
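
To put the TBW figures into perspective, here's a rough back-of-the-envelope calculation for the SN700 2TB (the daily write volumes are assumptions, not measurements from my setup):

```
2500 TBW at 50 GB/day written: 2500 TB ÷ 0.05 TB/day = 50,000 days ≈ 137 years
2500 TBW at  1 TB/day written: 2500 TB ÷ 1 TB/day    =  2,500 days ≈ 6.8 years
```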

Storage Setup for Proxmox in Lenovo M90q (Gen 1)
Hej everyone! I’m planning on getting acquainted with Proxmox, but I’m a total noob, so please keep that in mind. For this experiment, I’ve purchased a Lenovo M90q (Gen 1) to use as an efficient hardware basis. This system will later replace my current one. On it, I want to set up a small number of virtual machines, mainly one for Docker and one for NAS (or set up a NAS with Proxmox itself). My main concern right now is storage. I’d like to have some redundancy built into my setup, but I am somewhat limited with the M90q. I have space for two M.2 2280 NVMe drives as well as one SATA port. There are also several options to extend this setup using either a Wi-Fi M.2 to SATA adapter or the PCIe x8 slot to either SATA or NVMe. For now, I’d like to avoid adding complexity and stick with the onboard options, but I'm open to suggestions. I'd buy some new or refurbished WD Red NAS SSDs. Given the storage options that I have, what would be a sensible setup to have some level of redundancy? I can think of three options:

1. ZFS mirror using the two NVMe drives as well as a SATA SSD for non-critical storage. I would set up Proxmox and the VMs on the same disk and mirror it to have redundancy. I could store ISOs and “ISOs” on the SATA SSD, where no redundancy is needed, as it would be backed up to a different system anyway.
2. Proxmox and the VMs each get their own NVMe storage, with non-critical storage on the SSD. Here, “redundancy” would be achieved by backing up the host and the VMs to my NAS. This process is somewhat tedious and will cause downtime if something happens.
3. Add a Wi-Fi M.2 to SATA adapter and power two SSDs with an external power supply (possibly internal?) and install Proxmox on these.

I’d love to hear your thoughts on this. Am I being too paranoid with redundancy? I’m hosting nothing critical, but downtime would cause some inconvenience (e.g., no Jellyfin, Nextcloud, Pi-hole, Vaultwarden) until I fix it. The data of these services will always be backed up using the 3-2-1 system and I'll move to an HA system in the future when funds allow it.

EDIT: Are there any disadvantages to Proxmox and the VMs being on the same disk?

Does usage of third-party YouTube apps necessitate a VPN in the near future?
Greetings y'all. I've been using ways to circumvent YouTube ads for years now. I'd much rather donate to creators directly instead of using Google as a middleman with YouTube Premium. I'd even pay for Premium just for an ad-free version if the price weren't so outrageous. So far I've used adblockers, Vanced and then ReVanced. Since the recent developments in this matter, I've set up TubeArchivist, a self-hosted solution to download YouTube videos for later consumption. It mostly works great, with a few minor things that bother me, but I highly recommend it. ReVanced also still works, but nobody knows for how long.

The question now is whether I should use a VPN to obscure my identity from Google. I don't know if I'm being paranoid here, but I wouldn't put it past Google to block my account if they see YouTube traffic from my IP address and no served ads. ReVanced even uses my main Google account, so it's not that far-fetched. So far, or at least to my knowledge, Google has never done this, but I think they just might in the future. So I'm planning on putting TubeArchivist behind a VPN via gluetun.

What do you think? I'm eager to hear your opinions on this. I can also add my docker compose if there's interest and when I'm back on my PC.

How to organize docker volumes into subdirectories using compose
Hei there. I've read that it's best practice to use Docker volumes to store persistent container data (such as config and files) instead of using bind mounts. So far, I've only used the latter and would like to change this. From what I've read, all volumes are stored in /var/lib/docker/volumes. I also understood that a volume is basically a subdirectory in that path. I'd like to keep things organized and would like the volumes of my containers to be stored in subdirectories for each stack in docker compose, e.g.

volumes/arr/qbit
volumes/arr/gluetun
volumes/nextcloud/nextcloud
volumes/nextcloud/database

Is this possible using compose? Another noob question: is there any disadvantage to using the default network Docker creates for each stack/container?
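
As far as I understand it, Compose prefixes named volumes with the project name, so everything still lands flat in /var/lib/docker/volumes and is only "grouped" by that prefix rather than by real subdirectories. A minimal sketch of what I mean (image and names are just examples):

```
# docker-compose.yml in a project/directory called "arr" (names are examples)
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    volumes:
      - qbit_config:/config
volumes:
  qbit_config: {}   # created as /var/lib/docker/volumes/arr_qbit_config/_data
```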

Checking for IP leaks using Docker, Gluetun and qBittorrent
Hej everyone. Until now I've used a Linux install and VPN software (AirVPN and Eddie) when sailing the high seas. While this works well enough, there is always room for improvement. I am in the process of setting up a Docker stack which so far contains gluetun/AirVPN and qBittorrent. Here is my compose file:

```
version: "3"
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: gluetun
    cap_add:
      - NET_ADMIN
    volumes:
      - /appdata/gluetun:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=airvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=
      - WIREGUARD_PRESHARED_KEY=
      - WIREGUARD_ADDRESSES=10.188.90.221/32,fd7d:76ee:e68f:a993:63b2:6cc0:fe82:614b/128
      - SERVER_COUNTRIES=
      - FIREWALL_VPN_INPUT_PORTS=
    ports:
      - 8070:8070/tcp
      - 60858:60858/tcp
      - 60858:60858/udp
    restart: unless-stopped

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=100
      - TZ=Europe/Berlin
      - WEBUI_PORT=8070
    volumes:
      - /appdata/qbittorrent/config/:/config
      - /data/videos/downloads:/downloads
    depends_on:
      - gluetun
    restart: always
```

My first problem is related to the IP address. For some reason, when I use an IPv6 address, I get this error in gluetun:

```
2023-10-06T17:30:42Z ERROR VPN settings: Wireguard settings: interface address is IPv6 but IPv6 is not supported: address fd7d:76ee:e68f:a993:63b2:6cc0:fe82:614b/128
```

Well, I removed the IPv6 address and now everything works. Does anyone have a fix? :)

Now for the *important* part. I tested the setup with a Linux ISO and to my surprise - everything works. When I used ipleak.net or other websites, they only detect the IP from my VPN. Great. **Do I need to take any other precautions?** I also bound the network interface tun0 in the qBittorrent web UI, just to be sure. When I stop the gluetun container, the web UI stops working (as it should, but that makes it hard to check whether the download also stops). I'm just a bit paranoid because I don't want to pay coin when downloading all the ISOs my heart desires. Thank you so much for any input!
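
Besides ipleak.net, a quick sanity check is to ask for the public IP from inside gluetun's network namespace; it should print the VPN's address, not my ISP's (just a sketch, not part of my compose file):

```
# should return the VPN exit IP, not the ISP-assigned one
docker run --rm --network=container:gluetun curlimages/curl -s https://api.ipify.org
```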