
Both UnraidFS and mergerFS can merge drives of different types and sizes into one array. They also allow adding or removing drives without disturbing the array. None of this is possible with traditional RAID (or at least not without the significant time sink of rebuilding the array), no matter which RAID level you use.
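For illustration, a minimal mergerfs fstab sketch pooling two disks into one mount point (paths and options are examples, not from either project's defaults):

# pool /mnt/disk1 and /mnt/disk2 into /mnt/pool, creating new files on the drive with most free space
/mnt/disk1:/mnt/disk2  /mnt/pool  fuse.mergerfs  allow_other,category.create=mfs  0 0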


Can confirm that there are no ingress or egress fees, since this is not an S3 object storage server but a simple FTP server that also has a Borg & Restic module. It simply doesn't fall into the ingress/egress cost model.


Correct me if I'm wrong, but if you play a 1080p video on a 4K screen, it gets upscaled by the display pipeline. If you instead encode that 1080p video into a 4K stream and then play the 4K stream on the 4K screen, no post-processing is applied on the screen side. All the upscaling happens during encoding, where you have far more control over upscaler quality.


Because using a containerization system to run multiple services on the same machine is vastly superior to running everything bare metal, both from a security and an ease-of-use standpoint. Why wouldn't you use Docker?


Caddy and Authentik play very nicely together thanks to Caddy's forward_auth directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out. The documentation and forum are great sources of info.
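As a rough sketch of the pairing (hostnames, ports and the app name are placeholders; check Authentik's Caddy forward auth docs for the full list of headers to copy):

app.yourdomain.com {
	# ask the Authentik outpost whether this request is authenticated
	forward_auth authentik:9000 {
		uri /outpost.goauthentik.io/auth/caddy
		copy_headers X-Authentik-Username X-Authentik-Groups
	}
	reverse_proxy app:8080
}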



Just use yt-dlp instead of relying on websites that shove ads in your face and may do whatever they want to the files you're downloading?
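A minimal example (the URL is a placeholder):

# grab the best video+audio combination available
yt-dlp -f "bestvideo+bestaudio/best" "https://www.youtube.com/watch?v=VIDEO_ID"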


AdGuard Home supports static clients. But unless the instance is only used over plain TCP (port 53, unencrypted), embedding client names in the DNS server addresses and unblocking the clients via those names is by far the better approach.

For DoT: clientname.dns.yourdomain.com
For DoH: https://dns.yourdomain.com/dns-query/clientname

A client, especially a mobile one, simply cannot guarantee always having the same IP address.


If you don't fear using a little bit of terminal, Caddy imo is the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure (especially for reverse proxying) yet very powerful if you need it, has wonderful documentation and an extensive extension library, doesn't require a MySQL database that eats 200 MB of RAM, and doesn't have unnecessary limitations due to UI abstractions. There are many more advantages to Caddy over NPM. I haven't looked back since I switched.

An example Caddyfile for reverse proxying to a Docker container from a hostname, with automatic SSL certificates, automatic websockets and all the other typical bells and whistles:

https://yourdomain.com {
  reverse_proxy radarr:7878
}

The demo instance would be their commercial service, I suppose: https://ente.io/. Since, in their own words, the GitHub code represents 1:1 the code running on their own servers, the result when selfhosting should be identical.


There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning that all the payment stuff is in there. And I don't know if the apps even have support for connecting to a custom instance.

Edit: their docs state that the apps all support custom instances, making this more intriguing.


The settlement addresses any and all people who were ever involved in any fashion with the production or distribution of yuzu. Quoting the Settlement:

against Defendant enjoining it and its members, agents, servants, employees, independent contractors, successors, assigns, and all those acting in privity or under its control

So yeah, the devs are all individually targeted, not just the LLC as its own entity. And if any of the devs is ever caught doing anything like this again, they're going to face much more serious charges than this, I fear.


Is location the only reason to not use it as the AP? If I had a larger house I’d agree, but as I live in a small apartment, the current router location can easily serve the entire flat, so that is no concern right now.


I've wanted one of these for a while to replace my ISP's modem+router+switch+wifi-AP. But apparently these devices can be funky to get good wifi going, and I don't feel like adding three new devices (mini PC, switch, AP) to my "we don't talk about it" corner where all the IT is stored. Do you know anything about wifi on these?


You can docker compose up -d <service> to (re)create only one service from your compose file


I'll plug another Subsonic-compatible server here: gonic. It doesn't have a web player UI, which saves on RAM. And it's really fast too.


It supports sharing via public link. But I don’t think it has sharing with registered users via username.


Do you understand how this sub works?



If predb.net is anything to go by, there hasn't yet been any scene release for that series: https://predb.net/search/rey mysterio?page=1. Either it's too new, interest is too low, or a mix of both. Or something else entirely, who's to say.


Hm, I have yet to mess around with Matrix. As with anything fediverse, the increased complexity is a little overwhelming for me, and since I'm not pulled to Matrix by any communities I'm a part of, I haven't yet been forced to make any decisions. I mainly hang out on Discord, if that's something you use.


Are you talking about the Tailscale app or the ZeroTier app? Because the TS Android app is the one thing I'm somewhat unhappy about, since it doesn't play nice with the private DNS setting.


I heard about tailscale first, and haven’t yet had enough trouble to attempt a switch.


I use Hetzner, mainly because of their good uptime, dependable service, and geographic proximity to me. It's a "safe bet", if you will. The monthly cost, not counting power usage by the homelab, is about 15 bucks for all three servers.


That’s a tough one. I’ve pieced this all together from countless guides for each app itself, combined with tons of reddit reading.

There are some sources that I can list though:


I'd love to have everything centralized at home, but my net connection tends to fail a lot and I don't want critical services (AdGuard, Vaultwarden and a bunch of others that aren't listed) to be running off of flaky internet, so those will remain in a datacenter. Other stuff might move around, or maybe not. Only time will tell; I'm still at the beginning of my journey after all!


Pretty sure ruTorrent is a typical download client. The real reason is that it came preinstalled and I never had a reason to change it ¯\_(ツ)_/¯


Glad to have gotten you back into the grind!

My homelab runs on an N100 board I ordered on Aliexpress for ~150€, plus some 16GB Corsair DDR5 SODIMM RAM. The Main VPS is a 2 vCPU 4GB RAM machine, and the LabProxy is a 4 vCPU 4GB RAM ARM machine.


The rclone mount works via SSH credentials. Torrent files and tracker searches run over simple HTTPS, since both my torrent client and jackett expose public APIs for these purposes, so I can just enter the web address of these endpoints into the apps running on my homelab.

Sidenote, since you said sshfs mount: I tried sshfs, but it had significantly lower copy speeds than the rclone mount. Might have been a misconfiguration, but it was more time-efficient to use rclone than to debug my sshfs connection speed.
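For reference, a minimal sketch of such a mount (remote name and paths are placeholders; the flags depend on your workload):

# mount the seedbox over SFTP and cache writes locally, running in the background
rclone mount seedbox:/downloads /mnt/seedbox --vfs-cache-mode writes --daemon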


Allow me to cross-post my recent post about my own infrastructure, which implements pretty much exactly what you're describing: lemmy.dbzer0.com/post/13552101.

At the homelab (A in your case), I have tailscale running on the host and Caddy in docker exposing port 8443 (though the exact port doesn't matter). The external VPS (B in your case) runs docker-less Caddy and tailscale (it probably also works with Caddy in docker if you run it in network: host mode). Caddy takes in all web requests to my domain and reverse_proxies them to the tailscale hostname of my homelab on :8443. It does so with a wildcard entry (*.mydomain.com) and forwards everything; that way it also handles the wildcard TLS certificate for the domain. The Caddy instance on the homelab then checks for specific subdomains or paths and reverse_proxies the requests again to the targeted docker container, as sketched below.
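The homelab side could look something like this (a minimal sketch; the subdomain, port and container name are assumptions, and I'm assuming TLS is already terminated on the VPS, so the homelab Caddy listens over plain HTTP):

# match on the forwarded Host header and route to the right container
http://jellyfin.mydomain.com:8443 {
	reverse_proxy jellyfin:8096
}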

The original source IP is available to your local docker containers by making use of the X-Forwarded-For header, which caddy handles beautifully. Simply add this block at the top of your Caddyfile on server A:

{
        servers {
                trusted_proxies static 192.168.144.1/24 100.111.166.92
        }
}

replacing the first IP with the gateway of the docker network, and the second IP with the "virtual" IP of server A inside the tailnet. Your containers, if they're written properly, should automatically read this header and display the real source IP in their logs.

Let me know if you have any further questions.


Maybe. But I've read some crazy stories on the web. Some nutcases go very far to ruin an online stranger's day. I want to be able to share links to my infrastructure (think photos or download links) without having to worry that the underlying IP will be abused by someone who doesn't like me for whatever reason. Maybe that's just me, but it makes me sleep more soundly at night.


May I present to you: Caddy, but for docker and with labels, so kind of like traefik but the labels are shorter 👏 https://github.com/lucaslorentz/caddy-docker-proxy

Jokes aside, I did actually use this for a while and it worked great. The concept of having my reverse proxy config in the same place as my docker container config is intriguing. But managing labels is horrible on Unraid, so I moved to classic Caddy instead.
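For anyone curious, the labels look roughly like this (a sketch based on the project's README; the domain and upstream port are placeholders):

  whoami:
    image: traefik/whoami
    labels:
      # caddy-docker-proxy generates a site block for this hostname
      caddy: whoami.mydomain.com
      # and reverse_proxies it to this container's port 80
      caddy.reverse_proxy: "{{upstreams 80}}"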


You make a good point. But I still find that directly exposing a port on my home network feels more dangerous than doing so on a remote server. I want to prevent attackers from sidestepping the proxy and directly accessing the server itself, which feels more likely to allow circumventing the isolation provided by docker in case of a breach.

Judging from a couple of articles I read online, if I wanted to publicly expose a port on my home network, I should also isolate the public server from the rest of the local LAN with a VLAN. For that I'd need to first replace my router and learn a whole lot more about networking. Doing it this way, which is basically a homemade Cloudflare tunnel, lets me rest easier at night.


It's basically a VPS that comes with torrenting software preinstalled. Depending on the hoster and package, you'll be able to install all kinds of webapps on the server. Some even offer Plex/Jellyfin on the more expensive plans.


Nope, don't have that yet. But since all my compose and config files are neatly organized on the file system, by domain and then by service, I tar up that entire docker directory once a week and pull it to the homelab, just in case.
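In essence it's just something like this, run weekly (the paths are examples of my layout, not gospel):

# archive the whole docker config tree, stamped with the date
tar -czf /tmp/docker-backup-$(date +%F).tar.gz -C /opt docker
# then pulled from the homelab, e.g. rsync vps:/tmp/docker-backup-*.tar.gz /mnt/user/backups/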

How have you set up your provisioning script? Any special services or just some clever batch scripting?


Absolutely! To be honest, I don't even want to have countless machines under my umbrella, and I constantly have consolidation in mind - but right now, each machine fulfills a separate purpose and feels justified in itself (the homelab for large data, the main VPS for anything that's operation-critical and can't afford power/network outages, and so on). So unless I find another purpose that none of the current machines can serve, I'll probably scale vertically instead of horizontally (is that even how you use that expression?)


The crowdsec agent running on my homelab (8 cores, 16GB RAM) is currently sitting idle at 96.86 MiB of RAM and between 0.4 and 1.5% CPU usage. I have a separate crowdsec agent running on the Main VPS, which is a 2 vCPU 4GB RAM machine. There, it's using 1.3% CPU and around 2.5% RAM. All in all, very manageable.

There is definitely a learning curve to it. When I first dove into the docs, I was overwhelmed by all the new terminology, and wrapping my head around it was not super straightforward. Now that I’ve had some time with it though, it’s become more and more clear. I’ve even written my own simple parsers for apps that aren’t on the hub!

What I find especially helpful are features like explain, which lets me pass in logs and simulate which step of the pipeline picks them up and how they are processed - great when trying to diagnose why something is or isn't happening.
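As an illustration via cscli (the log line and type are placeholders for whatever you're debugging):

# show which parsers and scenarios would fire for a single log line
cscli explain --log "<paste a single log line here>" --type caddy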

The crowdsec agent on my homelab runs in a docker container and uses pretty much exactly the stock configuration. This is how the container is launched:

  crowdsec:
    image: crowdsecurity/crowdsec
    container_name: crowdsec
    restart: always
    networks:
      socket-proxy:
    ports:
      - "8080:8080"
    environment:
      # talk to the docker socket through the socket proxy instead of mounting it directly
      DOCKER_HOST: tcp://socketproxy:2375
      COLLECTIONS: "schiz0phr3ne/radarr schiz0phr3ne/sonarr"
      BOUNCER_KEY_caddy: as8d0h109das9d0
      USE_WAL: "true"
    volumes:
      - /mnt/user/appdata/crowdsec/db:/var/lib/crowdsec/data
      - /mnt/user/appdata/crowdsec/acquis:/etc/crowdsec/acquis.d
      - /mnt/user/appdata/crowdsec/config:/etc/crowdsec

Then there’s the Caddyfile on the LabProxy, which is where I handle banned IPs so that their traffic doesn’t even hit my homelab. This is the file:

{
	crowdsec {
		api_url http://homelab:8080
		api_key as8d0h109das9d0
		ticker_interval 10s
	}
}

*.mydomain.com {
	tls {
		dns cloudflare skPTIe-qA_9H2_QnpFYaashud0as8d012qdißRwCq
	}
	encode gzip
	route {
		crowdsec
		reverse_proxy homelab:8443
	}
}

Keep in mind that the two machines are connected via tailscale, which is why I can reach the crowdsec agent via its local hostname. If the two machines were physically separated, you'd need to expose the agent's REST API over the web.

I hope this helps clear up some of your confusion! Let me know if you need any further help with understanding it. It only gets easier the more you interact with it!

Don't worry, all credentials in the two files are randomized, never the actual tokens.


Of course! here you go: https://files.catbox.moe/hy713z.png. The image has the raw excalidraw data embedded, so you can import it to the website like a save file and play around with the sorting if need be.


Oh, that! That app proxies docker socket connections over a TCP channel, which provides more granular control over which app gets access to which functionality of the docker socket. Directly mounting the socket into an app technically grants full root access to the host system in case of a breach, so this is the advised way to do it.
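A minimal sketch of how that can look with the commonly used tecnativa/docker-socket-proxy image (an assumption on my part - any socket proxy works along the same lines):

  socketproxy:
    image: tecnativa/docker-socket-proxy
    environment:
      # expose only the read-only container endpoints; everything else stays blocked
      CONTAINERS: 1
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      socket-proxy: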


You're right, that's one of the remaining pain points of the setup. The rclone connections are all established from the homelab, so potential attackers wouldn't find any traces of the other servers. But I'm not 100% sure I've protected the local backup copy from a full deletion.

The homelab currently uses Kopia to push some of the most important data to OneDrive. From what I've read it works very similarly to Borg (deduplicating, chunk-based, with compression and encryption), so it could probably also handle this task? Or maybe I'll just move all backups to Borg.
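For context, the Kopia side is essentially just this (assuming OneDrive is wired up as an rclone remote named onedrive; the paths are examples):

# create a repository backed by the rclone remote, then snapshot a directory into it
kopia repository create rclone --remote-path onedrive:backups
kopia snapshot create /mnt/user/appdata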

Do you happen to have a helpful opinion on Kopia vs Borg?


@selfhosted@lemmy.world Mid 2022, a friend of mine helped me set up a selfhosted Vaultwarden instance. Since then, my "infrastructure" has not stopped growing, and I've been learning each and every day about how services work, how they communicate and how I can move data from one place to another. It's truly incredible, and my favorite hobby by a long shot. Here's a map of what I've built so far.

Right now, I'm mostly done, but surely time will bring more ideas. I've also left out a bunch of "technically relevant" connections like DNS resolution through the AdGuard instance, firewalls and CrowdSec on the main VPS.

Looking at the setups that others have posted, I don't think this is super incredible - but if you have input or questions about the setup, I'll do my best to explain it all. None of my peers really understand what it takes to construct something like this, so I am in need of people who understand my excitement and pride :)

Edit: the image was compressed a bit too much, so here's the full res image for the curious: https://files.catbox.moe/iyq5vx.png And a dark version for the night owls: https://files.catbox.moe/hy713z.png

Looking for Advice with networking between VPS, Homelab and Cloudflare
Hello SelfHosters! After getting myself a wonderfully large NAS and spending a couple of days thinking about how to link up the different services, I turn to you for advice. This is my situation:

I've been operating a cheap VPS for a while now, which runs a bunch of services that require neither lots of storage nor compute (webserver, vaultwarden, gitea and so on). But I refuse to pay the price for a large-capacity / powerful remote machine for stuff like Jellyfin or Immich, especially because I want these things to be available to me on the local network no matter the network state (internet drops frequently here). Therefore, I've set up a ~50TB NAS, on which I want to both store and back up larger data, as well as operate some storage/traffic-heavy applications (Jellyfin, Immich, Nextcloud, ...).

What I'm struggling with is the networking of things. My VPS sits behind a Cloudflare proxy, and I like it that way. All services are managed via domains and accessible from anywhere through them. I neither want nor need isolation of these services by a VPN. I want to continue this way with the new homelab, but am unable to directly expose ports on my home connection, or to get a static IP. For additional complication, traffic from the data-heavy applications cannot run through Cloudflare due to their limitations on the free plan. Finally, in a perfect world, I would be able to manage the domain names for services on the homelab in the Nginx container on the VPS, so that everything is centralized and I don't have separate management interfaces.

My first idea was to connect the VPS and the homelab with a Wireguard tunnel, but since this would route traffic through Cloudflare, it wouldn't work.

![network layout with a tunnel](https://files.catbox.moe/iwbmw2.png)

I then read about Tailscale, and that I could link up the homelab and VPS in a tailnet, setting up the node on the VPS as a subnet router for the docker network on the homelab, which would bring me to something along these lines:

![network layout with a direct connection](https://files.catbox.moe/18u9fl.png)

In a perfect world, the Nginx container on the VPS would be able to seamlessly direct traffic to services running both on the VPS and the homelab, and data coming from the homelab would be routed directly to the client, while VPS data would continue running through Cloudflare. This would work without the client having to connect to any VPNs or mesh networks; the domain name would have to be enough.

Maybe I'm overcomplicating things. Please don't feel obligated to copy-paste guides; I'll happily read external resources that you can recommend. I'll also provide clarifications in the comments as needed. Any pointers on how you people solve this would be much appreciated.