tldr: I’d like to set up a reverse proxy with a domain and an SSL cert so my partner and I can access a few selfhosted services on the internet but I’m not sure what the best/safest way to do it is. Asking my partner to use tailscale or wireguard is asking too much unfortunately. I was curious to know what you all recommend.

I have some services running on my LAN that I currently access via tailscale. Some of these services would see some benefit from being accessible on the internet (ex. Immich sharing via a link, switching over from Plex to Jellyfin without requiring my family to learn how to use a VPN, homeassistant voice stuff, etc.) but I’m kind of unsure what the best approach is. Hosting services on the internet has risk and I’d like to reduce that risk as much as possible.

  1. I know a reverse proxy would be beneficial here so I can put all the services on one box and access them via subdomains but where should I host that proxy? On my LAN using a dynamic DNS service? In the cloud? If in the cloud, should I avoid a plan where you share cpu resources with other users and get a dedicated box?

  2. Should I purchase a memorable domain or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

  3. What’s the best way to geo-restrict access? Fail2ban? Realistically, the only people that I might give access to live within a couple hundred miles of me.

  4. Any other tips or info you care to share would be greatly appreciated.

  5. Feel free to talk me out of it as well.

EDIT:

If anyone comes across this and is interested, this is what I ended up going with. It took an evening to set all this up and was surprisingly easy.

  • domain from namecheap
  • cloudflare to handle DNS
  • Nginx Proxy Manager for reverse proxy (seemed easier than Traefik and I didn’t get around to looking at Caddy)
  • Cloudflare-ddns docker container to update my A records in cloudflare
  • authentik for 2 factor authentication on my immich server
@Fedegenerate@lemmynsfw.com

On my home network I have Nginx Proxy Manager running Let’s Encrypt with my domain for HTTPS, currently only for Vaultwarden (I’m testing it for a bit before rolling it out or migrating wholly over to HTTPS). My domain is a ######.xyz, which is cheap.

For remote access I use Tailscale. For friends and family I give them a relay [a Raspberry Pi with nginx that proxies them over Tailscale] that sits on their home network; that way they need “something they have” [the relay] and “something they know” [login credentials] to get at my stuff. I won’t implement biometrics for “something they are”. This is post hoc justification though, and nonsense to boot. I don’t want to expose a port, a VPS has low WAF, and I’m not installing Tailscale on all of their devices, so a relay is an unhappy compromise.

For bonus points I run Pi-hole to pretty up the domain names to service.swirl and run a Homarr instance so no one needs to remember anything except home.swirl, but if they do remember immich.swirl that works too.

If there are many ways to skin a cat, I believe I chose to use a spoon; don’t be like me. Updating each Dockge instance is a couple of minutes, and updating DietPi is a few minutes more, which, individually, is not a lot of my weekly/monthly maintenance respectively. But on aggregate… I have checklists. One day I’ll write a script that will ssh into a machine > update/upgrade the OS > docker compose pull/rebuild/purge > move on to the next relay… That’ll be my impetus to learn how to write a script.
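That “one day” script can be sketched in a few lines of bash. Everything here is a hypothetical placeholder — the hostnames, the `~/stacks` compose directory, and the exact update commands would need adjusting; `DRY_RUN=1` only prints what would run instead of actually ssh-ing anywhere:

```shell
#!/bin/bash
# Sketch of the per-relay maintenance loop described above.
# HOSTS and the remote compose path are hypothetical placeholders.
HOSTS=("relay1.local" "relay2.local")
DRY_RUN=1
PLANNED=()

for host in "${HOSTS[@]}"; do
  # update the OS, pull and restart the compose stack, then clean up old images
  cmd='sudo apt update && sudo apt upgrade -y && cd ~/stacks && docker compose pull && docker compose up -d && docker image prune -f'
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run on $host: $cmd"
    PLANNED+=("$host")
  else
    ssh "$host" "$cmd"
  fi
done
```

With `DRY_RUN=0` this assumes passwordless ssh keys and passwordless sudo on each relay; otherwise it will stop and prompt at every hop.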

@a_fancy_kiwi@lemmy.world
creator

That’ll be my impetus to learn how to write a script.

This part caught my eye. You were able to do all that other stuff without ever attempting to write a script? That’s surprising and awesome. Assuming you are running everything on a Linux server, I feel like a bash script run via a cronjob would be your best bet; no need to ssh into the server, just let it do it on its own. I haven’t tested any of this, but I do have scripts I wrote that do automatic ZFS backups and scrubs. The order should go something like:

open the terminal on the server and type

mkdir scripts

cd scripts

nano docker-updates.sh

type something along the lines of this (I’m still learning docker so adjust the commands to your needs)

#!/bin/bash

cd /path/to/compose-project   # the directory containing your docker-compose.yml (you can't cd into the file itself)
docker compose pull && docker compose up -d
docker image prune -f

save the file and then type sudo chmod +x ./docker-updates.sh to make it executable

and finally set up a cronjob to run the script at specific intervals. type

crontab -e

or

sudo crontab -e (this is if you want to run the script as root but ideally, you just add your user to the docker group so this shouldn’t be needed)

and at the bottom of the file type this and save, that’s it:

# runs script at 1am on the first of every month
0 1 1 * * /path/to/scripts/docker-updates.sh

this website will help you choose a different interval

For OS updates you basically do the same thing, except the script would look something like this. (I forget if you need to type “sudo” or not; it’s running as root so I don’t think you need it, but try putting sudo in front of both "apt"s if it’s not working. Also, use whatever package manager you have if you aren’t using apt.)

while in the scripts folder you created earlier

nano os-updates.sh

#!/bin/bash

apt update && apt upgrade -y
reboot now

save and don’t forget to make it executable

then use

sudo crontab -e (because you’ll need root privileges to update. this will run the script as root without requiring you to input your password)

# runs script at 12am on the first of every month
0 0 1 * * /path/to/scripts/os-updates.sh
@Fedegenerate@lemmynsfw.com

I did think about cron but, long ago, I heard it wasn’t best practice to update through cron because the lack of logs makes it difficult to see where things went wrong, when they do.

I’ve got automatic upgrades running on stuff so it’s mostly fine. Dockge is running purely to give me a way to upgrade docker images without having to ssh. It’s just the monthly routine of “apt update && apt upgrade -y” *5 that sucks.

Thank you for the advice though. I’ll probably set cron to update the images with the script as you suggest. I have a “maintenance” Homarr page as a budget Uptime Kuma, so I can quickly look there to make sure everything is pinging at least. I made the page so I can quickly get to everyone’s Dockge, Pi-hole and nginx, but the pings were a happy accident.

@a_fancy_kiwi@lemmy.world
creator

the lack of logs

That’s the best part: with a script, you can pipe the output of the updates into a log file you create yourself. I don’t currently do that; if something breaks, I just roll back to a previous snapshot and try again later. But it’s possible and seemingly straightforward.
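As a sketch of what that logging could look like (the log path here is an arbitrary choice, and the real docker commands are left as a commented placeholder):

```shell
#!/bin/bash
# Hypothetical log location -- pick whatever path suits your setup.
LOG="/tmp/docker-updates.log"
{
  echo "=== update run started: $(date) ==="
  # docker compose pull && docker compose up -d && docker image prune -f
  echo "=== update run finished: $(date) ==="
} >> "$LOG" 2>&1
```

Alternatively, the redirect can live in the crontab entry itself, e.g. `0 1 1 * * /path/to/scripts/docker-updates.sh >> /var/log/docker-updates.log 2>&1`, and the script stays untouched.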

This askubuntu link will probably help

All good info. Thank you kindly.

@slowmotionrunner@lemmy.sdf.org

5). Hey OP, don’t worry, this can seem kind of scary at first, but it is not that difficult. I’ve skimmed some of the other comments and there are plenty of good tips here.

2). Yes, you will want your own domain and there is no fear of other people “knowing it” if you have everything set up correctly.

1b). Any cheap VPS will do and you don’t need to worry about it being virtualized rather than dedicated. What you really care about is bandwidth speed and limits because a reverse proxy is typically very light on resources. You would be surprised how little CPU/memory it needs.

1a). I use a cheap VPS from RackNerd. Once you have access to your VPS, just install your proxy directly into the OS or in Docker. Whichever is easier. The most important thing for choosing a reverse proxy is automatic TLS/Let’s Encrypt. I saw a comment from you about certbot… don’t bother with all that nonsense. Either Traefik, Caddy, or Nginx Proxy Manager (not vanilla Nginx) will do all this for you–I personally use Traefik unless for some reason I can’t. Way less headaches. The second most important thing to decide is how your VPS in the cloud will connect back to your home securely… I personally use Tailscale for that and it works perfectly fine.

3). Honestly, I think Fail2Ban and geo restrictions are overdoing it. Fail2ban has never gotten me any lift, because any sort of modern brute-force attack will come from a botnet with thousands of unique IPs — never triggering Fail2ban, because there are no repeat offenders. Just ensure your VPS has a firewall enabled and you know what ports you are exposing from Docker and you should be good. If your services don’t natively support authentication, look into something like Authelia or Authentik. Rather than Fail2Ban and/or geo restrictions, I would be more inclined to suggest a WAF like Caddy WAF before I reached for geo restrictions. Again, assuming your concern is security, a WAF would do way more for you than IP restrictions, which are easily circumvented.

4). Have fun!

EDIT: formatting

@a_fancy_kiwi@lemmy.world
creator

I appreciate the info, thanks

@markstos@lemmy.world

It doesn’t improve security much to host your reverse proxy outside your network, but it does hide your home IP if you care.

If your app can be exploited over the web and through a proxy, it doesn’t matter whether that proxy is on the same machine or across the network.

@jimmy90@lemmy.world

nixos with nginx services does all proxying and ssl stuff, fail2ban is there as well

@a_fancy_kiwi@lemmy.world
creator

I know I should learn NixOS, I even tried for a few hours one evening but god damn, the barrier to entry is just a little too high for me at the moment 🫤

@jimmy90@lemmy.world

i guess you were able to install the os ok? are you using proxmox or regular servers?

i can post an example configuration.nix for the proxy and container servers that might help. i have to admit debugging issues with configurations can be very tricky.

in terms of security i was always worried about getting hacked. the only protection for that was to make regular backups of data and config so i can restore services, and to create a DMZ behind my ISP router with a VLAN switch and a small router just for my services, to protect the rest of my home network

@a_fancy_kiwi@lemmy.world
creator

i guess you were able to install the os ok? are you using proxmox or regular servers?

I was. It was learning the Nix way of doing things that was just taking more time than i had anticipated. I’ll get around to it eventually though

I tried out proxmox years ago but besides the web interface, I didn’t understand why I should use it over Debian or Ubuntu. At the moment, I’m just using Ubuntu and docker containers. In previous setups, I was using KVMs too.

Correct me if I’m wrong, but don’t you have to reboot every time you change your Nix config? That was what was painful. Once it’s set up the way you want, it seemed great but getting to that point for a beginner was what put me off.

I would be interested to see the config though

@jimmy90@lemmy.world

this is my container config for element/matrix. podman containers do not run as root, so you have to get the file privileges right on the volumes mapped into the containers. i used top to find out which user the services were running as. you can see there are some settings there where you can change the user if you are having permission problems




{ pkgs, modulesPath, ... }:

{

  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = false;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };

  programs.zsh.enable = true;

  environment.etc = {
    "fail2ban/filter.d/matrix-synapse.local".text = pkgs.lib.mkDefault (pkgs.lib.mkAfter ''
      [Definition]
      failregex = .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Failed password login.*
                  .*POST.* - <HOST> - 8008.*\n.*\n.*Got login request.*\n.*Attempted to login as.*\n.*Invalid username or password.*
    '');
  };

  services.fail2ban = {
    enable = true;
    maxretry = 3;
    bantime = "10m";
    bantime-increment = {
      enable = true;
      multipliers = "1 2 4 8 16 32 64";
      maxtime = "168h";
      overalljails = true;
    };
    jails = {
      matrix-synapse.settings = {
        filter = "matrix-synapse";
        action = "%(known/action)s";
        logpath = "/srv/logs/synapse.json.log";
        backend = "auto";
        findtime = 600;
        bantime  = 600;
        maxretry = 2;
      };
    };
  };

  virtualisation.oci-containers = {
    containers = {

      postgres = {
        autoStart = false;
        environment = {
          POSTGRES_USER = "XXXXXX";
          POSTGRES_PASSWORD = "XXXXXX";
          LANG = "en_US.utf8";
        };
        image = "docker.io/postgres:14";
        ports = [ "5432:5432" ];
        volumes = [
          "/srv/postgres:/var/lib/postgresql/data"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };

      synapse = {
        autoStart = false;
        environment = {
          LANG = "C.UTF-8";
#          UID="0";
#          GID="0";
        };
 #       user = "1001:1000";
        image = "ghcr.io/element-hq/synapse:latest";
        ports = [ "8008:8008" ];
        volumes = [
          "/srv/synapse:/data"
        ];
        log-driver = "json-file";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--log-opt" "max-size=10m" "--log-opt" "max-file=1" "--log-opt" "path=/srv/logs/synapse.json.log"
          "--pull=newer"
        ];
        dependsOn = [ "postgres" ];
      };

      element = {
        autoStart = true;
        image = "docker.io/vectorim/element-web:latest";
        ports = [ "8009:80" ];
        volumes = [
          "/srv/element/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
#        dependsOn = [ "synapse" ];
      };

      call = {
        autoStart = true;
        image = "ghcr.io/element-hq/element-call:latest-ci";
        ports = [ "8080:8080" ];
        volumes = [
          "/srv/call/config.json:/app/config.json"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };

      livekit = {
        autoStart = true;
        image = "docker.io/livekit/livekit-server:latest";
        ports = [ "7880:7880" "7881:7881" "50000-60000:50000-60000/udp" "5349:5349" "3478:3478/udp" ];
        cmd = [ "--config" "/etc/config.yaml" ];
        entrypoint = "/livekit-server";
        volumes = [
          "/srv/livekit:/etc"
        ];
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };

      livekitjwt = {
        autoStart = true;
        image = "ghcr.io/element-hq/lk-jwt-service:latest-ci";
        ports = [ "7980:8080" ];
        environment = {
          LK_JWT_PORT = "8080";
          LIVEKIT_URL = "wss://livekit.XXXXXX.dynu.net";
          LIVEKIT_KEY = "XXXXXX";
          LIVEKIT_SECRET = "XXXXXX";
        };
        entrypoint = "/lk-jwt-service";
        extraOptions = [
          "--label" "io.containers.autoupdate=registry"
          "--pull=newer"
        ];
      };

    };
  };

}




@jimmy90@lemmy.world

you only need to reboot NixOS when something low-level has changed. i honestly don’t know where that line is drawn, so i reboot quite a lot when i’m setting up a Nix server and then hardly reboot it at all from then on, even with auto-updates running. oh, and if i make small changes to the services i just run sudo nixos-rebuild switch and don’t reboot

@jimmy90@lemmy.world

this is my nginx config for my element/matrix services

as you can see i am using a Proxmox NixOS with an old 23.11 nix channel, but i’m sure the config can be used in other NixOS environments


{ pkgs, modulesPath, ... }:

{
  imports = [
    (modulesPath + "/virtualisation/proxmox-lxc.nix")
  ];

  security.pki.certificateFiles = [ "/etc/ssl/certs/ca-certificates.crt" ];

  system.stateVersion = "23.11";
  system.autoUpgrade.enable = true;
  system.autoUpgrade.allowReboot = true;

  nix.gc = {
    automatic = true;
    dates = "weekly";
    options = "--delete-older-than 14d";
  };

  networking.firewall.allowedTCPPorts = [ 80 443 ];

  services.openssh = {
    enable = true;
    settings.PasswordAuthentication = true;
  };

  users.users.XXXXXX = {
    isNormalUser = true;
    home = "/home/XXXXXX";
    extraGroups = [ "wheel" ];
    shell = pkgs.zsh;
  };

  programs.zsh.enable = true;

  security.acme = {
    acceptTerms = true;
    defaults.email = "XXXXXX@yahoo.com";
  };

  services.nginx = {
    enable = true;

    virtualHosts._ = {
      default = true;
      extraConfig = "return 500; server_tokens off;";
    };

    virtualHosts."XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;

      locations."/_matrix/federation/v1" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };

      locations."/" = {
        extraConfig = "return 302 https://element.XXXXXX.dynu.net;";
      };

      extraConfig = "proxy_http_version 1.1;";
    };

    virtualHosts."matrix.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;

      extraConfig = "proxy_http_version 1.1;";

      locations."/" = {
        proxyPass = "http://192.168.10.131:8008";
        extraConfig = "client_max_body_size 300M;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header X-Forwarded-Proto $scheme;";
      };
    };

    virtualHosts."element.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8009/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."call.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:8080/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."livekit.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;

      locations."/wss" = {
        proxyPass = "http://192.168.10.131:7881/";
#        proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };

      locations."/" = {
        proxyPass = "http://192.168.10.131:7880/";
#        proxyWebsockets = true;
        extraConfig = "proxy_http_version 1.1;" +
          "proxy_set_header X-Forwarded-For $remote_addr;" +
          "proxy_set_header Host $host;" +
          "proxy_set_header Connection \"upgrade\";" +
          "proxy_set_header Upgrade $http_upgrade;";
      };
    };

    virtualHosts."livekit-jwt.XXXXXX.dynu.net" = {
      enableACME = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:7980/";
        extraConfig = "proxy_set_header X-Forwarded-For $remote_addr;";
      };
    };

    virtualHosts."turn.XXXXXX.dynu.net" = {
      enableACME = true;
      http2 = true;
      addSSL = true;
      locations."/" = {
        proxyPass = "http://192.168.10.131:5349/";
      };
    };

  };
}




@jimmy90@lemmy.world

yeah proxmox is not necessary unless you need lots of separate instances to play around with

@jimmy90@lemmy.world

i have found this reference very useful https://mynixos.com/options/

@teuto@lemmy.teuto.icu

I use a central nginx container to redirect to all my other services, using a wildcard Let’s Encrypt cert for my internal domain from acme.sh, and I access it all externally through a Tailscale exit node. The only publicly accessible service that I run is my Lemmy instance. That uses a Cloudflare tunnel and is isolated in its own VLAN.

TBH I’m still not really happy having any externally accessible service at all. I know enough about security to know that I don’t know enough to secure against much anything. I’ve been thinking about moving the Lemmy instance to a vps so it can be someone else’s problem if something bad leaks out.

@a_fancy_kiwi@lemmy.world
creator

wildcard let’s encrypt cert

I know what “wildcard” and “let’s encrypt cert” are separately but not together. What’s going on with that?

How do you have your tailscale stuff working with ssl? And why did you set up ssl if you were accessing via tailscale anyway? I’m not grilling you here, just interested.

I know enough about security to know that I don’t know enough to secure against much anything

I feel that. I keep meaning to set up something like nagios for monitoring and just haven’t gotten around to it yet.

@teuto@lemmy.teuto.icu

So when I ask Let’s Encrypt for a cert, I ask for *.int.teuto.icu instead of specifically jellyfin.int.teuto.icu; that way I can use the same cert for any internally running service. Mostly I use SSL on everything to make browsers complain less. There isn’t much security benefit on a local network. I suppose it makes it harder to spoof on an external network, but I don’t think that’s a serious threat for a home net. I used to use home.lan for all of my services, but that has the drawback of redirecting to a search by default on most browsers. I have my tailscale exit node running on my router, and it just works with SSL like anything else.

@a_fancy_kiwi@lemmy.world
creator

Ok so I currently have a cert set up to work with:

domain.com

www.domain.com (some browsers seemingly didn’t like it if I didn’t have www)

subdomain.domain.com

Are you saying I could just configure it like this:

domain.com

*.domain.com

The idea of not having to keep updating the cert with new subdomains (and potentially break something in the process) is really appealing

Yes. If you’re using Let’s Encrypt, note that they do not support wildcard certs with the HTTP-01 challenge type; you will need to use the DNS-01 challenge type. To use it you need a domain registrar that supports API DNS updates, like Cloudflare, and then you can use the acme.sh package. Here is an example guide I found.

Note that you could still request multiple explicit subdomains in the same issue/renew commands, so it’s not a huge deal either way, but the wildcard will be more seamless in the future if you don’t know what other services you might want to self-host.
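For what it’s worth, a wildcard issue via acme.sh and Cloudflare’s DNS API boils down to something like this (sketch only — the token value and domain are placeholders; `dns_cf` is acme.sh’s Cloudflare hook, which reads the `CF_Token` environment variable):

```shell
# Hypothetical values -- CF_Token needs DNS edit rights for the zone
export CF_Token="your-cloudflare-api-token"
acme.sh --issue --dns dns_cf -d domain.com -d '*.domain.com'
```

acme.sh then installs a cron entry to renew automatically, so the DNS-01 dance happens without further manual steps.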

@a_fancy_kiwi@lemmy.world
creator

awesome, thanks for the info

@foggy@lemmy.world

Don’t fret, not even Microsoft does.

You’re not as valuable a target as Microsoft.

It’s just about risk tolerance. The only way to avoid risk is to not play the game.

I use traefik with a wildcard domain pointing to a Tailscale IP for services I don’t want to be public. For the services I want to be publicly available I use cloudflare tunnels.

@powermaker450@discuss.tchncs.de

if you know/use docker, the solution that has been the most straightforward for me is SWAG. the setup process is fairly easy when combined with registering your domain with Porkbun, as they allow the free API access needed for obtaining top-level (example.com) as well as wildcard (*.example.com) SSL certificates.

along with that, exposing a new service is fairly easy with the plethora of already included nginx configs for services like Nextcloud, Syncthing, etc.

Orbituary

Nginx Proxy Manager + LetsEncrypt.

@486@lemmy.world

or a domain with a random string of characters so no one could reasonably guess it? Does it matter?

That does not work. As soon as you get SSL certificates, expect the domain name to be public knowledge, especially with Let’s Encrypt and all other certificate authorities with transparency logs. As a general rule, don’t rely on something to be hidden from others as a security measure.

@a_fancy_kiwi@lemmy.world
creator

Damn, I didn’t realize they had public logs like that. Thanks for the heads up

@foggy@lemmy.world

https://crt.sh would make anyone who thought obscurity was a solution poop themselves.

@Breve@pawb.social

It is possible to get wildcard certificates from Let’s Encrypt, which doesn’t give anyone information on which subdomains are valid, since your reverse proxy handles that. Still arguably security through obscurity, but it does make things substantially harder for anyone who can’t intercept traffic between the client and server.

@tritonium@midwest.social

Why do so many people do this incorrectly? Unless you are actually serving the public, you don’t need to open anything other than a WireGuard tunnel. My phone automatically connects to WireGuard as soon as I disconnect from my home WiFi, so I have access to every single one of my services and only have to expose one port and one service.

If you are going through setting up Caddy or Nginx Proxy Manager or anything else and you’re not serving the public… you’re dumb.

@RyeMan@lemmy.world

What are you using to auto connect to VPN when you disconnect from your home wifi?

@tritonium@midwest.social

I set up Tasker to do it before there were any other options, but now there are apps that will handle this. I’ve not tried them because my Tasker script works perfectly, but I’ve noticed this one browsing F-Droid and it looks appealing: WG Auto Connect - https://f-droid.org/en/packages/de.marionoll.wgautoconnect/

@TieDyePie@lemmy.world

Tasker on Android; a bit faffy, and shouldn’t be necessary at all

@MMAniacle@lemm.ee

The Wireguard iOS app has an “on-demand” toggle that automatically connects when certain conditions are met (on cellular, on wifi, exclude certain networks, etc)

@g_damian@lemmy.world

WG Tunnel does that natively; you can whitelist some WiFi networks and auto-connect on others, and optionally on mobile data.

For point number 2, security through obscurity is not security.
Besides, all issued certificates are logged publicly. You can search them here https://crt.sh

Nginx Proxy Manager is easy to set up and will do LE acme certs, has a nice GUI to manage it.

If it’s just access to your stuff for people you trust, use tailscale or wireguard (or some other VPN of your choice) instead of opening ports to the wild internet.
Much less risk

@ikidd@lemmy.world

Tailscale is completely transparent on any devices I’ve used it on. Install, set up, and never look at it again because unless it gets turned off, it’s always on.

@emptiestplace@lemmy.ml

relatable

@a_fancy_kiwi@lemmy.world
creator

I’ve run into a weird issue where, on my phone, Tailscale will disconnect and refuse to reconnect for a seemingly random amount of time, usually less than an hour. It doesn’t happen often, but it happens often enough that I’ve started to notice. I’m not sure if it’s a network issue or an app issue, but during that time I can’t connect to my services. All that to say, my tolerance for that is higher than my partner’s; the first time something didn’t work, they would stop using it lol

@ikidd@lemmy.world

So I have it running on about 20 phones for customers of mine that use Blue Iris with it. But these are all Apple devices; I’m the only one with Android. I’ve never had a complaint, except one person who couldn’t get on at all, and we found that for some reason the Blue Iris app was blacklisted in the network settings from using the VPN. But that’s the closest I’ve seen to your problem.

I wonder if setting up a ping every 15 seconds from the device to the server would keep the tunnel active and prevent the disconnect. I don’t think Tailscale has a keepalive function the way a WireGuard connection does. If that’s too much of a pain, you might want to just implement WireGuard yourself, since you can set a PersistentKeepalive value and the tunnel won’t go idle. Tailscale is probably trying to reduce their overhead, so they don’t include a keepalive.
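For reference, that keepalive is a single line in the WireGuard peer config (sketch; the key, endpoint, and subnet are placeholders):

```ini
[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 192.168.1.0/24
# send a packet every 25 s so NAT mappings stay open and the tunnel never idles
PersistentKeepalive = 25
```

25 seconds is the value the WireGuard documentation suggests for peers behind NAT; anything shorter just burns battery.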

@yak@lmy.brx.io

I came here to upvote the post that mentions haproxy, but I can’t see it, so I’m resorting to writing one!

Haproxy is super fast and highly configurable, and if you don’t have the config nailed down just right it won’t start, so you know you’ve messed something up right away :-)

It will handle encryption too, so you don’t need to bother changing the config on your internal server, just tweak your firewall rules to let whatever box you have haproxy running on (you have a DMZ, right?) see the server, and you are good to go.

Google and stackexchange are your friends for config snippets. And I find the actual documentation is good too.

Configure it with certificates from let’s encrypt and you are off to the races.
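To give a flavor of what that config looks like, here is a minimal sketch (hostname, backend IP/port, and cert path are all placeholders — Immich’s default port is assumed here, not taken from the poster’s setup):

```text
# haproxy.cfg fragment -- terminates TLS and proxies to an internal box
frontend https-in
    bind *:443 ssl crt /etc/haproxy/certs/example.com.pem
    acl is_immich hdr(host) -i immich.example.com
    use_backend immich if is_immich

backend immich
    server immich1 192.168.1.50:2283 check
```

The `.pem` file is the full chain plus private key concatenated, which is what haproxy expects from a Let’s Encrypt issuance.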

Why is it too much to ask your partner to use WireGuard? I installed WireGuard for my wife on her iPhone; she can access everything on our home network as if she were at home, and she doesn’t even know that she’s using a VPN.

@a_fancy_kiwi@lemmy.world
creator

A few reasons

  1. My partner has plenty of hobbies but sys-admin isn’t one of them. I know I’ll show them how to turn off WireGuard to troubleshoot why “the internet isn’t working”, but eventually they would forget. Shit happens; sometimes servers go down, and sometimes turning off WireGuard would allow the internet to work lol
  2. I’m a worrier. If there was an emergency and my partner needed to access the internet but couldn’t — because my DNS server went down, my WireGuard server went down, my ISP shit the bed, our home power went out, etc., and they forgot about the VPN — I’d feel terrible.
  3. I was a little too ambitious when I first got into self-hosting. I set up services and shared them before I was ready, and ended up resetting them constantly for various reasons. For example, my Plex server is on its 12th iteration. My partner is understandably wary of trying stuff I’ve set up. I’m at a point where I don’t introduce them to a service I set up unless accessing it is no different from using an app (like the Home Assistant app) or visiting a website. That intermediary step of ensuring the VPN is on and functional before accessing the service is more than I’d prefer to ask of them.

Telling my partner to visit a website seems easy; they visit websites every day. But they don’t use a VPN every day, and they don’t care to.

  1. I don’t think this is a problem with tailscale, but you should check. Also, you don’t have to pipe all your traffic through the tunnel: in AllowedIPs you can specify only your home subnet so that everything else leaves via the default gateway.
  2. In the DNS field of your WireGuard config you can specify anything; it doesn’t have to be an RFC 1918 (private) address. 1.1.1.1 will work too.
  3. At the end of the day, a threat model is always gonna be security vs. convenience. Plex was used as an attack vector in the past, as most people don’t rush to patch it (and rightfully so; there are countless horror stories of PMS updates breaking the whole thing entirely). If you trust that you know what you’re doing, and trust the applications you’re running to treat security seriously (hint: Plex doesn’t), then go ahead: set up your reverse proxy server of choice (easiest would be Traefik, but if you need more robustness then nginx is still king) and open 443 to the internet.
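For points 1 and 2 above, a client config might look something like this sketch. The keys, addresses, endpoint, and subnet are all placeholders:

```ini
; WireGuard client config sketch (all values are placeholders)
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/32
; any resolver works here, not just an internal one
DNS = 1.1.1.1

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.com:51820
; only route the home subnet (and the tunnel subnet) through the VPN;
; everything else uses the phone's default gateway
AllowedIPs = 192.168.1.0/24, 10.0.0.0/24
```

With AllowedIPs scoped like this, a home-server outage only breaks access to those subnets; general browsing keeps working even with the tunnel up.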
@GreenKnight23@lemmy.world

deleted by creator

@a_fancy_kiwi@lemmy.world (creator)

I get where the original commenter is coming from. A VPN is easy to use, so why not have my partner just use the VPN? But try adding something to your routine that you don’t care about or aren’t interested in. It’s an uphill battle, and not every hill is worth dying on.

All that to say, I appreciate your comment.

@swerler@lemm.ee

I use Nginx Proxy Manager and Let’s Encrypt with a Porkbun domain; it was very easy to set up for me. Never tried Caddy/Traefik/etc though. Geo-blocking happens on my OPNsense box with the built-in tools.

Do you have instructions on how you set that up?

@swerler@lemm.ee

At a high level, you forward ports 80 and 443 from your router to NPM. In NPM you set up each proxy host by IP address and port, and you can have it issue SSL certs automatically via Let’s Encrypt when you create the proxy. I also run a DDNS auto-update that tells Porkbun if my IP changes. I’d be happy to get into more specifics if there’s a particular spot you’re stuck. This all assumes you have a public IPv4 address and aren’t behind CGNAT. If you are behind CGNAT you’re not totally fucked, but it makes things more complicated. If it’s OPNsense-related struggles, that shit is mysterious to me; I’ve only been running it a few weeks and it’s not fully configured. Still learning.
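The NPM side of this is usually run as a container. A sketch of a commonly used compose file (ports 80/443 forwarded from the router land here; volume paths are placeholders, and the admin UI on 81 should stay LAN-only):

```yaml
# docker-compose.yml sketch for Nginx Proxy Manager
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"     # HTTP (Let's Encrypt challenges, redirects)
      - "443:443"   # HTTPS for the proxied services
      - "81:81"     # NPM admin UI -- do NOT forward this from the router
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

After `docker compose up -d`, the proxy hosts and certificates are configured through the web UI on port 81.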

@BaroqueInMind@lemmy.one

Why am I forwarding all http and https traffic from WAN to a single system on my LAN? Wouldn’t that break my DNS?

You would be forwarding ingress traffic (traffic not originating from your internal network) to 443/80. This doesn’t affect egress requests (requests from users inside your network to external sites), so it wouldn’t break your internal DNS resolution. All traffic heading to your router from outside origins is pushed to your reverse proxy, where you can route it however you please to whatever machine/port your apps live on.
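In nginx terms, the routing being described looks roughly like this. The domain, cert paths, and backend address are placeholders:

```nginx
# one server block per subdomain; nginx picks the block by SNI/Host header
server {
    listen 443 ssl;
    server_name immich.example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # forward to the internal machine running the app
        proxy_pass http://192.168.1.10:2283;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Requests for hostnames you haven’t configured simply don’t match any block, which is why the proxy “doesn’t really do anything” for other traffic.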

@swerler@lemm.ee

The reverse proxy is the single system because it tells incoming traffic where to go. It also doesn’t really do anything unless the incoming traffic is requesting one of the domains you set up, and it doesn’t affect your internal DNS. You can also point the public domain at your internal server through your local DNS, though.
