
I would say the more regular expiration and renewal of an LE cert is better.
It’s an ongoing check instead of an annual check.


At the homelab scale, proxmox is great.
Create a VM, install docker and use docker compose for various services.
Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
Have Proxmox take regular snapshots/backups of the VMs.
Every now and then, copy those backups onto an external USB hard drive.
Take snapshots before, during and after tinkering so you have checkpoints to restore to. Copy the latest snapshot onto an external USB drive once you are happy with the tinkering.

Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
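
As a rough example of the kind of thing that lives in that repo (the service name, image and paths here are made-up placeholders, not anything you specifically need):

```yaml
# docker-compose.yml - hypothetical single-service stack kept in the git repo,
# with a short README next to it explaining how to bring it up
services:
  whoami:
    # tiny demo container; swap in whatever service you actually run
    image: traefik/whoami:latest
    restart: unless-stopped
    ports:
      - "8080:80"       # host port 8080 -> container port 80
    volumes:
      - ./data:/data    # illustrative bind mount; real services keep config/data here
```

Bring it up with `docker compose up -d` from the directory containing the file.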

Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

That’s all you really need to do.
At some point, you will run into an issue or limitation. Then you have to solve for that problem, update your VMs, compose files, config files, readmes and git repo.
Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

The need to automate any of the above will become apparent when tinkering stops being fun.

The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.


Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self hosting vaultwarden.

However, all these hoops you have had to jump through are excellent learning experiences that you can apply to more of your self-hosted setup.

Reverse proxies are the backbone of hosting and services these days.
Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.

The most annoying thing about a lot of these is that tutorials are “minimal viable setup” sorta things.
Like “now you have it setup, make sure you tune it for production” and it just ends.
And other tutorials that talk about the next step, to get things production ready, often reference outdated versions or have different core setups, so they don’t quite apply.

I understand your frustrations.


Nano is useful because it is everywhere.
There are better editors, but being familiar with nano and its shortcuts means you can edit files pretty much anywhere.
Same with knowing the basics of vim (like being able to edit, exit and save).


If your windows computer makes an outbound connection to a server that is actively exploiting this, then yes: you will suffer.

But if your windows computer is chilling behind a network firewall that only forwards established ipv6 traffic (like 99.9999% of default routers/firewalls), then you are extremely, extremely ultra unlucky to be hit by this (or you are such a high-value target that it’s likely government-level exploits). Or you are an idiot visiting dodgy websites or running dodgy software.

Once a device on a local network has been successfully exploited for the RCE to actually gain useful code execution, then yes: the rest of your network is likely compromised.
Classic security in layers. Isolation/layering of risky devices (that’s why my homelab is on a different vlan than my home network).
And even if you don’t realise your windows desktop has been exploited (I really doubt that this is a clean exploit, you would probably notice a few BSODs before they figure out how to backdoor it), it then has to actually exploit your servers.
Even if they turn your desktop into a botnet node, that will very quickly be cleaned out by windows defender.
And I doubt that any attacker will have time to actually turn this into a useful and widespread exploit, except in targeting high value targets (which none of us here are. Any nation state equivalent of the US DoD isn’t lurking on Lemmy).

It comes back to: why are you running windows as a server?

ETA:
The possibility that high-value targets are exposing windows servers on IPv6 via public addresses is what makes this CVE rated so high.
Sensible people and sensible companies will be using Linux.
Sensible people and sensible companies will be very closely monitoring what’s going on with windows servers exposed by ipv6.
This isn’t an “ipv6 exploit”. This is a windows exploit. Of which there have been MANY!


If the router/gateway/network firewall (IE not a local one) is blocking forwarding of unknown IPv6 traffic, then it’s a compromised server that you connect to via IPv6 that has the ability to leverage the exploit (IE your windows client connecting to a compromised server that is actively exploiting this IPv6 CVE).

It’s not like having IPv6 enabled on a windows machine automatically makes it instantly exploitable by anyone out there.
Routers/firewalls will only forward IPv6 for established connections, so your windows machine has to connect out.

Unless you are specifically forwarding to a windows machine, at which point you are intending that windows machine to be a server.

Essentially the same as some exploit in some service you are exposing via NAT port forwarding.
Maybe a few more avenues of exploit.

Like I said. Why would a self-hoster or homelabber use windows for a public facing service?!


How many people are running public facing windows servers in their homelab/self-hosted environment?

And “it’s worked so far” isn’t a great reason to ignore new technology.
IPv6 is useful for public facing services. You don’t need a single proxy that covers all your http/s services.
It’s also significantly better for P2P applications, as you no longer need to rely on NAT traversal bodges or insecure UPnP-type protocols.

If you are unlucky enough to be on IPv4 CGNAT but have IPv6 available, then you are no longer sharing reputation with everyone else on the same public IPv4 address. Also, IPv6 means you can get public access instead of having to rely on some RPoVPN solution.


I thought T568B at each end was standard practice these days



The benefit of using config files is easy version management via git.
Makes it easy to rebuild from scratch and easy to rollback a change that breaks something


If you think coal mines are bad, wait till you see the conditions in the Alexa mines!


Other services will be reflected by active DNS records.

If the only DNS record points to a “Buy this domain” webpage, I think it’s fair to argue that is misuse.
Doubly so if it turns out many unrelated domains are owned by and point to the same webpage, and it’s just doing a JS hostname thing to make it seem relevant to the current address.


Transferring a domain from one registrar (IE reseller) to another can be a pain, but yes you can - it normally involves a fee and manual actions from the registrars.
As long as the new registrar supports the TLD. A few Geo-TLDs can only be resold/managed by some registrars.

The easiest thing to do is to point the domain at ClouDNS nameservers.
Make sure you are happy with ClouDNS (I’ve never had issues with them) etc before committing


I think the headcanon is that the shortest distance is impressive.
Either a different, faster and harder route through “the Kessel”. Or that 12 parsecs is the absolute minimum distance it can be done in, perfectly apexing every corner.


Nginx Proxy Manager is probably perfect for you.
Pick a domain (like mylab.home or something), and set up your home network to resolve that domain to your docker host’s IP.
NPM will do self-signed certs. So, you will get a “warning, HTTPS is insecure” kinda page when you visit it. You could import NPM’s root cert into your OS/browser so it trusts it (or set up a “don’t warn for this domain” exception or something).

If you don’t want per-client config to trust it, then you need to buy a domain, use a DNS provider that supports the Let’s Encrypt DNS challenge, and grab certs that way (which means you don’t need a publicly accessible .well-known route exposed).
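
For reference, here is a minimal sketch of running NPM itself via compose (the image name, ports and volume paths below are the commonly documented defaults, double-check them against the current NPM docs):

```yaml
# docker-compose.yml - minimal Nginx Proxy Manager sketch
services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    ports:
      - "80:80"    # HTTP
      - "443:443"  # HTTPS
      - "81:81"    # admin web UI
    volumes:
      - ./data:/data                    # NPM config, self-signed certs, etc.
      - ./letsencrypt:/etc/letsencrypt  # only used if you go the Let's Encrypt route
```

Point your local DNS entries at the docker host, then add proxy hosts in the web UI on port 81.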


Supabase is a dockerised Postgres with user auth, a REST API and some other goodies. It’s maybe too complicated as a starter.
Appwrite might also work for ya. Much easier to get into, but also less feature complete.
Pocketbase might also work. Haven’t used it tho


You can do a reverse proxy on the VPS and use SNI routing (because the requested domain is sent in clear text during the TLS handshake), then use Proxy Protocol to attach the real source IP to the TCP packets.
This way, you don’t have to terminate HTTPS on the VPS, and you can load balance between a couple wireguard peers so you have redundancy (or direct them to different reverse proxies or whatever).
On your home servers, you will need an additional frontend (or frontends) that accepts Proxy Protocol from the VPS (as Proxy Protocol packets aren’t standard HTTP/S packets, so standard HTTPS reverse proxies will drop them as unknown/broken/etc).
This way, your home reverse proxy knows the original IP and can attach it to the decrypted HTTP requests as X-Forwarded-For. Or you can do ACLs based on the original client IP. Or whatever.
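
As a hedged sketch of the VPS side (no specific tool is implied above, so this assumes HAProxy in TCP mode; the hostname, IPs and wireguard peer address are placeholders):

```yaml
# docker-compose.yml on the VPS - sketch only, assumes the wireguard tunnel is already up
services:
  haproxy:
    image: haproxy:latest
    restart: unless-stopped
    ports:
      - "443:443"   # raw TLS passthrough; HTTPS is not terminated here
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro

# The mounted haproxy.cfg would contain roughly:
#   frontend tls_in
#       mode tcp
#       bind :443
#       tcp-request inspect-delay 5s
#       tcp-request content accept if { req.ssl_hello_type 1 }
#       use_backend home_a if { req.ssl_sni -i example.com }
#   backend home_a
#       mode tcp
#       server peer_a 10.8.0.2:443 send-proxy-v2   # wireguard peer, Proxy Protocol v2
```

On the home side, the reverse proxy’s 443 frontend then needs to accept Proxy Protocol (HAProxy’s `accept-proxy` bind option, or nginx’s `proxy_protocol` listen parameter) before it decrypts and adds X-Forwarded-For.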

I haven’t found a way to get a firewall that pays attention to Proxy Protocol TCP headers, but I haven’t found that to really be an issue. I don’t really have a use case


Oh man, spoilable items? Spoilable agriculture research packs?
That’s pretty intense


Sure, but what you are describing is the problem that k8s solves.
I’ve run plenty of production things from docker compose. Auto scaling hasn’t been a requirement, and HA was built into the application (so 2 separate VMs running the compose stack). Docker was perfect for it, and k8s would’ve been a sledgehammer.


It’s not a workaround.
In the old days, if you had 2 services that were hard coded to use the same network port, you would need virtualization or a different server and make sure the networking for those is correct.

Network ports allow multiple services to use the same network adapter, as a port is like a “sub” address.
Docker being able to remap host network ports to containers ports is a huge feature.
If a container doesn’t need to be accessed outside of the docker network, you don’t need to expose the port.

The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, haproxy, caddy etc for web things, I’m sure there are other application-aware reverse proxies).
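
A rough compose sketch of that pattern (the image names are placeholders): only the proxy publishes host ports, and the app stays on the internal docker network.

```yaml
services:
  proxy:
    image: nginx:alpine
    restart: unless-stopped
    ports:
      - "80:80"     # only the proxy's ports are remapped onto the host
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro  # routes by hostname/path to app:8080

  app:
    image: my-web-app:latest  # hypothetical service listening on 8080
    restart: unless-stopped
    # no "ports:" entry - only reachable from other containers on the
    # compose network, e.g. as http://app:8080 from the proxy
```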


Surely you want to enable 802.1Q? Like, that is VLAN-aware switching and routing. Or is that on the NAS?

Edit:
Some troubleshooting:

Connect a laptop into the same subnet as your NAS (so same VLAN and IP range/subnet) and connect to the NAS. This eliminates either the NAS or the router from the equation.


That whole “shortest path” has caught me out before (tho in a different way)!
And firewall logs of “state violation” aren’t always helpful when that’s pretty much the default log message


If they are on the same subnet, why are they going via the router? Surely the NIC/OS will know it’s a local address within its subnet, and will send it directly; as opposed to not knowing where to send the packet, so letting the router deal with it.

I’m assuming you are using a standard 24-bit subnet mask, because you haven’t provided anything that indicates otherwise, and the issue you present would be indicative of a local link being used - is this possible?


So, is public accessibility actually required?
Does it need to be exposed to the public internet?

Why not use wireguard (or another VPN)? Even easier is tailscale.
If you are hand-selecting users (IE it doesn’t actually need to be publicly accessible), then a VPN is the most secure option, and you can just run a reverse proxy behind it for ease & certs (rough sketch below).
Or set up client certificate authentication, so only users that install a certificate issued by you can connect to the service (dunno how that works for 3rd party apps for Immich).
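
As a rough sketch of that VPN route using the linuxserver.io wireguard image (the image name, env vars and sysctl below are from memory of their docs, treat them as assumptions and double-check):

```yaml
# docker-compose.yml - hypothetical wireguard "server" for hand-picked users
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
    environment:
      - SERVERURL=vpn.example.com  # placeholder public hostname/IP
      - PEERS=3                    # generates configs + QR codes for 3 clients
    ports:
      - "51820:51820/udp"          # the only port you forward from the internet
    volumes:
      - ./wg-config:/config        # generated server + peer configs live here
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
```

Clients import their generated peer config, and then Immich (or whatever) is only reachable over the tunnel, behind your normal internal reverse proxy.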

Like I asked, what is your actual threat model?
What are your requirements?
Is public accessibility actually required?


That got a bit long.
Reading more into bunkerweb.

Things like the “limit” feature are going to doink people on cgnat or large corporate networks. I’ve had security stuff tripped by a company using my software, and it’s a PITA cause all the requests from legit users come from only a few IP addresses.

Antibot isn’t going to be helpful for things like JS requests, because cookies aren’t included by default with fetch requests - so the application needs to be specifically built for this (at which point, do it at an application level so it can scale easier?).
And captcha. For whatever that is worth these days.

Reverse Scan is going to slow down every request (as it scans the remote client for suspicious open ports, so a 500ms delay as default).

Country is just geo-ip.

Bad Behaviour is just rate limiting (although with a 24h ban). Sucks if a few corporate/cgnat users all hit a 404 and suddenly that entire company/ISP’s IP is blocked for a day.

This seems like something to use when running a TOR server or something, where security is more important than user experience. Like, every feature seems to punish legit users


LE certs can always be “side loaded” by acme.sh or LEbot or whatever, and the reverse proxy restarted to use the new certs. So, the whole “pro subscription to use specific certs” shouldn’t be a factor, except a little more work/config (so, money Vs time).

Now for my opinion…

For base security, all it’s doing is looking at whatever you tell it to look at in an HTTP request, and forwarding/dropping/blocking as such.
HAProxy is well battle-tested. Nginx is well battle-tested. Traefik and caddy are comparably newer contenders, but considering their adoption they are probably well battle-tested.
Which means a setup behind an established reverse proxy is only going to be as secure as the software the proxy forwards traffic to.

If there happens to be some mental TLS handshake RCE that comes up, chances are they are all using the same underlying TLS library so all will be susceptible…
But at least an attacker only gets access to the reverse proxy server. Which is why it’s worth having that in a locked down isolated VM, ideally built in a way that is extremely easy to rebuild (declarative configs like docker-compose and some scripts, or even something like nixos for an immutable OS).

As for add-ons… Most WAFs only look for things like XSS injection or SQL injection or exploitative HTTP request formats. Very, very basic attack vectors that any decent HTTP stack and reasonably built software shouldn’t even have to worry about.
A DDoS is more likely to blast your network connectivity, which (for self hosting) a WAF isn’t going to be able to do anything about.
I’m not sure how good they actually are against a DOS attack that is caused by bugs/inefficiencies in the application. Maybe they monitor for long/increasing response times, and block further requests to them? Might cause a lot of false-positives for your users.

So, the only real benefit - that I see - is zero-day exploit protection… and that only matters if it is built around near-realtime updates like crowdsec is. I don’t know how it compares to Cloudflare’s WAF, tho.
Any zero-day protection that isn’t being managed and updated in near-realtime is about as effective as you monitoring news of your installed services/programmes and updating them regularly. Because you are likely to update your WAF and apps when you hear about those, or regular scheduled updates will deal with them before you even learn about them.

I guess there is security in layers, and if layers of security is more important than CPU consumption/response time/requests per second (ie have an abundance of processing, servicing few users, etc) then it might be a no-brainer.

The only other time I can see a generic WAF being useful is if you have rolled your own framework and HTTP stack, and are running your own software. Because, you won’t get that right… So might as well have the extra protection of a WAF.

Or, I guess, with really old unsupported software.
But surely there is a newer take or fork of it?

There is also the “am I worth it” factor.
Like, what is your actual threat model?
Defend against the usual script-based attacks (IE low hanging fruit), only expose/forward ports that are actually required, use some sensible security that isolates more vulnerable systems (IE a proxy) from more sensitive (ie a database or storage), and update regularly on stable/lts branches.

Edit:
I just googled bunkerweb.
First we had firewalls. Then we got web application firewalls. Along came next generation firewalls. Now we have Next Generation Web Application Firewalls with paid features like “Pay per protected services” and “Best effort support included”

Maybe I’m just salty


“If you pay for your VPN using crypto, then they can’t tie it to your name, when they’re reselling the traffic it’s harder to tie it to an identity.”

Surely that only works if you have personally mined the crypto yourself.
And if you only use that wallet for paying for the same VPN service.
Crypto isn’t anonymous, the ledger of all transactions (IE the Blockchain) can be read by anyone.


Training will never stop, tho.
New models will keep coming out, datasets and parameters are going to change.



Gateway is a more specific name for a server.
Like web host is a more specific name for a server.

A server isn’t anything fancy, it just serves a service.
If that is just a relay between your phone and local devices, that’s what it’s serving


Having multiple machines can protect against hardware failures.
If hardware fails, you have donor machines.
It’s good learning, both for provisioning and for the physical side (cleaning, customising, wiring, networking with multiple NICs), and for multi-node clusters.

Virt is convenient, but doesn’t teach you everything



It defends against the lowest level of automation. And if that is a legit threat in your model, you are going to have a bad time.
It’s just going to trip you up at some point


Just have 2 IPv4 addresses assigned to your server. Have 1 for all your services, and run SSH on the other, allowing root login with the password “admin”.
A random ipv6 in the same subnet as your server is just obscurity.

The XZ exploit would be functionally similar to allowing root login using the password “admin”.
Would doing that on a different port be secure? No? Then a different port is not security, it’s obscurity.

Obscurity is just going to trip you up at some point and reduce log chatter.

And yes, running LTSB/stable is a sensible choice for servers.


But scriptkiddies and automated scans are not a security threat. If they were a legitimate threat to your server, you have bigger problems.
All it does is reduce log chatter.

Anyone actually wanting in would port scan, then try and connect to each port, and quickly identify an SSH port


Changing ports does nothing except reduced log chatter.
Security through obscurity is not security


I use jerboa and it is working (I used the toolbar to generate it, but had to fix it because my mobile keyboard is a massive PITA for any corrections and I haven’t had time to find something new).
Anyway, looks like sync and boost are not lemmy-markdown-compatible


nasty things people do with AI [trigger warning]

“I went on to this stream because somebody gave me a heads up and I went on and heard my own voice reading rape porn. That’s the level of stuff we’ve had to deal with since this game came out and it’s been horrible, honestly.”

Amelia Tyler.

I cannot imagine going into a stream of someone playing a game you have poured your heart and soul into for years, and hearing your own voice reading stuff like that.

Edit: fixing spoiler tag.


Yeh, but I already have compose files and ansible things to set up a server.
And I’d have to figure out how health checks and depends-on works for that.

I’m sure it would give me an amazing experience, but I have all the tools and I can run them in isolation (ie I can install docker on any os I can SSH into)


I always think about using nixos. But considering I dockerise everything, I always end up using Debian.
Good old stable Debian