I put up a VPS with nginx, and within minutes the logs were showing dodgy requests. How do you guys deal with these?

Edit: Thanks for the tips everyone!

Ignore them, as long as your firewall is set up properly.

Teapot

Anything exposed to the internet will get probed by malicious traffic looking for vulnerabilities. Best thing you can do is to lock down your server.

Here’s what I usually do:

  • Install and configure fail2ban
  • Configure SSH to only allow SSH keys
  • Configure a firewall to only allow access to public services; if a service only needs to be accessible by you, whitelist your own IP or, alternatively, set up a VPN (rough commands for all of this below)
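
A rough sketch of those steps on a Debian/Ubuntu box (package names and ufw usage are assumptions, adjust for your distro; the whitelisted IP is just an example):

    # fail2ban with its default sshd jail
    sudo apt install fail2ban
    sudo systemctl enable --now fail2ban

    # key-only SSH: set these in /etc/ssh/sshd_config, then restart the service
    #   PasswordAuthentication no
    #   PermitRootLogin prohibit-password
    sudo systemctl restart ssh

    # firewall: expose only public services, whitelist yourself for the rest
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
    sudo ufw enable
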
@AES@lemmy.ronsmans.eu

I would suggest crowdsec and not fail2ban

Seconded. Not only is CrowdSec a hell of a lot more resource-efficient (Go vs. Python, IIRC), having it download a list of known bad actors for you in advance really cuts down on what it needs to process in the first place. I’ve had servers DDoSed just by fail2ban trying to process the requests.
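
If anyone wants to give it a go, a minimal sketch on Debian/Ubuntu (package and collection names as I remember them, so double-check against the CrowdSec docs):

    sudo apt install crowdsec                             # agent + log parser
    sudo apt install crowdsec-firewall-bouncer-iptables   # drops banned IPs at the firewall
    sudo cscli collections install crowdsecurity/nginx    # nginx parsers/scenarios
    sudo systemctl restart crowdsec
    sudo cscli decisions list                             # see who is currently banned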

Alfi

Hi,

Reading the thread, I decided to give it a go and went ahead and configured CrowdSec. I have a few questions, if I may. Here’s the setup:

  • I have set up the basic collections/parsers (mainly nginx/linux/sshd/base-http-scenarios/http-cve)
  • I only have two services open on the firewall, https and ssh (no root login, ssh key only)
  • I have set up the firewall bouncer.

If I understand correctly, any attack detected will result in the IP being banned via an iptables rule (for a configured duration, by default 4 hours).
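
In case it’s useful to anyone following along, these are the commands I’ve been using to watch what it’s actually doing (standard install paths, so treat names as assumptions):

    sudo cscli metrics                         # confirms logs are read and scenarios fire
    sudo cscli decisions list                  # active bans and their remaining duration
    sudo iptables -L -n | grep -i crowdsec     # the bouncer's blocklist hook, if using iptables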

  • Is there any added value in running the nginx bouncer on top of that, or any other bouncer?
  • cscli hub update/upgrade will fetch new definitions for the collections, if I understand correctly. Is there any need to run this regularly, scheduled with, say, a cron job, or does CrowdSec do that automatically in the background?

Well I was expecting some form of notification for replies, but still, seen it now.

My understanding of this is limited, having mostly gotten as far as you have and been satisfied.

For other bouncers, there are actually a few decision types you can apply. By default the only decision is BAN, which, as the name suggests, just outright blocks the IP at whatever level your bouncer runs at (L4 for the firewall, L7 for nginx). The nginx bouncer can do more, though, with CAPTCHA or CHALLENGE decisions that allow false positives to still access your site. I tried writing something similar for Traefik but haven’t deployed anything yet to comment further.
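
For reference, you can see the difference by adding a decision manually with a non-default type (IP and duration below are just examples, and I’m going from memory on the flags):

    # BAN is the default type; the firewall bouncer acts on these
    sudo cscli decisions add --ip 192.0.2.50 --duration 4h
    # the nginx bouncer can also act on captcha decisions
    sudo cscli decisions add --ip 192.0.2.50 --duration 4h --type captcha
    # clean up the test decision afterwards
    sudo cscli decisions delete --ip 192.0.2.50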

With updates, I don’t have them automated, but I do occasionally go in and run a manual update when I remember (usually when I upgrade the OPNsense firewall that runs it). I don’t think it’s a bad idea at all to automate them; the attack vectors don’t change that often, though. One thing to note: newer scenarios only run on the latest agent, something I discovered recently when trying to upgrade. I believe it will refuse to update them if doing so would break them in this way, but test it yourself before enabling cron.
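
If you do decide to automate it, something along these lines should be enough (the schedule is just an example; assumes cscli is on root's PATH):

    # weekly crontab entry (root), e.g. via `sudo crontab -e`
    0 4 * * 1 cscli hub update && cscli hub upgrade && systemctl restart crowdsec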

Fail2ban and Nginx Proxy Manager. Here’s a tutorial on getting started with Fail2ban:

https://github.com/yes-youcan/bitwarden-fail2ban-libressl
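
The gist of it is just a jail or two pointed at your proxy’s logs, something like this; the filter names are fail2ban’s stock nginx filters, but log paths will differ for NPM’s container layout, so treat it as a sketch:

    # append to /etc/fail2ban/jail.local, then restart fail2ban
    sudo tee -a /etc/fail2ban/jail.local >/dev/null <<'EOF'
    [nginx-http-auth]
    enabled = true
    logpath = /var/log/nginx/error.log

    [nginx-botsearch]
    enabled  = true
    logpath  = /var/log/nginx/access.log
    maxretry = 5
    bantime  = 1h
    EOF
    sudo systemctl restart fail2ban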

@Pete90@feddit.de

I really wanted to use this and set it up a while ago. It worked great, but in the end I had to deactivate it because my Nextcloud instance caused too many false positives (404s and such) and I would ban my own IP way too often.
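
For what it’s worth, whitelisting your own address(es) stops the self-ban part at least; the range below is a placeholder:

    # append to /etc/fail2ban/jail.local (merge into an existing [DEFAULT] section if you have one)
    sudo tee -a /etc/fail2ban/jail.local >/dev/null <<'EOF'
    [DEFAULT]
    ignoreip = 127.0.0.1/8 ::1 203.0.113.0/24
    EOF
    sudo systemctl restart fail2ban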

@AES@lemmy.ronsmans.eu

Crowdsec is more advanced

Does it integrate with NPM?

@AES@lemmy.ronsmans.eu

Yes it does! You can find everything on their site. It is very well documented.

Ok, so I spent way too much time tonight trying to figure this out, made a mess of my npm, and fixed it.

It is very well documented.

Official documentation on using crowdsec with NPM is out of date and relies on a fork that’s no longer maintained. I’m trying to find any documentation on how to integrate the bouncer into the official NPM project and am really coming up empty.

@AES@lemmy.ronsmans.eu

You only need the unmaintained version (an official PR is in the works: https://github.com/NginxProxyManager/nginx-proxy-manager/pull/2677 ) if you want to bounce at the NPM level (i.e. with a captcha). At the moment I am using CrowdSec to parse the NPM logs (and some other logs) and bounce at the iptables level on my VPS (block only) and at the OPNsense firewall level (also block only) at home.
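
In case it saves someone else a late night: the parsing side is just an acquisition entry pointing CrowdSec at NPM’s log files; the path below is an assumption based on where NPM keeps them in my setup, so adjust it:

    # tell the agent to read NPM's proxy logs as nginx-format logs
    sudo tee -a /etc/crowdsec/acquis.yaml >/dev/null <<'EOF'
    ---
    filenames:
      - /path/to/npm/data/logs/proxy-host-*_access.log
    labels:
      type: nginx
    EOF
    sudo systemctl restart crowdsec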

I’m not sure if it’s the fact that I was up at 1am trying to figure this all out or what but it wasn’t clicking last night. So the NPM (nginx) integration would strictly be the captcha and I would need to bounce at the firewall to block? That makes way more sense to me now. Thanks.

apigban

Depends on what kind of service the malicious requests are hitting.

Fail2ban can be used for a wide range of services.

I don’t have a public-facing service (except for a honeypot), but I’ve used fail2ban before on public SSH/webauth/OpenVPN endpoints.

For a blog, you might be well served by a WAF; I’ve used ModSecurity before, though I’m not sure if there’s anything newer.
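
If anyone goes the ModSecurity-with-nginx route, the wiring is roughly this; it assumes the ModSecurity v3 nginx connector module is already built and installed and that your rules (e.g. OWASP CRS) live under /etc/nginx/modsec/, both of which are assumptions:

    # load_module modules/ngx_http_modsecurity_module.so;   <- belongs in the main nginx.conf
    # then enable the engine for the http block via a drop-in:
    sudo tee /etc/nginx/conf.d/modsecurity.conf >/dev/null <<'EOF'
    modsecurity on;
    modsecurity_rules_file /etc/nginx/modsec/main.conf;
    EOF
    sudo nginx -t && sudo systemctl reload nginx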


Illecors

I’ve implemented a bot blocker and some iptables rate limiting.
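
The rate-limiting part can be as simple as a pair of iptables rules, e.g. for new HTTPS connections (numbers are arbitrary; this limits globally, use hashlimit or the recent module if you want per-source limits):

    # accept new connections to 443 at up to 30/minute (burst 20), drop the excess
    sudo iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW \
         -m limit --limit 30/minute --limit-burst 20 -j ACCEPT
    sudo iptables -A INPUT -p tcp --dport 443 -m conntrack --ctstate NEW -j DROP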

These requests are probably made by search/indexing bots. My personal server gets quite a lot of these, but they rarely use any bandwidth.
The easiest choice (probably disliked by more savvy users) is to just put your server behind Cloudflare. It won’t block the requests, but it will stop anything malicious.
With how advanced modern scraping techniques are, there is only so much you can do. I am not an expert, so take what I say with a grain of salt.

WasPentalive

Legitimate web spiders (for example the crawler Google uses to map the web for search) should pay attention to robots.txt. I think, though, that that only applies to web-based services.
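
For completeness, a minimal robots.txt served from the web root looks like this (web root path and disallowed path are just examples), though only well-behaved crawlers will honour it:

    sudo tee /var/www/html/robots.txt >/dev/null <<'EOF'
    User-agent: *
    Disallow: /admin/
    EOF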

Rusty

Fail2Ban is great and all, but Cloudflare provides such an amazing layer of protection with so little effort that it’s probably the best choice for most people.

You press a few buttons and have a CDN, bot attack protection, DDoS protection, captchas for weird connections, email forwarding, static website hosting… It’s suspicious just how much stuff you get for free, tbh.

@AES@lemmy.ronsmans.eu

And you only need to give them your unencrypted data…

@GlitzyArmrest@lemmy.world

To be fair, you can configure Cloudflare to use your own certs.

Sifr Moja

@GlitzyArmrest Including for origins? If not, the point of CloudFlare is gone.

You can use a custom origin certificate, but that’s irrelevant when Cloudflare still decrypts and re-encrypts everything to analyse the request in more detail. It does leave me torn when using it. I don’t use it on anything where sensitive plain text is flying around, especially authentication data (which is annoying, since that’s the most valuable place to have the protection), but I do have it on my Matrix homeserver: anything remotely important there is E2EE anyway, so there’s little they can gain, and with the amount of requests it gets, some level of mitigation is desirable.

@alibloke@feddit.uk

Cloudflare tunnel


@lemmy@lemmy.nsw2.xyz
  • Turn off password login for SSH and only allow SSH keys
  • Cloudflare tunnel
  • Configure nginx to resolve the real client IPs, since the logs will now show a bunch of Cloudflare IPs. See discussion (and the rough config sketch after this list).
  • Use Fail2ban or CrowdSec for additional security, to catch anything that gets past Cloudflare, and also to monitor SSH logs.
  • The only incoming port that needs to be open now is SSH. If your provider has a web UI console for your VPS you can also close the SSH port, but that’s a bit overkill.
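
For the real-IP point above, the nginx side is the realip module plus Cloudflare’s published ranges, roughly like this (only two example ranges shown; pull the full, current list from Cloudflare):

    sudo tee /etc/nginx/conf.d/cloudflare-real-ip.conf >/dev/null <<'EOF'
    # one set_real_ip_from line per published Cloudflare range
    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 103.21.244.0/22;
    real_ip_header CF-Connecting-IP;
    EOF
    sudo nginx -t && sudo systemctl reload nginx
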
takeda

I use fail2ban and add detection as needed (for example, I noticed that after I implemented it for SSH, they started using SMTP for brute force, so I had to add that one as well).

I also have another rule that watches the fail2ban log and adds repeat offenders to a long-term blacklist.
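
fail2ban ships a stock jail (recidive) that does something similar out of the box; it watches fail2ban’s own log, so enabling it looks roughly like this (ban/find times are examples):

    sudo tee -a /etc/fail2ban/jail.local >/dev/null <<'EOF'
    [recidive]
    enabled  = true
    findtime = 1d    # look at bans over the last day...
    maxretry = 3     # ...and after 3 of them...
    bantime  = 1w    # ...ban for a week
    EOF
    sudo systemctl restart fail2ban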

Sifr Moja

@takeda @jcal Have you tried CrowdSec?

takeda

I did not, but it looks interesting, thanks

Don’t have vulnerable shit and ignore them.

Those are just weather.

Nothing too fancy, other than following recommended security practices and being aware of, and regularly monitoring, the potential security holes of the servers/services I have open.

Somewhat related, and commonly frowned upon by admins: I have unattended upgrades on my servers and most of my services are auto-updated. If an update breaks a service, I guess it’s an opportunity to earn some more stripes.

@scrchngwsl@feddit.uk

Why are unattended upgrades frowned upon? They seem like a good idea all round to me.

Mostly because stability is usually prioritized above all else on servers. There’s also a multitude of other legit reasons.

exu

All the legit reasons mentioned in the blog post seem to apply to badly behaved client software. Using a good and stable server OS avoids most of the negatives.

Unattended Upgrades on Debian for example will by default only apply security updates. I see no reason why this would harm stability more than running a potentially unpatched system.
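
For reference, this is roughly all it takes on Debian, and the default origins pattern is what keeps it to security updates (paraphrased in the comment, check your release’s file):

    sudo apt install unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades   # enables the periodic run
    # the default /etc/apt/apt.conf.d/50unattended-upgrades only allows origins like:
    #   "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";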

@med@sh.itjust.works

Hell, Debian is usually so stable that I would just run dist-upgrade on my laptop every morning.

The difference there is that I’d be working with my laptop regularly and would notice problems more quickly.

Even if minimal, the risk of security patches introducing new changes to your software is still there, as we all have different ideas about what correct software updates should look like.

exu

Fair, I’d just rather have a broken system than a compromised one.

z3bra

I mean, it’s not a big deal to have crawlers and bots poking at your webserver if all you do is serve static pages (which is common for a blog).

Now, if you run code on the server side (e.g. using PHP or Python), you’ll want to retrieve several known lists of bad actors to block them by default, and set up fail2ban to block those that get through. The most important thing, however, is to keep your server up to date at all times.
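
For the blocklist part, ipset plus a single iptables rule is a common way to do it; the list URL below is a placeholder, not a recommendation:

    # create the set once and hook it into the INPUT chain
    sudo ipset create blocklist hash:net
    sudo iptables -I INPUT -m set --match-set blocklist src -j DROP

    # feed it from whatever reputation list(s) you trust
    curl -s https://example.com/bad-actors.txt | while read -r net; do
        sudo ipset add blocklist "$net" -exist
    done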

BlackEco

I’m using BunkerWeb which is an Nginx reverse-proxy with hardening, ModSecurity WAF, rate-limiting and auto-banning out of the box.
