
Yes, I do me and you do you. But advertising those things as security measures while they add no real security is just snake oil, and it can result in neglecting real security measures.

As I said, the whole internet can be port scanned within seconds, so your services will be discovered. What risk do you actually assume when your IP address is known, together with the fact that you host a service on it? The service has the same vulnerabilities whether it is hosted via Cloudflare Tunnels or directly via port forwarding on the router. So you assume that your router is not secure? Then unplug it, because the service is already connected to the router either way.
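
To illustrate, here is a minimal Python sketch of the kind of connect scan anyone on the internet can run against your IP (the target address is a documentation placeholder and the port list is arbitrary); internet-wide scanners do the same thing across the whole IPv4 space, just massively parallel:

```python
import socket

# Placeholder target (documentation address) and a handful of common ports.
target = "203.0.113.10"
for port in (22, 80, 443, 8080, 32400):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    if s.connect_ex((target, port)) == 0:
        print(f"port {port} open -> service discovered")
    s.close()
```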

Geoblocking is useless against any real threat actor. You can get access to VPN services or a VPS for very, very little money.


  1. Guess what, all IP addresses are known. There is no secret behind them. And you can scan all IPv4 addresses for open ports in a few seconds at most.
  2. So some countries are more dangerous than others? Secure your network and services and keep them up to date, then you do not have to rely on nonsense geoblocking.
  3. Known bots are also no issue most of the time. They are just bots. They usually target decade-old vulnerabilities and try out default passwords. If you follow the advice in point 2, this is a non-issue.

You want your backup functional even if the system is compromised, so yes, another system is required for that, or throw it into the cloud. It is important that you do not allow deleting or editing of the backup even if the credentials used for backing up are compromised. Basically an append-only storage.

Most cloud storage like Amazon S3 (or most other S3-compatible providers like Backblaze) offers such a setting.
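
For example, a rough boto3 sketch of S3 Object Lock, which is what makes a bucket effectively append-only even if the backup credentials get stolen (the bucket name and retention period are placeholders, and region configuration is omitted):

```python
import boto3

s3 = boto3.client("s3")

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket="my-backup-bucket", ObjectLockEnabledForBucket=True)

# Default retention in COMPLIANCE mode: nobody, not even the uploading
# credentials or the account root, can delete or overwrite objects before
# the retention period expires.
s3.put_object_lock_configuration(
    Bucket="my-backup-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 90}},
    },
)
```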


I doubt that this is the case, whether it is encrypted or not. The complexity and risks involved in decrypting it on the fly are really unrealistic, and I have never heard of it happening (I have not heard of everything, but still).

Also, the ransomware would need to differentiate between the user and the backup program. When you do incremental backups (like restic) with some monitoring, you would also notice the huge amount of new data that gets pushed to your repo, as sketched below.
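
Something like this is enough to catch that (the repo path, password handling and threshold are made up for the example): `restic stats --mode raw-data --json` reports the deduplicated size of the repository, and ransomware re-encrypting everything makes that number jump because nothing deduplicates anymore.

```python
import json
import subprocess

REPO = "/srv/backups/restic-repo"  # placeholder; restic reads RESTIC_PASSWORD from the environment
GROWTH_LIMIT_GB = 50               # placeholder alerting threshold

# Ask restic for the raw (deduplicated) size of the repository.
out = subprocess.run(
    ["restic", "-r", REPO, "stats", "--mode", "raw-data", "--json"],
    capture_output=True, text=True, check=True,
).stdout
current_gb = json.loads(out)["total_size"] / 1e9

previous_gb = 900.0  # read this from wherever you stored the last run's value
if current_gb - previous_gb > GROWTH_LIMIT_GB:
    print(f"WARNING: repo grew by {current_gb - previous_gb:.0f} GB since the last backup")
```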

Edit: The important thing about your backup is to protect it against overwrites and deletes and to use separate admin credentials that are not managed by the AD or LDAP of the server that gets backed up.


During that time, your data is encrypted but you don’t know because when you open a file, your computer decrypts it and shows you what you expect to see.

First time I hear of that. Are you sure? That would be really risky, since you would basically need to hijack the complete filesystem communication to do that. Also, for that to work, the private and public key of the encryption would need to be on the system at runtime. Really risky and unlikely that this is the case, imho.


Would it not be much easier (and more portable) to create a Linux VM in, for example, VirtualBox? From there you could just follow any Linux guide.


You should have read the post more carefully. The CVE affects every OS; only the first example shown is Windows-only.

Also, the relevant commits are outlined in the first paragraph. This article is not for the stupid user; it is a technical analysis of a few ways to exploit it, and for those cases the commits are more relevant than the version number. Also, saying which versions are affected is not that easy, since commits can be backported into older versions, for example by the packager.


This is not really correct. Those companies take complete control of the secret keys. And no, using Tailscale does not have the same effect as using plain WireGuard, for various reasons: CGNAT traversal, no port forwarding, funnels, etc.


Netmaker, Tailscale or Zerotier

No way in hell am I giving a company complete remote access to my servers and clients.


This is not the invention of the IP KVM; those have existed for ages. This product just offers the functionality of an IP KVM for very little money.


It is based on completely different hardware. A Raspberry Pi CPU is much more expensive than the CPU that is used here.


Power issues can glitch the hardware into states it should not be in. Changing something in the BIOS or updating it. Hardware defects. An OS upgrade fails (a kernel bug causes the network driver to fail). Etc., etc.

Those devices are not for the weekly “oh, my setup failed”; they are for the once-in-ten-years “I am on vacation, the server is not reachable, and for some reason my system crashed and has not rebooted on its own”.

And for below 100€ it’s a no-brainer.


I just set it up. Yes, I dislike the fact that you need another party for syncing it, but I doubt it would be possible otherwise; it is just too much work to support everyone.

I read up on GoCardless and they do not sound that evil.

But I am not sure if I will keep the connection up. We will see, I guess.


I really dislike that Discord is used as a helpdesk/forum. It is not really searchable via the web.

Also no link to the repo.


I am talking about the fork. It is operated by someone else.


The Syncthing fork on F-Droid is still an option. An issue has been opened on the GitHub repo. Let's see what happens with the fork.


The thing is, those poor design decisions have nothing to do with those features; I claim that every feature could be implemented without “holding the compose files hostage”.

Btw, Dockge does support connecting to a Dockge instance on another Docker host.


No, that would make no sense and is obviously not what i meant.

But you could separate the arr stack from things like Pi-hole with a VM. For example, you could pin one thread to that VM so you will not bottleneck your DNS when you are running heavy loads on the rest of the system. This is just one example of what can be done.

Just because you do not see a benefit, does not mean there is none.

Also, VMs are not “heavy”: thanks to the virtualization technology built into modern hardware, VMs are quite light on the system. Yes, they still have overhead, but it is not like you are giving up big percentages of your potential performance, depending on the setup.
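
On Linux you can quickly check whether that hardware support is actually there (vmx = Intel VT-x, svm = AMD-V); rough sketch:

```python
# Reads the CPU flags from /proc/cpuinfo and looks for the hardware
# virtualization extensions that keep VM overhead low.
flags = set()
with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags:
    print("Intel VT-x available")
elif "svm" in flags:
    print("AMD-V available")
else:
    print("no hardware virtualization flags found")
```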


You talk like there is nothing in between containers and VMs. You can use both.


What exactly are you referring to? ZIL? ARC? L2ARC? And which docs? I have not found that call-out in the official docs.


I have been using a consumer SSD for caching on ZFS for over 2 years now and do not have any issues with it. I have a 54 TB pool with tons of reads and writes, and no issues.

SMART reports 14% used.


You recall wrong. ECC is recommended for any server system but not necessary.



Surprised Transmission has issues seeding that many, thought Transmission 4.x made improvements in that area. How much RAM does your system have? Maybe at some point you just need more system resources to handle the load.

PS - For what it’s worth you can still stick with Transmission and/or other torrent clients & just spread the torrents among multiple torrent client instances. e.g. run multiple Transmission instances with each seeding 1000 or whatever amount of torrents works for you.

Those are duct tape solutions. Why use them when there is a good solution?



There are tunnel protocols like 6to4, 6RD and so on that allow you to get an IPv6 connection tunneled to you. Various routers support them.
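
6to4, for example, is purely mechanical: your public IPv4 address is embedded into the 2002::/16 prefix, so you can compute the /48 you would get. Small sketch (the IPv4 address is a documentation placeholder):

```python
import ipaddress

ipv4 = ipaddress.IPv4Address("198.51.100.1")  # placeholder public IPv4
o = ipv4.packed
# 6to4 prefix is 2002:WWXX:YYZZ::/48, where WW XX YY ZZ are the IPv4 octets in hex.
prefix = f"2002:{o[0]:02x}{o[1]:02x}:{o[2]:02x}{o[3]:02x}::/48"
print(prefix)  # 2002:c633:6401::/48 -> a whole /48 of IPv6 space derived from one IPv4
```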

Another option is to ask your ISP whether they will supply an IPv6 subnet to you.


You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions. And that was during tinkering while migrating it. It has been solid for me ever since.

Again, there is no official Nextcloud auto-updater; OP chose to use an auto-updater, which bricked OP's setup (a plugin was disabled).


Docker is kind of a giant mess in my experience. The trick to it is creating backup plans to recover your data when it fails.

That's the trick for any production service, especially when you do an update.


They’re releasing a new version every two month or so and dropping them rapidly from support, pinning it with a tag means that in 12 months the install would be exploitable.

The lifecycle can be found with a single online search: https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule

Releases are maintained for roughly a year.

Set yourself a notification if you would otherwise forget it.


What are you talking about? If you do not manually pull the newest image (or have something like Watchtower do it), it will not update by itself.

I have never seen an auto-update feature in Nextcloud itself; can you please link to it?


The docker image automatically updated the install to nextcloud 30, but the forms app requires nextcloud 29 or lower.

Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when they break things. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.


That brings me to what’s available. I almost pulled the trigger on Synology DS423+. It looks reasonable powerful, I can put 4 SATA SSDs and 2 M.2… that’s what I thought. But it turned out it’s not possible to use M.2 as storage with anything but Synology’s own overpriced drives that aren’t even available in my country.

You can use a script to make them available. Still a pain.

Since you only need 2 TB, why do you even bother with the M.2 slots?

Why do you think that you need the M.2 slots in the first place? I guess you are hung up on “SATA bad because M.2 new” (btw, M.2 is only the connector, not the interface; there are SATA M.2 drives as well).

SATA can handle 6 Gbps. That is six times more than most home network connections can even handle. Since you have not mentioned once how many Ethernet ports the systems have or how fast they are, I figure you only have a 1 Gbps LAN.
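
Quick back-of-the-envelope numbers, comparing raw line rates and ignoring protocol overhead:

```python
sata_gbps = 6.0  # SATA III link rate
lan_gbps = 1.0   # typical home gigabit Ethernet

print(f"SATA III : ~{sata_gbps * 1000 / 8:.0f} MB/s")  # ~750 MB/s raw
print(f"1 GbE LAN: ~{lan_gbps * 1000 / 8:.0f} MB/s")   # ~125 MB/s
print(f"ratio    : {sata_gbps / lan_gbps:.0f}x")       # the network is the bottleneck
```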

Yes, NVMe SSDs are somewhat cheaper these days, but not by so much that I would bother with it. We are only talking about 2 × 2 TB.


No, you can do this process to automate it.


You can use acme-dns. Just add the single record for acme-dns, and then you can use the acme-dns API to fulfill the challenge.
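
Roughly, the flow looks like this (the base URL is a placeholder for your acme-dns instance, and the endpoint and header names are taken from the acme-dns README as I remember it, so double-check them against your version):

```python
import requests

BASE = "https://auth.example.com"  # placeholder: your acme-dns instance

# One-time registration; the response contains the "fulldomain" that your
# single _acme-challenge CNAME record has to point at.
account = requests.post(f"{BASE}/register", timeout=10).json()
print("CNAME _acme-challenge.yourdomain.tld ->", account["fulldomain"])

# On every issue/renewal, publish the DNS-01 validation token via the API.
resp = requests.post(
    f"{BASE}/update",
    headers={"X-Api-User": account["username"], "X-Api-Key": account["password"]},
    json={"subdomain": account["subdomain"], "txt": "43-char-token-from-your-acme-client"},
    timeout=10,
)
resp.raise_for_status()
```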


OP uses tailscale to connect to everything and not his local connection.


Jellyfin doesn’t have local allowance baked in? I’ve never used it.

Nope, and that is also not needed, since it's not a cloud-dependent service.


Then using something like fail2ban to block badly behaving connections is far more effective, and you even get a security benefit out of it.

Also, if a few scripts trying to connect via SSH can DDoS your router, then something is messed up. Either it is a shitty router from 20 years ago, or you have a bandwidth lower than 100 kbps.


Getting “hit” by automated scripts is nothing to worry about. All that changing the port does is keep your logs a little bit cleaner. Any attack you should actually worry about does not care whether your SSH is running on 22 or 7389.
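
A scanner does not guess by port number anyway; it reads the banner. Rough sketch (host and port are placeholders):

```python
import socket

# Connect to whatever port and read the greeting; an SSH daemon identifies
# itself immediately, no matter where it listens.
s = socket.create_connection(("203.0.113.10", 7389), timeout=3)
print(s.recv(256).decode(errors="replace"))  # e.g. "SSH-2.0-OpenSSH_9.6"
s.close()
```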


A port is not secure or insecure. What can lead to security risks is the service that answers on that port.

Use strong authentication and encryption on those services and keep them up to date.


No. The default port is fine. Changing the default port does nothing for security. It only stops some basic crawlers; if you are scared of crawlers, then you should not host anything on the internet.