You want your backup to remain functional even if the system is compromised, so yes, another system is required for that, or a backup through it to the cloud. It is important that the backup cannot be deleted or edited even if the credentials used for backing up are compromised. Basically, append-only storage.
Most cloud storage providers, like Amazon S3 (or other S3-compatible providers like Backblaze), offer such a setting.
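To illustrate, here is a minimal sketch of setting that up with boto3 against S3 (the bucket name and retention period are made-up assumptions; S3-compatible providers expose the same Object Lock API, though details may differ):

```python
import boto3

BUCKET = "my-backup-bucket"  # hypothetical bucket name

s3 = boto3.client("s3")

# Object Lock has to be enabled when the bucket is created.
s3.create_bucket(Bucket=BUCKET, ObjectLockEnabledForBucket=True)

# Default retention in COMPLIANCE mode: locked object versions cannot be
# deleted or altered for 30 days, not even by the backup credentials.
s3.put_object_lock_configuration(
    Bucket=BUCKET,
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
    },
)
```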
I doubt that this is the case, whether it is encrypted or not. The complexity and risk involved in decrypting it on the fly make it really unrealistic, and I have never heard of it being done (I have not heard of everything, but still).
Also, the ransomware would need to differentiate between the user and the backup program. And when you do incremental backups (like with restic) with some monitoring, you would notice the huge amount of new data being pushed to your repo.
Edit: The important things about your backup are to protect it against overwrites and deletes, and to use separate admin credentials that are not managed by the AD or LDAP of the server being backed up.
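As a rough sketch of the kind of monitoring I mean (a cron script I am assuming around `restic stats`, not something restic ships itself), you could alert when the repo suddenly grows much more than usual:

```python
import json
import subprocess

GROWTH_LIMIT_BYTES = 50 * 1024**3                 # assumed threshold: 50 GiB per run
STATE_FILE = "/var/lib/backup-monitor/last_size"  # hypothetical state file

# RESTIC_REPOSITORY and RESTIC_PASSWORD are expected in the environment.
out = subprocess.run(
    ["restic", "stats", "--mode", "raw-data", "--json"],
    capture_output=True, text=True, check=True,
)
current_size = json.loads(out.stdout)["total_size"]

try:
    with open(STATE_FILE) as f:
        last_size = int(f.read().strip())
except FileNotFoundError:
    last_size = current_size

growth = current_size - last_size
if growth > GROWTH_LIMIT_BYTES:
    # Hook this up to mail/ntfy/whatever you use for alerting.
    print(f"WARNING: repo grew by {growth / 1024**3:.1f} GiB since the last run")

with open(STATE_FILE, "w") as f:
    f.write(str(current_size))
```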
During that time, your data is encrypted but you don’t know because when you open a file, your computer decrypts it and shows you what you expect to see.
First time I've heard of that. Are you sure? It would be really risky, since you would basically need to hijack all filesystem communication to do that. Also, for that to work, the encryption keys would have to be present on the system at runtime. Really risky and unlikely that this is the case, IMHO.
You should have read the post more carefully. The CVE affects every OS; only the first example shown is Windows-only.
Also, the relevant commits are outlined in the first paragraph. This article is not for the stupid user; it's a technical analysis of a few ways to exploit it, and for those cases the commits are more relevant than the version. Saying which versions are affected is also not that easy, since commits can be backported into older versions, for example by the packager.
Power issues can glitch the hardware into states it should not be in. Changing something in the BIOS or updating it. Hardware defects. A failed OS upgrade (a kernel bug causes the network driver to fail). Etc., etc.
Those devices are not for the weekly "oh, my setup failed"; they're for the once-in-ten-years "I'm on vacation, the server is not reachable, and for some reason my system crashed and has not rebooted on its own."
And at under 100€, it's a no-brainer.
No, that would make no sense and is obviously not what I meant.
But you could separate the arr stack from things like Pi-hole with a VM. For example, you could pin a dedicated CPU thread to that VM so your DNS is not bottlenecked when the rest of the system is under heavy load. This is just one example of what can be done.
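A minimal sketch of that pinning, assuming a libvirt-based VM named "pihole" and the libvirt Python bindings (virsh vcpupin or your hypervisor's UI does the same thing):

```python
import libvirt

VM_NAME = "pihole"        # assumed VM name
HOST_CPU_FOR_DNS = 3      # assumed: reserve host core 3 for the DNS VM

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(VM_NAME)

# Build a CPU map with only the reserved host core enabled.
ncpus = conn.getInfo()[2]
cpumap = tuple(i == HOST_CPU_FOR_DNS for i in range(ncpus))

# Pin vCPU 0 of the running VM to that single core so heavy load on the
# other cores cannot starve DNS.
dom.pinVcpu(0, cpumap)

conn.close()
```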
Just because you do not see a benefit does not mean there is none.
Also, VMs are not "heavy". Thanks to virtualization technology built into modern hardware, VMs are quite light on the system. Yes, they still have overhead, but depending on the setup it's not like you are giving up a big percentage of your potential performance.
Who says that it is no longer maintained? https://github.com/containers/podman-compose looks fine to me.
I'm surprised Transmission has issues seeding that many; I thought Transmission 4.x made improvements in that area. How much RAM does your system have? Maybe at some point you just need more system resources to handle the load.
PS - For what it's worth, you can still stick with Transmission and/or other torrent clients and just spread the torrents among multiple client instances, e.g. run multiple Transmission instances, each seeding 1000 torrents or whatever number works for you.
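As an example, a rough sketch of spreading a folder of .torrent files round-robin over several instances, assuming the transmission-rpc Python library and instances listening on ports 9091-9093:

```python
from pathlib import Path

from transmission_rpc import Client

# Assumed: three Transmission instances on the same host, different RPC ports.
clients = [Client(host="localhost", port=p) for p in (9091, 9092, 9093)]

torrent_dir = Path("/data/torrents")  # hypothetical folder of .torrent files

# Round-robin the torrents so no single instance has to seed everything.
for i, torrent_file in enumerate(sorted(torrent_dir.glob("*.torrent"))):
    clients[i % len(clients)].add_torrent(torrent_file)
```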
Those are duct-tape solutions. Why use them when there is a proper solution?
You can disable the web updater in the config, which is the default when deploying via Docker. The only time I had a mismatch was when I migrated from a native Debian installation to a Docker one and fucked up some permissions, and that was while tinkering during the migration. It's been solid for me ever since.
Again, there is no official Nextcloud auto-updater. OP chose to use an auto-updater, which bricked OP's setup (a plugin was disabled).
They're releasing a new version every two months or so and dropping them from support rapidly; pinning with a tag means that in 12 months the install would be exploitable.
The lifecycle can be found with a single online search. Here: https://github.com/nextcloud/server/wiki/Maintenance-and-Release-Schedule
Releases are maintained for roughly a year.
Set yourself a reminder if you would otherwise forget.
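For example, a small reminder sketch run from cron, assuming your instance's public status.php endpoint and a cutoff you maintain yourself (the URL and the cutoff value are placeholders; check the schedule linked above for the real dates):

```python
import json
import urllib.request

NEXTCLOUD_URL = "https://cloud.example.com"  # hypothetical instance URL
NAG_BELOW_MAJOR = 30                         # assumed: nag once the pinned major is older than this

with urllib.request.urlopen(f"{NEXTCLOUD_URL}/status.php") as resp:
    status = json.load(resp)

major = int(status["version"].split(".")[0])

if major < NAG_BELOW_MAJOR:
    # Replace the print with mail/ntfy/Matrix/whatever you actually read.
    print(f"Nextcloud {status['versionstring']} is getting old; check the release schedule and bump the image tag.")
```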
The docker image automatically updated the install to nextcloud 30, but the forms app requires nextcloud 29 or lower.
Lol. Do not blame others for your incompetence. If you have automatic updates enabled, then it is your fault when things break. Just pin the major version with a tag like nextcloud:29 or something. Upgrading major versions automatically in production is a terrible decision.
That brings me to what's available. I almost pulled the trigger on a Synology DS423+. It looks reasonably powerful; I can put in 4 SATA SSDs and 2 M.2… that's what I thought. But it turned out it's not possible to use the M.2 slots as storage with anything but Synology's own overpriced drives, which aren't even available in my country.
You can use a script to make them available. Still a pain.
Since you only need 2 TB, why do you even bother with the M.2 slots?
Why do you think you need M.2 in the first place? I guess you are hung up on "SATA bad because M.2 new" (M.2 is, by the way, only the connector, not the interface; there are SATA M.2 drives as well).
SATA can handle 6 Gbps. That's six times more than most home network connections can even handle. Since you have not once mentioned how many Ethernet ports the systems have or how fast they are, I figure you only have a 1 Gbps LAN.
Yes, NVMe SSDs are somewhat cheaper these days, but not so much that I would bother with it. We are only talking about 2 × 2 TB.
Then using something like fail2ban to block misbehaving connections is far more effective, and you even get a real security benefit out of it.
Also, if a few scripts trying to connect via SSH can DDoS your router, then something is messed up: either it's a shitty router from 20 years ago, or your bandwidth is lower than 100 kbps.
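For the concept, here is a toy illustration of what fail2ban automates (counting failed SSH logins per source IP from the auth log; fail2ban then adds the actual firewall bans, and the log path and threshold here are assumptions):

```python
import re
from collections import Counter

MAX_FAILURES = 5                # assumed ban threshold
AUTH_LOG = "/var/log/auth.log"  # Debian/Ubuntu-style path, adjust for your distro

failed = Counter()
with open(AUTH_LOG) as log:
    for line in log:
        m = re.search(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)", line)
        if m:
            failed[m.group(1)] += 1

for ip, count in failed.items():
    if count >= MAX_FAILURES:
        print(f"{ip}: {count} failed SSH logins -> candidate for a firewall ban")
```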
Yes, I do me and you do you. But advertising those things as security measures while they add no real security is just snake oil, and that can result in neglecting real security measures.
As I said, the whole internet can be port-scanned within seconds, so your services will be discovered either way. What risk do you think you are exposed to just because your IP address is known, along with the fact that you host a service on it? The service has the same vulnerabilities whether it is hosted via Cloudflare Tunnels or directly via port forwarding on the router. So you assume that your router is not secure? Then unplug it, because your server is already connected to that router either way.
Geoblocking is useless against any threat actor. You can get access to VPN services or a VPS for very, very little money.