ArchiveBox is great.
I’m big into retro computing and general old electronics shit, and I archive everything I come across that’s useful.
I just assume anything and everything on some old dude’s blog about a 30-year-old whatever is subject to vanishing at any moment, and if it was useful once, it’ll probably be useful again later, so fuck it, make a copy of everything.
Not like storage is expensive, anyway.
It’s viable, but when you’re buying a DAS for the drives, figure out what the USB chipset is and make sure it’s not a flaky piece of crap.
Things have gotten better, but some random manufacturers are still using trash bridge chips and you’ll be in for a bad time. (By which I mean your drives will vanish in the middle of a write, and corrupt themselves.)
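If you want to check what bridge chip an enclosure actually uses before trusting it with your drives, the vendor:product ID from `lsusb` is the thing to search forum horror stories for. A sketch (the sample line here is made up for illustration; on the real box you’d just run plain `lsusb`):

```shell
# Pull the USB vendor:product ID out of an lsusb line so you can
# look up the bridge chip's reputation. Sample output inlined below;
# on real hardware, pipe actual `lsusb` output instead.
lsusb_line='Bus 002 Device 003: ID 174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge'
printf '%s\n' "$lsusb_line" | grep -oE 'ID [0-9a-f]{4}:[0-9a-f]{4}' | cut -d' ' -f2
```

That prints the `vendor:product` pair, which is stable across firmware revisions and is what most compatibility lists key on.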
10000% this.
Tell me what it does, and SHOW me what it does.
Because making me guess what the hell your thing looks like and behaves like is going to get me to bounce pretty much immediately, because you’ve now made it so I have to figure out how to deploy your shit just to find out. And, uh, generally, if you have no screenshots, you have no good documentation, and thus it’s going to suuuuck.
It’s because of updates and who owns the support.
The postgres project makes the postgres container, the pict-rs project makes the pict-rs container, and so on.
When you make a monolithic container you’re now responsible for keeping your shit and everyone else’s updated, patched, and secured.
I don’t blame any dev for not wanting to own all that mess, and thus, you end up with separate containers for each service.
I’d probably go with getting the ISP equipment into the dumbest mode possible, and putting your own router in its place, so option #2?
I know nothing about eero stuff, but can you maybe also put it into a mode that has it doing wifi-only, and no routing/bridging/whatever?
Then you can just leave the ISP router in place, and just use them for wifi (and probably turn off the wifi on the ISP router, while you’re in there).
Then the correct answer is ‘the one you won’t screw up’, honestly.
I’m a KISS proponent with security for most things, and uh, the more complicated it gets the more likely you are to either screw up unintentionally, or get annoyed at it, and do something dumb on purpose, even though you totally were going to fix it later.
Pick the one that makes sense, is easy for you to deploy and maintain, and won’t end up being so much of a hindrance you start making edge-case exceptions, because those are the things that will 100% bite you in the ass later.
I’ve seen so many people with otherwise sane, if maybe over-complicated, security designs, people who do actually know what they’re doing, turn off a firewall, enable port forwarding, set a weak password, or loosen permissions to something way too permissive and end up getting owned, purely because they wandered off from their own standards once the original implementation turned out to be a pain to deal with in day-to-day use.
So yeah, figure out your concerns, figure out what you’re willing to tolerate in terms of inconvenience and maintenance, and then make sure you don’t ever deviate from there without stopping and taking a good look at what you’re doing, what could happen if you do it, and coming up with a worst-case scenario first.
What’s your concern here?
Like who are you envisioning trying to hack you, and why?
Because frankly, properly configured and permissioned (that is, stop using root for everything you run) container isolation is probably good enough for anything that’s not a nation state (barring some sort of issue with your container platform and it having an escape), and if it is a nation state you’re fucked anyways.
But more to your direct question: I actually use DNS scopes and nginx ACLs to separate public from private. I have a *.public and a *.private CNAME which points to either my external or internal IP, and ACLs in the nginx site configuration to scope where access is allowed.
You can’t access a *.private host from outside the network, but you can access either from inside it, so (again, barring nginx having an oopsie somewhere) it’s reasonably secure and not accessible. It also leaves a very clear set of logs, which I pull in and parse for anything suspicious, with automated alerting if I find anything I wouldn’t otherwise expect. Paired with each service’s built-in authentication options, I’m happy enough with that level of security.
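A minimal sketch of what that kind of ACL looks like in an nginx site config (hostnames, subnet, and backend port here are all illustrative, not my actual setup):

```nginx
# service.private.example.com resolves to the internal IP via the
# *.private CNAME; the allow/deny list is the second layer so the
# hostname alone isn't doing all the work.
server {
    listen 443 ssl;
    server_name service.private.example.com;

    allow 192.168.1.0/24;  # LAN clients only
    deny  all;             # everyone else gets a 403

    location / {
        proxy_pass http://127.0.0.1:8096;  # backend port is illustrative
    }
}
```

The public-facing server block looks the same minus the allow/deny pair, so the split lives entirely in DNS plus two lines of config per site.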
Couple of weeks ago. NSI decided to push some of their domains into CLIENT HOLD status, which causes DNS resolution to stop working for the domain.
Took down uh, well, everything: https://status.digitalocean.com/incidents/jm44h02t22ck
[Edit] I’ll have to see if I can find the video.
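If you want to catch that kind of thing yourself before your monitoring does, the EPP status lines in whois output are where a registry hold shows up. A sketch with sample output inlined (on a real machine you’d just run `whois yourdomain.com` and grep that):

```shell
# 'clientHold' (or 'serverHold') in the EPP status lines means the
# registry has pulled the domain's delegation and DNS stops resolving.
# Sample whois output is hardcoded here purely for illustration.
whois_output='Domain Name: EXAMPLE.COM
Domain Status: clientHold https://icann.org/epp#clientHold'

if printf '%s\n' "$whois_output" | grep -qiE 'clienthold|serverhold'; then
  echo "ALERT: domain is on registry hold"
fi
```

Cron that against your own domains and you get a heads-up the moment a registrar pulls the trigger, instead of finding out from angry users.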
I can save you the time there, at least: https://youtu.be/hiwaxlttWow
Honestly, I’d contact their support and ask what their processes are and what timelines they give customers for a response/remediation before they take action.
Especially ask how they notify you, and how long they allow for a response before escalating, so you can make sure that’s a window you can actually receive, read, and act on a notice within.
It might not be a great policy, but if you at least know what might happen, it gives you the ability to make sure you can do whatever you need to do to keep it from becoming a larger issue.
Everyone loves to hate on Cloudflare, but uh, duh, of course a US company will comply with a request under US law that they have to comply with?
If you don’t want your shit DMCAed, don’t use anything based in the US to provide it.
Go host somewhere that doesn’t have similar laws and won’t comply with foreign requests.
There was a recent video from everyone’s favorite youtube Canadians that tested how many USB devices you can jam onto a single controller.
The takeaway they had was that modern AMD doesn’t seem to give a shit and will actually let you exceed the spec until it all crashes and dies, and Intel restricts it to where it’s guaranteed to work.
Different design philosophies, but as long as ‘might explode and die for no clear reason at some point once you have enough stuff connected’ is an acceptable outcome, AMD is the way to go.
This new uh, tactic? of going after a registrar instead of a hosting provider with reports is a little concerning.
There’s an awful lot of little registrars that don’t have any real abuse department and nobody is going to do shit other than exactly this: take it down and worry about it next week when they have time.
It really feels like your choice of registrar is becoming as important as, or more important than, your choice of hosting provider, and the little indie guys are probably the wrong choice if you’re running a legitimate business: you’re gonna need one that has enough funding and a proper team to vet reports before clobbering your site.
On the OTHER hand, Network Solutions just took down DigitalOcean for no reason, so maybe they all suck?
I mean not the first time they’ve sued over cheats, and they very much took a sweeping victory last time.
I’d expect the same DMCA circumvention provision along with the always fun “Well, literally everything you did is also a CFAA violation so maybe you want to settle now before we try to get you extradited to the US on federal felony charges” threat would result in pretty much the same outcome here.
Looks like others have provided MOST of the answers.
Radarr/sonarr do the heavy lifting making symlinks where symlinks are required, but there’s still the occasional bit of manual downloading.
I also have a script that’ll check for broken symlinks like once a week and notify me of them and I’ll go through and clean them up occasionally, but that’s not super common and only happens if I’m manually removing content I made manual symlinks for, since I’ll just let radarr/sonarr deal with it otherwise.
(The full stack is jellyseerr -> radarr/sonarr -> qbittorrent/sabnzbd -> links for jellyfin)
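The broken-symlink sweep is about three lines of shell. A sketch (the media path and the notify step are placeholders; wire in whatever notifier you actually use):

```shell
# Weekly broken-symlink check. find's -xtype l matches symlinks
# whose target no longer exists (GNU find).
MEDIA_DIR="${MEDIA_DIR:-/srv/media}"   # placeholder path
broken=$(find "$MEDIA_DIR" -xtype l 2>/dev/null)
if [ -n "$broken" ]; then
  printf 'broken symlinks:\n%s\n' "$broken"
  # swap the printf for mail/ntfy/webhook/whatever you prefer
fi
```

Drop it in a weekly cron or systemd timer and it stays out of your way until something actually dangles.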
I just select the files I want from the bigger torrents, and then proceed to not touch it ever again, unless I want to add more stuff to the downloaded files.
I also don’t move things around - I’m on Linux so all the torrents live in one place with symlinks pointing to where I need/want the data to be, as I figured out yeeeears ago that trying to manage a couple thousand active torrents while having the data spread everywhere is a quick trip to migraine town.
Quicksync
Yeah, it doesn’t sound like you’re transcoding in a way that’ll show any particular benefit from Quicksync over AMF or anything else. My ‘it’s better’ use case would be something like streaming to a cell phone at 3-5mbps, and not something local or just making a file to save on your device.
DDR4 and no ECC
That’s what my build is: 128gb of Corsair whatever on a 10850k. I’m sure there’s been some silent corruption somewhere in some video file or whatever, but, honestly, I don’t care about the data enough to even bother with RAID, let alone ECC.
I will say, though, if you’re going to delve into something like ZFS, you should probably consider ECC since there are a lot more ‘well shits’ that can happen than what I’m doing (mergerfs + snapraid).
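For reference, the snapraid side of that is simple enough that a config sketch fits in a few lines (disk names and paths here are made up; the snapraid manual covers the real knobs):

```
# snapraid.conf sketch - parity on its own disk, content files
# duplicated across data disks so metadata survives a disk loss
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.unrecoverable
exclude /tmp/
```

Then a scheduled `snapraid sync` (and the occasional `snapraid scrub`) is the whole maintenance story, which is a big part of why I went this route over ZFS for media.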
power consumption
A $30 or whatever they are kill-a-watt, plus something like s-tui running on the NAS itself to watch what the CPU is doing in terms of power states and usage. I’ve got an 8-drive i9-10850k under 60w at “idle”, which is not super low power, but it’s low enough that the cost of hardware to improve on it even a little bit (and it’d be a very little bit) has an ROI period of longer than I’d expect the hardware to last.
If you’re going to be doing transcoding for remote users at lower bitrates, quicksync is still better than AMF, so I’d vote Team Intel.
If you’re not, then buy whatever meets your power envelope desires and price point.
For Intel, anything 8th gen or newer should be able to natively do anything you need in Quicksync, so you don’t need to head to Amazon and buy something new, unless you really want to.
Also, I’d consider hardware that has enough SATA ports for the number of drives you want so that you can avoid dealing with a HBA card: they inflate the power envelope of the system (if power usage is something you’re concerned with), and even in IT mode, I’ve found them to be annoyingly goofy at times and am MUCH happier just using integrated SATA stuff.
New (7000 and 9000) ryzen CPUs have an iGPU that can transcode via AMF, so the ‘equivalent’ would just be buy a modern AMD CPU.
AMF isn’t quite as good as Quicksync, but it’s probably fine for most use cases for most people. That said, I can notice the image quality losses when doing something like transcoding to 1080p at low(ish) bitrate for remote streaming, so I have a very big bias in favor of nvenc or quicksync.
Also, I’m in the more-ram-is-better camp, so buy as much as you want and/or the platform supports.
big fan of mini PC’s
Same, but just be careful if you venture outside of the “reputable” vendors.
I bought one recently from Aliexpress, and while it’s perfectly functional, it’s using an ethernet chipset that doesn’t have in-kernel drivers so I have to keep compiling new drivers for it every time the kernel upgrades.
Not the end of the world, but an annoyance that I could do without, and not something a slightly more expensive version of what I got would have.
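If the driver source cooperates, DKMS can take the “recompile on every kernel upgrade” chore off your hands. A sketch of the `dkms.conf` you’d drop next to the source (module name, version, and paths are placeholders for whatever out-of-tree ethernet driver you’re stuck with):

```
# /usr/src/r8125-9.x/dkms.conf - names/version are placeholders
PACKAGE_NAME="r8125"
PACKAGE_VERSION="9.x"
MAKE[0]="make -C src/ KERNELDIR=/lib/modules/${kernelver}/build"
CLEAN="make -C src/ clean"
BUILT_MODULE_NAME[0]="r8125"
BUILT_MODULE_LOCATION[0]="src/"
DEST_MODULE_LOCATION[0]="/kernel/drivers/net/ethernet"
AUTOINSTALL="yes"
```

With `AUTOINSTALL="yes"`, DKMS rebuilds the module automatically whenever a new kernel lands, so you stop doing it by hand.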
Privacy regulations are all fine and dandy, but even with the strictest ones in place,
They’re also subject to interpretation, regulatory capture, and just plain being ignored when it’s sufficiently convenient for the regulators to do so.
“There ought to be a law!” is nice, but it’s not a solution when there’s a good couple of centuries of modern regulatory frameworks having existed, and a couple centuries of endless examples showing absolutely none of it matters when sufficient money and power are in play.
Like, for example, the GDPR: it made a lot of shit illegal under penalty of company-breaking penalties.
So uh, nobody in the EU has had their personal data misused since it was passed? And all the big data brokers that are violating it have been fined out of business?
And this is, of course, ignoring the itty bitty little fact that you have to be aware of the misuse of the data: if some dude does some shady shit quietly, then well, nobody knows it happened to even bring action?
How exactly are “communities offering services” a different thing than “hosted software”?
I think what they’re saying is that the ideal wouldn’t be to force everyone to host their own, but rather for the people who want to run stuff to offer them to their friends and family.
Kinda like how your mechanic neighbor sometimes helps you do shit on your car: one person shares a skill they have, and the other person also benefits. And then later your neighbor will ask you to babysit their kids, and shit.
Basically: a very very goofy way of saying “Hey! Do nice things for your friends and family, because that’s kinda how life used to work.”
Yeah, PSSR (which I cannot help but pronounce with an added i) doesn’t appear to be the highest quality upscaling out there, and that, combined with console gamers never having experienced FSR/FSR2/FSR3’s uh, specialness, is leading to people being confused about why their faster console looks worse.
Hopefully Sony does something about the less-than-stellar quality in a PSSR2 or something relatively quickly, or they’re going to burn a lot of goodwill around the whole concept, much like how FSR is pretty much considered trash by PC gamers.
The master-omnibus image bundles all that into a single container and is MUCH simpler to deploy.
I literally just used the compose file they provide at https://github.com/AnalogJ/scrutiny/blob/master/docker/example.omnibus.docker-compose.yml, added in the device names, and was done.
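For a rough idea of the shape of it, here’s a stripped-down sketch of that kind of omnibus compose file (the image tag, port, and device names are assumptions; the linked example file is the canonical version):

```yaml
# sketch only - check the upstream example.omnibus.docker-compose.yml
services:
  scrutiny:
    image: ghcr.io/analogj/scrutiny:master-omnibus  # assumed tag; verify upstream
    ports:
      - "8080:8080"        # web UI
    volumes:
      - /run/udev:/run/udev:ro
    cap_add:
      - SYS_RAWIO          # needed for raw SMART access
    devices:
      - /dev/sda           # one entry per drive you want monitored
      - /dev/sdb
```

The only part you really have to touch is the `devices:` list, which is what makes it so painless.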
Are uptimekuma and whatever you’re trying to monitor on the same physical hardware, or is it all different kit?
My first feeling is that you’ve got some DNS/routing configuration that’s causing issues if you’re leaving your local network and then going through two layers before coming back in, especially if you have split horizon DNS.
https://www.microsoft.com/en-us/evalcenter/evaluate-windows-11-iot-enterprise-ltsc
Keep in mind, though, that you’ll still have to do some activation and KMS hackery to make them usable, but you can at least use an installer that’s going to be clean.
From Microsoft. They actually provide ISO downloads for the 11 LTSC versions, so there’s not really any reason to go grab some random one off totally-legit-software-and-totally-not-malware.com or whatever.
What platform would you perhaps be interested in?