Universality, basically: almost everyone, everywhere has an email account, or can find one for free. And every OS and every device has a giant pile of mail clients for you to choose from.
And I mean, email is a simple, well-understood, reliable tech stack: I host an internal mail server for notifications and updates and shit, and it's fast and works perfectly.
It's only when you suddenly need to email someone OTHER than your local stuff that it turns to complete shit.
There’s no such thing as too much seeding.
Well, maybe the 85TB of Ubuntu 24.04 I've done is too much, but I mean, whatever.
(I've got basically everything I've downloaded in the last 7 years seeding, some 6000 torrents. qBittorrent isn't the happiest about this, but it's still working, if using a shit-ton of RAM at this point.)
Debian stable is great: it’s, well, stable. It’s well supported, has an extremely long support window, and the distro has a pretty stellar track record of not doing anything stupid.
It’s very much in the install-once-and-forget-it category, just gotta do updates.
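For what it's worth, the care-and-feeding part can mostly be automated too; a minimal sketch using the stock Debian tooling (unattended-upgrades handles security patches on its own):

```bash
# routine manual updates
apt update && apt full-upgrade

# or let Debian install security updates automatically
apt install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
```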
I run everything in containers for management (but I’m also running something like 90 containers, so a little more complex than your setup) and am firmly of the opinion that, unless you have a compelling reason to NOT run something in a container, just use the containerized version.
I'm the same way. If it's a split license, it's a matter of when, not if, some MBA comes along and enshittifies it.
There's just way, way too much prior experience showing that's what eventually happens for me to trust any project doing that: the split means they're going to monetize it, and then they have all the incentive in the world to shit all over the "free" userbase to try to get them to convert.
See, IBM (with OS/2) and Microsoft (with Windows 2.x and 3.x) were cooperating initially.
Right-ish, but I’d say there was actually a simpler problem than the one you laid out.
The immediate and obvious thing that killed OS/2 wasn't the compatibility layer; it was IBM not having drivers for any hardware that wasn't sold by IBM, while Windows had (relatively) broad support for everything anyone was actually likely to have.
Worse, IBM pushed to kill features its own hardware didn't support, so you ended up with a Windows that supported your hardware and the features you wanted, and ran on cheaper hardware, fighting it out with an OS/2 that did none of that.
IBM essentially decided to, well, be IBM and committed suicide in the market. It didn't really address a lot of the stupid crap until Warp 3, by which point it was years too late and didn't matter; Windows 95 came swooping in shortly thereafter, and that was the end of any real competition on the desktop OS scene for quite a while.
That could probably work.
Were it me, I'd build a script that re-hashes all the data and compares it against the previous hashes as the first step of adding more files; if everything comes out consistent, copy the new files over, hash everything again, save the hash results somewhere else, and repeat as needed.
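A rough sketch of what I mean, assuming sha256 manifests and made-up paths for the archive, the incoming files, and the off-drive copy of the manifest:

```bash
#!/usr/bin/env bash
# verify-then-add.sh -- hypothetical sketch of the workflow above
set -euo pipefail

ARCHIVE=/mnt/archive                                     # assumed archive mount
INCOMING=/mnt/incoming                                   # assumed staging area for new files
MANIFEST="$ARCHIVE/manifest.sha256"                      # manifest stored on the archive
OFFSITE_COPY=/srv/manifests/manifest-$(date +%F).sha256  # copy kept off the archive drive

# 1. re-hash everything and compare against the previous manifest; abort if anything changed
( cd "$ARCHIVE" && sha256sum --quiet -c "$MANIFEST" )

# 2. data is consistent, so copy the new files over
rsync -a "$INCOMING/" "$ARCHIVE/"

# 3. hash everything again and keep a copy of the new manifest somewhere else
( cd "$ARCHIVE" && find . -type f ! -name manifest.sha256 -exec sha256sum {} + > manifest.sha256 )
cp "$MANIFEST" "$OFFSITE_COPY"
```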
I can answer your question: Resolve is very clear that Intel iGPUs are not supported on Linux, at all, because the Intel Linux drivers don't support some features it requires.
Free version, paid version: doesn't matter, it's not supported hardware right now. Not even the new Arc cards are, because it's a software issue Intel has to fix.
Ran into this when I was looking at moving to Linux, and there's no solution for it.
The format is the tape in the drive, or the disk, or whatever the physical media happens to be.
Tape existed 50 years ago: nothing modern and in production can read those tapes.
The problem is, given a big enough time window, the literal drives to read it will simply no longer exist, and you won’t be able to access even non-rotted media because of that.
As for data integrity, there are a lot of options: you can make an md5 sum of each file, then do it again later and see if anything has changed.
The only caveat is that you have to make sure whatever checksums you generate get stored somewhere that's not JUST on the drive, because if the drive DOES corrupt itself and your only record of the "good" hashes is on the drive, well, you can't necessarily trust those hashes either.
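In practice that can be as simple as a manifest you keep a copy of somewhere else; a sketch with made-up paths:

```bash
# build a checksum manifest of the whole drive, stored OFF the drive itself
find /mnt/backup -type f -exec md5sum {} + > /srv/manifests/backup.md5

# later: re-hash everything and print only the files that no longer match
md5sum --quiet -c /srv/manifests/backup.md5
```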
So, 50 years isn't a reasonable goal unless you have a pretty big budget for this. Essentially no media is likely to survive that long and stay readable unless it's stored in a vault under perfect climate-controlled conditions. And even if the media is fine, finding an ancient drive to read a format that no longer exists is not a guaranteed proposition.
You frankly should be expecting to have to replace everything every couple of years, and maybe more often if your routine tests of the media show it’s started rotting.
Long-term archival storage really isn't a dump-it-to-some-media, lock-it-up, and never-look-at-it-again kind of thing.
Alternately, you could just make someone else pay for all of this: shove it all into something like Glacier and make the media Amazon's problem. (Assuming Amazon is around that long and nothing catches fire.)
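Something along these lines, if you go that route (bucket and file names are made up; the Glacier storage classes have retrieval delays and fees, so read the fine print):

```bash
# bundle up the archive and push it straight into a Glacier storage class
tar -czf archive-2025.tar.gz /mnt/archive
aws s3 cp archive-2025.tar.gz s3://my-archive-bucket/ --storage-class GLACIER
```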
I'm using Blu-ray discs for the 3rd copy, but I'm not backing up nearly as much data as you are.
The only problem with optical media is that, at this point, you should only expect it to be readable for a couple of years, best case, and probably not even that as the tier-1 manufacturers all stop making it and you're left with the dregs.
You almost certainly want some sort of tape option, assuming you want long retention periods and are only likely to add incremental changes to a large dataset.
Edit: I know there’s longer-life archival optical media, but for what that costs, uh, you want tape if at all possible.
When I was a wee kid, I thought that scene from The Matrix where Morpheus explains that humans destroyed the whole damn planet just to maybe slow down the machines was stupid.
I mean if you block the sun, we’re all going to fucking die, why would you do something that stupid?
Yeah, well, the last few years have shown that actually at least half the people on the planet would be pro-kill-everything, even if that includes themselves.
So really, this take isn’t remotely shocking anymore.
Oh, I wasn't saying not to, I was just saying make sure you're aware of what recovery entails, since a lot of raid controllers don't just write bytes to the disk and can, if you don't have spares, make recovery a pain in the ass.
I’m using MD raid for my boot SSDs and yeah, the install was a complete pain in the ass since the debian installer will let you, but it’s very much in the linux sense of ‘let you’: you can do it, but you’re figuring it out on your own.
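For reference, the sort of thing you end up figuring out on your own looks roughly like this (device names are examples, don't copy them blindly):

```bash
# mirror two partitions into /dev/md0
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

# keep an eye on it afterwards
cat /proc/mdstat          # array state and resync progress
mdadm --detail /dev/md0   # per-device health
```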
Buy multiple drives, set up some sort of raid, set up some sort of backup. Then set up a 2nd backup.
Done.
All drives from all manufacturers are going to fail at more or less the same rate (see: backblaze’s stats) and trying to buy a specific thing to avoid the death which is coming for all drives is, mostly, futile: at the absolute best you might see a single specific model to avoid, but that doesn’t mean entire product lines are bad.
I'm using some WD Red drives which are pushing 8 years old, and some Seagate Exos drives which are pushing 4, and so far no issues on any of the 7 drives.
Make sure, if you use hardware RAID, you know what happens if your controller dies.
Is the data in a format you can access easily? Do you need a specific raid controller to be able to read it in the future? How are you going to get a new controller if you need it?
That’s a big reason why people nudge you to software raid: if you’re using md and doing a mirror, then that’ll work on any damn drive controller on earth that linux can talk to, and you don’t need to worry about how you’re getting your data back if a controller dies on you.
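That recovery story is about as boring as it gets; a sketch, assuming the surviving disks got moved to some random other linux box:

```bash
# scan for and reassemble any md arrays found on the attached disks
mdadm --assemble --scan
cat /proc/mdstat

# mount it and get your data back (mount point is an example)
mount /dev/md0 /mnt/recovered
```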
As with all things email, they probably really wanted to make sure the mail actually got delivered, and thus used a commercial MTA to ensure that.
I’d wager, even at 20 or 30 or 40k a year, that’s way less than it’d cost to host infra and have at least two if not three engineers available 24/7 to maintain critical infra.
Looking at my mail, over the years I've gotten a couple hundred emails from them around certificates and expirations (and other things), and if you assume there are a couple million sites using these certs, I can easily see how the cost would scale very, very slowly, until it's suddenly a major drain.
Very very little. It’s a billion tiny little bits of text, and if you have image caching enabled, then all those thumbnails.
My personal instance doesn't cache images since I'm the only one using it (which means a cached image does nobody any good), and I use somewhere under 20GB a month, though I don't have exact numbers, just before-Lemmy and after-Lemmy aggregates.
To self-host, you do not need to know how to code.
I agree, but I'd also say that learning enough to write simple bash scripts is maybe required.
There's always going to be stuff you want to automate, and knowing enough bash to bang out a script that does what you want and can be dropped into cron or systemd timers is probably a useful time investment.
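The bar really is that low; a hypothetical example (script name, paths, and retention period are all made up):

```bash
#!/usr/bin/env bash
# prune-downloads.sh -- delete anything in the downloads folder older than 30 days
set -euo pipefail
find "$HOME/downloads" -type f -mtime +30 -print -delete
```

Then drop it into cron and forget about it:

```bash
# run it every night at 03:00
0 3 * * * /home/me/bin/prune-downloads.sh >> /var/tmp/prune-downloads.log 2>&1
```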
No.
I pirate everything, but am very very reluctant to do so with software or games.
I only pirate in cases where the company involved is just too gross to support (looking at you, Adobe), or if there’s absolutely no other option.
But I consider pirated software and games absolutely suspect 100% of the time, because I’m old enough to remember when every keygen was also a keylogger, and every crack was also a rootkit and touching any pirated software was going to give you computer herpes without fail.
So maybe it’s not that bad anymore, but I mean, do you fully trust in the morals of someone who would spend the time helping you steal someone else’s shit to not add just one more little thing to it for themselves?
I don’t disagree, but if it’s a case where the janky file problem ONLY appears in Jellyfin but not Plex, then, well, jank or not, that’s still Jellyfin doing something weird.
No reason why Jellyfin would decide the French audio track should be played every 3rd episode, or that it should just pick a random subtitle track when Plex isn’t doing it on exactly the same files.
Yeah, I don’t let anything that has to be cracked out of an isolated VM until it’s VERY clear that nothing untoward is going on.
QEMU has proven perfectly lovely as a base for testing questionable software, and I've got quite a lot of VMs sitting around for various things that, ah, have been acquired.
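For the curious, the kind of invocation I mean, with a hypothetical disk image: -snapshot throws away all disk writes on exit, and -nic none keeps the VM completely off the network.

```bash
qemu-system-x86_64 \
  -enable-kvm -m 4096 -smp 4 \
  -drive file=win10-test.qcow2,format=qcow2 \
  -snapshot \
  -nic none
```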
If you share access to your media with anyone you'd consider even remotely non-technical, do not drop Jellyfin in their laps.
The clients aren't nearly as good as Plex's, they're not as universally supported, and the whole thing just has that needs-another-year-or-two-of-polish vibe.
And before the pitchfork crowd shows up: I'm using Jellyfin exclusively, but I also don't have people using it who can't figure out why half the episodes in a TV season pick a different language, or why the subtitles are sometimes English and sometimes German, or why some videos occasionally don't have proper audio (L and R are swapped), and how to take care of all of those things.
I'd also agree with your thought that docker is the right approach: you don't need docker swarm, or kubernetes, or whatever other nonsense for your personal Plex install, unless you want to learn those technologies.
Install a base debian via netinstall, install docker, install plex, done.
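If it helps, a minimal sketch of that last step using the linuxserver.io Plex image; the paths, IDs, and timezone are placeholders for your own:

```bash
docker run -d \
  --name plex \
  --network host \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -v /srv/plex/config:/config \
  -v /srv/media:/media \
  --restart unless-stopped \
  lscr.io/linuxserver/plex:latest
```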
I’m not saying it is or is not a false positive, so please read the rest of my comment with that in mind.
But, that said, this is not new: AV has triggered on cracks and cheat software and similar stuff since forever.
The very simplified explanation is that the same things you do to install a rootkit are the things you do to cheat in a game or crack software DRM.
Bigger but, though: cracks and game cheats have also been a major source of malicious software for just as long, so like, it’s also entirely likely that it’s a good catch, too.
Timely post.
I was about to make one because iDrive has decided to double their prices, probably because they could.
$30/TB/year to $50/TB/year is a pretty big jump, but they were also way under the market price, so capitalism gonna capital and they're "optimizing" or some shit.
I'd love to be able to push my stuff to some other provider for closer to that $30, but uh, yeah, no freaking clue who, since $60/TB/year seems to be the more average price.
Alternately, a storage option that's not S3-based would also probably be acceptable. Backups are ~300GB, give or take, and the stuff that does need S3-style storage I can stuff in Cloudflare's free tier.
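For what it's worth, once an S3-compatible remote is set up in rclone (Cloudflare's R2 works as one), the actual backup push is a one-liner; the remote and bucket names here are placeholders:

```bash
rclone sync /srv/backups r2:backups --progress
```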
+1 for Frigate, because it’s fantastic.
But don't bother with an essentially deprecated Google product; skip the Coral.
The devs have added the same functionality on the GPU side, and if you've got a GPU (and, well, you do, because OpenVINO supports Intel iGPUs), just use that instead and save the money on a Coral for something more useful.
In my case, I've used both a Coral AND OpenVINO on a Coffee Lake iGPU, and uh, if anything, the iGPU had about 20% faster inference times.
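If it helps, the detector section of the Frigate config ends up being about this much (keys as of the versions I've run; double-check the model settings against the current docs for your release):

```yaml
detectors:
  ov:
    type: openvino
    device: GPU
```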
A thing you may not be aware of, which is nifty, is M.2-to-SATA adapters.
They work well enough for consumer use, and they’re a reasonably cheap way of adding another 4-6 SATA ports.
And, bonus, you don't need to add the heat/power and complexity of some decade-old HBA to the mix, which is a solution I've grown to really, really dislike.
The chances of both failing are very low.
If they're sequential off the manufacturing line and there's a fault, they're more likely to fail around the same time and in the same manner, especially since you put the surviving drive under a LOT of stress when you start a rebuild after replacing the dead one.
Like, that’s the most likely scenario to lose multiple drives and thus the whole array.
I've seen far too many arrays built out of a single box of drives lose one or two, then lose another few during the rebuild and nuke the whole array, so uh, the thought that they probably won't both fail is maybe true, but I wouldn't wager my data on that assumption.
(If you care about your data, backups, test the backups, and then even more backups.)
You can find reasonably stable and easy to manage software for everything you listed.
I know this is horribly unpopular around here, but if you want to go this route, you should look at Nextcloud. It's a monolithic mess of PHP, but it's also stable, tested, used and trusted in production, and doesn't have a history of lighting user data on fire.
It also doesn’t really change dramatically, because again, it’s used by actual businesses in actual production, so changes are slow (maybe too slow) and methodical.
The common complaints around performance and the mobile clients are all valid, but if neither of those really cause you issues then it’s a really easy way to handle cloud document storage, organization, photos, notes, calendars, contacts, etc. It’s essentially (with a little tweaking) the entire gSuite, but self-hosted.
That said, you still need to babysit it, and babysit your data. Backups are a must, and you're responsible for doing them and testing them. That last part is actually important: a backup that isn't regularly tested to make sure it can actually be restored isn't a backup, it's just thoughts and prayers sitting somewhere.
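If you just want to kick the tires first, the official container image makes that painless; a minimal sketch (single container, SQLite, no TLS, not what you'd run long-term):

```bash
docker run -d --name nextcloud \
  -p 8080:80 \
  -v nextcloud:/var/www/html \
  --restart unless-stopped \
  nextcloud
```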
For bandwidth-intensive stuff I like Wholesale Internet's offerings.
The hardware is very, uh, old, but the network quality is great since they run an IX. And it's unmetered, too, so it's probably sufficient.