You don’t have to install drivers or CUPS on client devices. Linux and Android support IPP out of the box. Just make sure CUPS on the server is sharing its printers and advertising them to the LAN.
You may need to install Avahi on the server if it isn’t already there (that’s what does the actual multicast announcements). The printer(s) should then automagically appear in the print dialogs of apps on Linux clients and in the print service on Android.
On Linux a printer may take a few seconds to appear after you turn it on, and it may not appear at all while it’s off. On Android it shows up anyway, as long as the CUPS server is up.
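If you want to check what’s actually being advertised, you can browse the mDNS announcements yourself. Here’s a minimal sketch using the python-zeroconf library (my assumption; a tool like avahi-browse would show the same thing):

```python
# Minimal sketch: browse the LAN for IPP printers advertised over mDNS.
# Assumes the python-zeroconf package is installed (pip install zeroconf).
from time import sleep
from zeroconf import Zeroconf, ServiceBrowser, ServiceListener

class PrinterListener(ServiceListener):
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            print(f"Found printer: {name} at {info.parsed_addresses()}:{info.port}")

    def update_service(self, zc, type_, name): pass
    def remove_service(self, zc, type_, name): print(f"Gone: {name}")

zc = Zeroconf()
# "_ipp._tcp.local." is the service type CUPS/Avahi advertise for IPP printers.
browser = ServiceBrowser(zc, "_ipp._tcp.local.", PrinterListener())
sleep(5)  # give devices a few seconds to respond
zc.close()
```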
Mozilla has already shipped strict tracking protection by default in recent versions of Firefox, so they already have a leg up on this.
Google is currently trying to transition people to its own proprietary method of tracking (where the browser itself tracks you), so they would love it if third-party cookies were no longer usable for that.
Mozilla has also added a direct tracking feature (anonymized) to Firefox btw. Not sure what their agenda is.
Websites are irrelevant: if third-party cookies stop working in major browsers, there’s no point in setting them anymore; they’ll just be ignored.
TBF in most cases forced app obsolescence is on the developers. Some of them are super aggressive and will force you to update without really needing it. Like, come on, package tracking app, I really don’t believe you’re unable to show me the package pick-up barcode without updating. 🙄
But yeah, on iOS it’s completely impossible to get older versions; once you’ve updated something, that’s it. And even on Android I’ve noticed it’s become impossible to downgrade some apps even if I have the old APK; the Google installer simply fails to install it if I’ve ever had a newer version installed.
In the olden days software used to be sold by individual major versions. You paid for version 9, you paid for version 10. Or you skipped versions you didn’t need. You could use versions side by side, and the newest installed would import its data from the older ones, and so on.
App stores have made this very awkward or almost impossible. There’s no concept of separating major versions. You’d have to buy and install completely different apps to be able to pay for them separately and to use them side by side, but if they’re separate apps they can’t import your data from each other. Not to mention that people seem to hate having “too many apps” for some reason.
Software subscriptions switch “support per major version” to “support per time of use”. It’s obviously shittier, but it’s more realistic than a one-time price with the expectation of using all future versions in perpetuity; a one-time price would have to be very large to cover that.
It’s impossible to tell how meaningful Backblaze’s numbers are, because we don’t know the baseline failure rate for each model outside their fleet, so we can’t judge the statistical significance. There are also other factors involved, like the age of the drives and the type of workload they were used for.
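To illustrate the significance problem with some made-up numbers (not Backblaze’s): two models whose annualized failure rates look different can have overlapping confidence intervals, in which case the difference tells you nothing. A rough sketch:

```python
# Rough sketch with made-up numbers: approximate 95% confidence intervals
# around two hypothetical annualized failure rates (AFR).
import math

def afr_ci(failures: int, drive_years: float) -> tuple[float, float]:
    rate = failures / drive_years
    # Poisson standard error approximation for a rare-event rate
    se = math.sqrt(failures) / drive_years
    return (rate - 1.96 * se, rate + 1.96 * se)

print(afr_ci(12, 1000))  # model A: 1.2% AFR, CI roughly (0.5%, 1.9%)
print(afr_ci(18, 1000))  # model B: 1.8% AFR, CI roughly (1.0%, 2.6%) -- overlaps A
```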
Buying more reliable devices can definitely save you time and headaches in the future, since you’d have to deal with failures less frequently.
That’s a recipe for sorrow. Don’t waste time on “reliability” research; just plan for failure. All HDDs fail. Assume they will, and back up or replicate your data.
Any difference you personally experience between the three big brands is meaningless. For every failed HDD you’ve had, there’s going to be another person who swears by that brand and has had five of them running for 10 years without a hitch.
Buy whatever’s cheaper in your area and stop worrying. Your reliability should be assured by backups anyway, not by betting on a single drive. Any drive can fail.
For a home setup you don’t care, because you should have either redundancy or backups (preferably both).
So that typically means buying the cheapest new HDD from one of the established brands (Seagate, Western Digital, Toshiba), in the right size for your needs, and one you can afford to buy at least twice (for the aforementioned backups or redundancy), or even three times, replacing as soon as needed.
In other words, there’s no need to speculate on how long an HDD will last; you simply replace it when needed.
Please also note that above 10 TB or so, consumer HDDs are increasingly being replaced by enterprise models, which run hotter and make more noise.
Windows 7 was actually surprisingly well optimized. It ran OK on an office PC with 512 MB of RAM and a 512 MHz CPU.
You wouldn’t use it like that, because by that time apps like browsers and office suites were starting to feel restricted by that little RAM, to the point where you could only run one or the other. But the OS itself stayed out of the way as much as possible, and if you gave it just a little more RAM (like 1 GB) you suddenly had a usable office machine.
Everybody should be using DNS over HTTPS (DoH) or DNS over TLS (DoT) nowadays. Cleartext DNS is way too easy to subvert, and even when it’s not being tampered with, most ISPs snoop on it to compile statistics about what their customers visit.
DoH and DoT aren’t a foolproof solution though. HTTPS connections still leak domain names when the target server doesn’t use Encrypted Client Hello (ECH), and you need to be using DoH for ECH to work.
Even if all that is in place, a determined ISP, workplace or state actor can identify DoH/DoT servers and compile block lists, perform deep packet inspection to detect such connections regardless of server, or set up their own honeypot servers.
There’s also a downside to DoH/DoT: appliances and IoT devices on your network can use it to bypass your control over your own LAN.
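For the curious, DoH is just DNS carried inside an ordinary HTTPS request, which is exactly why it blends in. A minimal sketch against Cloudflare’s public DoH JSON endpoint, using the requests library:

```python
# Minimal DoH sketch: resolve a name via Cloudflare's public DoH JSON API.
# Uses the requests library (pip install requests).
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},  # ask for the JSON wire format
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["data"])
# To an on-path observer this is just another HTTPS connection --
# which is also exactly why IoT devices can use it to slip past your LAN DNS.
```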
If you mean properly signed certificates (as opposed to self-signed) you’ll need a domain name, and you’ll need your LAN DNS server to resolve a made-up subdomain like lan.domain.com. With that you can get a wildcard Let’s Encrypt certificate for *.lan.domain.com, and all your https://whatever.lan.domain.com URLs will work normally in any browser (for as long as you’re on the LAN).
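Once that’s set up you can sanity-check that a LAN host presents a cert that standard validation accepts, which is the same check a browser does. A quick sketch with Python’s standard library (the hostname is a placeholder for one of your LAN services):

```python
# Quick sanity check: does whatever.lan.domain.com present a certificate
# that standard TLS validation (same as a browser's) will accept?
# Host/port below are placeholders for one of your LAN services.
import socket, ssl

host, port = "whatever.lan.domain.com", 443
ctx = ssl.create_default_context()  # uses the system CA store, verifies hostname
with socket.create_connection((host, port), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("OK:", cert["subject"], "expires", cert["notAfter"])
```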
Unfortunately all the volume-based email providers I know of (Purelymail, MXroute, Migadu) are one- or two-person operations. Doesn’t stop them from being excellent, of course.
I wish the volume-based pricing model were more popular, but unfortunately very few people know about it, and of course the large providers prefer to charge per account or add all kinds of artificial limitations, because they make much more money that way. Having multiple mailboxes on the same domain costs the provider nothing, and yet you get charged per mailbox.
Then why do they offer a separate, distinct DDoS mitigation feature on the enterprise plans? And did you notice they call them “mitigation” and not “protection”? 🙂
Look at the description of each one: the free one “stops illegitimate traffic at the edge”. Meaning they’ll serve from cache; it’s not getting through to your actual site. You can get caching from any CDN service, it doesn’t have to be CF. All CDN services are distributed and will try to keep serving for as long as possible, because their whole purpose is to deal with traffic spikes.
And if you want to know how long CF (or any service) will keep serving from cache and how far they’ll go for an account (especially a free account), you want to check the terms of service, not the plans. The plans are made to sell to you; the fine print is in the terms.
Anyway, I really don’t understand people’s obsession with DDoS, particularly among self-hosters. The chances of their little website ever being the target of a DDoS are astronomically small. Many of them don’t take proper backups, and don’t worry about theft or fire or power surges, which are far more likely, but go frantic when they hear about features they’ll never use.
Use your common sense. They’re not going to expend any significant resources to keep up a free website.
They have a small amount of capacity available for mitigating DDoS across all free accounts together, while resources last. If you happen to fit within that capacity at any given time, that’s nice; if you don’t, you go down.
Make your website all static files (if you can) and host it on a CDN like Bunny.net. It’s $1/month, and your website might actually be able to ride out some large traffic spikes. It won’t work against a targeted, sustained DDoS, but like the other comments said, that’s not likely to happen.
You don’t have to worry about DDoS:
If the stuff you’ll be hosting is static files you can use a CDN service. CDNs are designed to be distributed and redundant, so they’re somewhat resilient to DoS attacks by default. They’ll still kick you off if it gets to be too much, but maybe you can weather shorter or moderate attacks.
If you’re hosting a dynamic/interactive service, forget about it.
CAA and DNSSEC aren’t obscure. I would not even consider managing any domain nowadays without them.
Neither are ALIAS/DNAME/HTTPS, which you’ll be running into more and more if you haven’t already. You could argue there are multiple competing standards at work there, but Afraid doesn’t implement any of them.
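If you want to see which of these a domain (or its DNS service) actually publishes, you can simply query for the records. A quick sketch with the dnspython library (pip install dnspython); example.com stands in for a domain you manage:

```python
# Quick sketch: query a domain for CAA and HTTPS records using dnspython.
# Replace example.com with a domain you manage.
import dns.resolver

for rtype in ("CAA", "HTTPS"):
    try:
        answers = dns.resolver.resolve("example.com", rtype)
        for rr in answers:
            print(rtype, "->", rr.to_text())
    except dns.resolver.NoAnswer:
        print(rtype, "-> no records published")
```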
If you don’t need CI/CD I’m not sure why you need a centralized frontend at all. Git itself is distributed, and you can set up any code flow you can think of. It has hooks that can be used to enforce code quality checks on select branches (see the sketch below). There are local history browser apps for every platform, and IDE plugins.
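For example, here’s a minimal server-side pre-receive hook sketch in Python; the “commit message must reference an issue” policy is just a made-up stand-in for whatever check you’d actually run:

```python
#!/usr/bin/env python3
# Minimal sketch of a server-side pre-receive hook. The policy ("every
# commit pushed to main must reference an issue number") is a placeholder;
# swap in linters, tests, whatever you need.
import subprocess
import sys

ZERO = "0" * 40  # git sends an all-zero sha for newly created refs

for line in sys.stdin:
    old, new, ref = line.split()
    if ref != "refs/heads/main":
        continue  # only guard the main branch
    rev_range = new if old == ZERO else f"{old}..{new}"
    # All commits this push would add to main.
    commits = subprocess.run(
        ["git", "rev-list", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout.split()
    for sha in commits:
        subject = subprocess.run(
            ["git", "log", "-1", "--format=%s", sha],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        if "#" not in subject:
            print(f"rejected: {sha[:8]} has no issue reference", file=sys.stderr)
            sys.exit(1)  # non-zero exit makes git refuse the push

sys.exit(0)
```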
A frontend is no substitute for developer communication. Usually what the “PR” thing does is sugarcoat the fact that the devs don’t know how to use Git and/or don’t talk to each other.
What record types are you referring to that aren’t supported?
AFAIK it only supports a small subset of all the types currently in use.
It lets you change reverse proxies or run a website with TLS completely independently of the certbot. The certbot deals with obtaining certs and leaves them in a directory, and the proxies or webservers just take them from that directory. If the proxy container breaks, the certbot still does its thing, etc.
It also makes it easier to do stuff like running different proxies in parallel for different things, chaining proxies (for instance if you need to use a VPS because you can’t forward ports), and so on.
But it’s all for advanced setups, for basic stuff I’d still go with NPM.
You don’t run your own DNS; these are services hosted by someone else, just like Afraid. The difference, on top of the interface, is that they support modern record types, they have redundant servers all over the world, there’s a team working on them instead of just one guy, they have APIs that let you manage your many domains more easily (see the sketch below), they have zone backup and restore, etc.
I used Afraid too, back when I was starting out and didn’t know any better, but once I saw some of the other services out there I never looked back. You’ll never know what extra features you could want if your current service doesn’t offer you any.
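To give a flavor of those management APIs, here’s a hedged sketch; the endpoint, token and payload below are hypothetical placeholders since every provider’s API is different, but the general shape (authenticated JSON over HTTPS) is typical:

```python
# Hypothetical sketch of a DNS provider API call -- endpoint, token and
# payload fields are placeholders; real providers differ, but the shape
# (authenticated JSON over HTTPS) is typical.
import requests

API = "https://dns.example-provider.com/v1"   # hypothetical endpoint
TOKEN = "your-api-token"                      # placeholder

resp = requests.post(
    f"{API}/zones/domain.com/records",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={"type": "CAA", "name": "@", "value": '0 issue "letsencrypt.org"', "ttl": 3600},
    timeout=10,
)
resp.raise_for_status()
print("Record created:", resp.json())
```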
You can protect important data with backups, which you should do anyway, and in practice I feel like the added complexity of BTRFS and ZFS is not worth the COW.
BTRFS is cool, but they tried to cram way too much into it too fast, which added a ton of complexity, and it’s still not 100% done after all these years. A COW mode for ext4 would have been adopted much faster.
How do you avoid interacting with a malicious server if discovery is done automatically by your machine when you open a print dialog, and malicious servers can use the same names as legit printers?