A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don’t control.
Rules:
Be civil: we’re here to support and learn from one another. Insults won’t be tolerated. Flame wars are frowned upon.
No spam posting.
Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it’s not obvious why your post topic revolves around selfhosting, please include details to make it clear.
Don’t duplicate the full text of your blog or GitHub repo here. Just post the link for folks to click.
Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).
No trolling.
Resources:
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
That’s kind of what happens when somebody re-uses an already-assigned namespace for a different purpose. The same goes for other domains, or if you mess with IP addresses or MAC addresses. The internet is full of RFCs and old standards that need to be factored in.

And I don’t really see Google at fault here. It seems they’ve implemented this to specification, so technically they’re “right”. The question is: is the RFC any good? Do any other RFCs contradict it? Usually these things are well written. If everything checks out, then it’s the network administrator’s fault for configuring something wrong… I’m not saying that’s bad… it’s just that computers and the internet are very complicated, and sometimes you’re not aware of all the consequences of the technical debt. And we have a lot of technical debt.

Still, I don’t see any way around implementing a technology and an RFC to specification. We’d run into far worse issues if everyone did random things because they think they know better. It has to be predictable: either a specification is followed to the letter, or the specification has to go altogether.
The issue here is that second “may” clause. It should have been prohibited from the start, because it causes exactly this kind of problem. That’s more or less what Google is doing now, though. If you ask me, they probably wrote that paragraph because looking up previously unknown TLDs via unicast DNS is the default behaviour anyway, and they can’t really prevent it. But that’s what ultimately causes the issues, so they wrote that warning. The only proper solution is to be strict and break it intentionally, so no-one gets the idea to re-use .local… But judging from your post, that hasn’t happened so far.
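To make the ambiguity concrete, here’s a toy sketch (purely illustrative, not any real resolver’s code; function and parameter names are made up) of the resolution choice that “may” clause creates for names under .local:

```python
# Illustrative sketch of how a stub resolver might route a lookup,
# under the strict reading vs. the optional "may" fallback.

MDNS_GROUP = ("224.0.0.251", 5353)  # well-known mDNS multicast address/port

def pick_resolvers(hostname, unicast_fallback_allowed=False):
    """Return the ordered list of mechanisms a resolver would try."""
    labels = hostname.rstrip(".").lower().split(".")
    if labels[-1] == "local":
        # Strict reading: names under .local go to multicast DNS only.
        if not unicast_fallback_allowed:
            return ["mdns"]
        # Optional "may" behaviour: also try ordinary unicast DNS,
        # which is what collides with networks that re-used .local
        # as an internal DNS zone.
        return ["mdns", "unicast-dns"]
    return ["unicast-dns"]
```

Under the strict reading, `printer.local` only ever hits multicast; with the fallback enabled, a unicast query goes out too, and an internal DNS server that answers for .local produces exactly the confusing split behaviour described above.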
Linux, macOS, etc. are also technically “right” if they choose to adopt that “may” clause. It just leads to the consequences laid out in that sentence: they’re going to confuse users.
You’d think that, with how often Android is updated, ridding us of this technical debt would be easy: disable multicast DNS by default, and add a hidden setting tucked away in a menu somewhere to re-enable it. Ez pz.