Because a lot of the content on national TLDs is relevant only for people of that nation. It helps with name clashes and pushes off stuff that doesn’t make sense in any of the more “global” TLDs.
And for governments, banks and other institutions there should really be some official standard where they pick a single second-level domain and use it for stuff that needs to be secure so anyone anywhere can be sure it’s controlled by the correct entity and not a scammer.
They’re two separate(ish) issues.
But it’s still a bad idea to use national TLDs for stuff that has nothing to do with that nation.
Granted, if ICANN weren’t just a money-grabbing machine with no forward thinking they wouldn’t have given nations clearly “generally desirable” TLDs in the first place, but the fact that they already did doesn’t mean those should be misused.
Yep, exactly my thoughts. Unfortunately very few developers really think (about related but not directly adjacent code) when they implement stuff (and that’s only when the task requirements even “allow” them to), and even fewer have real knowledge of security and common pitfalls and whatnot to avoid such issues.
And even when you have those, you still need good practices and code reviews where the rest of the slip-ups are caught.
Firefox has a profile manager (the same thing that’s exposed at about:profiles). Run it with firefox -P (or firefox --ProfileManager) and you’ll get a profile switcher. Add --no-remote, as in firefox -P --no-remote, if you want to open multiple different profiles at once (only the instance started without --no-remote will open new tabs when you click links outside the browser). You’ll probably want a shortcut per profile though; that’s firefox -P ProfileName (note that -P takes the profile name, while -profile takes a path to a profile directory) and then you can easily use profiles.
The support is actually pretty decent, just kinda hidden. You don’t get a profile switcher because the browsers are completely separate, they don’t really know about each other.
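To make the workflow above concrete, here’s a minimal sketch of the commands involved; the profile name “work” is just a placeholder, and exact flag spellings can vary slightly between platforms:

```shell
# Open the profile manager UI to pick or create a profile:
firefox -P

# Create a profile non-interactively (name "work" is hypothetical):
firefox -CreateProfile work

# Launch a second, fully independent instance using that profile,
# alongside your already-running default Firefox:
firefox --no-remote -P work
```

Putting the last command in a desktop shortcut gives you a one-click launcher per profile.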
Oh I didn’t even mean that; just the (possible, shorthand/unreadable) syntax alone, weird typing, etc. seem like it’d be hard to work with.
It’s also funny because “allowing clusterfucks” is a huge reason why PHP was so hated; when you took care to write it properly it wasn’t bad even in the early days.
With AWS especially there is a shitton of proprietary stuff. Most of the friction is in knowledge however; the cloud environments differ, are configured differently, have different limitations and caveats, etc. Someone who has only ever worked with AWS will have to learn a lot of things anew if they switch. Hell there’s a reason why “AWS engineer” is a dedicated role in some companies.
Now, if you only manually set up some VMs and configure them like you would a regular server then sure, it’s easy to migrate. But when you are missing 99% of the features of the cloud environment are you actually using it?
When you use a cloud solution (and especially one with vendor lock-in like Amazon) then yeah, you are fucked there too, and I’d question why you did it in the first place.
If you have your own infrastructure - be it a server at home or whatever - then you can always just move it elsewhere, get some other ISP, whatever. There is no lock-in. Inconvenience, sure, but you can migrate elsewhere. That’s just not true about all the other things mentioned, or the friction would be much higher.
You got it backwards there. PHP was pretty bad (mostly because it was easy to pick up so novice and shitty programmers used it a lot), but got insanely better, and it absolutely stood the test of time - most of modern web still uses it and it isn’t going anywhere. There are also few languages that would have such a robust ecosystem where you could whip up a solid, complete app in a few hours. JS comes close but its ecosystem is a clusterfuck. Everything else has poor third party support - be it libraries, connectors to various services or just simply best practices (for web).
So while this is probably a good answer to the hypothetical question, that’s actually not a good thing, you realize that, right?
Special tools exist because different problems require different solutions. And sure, there can be a huge overlap between those tools, but you can’t literally do everything with a single tool; chances are it’d be a shitty tool. Either you can’t actually do everything with it, or it’s so complex that you don’t want to use it in the first place.
Javascript is somewhere in between, in the sense that it’s both kinda terrible for most of the jobs you mentioned, while also not actually usable for “everything” - i.e. it’d be a terrible language for anything that needs to be performant or reliable. Hell, we have JS in crap like Gnome now and it’s a nightmare.
Because while you do have control (and “copies”) of the source code repository, that’s not really true for the ecosystem around it - tickets, pull requests, …
If Microsoft decided to fuck you over you’d have a hard time migrating the “community” around that source code somewhere else.
Obviously it depends on what features you are using, but for example losing all tickets would be problematic for any project.
Apparently Mozilla won’t be even accepting PRs there so it doesn’t matter much.
Ehh it’s not that simple either way.
Like, platforms don’t actually own your data and usually explicitly say so, if for no other reason than to avoid liability for what you post.
If they did actually own the data (beyond having the very broad license to use it) they’d also have to curate 100% of it, otherwise they’d get sued to oblivion by copyright holders and whatnot.
You’re completely wrong.
This means that they will implement it, and then it’s only a tiny change to make it available everywhere if they decide to do so later.
The option alone now allows people to build stuff that will only work in those WebViews, refusing to work without the integrity check, which is already a huge loss.
You cannot create information from nothing.
Arguably that’s exactly what generative AIs do. Which is not what you meant, but yeah. I was going more for “given current progress and advancements in how we curate datasets and whatnot, there is no reason to believe that we won’t eventually have AI-generated pictures that are 100% indistinguishable”.
We already know that you don’t need to have stuff in the training dataset to have it show up meaningfully in the output.
Psychologists/Psychiatrists are still on the fence on that one, I wouldn’t be surprised if it depends on the person. And yes the external harm produced by AI images is definitely lower than that produced from actual CSAM, doubly so newly produced CSAM, but that doesn’t mean that therapy, even in its current early stages, couldn’t do even better.
100% agree there. What I would like to see is more research, but that’s currently kinda impossible with CSAM being as criminalized as it is. Which is kinda sad.
Therapy seems to work on most help-seeking people (and there are studies showing that), so this should be a last-ditch effort.
The rest of your post I don’t agree with. It isn’t really (definitely not exclusively) a societal problem - some people’s brains are simply wired in a way that’s just bad and there isn’t much you can do about it, and either these people suffer by living with it, or they cause harm to others because of it. Both are bad.
The vast majority of paedophiles are not exclusive paedophiles, often they’re not even really attracted to kids at all beyond having developed a fetish, they’re rapists focussing on the most vulnerable, often due to having been victims of sexual abuse themselves.
Do you have any statistics proving this? It’s exactly the bias that already makes non-acting pedophiles unlikely to seek help. Obviously these kinds of people are the ones you hear most about, but I wouldn’t be so sure that they’re the majority (even if they’re most of the problem).
My point is that if you treat them as people who need help and actually manage to provide it, you should be able to get the amount of abuse down overall, except for the people who truly can’t be helped. And it really doesn’t matter much how you provide that help, even if it’s morally questionable like using artificially generated CSAM.
Artificial or not, this isn’t really a new idea. A similar argument can be made for existing CSAM and providing it under controlled conditions.
And yeah, “nobody knows”, in huge part because doing such a study would be highly illegal under current CSAM laws in most parts of the world. So, paradoxically, you can’t even legally study how to help those people, even if they actively want to be helped and want to help you do research on it.
Edit: Also, I’m not really making any assumptions; I literally said “there is an argument to be made”. I’m not making that argument because I don’t actually know enough. Just saying that it’s an option that should be explored.
A “weird fetish” is, quite literally, a paraphilia, just like pedophilia. We only care about the latter because it has the potential to hurt people if acted upon. There’s no difference, medically speaking.
A lot of the comments in here seem a little bit too sympathetic.
When you want to solve an issue you need to understand the people having it and have some compassion, which tends to include defending people who haven’t actually done anything harmful from being lumped in with the kind who do act on their urges.
I see where you’re coming from but that’s a technical issue that will probably be solved in time.
It’s also really not black and white; sure, maybe you can tell it isn’t perfect, but you’d still prefer it over content where you know someone was actually harmed.
Despite the reputation people like that have (due to the simple fact of how reporting works), most are as harmless as you and me, don’t actually want to see innocent people suffer, and would never act on their desires. So having a safe and harmless outlet might help.
…and that’s how it still works.