I’m not so sure. It’s possible Nintendo opted for a carrot rather than a stick in this case.
This doesn’t seem to have been started with a public C&D letter like usual. Yuzu (the previous Switch emulator that was taken down) incorporated some proprietary Nintendo information, which is why Nintendo had a legal lever against them. They don’t have one in this case, yet it still came down. Plus, everything seems to have been happening very quietly behind the scenes.
If you were an emulator writer and Nintendo came and offered you life-changing money in exchange for ending the project, would you take it? I would have a very hard time turning that down. Nintendo also doesn’t want a flood of yokels trying to start the project up again hoping to receive the same offer; most would fail, but one or two might take off. Better to let the threat be implied.
This is just speculation, of course, but something about the way this has unfolded feels a little different.
We tend to forget about it these days, but the Unix permissions model was criticized for decades for being overly simplistic. One user having absolute authority, with limited ways to delegate specific authority to other users, is not a good model for multi-user operating systems. At least not in environments with more than a few users.
A well-configured sudo or SELinux can overcome this, which is one reason we don’t bring it up much anymore. The whole model has also changed: most people have individual PCs now, and developers often get their own little VM environment on a larger server.
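To be concrete, the delegation piece is the part classic Unix never had and sudo bolts on. A minimal sketch, with a hypothetical user and command:

    # /etc/sudoers.d/webadmin (user and command are made up)
    # Delegate exactly one privileged action, and nothing else:
    webadmin ALL=(root) NOPASSWD: /usr/bin/systemctl restart nginx

That one line gives a single user a single root-level capability without handing out root itself, which is exactly what the original model couldn’t express.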
It’s a series where a dragon kidnaps a princess, and a plumber from New York must save her. To do so, he must gather mushrooms by punching bricks from below as he jumps, jump on turtles to make them hide in their shells, and dodge fire-breathing plants.
In the most recent 2D incarnation, the fire-breathing plants will sing at you.
The people who made this were on a lot of drugs.
It’s entirely possible to parse HTML in PCRE. You shouldn’t, but it is possible. PCRE stopped being a strictly regular language a long time ago (backreferences, recursion) and is entirely capable of doing it.
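Here’s a minimal Perl sketch (5.10+) using pattern recursion. It’s a toy that ignores attributes, comments, and void tags, but it matches arbitrarily nested elements, which no strictly regular language can do:

    use strict;
    use warnings;

    # An element is <tag>, then text or nested elements (via
    # recursion into the named group), then the matching </tag>.
    my $element = qr{
        (?<elem>
            < (?<tag> \w+ ) >              # opening tag, captured
            (?: [^<>]++ | (?&elem) )*      # text, or recurse into a child
            </ \k<tag> >                   # closing tag must match group "tag"
        )
    }x;

    my $html = '<div><p>hello <b>world</b></p></div>';
    print "matched: $+{elem}\n" if $html =~ /\A$element\z/;

The (?&elem) recursion and the \k<tag> backreference are doing all the work there, and neither exists in a true regular language.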
I don’t. It may look less like line noise, but it doesn’t unravel the underlying complexity of what it does. It’s just wordier without being helpful.
Edit: also, these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases. Try making nested capture groups with these, for instance. It gets messy fast.
You can get exactly the same benefit by blocking non-established/non-related connections on your firewall. NAT does nothing to help security.
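For the concrete version: on a Linux router, something roughly like this nftables setup gives you everything people think NAT gives them (the interface name is made up):

    # Drop all forwarded traffic by default; allow replies to
    # connections that inside hosts initiated. No NAT involved.
    nft add table inet filter
    nft add chain inet filter forward '{ type filter hook forward priority 0; policy drop; }'
    nft add rule inet filter forward ct state established,related accept
    nft add rule inet filter forward iifname "lan0" accept

Unsolicited inbound connections die at the default drop policy, exactly as they would behind NAT, with no address rewriting anywhere.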
Edit: BTW, every time I see this response of “NAT can prevent external access”, I severely question the poster’s networking knowledge. Like to the level where I wonder how you manage to configure a home router correctly. Or maybe it’s the way home routers present the interface that leads people to believe the two functions are intertwined when they aren’t.
Governments are not anyone’s issue other than other governments. If your threat model is state actors, you’re SOL either way.
That’s a silly way to look at it. Governments can be spying on a block of people at once, or just the one person they actually care about. One is clearly preferable.
Again, the obscurity benefit of NAT is so small that literally any cost outweighs it.
I don’t see where you get a cost from it.
We pushed the Internet toward a more centralized, less private design for reasons that can be traced directly to NAT.
If you want to hide your hosts, just block non-established, non-related incoming connections at your firewall. NAT does not help anything besides extending IPv4’s life.
So instead we open up a bunch of other issues.
With CGNAT, governments still spy on individual addresses when they want. Since those individual addresses now cover a whole bunch of people, they effectively spy on large groups, most of whom have nothing to do with whatever they’re investigating. At least with IPv6, it’d be targeted.
NAT’s obscurity comes at a cost, and its gain is so small that even a tiny cost wipes out the benefit.
It wasn’t designed for a security purpose in the first place. So turn the question around: why does NAT make a network more secure at all?
The answer is that it doesn’t. Firewalls work fine without NAT. Better, in fact, because NAT itself is a complication firewalls have to deal with, and complications are the enemy of security. The benefit of obfuscating hosts behind the firewall is speculative and doesn’t outweigh the advantages of end-to-end addressing.
I’m starting to think the way to go isn’t setting stories in the sprint at all. There’s a refined backlog in priority order. You grab one, do it, grab the next. At the end of the two-week period, you can still have a retro to see how the team is doing, but don’t worry about rollover.
Alternatively, don’t think of Agile as a set instruction manual, but rather a group of suggestions. Have problem X? Solution Y worked for many teams, so try that.
the etymology of “blacklist,” for example, has no relation to race whatsoever
What happens is that the term “black” takes on negative connotations in a million different ways. “Blacklist” being one example. It may have no overt connection to race, but it gains it through repeated use in different contexts. Your brain doesn’t necessarily encode the different contexts in separate ways. You may be able to think it through at a high level of rationality in a debate, but not when you’re out on the street going about your day.
The solution may not be to change the language, though. There are too many longstanding cultural associations with black = evil, and there’s just no way to get rid of them all.
https://www.scientificamerican.com/article/the-bad-is-black-effect/
“Although psychologists have known for a long time that people associate dark skin with negative personality traits, this research shows that the reverse is also true: when we hear about an evil act, we are more likely to believe it was done by someone with darker skin. This “bad is black” effect may have its roots in our deep-seated human tendency to associate darkness with wickedness. Across time and cultures, we tend to portray villains as more likely to be active during nighttime and to don black clothing. Similarly, our heroes are often associated with daytime and lighter colors. These mental associations between color and morality may negatively bias us against people with darker skin tones. If this is true, it has far-reaching implications for our justice system. For example, eye witnesses to crimes may be more likely to falsely identify suspects who possess darker skin.”
“Overall, the “bad is black” effect only underscores the importance of finding ways to combat the various ways that our inherent biases can influence perceptions of guilt and innocence. Understanding the extent of these biases, as well as what may be causing them, represents an important first step.”
Problem is, most organizations don’t know how to properly architect for and integrate microservice architectures into their environments and work process.
Which is exactly the same thing said about OOP, or structured programming, or Agile, or a number of other paradigms. Organizations aren’t doing it right, but it works if it’s “properly architected”. Yet, this “proper architecture” doesn’t seem to exist anywhere in the real world.
In fact, I’ve argued before that if OOP is fundamentally about objects sending messages to each other, then microservices are just an approach to OOP where the communication channel is over network sockets.
If you understand things like encapsulation and message passing, then the difference doesn’t come down to microservices vs monolith. It’s self-contained components vs not, and the method of passing messages is only important from a feasibility and performance standpoint.
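Here’s a toy Perl sketch of what I mean; the class and payload are invented. The second half delivers the exact same message as the first, just serialized over a socket:

    use strict;
    use warnings;
    use IO::Socket::INET;
    use JSON::PP qw(encode_json decode_json);

    package Greeter;
    sub new   { bless {}, shift }
    sub greet { my ($self, $name) = @_; "hello, $name" }

    package main;

    # 1. Plain OOP: the message is a method call.
    print Greeter->new->greet('world'), "\n";

    # 2. "Microservice": the same message, sent over a socket.
    my $server = IO::Socket::INET->new(
        LocalAddr => '127.0.0.1',
        Listen    => 1,
        Proto     => 'tcp',
    ) or die "listen: $!";
    my $port = $server->sockport;

    my $pid = fork() // die "fork: $!";
    if ($pid == 0) {                      # child acts as the service
        my $conn = $server->accept;
        my $req  = decode_json(scalar <$conn>);
        print {$conn} Greeter->new->greet($req->{name}), "\n";
        exit 0;
    }

    my $client = IO::Socket::INET->new(
        PeerAddr => "127.0.0.1:$port",
        Proto    => 'tcp',
    ) or die "connect: $!";
    print {$client} encode_json({ name => 'world' }), "\n";
    print scalar <$client>;               # same answer, different transport
    waitpid $pid, 0;

Both halves print the same thing. Everything that makes the second one “a microservice” is transport and serialization, not design.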
As a Perl dev, I wouldn’t recommend it at all at this point.
20 years ago, it had the best publicly available repository of libraries with CPAN. That’s long since been surpassed.
10 years ago, it still had a fantastic community of highly concentrated competence. A lot of those people are gone, and the ones left behind are some of the more toxic ones.
It’s going to stay a long-tail language like COBOL, where it never really dies. If you have experience in it, you can make a lot of money at it, and will for a long time to come, but it’s hard to break into.
My company is moving towards Elixir, which I like a lot. Realistically, I’m at least 20 years from retirement, and I expect the Perl platform to still be around in some capacity even then. I might even be the one to turn off the last lights on it as I walk out the door.
Setting up a web of trust could cut out almost all spam. Of course, getting most people to manage their trust in a network is difficult, to say the least. The only other solution has been walled gardens like Facebook or Discord, and I don’t have to tell anyone around here about the problems with those.
I’m hoping my makerspace will be able to do something like that in the future. We’d need funding for a much bigger internet connection, at least three full-time systems people paid market wages and benefits (three because they deserve to go on vacation while we maintain a reasonable level of reliability), and also space for a couple of server racks. Equipment itself is pretty cheap (there are tons of used servers on eBay), but monthly costs are not.
It’s a lot, but I think we could pull it off a few years from now if we can find the right funding sources. Hopefully it can be self-funding in the long run with reasonable monthly fees.
IIRC, it’s nearly impossible to self-host email anymore, unless you already have a long-established domain. Gmail will tend to mark you as spam if you’re sending from a new domain. Since they dominate email, you’re stuck with their rules. The only way to get on the good boy list is to host on Google Workspace or another established service like Protonmail.
That’s on top of the fact that correctly configuring an email server has always been a PITA. More so if you want to avoid being a spam gateway.
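For a taste of it, here’s roughly the set of DNS records Gmail effectively demands before it will trust you (domain, selector, key, and addresses are all placeholders):

    ; Hypothetical zone snippets for example.com: SPF, DKIM, DMARC.
    example.com.                  IN TXT "v=spf1 mx -all"
    sel1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"
    _dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
    ; Plus a matching PTR record on the sending IP, which your ISP
    ; usually controls -- one more reason residential hosting is out.

And that’s just the DNS side, before you’ve touched the MTA config itself.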
We need something better than email.
I agree, and I think there’s some reliability arguments for certain services, too.
I’ve been using self-hosted Bitwarden. That’s something I really want to be reliable anywhere I happen to be. I don’t want to rely on my home Internet connection always being up and dynamic DNS always matching. An AWS instance or something like that which can handle Bitwarden would be around $20/month (it’s kinda heavy on RAM). Bitwarden’s own hosting is only $3.33/month for a family plan.
Yes, Bitwarden can work with its local cache only, but I don’t like not being able to sync everything. It’s potentially too important to leave to a residential-level Internet connection.
It wouldn’t be my first choice, but it’ll probably do the job. Depends on what you want to do with it. There are fewer people choosing this path, which means that when things go wrong, you’ll have fewer sources of information to help.
Some old Dell office PC with a good amount of RAM and an SSD would do just as well.