
IIRC, the original reason was to avoid people making custom parsing directives using comments. Then people did shit like "foo": "[!-- number=5 --]" instead.


Some hackers DoS the code. This guy DoS’s the corporate process.


Nobody going to admit to being the pigeon? Because that’s me.


Valid, but if it needs to be decided, then something concrete should be scheduled to do that follow-up when people are back in.


Pi Pico SDK does. Well, the build with debugging symbols, anyway. The regular executable is .uf2.


I’ve actually met him. Pretty chill guy, but is completely confused by his Internet fame.


There will likely always be a job for someone who has good Perl knowledge. There’s no good reason to start a new project in it, though.


Eh, it’s a language that rewards deep knowledge. I like that. But it’s not coming back.


As a Perl developer: not going to happen.


We tend to forget about it these days, but the Unix permissions model was criticized for decades for being overly simplistic. One user having absolute authority, with limited ways to delegate specific authority to other users, is not a good model for multi-user operating systems. At least not in environments with more than a few users.

A well-configured sudo or SELinux can overcome this, which is one reason we don’t bring it up much anymore. The whole model has also changed: most people now have individual PCs, and developers are often in their own little VM environment on a larger server.
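
For instance, delegating one specific slice of root’s authority in sudoers looks something like this (hypothetical user and service names):

    # /etc/sudoers.d/deploy -- hypothetical: let the "deploy" user
    # restart one specific service as root, and nothing else.
    deploy ALL=(root) NOPASSWD: /usr/bin/systemctl restart myapp.service

That one narrowly scoped grant is exactly the kind of delegation the classic all-or-nothing root model lacked.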


It’s a series where a dragon kidnaps a princess, and a plumber from New York must save her. To do so, he must gather mushrooms by punching bricks from below as he jumps, jump on turtles to make them hide in their shells, and dodge fire-breathing plants.

In the most recent 2D incarnation, the fire-breathing plants will sing at you.

The people who made this were on a lot of drugs.


It’s entirely possible to parse HTML in PCRE. You shouldn’t, but it is possible. PCRE’s pattern language stopped being strictly regular a long time ago and is entirely capable of doing it.

https://stackoverflow.com/a/4234491/830741
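
What makes it possible is recursion support. A toy sketch in Perl (possible, still not advisable):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # (?R) recurses into the whole pattern, so this matches balanced,
    # arbitrarily nested <div>...</div> blocks -- something a strictly
    # regular language cannot do.
    my $html = '<div>outer <div>inner</div> text</div>';

    if ($html =~ /(<div>(?:[^<>]+|(?R))*<\/div>)/) {
        print "matched: $1\n";
    }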


I don’t. It may look less like line noise, but it doesn’t unravel the underlying complexity of what it does. It’s just wordier without being helpful.

https://www.wumpus-cave.net/post/2022/06/2022-06-06-how-to-write-regexes-that-are-almost-readable/index.html

Edit: also, these alternative syntaxes tend to make some easy cases easy, but they have no idea what to do with more complicated cases. Try making nested capture groups with these, for instance. It gets messy fast.
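
For contrast, plain /x mode gets you most of the readability without any translation layer. A quick sketch:

    # Same power as normal regex syntax, because it IS normal
    # regex syntax -- just spread out and annotated.
    my $date_re = qr/
        (?<year>  \d{4} )   # four-digit year
        -
        (?<month> \d{2} )   # two-digit month
        -
        (?<day>   \d{2} )   # two-digit day
    /x;

Nested capture groups, backrefs, and all the hard cases still work, because nothing was translated away.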


JSON’s number syntax is perfectly capable of precise encoding to arbitrary decimal precision. Strings are easier if you don’t want to fuck around with the parser, though.
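
The parser fucking-around in question, with Perl’s JSON::PP as an example (a sketch, assuming its documented allow_bignum behavior):

    use JSON::PP;
    use Math::BigFloat;

    # The JSON grammar allows any decimal precision; it's the default
    # parsers that squash numbers into doubles. allow_bignum instead
    # decodes them into Math::BigInt/Math::BigFloat objects.
    my $json = JSON::PP->new->allow_bignum(1);
    my $data = $json->decode('{"price": 0.10000000000000000000001}');
    print ref $data->{price}, "\n";   # Math::BigFloat, full precision kept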


If your home router blocks incoming connections on IPv4 by default now, then it’s likely to do the same for IPv6. At least, I would hope so. The manufacturer did a bad job otherwise.


You can get exactly the same benefit by blocking non-established/non-related connections on your firewall. NAT does nothing to help security.

Edit: BTW, every time I see this response of “NAT can prevent external access”, I severely question the poster’s networking knowledge. Like to the level where I wonder how you manage to configure a home router correctly. Or maybe it’s the way home routers present the interface that leads people to believe the two functions are intertwined when they aren’t.
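
For reference, the protection people attribute to NAT is a few lines of nftables. A sketch (host firewall shown; a router would hook the forward chain the same way):

    # Drop everything inbound that isn't part of a connection a host
    # on the inside initiated. No NAT involved, same "protection".
    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            iif "lo" accept
            ct state established,related accept
        }
    }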


“Governments are not anyone’s issue other than other governments. If your threat model is state actors, you’re SOL either way.”

That’s a silly way to look at it. Governments can be spying on a block of people at once, or just the one person they actually care about. One is clearly preferable.

Again, the obscurity benefit of NAT is so small that literally any cost outweighs it.

“I don’t see where you get a cost from it.”

  • Firewall rules are more complicated
  • Firewall code is more complicated
  • Firewall hardware has to be beefier to handle it
  • NAT introduces more latency
  • CGNAT introduces even more latency
  • It introduces extra surface area for bugs in the firewall code. Some security-related, some not. (I have one NAT firewall that doesn’t want to set up the hairpin correctly for some reason, meaning we have to do a bunch of workarounds using DNS).
  • Lots of applications have to jump through hoops to make it through NAT, such as VoIP services
  • Those hoops sometimes make things more susceptible to snooping; Vonage VoIP, for example, has to use a central server cluster to keep connections open to end users, which is the perfect point to install snooping (and this has happened)
  • . . . and that centralization makes the whole system more expensive and less reliable
  • A bunch of apps just never get built or deployed en masse because they would require direct addressing to work; stuff like a P2P instant messenger
  • Running hosted games with two people behind NAT and two people on the external network gets really complicated
  • . . . something the industry has “fixed” by having “live service” games. In other words, centralized servers.
  • TLS has a field for “Server Name Indication” (SNI) that sends the server name in plaintext. Without going far into the details, this makes it easier for the ISP to know what server you’re asking for, and it exists for reasons directly related to IPv4 sticking around because of NAT. Widespread TLS use would never have been feasible without this compromise as long as we’re stuck with IPv4.

We forced decisions into a more centralized, less private Internet for reasons that can be traced directly to NAT.

If you want to hide your hosts, just block non-established, non-related incoming connections at your firewall. NAT does not help anything besides extending IPv4’s life.


But why bother? “Let’s make my network slower and more complicated so it works like a hack on the old thing”.


So instead we open up a bunch of other issues.

With CGNAT, governments still spy on individual addresses when they want. Since those individual addresses now cover a whole bunch of people, they effectively spy on large groups, most of whom have nothing to do with whatever they’re investigating. At least with IPv6, it’d be targeted.

NAT obscurity comes at a cost. Its gain is so little that even a small cost eliminates its benefit.


IIRC, there are some sloppy ISPs who are needlessly handing out prefixes dynamically. ISPs seem to be doing everything they can to fuck this up, and it seems more incompetence than malice. They are hurting themselves with this more than anybody else.


It wasn’t designed for a security purpose in the first place. So turn the question around: why does NAT make a network more secure at all?

The answer is that it doesn’t. Firewalls work fine without NAT. Better, in fact, because NAT itself is a complication firewalls have to deal with, and complications are the enemy of security. The benefit of obfuscating hosts behind the firewall is speculative and doesn’t outweigh the other benefits of end-to-end addressing.


Obfuscation is not security, and not having IPv6 causes other issues. Including some security/privacy ones.

There is no problem having a border firewall in IPv6. NAT does not help that situation at all.


For individuals. There are tons of benefits for everyone collectively, but as is often the case, there’s not enough incentive for any one person to bother until everybody else does.


I’m starting to think the way to go isn’t to set stories in the sprint at all. There’s a refined backlog in priority order. You grab one, do it, grab the next. At the end of the two-week period, you can still have a retro to see how the team is doing, but don’t worry about rollover.

Alternatively, don’t think of Agile as a set instruction manual, but rather a group of suggestions. Have problem X? Solution Y worked for many teams, so try that.


“the etymology of “blacklist,” for example, has no relation to race whatsoever”

What happens is that the term “black” takes on negative connotations in a million different ways. “Blacklist” being one example. It may have no overt connection to race, but it gains it through repeated use in different contexts. Your brain doesn’t necessarily encode the different contexts in separate ways. You may be able to think it through at a high level of rationality in a debate, but not when you’re out on the street going about your day.

The solution may not be to change the language, though. There are too many longstanding cultural associations with black = evil, and there’s just no way to get rid of them all.

https://www.scientificamerican.com/article/the-bad-is-black-effect/

“Although psychologists have known for a long time that people associate dark skin with negative personality traits, this research shows that the reverse is also true: when we hear about an evil act, we are more likely to believe it was done by someone with darker skin. This “bad is black” effect may have its roots in our deep-seated human tendency to associate darkness with wickedness. Across time and cultures, we tend to portray villains as more likely to be active during nighttime and to don black clothing. Similarly, our heroes are often associated with daytime and lighter colors. These mental associations between color and morality may negatively bias us against people with darker skin tones. If this is true, it has far-reaching implications for our justice system. For example, eye witnesses to crimes may be more likely to falsely identify suspects who possess darker skin.”

“Overall, the “bad is black” effect only underscores the importance of finding ways to combat the various ways that our inherent biases can influence perceptions of guilt and innocence. Understanding the extent of these biases, as well as what may be causing them, represents an important first step.”


Yeah, that’s kinda what my GP post was getting at. But it’s all managed by corporations, not individuals.



“Problem is, most organizations don’t know how to properly architect for and integrate microservice architectures into their environments and work process.”

Which is exactly the same thing said about OOP, or structured programming, or Agile, or a number of other paradigms. Organizations aren’t doing it right, but it works if it’s “properly architected”. Yet, this “proper architecture” doesn’t seem to exist anywhere in the real world.

In fact, I’ve argued before that if OOP is fundamentally about objects sending messages to each other, then microservices are just an approach to OOP where the communication channel is a network socket.

If you understand things like encapsulation and message passing, then the difference doesn’t come down to microservices vs. monolith. It’s self-contained components vs. not, and the method of passing messages is only important from a feasibility and performance standpoint.
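
A sketch of what I mean, with hypothetical names:

    package Inventory;
    sub new    { bless {}, shift }
    # The component's contract: take a message, return a message.
    sub handle { my ($self, $msg) = @_; return { ok => 1, item => $msg->{item} } }

    package main;
    use strict;
    use warnings;
    use JSON::PP qw(encode_json decode_json);

    my $svc = Inventory->new;

    # Monolith: message passing is a plain method call.
    my $reply = $svc->handle({ item => 42 });

    # "Microservice": the exact same component and message, but the
    # message gets serialized for a network hop (a string stands in
    # for the socket here).
    my $wire   = encode_json({ item => 42 });
    my $reply2 = $svc->handle(decode_json($wire));

Same encapsulation, same contract; only the channel changed.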


As a Perl dev, I wouldn’t recommend it at all at this point.

20 years ago, it had the best publicly available repository of libraries with CPAN. That’s been long surpassed now.

10 years ago, it still had a fantastic community of highly concentrated competence. A lot of those people are gone, and the ones left behind are some of the more toxic ones.

It’s going to stay a long tail language like COBOL where it never really dies. If you have experience in it, you can make a lot of money at it, and will for a long time to come, but it’s hard to break into.

My company is moving towards Elixir, which I like a lot. Realistically, I’m at least 20 years from retirement, and I expect the Perl platform to be around in some capacity by then. I might even be the one to shut the final lights off on it as I walk out the door.


Setting up a web of trust could cut out almost all spam. Of course, getting most people to manage their trust in a network is difficult, to say the least. The only other solution has been walled gardens like Facebook or Discord, and I don’t have to tell anyone around here about the problems with those.


Not really, I just have trust issues with my ISP, and I’m willing to spend three bucks a month to work around them.


I’m hoping my makerspace will be able to do something like that in the future. We’d need funding for a much bigger internet connection, at least three full-time systems people paid market wages and benefits (three because they deserve to go on vacation while we maintain a reasonable level of reliability), and also space for a couple of server racks. Equipment itself is pretty cheap (there are tons of used servers on eBay), but monthly costs are not.

It’s a lot, but I think we could pull it off a few years from now if we can find the right funding sources. Hopefully it can be self-funding in the long run with reasonable monthly fees.


IIRC, it’s nearly impossible to self-host email anymore unless you have a long-established domain already. Gmail will tend to mark you as spam if you’re sending from a new domain. Since they dominate email, you’re stuck with their rules. The only way to get on the good-boy list is to host on Google Workspace or another established service like Protonmail.

That’s on top of the fact that correctly configuring an email server has always been a PITA. More so if you want to avoid being a spam gateway.
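
Even the DNS side alone is a minefield. A sketch of the bare minimum records a new sender needs before the big providers will even consider trusting it (hypothetical domain and selector):

    ; SPF: which hosts may send mail for the domain
    example.com.                      IN TXT "v=spf1 mx -all"
    ; DKIM: public key for message signatures
    mail._domainkey.example.com.      IN TXT "v=DKIM1; k=rsa; p=<public key here>"
    ; DMARC: what receivers should do with failures
    _dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

Get any of it wrong and your mail silently disappears into spam folders.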

We need something better than email.


I agree, and I think there are some reliability arguments for certain services, too.

I’ve been using self-hosted Bitwarden. That’s something I really want to be reliable anywhere I happen to be. I don’t want to rely on my home Internet connection always being up and dynamic DNS always matching. An AWS instance or something similar that can handle Bitwarden would be around $20/month (it’s kinda heavy on RAM). Bitwarden’s own hosting is only $3.33/month for a family plan.

Yes, Bitwarden can work with its local cache only, but I don’t like not being able to sync everything. It’s potentially too important to leave to a residential-level Internet connection.


Some send table scraps to bigger organizations, like the Apache Foundation. The millions of small projects that they depend on get shit.


Clearly, the solution is to write another layer of abstraction to unite them all.



http://www.quadibloc.com/arch/sriscint.htm

The RISC architecture contains several common elements. Some of them are no longer present in most chips that still call themselves RISC:

  • All instructions execute in a single cycle.
  • Floating-point operations, specifically, are therefore excluded.

But most of the defining characteristics of RISC do remain in force:

  • All instructions occupy the same amount of space in memory.
  • Only load, store, and jump instructions directly address memory. Calculations are performed only between operands in registers.

https://groups.google.com/g/comp.arch/c/IZP5KUJprHw?pli=1

MOST RISCs:
3a) Have 1 size of instruction in an instruction stream
3b) And that size is 4 bytes
3c) Have a handful (1-4) of addressing modes (* it is VERY hard to count these things; will discuss later).
3d) Have NO indirect addressing in any form (i.e., where you need one memory access to get the address of another operand in memory)
4a) Have NO operations that combine load/store with arithmetic, i.e., like add from memory, or add to memory. (note: this means especially avoiding operations that use the value of a load as input to an ALU operation, especially when that operation can cause an exception. Loads/stores with address modification can often be OK as they don’t have some of the bad effects)
4b) Have no more than 1 memory-addressed operand per instruction
5a) Do NOT support arbitrary alignment of data for loads/stores
5b) Use an MMU for a data address no more than once per instruction
6a) Have >=5 bits per integer register specifier
6b) Have >= 4 bits per FP register specifier

Note that none of this has to do with reducing the number of instructions, which is what people tend to think of when they hear the name.


There were XML DOM accelerators for a while. Might still be out there.


No, that’s not what RISC is about. There were some early attempts to keep the number of instructions low–originally, ARM didn’t have a multiply instruction, and there’s still a bunch of microcontrollers you can buy that don’t have a divide instruction–but it was quickly abandoned as it’s just not that useful. It only holds back instructions that optimize common cases. Your compiler can implement multiplication by doing addition in a loop, but that’s not very efficient.

What really worked was keeping memory access separate from computation. You don’t have an ADD instruction that can fetch from either registers or main memory. You have a load instruction (MOV, or LDR on ARM) that fetches from memory into a register, and you have an ADD instruction that works only on registers.
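
As a sketch (illustrative assembly fragments, not a complete program):

    ; CISC style (x86, Intel syntax): one instruction reads memory and adds
    add eax, [rdi]

    ; RISC style (ARM): memory access and arithmetic are separate
    LDR r1, [r0]        ; load the value at the address held in r0
    ADD r2, r2, r1      ; add registers only; never touches memory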

ARM still does this just fine.