This is the one place where SQL is a badly designed language, and you should use a frontend that forces you to write your queries in the order (table, filter, columns) for consistency:

UPDATE table_name WHERE y = $3 SET w = $1, x = $2, z = $4 RETURNING *
FROM table_name SELECT w, x, y, z
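
For example, a query-builder frontend can enforce that order at the type level. Here’s a minimal TypeScript sketch (the API is hypothetical, purely to illustrate the shape):

// A tiny builder that forces (table, filter, columns) order: each step
// returns a type that only exposes the next legal step, so calling
// .select() before .where() simply won't compile.
class From {
  constructor(private table: string) {}
  where(cond: string): Filtered {
    return new Filtered(this.table, cond);
  }
}

class Filtered {
  constructor(private table: string, private cond: string) {}
  select(...cols: string[]): string {
    return `SELECT ${cols.join(", ")} FROM ${this.table} WHERE ${this.cond}`;
  }
  set(assignments: string): string {
    return `UPDATE ${this.table} SET ${assignments} WHERE ${this.cond} RETURNING *`;
  }
}

const from = (table: string) => new From(table);

// Both statements now read in one consistent order: table, filter, columns.
console.log(from("table_name").where("y = $3").select("w", "x", "y", "z"));
console.log(from("table_name").where("y = $3").set("w = $1, x = $2, z = $4"));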

Obviously the actual programs are trivial. The question is, how are the tools supposed to be used?
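
(For reference, the programs themselves would look something like the following TS sketch; the filenames are hypothetical, and the library plus one frontend suffice to show the shape.)

// fib.ts - the library
export function fib(n: number): number {
  return n < 2 ? n : fib(n - 1) + fib(n - 2);
}

// cli.ts - one of the frontends (assumes a Node-style runtime for process.argv)
import { fib } from "./fib.js";
console.log(fib(Number(process.argv[2] ?? "10")));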

So you say to use deno? Out of all the tutorials I found telling me what tools to use, that wasn’t one of them. (I really thought this “typescript” package would be the thing I was supposed to use; I just checked again on a hot cache, and it was 1.7 seconds real time and 4.5 seconds CPU time, only 2.9 seconds if I pin everything to a single core.) And I swear that just this week I saw people saying “seriously, don’t use deno”. It also doesn’t seem to address the browser use case at all, though.

In other languages I know, I know how to write 4 files (the fib library and 3 frontends), and compile and/or execute them separately. I know how to shove all of them into a single blob with multiple entry points selected dynamically. I know how to shove just one frontend with the library into a single executable. I know how to separately compile the library and each frontend, producing 4 separate artifacts, with the library being dynamically replaceable. I even know how to leave them as loose files and execute them directly (barring things like C). I can choose between these things all in a single codebase, since there are no hard-coded project filenames.

I learned these things because I knew I wanted the ability from previous languages I’d learned, and very quickly found how the new language’s tools supported that.

I don’t have that for TS (JS itself seems to be fine, since I have yet to actually need all the polyfill spam). And every time I try to find an answer, I get something that contradicts everything I read before.

That is why I say that TS is a hopelessly immature ecosystem.


I’m not concerned about Microsoft’s involvement. TypeScript’s tooling ecosystem is immature even on its own merits.

I posted some of my concerns earlier, along with a basic problem challenge (that I can easily do in many other languages) that nobody managed to solve: https://programming.dev/comment/2734178


1. Don’t target X11 specifically these days. Yes, a lot of people still use it, or at least support it in a backward-compatible manner, but Wayland’s share is only increasing.

2. Don’t fear the use of libraries. SDL and GTK, being C-based, should both be feasible to call from assembly; at most, you might want to build a C program that dumps constants (if -dM doesn’t suffice) and struct offsets (if you don’t want to hard-code them).


True, but successfully doing dynamically-linked old-distro-test-environment deployments gets rid of the real reason people use static linking.


DNS-over-TCP (which the standard requires for all replies over 512 bytes) was unsupported prior to MUSL 1.2.4, released in May 2023. Work had begun in 2022, so I guess it wasn’t EWONTFIX at that point.

Here’s a link showing the MUSL author leaning toward still rejecting the standard-mandated feature as recently as 2020: https://www.openwall.com/lists/musl/2020/04/17/7 (“not to do fallback”)

Complaints that the differences are just about “bug-for-bug compatibility” are highly misguided when what’s missing is useful features, let alone standard-mandated ones (e.g. the whole complex-number library is still missing!).


The problem is that the application developer usually thinks they know everything about what they want from their dependencies, but they actually don’t.


The problem is that GLIBC is the only serious attempt at a libc on Linux. The only competitor that is even trying is MUSL, and until early $CURRENTYEAR it still had world-breaking, standard-violating bugs marked WONTFIX. While I can no longer name similar catastrophes, that history gives me little confidence.

There are some lovely technical things in MUSL, but a GLIBC alternative it really is not.


That’s misleading though, since it only cares about one side, and ignores e.g. the much faster development speed that dynamic linking can provide.


Only if the library is completely shitty and breaks between minor versions.

If the library is that bad, it’s a strong sign you should avoid it entirely since it can’t be relied on to do its job.


Some languages don’t even support linking at all. Interpreted languages often dispatch everything by name without any relocations, which is obviously horrible. And some compiled languages only support translating the whole program (or at least, the whole binary - looking at you, Rust!) at once. Do note that “static linking” has shades of meaning: it applies to “link multiple objects into a binary”, but often that is excluded from the discussion in favor of just “use a .a instead of a .so”.

Dynamic linking supports a much faster development cycle than static linking (which in turn is faster than whole-binary-at-once translation), at the cost of slightly slower runtime (but the location of that slowness can be controlled, if you actually care, and can easily be kept out of hot paths). It is of particularly high value for security updates, but we all know most developers don’t care about security, so I’m talking about annoyance instead. Some realistic numbers: dynamic linking might mean “rebuild in 0.3 seconds”, static linking “rebuild in 3 seconds”, and no linking “rebuild in 30 seconds”.

Dynamic linking is generally more resilient to long-term system changes. For example, it is impossible to run old statically-linked builds of bash 3.2 on a modern distro (something about an incompatible locale format?), whereas the dynamically-linked versions work just fine (assuming the libraries are installed, which is a reasonable assumption). Keep in mind that “just run everything in a container” isn’t a solution, because somebody has to maintain the distro inside the container.

Unfortunately, a lot of programmers lack basic competence and therefore have trouble setting up dynamic linking. If you really need frobbing, there’s nothing wrong with RPATH if you’re not setuid or similar (and even if you are, absolute root-owned paths are safe - a reasonable restriction since setuid will require more than just extracting a tarball anyway).

Even if you do use static linking, you should NEVER statically link to libc, and probably not to libstdc++ either. There are just too many things that can go wrong when you give up on the notion of a “single source of truth”. If you actually read the man pages for the tools you’re using, this is very easy to do, but a lack of such basic abilities is common among proponents of static linking.

Again, keep in mind that “just run everything in a container” isn’t a solution because somebody has to maintain the distro inside the container.

The big question these days should not be “static or dynamic linking” but “dynamic linking with or without semantic interposition?” Apple’s broken “two level namespaces” is closely related but also prevents symbol migration, and is really aimed at people who forgot to use -fvisibility=hidden.


As a practical matter it is likely to break somebody’s unit tests.

If there’s an alternative approach that you want people to use in their unit tests, go ahead and break it. If there isn’t, but you’re only doing such breakage rarely and it’s reasonable for their unit tests to be updated in a way that works with both versions of your library, do it cautiously. Otherwise, only do it if you own the universe and you hate future debuggers.


The thing is - I have probably seen hundreds of projects that use tabs for indentation … and I’ve never seen a single one without tab errors. And that’s ignoring e.g. the fact that tabs break diffs, or who knows how many other things.

Using spaces doesn’t automatically mean a lack of errors but it’s clearly easy enough that it’s commonly achieved. The most common argument against spaces seems to boil down to “my editor inserts hard tabs and I don’t know how to configure it”.


It’s solving (and facing) some very interesting problems at a technical level …

but I can’t get over the dumb decision for how IO is done. It’s $CURRENTYEAR; we have global constructors even if your platform really needs them (hint: it probably doesn’t).


Stop reinventing the wheel.

Major translation systems like gettext (especially the GNU variant) have decades of tooling built up for “merging” and all sorts of other operations.

Even if you don’t want to use their binary format at runtime, their tooling is still worth it.


The write-up is highly Windows-centric (though not irrelevant elsewhere).

One thing that is regretfully ignored in discussions of async, tasks, green threads, etc. is that they offer no support for (or even consideration of) native thread-local variables, which are reliable and efficient. If you’re lucky, you’ll get a warning saying “don’t use them”.


For an extension like this - unlike most prior extensions - you’re best off with essentially an entirely separately compiled copy of the program/library. So IFUNC is a poor fit, even with peer optimization.


I’ve done something similar. In my case it was a startup script that did something like the following (a condensed code sketch follows the list):

  • poll github using the search API for PR labels (note that this has sometimes stopped returning correct results, but …).
    • always do this once at startup
    • you might do this based on notifications; I didn’t bother, since I didn’t need rapid responsiveness. Note that you should not act directly on the specific data from a notification, though; a notification is only a signal to wake up the script.
    • but no matter what, you should do this after N minutes, since notifications can be lost.
  • perform a git fetch for your main development branch (the one you perform the real merges to) and all pull/ refs (git does not do this by default; you’ll have to set them up for your local test repo. Note that you want to refer to the unmerged commits for these)
  • if the set of commits for all tagged PRs has not changed, wait and poll again
  • reset the test repo to the most recent commit from your main development branch
  • iterate over all PRs with the appropriate label:
    • ordering notes:
      • if there are commits that have previously tested successfully, you might do them first. But still test again since the merge order could be different. This of course depends on the level of tests you’re doing.
      • if you have PRs that depend on other PRs, do them in an appropriate order (perhaps the following will suffice, or maybe you’ll have some way of detecting this). As a rule we soft-forbid this though; such PRs should have been merged early.
      • finally, ordering by PR number is probably better than ordering by last commit date
    • attempt the merge (or rebase). If a nop, log that somewhere. If not clean, skip the PR for now (and log that), but only mark this as an error if it was the first PR you’ve merged (since if there’s a conflict it could be a prior PR’s fault).
    • Run pre-build stuff that might need to create further commits, build the product, and run some quick tests. If they fail, rollback the repo to the previous merge and complain.
    • Mark the commit as apparently good. Note that this is specifically applying to commits not PRs or branch names; I admit I’ve been sloppy above.
  • perform a pre-build, build and quick test again (since we may have rolled back and have a dirty build - in fact, we might not have ended up merging anything!)
  • if you have expensive tests, run them only here (and treat this as “unexpected early exit” below). It’s presumed that separate parts of your codebase aren’t too crazily entangled, so if a particular test fails it should be “obvious” which PR is relevant. Keep in mind that I used this system for assumed viable-work-in-progress PRs.
  • kill any existing instance and launch a new instance of the product using the build from the final merged commit and begin accepting real traffic from devs and beta users.
  • users connecting to the instance should see the log
  • if the launched instance exits unexpectedly within M minutes AND we actually ended up merging anything into the known-good branch, then reset to the main development branch (and build, etc.) so that people at least have a functioning test server, but complain loudly in the MOTD when they connect to it. The condition here means that if it exits suddenly again, the whole script starts over from the top, which may be necessary if someone intentionally killed the server to force a new merge sequence but it was too soon.
    • alternatively you could try bisecting the set of PR commits or something, but I never bothered. Note that you probably can’t use git bisect for this, since you explicitly do not want to try commits from the middle of a PR. It might be simpler to whitelist or blacklist one commit at a time, but if you’re failing here, remember that all tests are unreliable.
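
Condensed into code, the core cycle might look something like the following Node/TypeScript sketch. Everything concrete here is an assumption standing in for your real setup (the refspec, the make targets, the error handling), and the polling, ordering, notification, and relaunch details from the list above are elided:

import { execSync } from "node:child_process";

const sh = (cmd: string): string => execSync(cmd, { encoding: "utf8" }).trim();

let lastSeen = "";

function mergeCycle(taggedPrs: number[]): void {
  // GitHub exposes PR heads as refs/pull/N/head; git does not fetch those by
  // default, so this refspec maps them to refs/remotes/origin/pr/N locally.
  sh("git fetch origin '+refs/heads/main:refs/remotes/origin/main'" +
     " '+refs/pull/*/head:refs/remotes/origin/pr/*'");

  // Resolve each labeled PR to its current head commit (commits, not branch names).
  const commits = taggedPrs.map((n) => sh(`git rev-parse origin/pr/${n}`));
  if (commits.join(" ") === lastSeen) return; // nothing changed: wait and poll again
  lastSeen = commits.join(" ");

  sh("git reset --hard origin/main"); // start from the real development branch
  let firstMerge = true;
  for (const commit of commits) {
    const before = sh("git rev-parse HEAD"); // remember where to roll back to
    try {
      sh(`git merge --no-edit ${commit}`);
    } catch {
      sh("git merge --abort");
      // Only the first conflict is definitely this PR's fault; later conflicts
      // may have been caused by an earlier PR in the sequence.
      console.error(`${commit}: ${firstMerge ? "merge FAILED" : "skipped (conflict)"}`);
      continue;
    }
    firstMerge = false;
    try {
      sh("make prebuild build quicktest"); // hypothetical targets
    } catch {
      sh(`git reset --hard ${before}`); // drop the bad merge (and any pre-build commits)
      console.error(`${commit}: quick tests failed, rolled back`);
    }
  }
  // Final pre-build/build/quick-test: we may have rolled back, or merged nothing.
  sh("make prebuild build quicktest");
  // ...then kill the old instance, relaunch from this commit, and expose the log.
}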

JavaScript isn’t one of my main languages, but using class avoids a few of the footguns JavaScript has: you can’t forget new, and extends/instanceof are far saner than duck typing.
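
For instance (a minimal sketch):

class Animal {
  constructor(public name: string) {}
}

class Dog extends Animal {
  speak(): string {
    return `${this.name} says woof`;
  }
}

// Forgetting `new` throws a TypeError immediately, instead of silently writing
// properties onto the global object the way old-style constructor functions
// could in sloppy mode.
const d = new Dog("Rex");

// instanceof checks the actual prototype chain rather than guessing from shape.
console.log(d.speak(), d instanceof Dog, d instanceof Animal); // "Rex says woof" true true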


and yet the very fact that you have to go out of your way to enable them means people don’t use them like they should.


The solution is quite simple though: dogfood.

Developers must test their website on a dialup connection, and on a computer with only 2GB of RAM. Use remote machines for compilation-like tasks.


It’s mostly relevant for a project you’re not familiar with (perhaps it is/was someone else’s project, or one that’s too large for any single person to be familiar with in its entirety), since it helps you figure out where a bug came from.

If you’re familiar with the entire project you usually don’t need it IME.