
Or just :set mouse=a if your terminal emulator was updated in the past decade. gVim has nothing to offer anymore, except that it bundles its own weird terminal emulator that doesn’t inherit any of the fonts, themes, settings or shortcuts of one’s default terminal. Blegh.
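For reference, this is all it takes (a minimal sketch, assuming a reasonably recent Vim and terminal):

```bash
# Enable mouse support in all modes (normal, visual, insert, command):
echo 'set mouse=a' >> ~/.vimrc
```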

Also if you’re not going to leverage Vim’s main feature and just want to click around on stuff, just install VSCod(e|ium), which is genuinely amazingly good.


The studios! Think of the studios! Their execs couldn’t live off merch sales and shitty reboots anymore! They might even have to - gasp - develop original IP if they want to milk an exclusive license. Some other execs would make money off some of last century’s licenses! The horror! The tragedy!

That can’t be. Clearly the best thing about Indiana Jones and Jurassic Park is the death grip the studios have on those IPs. Ever since Steamboat Willie fell into the public domain I’ve been unable to enjoy the Disney Classics. All joy has been snuffed out from my life.


I wonder how many terrorist (and “terrorist”) plots were foiled thanks to compromised Telegram messages. How many Ukrainian airstrikes were called in from similar sources. My gut says a whole lot more than people think. Since nothing is end-to-end encrypted, one backdoor is all the NSA needs to read everyone’s group messages. Like the much lamer version of Crypto AG, because in this case it’s an open secret.


There are good sides to DST, such as coming home “earlier” (by the sun clock but not by the social clock) from school or work and therefore having more hours of daylight during the free time after work. These positive effects may go beyond subjective feelings. A study has shown for example that activity increases with longer evening daylight (Goodman et al., 2014) – albeit with small biological effect sizes (≈6% difference in the daily activity between the Standard Time of the year and DST, adjusted for photoperiod). Interestingly these results of the above study were culture-specific: a significant increase was mainly observed in Europe and to some extent in Australia, while no significant effects or even slightly negative effects were seen in the United States and Brazil.

Fucking duh. This is the sticking point for me, and I am disappointed that the article doesn’t mention the effect of latitude here. Very easy for muricans to say “DST is not useful” when these fuckers never get pitch-black night before 6pm or full daylight before 6am ST.

Brussels is on the same latitude as Calgary. ST robs every office worker of one hour of useful daylight. That’s it. That’s the whole argument for permanent DST. Businesses will not change their opening hours, so permanent ST means a net loss of one active hour in the day for every office worker. Permanent DST in Europe means someone working 9-6 would not have to drive home at night for 4 months of the year and could maybe even take the dog for a walk in the evening sun.


What kind of non-agile bottom-up software projects have you experienced? Bottom-up waterfall? I guess it’s possible in theory but that would be a sight to behold.

My only point is that in most situations, upper management are fools that should be left to their own devices and should never get a say in development methodologies. By definition, if upper management imposes Scrum, it’s self-defeating.

|  | Waterfall | Agile | Scrum |
| --- | --- | --- | --- |
| **Top-down** | Can be great (esp. with rigid requirements, as in fintech, safety-critical systems, or integration with traditional engineering processes with rigid schedules and feature sets) but will probably be more expensive | Bad managers trying to make up for their own lack of foresight | Can’t exist (but some companies pretend very hard) |
| **Bottom-up** | Probably can’t exist (but I haven’t seen anyone try) | Yes | Yes |

Your average tech company should be somewhere in the bottom-right, but bad managers keep trying to pull the needle upwards to justify their existence or make up for their incompetence. They still call that “Agile” (which can be true by some definitions of the word) or “Scrum” (which it isn’t, by definition).


Good software does not come out of companies without a bottom-up approach to software development. Top-down approaches are either terrible or extremely expensive.

Agile development is something we fought for at my company, not against. It’s literally impossible to fight against actual agile development, since it has to come from the workers. Agile is not Scrum, and neither is a collection of ceremonies. It’s just a framework to give agency to developers.


Scrum is not the be-all end-all, but in organizations that cannot implement scrum effectively, no system could hope to achieve anything meaningful either.

Scrum aims at empowering workers to take power away from clueless MBAs and meritless CEOs; if those refuse to play ball, the idiocracy will win every time regardless.


I’ve witnessed similar corporate screwups from the inside, so I know the greed, political games, and misaligned incentives that allow such an obviously and catastrophically badly scoped project to be pushed, dead on arrival, into production against the advice of literally anyone with a pair of eyes and any honesty.

Intellectually, I understand. Yet my heart doesn’t, because it refuses to believe the sheer amount of collective stupidity and outright malice at every level of management, consistently for years on end, required to achieve these outcomes. How anyone can sleep at night with “Product Manager for New Outlook” on their resume is beyond me.


I mean, bad programming sucks regardless of the “paradigm” (and vice-versa, mostly). But as someone whose job it often is to sift through production logs hunting for an issue in someone else’s component, at least I have a chance with OOP, because its behavior is normally predictable at compile time. So with the source and the backtrace I can pretty reasonably map the code path, even if the spaghetti is 300 calls deep.

Now where shit really hits the fan is OOP with dependency injection. There I’m back to square one, grepping through 15 libraries because my LSP has no idea where the member comes from. Ugh.


Anyone who praises FP is either a student, works primarily in academia, or otherwise never had to look at a deep stack trace in their life.

Every time a production system spits out a backtrace that’s just 15 event loop calls into a random callback, I lose 6 months of life expectancy. Then I go look at the source, and the “go to definition” of my LSP never works because WHY WOULD IT, IT’S ALL FUNCTIONAL hapi.register CALLS.

I hate it I hate it I hate it I hate it. I support UBI because the people pushing functional programming in real production systems should be reassigned to gardening duties.


Comparing Cloudflare to insurance companies is not how you’ll convince me they’re not acting like jerks lol


There are quite a few mature projects in 0.x that would cause a LOT of pain if they actually applied semver.

I am generally of the opinion that version numbers do not matter at all until the author/distributor has GUARANTEED that they do. Until then they’re worthless, including in places where semver is supposedly enforced like NPM. If I had a penny for every NPM package that broke my project after removing the package-lock.json, I could retire.
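A quick illustration of that failure mode (the package name d is hypothetical):

```bash
# With "d": "^1.2.3" declared, the lockfile pins the exact resolved version.
rm package-lock.json
npm install   # npm is now free to resolve any 1.x >= 1.2.3, e.g. d@1.9.0,
              # which "should" be compatible per semver but too often isn't
```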


You mean npm duplicates even if the two dependency versions are compatible?

By default yes, unless you use the “peer dependency” system, which isn’t the default. The naive default implementation is for every package in your node_modules to have a node_modules of its own, all the way down recursively. There are tricks nowadays to deduplicate packages with the exact same version, but not to automatically detect “compatible” versions and use those instead (in my experience nothing would work if that were the case; deleting package-lock.json causes way too many issues due to the… uh, let’s call it “brave” approach of JS devs to stability).

That couldn’t be, right? Otherwise, if you installed two packages that rely on different incompatible versions of another package, one of the two would break

Correct. This is intended behavior, which is mitigated in several ways:

  1. Correctly declaring your dependencies. If newer versions of a dependency break your package, disallow them, but that is not normally needed for minor version changes (see the sketch after this list).
  2. Focus on quality. Semver exists for a reason, and 1.2.3 should not break something built against 1.1.2. JS and NPM’s cascade of stupid implementations bred a culture of “move fast and break things”, but that’s not the norm in any other commonly used ecosystem.
  3. Linux distros almost exclusively use curated repositories, so they are (mostly) internally consistent and incompatibilities are rare and quickly fixed. A good package manager will resolve dependencies and automatically detect incompatibilities, proposing several fixes (typically abort the upgrade or uninstall one of the problematic packages)
  4. Not breaking down packages into a constellation of smaller packages. glibc6 is glibc6, not glibc_string (1.2.3) + glibc_memory (2.6.5) + glibc_fs (1.5.3) + glibc_stdio (1.9.2) + glibc_threads (6.1.0) + …
    Internally glibc6 is a bunch of modules, but they get bundled into one package specifically to simplify dependency management.
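To illustrate point 1, a minimal sketch with the hypothetical dependency d:

```bash
# "~1.2.3" accepts patch releases only (1.2.x); pin "1.2.3" exactly if even
# patches have burned you, or "^1.2.3" if you also trust minor releases.
cat > package.json <<'EOF'
{
  "dependencies": {
    "d": "~1.2.3"
  }
}
EOF
```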

Not being able to install two versions of the same package sounds restrictive, but it’s a HUGE security benefit: glibc6 (1.2.3) is vulnerable to CVE-2024-1, then updating to glibc6 (1.2.4) secures your entire system at once. With NPM though, you have to either wait for every. single. dependency on that vulnerable package down your tree to recursively update, or patch those versions yourself (at your own risk because again, small version changes often break things since developers think that NPM’s dependency model means they don’t have to actually provide stability guarantees).
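To make the contrast concrete (lodash is just an example of a widely duplicated package):

```bash
dpkg -l libc6    # one shared glibc for every program on a Debian-style system
npm ls lodash    # prints every copy of lodash vendored anywhere in the tree
```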


It’s saner, not perfect. With virtualenvs it does basically what you describe, except that it installs a separate copy of everything for every virtualenv (downloads are cached, though); that does not typically matter much since it’s not pulling in a billion dependencies.
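Roughly like this (a sketch using the standard venv module):

```bash
# Each project gets its own isolated copy of its dependencies:
python -m venv .venv
.venv/bin/pip install requests   # lands under .venv/, invisible to other projects
```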

With NPM there’s no choice but to have hundreds of duplicates installed for every project, which is not just inefficient but a security, maintainability, and auditability nightmare.


npm downloads every dependency recursively. If a depends on d (= 1.2.3) and b depends on d (= 1.2.4), then both versions of d get downloaded into a and b’s respective node_modules.

All other package managers I’m aware of resolve dependencies into a flat list then download, and you can only have one version of the same package on your system.
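With the packages from the example above, the resulting layout looks like this:

```bash
# Both copies of d get vendored under their respective parents:
ls node_modules/a/node_modules/d   # a's private copy, d@1.2.3
ls node_modules/b/node_modules/d   # b's private copy, d@1.2.4
```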


  1. Like Python, have a large and featureful standard library such that > 80% of NPM packages are redundant. Other languages allow you to make very large projects with only a few tens of dependencies. JavaScript requires THOUSANDS.
  2. With this in place, stop with the recursive dependencies, immediately and forever. Every other package manager under the sun installs the dependencies next to each other.

I’d say pip is saner, though not by much, as its support for private registries is very bad and seems designed to facilitate supply-chain attacks. I’ve heard a lot of good things about cargo but haven’t used it enough myself to have a strong opinion.


It’s already a thing with near-zero delay. MS Teams does it (dunno about the translation) and the QSMP Minecraft server has a bunch of livestreamers from different countries who use it for realtime translation.

[EDIT: Live demo from today. Shit’s impressive.]

What actually happens is that the current sentence gets “corrected” several times as you keep speaking. It’s a bit jittery and if the word order differs significantly then the translated sentence might be a bit wonky for a few seconds, and there are a few misses but overall it works really well; at least well enough that people who don’t speak each others’ language can have a conversation in their native tongues with essentially no more delay than reading speed. I can easily follow a livestream in a foreign language with the live subtitles (which was not the case a mere 6 months ago for any language other than English).


US-defaultism has a catch: it sometimes accidentally extends to the Commonwealth. You won’t run into most of the internationalization quirks if all you’re comparing is “British English vs American English”.
[Sidebar: I notice this also when English speakers online assume that their audience at least has a vague idea of what Imperial units are, but while that is true of most native English speakers in the northern hemisphere who use feet and miles colloquially, for ESL audiences it’s almost always incorrect]

I switched from AZERTY to US QWERTY permanently, specifically to avoid all the issues of badly internationalized software: bad default bindings (e.g. common vim operations like { requiring AltGr), but also games not working at all or only partially (e.g. the number row being unbindable, or key hints naively showing as “&” and “é” instead of “1” and “2”). Surprisingly few devs understand the difference between key codes and characters, and lots of indie games straight up don’t internationalize at all and require switching layouts (good luck if there is an in-game chat).
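On Linux you can see the distinction with xev: the physical key code stays the same across layouts while the produced character changes (output below is representative, not verbatim):

```bash
xev -event keyboard
# Pressing the top-row "1" key on an AZERTY layout reports something like:
#   keycode 10 (keysym 0x26, ampersand)
# keycode 10 identifies the physical key (what games should bind), while
# "ampersand" is merely the character it produces on this layout.
```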
After getting into mechanical keyboards, the ANSI US keyboard layout has been useful as well because these are quite common. ISO mechanical keyboards are rarer, and Belgian AZERTY keycaps are borderline nonexistent.

Also, in practice I use the qwerty-fr layout, which is the US layout with a French layer on AltGr. The kicker? It’s better at writing French than the French AZERTY, which is missing a lot of letters (Ç, æ, œ, À, …). AZERTY is a terrible layout, but that’s a separate discussion.

Of course the Americans should develop properly internationalized software, but I personally know several fellow Belgians who switched to QWERTY for (some of) the reasons outlined above.


The title of the post is literally “I love my Gitea”.

The content of the meme does conflate “git” with its various frontends (like Gitea), but it’s an incredibly common conflation, so who cares?

The person I responded to then went on a weird rant about how “git by itself is distributed” which is completely irrelevant to the point since OP’s Gitea provides a whole lot more.


You’re completely missing the point. Even Gitea (much simpler than GitHub, nevermind GitLab) is much more than a git backend. It’s viewable in a browser, renders markdown, has integrated CI functionality, and so on.

Even for my meager self-host use-case, being able to view markdown docs in the browser is useful from time to time, even on my phone.

As for the things I use (a self-hosted) GitLab instance at work for… that doesn’t even scratch the surface.


Immich saves pictures on the filesystem, where they are easily picked up by all my backup solutions. My pictures also get uploaded on NextCloud before being moved to Immich’s auto-upload folder.

… Where exactly is the risk for my precious memories? The bloody thing could rm -rf /* for all I care.


EDIT: NVM I’m a goddamn idiot, Unix Time’s handling of leap seconds is moronic and makes everything I said below wrong.


Unix Time is an appropriate tool for measuring time intervals, since it does not factor in leap seconds or any astronomical phenomenon and is therefore monotonically increasing… If T1 and/or T2 are given in another format, then the conversion to an epoch time like Unix time can get very hairy, sure.

The alt-text pokes fun at the fact that, due to relativity, time moves at different speeds at astronomical scales. However, I would argue that this is irrelevant, as the comic itself talks about “Anyone who’s worked on datetime systems”, vanishingly few of whom ever have to account for relativity (the only non-research use-case being GPS, AFAIK).
While the comic is funny, if:

  • Your time source is NTP or GPS
  • “event 1” and “event 2” both happen on Earth
  • You’re reasonably confident that the system clock is functioning properly

(All of which are reasonable assumptions for any real use-case)
Then ((time_t) t2) - ((time_t) t1) is precise well within the error margin of the available tools. Expanding the problem space to take into account relativistic phenomena would be a mistake in almost every case and you’re not getting the job.
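Under those assumptions the computation really is that boring (GNU date shown):

```bash
# Epoch seconds ignore time zones, DST and calendars entirely:
t1=$(date -d '2024-06-01 12:00:00 UTC' +%s)
t2=$(date -d '2024-06-01 15:30:00 UTC' +%s)
echo $(( t2 - t1 ))   # 12600 seconds, i.e. exactly 3.5 hours
```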


I’m sure there are tools to automate some of the work, but my understanding is that in most cases modelling artists want some kind of control over the generated LODs to ensure they don’t look like shit. Removing vertices from a textured 3D object is not nearly as simple as scaling down a 2D picture, as far as I understand it: you need to avoid mismapped textures, clipping vertices, removal of the wrong details causing obvious pop-in, etc. A triangle in one place can be redundant while another triangle elsewhere is a critical detail whose absence will be obvious from a distance (for example, if you model the White House, you really want to keep the small flagpole up top at ALL levels of detail, but automated systems might remove it).

TBF part of the problem is that modern graphics cards mostly can shrug off insane amounts of geometry and badly optimized models, so management must have heard “high prio but not strictly blocking for release” and said “put it in the backlog” (aka “lmao whatever nerd I don’t care then, please focus on Marketing’s feature list happy please and thank you”).


BMW’s is alright IMO. Unorthodox in its layout but it actually works well with the center console knob thingy and has historically been low-latency even back when every other manufacturer had a 2 second delay for every action on their awful touch screens.

Of course even BMW’s system is still going to end up being an overly expensive Android Auto/Apple CarPlay launcher because what people want is Waze & Spotify, not random traffic updates from last week & DAB.


Game dev is not my wheelhouse, but from what I gather in the article the engine is supposed to do some things better, yet its headline systems (HDRP, DOTS, etc.) are still missing important features, which led to a lot of low-level re-implementations by Paradox…

However AFAIK game engines will not create LODs for you (and certainly won’t prevent you from using overly detailed models) so that part is squarely on Paradox.

At the end of the day a game engine is like any framework, it can make things a lot faster and easier but will not prevent you from shooting yourself in the foot if you don’t know what it is doing.



You don’t own songs on spotify, you merely license the temporary right to listen to them. Therefore the proper comparison is not album sales, it’s radio plays.

Now I don’t know the average rate for radio licensing, but I’m willing to bet big money that it’s absolutely nowhere near $200 for 20 people listening.

Maybe radio is as cheap as spotify, maybe not, but spotify is famously not profitable, so the labels are still to blame regardless.


The ads are really annoying if you streamhop frequently, because almost every time you switch streams you have to wait 30s–1m.

I pay for Turbo now so that’s fine, but the way it’s implemented seems really stupid to me: if you are looking for a stream to watch, you sometimes get ad after ad after ad, which can’t possibly be good for viewer retention.


They are legally obligated to show which part of the video is an ad (and contractually obligated to include a clickable link), which always leaves ad blockers a way to correlate and remove those segments (essentially skipping forward during the ad, then lying to the backend when asking for additional segments, as if the user had skipped through the video after the ad was over).

On Twitch they managed to outplay even uBlock, because the streaming is realtime and if you skip the ad segments, there’s no data to fall back to and the backend won’t send you the regular segments until the ad break is over (from what I understand). So at best you get a waiting screen instead of an ad.

However I’m not sure if it would make (financial) sense to apply a similar strategy on YouTube, as that would require preventing buffering the video until the ads have stopped playing (and wouldn’t work at all for midroll ads since the video has already been buffered at that point). Not only would this be expensive to do in the backend, but it would likely cause disproportionate buffering on low-end connections which couldn’t start loading the video while the ad is playing.


It’s not wrong to work with modern languages, but don’t pretend that you have the answer to the debate if you don’t work in a field where it applies.

Linting bash/perl is a TERRIBLE idea. Consider the following, extremely common piece of code (perl has equivalent syntax as well):

```bash
#!/bin/bash

# The indentation inside the heredoc is literal data: "test1" is indented
# with spaces and "test2" with a tab. Any linter or auto-formatter that
# normalizes whitespace silently corrupts the contents of testfile.
cat > testfile << EOF
    test1
	test2
EOF
```


OTOH if you use a modern auto-formattable language, then you can auto-format to tabs with a git hook or IDE plugin (and back for committing) if you want, so the debate doesn’t matter in that case. It goes both ways.
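For example, a pre-commit hook along these lines (a sketch, assuming the repo standard is 4-space indentation and the files are Python):

```bash
#!/bin/bash
# Convert leading tabs to spaces in staged Python files before committing;
# a smudge filter running unexpand(1) could do the reverse on checkout.
for f in $(git diff --cached --name-only --diff-filter=ACM -- '*.py'); do
    expand --initial -t 4 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    git add "$f"
done
```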


Tell me you develop with modern languages without telling me you develop with modern languages.

Try linting perl, or bash.

Like yeah if you work on a modern JS/Python/C# project, whatever, whitespace is going to be autoformatted, so the tabs vs spaces debate does not matter AT ALL.


Because other people are fucking morons and their editor doesn’t have visible whitespace enabled - or it does but they don’t give a shit.

Therefore these fucking morons have anywhere between 2 and 8 spaces-per-tab configured and will happily mash the tab key however many times is convenient for them to align their code or comments, because they don’t understand shit about fuck when it comes to alignment (or they don’t care). Now I open their file and everything is predictably misaligned. Spaces and tabs are mixed from one line to the next, and in particularly egregious cases no tab width I can locally set on the file will make it readable, because multiple different morons used different tab widths to align with tabs - sometimes within the same goddamn function or comment.

Have you ever tried to read an important technical diagram in ASCII art aligned with tabs by different people with different IDE settings? Because I have. Emphasis on tried.


So you both agree that the system fucking sucks. Fundamentally, the hoops you have to jump through to do anything are far worse than the annoyance of bad seeds on public torrents.

The counterpoint is that obscure torrents are better seeded on private trackers. If what you’re looking for is even mildly popular however, private trackers just suck.