I’m not any type of lawyer, and especially not a copyright lawyer, but I’ve been informed that the point of the copyright date is to mark when the work (book, website, photo, etc.) was produced and when it was last edited. Both aspects are important, since the original date is when the copyright clock starts counting, and having it further in the past is useful for proving infringement that occurs later.
Likewise, each update to the work confers a new copyright on just the updated parts, which starts its own clock and is again useful for prosecuting infringement.
As a result, updating the copyright date is not an exercise in writing today’s year. Rather, it’s adding years to a list, compressing as needed, but never removing any years. For example, if a work was created in 2012 and updated in 2013, 2015, 2016, 2017, and 2022, the copyright date could look like:
© 2012, 2013, 2015-2017, 2022
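If the year list already lives in a site generator’s config somewhere, the bookkeeping is easy to automate. Here’s a minimal Python sketch (the function name and the convention of only collapsing runs of three or more years are my own choices, not anything official):

```python
def format_copyright_years(years):
    """Collapse sorted years into ranges; runs of three or more become a span."""
    years = sorted(set(years))
    parts = []
    i = 0
    while i < len(years):
        j = i
        while j + 1 < len(years) and years[j + 1] == years[j] + 1:
            j += 1
        if j - i >= 2:   # e.g. 2015, 2016, 2017 -> "2015-2017"
            parts.append(f"{years[i]}-{years[j]}")
        else:            # one- or two-year runs stay spelled out
            parts.extend(str(y) for y in years[i:j + 1])
        i = j + 1
    return "© " + ", ".join(parts)

print(format_copyright_years([2012, 2013, 2015, 2016, 2017, 2022]))
# -> © 2012, 2013, 2015-2017, 2022
```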
To be clear, I’m not terribly concerned with whether large, institutional copyright holders are able to effectively litigate their IP holdings. Rather, this is advice for small producers of works, like freelancers or folks hosting their own blog. In the age of AI, copyright abuse against small players is rampant, and a copyright date that is always the current year is ammunition for an AI company’s lawyer to argue that they didn’t plagiarize your work, because your work bears a date later than when they trained their models.
Not that the copyright date is wholly dispositive, but it makes clear from the get-go when a work came under copyright protection.
I vaguely recall a (probably apocryphal) story of an early washing machine-sized hard drive that lurched its way across the floor during a customer demo, eventually falling over once the connecting cables pulled taut.
That said, those hard drives did indeed move themselves: http://catb.org/jargon/html/W/walking-drives.html
The knot is non-SI but perfectly metric and actually makes sense as a nautical mile is one minute of arc along a meridian
I do admire the nautical mile for being based on something which has proven to be continually relevant (maritime navigation) as well as being brought forward to new, related fields (aeronautical navigation). And I am aware that it was redefined in SI units, so there’s no incompatibility. I’m mostly poking fun at the kN abbreviation; I agree that no one is confusing kilonewtons with knots, not unless there’s a hurricane putting a torque on a broadcasting tower…
No standard abbreviation exists for nautical miles
We can invent one: kn-h. It’s knot-hours, which is technically correct but horrific to look at. It’s like the time I came across hp-h (horsepower-hour) to measure gasoline energy. :(
if you take all those colonial unit
In defense of American national pride, I have to point out that many of these came from the Brits. Though we’re guilty of perpetuating them even after the British have given up on them haha
An inch is 25mm, and a foot an even 1/3rd of a metre while a yard is exactly one metre.
I’m a dual-capable American that can use either SI or US Customary – it’s the occupational hazard of being an engineer lol – but I went into a cold sweat thinking about all the awful things that would happen with a 25 mm inch, and even worse things with 3 ft to the meter. Like, that’s not even a multiple of 2, 5, or 10! At least let it be 40 inches to the meter. /s
There’s also other SI-adjacent strangeness such as the hectare
I like to explain to other Americans that metric is easy, using the hectare as an example. What’s a hectare? It’s about 2.47 acres. Or more relatably, it’s the average size of a Walmart supercenter, at about 107,000 sq ft.
1 hectare == 1 Walmart
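And for anyone who wants to check my arithmetic, the conversion is short (just the standard definitions, nothing exotic):

```python
# Quick sanity check of the hectare comparison above.
square_metres = 100 * 100          # a hectare is a 100 m x 100 m square
feet_per_metre = 3.28084
square_feet = square_metres * feet_per_metre ** 2
acres = square_feet / 43_560       # an acre is defined as 43,560 sq ft
print(f"{square_feet:,.0f} sq ft, {acres:.2f} acres")
# -> 107,639 sq ft, 2.47 acres
```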
I’m surprised there aren’t more suggestions which use intentionally-similar abbreviations. The American customary system is rich with abbreviations which are deceptively similar, and I think the American computer memory units should match; confusion is the name of the game. Some examples from existing units:
FYI, the Intel code used to be here (https://github.com/intel/thunderbolt-utils) but apparently was archived a week ago. So instead, the video creator posted the fork here: https://github.com/rxrbln/thunderbolt-utils
Thank you for reminding me of this: https://youtube.com/shorts/XqNrO33bxmw
For other people’s benefit and my own:
PWA: Progressive Web App
Do you recommend dns.sb?
For the modern IP (aka IPv6) folks: 2606:4700:4700::1111
Other brands of IPv6 DNS servers are available.
I’m a fan of Pelican for static blog generation from Markdown files. Separating template and content into CSS/HTML and md files, and keeping it all in a Git repo for version control, comes out to only a few hundred kilobytes. Lightweight to work on, and lightweight to deploy. It’s so uncomplicated, I could probably pick it right back up if I left it alone for ten years.
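For a sense of how little there is to it, the engine side boils down to roughly one config file plus one build command. A sketch from memory (values are placeholders; check the Pelican docs for the current option names):

```python
# pelicanconf.py -- roughly the whole "engine" side of the blog.
# Articles live as Markdown files under content/, templates and CSS in the theme.
AUTHOR = "Your Name"               # placeholder
SITENAME = "Example Blog"          # placeholder
SITEURL = ""                       # leave empty for local previews
PATH = "content"                   # where the .md articles live
TIMEZONE = "America/Chicago"       # placeholder
DEFAULT_LANG = "en"

# Building is a one-liner from the repo root:
#   pelican content -o output -s pelicanconf.py
```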
I’ve always wanted a power switch on my hot glue gun but after seeing that, I think I’m now perfectly fine with the existing situation, lest I monkey’s paw my way to an even worse implementation.
I agree with the accepted answer that a toggle-button UI – when unadorned with any other indicators – should be avoided due to the ambiguity. The fact that this question is being asked at all is an indicator that there’s no uniform consensus.
In American English, the verb “to table” means “to remove from discussion entirely”, which is almost entirely the opposite meaning from English spoken anywhere else in the world, where it means “to bring forward for discussion”. As a result of this US-specific confusion, there’s not much choice besides either clarifying through context or avoiding sentence constructions using that verb, at least when speaking to or with other Americans.
I think the same applies here: the small savings in UI space isn’t worth the inevitable UX confusion this would cause without further modifications.
As an aside, I will say that the examples from the OSM Overpass API are pretty nifty for other applications. For example, I once wanted to find the longest stretch of road within city limits that does not have a stop sign or traffic light, in order to fairly assess ebike range by running back and forth until out of battery. I knew at the time that OSM had the data, but I didn’t know it could be queried in such a way. It would have saved me some manual searching, and made it easy to broaden the search to rural roads just outside the city.
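For anyone curious what such a query looks like, here’s a rough sketch against the public Overpass endpoint. The bounding box is a placeholder, and the real longest-stretch analysis would need more post-processing than this; it just pulls the stop signs in an area:

```python
import requests

OVERPASS_URL = "https://overpass-api.de/api/interpreter"

# Fetch all stop signs (highway=stop nodes) inside a bounding box
# given as (south, west, north, east); coordinates here are placeholders.
query = """
[out:json][timeout:25];
node["highway"="stop"](44.90,-93.30,45.05,-93.10);
out body;
"""

response = requests.post(OVERPASS_URL, data={"data": query})
response.raise_for_status()
for element in response.json()["elements"]:
    print(element["id"], element["lat"], element["lon"])
```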
OSM can definitely find you a bank near a freeway ramp, but it can also find you a bank near a creek to make an inflatable boat getaway. What it can’t do is arrange for decoys to confuse the police while you escape.
The inflatable boat robber was ultimately caught and sentenced a year later.
This reminds me of a post I once saw, describing a person who (ab)used the C preprocessor to make an Old English version of C. It was clever, but obviously unmaintainable in a collaborative setting.
If this DreamBerd language is statically compiled, then it might still rank slightly above Tcl, a language I’ve had to use in production and despised every moment of.
There will always be some instructors who are more dogmatic than pragmatic. All the same, there will be instructors who have pearls of wisdom to offer. Regarding the “break” and “continue” keywords, this lies somewhere in the middle.
One of the purposes of a higher-level programming language is to move away from the low-level, machine-specific language of assembly by offering other, more descriptive constructs, like “while”, “for”, and “switch”. In C, “break” is almost mandatory in a “switch” statement but only occasionally shows up in a “for” loop, outside of things like drivers. In Python, “break” only exists in loops, but there are lots of loops which can be replaced more efficiently with comprehensions, so “break” can be a sign of poorly organized logic.
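To illustrate the Python side with a toy example (the names are made up), a search loop that needs “break” can often be collapsed into a single comprehension-style expression:

```python
# Loop-and-break version: find the first negative reading.
def first_negative_loop(readings):
    result = None
    for value in readings:
        if value < 0:
            result = value
            break
    return result

# Same logic as one expression over a generator; no break needed.
def first_negative_expr(readings):
    return next((value for value in readings if value < 0), None)

print(first_negative_loop([3, 1, -2, 5]))   # -> -2
print(first_negative_expr([3, 1, -2, 5]))   # -> -2
```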
If you can specify which programming language you’re learning, it would help to understand what your instructor might have meant to teach.
I think you asked about how to improve a few days ago, so I’ll answer now about how to start learning programming. In a lot of ways, programming is describing what you want the computer to do, but in a language it understands. So half the effort is building an intuition of how to break down a task into individual parts which the computer can work on, and the other half is to actually write the instructions for the computer.
The first part is common to all the engineering fields, but shows up elsewhere like in art (eg deconstructing a human face into drawable geometric shapes), daily life (eg navigating a car or public transit by making various left and right turns in a certain order) and other fields; familiarity with any of these will put you a step forward. Basic programming tutorials are useful in developing an awareness of what a computer can easily work on, and by exception, what it cannot.
The second part requires learning the programming language and its grammar, which I think the general curriculum of programming courses and online tutorials mostly covers. If you’re already familiar with an existing programming language, then a new language can mostly be framed as a translation from the first. Some features don’t translate at all – eg explaining Rust memory ownership to a C programmer – so those will have to be learned by rote.
I find that reading a lot of code helps. From the bad code, take note of what to avoid. From the good code, take note of what to emulate.
To be clear, it’s often more useful to read code within your specialization than any ol’ piece of code on the Internet. That said, drawing from other programming languages and a rich variety of codebases can be useful in itself.
This was an interesting read, so thanks for writing!
My background: I am an embedded software engineer by trade, and a tinkerer as one of my hobbies. I’ve played around with microcontrollers (MSP430, ATmega328P on the Arduino) and microprocessors (STM32, ARM64 on RPi) and have done a small amount of board design with KiCAD.
After reading your post, I thought about which platform is my “go to” for particular applications, and why. What I arrived at is that what each platform offers matters less than how it fits into what I want to build. That is, how integrable it is.
When I have a hobby project that just needs a SPI bus and a programmed sequence, I might reach for an MSP430 in DIP form-factor, or the Arduino with the intent to program the 328p and then extricate it to use alone for my project. The DIP format is what makes me lump these two chips together, as both are reasonably comparable but have their own unique features, like low power consumption or 5v input.
Similarly, if my project needs networking, I would definitely lean into microprocessors, but now I have to settle on the format before proceeding. Specifically, if I want to use the RPi, then perhaps my design will take the shape of a Hat. If instead I want to build around an STM32 chip, then I need to provision its support hardware. The latter is fine, but I don’t exactly trust my EE skills to do this every time lol
As a result, what I would like – as an embedded engineer – is a common microprocessor platform which can be swapped out, with a common pinout and connector. I know in the industrial space, they have standards like COM Express to do exactly this, but I’m not sure if that’s exactly the right direction to go, since those tend to be x86 based. Maybe something like the RPi Compute Module, but FOSS.
To go with it, I’d also want a common module format, same as how RPi has Hats and Arduino has Shields. Again, industry has the OCP Standard, which conveniently breaks out a PCIe interface, but there isn’t a lot of PCIe used in hobbyist work. But maybe it’s time to change that? IDK
Thanks again for writing this; it’s given me a chance to think about what I’d really like to have in the proverbial toolbox.
No objections to your answer to the OP’s question, but as a curiosity, I’m trying to figure out what the original xrealloc() function is trying to do.
So far as I can tell, it tries a normal realloc() with the requested size, but if that fails, tries again with size=1. But strangely, if that fails, it tries using the requested size a second time. And if that still fails, it tries once more with size=1.
The POSIX man page isn’t giving me any hints as to why size=1 might be special, or if this is some sort of Linux-specific behavior or workaround. I wondered if you might have some insight why this function is the way it is.
Note: I’m on mobile, so haven’t checked the Git Blame history yet.
I know this is c/programmerhumor but I’ll take a stab at the question. If I may broaden it to cover, collectively, software engineers, programmers, and (from the mainframe era) operators – though I’ll still use “programmers” for brevity – then we can find examples of all sorts of other roles being taken over by computers or subsumed into a different worker’s job description. So it shouldn’t really be surprising that the job of programmer would also be partially offloaded.
The classic example of computer-induced obsolescence is the job of typist, where a large organization would employ staff to operate typewriters, converting hand-written memos into typed documents. Helped along by the availability of word processors – no, not the software but the standalone appliance – and then the personal computer, the expectation shifted to knowledge workers typing their own documents.
If we look at some of the earliest analog computers, built to solve differential equations for things like weather and flow analysis, a small team of people was needed to operate them and interpret the results for the research staff. But nowadays, researchers are expected to crunch their own numbers, possibly aided by a statistics or data-analysis expert, but they’re still working in R or Python themselves, as opposed to handing the work to a dedicated person or team that sets up the analysis program.
In that sense, the job of setting up tasks to run on a computer – that is, the old definition of “programming” the machine – has moved to the users. But alleviating the burden on programmers isn’t always going to be viewed as obsolescence. Otherwise, we’d say that tab-complete is making human-typing obsolete lol