
I love containers for this use case.

They let you install and test pretty much anything you want, and if it doesn’t go well… just rebuild the container and start again. Rebuilding a container takes about 5 seconds and fixes problems that would take 5 weeks of headaches if you made the same mistake on your main operating system.

If apt-get install wants to pull in a bunch of dependencies you’re not sure about, oh well, give it a try and see how it goes. That’s definitely not an approach you can use safely outside of a container.
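
For example, a throwaway sketch (the base image and package name are placeholders for whatever you’re experimenting with):

FROM debian:bookworm
# Try the package and every dependency it drags in...
RUN apt-get update && apt-get install -y some-package-you-are-unsure-about
# ...and if it wrecks the system, just `docker build` again from a clean slate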

Another benefit of containers is you can have two computers (e.g. a desktop and a laptop) and easily share the exact same container between both of them.

Personally I use Docker, because there are rich tools available for it and it’s what everyone else I work with uses. I can’t speak to whether or not Incus is better, as I’ve never used it.


I don’t know a good template, but whatever you choose make sure it uses Markdown for the post format. Markdown was originally designed for exactly your use case. The Daring Fireball blog has been using Markdown for 20 years now.

There are variants of Markdown, and I’d go with GitHub Flavored Markdown, which has all the features you require and quite a few improvements over the original spec:
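
For example, GFM adds tables, task lists, strikethrough, and autolinks on top of the original syntax:

| Feature    | In original Markdown? |
| ---------- | --------------------- |
| Tables     | No                    |
| Task lists | No                    |

- [x] write the post
- [ ] publish it

~~strikethrough~~ works, and bare URLs like https://example.com become links automatically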


For the few things it can’t do, like embedding graphs: Markdown is a superset of HTML, meaning that arbitrary HTML is valid Markdown. You could, for example, embed a chart with D3.js.

Personally I would also use GitHub as my distribution method. Write your posts in any text editor, push to GitHub, and a GitHub Actions workflow regenerates the HTML and publishes your site.
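
A minimal workflow sketch (.github/workflows/publish.yml; the build script is a placeholder, and the publish step depends on where you host):

name: publish
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Swap in your static site generator of choice
      - run: ./build.sh
      # Then add a deploy step for GitHub Pages, S3, etc.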

That approach will work well and if it ever stops working well you can easily move part of your system to something else without reinventing the entire thing.


Compare the success of the programming communities here. Most are empty, with no posts and no activity.

Lemmy also lacks the ability to edit someone else’s post. The best answers (and even the best questions) on Stack Overflow had multiple authors. It’s very rare to find one person who can comprehensively understand a problem, but several people can do that.

Distinct posts by several people can never be as good as a single collaboratively edited post. There’s too much repetition, too many stale posts that are out of date or have errors the author never came back to fix, etc.


Yep - I’m also looking for the same thing. Stack Overflow used to be the most amazing website on the internet… it’s not that anymore.

LLMs are working pretty well for me and I’m not sure if anything like Stack Overflow will ever exist again. That ship has sailed in my opinion.


My experience is that not much flies under the radar anymore. This stuff is heavily automated, and even legitimate content is often accused of infringing. I’ve stopped buying stock photos, for example, because using them is likely to get you accused of copyright infringement, and proving you purchased a license is far more effort than they’re worth.

https://github.com/github/transparency/blob/main/data/dmca/dmca_takedowns_by_month.csv

That’s their DMCA takedown report (there is also a “transparency center” with pretty charts)… hundreds of takedowns every month, with some of them fought and reinstated. I’d bet smaller sites don’t have any reinstatements. It was last updated 4 months ago; hopefully another update is coming soon.

It’s definitely a good idea to have eggs in other baskets, but there’s a pretty good chance all of them will be taken down at once, and GitHub seems more likely than most to restore your content if you have a defensible case.


Forgejo looks good, but I don’t see any support for Gantt charts?

Kanban works well for active work, but sometimes you need to step back and look at the longer-term plan as well.


Do you really think a smaller service will do a better job defending against Nintendo?

I wish there were more options too but I don’t see GitHub as being at fault here. The law is pretty clear on takedown notices and defending Fair Use claims is horrifically expensive.


Find someone else’s open source mod and try to change how it works.


Java’s not my favourite language either, but the only “nice” language on his list is C#, and particularly if he was using it in a .NET web context, it’s got a steep learning curve:

var builder = WebApplication.CreateBuilder(args);  // set up the web host
var app = builder.Build();

app.MapGet("/", () => "Hello World!");  // a closure registered as the "/" route handler

app.Run();  // start the run loop and block

Working with closures and run loops is a pretty rough starting point compared to languages where you start with just print("Hello World"). Those concepts are relatively simple for someone experienced, but a beginner can easily hit a brick wall they can’t climb over.


Because GTK is designed for GUI software, and this is a text editor. Almost everything is text - it’s got more in common with Vim than Gedit.


VSCode has way more features than Vim. Including the ability to run Vim inside the IDE. Or Emacs.


These two code blocks don’t use standard libraries (aside from printing output) and have nothing in common. Even the output is totally different, since at the time JavaScript didn’t support text output at all (there was no browser console). They’re as close as you can possibly get between the two languages, and they’re still not close at all: in the 90s you couldn’t define “real” classes in JavaScript, and to this day a function can’t have instance variables in Java.

And as someone who’s been writing JavaScript professionally for 20 years, I assure you it’s full of quirks that still confuse the heck out of me at times. Just last week I hit a variable-scope problem that took me three hours to track down. I’m sure some people are more familiar with it, but I’m not one of them… probably because I avoid the language as much as I possibly can and try to make it behave like “any other language”, even though it definitely isn’t that.

Java:

public class Animal {
    private String name;

    public Animal(String name) {
        this.name = name;
    }

    public String getName() {
        return this.name;
    }
}

public class Application {
    public static void main(String[] args) {
        Animal myAnimal = new Animal("Spike");
        System.out.println(myAnimal.getName());
    }
}

JavaScript:

function Animal(name) {
    this.name = name;
}

Animal.prototype.getName = function() {
    return this.name;
};

var myAnimal = new Animal("Spike");
alert(myAnimal.getName());

There was that one guy who developed JavaScript

Bullshit. Netscape was a big company with large development teams, and lots of people worked on JavaScript. Obviously there was a project lead (Brendan Eich), and in the beginning he did most of the work, but a project that big doesn’t get done by one person, and by the end he would have been doing less than 1% of the work.

he decided on the name of JavaScript

The legal teams at Netscape and Sun Microsystems negotiated the JavaScript name as part of a deal where Netscape would “only” pay millions of dollars per month to license Java as long as they didn’t include any other programming languages in the browser. JavaScript wasn’t a separate language, we promise.


They clearly meant JavaScript to be to Java what AWK is to C

No. From interviews with the people who created JavaScript, what actually happened is that they invented an awesome new language, and the boss had just signed a contract to integrate Java into Netscape.

That contract specifically banned Netscape from supporting anything other than Java… but the new language was so awesome they didn’t want to kill it. The compromise was to call it “JavaScript” and insist it wasn’t a new language, just a lightweight version of Java. Clearly that was bullshit and they all knew it; they just didn’t admit it publicly until decades later.


Pointers suck in C++. In many other languages every variable is effectively a pointer, and it works well, with far fewer memory bugs and great performance.

Pass by value often uses too much memory, especially if you have a bunch of simultaneous functions/threads that all need access to the same value at once. You can get away with it when your data is a few dozen integers, but when you’re working with gigabytes of media you need pointers. Some of the values I work with are so large they don’t fit in RAM at all, let alone two or three copies of them; pass by value could mean writing a hundred gigabytes to swap.
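
A quick JavaScript sketch of what “every variable is a pointer” buys you (the 100 MB size is just for illustration):

// Allocate ~100 MB once
const media = new Uint8Array(100 * 1024 * 1024);

function checksum(buf) {
    // buf is a reference to the same underlying memory, not a copy
    let sum = 0;
    for (let i = 0; i < buf.length; i++) sum = (sum + buf[i]) >>> 0;
    return sum;
}

// No 100 MB copy happens here; only a reference is passed
console.log(checksum(media));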


A better approach is the one Apple uses with Swift (and before that, Objective-C… though that wasn’t memory safe).

In Swift the compiler writes virtually all of your memory management code for you, and you can write a bit of code (or annotate things) for the rare edge cases where you need memory management to do something other than the default behaviour.

There’s no garbage collection, but 99.999% of your code looks exactly like a garbage collected language. And there’s no performance penalty either… in fact it tends to be faster, because compiler engineers are better at memory management than run-of-the-mill coders.


If you want memory-safe, don’t write C/C++.

Fixed that for you. There’s no situation where you want buffer overruns.


CalDAV sucks and ActivityPub is really good. I would probably implement both: use CalDAV for compatibility with other software and ActivityPub for your own internal representation of the data.

But there’s another option, also an industry standard, which is structurally very similar to ActivityPub except it provides more than just a network protocol to transmit data: it also provides a storage format and tools to work with it.

I am of course talking about git.

Put all your tasks/etc in a directory and use git to track and sync/backup/share changes. If you want a kanban board, just have todo/in progress/code review/done directories and move your files between them.
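
Something like this (hypothetical file names):

git mv todo/fix-login-bug.md in-progress/
git commit -m "Start work on the login bug"
git push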

Personally I do my note taking in VSCode with an extension that automatically (and almost instantly) commits changes and pushes them to the cloud/my other devices. Plus a few other extensions like Foam, which supports linking documents/etc.


The sentence “IN NO EVENT SHALL THE AUTHORS BE LIABLE” doesn’t fly where I live. You don’t get to choose whether or not you’re liable; the law decides who is liable. This thing is about as enforceable as those sovereign citizen license plates and would get the same reaction in a courtroom.

Plenty of other commonly used licenses have the same issue, unfortunately, and the biggest nightmare (at least in my country) is laws against “misleading” or “deceptive” conduct. Telling someone you’re not liable for anything is blatantly misleading/deceptive.

Even if your software works perfectly, you’re still breaking the law… a victimless crime that would normally fly under the radar or result in a “cut that shit out” order from the court… but it’ll really hurt your case if there actually is a victim (e.g. if your software has a bad security flaw that caused real damage).

That’s why organisations with big legal teams tend to choose licenses like Apache 2.0. Ones with language like “unless required by applicable law (such as deliberate and grossly negligent acts)”.


They’re working on Linux compatibility. It’s not ready yet but it’s well along the way with about half the necessary tasks completed. Windows will be after that.


Pulsar is a fork of Atom, which was discontinued because almost everyone jumped ship to VSCode.

What does Pulsar do better than VSCode? All the features this article highlights are in VSCode too, and I can think of a bunch of features Pulsar doesn’t have. Dev containers are a big one for me: they let you install different versions of the same software depending on which project you’re working on, work on and run both versions at the same time on the same hardware, and even emulate other CPU architectures (some of the software I work with every day can’t run natively on my hardware).


For your use case I recommend working with the web: HTML/CSS for basic interface designs, and where those fall short, SVG, Canvas, or WebGL.

There are various frameworks but if you’re just starting out I wouldn’t touch those with a ten foot pole. You need to learn how these things work first, without adding complicated third party code to your environment.

You can write code that runs on the server, or client side in the browser. Most web software is a mix of both. Literally any language works well server side, but client side most people use JavaScript. (You don’t have to: you can write code in almost any language and compile it to WASM (WebAssembly)… but JavaScript has deep integration with HTML/CSS, so it’s probably the best choice.)

I’d start with w3schools.com for the absolute basics.

Web software doesn’t have to run in a browser. You probably use apps every day that use Electron - which is essentially a way to integrate web applications into your operating system (and also, a way to run web apps without an internet connection).

It’s really quite simple to get your head around. A web browser sends a text message like this (I’ve simplified it) to a server:

GET /example/page HTTP/1.1
Host: example.com

And the server responds with another text message, like this:

HTTP/1.1 200 OK
Date: Tue, 20 Feb 2024 12:00:00 GMT
Server: Apache/2.4.1 (Unix)
Last-Modified: Sat, 18 Feb 2024 12:00:00 GMT
Content-Length: 438
Content-Type: text/html; charset=UTF-8
Connection: close

<html>
<head>
  <title>An Example Page</title>
</head>
<body>
  <p>Hello, World!</p>
</body>
</html>

You can generate that response with software, or you can have it sitting as a file on the disk.
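
Here’s a minimal sketch of the “generate it with software” option, using Node’s built-in http module (no framework, just to show the text-in/text-out idea):

const http = require("http");

http.createServer((request, response) => {
    // request.url holds the path from the GET line above
    response.writeHead(200, { "Content-Type": "text/html; charset=UTF-8" });
    response.end("<html><body><p>Hello, World!</p></body></html>");
}).listen(8080);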

As someone who’s written GUI software for a couple of decades: trust me, that simple “text in, text out” approach to writing software is really, really good, especially when you just want to get something working and don’t want to spend years refining every little detail. I’m a thousand times more productive writing web software than anything else.


I would just use an iPad. You can buy them really cheap secondhand.

Even old models have excellent color quality and wide viewing angles. New(er) models have ambient light sensors that pick up not only the brightness of the room but also the color temperature of your ceiling lights (which is almost certainly different to natural sunlight through the windows), and will seamlessly adjust the picture to look “right” throughout the day if you enable that feature. You can make it even more yellow at night if you want.

They do unfortunately have a built-in battery, but the power management features are first rate: the operating system will quickly detect a permanent power source and reduce the battery’s charge level to make sure it lasts (lithium batteries don’t like being full all the time).

They use almost no power, have no fans, etc.

You can find plenty of picture frame mounts to hide the cable, and there’s a “guided access” feature (intended for kids mostly) to lock it down. You won’t be able to close the app or turn the iPad off in guided access mode.

There are plenty of picture frame apps, or you could write your own (as a website, if you’re not comfortable using Xcode). Definitely rotate the photo at least every few hours, or the display will burn in. And I wouldn’t put it in the sun: you want the display running at less than full brightness to further extend the life of the device. Additional customisation is available via the “Shortcuts” app, which is a visual scripting/automation tool.

You could also choose not to run any app at all, and just leave the iPad on the regular home screen with a rotating wallpaper, some widgets showing the weather/etc, and maybe a couple apps to do home automation or whatever.


Well, no. That’s just plain wrong. There is only a certain amount of demand for software, like for every other product or service. That’s literally Economics 101.

But that demand isn’t going anywhere. A company with good profits is always going to be willing to re-invest a percentage of those profits in better software. A new company starting out is always going to invest whatever amount of risk they can tolerate on brand new software written from scratch.

That money won’t be spent on AI, because AI is practically free. It will always be spent on humans doing the work to create software. Maybe the work looks a bit different, as in “computer, make that button green” vs button.color = 'green', but it’s still work that needs to be done, and honestly it’s not that big of an efficiency gain. It’s certainly not as big as the jump decades ago from punch cards to typing code on a keyboard, and that jump didn’t result in layoffs; we have far more programmers now than we did then.

If productivity improves, if anything that will mean more job opportunities: it lowers the barrier to entry, enabling new projects that weren’t financially viable in the past.


AI is a tool for coders to use and it will never make coders obsolete. As someone trying to enter the industry, my advice is lean into it and use AI as a learning tool.

Having said that, it is pretty hard to find a job in the industry right now, due to all the layoffs. Those layoffs are related to COVID, not AI, so it should be temporary… but in the meantime you’re likely to be competing for jobs with people who have decades of experience.

I believe there is still a shortage of developers long term, but short term not so much.


Stage 5 can give you a huge boost in writing complex queries!

You say that like it’s a good thing. I like my queries simple.

Also, the stuff you have under “stage 6” should all be learned before “stage 2” in my opinion. Knowing how to write efficient queries is far more important than GROUP BY / JOIN / etc.


I’m an actual human. I don’t work for any AI company.

As a test I pasted OP’s broken code into an LLM and it took two seconds to find the problem, explain what the code actually does, explain what OP thinks the code should do, and write updated code that actually does that.


I’ve been trying unsuccessfully for several days to fix what must be a simple error.

That really sucks. Others have already helped out, so I won’t go there, but seriously, do yourself a favour and start using large language models. I personally pay for ChatGPT Plus, but there are free ones from other companies (not OpenAI’s free models) that could have helped you solve this in minutes instead of days.


I’ll tell you what I want: make Electron based on web standards so it can run on any rendering engine, and therefore you don’t need to bundle a browser engine with your binary. Just use whatever the operating system provides (Blink on PC/Android, WebKit on Mac/iPhone, etc).


I disagree. I think both the current state and the state it will change to should be clearly labeled.

Also, just because everyone is familiar with something doesn’t make it a good user experience. We’re used to play/pause, but it’s honestly not very good.


That’s not the convention in my country. Up is off here. And no switches have labels either.


For a serious comparison, I’m not sure I’d call that “reasonable”. A lot of use cases would very quickly exceed that drive’s wear levelling and render it unusable.


build a dynamic library with a new instantiation, then dynload it and off we go

I haven’t played around with the internals of C++ myself, but isn’t that a one-way thing? Wouldn’t you need to be able to “unload” a query after you’re done with it?

Personally I think child processes are the right approach for this. Launch a new process* for each query; it can (if you choose to go that route) dynamically load in compiled code, then exit when you’re done, at which point the dynamically loaded code is gone. A side benefit is that memory leaks are contained, since all the memory the process allocated is about to be reclaimed anyway.

(*) On most operating systems launching a new process is a bit slow, so you likely wouldn’t want to do it when the query is requested. Instead you’d maintain a pool of processes that are already running and ready to receive a query. That’s how HTTP servers are often configured to run. The size of the pool is generally limited by how much memory each process needs. Is it 1MB per process? 2GB?

Honestly, I wonder if you could just use an actual HTTP server for this? They can handle hundreds or even thousands of simultaneous requests. They can handle requests that complete in a fraction of a millisecond or ones that run for several hours. And they have good tools to catch/deal with code that segfaults, hits an endless loop, attempts to allocate terabytes of swap, etc. HTTP also has wonderful tools to load balance across multiple servers if you do need to scale to massive numbers of requests.
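
A sketch of both ideas combined, using Node’s cluster module (the query handler is hypothetical):

const cluster = require("node:cluster");
const http = require("node:http");

if (cluster.isPrimary) {
    // Pre-fork a pool of workers so they're ready before any query arrives
    for (let i = 0; i < 4; i++) cluster.fork();
    // If a worker segfaults or leaks itself to death, replace it
    cluster.on("exit", () => cluster.fork());
} else {
    http.createServer((request, response) => {
        // Run the query here; when the process exits, all of its
        // memory (and anything it dynamically loaded) goes with it
        response.end("query result\n");
    }).listen(8080); // workers share the listening socket
}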

I would also seriously consider using JavaScript instead of C++. I hate JavaScript… but modern JavaScript JIT compilers are really special: they apply compiler optimisations AT RUNTIME. A loop will compile to different machine code if it iterates three times vs three million times; the code is literally recompiled on the fly when the JIT compiler detects a tight loop. Same thing with a function that’s called over and over again: it will be inlined if inlining is appropriate.
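
A toy illustration (what the JIT does is an engine implementation detail, so the comments describe typical V8 behaviour rather than anything the code can observe):

function dot(a, b) {
    let sum = 0;
    for (let i = 0; i < a.length; i++) sum += a[i] * b[i];
    return sum;
}

const x = new Float64Array(1_000_000).fill(1.5);

// After enough iterations the engine marks dot() as hot, recompiles it
// with the optimising compiler, and typically inlines it at this call site
for (let i = 0; i < 1000; i++) dot(x, x);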

As flexible as your system sounds, I suspect runtime optimisations like that would provide real performance advantages. Well optimised C++ code is faster than JavaScript, but you’re probably not always going to generate well optimised code.

JavaScript would also eliminate entire categories of security vulnerabilities. And any time you’re generating code on the fly, you really need to be careful about those.

The good news is, if you use an HTTP server like I suggested, then you can literally use any language you want: C++, JavaScript, Python, Rust… you can decide on a case by case basis.


Programmers write parsers quite a lot

Speak for yourself. I’ve done it exactly once. It didn’t work and never shipped. I learned my lesson and always use a parser that someone else wrote, usually one maintained by thousands of people (how many have worked on JSON parsers over the years? Millions? What about UTF-8? Those are the main two I use).


These days I use ChatGPT 4, with a long running conversation where I explain what I’m trying to do, what tools I’m using, paste in sections of code that I don’t understand, asking how to change the behaviour of that code, give it error messages I’m seeing, etc.

It feels really close to pair programming with someone sitting next to me who knows the language/framework. The code it writes is often wrong but it’s close enough that I can work reasonably efficiently.

A couple of favourites from earlier today:

  1. I asked “where can I find the code that does X” and it told me to search the project for “Y” to find it.
  2. I asked it how to use a code-generation shell script bundled with the framework to do a common task, and when I explained that the answer didn’t seem to line up, it said “in that case you can’t use the script. You’ll need to write the code manually; here’s how to do that”.

Both pieces of advice were spot on and saved me hours of googling.


A better analogy is writing vs writing.

Do you know how to hold a pen and draw letters? You can write. Do you want to write a best selling novel? Yeah that’s a different skill.


Part of the investment has to be only using libraries that have type hints.

But yeah, I definitely prefer strongly typed languages. Or at least languages like Swift, where you have to jump through a few hoops to use a dynamic type: Swift has an “Any” type, but you have to write code checking what the variable contains before you can actually work with it. Basically you have to convert it to a statically typed variable before it can be touched. Thankfully there’s pretty good syntax for that, including a way to convert almost anything to a string (essential for debugging).


Programmers are not hackers. The reverse might be true but hacking is about finding problems (and exploiting them) while programming is about fixing problems.

You have to find a problem before you can fix it. All good programmers are hackers.


Currently the obvious tell is if they pitch Rust

I would amend that to “if they pitch any language”.

The best language is almost universally “whatever we already use”, or for new projects, “whatever the team is most familiar with”. That should occasionally be reconsidered, and you should definitely try out new languages, but actually switching after trying one out? That should be very, very rare.


Pkl is a hell of a lot easier to work with. Compare this Pkl code:

host: String
port: UInt16(this > 1000)

To the equivalent in JSON Schema:

{
  "$schema": "http://example.org/my-project/schema#",
  "type": "object",
  "properties": {
    "host": {
      "type": "string"
    },
    "port": {
      "type": "number",
      "minimum": 1000,
      "exclusiveMinimum": true
    }
  },
  "required": ["host", "port"]
}