At some places I used to work, upper management seemed convinced that the “idea” stage was the hardest and most important part of any project, and that the easy part was everything else: planning, gathering requirements, building, testing, changing, and maintaining custom business applications against needlessly complex and ever-changing requirements.
Absolutely.
I’ve seen so many projects hindered by bad decisions made in the name of performance. Big things, like shoehorning yourself into a particular architecture, language, or tool; but also small things, like assuming the naive approach will be unacceptably slow. If you never actually measure anything, though, your assumptions are just assumptions.
Project Panama is aimed at improving the integration with native code. Not sure when it will be “done”, but changes are coming.
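To give a flavor of what’s coming: the Foreign Function & Memory API coming out of Panama lets you call C functions without writing JNI glue. Here’s a minimal sketch calling libc’s strlen; the API shifted between preview releases, so the exact names may differ on your JDK:

```java
import java.lang.foreign.Arena;
import java.lang.foreign.FunctionDescriptor;
import java.lang.foreign.Linker;
import java.lang.foreign.MemorySegment;
import java.lang.foreign.ValueLayout;
import java.lang.invoke.MethodHandle;

public class StrlenDemo {
    public static void main(String[] args) throws Throwable {
        Linker linker = Linker.nativeLinker();
        // Look up strlen in the standard C library and describe its signature:
        // size_t strlen(const char *s)  ->  (ADDRESS) -> JAVA_LONG
        MethodHandle strlen = linker.downcallHandle(
                linker.defaultLookup().find("strlen").orElseThrow(),
                FunctionDescriptor.of(ValueLayout.JAVA_LONG, ValueLayout.ADDRESS));
        try (Arena arena = Arena.ofConfined()) {
            // Copy a Java string into native memory as a NUL-terminated C string.
            MemorySegment cString = arena.allocateFrom("hello from Java");
            System.out.println((long) strlen.invokeExact(cString)); // prints 15
        }
    }
}
```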
Wow, that looks really nice!
I use Lua for PICO-8 stuff and it works well enough, but certain parts are just needlessly clumsy to me.
Looks like TIC-80 supports Wren. Might have to give that a try sometime!
Yep, absolutely.
In another project, I had some throwaway code where I used a naive approach that was easy to understand and validate. I assumed that once we’d confirmed the results were right, I’d have to replace it because it would be too slow.
Turns out it wasn’t a bottleneck at all. It was my first time using Java streams on a relatively large volume of data (~10k items), and they were damn fast in this case. I probably could have optimized it to be faster, but given their simplicity and speed, I ended up using streams everywhere in that project.
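For a sense of the shape I mean (a hypothetical stand-in, not the actual code):

```java
import java.util.List;

public class NaiveStreams {
    record Reading(String sensor, double value) {}

    // The "naive" version: a plain stream pipeline, trivial to read and verify.
    static double averageAbove(List<Reading> readings, double threshold) {
        return readings.stream()
                .filter(r -> r.value() > threshold)
                .mapToDouble(Reading::value)
                .average()
                .orElse(0.0);
    }

    public static void main(String[] args) {
        List<Reading> data = List.of(new Reading("a", 1.5), new Reading("b", 3.0));
        System.out.println(averageAbove(data, 1.0)); // 2.25
    }
}
```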
I’ve got so many more stories about bad optimizations. I guess I’ll pick one of those.
There was an infamous (and critical) internal application somewhere I used to work. It took in a ton of data, put it in the database, and then ran a ton of updates to populate various fields and states. It was something like this (a made-up reconstruction with invented table names; the real thing was far longer):
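```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import javax.sql.DataSource;

class LegacyPopulator {
    // Hypothetical reconstruction; the real thing ran hundreds of statements.
    static void populateEverything(DataSource db) throws SQLException {
        try (Connection conn = db.getConnection();
             Statement stmt = conn.createStatement()) {
            // Each UPDATE encodes one business rule, and later rules
            // silently depend on the results of earlier ones.
            stmt.executeUpdate("UPDATE orders SET status = 'NEW' WHERE status IS NULL");
            stmt.executeUpdate("UPDATE orders o SET region = r.name "
                    + "FROM regions r WHERE o.zip = r.zip");
            stmt.executeUpdate("UPDATE orders SET priority = 1 "
                    + "WHERE status = 'NEW' AND region = 'EAST'");
            // ...and so on, for pages and pages
        }
    }
}
```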
It was an unreadable mess, and trying to debug it was awful. Business rules encoded as a chain of SQL updates are incredibly hard to reason about. Like, how did this row end up with that data??
A coworker and I eventually inherited the mess. Once we deciphered exactly what the rules were and realized they weren’t actually that complicated, we changed the architecture to something like this (again a toy paraphrase, not the real code):
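```java
import java.util.List;
import java.util.Map;

class RulesEngine {
    // Toy version of the replacement: each rule is plain, unit-testable code.
    record Order(String zip, String status, String region, int priority) {}

    static Order applyRules(Order o, Map<String, String> regionByZip) {
        String status = o.status() == null ? "NEW" : o.status();        // rule 1
        String region = regionByZip.getOrDefault(o.zip(), o.region());  // rule 2
        int priority = "NEW".equals(status) && "EAST".equals(region)
                ? 1 : o.priority();                                     // rule 3
        return new Order(o.zip(), status, region, priority);
    }

    // Read rows, apply the rules in one place, write the results back in one
    // batch, instead of mutating state through a chain of interdependent UPDATEs.
    static List<Order> process(List<Order> rows, Map<String, String> regionByZip) {
        return rows.stream().map(o -> applyRules(o, regionByZip)).toList();
    }
}
```

Suddenly, “how did this row end up with that data?” was a question you could answer with a unit test and a debugger.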
I don’t remember the exact performance impact, but it wasn’t markedly faster or slower than the previous “fast” SQL-based approach. We found and fixed numerous bugs along the way, and when new issues came up, they could be fixed in hours rather than days or weeks.
A few words of caution: Don’t assume that building things with a certain tech or architecture will absolutely be “too slow”. Always favor building things in a way that can be understood. Jumping to the wrong tool “because it’s fast” is a terrible idea.
This is a very strange article to me.
Do some tasks run slower today than they did in the past? Sure. Are there some that run slower without a good reason? Sure.
But the whole article just kind of complains. It never acknowledges that many things are better than they used to be. It also just glosses over the complexities and tradeoffs people have to make in the real world.
Like this:
> Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.
I don’t know what exactly is involved in Windows updates, but it’s likely 1) a lot of data unpacking, 2) a lot of file patching, and 3) done in a way that hopefully won’t bork your system if something goes wrong.
Sure, reinstalling from scratch is probably faster, but it’s also a much simpler job, since it doesn’t have to preserve anything already on the machine. If your doctor told you, “The cancer is likely curable. Here’s the best regimen to get you there over the next year”, it would be insane to reply, “A YEAR!? I COULD MAKE A WHOLE NEW HUMAN IN A YEAR!” But I feel like the article is doing exactly that, over and over.
I’ve found pair programming to be fantastic, but only when used sparingly, in certain situations.
If you’ve got a good rapport with a teammate and a legacy project neither of you understands has landed in your lap, I’ve found pair programming to be the fastest way to figure out how it works. As you work together, you’ll each come to understand different parts of it and be able to quickly figure out what’s going on. I guess this is less pair programming and more collaborative dissection of code, though.
I’ve also used it to help onboard people onto a project fairly quickly. This is trickier and much less comfortable, so I do my best to stay in tune with what they need from me; it usually ends up being me coding for a bit while they follow along and ask questions. Eventually they want to start writing some code themselves, and a day or so of switching back and forth is as far as I’ve usually taken it. I think it’s useful for breaking down barriers when working with (especially new) people, making mistakes in front of them, and building good rapport.
I’ve not used it much beyond those situations, but I’d definitely use it in these ways again.
Having used PHP and Java extensively in my career, it’s always entertaining to read what people think about these languages.