Douglas Crockford, author of “JavaScript: The Good Parts,” has said:

“JavaScript is the only language in the world that people think they write without learning it first.”

I think this is a true statement (well, that and bash).



Programming and Software Engineering are related, but distinct, fields. Programming is relatively easy; Software Engineering is a bit harder and requires more discipline, in my opinion.


Writing fast unit tests will require some refactoring that could end up being pretty extensive.

For example, you mentioned “cloud storage” - if this is not already behind an interface, one ticket could be to define an interface for accessing “cloud storage” and make it so that it can be mocked for most tests, while the concrete implementation is tested directly to confirm the integration works. Try to pare that interface down to as few methods as possible, and only expose the parameters you’re actually using. You can add more later if it’s absolutely necessary.
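Roughly what that first ticket might produce, sketched in TypeScript (the names here are hypothetical, not anything from your codebase):

```typescript
// Hypothetical names, just to illustrate the shape of the refactoring.
export interface CloudStorage {
  read(key: string): Promise<Uint8Array>;
  write(key: string, data: Uint8Array): Promise<void>;
}

// Fast, in-memory fake for unit tests: no network, no credentials.
export class InMemoryStorage implements CloudStorage {
  private files = new Map<string, Uint8Array>();

  async read(key: string): Promise<Uint8Array> {
    const data = this.files.get(key);
    if (data === undefined) throw new Error(`not found: ${key}`);
    return data;
  }

  async write(key: string, data: Uint8Array): Promise<void> {
    this.files.set(key, data);
  }
}

// The real implementation (S3, GCS, whatever you actually use) also
// implements CloudStorage and is covered by a small integration test.
```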

Do this for anything that does I/O and/or is CPU intensive.

So, for tickets, I’d basically say one per refactoring.

Going forward, writing “unit tests” should not be a separate ticket; it should be factored into the estimates for the original stories, and nothing should go out without appropriate tests. The operational burden will decrease over time.

QA should have their own approach for how they want to test the application. Usually this is a suite per section of the app. If your app has an API, that will probably have a nice logical breakdown of the different areas, each of which could have its own ticket for adding QA-level test suites. The tests that developers write should only be additive and reduce the workload of QA. What you want to be sure of is that change sets are getting reviewed and through the entire pipeline without getting logjammed in any stage. Ideally, individual PRs are getting started and deployed in less than a week.

If you’re interested in more techniques, check out the book “Working Effectively with Legacy Code.” It has a lot of patterns for adding tests to existing codebases.


I don’t have “heroes” per se, but Anders Hejlsberg is up there.

Generations of programmers have benefited from his work. His focus on ergonomics in C# made the language a total powerhouse.


Just to be clear, that is not exclusive to “engineering,” as other professionals have similar legal requirements (doctors, lawyers, fiduciaries).

More generally, on a personal level, people are expected to act with integrity, and we have laws that provide them legal protections for whistleblowing.

The actual practice of engineering is about problem-solving within a set of constraints. Of course the solution should not harm the public, and there are plenty of circumstances where software is developed to that standard.

When a PE stamps a plan, they are asserting that they personally have reviewed the plan and process that created it and that it meets a standard for acceptable risk (not no risk!). That establishes the boundary of legal liability. In software, we generally do not have that process that fits in a legal framework, but that doesn’t mean that professional software engineers aren’t making those assessments for life-critical systems.

For other kinds of systems, understand that this is a new field and that it doesn’t have the bloody history that got “real engineering” to where it is today. A lot of the work product of most software engineers just doesn’t have stringent safety requirements, or we don’t understand the risks of certain product categories yet (and before you try to rebut that, remember that “building codes are written in blood” because people were applying technology before it was well-designed/understood).

Anyway, “engineering” is defined by a lot more than whether you or your boss has a stamp (and in point of fact, there are plenty of engineers in the US who work as engineers without being a PE, or without any intention of ever having the stamp. Are they real engineers?)


You should do some research on wasm.

You can run frickin’ docker containers in the browser now.

I don’t make the rules.


Rebasing and merge conflicts are the top ways that git can turn into a mess. I know that rebasing could (in some circumstances) make merge conflicts less of an issue, but I mostly just think the value of “commit grooming” is overrated. I don’t want to argue about this; if you like doing it, go ahead.


I’ve used the git cli exclusively for more than a decade, professionally. I guess it varies wildly by team, but CLIs are the only unambiguous way to communicate instructions, both for humans and computers. That being said, I still don’t mess around with rebase for anything, and I do use a gui diff tool for merge conflict resolution. Practically everything you need to do with git can be done with like 10 commands (I’m actually being generous here, including reset, stash, and tag).


Well, a couple things:

My points are related to provable advantages to doing it while writing code. They’re also not argumentative.

Your points are related to a personal preference of aesthetic while reading code. They are not provable advantages. They’re also quite “ranty,” which is rarely a persuasive way to convince someone of your position.

If you actually want to get people to change their habits around this, I think you’ll have better luck with my approach than ranting about why you don’t like how it looks.


Along with this, once you’ve dealt with enough kinds of problems, you end up developing an intuition for how something was probably implemented.

This can help you anticipate what features are probably included in a framework/library, as well as how likely they are to work efficiently/correctly (you know that XYZ is a hard problem vs. ABC which is pretty easy for a journeyman to get right.)

As an example, a friend of mine reported a performance issue to a 3rd-party vendor recently. Based on a little bit of information he had on data scale and changes the 3rd-party made to their query API, he basically could tell them that they probably didn’t have index coverage on the new fields that could be queried from the API. That’s with almost no knowledge of how the internals of their API were implemented, other than that they were using Postgres (and he was right, by the way).

That’s not always going to happen, but there are a lot of common patterns with known limitations, so you can start to anticipate this stuff after a while.


I would recommend email for this. It’s a text-based protocol, and the original RFCs 821/822 are pretty straightforward. There are some additional rabbit holes related to content encoding, but if one can implement a simple MTA, a huge amount of the magic of the internet becomes accessible.

I would not recommend trying to build a “production grade” MTA, as there are a lot of minutiae to get right, and it’s easy to screw up.
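To give a flavor of how approachable SMTP is, here’s a toy client-side exchange sketched in TypeScript on Node (the host, addresses, and message are made up, and a real client would check reply codes, handle multi-line and split replies, and use TLS):

```typescript
import * as net from "node:net";

// Toy SMTP client: walks through one scripted exchange with a local test
// server. Not production grade: no TLS, no auth, no reply-code checking.
const dialog = [
  "HELO example.test",
  "MAIL FROM:<alice@example.test>",
  "RCPT TO:<bob@example.test>",
  "DATA",
  "Subject: hello\r\n\r\nSent by a toy MTA.\r\n.", // lone "." terminates DATA
  "QUIT",
];

const socket = net.createConnection(25, "localhost");

socket.on("data", (reply) => {
  console.log("S:", reply.toString().trim()); // server speaks first (220 greeting)
  const next = dialog.shift();
  if (next !== undefined) {
    console.log("C:", next);
    socket.write(next + "\r\n");
  } else {
    socket.end(); // after QUIT's reply, hang up
  }
});
```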


I agree with the need, but not your rationale. I’m in the “always curly braces” camp for two reasons:

  • when a second line gets added to a conditional block, the braces might not get added along with it, which is a bug (see the sketch after this list).
  • one less decision to make while coding. Anything that removes trivial decision-making can speed up authoring and reading code.
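A contrived sketch of that first bullet (the names are made up):

```typescript
// Hypothetical helpers, just so the example stands alone.
declare const user: { isAdmin: boolean };
declare function grantAccess(u: { isAdmin: boolean }): void;
declare function auditLog(u: { isAdmin: boolean }): void;

// Brace-less form: fine today...
if (user.isAdmin)
  grantAccess(user);

// ...until someone adds a second line. The indentation lies, and the
// audit call now runs for every user, admin or not.
if (user.isAdmin)
  grantAccess(user);
  auditLog(user); // always executes; braces would have prevented this
```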


I believe the setting is user.email so maybe confirm that’s what you have set in both? Git will silently ignore settings that aren’t used/defined.


Your question, as best as I could tell, is that you want DNS traffic to exit through your VPS node, rather than your client machine.

I posited one reason this could be happening, and additionally pointed to a similar setup that provably routes traffic through the VPN using the method I described.

Nobody in here is obligated to help you, I gave you a couple threads to pull on to resolve your question, so maybe consider accepting it graciously, rather than being obstinate.


Of course, you have to trust that third party, which may/may not be prudent.


It’s not completely clear what you mean, but I’m guessing you’re only routing a subset of your traffic through wireguard, probably only IPv4, and there may be some IPv6 traffic that is not being routed over your wireguard connection.

You can specify any IPs you want for DNS with wireguard, and if your allowed IPs include those addresses, then it should flow over your VPN.

I do this with Pihole at home, and it blocks ads while I’m away.

With whatever test you’re running that says stuff is “leaking,” keep in mind that the website is going to report any traffic that originates from your VPS as “unprotected” because it’s not their system, and even if you run your own DNS server, it still has to query an upstream public DNS server. All they’re really doing is demonstrating which upstream DNS server you have configured, and it’s up to you whether you want your VPS’s IP to be connected to the query history of that upstream DNS provider.

You will usually need a hostname in DNS for your VPN server to make it easy to find/connect, which will use your normal DNS resolution. Once connected, if you have it set up correctly, new DNS queries should route through your VPN connection. Just keep in mind that various results can be cached on your system and in web browsers, so you should quit and reopen your browser after you connect to the VPN before you run your “leak” test.


LLMs aren’t going to give you a roadmap or prioritize concepts. They also frequently produce contradictory information.

They’re good tools if you already have some experience and vocabulary in the field, but a more structured approach to building some projects and acquiring skills is better.


In my 20 year career, I’ve never had a single position where I could ssh into my work machine from a remote location.

I would say that if you have been able to do that, it’s exceptionally rare, and there are a number of security red flags if your organization is allowing that.


And you can do the previous years of the coding challenge at any time.

I took some time off, and this was a good source of “real” problems to solve, rather than trying to write something optimized for l33tcode (which is fine… just not a good measure of typical software engineering responsibilities, IMO).


Like I said in my other comment, I think people tend to lump all of MSFT’s activities into the same bucket. DevDiv has always seemed pretty decent, and I am usually reminded of this comic when people talk about MSFT’s “shady” activities.


Everything is temporary. If we were talking about a niche language, I might worry a little bit that it could just lose momentum and die. But TS is a juggernaut. The only way typescript “dies” is if JS integrates enough of its features to make it redundant.

Besides that, if Oracle managed to allow Java to continue to grow and flourish, I have confidence that MS can do at least that well. I also think lumping all of MS’s products into the same boat is a mistake. They have been pretty good stewards of their languages for decades.


It’s necessary complexity that is easily encapsulated in methods.

If those methods are under test to verify their behavior, trivial typos can be detected instantly, without adding another dialect and more conceptual overhead to a project.

If those methods are not under test, then there’s a tiny bit of help by using a DSL if it can be compile-time checked.


I used to be fully on the ORM train. Now I’m a little less enthusiastic. What I actually think people need most of the time is something closer to ActiveRecord: something that can easily map a result set into a collection of typed objects. You still generally write parameterized SQL, but the work of translating a db decimal into the correct target type on a record object in your language is handled for you (for example). In .NET, Dapper is a good example.
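A rough TypeScript equivalent of that idea, using node-postgres (the table and column names are invented for the example):

```typescript
import { Pool } from "pg";

// The typed shape we want back from the database.
interface Invoice {
  id: number;
  total: number;   // stored as NUMERIC in Postgres, arrives as a string
  issuedAt: Date;
}

const pool = new Pool(); // connection details come from the environment

// You still write the parameterized SQL; the only "ORM-ish" part is
// mapping each row into a typed object.
async function invoicesForCustomer(customerId: number): Promise<Invoice[]> {
  const result = await pool.query(
    "SELECT id, total, issued_at FROM invoices WHERE customer_id = $1",
    [customerId]
  );
  return result.rows.map((row) => ({
    id: row.id,
    total: Number(row.total), // the decimal-to-number translation step
    issuedAt: row.issued_at,
  }));
}
```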

I also think most people overemphasize or talk about how other programmers “suck at SQL” waaayy too much.

IMO, for most situations, these are the few high-level things that devs should be vigilant about:

  • Parameterize all SQL.
  • Consider the big-O of the app-side lookup/write methods (sometimes an app-side join or pulling a larger set and filtering in memory is better than crafting very complex projections in SQL; see the sketch after this list). This is a little harder to analyze with an ORM, but not by much if you keep the mappings simple and understand the loading semantics of the ORM.
  • Understand the index coverage of queries and model table keys properly to maintain insert performance (monotonically increasing keys).
  • Stop fixating on optimizing queries that run in a few seconds, a few times a day. Optimize the stuff that runs on every transaction - if you need to.
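To illustrate the second bullet, here’s a hypothetical app-side join; the Map version avoids an accidental O(n·m) scan:

```typescript
// Invented shapes for the sake of the example.
interface Order { id: number; customerId: number }
interface Customer { id: number; name: string }

// O(n * m): re-scans every customer for every order.
function joinSlow(orders: Order[], customers: Customer[]) {
  return orders.map((o) => ({
    order: o,
    customer: customers.find((c) => c.id === o.customerId),
  }));
}

// O(n + m): build a lookup Map once, then do constant-time lookups per order.
function joinFast(orders: Order[], customers: Customer[]) {
  const byId = new Map(customers.map((c): [number, Customer] => [c.id, c]));
  return orders.map((o) => ({ order: o, customer: byId.get(o.customerId) }));
}
```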

On most of those points, if you don’t have aggregate query counts/metrics on query performance on your clusters, starting to get cute with complex queries is flying blind, and there’s no way to prioritize what to optimize.

For the vast majority of applications, simple, obvious selects that don’t involve special db features are going to do the job. When the database becomes a bottleneck, there are usually much more effective ways to handle it than trying to hand-optimize all the queries.

Lastly, I have a little bit of a theory that part of the reason people do/do not like looking at SQL in code is that it’s a hard context switch from one language to another, often requiring the programmer to switch to “stringly-typed” mode - something we all learn causes huge numbers of headaches in our first few months of programming. Some developers accept that there are going to be different languages/contexts, that not all of them are going to be as fluent or familiar, and that this is par for the job. Others recoil from the unfamiliar and want to burn it down. IMO, the former attitude is a lot more productive.


My running joke, after four different friends told me they were using ChatGPT to help them with it, is that the language is so hard to learn that we invented an entirely new class of AI to help.

It’s a joke, of course, but it does have some “surprising” syntax, since some stuff is whitespace sensitive, and there are subtle differences between () and [] and [[ ]], for example. All of that’s due to the long history of shell behavior, so I don’t necessarily blame bash.


Unicode is thoroughly underrated.

UTF-8, doubly so. One of the amazing/clever things they did was to keep ASCII as a subset by taking advantage of the unused high bit to stay backwards compatible, which is a lesson we should all learn when evolving systems with users (your chances of success are much better if you extend rather than rewrite).
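You can see that backwards compatibility directly; this runs in Node or a browser console:

```typescript
// ASCII characters encode to the same single byte they always had, with the
// high bit clear; everything else becomes a multi-byte sequence whose bytes
// all have the high bit set.
const enc = new TextEncoder();

console.log(enc.encode("A")); // Uint8Array [ 65 ]             0x41, plain ASCII
console.log(enc.encode("é")); // Uint8Array [ 195, 169 ]       0xC3 0xA9
console.log(enc.encode("€")); // Uint8Array [ 226, 130, 172 ]  0xE2 0x82 0xAC
```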

On the other hand, having dealt with UTF-7 (a very “special” email encoding), it takes a certain kind of nerd to really appreciate the nuances of encodings.


That’s interesting. Usually when I see people talking about Rust, they really like it. Are there specific parts that make it less enjoyable than go for you?


I’ve tumbled down this rabbit hole on more than one occasion.

This line of thinking can lead you to the conclusion that the only ecologically just thing to do is for humans to cease to exist.

It’s a trap that can lead to despair.

Do your part to be mindful, respectful, and conservative with resources, but don’t give in to nihilism.


Cool, I should check it out. I tend to assume that when Apple (or Google) rolls something like this out, it’s not broken in any obvious way that I would recognize right away.

But like contactless payments, which I’ve advocated my friends and family switch to, I should read up on why it’s more secure.


You can still keep password + 2FA on GitHub and Google Suite (probably anything else that’s currently implementing them), it’s just a convenience/anti-phishing feature right now.

The passkey is synced between devices if it’s kept in a password manager; I haven’t looked at the mechanism that Apple uses to sync/use it if you store it in the system keychain. I guess you could also have multiple passkeys configured for a few devices.


And they are actually more convenient, because the entire login process is one step with minimal keyboard input, rather than two.


I take what they’re saying as “don’t just give up/refuse to answer” - it’s fine to say “I don’t know, but I have a guess on how I’d start/find out” and try to work through that. In a real working environment, this is more like how it’d work, and if someone truly didn’t know where to start, usually a co-worker would try to help, which is not always how interviews go.


Which is what putting most of this stuff in the background accomplishes. It necessitates designing the UX with appropriate feedback. Sometimes you can’t make things go faster than they go - for example, a web request, or pulling data from an ancient disk a user is using. You as an author don’t have control over these; the OS doesn’t even have control over them.

Should software that depends on external resources refuse to run?

The author is talking about switching to some RTOS because of this, which is extreme. OS vendors have spent decades trying to sort out the “Beachball of Death” issue, and it is exceedingly rare on modern systems due to better multitasking support and dramatically faster hardware.

Most GUI apps are not hard RT and trying to make them so would be incredibly costly and severely limit other aspects of systems that users regularly prefer (like keeping 100 apps and browser tabs open).


I’d rather they ask me a question about something for which I’m an expert (myself) and that I can prepare for, than fire off a leetcode question.

Yeah, it’s a little bit redundant, but it can break the initial tension and get the conversation going. You can also use the time to emphasize some specific aspect of your work history that you think matches up with the job req, or shows why you actually want to work there.

If they don’t ask this question/prompt, what question would you want them to ask?


You might have a look at “CONTRIBUTING.md” files in repos.

  • Set quality standards (no giant PRs, follow documented coding style, include tests for changed functionality, etc.).
  • Establish a way to discuss contributing work before they do it. Generally, have them open an issue discussing the proposed change and get buy-in from the maintainer (you) before starting work.
  • Document any high-level goals and non-goals in the README.md for the repo, and refer to that when discussing changes. You can always amend it as you discover more about what should be built.

Initially, contributors can fork and send a pull request for you to review and merge. You do not need to give them any write access to the main repository. Be respectful of their time and review PRs promptly.

If multiple people want to collaborate on a branch, they can do that in their fork. In my experience, this is pretty rare; usually you don’t want multiple people committing to the same branch (except for merges to master/main/stable, etc).

If you have a few dedicated contributors that have a history of submitting good quality patches, and alignment with you on your project’s goals, you can invite them to have more control in the main repository, at which point there should be minimal concern about granular controls.


The problem with the article is that it conflates hard realtime and low-latency requirements. Most UIs do not require hard realtime; even soft realtime is a nice-to-have, and users will tolerate some latency.

I also think the author handwaves “too many blocking calls end up on the main thread.”

Hardly. This is like rule zero for building GUI apps: put any non-trivial or blocking work on a background thread. It was harder to do before mainstream languages got good green-thread/async support, but it’s almost trivial now.
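As a sketch of that “rule zero” in TypeScript - the names are placeholders, and it assumes parseHugeFile does its heavy lifting off the main thread (a worker or non-blocking I/O):

```typescript
// Placeholder declarations so the sketch stands alone.
interface Report { rows: number }
declare function parseHugeFile(path: string): Promise<Report>; // runs in a worker / async I/O
declare function showSpinner(): void;
declare function renderReport(r: Report): void;

async function onOpenClicked(path: string) {
  showSpinner();                            // immediate feedback on the UI thread
  const report = await parseHugeFile(path); // slow work happens elsewhere; UI stays responsive
  renderReport(report);                     // back on the UI thread once it's done
}
```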

I agree that there are still calls that could have variable response times (such as virtual memory being paged in or out), but even low-end machines are RAM-rich and SSDs are damn fast. The kernel is likely also doing some optimization to page stuff in from disk for the foreground app.

It’s nice to think through the issue, but I don’t think it’s quite as dire as the author claims.


The thread you are in and my response made it clear that the headline is clickbait because it includes that irrelevant detail.

If they didn’t include that word in the post title, it would have no traction at all.


It literally doesn’t matter. You can remove the word and the nature of the problem being discussed is still the same. What platform is being targeted has nothing to do with the example problem. Roblox is only mentioned to sensationalize it and get clicks.


It’s misleading because it’s irrelevant and makes it sound like a platform breach.

Try replacing Roblox with “Foozsplatz” and the implication of severity is completely different, even though the nature of what is being reported is unchanged.