This seems to happen quite often when programmers try to save time while writing tests, instead of writing very simple tests and letting the duplication accumulate before removing it. I understand how they feel: they see the pattern and want to skip the boring parts.

No worries. If you skip the boring parts, then much of the time you’ll be less bored, but sometimes this will happen. If you want to avoid this, then you’ll have to accept some boredom and refactor the tests later. Or maybe never, if your pattern ends up with only two or three instances. If you want to know which path is shorter before you start, then so would I. I can sometimes guess correctly. Mostly I never know, because I pick one path and stick with it, so I can never compare.

This also tends to happen when the code they’re testing has painful hardwired dependencies on expensive external resources. The “bug” in the test is a symptom of the design of the production code. Yay! You learned something! Time to roll up your sleeves and start breaking things apart… assuming that you need to change it at all. Worst case, leave a warning for the next person.
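To make “breaking things apart” concrete, here’s a minimal sketch in C (all names invented) of the usual first move: hide the expensive resource behind a small function-pointer interface so that tests can substitute a fake.

```c
#include <assert.h>

/* Hypothetical sketch: the expensive external resource sits behind
   a small function-pointer interface instead of being hardwired. */
typedef struct {
    int (*send)(void *ctx, const char *msg); /* real impl talks to the hardware */
    void *ctx;
} Transport;

/* Production code depends only on the interface, not the hardware. */
int notify(Transport *t, const char *msg) {
    return t->send(t->ctx, msg);
}

/* Test double: records the call instead of touching the real resource. */
static int fake_calls = 0;
static int fake_send(void *ctx, const char *msg) {
    (void)ctx; (void)msg;
    fake_calls++;
    return 0;
}

int main(void) {
    Transport fake = { fake_send, 0 };
    assert(notify(&fake, "ping") == 0);
    assert(fake_calls == 1);
    return 0;
}
```

The production build fills in the same struct with functions that talk to the real resource; the tests never have to.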

If you’d like a simple rule to follow, here’s one: no branching in your tests. If you think you want a branch, then split the tests into two or more tests, then write them individually, then maybe refactor to remove the duplication. It’s not a perfect rule, but it’ll take you far…
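A hypothetical illustration of the rule, in C: rather than one test that branches over its cases, write one straight-line test per case.

```c
#include <assert.h>

static int clamp(int v, int lo, int hi) { return v < lo ? lo : v > hi ? hi : v; }

/* Not this: one test with an if/else over which case it's exercising. */
/* This: one straight-line test per case.                              */
static void test_clamp_below_range(void)  { assert(clamp(-5, 0, 10) == 0); }
static void test_clamp_within_range(void) { assert(clamp(7, 0, 10) == 7); }
static void test_clamp_above_range(void)  { assert(clamp(99, 0, 10) == 10); }

int main(void) {
    test_clamp_below_range();
    test_clamp_within_range();
    test_clamp_above_range();
    return 0;
}
```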

the code they’re testing has painful hardwired dependencies on expensive external resources

I’ve told this story elsewhere, but I had a coworker who wrote an app to remote-control a baseball-throwing machine from a PDA (running WinCE). These machines cost upwards of $50K so he only very rarely had physical access to one. He loved to write tests, which did him no good when his code fired a 125 mph knuckleball a foot over a 10-year-old kid’s head. This resulted in the only occasion in my career when I had to physically restrain a client from punching a colleague.

Wow. I love that story and I’m glad nobody was hurt.

I wonder whether that happened as a result of unexpected behavior by the pitching machine or an incorrect assumption about the pitching machine in that coworker’s tests.

I find this story compelling because it illustrates the points about managing risk and the limits of testing, but it doesn’t sound like the typical story that’s obviously hyperbole and could never happen to me.

Thank you for sharing it.

It happened because the programmer changed the API from a call that accepted integer values between 0 and 32767 (minimum and maximum wheel speeds) to one that accepted float values between 0.0 and 1.0. A very reasonable change to make, but he quick-fixed all the compiler errors it produced by casting the passed integer parameters to float all through his code and then clamping the values between 0.0 and 1.0. The result was that formerly low-speed parameters (like 5000 and 6000, for example, which should have produced something like a 20 mph ball with topspin) were instead cast and clamped to 1.0: maximum speed on both throwing wheels, and the aforesaid 125 mph knuckleball. He rewrote his tests to check that passed params were indeed between 0.0 and 1.0, which was pointless since all input was clamped to that range anyway. And there was no way to really test for a “dangerous” throw anyway, since the machine was required to be capable of this sort of thing if that’s what the coach using it wanted.
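Roughly, the quick fix amounted to something like this (a reconstruction for illustration, not the actual code):

```c
#include <stdio.h>

static float clampf(float v) { return v < 0.0f ? 0.0f : v > 1.0f ? 1.0f : v; }

/* New API: wheel speeds as fractions of maximum, 0.0..1.0. */
static void set_wheel_speeds(float top, float bottom) {
    printf("top=%.2f bottom=%.2f\n", top, bottom);
}

int main(void) {
    int top = 5000, bottom = 6000;   /* old API units: 0..32767 */
    /* The quick fix: cast, then clamp. (float)5000 clamps to 1.0f,
       so a ~20 mph ball with topspin becomes both wheels at maximum. */
    set_wheel_speeds(clampf((float)top), clampf((float)bottom));
    /* The correct conversion would have been clampf(top / 32767.0f). */
    return 0;
}
```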

Yikes! That’s also a great cautionary tale for Primitive Obsession/Whole Value as well as a bunch of other design principles.

I’m thinking about how I’d have done that refactoring and now I wish I had the code base to try it on. It sounds like it would make a really good real-life exercise in a workshop. “Remember folks, you have to get this right. There’s not really a way to check this with the real hardware, and if you get it wrong, someone’s going to get hurt.”

Thanks again.

Well, I have a rule now which is “never test your shit on Little Leaguers” and nobody I’ve worked with has any idea what that means.

I don’t think I’ll forget.

This is true Customer empathy.

API from a call that accepted integer values between 0 and 32767 (minimum and maximum wheel speeds) to one that accepted float values between 0.0 and 1.0.

This would cause alarm bells to ring in my head for sure. If I did something like that I would make a new type that was definitely not implicitly castable to or from the old type. Definitely not a raw integer or float type.

That kind of code is usually written in a restricted dialect of C.

C is not a language that allows for that kind of safety practice, even in the fully-featured version.

Even in C this is possible. Just wrap the float or whatever in a struct and all implicit conversions will be gone.
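Something like this sketch (invented names), which also gives you exactly one place to put the unit conversion:

```c
#include <stdio.h>

/* Wrapping the float in a struct removes every implicit conversion. */
typedef struct { float value; } SpeedFraction;   /* intended range: 0.0..1.0 */

static SpeedFraction speed_fraction_from_raw(int raw) { /* old units: 0..32767 */
    SpeedFraction s = { raw / 32767.0f };
    return s;
}

static void set_wheel_speed(SpeedFraction s) { printf("speed=%.3f\n", s.value); }

int main(void) {
    set_wheel_speed(speed_fraction_from_raw(5000)); /* fine: explicit conversion  */
    /* set_wheel_speed(5000);    compile error: int is not a SpeedFraction   */
    /* set_wheel_speed(0.15f);   compile error: float is not a SpeedFraction */
    return 0;
}
```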

Indeed, this is a time for naming conventions that communicate the details the type system can’t clarify. This leads to the long names that senior programmers make fun of. Don’t listen to them; let them laugh, then watch them make this kind of mistake.

This leads to the long names that senior programmers make fun of.

Hmm… The notation that I’ve seen people making fun of is one where the long names encode the exact same information that C types can handle for you, and nothing else. But YMMV.

Anyway, I don’t think any naming convention can save you after somebody goes over your entire codebase converting things without care for the semantics. If you are lucky, it’s one of the lazy people who do that, and you will “only” have to revise tens of thousands of lines to fix it. If you are unlucky, the same person will helpfully adjust the names for you too.

Ah, the ol’ off-by-one-foot problem.

And then in the end we realize the most important thing was the tests we wrote along the way.

Who tests the tests?

Create tests to test the tests. Create tests to test those. Recurse to infinity.

Mutation testing is quite cool. Basically it analyzes your code and makes changes that should break something. For example, if you have if (foo) { ... }, it will remove the branch or make the branch run every time. It then runs your tests and sees if anything fails. If the tests don’t fail, then either you should add another test, or that code was truly dead and should be removed.

Of course this has lots of “false positives”. For example, you may be checking whether an allocation succeeded, and you don’t need to test that every possible allocation in your code fails; you trust that you can write if (!mem) abort() correctly.
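A hand-rolled sketch of the idea in C; real mutation-testing tools automate the mutating and the bookkeeping:

```c
#include <assert.h>

static int abs_value(int x) {
    if (x < 0) return -x;  /* a mutation tool might delete this branch */
    return x;              /* or force it to run every time            */
}

int main(void) {
    /* These two assertions alone would survive the "delete the branch"
       mutant, which tells you the branch is untested... */
    assert(abs_value(3) == 3);
    assert(abs_value(0) == 0);
    /* ...until you add a test that actually exercises it: */
    assert(abs_value(-3) == 3);
    return 0;
}
```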

Right, too much coverage is also a bad thing. It leads to having to rework silly tests every time you change some implementation detail.

Good tests let the insides of the unit change without breaking, as long as it behaves the same to the outside world.

I’ve written some tests that got complex enough that I also wrote tests for the logic within the tests.

We do that for some of the more complex business logic. We wrote libraries, which are used by our tests, and we wrote tests which test the library functions to ensure they provide correct results.

What always worries me is that WE came up with that. It wasn’t some higher-up, or business unit, or anything. Only because we cared to do our job correctly. If we didn’t, nobody would. Nobody is watching the testers (in my experience).

The Testmen?

Who tests the tests for the tests?

Unfortunately, if anyone, I do.

If you use your type system to make invalid states impossible to represent and your functions are pure, there’s less (maybe nothing) to test, which will save you from this scenario.

Nothing to test? Lol what.

```python
def add(a: int, b: int) -> int: return a * b
```

All types are correct. No side effects. Does the wrong thing.

Maybe it’s doing the right thing but is badly named

Maybe it’s the English language that is wrong?

Old and busted: Fix the function

New hotness: Redefine enough words in the English language such that the function is now correctly implemented

You can’t have any bugs if you don’t write any code.

It must be nice to work only with toy cases where this is feasible.

Nothing toy-like about using ADTs to eliminate certain cases. When all cases are handled, your tests can move from checking micro states to checking macro states. Constraint types or linear types can be used to only allow certain sizes of inputs, or to require that every file handle that gets opened is eventually closed.

Naturally, if your language’s type system is bad you can’t make these compile-time guarantees tho. Heck, a lot of developers are still using piss-poor languages with null, or where the inference sucks with any.
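Even a weak type system gives you a faint taste of “all cases handled”, though. In C, for instance, an enum plus a default-free switch (compiled with -Wswitch, which -Wall enables in gcc/clang) makes the compiler flag any unhandled case. A sketch:

```c
#include <stdio.h>

typedef enum { PITCH_FASTBALL, PITCH_CURVEBALL, PITCH_KNUCKLEBALL } PitchType;

static const char *describe(PitchType p) {
    switch (p) {                    /* no default: keeps -Wswitch useful */
    case PITCH_FASTBALL:    return "fastball";
    case PITCH_CURVEBALL:   return "curveball";
    case PITCH_KNUCKLEBALL: return "knuckleball";
    }
    return "unknown";               /* unreachable while the switch is exhaustive */
}

int main(void) {
    /* Add a new PitchType and every default-free switch gets flagged. */
    printf("%s\n", describe(PITCH_CURVEBALL));
    return 0;
}
```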

Lmao, I just had something similar

I’ve seen some interesting thoughts on TDD with its fail, pass, refactor assumptions. I’m curious whether anyone here writes a failing functional test first and then writes the code to make it pass, i.e. BDD / ATDD. That follows similar logic without the refactor assumption. I’ve seen strong opinions on every side as far as this is concerned. On teams with both Dev and QA competencies, I’ve heard a number of devs glad to get QA out of the bottleneck and put their knowledge to better use.

Depends. If I’m working in an existing system and I know what the shape of the thing I’m writing is, then I might write the test first and tdd it out as that process is usually a bit faster for me.

If I’m developing a new feature I’d probably spike out a solution and write an acceptance test to match it, then if I’m feeling pedantic I might throw away the spike code and tdd it back up from scratch but I haven’t done that in a while now.

This all depends on the language and the abstraction layer I’m at.

This is why you write the test before the code. You write the test to make sure something fails, then you write the code to make it pass. Then you repeat this until all your behaviors are captured in code. It’s called TDD

But, full marks for writing tests in the first place

That supposes you have a clear idea of what you’re going to code. Otherwise, it’s a lot of time wasted to constantly rewrite both the code and tests as you better understand how you’re going to solve the task while trying. I guess it works for narrow tasks rather than open-ended problems.

The tests help you discover what needs to be written, too. Honestly, I can’t imagine starting to write code unless I have at least a rough concept of what to write.

Maybe I’m being judgemental (I don’t mean to be), but what I’m trying to say is that, in my experience, writing tests as you code has usually led to the best outcomes and often the fastest delivery times.

Everything is made up of narrow tasks, you “just” need to break it down more :)

The only projects I’ve ever found interesting in my career were the ones where nobody had any idea yet how the problem was going to be handled, and you’re right that starting with tests is not even possible in this scenario (prototyping is what’s really important). Whenever I’ve written yet another text/email/calling/video Skype clone for yet another cable company, it’s possible to start with tests because you already know everything that’s going into it.

constantly rewrite both the code and tests as you better understand how you’re going to solve the task while trying

The tests should be decoupled from the “how” though. It’s obviously not possible to completely decouple them, but if you’re “constantly” rewriting, something is going wrong.

Brilliant talk on that topic (with slight audio problems): https://www.youtube.com/watch?v=EZ05e7EMOLM

100%. TDD is just not practicably applicable to a lot of scenarios and I wish evangelists were clearer on that detail.

You could replace “TDD” with pretty much any fixed methodology and be completely accurate.

This is the reason I dislike TDD.

TDD doesn’t imply that you write all the tests first. It just means you have to write a test before you write a line of production code.

The idea is to ask yourself, “What is the first step I need? Where am I going to begin?” You then write a test that validates this first step and fails. Then you write the code to make it pass. Once you’re done with that, you ask yourself, “What’s the next step?” Then you repeat the process for that step.

This is a process you are going to do anyway. Might as well take the time to write some tests along with it.
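A throwaway sketch of that loop in C (invented example): each assertion below started life as a failing test, and the function grew just enough code to pass it.

```c
#include <assert.h>

/* Grew one failing test at a time; currently just enough to pass. */
static int raw_to_permille(int raw) {     /* 0..32767 -> 0..1000 */
    return (raw * 1000) / 32767;
}

int main(void) {
    assert(raw_to_permille(0) == 0);          /* step 1: zero maps to zero    */
    assert(raw_to_permille(32767) == 1000);   /* step 2: full scale is 1000   */
    assert(raw_to_permille(16384) == 500);    /* step 3: midpoint rounds down */
    return 0;
}
```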

That leads to focusing on the nitty-gritty details first, building a library of things you think you might need, and forgetting to think about the whole solution.

If you come up with another solution halfway through, you will probably throw away half of the code you already built.

I see TDD as going depth first whereas I prefer to go breadth first. Try out a solution and skip the details (by mocking or assuming things). Once you have settled on the right solution you can fill in the details.

Meaning your tests were too complex.

I always name my tests too complex 🥲.

I don’t need tests when I know the output 😎

What if the output is encrypted? Or a 34d matrix?

What if the test was testing timing? Or threading? Or error handling?

LOOKS GOOD TO ME, SHIP IT

ChatGPT go brrrrrr

Bugs in tests aren’t necessarily exceptions. You could be incorrectly setting up your function inputs, or just making the wrong assertions.

I remember being asked to make unit tests. I wasn’t the programmer and for the better part of a week, they didn’t even let me look at the code. Yeah, I can make some great unit tests that’ll never fail without access to the stuff I’m supposed to test. /s

I guess it would make sense if you’re testing a public API? To make sure the documentation is sufficient and accurate.

Yeah, black-box testing is a whole thing, and it’s common when you need something to follow a spec and be compatible.

He specifically said “unit tests” though, which aren’t black-box tests by definition.

It makes sense to do it like that if you are supposed to test requirements. Depending on the testing tools you have, it might not be feasible, unfortunately.

This is why you write the tests first before the actual code.

But this does mean writing tests works.

Just don’t write tests lol, that’s what QA is for.

What do you do when QA finds a bug though? Just fix it without writing a test to ensure it doesn’t happen again?

Fix it then hire another QA

This guy’s a business major.

(He just fired half of QA)

He works for Bungie?

Hell no! I love QA; they find all the bugs I make since I don’t bother with unit tests. I think every dev team should have a 1:1 ratio of devs to testers.

Every time someone needs to change anything nontrivial, like a large feature or a refactor, do they just code away blindly, hope it doesn’t make anything explode, and then wait for QA to pat them on the back or raise any potential bugs? Does this mean your QA team makes a full product sweep for every single feature that gets merged? If that’s the case, you’d need more than 1 QA per developer. If not, you’re now stuck debugging blindly too, not knowing when the thing broke?

I worked with a team like yours at one point, and it was hell 😬 Each new feature is like poking away at a black box hoping it doesn’t explode…

On the internet, nobody knows when you’re being sarcastic.

Indeed!
