I’ve tried to use a debugger with React so many times, and each time I’d drop it soon after. Not useful at all.

It does a much better job on the backend, though.
Why not both
Honestly, before I’m done setting up a debugger and creating breakpoints, etc., I’ve already added 10 `console.log()`s at assumed failure points and run the code twice.

For local development, it should be super quick. However, I’m currently building a small project where a device (or rather the library using it) can’t really be used with a debugger. So 500 `print()`s it is.
Oh so it’s not just me
As soon as I make more than a script, I’m using a debugger.

I really can’t wrap my head around how so many of my colleagues in the professional field just `print` wherever until they find their problem. `print` statements feel like groping around in pitch darkness until I find what I’m looking for, whereas a debugger feels like searching my room in daylight.

I crave the challenge
^ This is the person I want to develop with. My goodness, can I produce some tasty, broken first-pass code you can go wild on.
“Fine! I’ll set up debugging! Stupid piece of…” - Me at some point, a little too late, into most projects
but it’s so much easier to put in `echo "if you can see this it worked"` 100 times in your source. lol

I’ve noticed that debugging tends to be more important in imperative languages than functional ones. With imperative style, you have a lot of implicit state that you need to know to figure out what actually happened. So, you end up having to go through the steps of building that state up before you can start figuring out what went wrong. On the other hand, the state is passed around explicitly with the functional paradigm, and you can typically figure out the problem by looking at the exact spot where the error occurred.
My typical debugging workflow with Clojure is to just read the stack trace, go to the last function in it, and then see what it’s doing wrong. Very rarely do I find the need to start digging deeper. I think another aspect of it is having an interactive development workflow. When you’re running code as you’re developing it, you see problems pop up as you go and you can fix them before you move to the next step. This way you don’t end up in situations where you wrote a whole bunch of code that you haven’t run, and now you’re not sure if it all works the way you expected.
I think I struggle with this part the most since I’m entirely self-taught and relied on very old methods for writing my source code; the educational material I used was simply the most common and freely available at the time I started doing development work. It was acceptably sufficient for the IT-based problems I was trying to solve back then, but that legacy style has been keeping me at a disadvantage.

I’ve seen some of the newer styles of debugging, like the one you’ve shared, from young fresh graduates lucky enough to be spared the decade-plus slog in the “customer service” side of the tech industry umbrella, and it’s painfully evident how vastly superior they are to the old methods I taught myself. It’s encouraged me to seek a degree to help me master them, and my new job will make that degree free for me, which matters A LOT as an American considering the price tag it entails.
I find a good approach to getting better at programming is to reflect on the projects you’ve done and try to identify patterns that got you into trouble. Then you can try doing things differently next time, and eventually you end up settling on a style that works for you. At the end of the day it’s really just practice. The one key thing I’ve learned to focus on is reducing the operating context I need to have when reading the code. Once the context becomes too big to keep in your head, then trouble starts. So breaking things up aggressively into small components you can reason about in isolation tends to be the best way to write reliable code you can maintain over time.
This is so true. Something that has really improved my coding is having a linter that whines at me about assignment/branch/condition (ABC) size. Coupled with learning how to properly stub methods in tests, it has helped me break tasks down into simple, manageable chunks with little room for error.
I find it’s also helpful to explicitly think about the high-level flow of the code. There are typically two types of code in an application: routing code that figures out where the payload needs to go, and code that actually cares about the content of the payload. The routing code can be thought of as a railway where you ship packages around; when a package reaches its destination, you pass it to the code that knows what to do with it.
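For illustration, here’s a minimal C sketch of that split (all the names are hypothetical): the routing layer matches only on a type tag, and the handlers are the only code that looks inside the payload.

```c
#include <stdio.h>
#include <string.h>

/* Payload: the routing layer never looks past the type tag. */
struct payload {
    const char *type;
    const char *body;
};

/* Handlers: the only code that cares about the content. */
static void handle_order(const struct payload *p)  { printf("order: %s\n", p->body); }
static void handle_refund(const struct payload *p) { printf("refund: %s\n", p->body); }

/* Routing table: the "railway" that ships packages to their destination. */
static const struct route {
    const char *type;
    void (*handler)(const struct payload *);
} routes[] = {
    { "order",  handle_order  },
    { "refund", handle_refund },
};

static void route_payload(const struct payload *p) {
    for (size_t i = 0; i < sizeof routes / sizeof routes[0]; i++) {
        if (strcmp(routes[i].type, p->type) == 0) {
            routes[i].handler(p);
            return;
        }
    }
    fprintf(stderr, "no route for type '%s'\n", p->type);
}

int main(void) {
    struct payload p = { "order", "3x rubber duck" };
    route_payload(&p); /* prints: order: 3x rubber duck */
    return 0;
}
```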
Nowadays, I really like to draw it out as a state machine before I start working on the code. When you just start coding, it’s very easy to focus on the happy path, and then you end up kludging in handling for exceptional cases as they come up. Sketching out the state machine forces you to consider the error cases up front. You don’t have to handle them right away, but the design should at least account for them. This is an excellent read about the approach: https://shopify.engineering/17488160-why-developers-should-be-force-fed-state-machines
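To make that concrete, here’s a toy state machine in C (a made-up download flow, not from the article): naming the states up front means the failure state exists in the design from day one, even while its handling is still a stub.

```c
#include <stdio.h>

/* All states named up front, error state included. */
enum dl_state { IDLE, CONNECTING, TRANSFERRING, DONE, FAILED };
enum dl_event { START, CONNECTED, DATA_DONE, ERROR };

static enum dl_state step(enum dl_state s, enum dl_event e) {
    switch (s) {
    case IDLE:         return e == START     ? CONNECTING   : s;
    case CONNECTING:   return e == CONNECTED ? TRANSFERRING
                            : e == ERROR     ? FAILED       : s;
    case TRANSFERRING: return e == DATA_DONE ? DONE
                            : e == ERROR     ? FAILED       : s;
    default:           return s; /* DONE and FAILED are terminal */
    }
}

int main(void) {
    enum dl_state s = IDLE;
    s = step(s, START);
    s = step(s, ERROR); /* the unhappy path is a first-class transition */
    printf("final state: %d\n", s); /* 4 == FAILED */
    return 0;
}
```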
I’m so glad it isn’t just me lol
then i feel sorry for you too. lol
This meme makes it look like it’s a hard decision. I always immediately slam the button on the right.
Why? In my experience using a real debugger is always the superior choice. The only time I don’t is when I can’t.
I rarely have access to one.
`console.log(d++);`
Repeat.
One aspect I feel is never talked about: setting up the debugger more often than not takes you out of the mental space of the problem you’re trying to solve. A console log is basically already there in many cases.
Hmm. I, on the other hand, tend to write a lot more code than I probably should before I do debugging, so there’s plenty to go back through again.
Although this looks like it’s for a browser, and for all I know debuggers work completely differently in there.
I did both of these at once last week.

Added a breakpoint. Debugger didn’t break.
Added an `echo "here";`. Debugger didn’t print.
Added a `throw new Exception('fuck');`. Debugger didn’t throw.
Stepped through. Debugger wouldn’t let me step in.

It took me almost an hour to realize it wasn’t the debugger’s fault: a variable I thought was guaranteed to be truthy at that point was actually falsy due to upstream changes in a spreadsheet parser. I felt kind of stupid for not trusting the debugger at that point.
Kind of unrelated, but why does C sometimes fail to print if it hits a breakpoint right after a print while debugging? Or if it segfaults right after, too, IIRC.
Does anything flush the buffers after the print but before the break? Otherwise, if the stream you’re printing to is buffered, you’re not necessarily going to see any output.
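That’s the usual C story: stdout is line-buffered on a terminal and often fully buffered when redirected, so unflushed text can still be sitting in the buffer when the breakpoint (or the segfault) hits. A quick sketch of the common workarounds:

```c
#include <stdio.h>

int main(void) {
    /* Option 1: turn buffering off for the whole run
     * (setvbuf must come before any other use of the stream). */
    setvbuf(stdout, NULL, _IONBF, 0);

    /* Option 2: flush explicitly after each diagnostic print. */
    printf("got here");
    fflush(stdout);

    /* Option 3: print to stderr, which is unbuffered by default. */
    fprintf(stderr, "got here too\n");
    return 0;
}
```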
I don’t know, I just use printf
I’m pretty sure it’s because of char 13 (carriage return). This char moves the cursor to the start of the line, so whatever is printed next overwrites what was already there (in most terminals). I believe some error messages use this char, and when you print something, the char at the beginning or end of the error message overwrites your message. A workaround is simply printing a newline before or after your message.
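That overwrite is easy to reproduce in a couple of lines of C, assuming a typical terminal:

```c
#include <stdio.h>

int main(void) {
    /* '\r' moves the cursor back to column 0 without starting a new
     * line, so the next write clobbers the text already there. */
    printf("AAAAAAAAAA\rBBBB\n"); /* terminal shows: BBBBAAAAAA */

    /* The workaround from above: lead with a newline so the message
     * lands on a fresh line nothing will overwrite. */
    printf("\nmy debug message\n");
    return 0;
}
```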
Without knowing the details of C, I’ve seen this in other languages, and it’s usually a missing flush or a buffered output mode or something like that.
Debug single-threaded code, log multi-threaded code.
“Oh, they print in that order? That’s weird.”
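A toy pthread sketch of why the log is the thing to trust here (my own example): the interleaving it records is the order the writes actually happened, which freezing every thread at a breakpoint would disturb.

```c
/* build: cc -pthread demo.c */
#include <pthread.h>
#include <stdio.h>

/* Two workers racing to the same stream; each fprintf line comes out
 * whole (stdio locks the stream per call), but the order of lines is
 * whatever the scheduler actually did. */
static void *worker(void *arg) {
    const char *name = arg;
    for (int i = 0; i < 3; i++)
        fprintf(stderr, "[%s] step %d\n", name, i);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, "A");
    pthread_create(&b, NULL, worker, "B");
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```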
Surely we all write unit tests and debug from there, right?
… Right?
Depending on what kind of coding you’re doing, there might not be an obvious, really atomic unit to test. Most people here seem to do the data-plumbing-for-corporations kind, though.
Especially then I’d test the shit out of everything? I’m getting paid for writing correct software.
At a certain level of detail, tests just become a debugger, right?
I’m thinking of something like an implementation of Strassen’s algorithm. It’s all arithmetic; you can’t really check for macro correctness at a micro point without doing a similar kind of arithmetic yourself, which is basically just writing the same code again. It resembles nothing other than itself.
And who actually writes tests like that?
I mean, do you think tests do the calculations again? You simply have well-defined input and a known, static output. That’s it.
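In C, such a fixed test is roughly this (a naive multiply stands in so the sketch runs on its own; a real test would link the Strassen routine in its place):

```c
#include <assert.h>
#include <math.h>

/* Stand-in for the implementation under test: multiplies two
 * n x n matrices stored in row-major order. */
static void multiply(int n, const double *a, const double *b, double *out) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            double sum = 0.0;
            for (int k = 0; k < n; k++)
                sum += a[i * n + k] * b[k * n + j];
            out[i * n + j] = sum;
        }
}

int main(void) {
    /* Well-defined input, known static output (verified by hand once). */
    const double a[4]        = {  1,  2,  3,  4 };
    const double b[4]        = {  5,  6,  7,  8 };
    const double expected[4] = { 19, 22, 43, 50 };
    double out[4];

    multiply(2, a, b, out);
    for (int i = 0; i < 4; i++)
        assert(fabs(out[i] - expected[i]) < 1e-9);
    return 0;
}
```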
Yeah, you definitely run fixed tests on the whole thing. But when it returns indecipherable garbage, you’ve got to dig in at a finer level of detail, and at that point you’re just setting breakpoints and watchpoints and staring at walls of floating-point values.
I suppose Strassen’s is recursive, so you could tackle it that way, but for other numerical-type things there is no such option.
Data-plumbing-for-corporations can usually be done in a way that’s easily testable. But most people get paid to bolt new shit onto old shit, and spending time on “done” code is discouraged, so once they fall behind on writing tests while developing the new shit, those tests will never get written.

And bad developers who won’t write tests no matter what do exist.
If I actually had that kind of job, the tests-first philosophy would sound very appealing. Actually, build the stack so you don’t have a choice: the real code should just be an instantiation of plumbing over generic variables with certain expected statistical properties. You can do that when correctly processing unpredictable but repetitive stuff is the name of the game, and I expect someone does.
Tests first is only good in theory.
Unit tests typically test at a rather fine grain, but coming up with the structure of that grain is 80% of the work. Often enough you end up with code that’s structured differently than you initially thought, because it turns out this one class needs to be wrapped, or this annotation doesn’t play nice with that one when used on the same class, etc.
Can’t you just add the wrapper to the test as well, if it’s easy to do in the actual code?
If you have to ask “can’t you just” the answer is almost always no.
On projects where I had to write tests, I wrote them to pass the current code.
I miss having a functioning debugger after moving to helix/neovim
Sorry to hear that. I’ve had no issues with the debuggers available in Neovim personally, though. It’s a bit of a chore to set up dap and dap-ui, but once it’s there, it’s golden.
I couldn’t get it working in Neovim a while back, and I’ve since moved to Helix, which kinda has it, but not really.
Get to my level of gut-feeling debugging