I failed maths, but I’m great at logic, which I consider to be the more important proficiency for programming.
I think you need both. Does your logic cover complexity analysis, big O?
There’s a lot of software engineering that doesn’t require understanding Big O
Also, big O analysis IMO should just be the starting point of maximizing efficiency. Those coefficients that just get dropped can have a huge impact. For example, any algorithm written in JavaScript or Visual Basic will be of the same order as that same algorithm written in C/C++ or Rust, but it will likely perform much slower. And the exact manner of implementation could even result in the C version being slower than the VB one.
On the other hand, I wouldn’t call most big O analysis very advanced math. You might get a tighter bound with advanced math on some algorithms, but you can get a rough estimate just by multiplying and adding up the loops. The harder question is something like “does this early-exit optimization turn an O(x³) algorithm into an O(log(x)·x²) one?”
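To make the “multiply and add up the loops” estimate concrete, here’s a minimal Python sketch (the function names and the pair/halving examples are mine, not anything from upthread): you read the growth rate straight off the loop structure.

```python
def count_zero_pairs(items):
    # Two nested loops over n items: n * n iterations, so O(n^2).
    count = 0
    for a in items:
        for b in items:
            if a + b == 0:
                count += 1
    return count


def count_zero_triples(items):
    # Three nested loops: n * n * n iterations, so O(n^3).
    count = 0
    for a in items:
        for b in items:
            for c in items:
                if a + b + c == 0:
                    count += 1
    return count


def halving_passes(items):
    # The while loop halves n each time, so it runs about log2(n) times;
    # each pass does a full O(n) scan, giving O(n log n) overall.
    total = 0
    n = len(items)
    while n > 1:
        for x in items:
            total += x
        n //= 2
    return total
```

Nested loops multiply, sequential loops add, and a loop that halves its remaining work contributes a log factor; that covers most everyday estimates without any heavy math.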
I think the tl;dr of what you said is that even when you have a theoretical handle on the growth function, you still need to actually benchmark anyway.
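For what it’s worth, here’s a tiny sketch of that kind of measurement in Python (the functions and the input size are made up for illustration): both versions are O(n), and only timing them tells you how different the dropped constant actually is.

```python
import timeit


def sum_squares_loop(n):
    # Plain Python loop: O(n), with per-iteration interpreter overhead.
    total = 0
    for i in range(n):
        total += i * i
    return total


def sum_squares_builtin(n):
    # Same O(n) growth; usually faster because more of the iteration
    # work happens inside the C implementation of sum().
    return sum(i * i for i in range(n))


if __name__ == "__main__":
    n = 1_000_000
    for fn in (sum_squares_loop, sum_squares_builtin):
        t = timeit.timeit(lambda: fn(n), number=10)
        print(f"{fn.__name__}: {t:.3f}s for 10 runs")
```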
Another big thing that big O analysis doesn’t cover is the potential for parallelization and multithreading, because the difference created by multithreading only amounts to one of those dropped coefficients.
And yet, especially for workloads running on a server with 32-128 cores, being able to run algorithms in parallel will make a huge difference to performance.
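As a rough sketch of that point in Python (the prime-counting workload and chunking scheme here are my own toy example, not from the thread): the parallel version has exactly the same big O as the serial one, but splitting the range across processes divides the wall-clock time by roughly the core count, which is precisely the kind of constant factor the notation throws away.

```python
from concurrent.futures import ProcessPoolExecutor
import os


def count_primes(bounds):
    # Deliberately naive, CPU-bound trial division over one chunk.
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count


def count_primes_parallel(limit, workers=None):
    # Same algorithm, same big O; the range is just split into one
    # chunk per worker, so the speedup is only a constant factor.
    workers = workers or os.cpu_count()
    step = limit // workers + 1
    chunks = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))


if __name__ == "__main__":
    print(count_primes_parallel(200_000))
```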
I did well in data structures and algorithms in uni, but I have never had those topics come up in my 4 years of being a software developer. I’m in web development, FWIW.
So you don’t really have to know that stuff, depending on what kind of software engineering you get into.
I’ve done telephony, games, and I’m currently working in a high-performance context. 99% of the time, you don’t need to be thinking about Big O.
I think part of it is going through all those sorting methods to show that quicksort is the best, so that a) students who run into a new sort are better equipped to recognize it most likely isn’t better than quicksort, and b) they’ve seen the process used to determine that quicksort beats the rest, in case they ever have several algorithm options and need to pick the best one.
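That comparison process is basically “same data, same check, measure each candidate.” Here’s a minimal Python sketch of it (the sort implementations and data size are mine, purely for illustration):

```python
import random
import time


def insertion_sort(a):
    # O(n^2) on average: each element is walked back into place.
    a = list(a)
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return a


def quicksort(a):
    # Average O(n log n); this copy-heavy version is fine for a demo.
    if len(a) <= 1:
        return list(a)
    pivot = a[len(a) // 2]
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)


if __name__ == "__main__":
    data = [random.randint(0, 10**6) for _ in range(5_000)]
    expected = sorted(data)
    for name, fn in [("insertion", insertion_sort),
                     ("quicksort", quicksort),
                     ("built-in", sorted)]:
        start = time.perf_counter()
        result = fn(data)
        elapsed = time.perf_counter() - start
        assert result == expected  # all candidates must agree
        print(f"{name:10s} {elapsed:.4f}s")
```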
And yeah, depending on what you do, some tasks never involve any of that, or when they do, they get offloaded onto a library or something that gives the solution to that step directly.
But, for example, if you have a collection, the question of array vs linked list vs tree is still relevant, even if you’re just choosing a provided construct that’s built on top of one of those. Each has its strengths and weaknesses depending on how the data is added, removed, and accessed.
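A quick Python illustration of that trade-off (the sizes and access patterns here are made up): an array-backed list is cheap to append to and index into but expensive to insert at the front, a linked structure like deque is cheap at both ends, and a hash-based set wins when the access pattern is mostly membership checks.

```python
from collections import deque
import timeit

N = 50_000


def front_inserts_list():
    # list is array-backed: inserting at index 0 shifts every element,
    # so building this way is O(n^2) overall.
    items = []
    for i in range(N):
        items.insert(0, i)
    return items


def front_inserts_deque():
    # deque is a linked structure: appendleft is O(1), O(n) overall.
    items = deque()
    for i in range(N):
        items.appendleft(i)
    return items


def membership_list(items, probes):
    # Scanning a list for membership is O(n) per lookup.
    return sum(1 for p in probes if p in items)


def membership_set(items, probes):
    # A hash-based set does the same lookup in O(1) on average.
    s = set(items)
    return sum(1 for p in probes if p in s)


if __name__ == "__main__":
    print("list front inserts :", timeit.timeit(front_inserts_list, number=1))
    print("deque front inserts:", timeit.timeit(front_inserts_deque, number=1))
    data = list(range(N))
    probes = list(range(0, 2 * N, 7))
    print("list membership    :", timeit.timeit(lambda: membership_list(data, probes), number=1))
    print("set membership     :", timeit.timeit(lambda: membership_set(data, probes), number=1))
```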
And with how slow things are these days despite how much better the hardware is, I think there are a lot of successful software engineers and programmers who should be using that stuff from school more than they do.