Premature optimization is the root of all evil. Implement the algorithm in the easiest way possible, profile your application, and determine whether this implementation is a bottleneck. If it is, try other implementations, benchmark them, and find the fastest one. Note that optimized Go code can be faster than non-optimal code in Rust, C, assembly or any other language.
The theoretical level is useless, believe me. What is useful is understanding at an intuitive level. You can achieve it with or without knowing the theory, but you need a lot of practice either way. Also, different languages providing OOP actually encourage different approaches. You have to follow the one your language is suited to and that best solves your current task, not the one that OOP or any other paradigm dictates.
Zed has always been open source. It seems you are just trying to squat its name, am I right?
The author is trying to solve a non-existent problem with a tool that does not meet the requirements he himself presented.
$ ifconfig ens33 | grep inet | awk '{print $2}' | cut -d/ -f1 | head -n 1
Yeah, it’s awful. But wait… could one achieve this in a simpler way? Assume we have never heard about the ifconfig deprecation (how many years ago was that? 15 or so?). Let’s look at the ifconfig output on my machine:
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet 198.51.100.2 netmask 255.255.255.0 broadcast 255.255.255.255
        inet6 fe80::12:3456 prefixlen 64 scopeid 0x20<link>
        ether c8:60:00:12:34:56 txqueuelen 1000 (Ethernet)
        RX packets 29756 bytes 13261938 (12.6 MiB)
        RX errors 0 dropped 0 overruns 0 frame 0
        TX packets 5657 bytes 725489 (708.4 KiB)
        TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
It seems that the cut part of the pipeline is not needed, because the netmask is specified separately. The purpose of the head part is likely to avoid printing the IPv6 address, but this could be achieved by modifying the regular expression instead. So we get:
$ ifconfig ens33 | grep '^\s*inet\s' | awk '{print $2}'
If you know a bit more about awk than just the print command, you can change this to:
$ ifconfig ens33 | awk '/^\s*inet\s/{print $2}'
But now remember that ifconfig has been replaced with the ip command (the author knows about it; he uses it in the article, just not in this example, which is supposed to show how weird “traditional” pipelines are). It lets you use a format that is easier to parse and more predictable. It is also easy to ask it not to print information we don’t need:
$ ip -brief -family inet address show dev ens33
ens33 UP 198.51.100.2/24
This has not only the advantage that we don’t need to filter out any lines, but also that the output format is unlikely to change in future versions of ip, while ifconfig output is not so predictable. However, we still need to split off the netmask:
$ ip -brief -family inet address show dev ens33 | awk '{ split($3, ip, "/"); print ip[1] }'
198.51.100.2
The same without awk, in plain shell:
$ ip -brief -family inet address show dev ens33 | while read _ _ ip _; do echo "${ip%/*}"; done
Is it better than using JSON output and jq? It depends. If you need to obtain an IP address in an unpredictable environment (i.e. on an end-user system that you know nothing about), you cannot rely on jq, because it is never installed by default. On your own system, or on a system you administer, the choice is between learning awk and learning jq, because both are quite complex. If you already know one, just use it.
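For comparison, here is a sketch of what the jq variant could look like, assuming an iproute2 new enough to support the -json flag (the addr_info, family and local keys come from its current JSON output; verify them against your version):
$ ip -json address show dev ens33 | jq -r '.[0].addr_info[] | select(.family == "inet") | .local'
198.51.100.2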
Where is the place for the jc tool here? There is none. You don’t need to parse ifconfig output; ifconfig is not even installed by default in most modern Linux distros. And jc has nothing in common with the UNIX philosophy, because it is not a simple general-purpose tool but an overcomplicated program with hardcoded parsers for text formats that may vary and break those parsers. Before parsing the output of a command that is designed for readability, you should ask yourself: how can I get the same information in a parseable form? You almost always can.
Well, sometimes it is possible to write a loop either with break or without it, and in such cases the solution without break is more readable. But if you don’t see a simple way to avoid using break, use it. It is very common, as is having multiple return statements in a function. Even goto can be a good solution sometimes, if it points to a label located below and not very far away.
However, you should avoid some antipatterns. If you write an infinite loop that is interrupted only by break, it is highly likely that you are doing something wrong. Nested loops with multiple breaks or gotos are very hard to read and debug. Such code usually can and should be rewritten for better readability and to avoid possible errors (occasional hangs, for instance).
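A quick shell sketch of that last point (some_command and process are placeholders, not anything from the thread above):
# Infinite loop that only ever exits via break:
while :; do
    line=$(some_command) || break
    process "$line"
done
# The same logic with the exit condition where the reader expects it:
while line=$(some_command); do
    process "$line"
done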
I agree. The problem is that we already have a lot of compatibility-breaking options in gcc: different language standards, non-standard extensions, language features that can be disabled, warnings that can be turned into errors… Multiplying them is not something that will make the programming language/compiler better.
I totally disagree. Git is not hard. The way people learn git is hard. Most developers learn a couple of commands and believe they know git, but they don’t. Most teachers teach those commands plus some more advanced ones, but this does not help to understand git. Learning commands sucks. It is like a cargo cult: you just do something similar to what others do and expect the same result, but you don’t understand how it works or why it sometimes does not do what you expect.
To understand git, you don’t need to learn commands. Commands are simple, and you can always consult a man page to find out how to do something if you understand how it should work. You only need to learn the core concepts first, but nobody does. The reference git book is “Pro Git”, and it explains perfectly how git works, but you need to start reading from the last chapter, 10. Git Internals. The concepts described there are very simple, yet almost nobody starts learning git with them, and almost nobody teaches them at the beginning of a class. That’s why git seems so hard.
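If you want to see how simple those concepts are, the standard plumbing commands let you inspect the object model directly in any repository (HEAD here is just whatever commit you currently have checked out):
$ git cat-file -t HEAD            # type of the object HEAD resolves to
commit
$ git cat-file -p HEAD            # its content: a tree hash, parents, author, message
$ git cat-file -p 'HEAD^{tree}'   # that tree: a list of blobs (files) and subtrees (directories)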
You don’t have to set up your own resolver. It is enough to configure a route to 1.1.1.1 via your WireGuard peer. If you already use it as a default gateway, your DNS requests don’t leak (I mean, Cloudflare is unable to associate them with your local IP address). To be sure, check traceroute 1.1.1.1 (on a *nix system) or tracert 1.1.1.1 (on Windows); you should see your WG peer address in the output.
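A minimal sketch of that route, assuming the WireGuard interface is called wg0 (adjust the name to your setup):
$ sudo ip route add 1.1.1.1/32 dev wg0
$ traceroute 1.1.1.1    # the first hops should now go through your WG peer
If you manage the tunnel with wg-quick, you usually get the same effect by adding 1.1.1.1/32 to the peer’s AllowedIPs, which makes wg-quick install the route for you.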
A random VPN service cannot determine whether your DNS server is trusted or not; it only checks whether the server is provided by that service. When using your own WG server, such checks are useless.
And again, using a here-document greatly improves readability, like this.
Your mistake is that bash does not process quoted strings after variable substitution, i.e. it does not remove the single quotes from the sed command line. If you really need this to happen, you have to use eval:
i1xmr=$(echo "$i1p/$apiresponse*1000" | bc -l | eval $rmdec)
However, using functions is a better solution in general. But in this particular case, I guess, you only need to change bc’s scale instead of using sed:
i1xmr=$(echo "scale=17; $i1p/$apiresponse*1000" | bc -l)
For better readability you may use a heredoc instead of echo:
i1xmr=$( bc -l << EOF
scale=17
$i1p/$apiresponse*1000
EOF
)
Disable systemd-resolved.service? Uninstall systemd-resolved?
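(For reference, the first option would typically be something like the command below; whether it is safe depends on what else on the system expects the stub resolver at 127.0.0.53.)
$ sudo systemctl disable --now systemd-resolved.service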