Instructor, author, developer. Creator of Beej’s Guides.
openpgp4fpr:CD99029AAD50ED6AD2023932A165F24CF846C3C8
But how do you handle candidates who say something like “look, there’s heaps of code that I’m proud of and would love to walk you through, but it’s all work I’ve done for past companies and don’t have access (or the legal right) to show you?”
It never once happened. They always knew in advance, so they could code something up if they felt like it.
I asked candidates to bring me some code they were proud of and teach me how it worked. It weeded people out really quickly and brought quality candidates to the top. On two separate occasions we hired devs with zero experience in the language or framework and they rocked it. Try that with your coding interview, eh? 🙂
Recommendation algorithms are great for discovering related information and new stuff.
I agree that open, controllable recommendation algorithms would be great. But right now using none of the currently widespread social media recommendation algorithms at all (and just matching keywords instead) makes for a less-abusive, more positive experience. IMHO.
I mean, I have a BS and MS in computer science, so you can use that as guidance as to whether or not I know what an algorithm is. :)
In this context, though, it should be clear that “The Algorithm” refers to a specific social networking algorithm that chooses the content you see in order to maximize advertising revenue.
So yes, Lemmy has algorithms that show different content based on your input, but that’s a wildly different animal. Notably, I’m the one deciding, and also they’re not trying to maximize ad revenue.
The real problem with the internet isn’t Facebook or Twitter or Reddit, it’s the fact that the entire experience is pretty much controlled by Microsoft and Google, as they shape your content, lock you out of areas, and generally dictate what’s “legal” or even what gets found during your searches.
I agree that Google and MS are a problem, but Facebook, Twitter, and Reddit are also a problem, albeit a different one.
A shell script can be more concise if you’re doing a lot of shell things. Keeps you from having os.system() all over the place.
Things like “diff the output of two programs” are just more complex in other languages (see the sketch below).
I love rust, but replacing my shell scripts with rust is not something I would consider doing any more than I’d consider replacing rust with my shell scripts.
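To make the diff example concrete, here’s a rough comparison. In a shell with process substitution it’s a one-liner; below is one way you might do the same thing in Python. prog1 and prog2 are placeholder commands, and this is just a sketch, not the only way to write it:

```python
# Shell version, for comparison:  diff <(prog1) <(prog2)
import difflib
import subprocess

def diff_outputs(cmd_a, cmd_b):
    """Run two commands and return a unified diff of their stdout."""
    out_a = subprocess.run(cmd_a, capture_output=True, text=True).stdout
    out_b = subprocess.run(cmd_b, capture_output=True, text=True).stdout
    return "".join(difflib.unified_diff(
        out_a.splitlines(keepends=True),
        out_b.splitlines(keepends=True),
        fromfile=" ".join(cmd_a),
        tofile=" ".join(cmd_b),
    ))

# prog1/prog2 are hypothetical commands standing in for "two programs"
print(diff_outputs(["prog1"], ["prog2"]), end="")
```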
As much as I hate ads and hate the concept that I would be forced to view them, these kind of legal wranglings freak me out. It seems quite possible that a ruling in my favor here would be used against me somewhere else. Courts and lawmakers don’t understand technology and don’t realize the effects laws have. And frankly, the rest of us don’t have much idea, either.
So the page says:
And this does in fact happen - even though some of your data was still waiting to be sent, or had been sent but not acknowledged: the kernel can close the whole connection.
But Stevens says:
By default, close returns immediately, but if there is any data still remaining in the socket send buffer, the system will try to deliver the data to the peer. The SO_LINGER socket option lets us change this default.
And, referring to the default close behavior:
We assume that when the client’s data arrives, the server is temporarily busy, so the data is added to the socket receive buffer by its TCP. Similarly, the next segment, the client’s FIN, is also added to the socket receive buffer (in whatever manner the implementation records that a FIN has been received on the connection). But by default, the client’s close returns immediately. As we show in this scenario, the client’s close can return before the server reads the remaining data in its socket receive buffer. Therefore, it is possible for the server host to crash before the server application reads this remaining data, and the client application will never know.
Also:
If l_onoff is nonzero and l_linger is zero, TCP aborts the connection when it is closed. That is, TCP discards any data still remaining in the socket send buffer and sends an RST to the peer, not the normal four-packet connection termination sequence.
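Concretely, that abortive close would look something like this sketch in Python, which wraps the same setsockopt(). The “ii” packing assumes the usual two-int struct linger layout, and the peer address is just a placeholder:

```python
import socket
import struct

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(("example.com", 80))  # placeholder peer, purely for illustration

# l_onoff = 1, l_linger = 0: close() aborts the connection, discarding any
# data still in the send buffer and sending an RST instead of a normal FIN.
s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))

s.sendall(b"some data")
s.close()  # abortive close: RST, not the four-packet termination sequence
```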
I’m having trouble reconciling Stevens with the article’s position that data will be discarded by the sender OS with a plain non-SO_LINGER close().
I can see how the sender might be blissfully unaware that the receiver program crashed after the data had been sent and the connection had been closed, but before the data had arrived at the receiver program. And that’s where some kind of application-level ACKing mechanism might be in order (see the sketch at the end of this comment).
I can also see that the receiver OS might happily collect the data and shutdown the socket correctly and then the sender app thinks everything is fine, but the receiver app has crashed and will never see the data.
But neither of those conditions results in the receiver app in the example showing less than 1,000,000 bytes received unless there’s an error.
What am I missing?
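For what it’s worth, the application-level acknowledgment I mentioned above would be something along these lines (just a sketch, in Python for brevity): half-close the sending side with shutdown(), then wait for the peer to finish before closing.

```python
import socket

def send_and_confirm(sock: socket.socket, data: bytes) -> None:
    """Send data, then wait for the peer to finish before closing.

    The recv() loop is the application-level acknowledgment: we only
    treat the transfer as done once the peer has closed its end (or
    replied), which a plain close() never guarantees.
    """
    sock.sendall(data)
    sock.shutdown(socket.SHUT_WR)   # send FIN; we won't write any more
    while sock.recv(4096):          # drain until the peer closes (EOF)
        pass
    sock.close()
```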
Like with spam and its basically zero conversion rate, yes. But I seem to have remained clear of it in the tags I follow. So far.