• 0 Posts
  • 8 Comments
Joined 1Y ago
Cake day: Jun 02, 2023


I can assure you that before I set up Cloudflare, I was getting hit by SYN floods filling up the entire bandwidth of my home DSL2 connection multiple times a week.


I would say the vast majority of people (across all generations) either don’t know or don’t really understand how extensive the monitoring is and what the consequences of that are.


Downside: it’s entirely manual and not scalable whatsoever.



Personally I’d be somewhat nervous using dd to edit parts of a text file, but you do you :)
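
(For context, a rough Python sketch of the kind of in-place overwrite that `dd ... bs=1 seek=OFFSET conv=notrunc` performs; the filename, offset, and replacement bytes are made up for illustration. The catch is that you can only overwrite bytes, never insert or remove them, which is exactly what makes it nerve-wracking on a text file.)

```python
# Rough equivalent of `dd if=patch.bin of=config.txt bs=1 seek=OFFSET conv=notrunc`:
# overwrite bytes at a fixed offset without truncating the rest of the file.
# Filename, offset, and replacement are hypothetical.

def overwrite_bytes(path: str, offset: int, replacement: bytes) -> None:
    with open(path, "r+b") as f:   # open for in-place binary update
        f.seek(offset)             # jump to the byte offset to patch
        f.write(replacement)       # overwrites exactly len(replacement) bytes

# Only safe if the new text is exactly as long as the old text;
# anything shorter or longer silently corrupts the surrounding content.
overwrite_bytes("config.txt", 128, b"new-value")
```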


My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you’ve got the extra delay of data having to traverse both the local and remote network stacks, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you’d need to kit out not only the local machine but also the remote one with expensive 400GbE hardware and transceivers, plus switches. And in order to actually store something, the remote machine would also need either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there’s something I’m missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.
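
To put some rough numbers on that, here is a back-of-envelope sketch in Python. The wire-serialization time is plain arithmetic; the latency figures for NVMe, the network stacks, and the remote storage are assumptions for illustration, not measurements.

```python
# Back-of-envelope comparison: local NVMe swap vs swap over a 400GbE link.
# All latency figures below are rough assumptions, not measurements.

PAGE_SIZE_BYTES = 4096        # one swap page
LINK_BITS_PER_S = 400e9       # 400GbE line rate

serialization_s = PAGE_SIZE_BYTES * 8 / LINK_BITS_PER_S  # ~82 ns on the wire

local_nvme_read_s  = 100e-6     # assumed ~100 µs 4 KiB random read on a local NVMe SSD
stack_and_switch_s = 2 * 30e-6  # assumed ~30 µs per direction through both network stacks + a switch
remote_storage_s   = 100e-6     # the far end still has to read/write something (RAM would be less)

remote_total_s = serialization_s + stack_and_switch_s + remote_storage_s

print(f"wire time per 4 KiB page: {serialization_s * 1e9:.0f} ns")
print(f"local NVMe page-in (assumed): {local_nvme_read_s * 1e6:.0f} µs")
print(f"remote page-in over 400GbE (assumed): {remote_total_s * 1e6:.0f} µs")
# The link bandwidth isn't the problem; the extra hops are. Unless the remote end
# is pure RAM (expensive), the remote path can't undercut a locally attached SSD.
```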


Well, assuming you’ve already gone through the effort to write a custom kernel module to offload your swap pages to Google Drive, it doesn’t seem like that much of a stretch to have it encrypt the data before transmitting it.
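
Something along these lines would do it, using the pyca/cryptography AES-GCM API; the key handling is hand-waved and the function names are just illustrative, but encrypt-before-transmit is the easy part.

```python
# Sketch of encrypting a swap page before shipping it off to remote storage.
# Key management and the actual upload are deliberately left out.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # in reality this would live somewhere safer

def encrypt_page(page: bytes, page_index: int) -> bytes:
    nonce = os.urandom(12)                   # unique per page write
    aad = page_index.to_bytes(8, "little")   # bind the ciphertext to its slot
    ciphertext = AESGCM(key).encrypt(nonce, page, aad)
    return nonce + ciphertext                # store the nonce alongside the data

def decrypt_page(blob: bytes, page_index: int) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, page_index.to_bytes(8, "little"))
```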


  • modern NVMe SSDs have much more bandwidth than that, on the order of 3 GiB/s or more.
  • even an antique SATA SSD from 2009 will probably have much lower access latency than sending commands to a remote device over an Ethernet link and waiting for a response.