Show me an SSD that does 50 GB/s; it'd need a PCIe 6.0 x8 or PCIe 5.0 x16 connection. By the time you're RAIDing your swap you should really be eyeing that SFP+ port. Or musing about PCIe cards with RAM on them.
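
Rough numbers behind that claim, as a back-of-the-envelope sketch: the per-lane figures below are the usual approximate usable throughput per direction and gloss over encoding/FLIT overhead details.

```python
# Approximate usable PCIe throughput per lane, per direction, in GB/s.
# (Gen 5 runs 32 GT/s with 128b/130b; Gen 6 runs 64 GT/s with PAM4/FLIT.
#  Real-world numbers vary a bit with protocol overhead.)
PER_LANE_GBPS = {
    "PCIe 4.0": 1.97,
    "PCIe 5.0": 3.94,
    "PCIe 6.0": 7.88,
}

target = 50  # GB/s the hypothetical SSD would have to sustain

for gen, per_lane in PER_LANE_GBPS.items():
    for lanes in (4, 8, 16):
        bw = per_lane * lanes
        verdict = "yes" if bw >= target else "no"
        print(f"{gen} x{lanes:<2}  ~{bw:6.1f} GB/s  >= 50 GB/s: {verdict}")

# Only PCIe 5.0 x16 (~63 GB/s) and PCIe 6.0 x8/x16 clear the bar;
# an ordinary x4 NVMe slot tops out far below it.
```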

Speaking of: You can swap to VRAM.

My point was more that the SSD will likely have lower latency than an Ethernet link in any case, since you've got the extra delay of the data traversing both the local and remote network stacks, plus any switches in the way. Additionally, to deal with that bandwidth you'd need to kit out not only the local machine but also the remote one with expensive 400GbE hardware and transceivers, plus switches. And for the remote machine to actually store anything, it would need either a ludicrous amount of RAM (making the setup vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which puts us right back at square one, but with extra latency). Maybe there's something I'm missing here, but I fail to see how this could be set up in a way that outperforms locally attached swap space.
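
Putting rough numbers on the bandwidth side of that argument (line rate only, using the 50 GB/s figure quoted upthread; framing overhead and the remote storage medium will eat into it):

```python
# Bandwidth parity check: one 400GbE link vs. the ~50 GB/s RAIDed-SSD
# figure from earlier in the thread.
ethernet_gbit_per_s = 400
raided_ssd_gb_per_s = 50            # figure quoted upthread

ethernet_gb_per_s = ethernet_gbit_per_s / 8
print(f"400GbE line rate:  {ethernet_gb_per_s:.0f} GB/s")
print(f"RAIDed SSD swap:   {raided_ssd_gb_per_s} GB/s")

# So the link only matches the local array on bandwidth, while latency on
# the remote path is additive: local stack + wire/switches + remote stack
# + whatever medium (RAM or more SSDs) actually holds the page.
```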

Maybe there’s something I’m missing here

SFP direct attach: you don't need a switch or transceivers, just two QSFP-DD ports and a cable. Also, this is a thought exercise, not a budget meeting. Start out with "We have this dual socket EPYC system here with a full 12TB of memory and need to double that". You have *rolls dice* 104 free PCIe 5.0 lanes, go.
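
Playing that dice roll out, a sketch only: the per-lane figure is the usual PCIe 5.0 approximation, and the lane widths (x4 drives, x16 NICs) are common configurations assumed for illustration, not any particular card.

```python
# Divvy up 104 free PCIe 5.0 lanes between the two approaches and compare
# the aggregate lane bandwidth each side could theoretically feed.
PCIE5_GB_PER_LANE = 3.94   # approx. usable GB/s per lane, per direction

free_lanes = 104

# Option A: fill everything with x4 NVMe drives (RAIDed swap).
nvme_drives = free_lanes // 4
nvme_bw = nvme_drives * 4 * PCIE5_GB_PER_LANE
print(f"{nvme_drives} x4 NVMe drives -> ~{nvme_bw:.0f} GB/s of lane bandwidth")

# Option B: fill everything with x16 NICs, direct-attached to a second box
# that holds the extra memory.
nics = free_lanes // 16
nic_bw = nics * 16 * PCIE5_GB_PER_LANE
print(f"{nics} x16 NICs       -> ~{nic_bw:.0f} GB/s of lane bandwidth "
      f"({free_lanes - nics * 16} lanes left over)")

# Either way the host-side lanes are the ceiling; the difference is what
# sits on the far end of them (more flash vs. another machine's RAM).
```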
