Wait until you find out about S3 and s3fs.

Swap thrashing goes brrrrrrrrrrrrr

Leeennaaaaa

Joke’s on you, I used to have a 128GB SSD just for swap in my laptop

clb92

I’d be lying if I said I hadn’t done something similar before.

Wrote my master’s thesis this way - didn’t have enough RAM or knowledge, but plenty of time on the lab machine, so I let it do its thing overnight.

Sorry, lab machine ssd.

It gave its life for academic achievement, there is no finer death for hardware. o7

WHAT FUCKING QUERY ARE YOU RUNNING TO USE UP THAT MUCH MEMORY DAMN

A very very badly written one no doubt…

Why stop at just one full table scan?

In a database course I took, the teacher told a story about a company that would take three days to insert a single order. Thing was, they were the sort of company that took in one or two orders every year. When it’s your whole revenue on the line, you want to make sure everything is correct. The relations in that database were checked to hell and back, and they didn’t care if it took a week.

Though that would have been in the 90s, so it’d go a lot faster now.

What did they produce? Cruiseships?

No idea, but I imagine it was something big like that, yes. I think it was in northern Wisconsin, so laker ships are a good guess.

We have a company like that here somewhere. When they have one job a year, they have to reduce hours, if they have two, they are doing OK, and if they have three, they have to work overtime like mad. Don’t ask me what they are selling, though. It is big, runs on tracks, and fixes roads.

Valen

You really need to index your tables. This has all the hallmarks of a Cartesian product.
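A quick way to see what an index changes is SQLite’s `EXPLAIN QUERY PLAN`. A minimal sketch (the `orders` schema and column names are invented for illustration) showing a full table scan turning into an index search:

```python
import sqlite3

# Invented schema, purely for illustration.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = 42"

def plan(sql):
    # The last column of EXPLAIN QUERY PLAN output is the human-readable detail.
    return "; ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan(query)   # full table scan: every row gets examined
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = plan(query)    # index search: only matching rows get touched

print(before)
print(after)
```

Without the index the detail reads something like `SCAN orders`; with it, `SEARCH orders USING INDEX idx_orders_customer`. A Cartesian product shows up as nested `SCAN` lines, one per table.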

Presi300

I’ve actually done something similar with a 2GB ram machine… 2GB ram / 8GB zswap, actually ran way faster lol

@nbailey@lemmy.ca

Yeah it works surprisingly well. I installed Gentoo on a 2005 era laptop a few years ago and had to keep adding zswap until Rust could compile for Firefox. Iirc it took about 12G of zswap to get it working, but it wasn’t too bad overall.

Poor man’s Optane

I feel like this might be a giant gaping security risk.

So are pretty much all of the cloud services the average user already subscribes to. People still use them though.

Agreed. This is especially bad, though, because if it’s compromised they basically have hardware-level access to your machine. Unless you’re using encrypted swap, and I’m not sure how standard that is.

Well, assuming you’ve already gone through the effort to write a custom kernel module to offload your swap pages to Google Drive, it doesn’t seem like that much of a stretch to have it encrypt the data before transmitting it.

Obviously you should set up device mapper to encrypt the gdrive device then put the swap on the encrypted mapper device.
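The device-mapper layering described above can be sketched in a few commands, assuming the cloud storage is already mounted somewhere like /mnt/gdrive (that path, the size, and the mapper name are all made up; everything here needs root):

```shell
# Backing file on the cloud mount (fallocate may fail on FUSE mounts; truncate works)
truncate -s 8G /mnt/gdrive/swap.img
LOOP=$(losetup --find --show /mnt/gdrive/swap.img)

# dm-crypt in plain mode with a throwaway random key, so pages are
# encrypted locally before they are ever written out to the remote
cryptsetup open --type plain --key-file /dev/urandom "$LOOP" cryptswap

mkswap /dev/mapper/cryptswap
swapon /dev/mapper/cryptswap
```

The throwaway key means the swap contents are unrecoverable after reboot, which is exactly what you want for swap.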

If your kernel isn’t using 90% of your CPU resources, are you really even using it to its full potential? /s

Kairos

The image doesn’t load.

I posted that 10 months ago.

That being said, it seems to still work for me.

wait, didn’t some tech youtubers like LTT try using cloud storage as swap/RAM? afaik they failed because of latency

Afaik they used it as redundant off-site backup

I wonder if there would be a speed boost from setting up two gdrives as RAID 0 for off-site backups

The limiting factor is mostly your upload speed. And you also need a good QoS set up, or you have very limited internet usability. Whereas on-site you can get way higher speeds for cheaper

@dan@upvote.au

I remember tunneling data over ICMP to bypass my high school’s firewall. TCP and UDP were very locked down, but they allowed pings. It was slow though - I think I managed to get a few KB per sec. Maybe there are faster/fancier firewall bypass methods these days. This was back in the 2000s when an entire school would have a single OC-1 fiber connection.
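Tricks like this work by packing arbitrary data into the payload of an ICMP echo request, which firewalls often pass through untouched. A minimal Python sketch of the packet format (actually sending one needs a raw socket and root, so this only builds the packet; the checksum routine is the standard RFC 1071 ones’-complement sum):

```python
import struct

def inet_checksum(data: bytes) -> int:
    """RFC 1071 ones'-complement checksum, as used by ICMP."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                       # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def icmp_echo(ident: int, seq: int, payload: bytes) -> bytes:
    """Build an ICMP echo request whose payload smuggles arbitrary data."""
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)   # type 8 = echo request
    csum = inet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

pkt = icmp_echo(0x1234, 1, b"smuggled bytes")
# A packet with a correct checksum sums to zero when re-checksummed whole.
print(inet_checksum(pkt))
```

pingfs (linked further down the thread) takes the same idea to its logical extreme and stores the data *in flight* in the echo replies.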

ripcord

155mbps Telco trunk line for a school? Nicer school than I went to.

Around 50Mbps: https://en.wikipedia.org/wiki/Optical_Carrier_transmission_rates#OC-1

I only had dialup at the time, and the fastest home broadband available was 1.5Mbps ADSL, so it was pretty fancy!

deleted by creator

Imagine doing this on a dial-up 56K modem

A:\SPICYMEMES\MODEMSOUND.WAV

For those too young to remember, the A:\ drive was for the hard 3.5" floppy disks and B:\ drive was for the soft 5.25" floppy disks. The C:\ drive was for the new HDDs that came out, and for whatever reason the C:\ drive became the standard after that.

the A:\ drive was for the hard 3.5" floppy disks and B:\ drive was for the soft 5.25" floppy disks.

FWIW they were the other way around on my system. The order of A:\ vs B:\ depended on their order on the cable (“first” and “second”), not type.

A was the first floppy drive and B the second floppy drive (in dos and cp/m). The type of drive was irrelevant.

Bwa-hahahahhah "A:" 🤣

darcy

bro is still using floppies

katy ✨

all the cool kids use iomega

Oh wow, I didn’t even know Gdrive offered a 1 petabyte option 😂

They don’t to my knowledge, I believe that’s mounted through rclone which just usually sets the filesystem size to 1PB so that it doesn’t have to try to query what the actual limit is for the various providers (and your specific plan).
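For reference, the rclone setup in question looks roughly like this (the remote name `gdrive:` and mount point are placeholders; rclone reports a fixed 1 PiB filesystem size for backends where it doesn’t query the real quota):

```shell
# Mount a configured remote; --vfs-cache-mode writes makes it behave
# more like a normal filesystem for programs that seek within files
rclone mount gdrive: /mnt/gdrive --vfs-cache-mode writes &

df -h /mnt/gdrive   # shows ~1.0P total, regardless of your actual plan
```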

@Vent@lemm.ee

Once upon a time, Google offered unlimited drive storage as part of some GSuite tiers. They stopped offering it a while ago and have kicked most/all legacy users off of it in the past few months. It was glorious while it lasted 😢

Guess they ran everyone out of business that they needed to, so now the premium features get yanked and your choice of alternatives is curtailed. Hooray for enshittification.

It’s not that, it’s that people were abusing it by using it for things like Plex with 100TB+ of data, which cost Google more than the revenue they got as a result. Blame the people that abused the policy. They’re not a charity and can’t keep an offer if they lose money as a result. Keep in mind that Google Drive data has several replicas and is also backed up to cold storage on LTO tapes, so people abusing the storage policy is actually pretty expensive for them.

They do still have unlimited data in some cases, for example with custom plans for large companies (like 50k+ employees).

icedterminal

At one point they offered unlimited storage for Play Music only. You could literally upload your entire collection. They changed it later to consume your Drive storage. Cheap enough plans so I subscribed. Then they killed off Play Music. I’m still salty about that.

And Google docs/sheets/slides used to not count in your used space.

@slacktoid@lemmy.ml

It will crash as soon as it needs to touch the swap due to the relatively insane latency difference.

glibg10b

So use a small area in memory as cache

@slacktoid@lemmy.ml

the infinite memory paradox. quaint. (lol)

It’s just a NUMA architecture. Linux can handle it.

deleted by creator

Once we have super fast reliable internet we’ll likely have the whole computer as a service. We’ll just have access terminals basically and a subscription with a login, except for the nerds who want their own physical machine.

That’s exactly how it works right now with VDI. I’m using one at work.

@danc4498@lemmy.world

Honestly, cloud gaming is very good… when it is good. Sometimes it sucks. But when it’s good, it’s incredible how much it feels like gaming locally.

sweaty gamers and nerds as always unite over having proper physical PCs rather than online services or consoles.

darcy

you will own nothing and be happy!

Given how so many of us communicate, work, and compute using cloud platforms and services, we’re basically already there.

How many apps are basically just a dumb client using a REST API?

@FUsername@feddit.de

Given the digital literacy of many “regular people” (e.g. my father, and seemingly most of my friends), the idea is appealing. Especially as most of them don’t care about privacy. Give them decent availability, and they will throw money at you. And if you also give them support, I will, too.

Bro just reinvented mainframes.

They’ve been reinvented repeatedly. Citrix, terminal servers, thin clients, cloud desktops, web apps, remote app delivery…

Most people (not necessarily here) need a web browser and an office program. Most people are well suited to terminals or something like a Chromebook.

I need actual hardware for my job and hobbies, but even I have a mini PC set up like a gaming console so that if I want to play games on my bedroom TV I don’t have to hook up my Steam Deck or gaming laptop. I just stream them.

ditty

& thin clients

Mhmm… Computer as a service. Why does that sound familiar…?

You have to hand it to the French though, that stuff was pretty dope.

I was there, Gandalf. I was there, three thousand years ago.

PorkSoda

Unsubscribe

No. Just no.

And get off my lawn, ya whippersnapper.

Wait, we already had that in the 70s.

You have to know that some dinosaur at IBM is screaming about how they gave up the centralized computer and is salivating over gigabit fiber so he can charge everyone 15 bucks a month to use an IBM mainframe.

Stadia almost didn’t suck. I bet we’re 10 years from phones just being hand terminals that tap into a local server, and desktops won’t be far behind.

Like in The Expanse?

Exactly like The Expanse.

Fucking love those books, am listening to one now.

Have you seen the Amazon show at all? How did you feel about it?

For many of us Stadia didn’t suck at all, except for the game library and Google lack of commitment.

Cethin

RAM as a service can’t happen. It’s just far too slow. The whole computer can though. Its RAM can be local so it can access it quickly; then it just needs to stream the video over, which is relatively simple, though it creates some amount of latency to deal with.

Cethin

It’ll never be fast enough. An SSD is orders of magnitude slower than RAM, which is orders of magnitude slower than cache. Internet speed is orders of magnitude slower than the slowest of hard drives, which is still way too slow to be used for anything that needs memory relatively soon.

Need faster than light travel speeds and we can colocate it on the moon

@barsoap@lemm.ee

A SATA SSD has ballpark 500MB/s, a 10g ethernet link 1250MB/s. Which means that it can indeed be faster to swap to the RAM of another box on the LAN than to your local SSD.

A Crucial P5 has a bit over 3GB/s but then there’s 25g ethernet. Let’s not speak of 400g direct attach.

  • modern NVMe SSDs have much more bandwidth than that, on the order of > 3GiB/s.
  • even an antique SATA SSD from 2009 will probably have much lower access latency than sending commands to a remote device over an ethernet link and waiting for a response

Show me an SSD with 50GB/s, it’d need a PCIe6x8 or PCIe5x16 connection. By the time you RAID your swap you should really be eyeing that SFP+ port. Or muse about PCIe cards with RAM on them.

Speaking of: You can swap to VRAM.

My point was more that the SSD will likely have lower latency than an Ethernet link in any case, as you’ve got the extra delay of data having to traverse both the local and remote network stack, as well as any switches that may be in the way. Additionally, in order to deal with that bandwidth you’ll need to kit out not only the local machine, but also the remote one with expensive 400GbE hardware+transceivers, plus switches, and in order to actually store something the remote machine will also have to have either a ludicrous amount of RAM (resulting in a setup which is vastly more complex and expensive than the original RAIDed SSDs while offering presumably similar performance) or RAIDed SSD storage (which would put us right back at square one, but with extra latency). Maybe there’s something I’m missing here, but I fail to see how this could possibly be set up in a way which outperforms locally attached swap space.

Maybe there’s something I’m missing here

SFP direct attach, you don’t need a switch or transceivers, only two QSFP-DD ports and a cable. Also this is a thought exercise, not a budget meeting. Start out with “We have this dual socket EPYC system here with full 12TB memory and need to double that”. You have *rolls dice* 104 free PCIe5 lanes, go.

Cethin

Bandwidth isn’t really most of the issue. It’s latency: the amount of time from the CPU requesting a segment of memory to receiving it, which bandwidth doesn’t affect.
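A toy model makes both sides of this argument concrete. All the latency and bandwidth figures below are rough assumptions, not measurements:

```python
# Time to move `size` bytes: fixed access latency plus transfer time.
PAGE = 4096          # one swapped page
GIB = 1 << 30

def fetch_seconds(latency_s: float, bandwidth_bytes_s: float, size: int) -> float:
    return latency_s + size / bandwidth_bytes_s

# One page fault: latency dominates, so the fat network pipe still loses.
nvme = fetch_seconds(80e-6, 3e9, PAGE)      # ~80 us NVMe access, ~3 GB/s
lan  = fetch_seconds(500e-6, 1.25e9, PAGE)  # ~500 us LAN round trip, 10GbE

# Bulk 1 GiB: bandwidth dominates, and 10GbE beats a SATA SSD.
sata_bulk = fetch_seconds(100e-6, 500e6, GIB)
lan_bulk  = fetch_seconds(500e-6, 1.25e9, GIB)

print(f"page fault: nvme {nvme * 1e6:.0f} us vs lan {lan * 1e6:.0f} us")
print(f"1 GiB bulk: sata {sata_bulk:.2f} s vs lan {lan_bulk:.2f} s")
```

Under these assumptions the network swap wins on sequential throughput against SATA but loses badly on per-page fault time, which is what swap access patterns mostly look like.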

@barsoap@lemm.ee

Depends on your workload and access pattern.

…I’m saying can be faster. Not is faster.

Cethin

Yeah, but the point of RAM is fast random (the R in RAM) access times. There are ways to make slower memory work better for this by predicting what will be needed (grab a chunk of memory because accesses will probably need things with closer locality than pure random), but it can’t be fixed. Cloud memory is good for non-random storage or storage that isn’t time critical.

So I could download more RAM?

You can do it today, just put your swapfile on sshfs and you’re done.
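A sketch of that, with placeholder host and paths (root needed for the swap steps; the loop device is required because swapon refuses files on most network filesystems):

```shell
# Mount a directory from a remote machine over SSH
sshfs user@remotehost:/srv/swap /mnt/netswap

# Create and lock down the backing file
dd if=/dev/zero of=/mnt/netswap/swapfile bs=1M count=4096
chmod 600 /mnt/netswap/swapfile

# Attach a loop device so the kernel sees a block device, then swap on it
LOOP=$(losetup --find --show /mnt/netswap/swapfile)
mkswap "$LOOP"
swapon "$LOOP"
```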

Even better:

Free cloud storage that doesn’t require an account and provides no limit to the volume of data stored

https://github.com/yarrick/pingfs

I don’t want to see the EXPLAIN for that query. This person really needs to learn more about SQL, I’d wager.

Protip: Put swapfile on ramdisk for highest speed

Dran

Unironically that’s how zram works
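For the curious, setting zram up is only a few commands (root required; the size and compression algorithm are arbitrary examples, and algorithm support varies by kernel):

```shell
modprobe zram

# Carve out a compressed block device that lives entirely in RAM
zramctl /dev/zram0 --algorithm zstd --size 8G

# Use it as swap, preferred over any disk-backed swap
mkswap /dev/zram0
swapon --priority 100 /dev/zram0
```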

Josh F.

Doesn’t it compress the contents that it’s storing to help kind of get the best of both worlds?

You get faster storage because it’s in ram still, but with it being compressed there’s also “more” available?

I could be completely mistaken though

Ew0

You are correct, although zram uses more cpu power since it compresses things. It’s not really an issue if you’re not using a potato :=)

Even if you are using a potato, it probably doesn’t have much RAM, so slightly slowing it down to make things run smoother is a very popular choice.

Josh F.

Today I learned!

Don’t do my boy zram dirty, it has a ton of utility when you have ample spare compute and limited RAM.

UFO

I dunno why I didn’t realize you can add more swap to a system while running. Nice trick for a dire emergency.

Even better, you can swapoff swap too!
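The whole emergency routine fits in a handful of commands (root required; the size is an example, and fallocate-created swapfiles don’t work on every filesystem, e.g. btrfs needs extra steps):

```shell
# Add swap to a running system
fallocate -l 4G /swapfile     # or: dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# ...and when the emergency is over
swapoff /swapfile && rm /swapfile
```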

Kairos

It’s Linux; it’s made by people with a brain.

Avid Amoeba

Slow SSD issue. RAM is for chumps.

@llama@midwest.social

Shoot and I thought my 30 second SQL queries were a problem

@Faresh@lemmy.ml

Does the OOM killer actually work for anyone? On every Linux system I’ve used, if I run out of memory, the system simply freezes.

Absolutely can and will take action. Doesn’t always kill the right process (sometimes it kills big database engines for the crime of existing), but usually gives me enough headroom to SSH back in and fix it myself.
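You can also tell the OOM killer to keep its hands off specific processes, which helps exactly this SSH-rescue workflow (the unit name varies by distro, e.g. ssh.service vs sshd.service; root required):

```shell
# Exempt the running SSH daemon: a score of -1000 disables OOM-killing it
echo -1000 > "/proc/$(pgrep -o sshd)/oom_score_adj"

# Or persistently, via a systemd drop-in for the service
mkdir -p /etc/systemd/system/sshd.service.d
printf '[Service]\nOOMScoreAdjust=-1000\n' > /etc/systemd/system/sshd.service.d/oom.conf
systemctl daemon-reload
```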

JokeDeity

I have limited experience with Linux, but why is it that when my system locks up, SSH still tends to work and let me fix things remotely? Like, if the system isn’t locked up, let me fix it right here and now and give me back control, if it is locked up, how is SSH working to help me?

That’s the nifty thing about Unix: stuff like this works. When you say “locked up”, I’m assuming you mean a graphical environment, like Gnome, KDE, XFCE, etc. To an extent, this can even apply to some heavy server processes: just replace most of the references to graphical with application access.

Even lightweight graphical environments can take a decent amount of muscle to run, or else they lag. Plus even at a low level, they have to constantly redraw the cursor as you move it around the screen.

SSH and plain terminals (Ctrl-Alt-F#, what number is which varies by distro) take almost no resources to run: SSH/getty (which are already running), a quick process call to the password system, then a shell like bash or zsh. A single GUI application may take more standing RAM at idle than this entire stack. Also, if you’re out of disk space, the graphical stack may not be able to stay alive.

So when you’re limited on resources, be it either by low spec system or a resource exhaustion issue, it takes almost no overhead to have an extra shell running. So it can squeeze into a tiny corner of what’s leftover on your resource-starved computer.

Additionally, from a user experience perspective, if you press a key and it takes a beat to show up, it doesn’t feel as bad as if it had taken the same beat for your cursor redraw to occur (which also burns extra CPU cycles you may not be able to spare)

JokeDeity

Thanks, great answer!

Fedora

Yes, it takes surprisingly long for the OOM killer to take action, but the system unfreezes. Just wait a few minutes and see whether that does the trick.

It never kicks in for me when it should, but I figured out I can force trigger it manually with the magic SysRq key (Alt+SysRq+F, needs to be enabled first), which instantly recovers my system when it starts freezing from memory pressure.

Alt+SysRq+F, needs to be enabled first

Do note that this opens up a security hole. Since this can kill any app at random and is not interceptable, if you leave your PC in a public place, someone could come up and press this combo a few times. Chances are, it’ll kill whatever locking app you’re using.
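For completeness, enabling it looks like this (root required; the value is a bitmask, so you can enable only a subset of SysRq functions instead of everything):

```shell
# Enable all SysRq functions until reboot
sysctl kernel.sysrq=1

# Persist the setting across reboots
echo 'kernel.sysrq=1' > /etc/sysctl.d/90-sysrq.conf

# After that, Alt+SysRq+F asks the kernel to run the OOM killer immediately
```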

Yeah, a default Ubuntu LTS webserver killed mysqld over a stupid query (“but it worked on dev” - all developers, someday) not too long ago…

@jabjoe@feddit.uk
link
fedilink
English
41Y

Oh yes. I’ve had massive compiles (well, linking) which failed because of the OOM killer, and I did exactly the same: massive swap so it will just keep going. So what if it’s using disk as RAM and is unusable for a few hours in the middle of the night, at least it finishes!

it does for me, usually by killing my session and throwing me back to the login screen

Turun

Yes. If you have swap the system will crawl to a halt before the process is killed though, SSDs are like a thousand times slower than RAM. Swapoff and allocate a ton of memory to see it in action.

NVMe PCIe 4 SSDs are quite fast now though; you can get between DDR1 and DDR2 speeds from a modern SSD. This is why Apple uses their SSDs as swap quite aggressively. I’m using a MacBook Pro with 16 GB of RAM and my swap usage regularly goes past 20 GB, and I don’t experience any slowdown during work.

Turun

Depends if the allocated memory is actively used or not. Some apps do not require a large amount of random access memory, and are totally fine with a small part of random access memory and a large part of not so random access and not so often used memory.

Alternatively, I can imagine that macOS simply has a damn good algorithm for determining what can and can’t be moved to swap. They may also be using the SSD in SLC mode, so that could contribute to the speedup as well.
