edit: you are right, it’s the I/O WAIT that is destroying my performance:
```
%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st
```
I could see it clearly using nmon (keys d, l, -), as suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it’s simply my sdb1 drive, a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and I will probably reinstall from scratch on that new drive as sda1. I realize that’s just treating the symptom and not the root cause, so I should probably also look for that root cause. But that’s for another Lemmy thread!
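For anyone hitting the same wall: per-device I/O stats make it easy to confirm which disk is accumulating the wait. A minimal sketch, assuming the sysstat package is installed:

```sh
# Extended per-device stats, refreshed every second; a device sitting
# near 100 %util with a high await time is the bottleneck
iostat -dx 1
```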
I really don’t understand what is causing this. I run a few very small containers and everything is fine, but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, something causes the CPU load to rise indefinitely.
Notably, the top command doesn’t show anything special: nothing eats RAM, nothing uses 100% CPU. And yet the load rises fast. If I leave it be, my SSH session loses its connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don’t grok how a system can even get that high at all.
My server is an older Intel i7 with 16GB RAM running Ubuntu 22.04 LTS.
How can I troubleshoot this, when ‘top’ doesn’t show any culprit and it does not seem to be caused by any one specific container?
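For context on the numbers: Linux’s load average counts not only runnable tasks but also tasks in uninterruptible (D) sleep, which usually means processes stuck waiting on disk I/O. That’s how load can pass 50 or 70 while top shows the CPUs mostly idle. A quick way to see who is stuck (a sketch, not tied to any one container):

```sh
# List processes in uninterruptible sleep (state D), plus the kernel
# function they're blocked in; these are what inflate the load average
ps -eo state,pid,wchan:32,cmd | awk '$1 == "D"'
```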
(this makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn’t “beefy” but a Pi would be so much less.)
The last time I saw this was on a slow-failing HDD.
A quick fsck might get you a few answers; the fsck man page has more info. It could just be one or two bad blocks that you can recover, fixing the problem (though, ofc, it’s also a sign it’s time to back up your data).
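A minimal sketch of those checks, assuming smartmontools is installed and the suspect disk is /dev/sdb as in the OP’s edit:

```sh
# SMART health: watch Reallocated_Sector_Ct and Current_Pending_Sector
# for signs of a slowly failing drive
sudo smartctl -a /dev/sdb

# Read-only filesystem check; the partition must be unmounted first
sudo umount /dev/sdb1
sudo fsck -n /dev/sdb1
```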
The other, slightly unusual time I’ve seen it is with mixed RAM: 16GB made of 2x6GB and 2x4GB did some really odd things to the system. If it’s not the disk and your box will boot with one stick of RAM, try it to see if it fixes the issue. It could be that your RAM speeds are off (or you’re like me and just put in two sticks you had lying around, and it basically worked until it didn’t).
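A quick way to see what’s actually in the slots without opening the case (assuming dmidecode is available, which it usually is on stock Ubuntu):

```sh
# One block per DIMM: mismatched sizes or speeds across slots show up here
sudo dmidecode --type memory | grep -E 'Size|Speed|Locator'
```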
An outlier that I’ve not seen on modern machines is I/O wait for a CD-ROM to spin up, even if you’re not accessing the CD-ROM, normally caused by bad cabling. Based on the age of your machine this is unlikely, but it might be worth unplugging devices to see if one is bad and not reporting properly.
This is, of course, assuming dmesg is empty.
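A quick way to check that, filtering the kernel ring buffer down to the interesting lines:

```sh
# Only warnings and errors; failing disks and bad SATA cables usually
# leave ata / I/O error entries here
sudo dmesg --level=err,warn | grep -iE 'ata|i/o error|blk'
```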
Final thought: see if you’re running SELinux. If you are, turn it off and try again. Those policies are complex, and something installed in a non-standard place could be causing SELinux to slow I/O as it fills your logs with warnings.
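Worth noting that Ubuntu ships AppArmor rather than SELinux by default, so this may not apply here, but it’s cheap to check:

```sh
# Prints Enforcing/Permissive/Disabled if SELinux is present at all
# (the command may simply not exist on a stock Ubuntu box)
getenforce
sudo setenforce 0   # temporarily switch to permissive for testing
# AppArmor status, for comparison:
sudo aa-status
```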
Hope that helps,
So how do I run this on /dev/sda? I can’t very well unmount the OS drive…
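One common approach, assuming a systemd-based setup like Ubuntu 22.04: you can’t safely fsck a filesystem that’s mounted read-write, but you can force a check on the next boot, or check the disk from a live USB.

```sh
# Option 1: force fsck at next boot. Append fsck.mode=force to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
sudo update-grub
sudo reboot

# Option 2: boot from a live USB, where /dev/sda is unmounted, and run:
#   sudo fsck -f /dev/sda1
```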
To add on to this: if you’re using some random RAM stick picked out of the gutter, it might be worth running memtest86+. Bad RAM sectors can cause weird, unpredictable issues.
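On Ubuntu, one way to get memtest86+ is via the package manager; it has to run from bare metal, so it appears as a boot menu entry rather than a normal command (on UEFI machines older memtest86+ builds may not show up, so a bootable memtest USB stick is the fallback):

```sh
sudo apt install memtest86+
sudo update-grub
# Reboot and pick the "Memory test (memtest86+)" entry from the GRUB menu
```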