• 8 Posts
  • 75 Comments
Joined 1Y ago
Cake day: Jun 14, 2023


That sounds awfully complicated for home use.


Zero trust, but you have to use Amazon AWS, Cloudflare, and make your own Telegram bot? And have the domain itself managed by Cloudflare.

Sounds like a lot of trust right there… Would love to be proven wrong.


Mattermost runs as a Docker container and is excellent. You can create channels and groups which is incredibly useful.


And amazing it is. It has almost completely replaced my use of Google Photos 👍


Man, I feel you. I hate Mattermost for its utter inability to run on any port other than 8065.


Yes, but given the fact that there can be weeks between incidents, that is going to be a long time to be without my services.


That’s a good idea, didn’t know Docker had such capability. I will read up on that - could you give me some keywords to start on?


You know, you are right, and I’ve tried. I can monitor manually, but it never happens right when I’m watching. I don’t know yet what causes it; I can only assume it’s one of the Docker containers, because the machine is doing nothing else.

I am doing this to find out how often it happens, how quickly it happens, and what’s at the top when it happens.


Thank you for these ideas, I will read up on sysstat + sar and give it a go.

Also smart to have the script always running, sleeping, rather than launching it at intervals.

I know all of this is a poor hack, and I must address the cause - but so far I have no clue what’s causing it. I’m running a bunch of Docker containers, so it is very likely one of them painting itself into a corner, but after a reboot there’s nothing to see, so I am now starting with logging the top process. Your ideas might work better.
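In the meantime, a minimal sketch of the kind of logging loop I mean - the log path and interval are just example values:

```bash
#!/usr/bin/env bash
# Sketch: append the 1-minute load average and the top CPU process to a log file.
# Log path and interval are example values.
LOG=/var/log/load-snapshots.log
INTERVAL=30   # seconds between samples

while true; do
    load=$(cut -d' ' -f1 /proc/loadavg)
    hog=$(ps -eo pid,pcpu,comm --sort=-pcpu | awk 'NR==2')   # top CPU consumer: PID, %CPU, command
    echo "$(date '+%F %T') load=${load} top=[${hog}]" >> "$LOG"
    sleep "$INTERVAL"
done
```

Run under nohup or a small systemd unit so it keeps going after I log out.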



Nope, haven’t. It says I have 2 GB of swap on a 16 GB RAM system, and that seems reasonable.

Why would you recommend turning swap off?


This issue doesn’t happen very often, maybe every few weeks. That’s why I think a nightly reboot is overkill, and weekly might be missing the mark? But you are right in any case: regardless of what the cron says, the machine might never get around to executing it.


This insane torture is why there are post-it notes under the keyboards.


I see these posts everywhere, and they are friggin’ annoying :-(


How to auto-reboot if CPU load too high?
I run an old desktop mainboard as my homelab server. It runs Ubuntu smoothly at loads between 0.2 and 3 (whatever unit that is).

**Problem:** Occasionally, the CPU load skyrockets above 400 (yes, really), making the machine totally unresponsive. The only solution is the reset button.

**Solution:**
- I haven't found what the cause might be, but I think that a reboot every few days would prevent it from ever happening. That could be done easily with a crontab line.
- Alternatively, I would like to have some dead-simple script running in the background that simply looks at the CPU load and executes a reboot when the load climbs over a given threshold.

--> How could such a CPU-load-triggered reboot be implemented?

-----

edit: I asked ChatGPT to help me create a script that is started by crontab every X minutes. The script has a kill threshold that does a `kill -9` on the top process, and a higher reboot threshold that... reboots the machine. Before doing either (or neither), it writes a log line. I hope this will keep my system running, and I will review the log file to see how it fares. Or it might inexplicably break my system. Fun!
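For reference, a rough sketch of what such a cron-launched check could look like (not the actual script from the edit above; the thresholds, log path, and kill behaviour are example values):

```bash
#!/usr/bin/env bash
# Sketch of a load watchdog launched by cron. Thresholds and paths are examples.
KILL_THRESHOLD=50      # kill -9 the top CPU process above this 1-minute load
REBOOT_THRESHOLD=100   # reboot above this 1-minute load
LOG=/var/log/load-watchdog.log

load=$(cut -d' ' -f1 /proc/loadavg)
load_int=${load%.*}    # integer part is enough for a threshold check

echo "$(date '+%F %T') load=${load}" >> "$LOG"

if [ "$load_int" -ge "$REBOOT_THRESHOLD" ]; then
    echo "$(date '+%F %T') load ${load} >= ${REBOOT_THRESHOLD}, rebooting" >> "$LOG"
    /sbin/reboot
elif [ "$load_int" -ge "$KILL_THRESHOLD" ]; then
    pid=$(ps -eo pid,pcpu --sort=-pcpu | awk 'NR==2 {print $1}')
    echo "$(date '+%F %T') load ${load} >= ${KILL_THRESHOLD}, kill -9 PID ${pid}" >> "$LOG"
    kill -9 "$pid"
fi
```

Launched from root's crontab with something like `*/5 * * * * /usr/local/bin/load-watchdog.sh` - with the obvious caveat that once the box is already unresponsive, cron may never get around to running it.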

You are right, it’s not cheap. I delayed buying one for literally years but once I did, it was a game changer.


Brother ADS-1700W is amazing!

  • no PC or USB required: place it anywhere
  • WiFi
  • scans a page double-sided to PDF in two seconds!
  • sends file to network share, ready to be consumed by Paperless
  • fully automatic, no button presses needed!
  • tiny footprint
  • document feeder
  • use with separator pages to bulk-scan many documents in one go

😍


This is how you lose data. Hope you have a good backup on a NAS?


Just a sentence, please kind sir, not a video?


Joke’s on them. As a homelab noob, I just run whatever the Docker containers provide. I will not dive into a nerd cave to adjust to the latest and greatest (fad).




On average, a person contains slightly more than one skeleton.

On average, a person has slightly less than two legs.


Switch to Colemak and that XCV goodness is right where it needs to be.

Never had a nicer typing experience, thanks to DreymaR introducing me to DHm-angle-wide-mod. Colemak FTW!

🐑


PiVPN offers both services, WireGuard and OpenVPN.

What app do you use on Android? And on Windows?


I used ZeroTier before, I still use it now, and it’s also the solution I am going to continue with.

I wanted to try WireGuard to get away from a centrally managed solution, but if I can’t get it working after several hours, and ZeroTier took five minutes - the winner is clear.


Obviously :) and make sure to forward to the correct LAN IP address, and make sure that machine has a static IP (or DHCP reservation).


PiVPN is elegant. Easy install, and I am impressed with the ASCII QR code it generates.

But I could not make it work. I am guessing that my Android setup is faulty, orrrr maybe something with the Pi? This is incredibly difficult to troubleshoot.
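For the record, the things I still plan to check, assuming these are the right commands on the current PiVPN version:

```bash
pivpn -d      # built-in debug check: config, firewall rules, port-forwarding hints
pivpn add     # create a (new) client profile
pivpn -qr     # show the ASCII QR code for that profile, to scan with the WireGuard Android app
```

And making sure the WireGuard UDP port (51820 by default, if I remember right) is actually forwarded to the Pi.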


Thank you for providing specific steps that I can take! I will look into this.

No, I do not use Cloudflare tunnels, just regular Cloudflare to publish my services to the whole world - which is a concern, of course.

Going with a connection from my device via WireGuard sounds like just the right thing to do.


Help me get started with VPN
*TLDR: VPN newbie wants to learn how to set up and use a VPN.*

**What I have:** Currently, many of my selfhosted services are publicly available via my domain name. I am aware that it is safer to keep things closed and use a VPN to access them -- but I don't know how that works.
- Domain name mapped via Cloudflare > static WAN IP > ISP modem > Ubiquiti USG3 gateway > Linux server and Raspberry Pi.
- Ports 80 and 443 forwarded to Nginx Proxy Manager; everything else closed.
- Linux server running Docker and several containers: NPM, Portainer, Paperless, Gitea, Mattermost, Immich, etc.
- Raspberry Pi running Pi-hole as DNS server for LAN clients.
- Synology NAS as network storage.

**What I want:**
- Access services from WAN via Android phone.
- Access services from WAN via laptop.
- Maybe still keep some things public?
- Noob-friendly solution: needs to be easy to "grok" and easy to maintain when services change.
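edit: to make it concrete for myself, this is the rough shape of a plain WireGuard setup as I understand it from the suggestions - a sketch only, with placeholder keys, addresses, and port, nothing I have verified yet:

```bash
# On the Linux server: a bare-bones WireGuard endpoint.
# Keys, addresses, and the port below are placeholders.
sudo apt install wireguard

# One key pair for the server, one for the Android phone.
wg genkey | tee server.key | wg pubkey > server.pub
wg genkey | tee phone.key  | wg pubkey > phone.pub

# /etc/wireguard/wg0.conf on the server:
#   [Interface]
#   Address    = 10.8.0.1/24
#   ListenPort = 51820
#   PrivateKey = <contents of server.key>
#
#   [Peer]                       # the Android phone
#   PublicKey  = <contents of phone.pub>
#   AllowedIPs = 10.8.0.2/32

sudo systemctl enable --now wg-quick@wg0   # bring the tunnel up now and at every boot
```

On the gateway, UDP 51820 would be forwarded to the server; the phone then gets a config with its own private key, Address 10.8.0.2/24, the server's public key, Endpoint <WAN IP>:51820, and AllowedIPs 10.8.0.0/24, 192.168.1.0/24 so only LAN traffic goes through the tunnel.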

If you don’t mind, could you please check your typing? You had some obvious typos so I am not so sure of the exact name of the tool you are suggesting.


I can’t find Readably in the Play store. Got a link?


Sorry but that’s not true. I have been running Immich for a long time now, and it is solid and stable.

A recent update had a change in the Docker configuration, and if you didn’t know that and just blindly upgraded, it would still run and show a helpful explanation. That’s amazing service.


Congratulations! I think you’ll love it. There are some things to set up. Let me know if you have questions :-)


I’ve commented elsewhere on this page:

Brother ADS-1700W
Tiny, fast, scans double-sided straight to a network share. It’s the most amazing thing I’ve bought in years, literally.

The printer has a web interface where you set up destinations, and I set up a file path there. Separately, on the printer itself, you can set it up to do one action automatically when it detects material in the auto sheet feeder, and I used that so it auto-scans to PDF/A and saves it on that network share.

Then I have Paperless check that path once a minute. So my workflow is literally, drop the paper in the scanner, and 5 seconds later put it in a box, then a minute later I see it in Paperless. It’s bliss.
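For reference, the polling on the Paperless side can be wired up roughly like this - a sketch assuming the stock paperless-ngx compose file (where the app service is named `webserver`) and a hypothetical `/mnt/scans` mount for the scanner’s share:

```bash
# docker-compose.override.yml next to the stock paperless-ngx compose file.
# /mnt/scans is a hypothetical mount point for the scanner's network share.
cat > docker-compose.override.yml <<'EOF'
services:
  webserver:
    environment:
      PAPERLESS_CONSUMER_POLLING: "60"   # poll the consume dir every 60 s; inotify is unreliable on network shares
    volumes:
      - /mnt/scans:/usr/src/paperless/consume
EOF
docker compose up -d
```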


What are those concerns? Why is it relevant to self-hosting?

Is it like the rumor that the Lemmy devs are pro-Russia or whatever it was about?

Honestly asking, here. Not trying to start a flame war, just want to know whether to bother to care about this.


Google office apps are not fast, that’s true, but they are blazingly fast compared to Nextcloud or Synology. Only Office 365 can keep up (and is functionally better) - but eh, you know.


I have tried Photoprism but was not as impressed by it as Immich.


Yes! Sorry for giving wrong details. That was from memory, and I am a goldfish…

The printer has a web interface where you set up destinations, and I set up a file path there. Separately, on the printer itself, you can set it up to do one action automatically when it detects material in the auto sheet feeder, and I used that so it auto-scans to PDF/A and saves it on that network share.

Then I have Paperless check that path once a minute. So my workflow is literally, drop the paper in the scanner, and 5 seconds later put it in a box, then a minute later I see it in Paperless. It’s bliss.


Among my must-have selfhosting items, in no particular order, I can recommend:

  • Portainer, to keep track of what’s going on.
  • Nginx Proxy Manager, to ensure https with valid certificate to those services I want to have available from the outside.
  • Pi-hole, of course.
  • Gitea, to store my coding stuff.
  • Paperless-ngx, to store every paper in my life.
  • Immich, an amazingly good replacement for Google Photos.


Ugh, Nextcloud. It is always touted but it is such a pain to set up properly, and then it is slow as molasses.

I’ve tried, and I’ve tried the similar suite from Synology, but in the end always come back to the Google system - much as I hate to admit it, Google “just works”.


[SOLVED] Can’t access my site from WAN despite DNS and port forwarding in place. Help? [ERR_SSL_UNRECOGNIZED_NAME_ALERT]
TLDR:
- Update: the server software has a bug with generating and saving certificates. The bug has been reported; as a workaround I added the local IP to my local 'hosts' file so I can continue (but that does not solve it, of course).
- I suspect there's a problem with running two servers off the same IP address, each with their own DNS name?

Problem:
- When I enter https://my.domain.abc into Firefox, I get an error ERR_SSL_UNRECOGNIZED_NAME_ALERT instead of seeing the site.

Context:
- I have a static public IP address, and a Unifi gateway that directs ports 80 and 443 to my server at 192.168.1.10, where Nginx Proxy Manager is running as a Docker container. This also gives me a **Let's Encrypt** certificate.
- I use Cloudflare and have a domain `foo.abc` pointed to my static public IP address. This domain works, and so do a number of subdomains with various Docker services.
- I have now set up a **second server** running YunoHost. I can access this on my local LAN at https://192.168.1.14.
- This YunoHost is set up with a DynDNS domain `xyz.nohost.me`. The current certificate is self-signed.
- Certain other ports that YunoHost wants (22, 25, 587, 993, 5222, 5269) are also routed directly to 192.168.1.14 by the gateway mentioned above.
- All of the above context is OK. YunoHost diagnostics says that *DNS records are correctly configured* for this domain. Everything is great (except reverse DNS lookup, which is only relevant for outgoing email).

Before getting a proper certificate for the YunoHost server and its domain, I need to make the YunoHost reachable at all, and I don't see what I am missing. What am I missing?
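For completeness, the hosts-file workaround from the update is just one line on the client machine (shown here for Linux):

```bash
# Point the YunoHost domain straight at its LAN address, bypassing public DNS.
echo "192.168.1.14 xyz.nohost.me" | sudo tee -a /etc/hosts
```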

Just finished wiring the garage to the house - and find that the wire is damaged! Now what?
I mean, the simplest answer is to **lay a new cable,** and that is definitely what I am going to do - that's not my question. But this is a long run, and it would be neat if I could salvage some of that cable.

How can I discover where the cable is damaged?

One stupid solution would be to halve the cable and crimp each end, and then test each new cable. Repeat iteratively. I would end up with a few broken cables and a bunch of tested cables, but they might be short.

How do the pros do this? (Short of throwing the whole thing away!)

CPU load over 70 means I can’t even ssh into my server
*edit: you are right, it's the I/O WAIT that is destroying my performance:*

`%Cpu(s): 0,3 us, 0,5 sy, 0,0 ni, 50,1 id, 49,0 wa, 0,0 hi, 0,1 si, 0,0 st`

*I could clearly see it using `nmon > d > l > -`, as was suggested by @SayCyberOnceMore. Not quite sure what to do about it, as it's simply my `sdb1` drive, which is a Samsung 1TB 2.5" HDD. I have now ordered a 2TB SSD, and maybe I will reinstall from scratch on that new drive as sda1. I realize that's just treating the symptom and not the root cause, so I should probably also look for that root cause. But that's for another Lemmy thread!*

I really don't understand what is causing this. I run a few very small containers, and everything is fine - but when I start something bigger like Photoprism, Immich, or even MariaDB or PostgreSQL, something causes the CPU load to rise indefinitely. Notably, the `top` command doesn't show anything special: nothing eats RAM, nothing uses 100% CPU. And yet the load rises fast. If I leave it be, my ssh session loses connection. Hopping onto the host itself shows a load of over 50, or even over 70. I don't grok how a system can even get that high at all.

My server is an older Intel i7 with 16GB RAM running Ubuntu 22.04 LTS. How can I troubleshoot this, when `top` doesn't show any culprit and it does not seem to be caused by any one specific container?

(This makes me wonder how people can run anything at all off of a Raspberry Pi. My machine isn't "beefy", but a Pi would be so much less.)
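For anyone landing here with the same symptom: besides `nmon`, these standard tools (from the `sysstat` and `iotop` packages) can narrow an iowait problem down to a device and a process - a quick sketch, not a full recipe:

```bash
sudo apt install sysstat iotop

# Per-device view: watch %util and await for sdb while the load climbs.
iostat -x 5

# Per-process view: which process (or container) is actually hammering the disk.
sudo iotop -o
```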

Docker + Nextcloud = why is it so difficult?
*TLDR: I consistently fail to set up Nextcloud on Docker. Halp pls?*

Hi all - please help out a fellow self-hoster, if you have experience with Nextcloud. I have tried several approaches but I fail at various steps. Rather than describe my woes, I hope that I could get a "known good" configuration from the community?

**What I have:**
- a homelab server and a NAS, wired to a dedicated switch using priority ports.
- the server is running Linux, Docker, and NPM proxy which takes care of domains and SSL certs.

**What I want:**
- a `docker-compose.yml` that sets up Nextcloud *without SSL.* Just that.
- *ideally but optionally,* the compose file might include Nextcloud office-components and other neat additions that you have found useful.

Your comments, ideas, and other input will be much appreciated!!
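edit: to make the request concrete, this is roughly the shape of file I am after - a sketch only, with placeholder passwords, not a known-good configuration:

```bash
# Writes a minimal, SSL-free Nextcloud compose file (NPM can sit in front for HTTPS).
# Passwords are placeholders.
cat > docker-compose.yml <<'EOF'
services:
  db:
    image: mariadb:10.11
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme-root
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - db:/var/lib/mysql

  app:
    image: nextcloud:apache
    restart: unless-stopped
    ports:
      - "8080:80"          # plain HTTP; the reverse proxy terminates SSL
    environment:
      MYSQL_HOST: db
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: changeme
    volumes:
      - nextcloud:/var/www/html
    depends_on:
      - db

volumes:
  db:
  nextcloud:
EOF
docker compose up -d
```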

(Why) would it be “bad practice” to separate CPU and storage to separate devices?
TLDR: I am running some Docker containers on a homelab server, and the containers' volumes are mapped to NFS shares on my NAS. **Is that bad for performance?**

- I have a Linux PC that acts as my homelab server, and a Synology NAS.
- The server is fast but has 100GB SSD.
- The NAS is slow(er) but has oodles of storage.
- Both devices are wired to their own little gigabit switch, using priority ports.

Of course it's slower to run off HDD drives compared to SSD, but I do not have a large SSD. The question is: (why) would it be "bad practice" to separate CPU and storage this way? Isn't that pretty much what a data center also does?
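For context, this is roughly how the NFS mapping can be done on the Docker side - the NAS address and export path below are placeholders:

```bash
# Named Docker volume backed by an NFS export on the NAS
# (192.168.1.20 and /volume1/docker/appdata are placeholder values).
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.20,rw,nfsvers=4 \
  --opt device=:/volume1/docker/appdata \
  app_data

# The volume then mounts like any other, e.g.:
docker run -d --name demo -v app_data:/data alpine sleep infinity
```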

Request: file-sharing service
*edit: thank you for all the great comments! It's going to take a while to chew through the suggestions. I just started testing **picoshare**, which is already looking both easy and useful.*

Hi all! I am looking for a file-hosting / file-sharing service and hope you guys could recommend something? Features I would like to see:
- Docker-compose ready to use.
- multi-user, not just for myself.
- individual file size >2GB.
- shared files should be public, not require a login to download.
- optional: secret shares, not listed but public when the link is known.
- optional: private shares that require either a password or a login.

Thanks in advance for sharing your experiences!
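The picoshare test setup itself is tiny, in case anyone wants to follow along - this is from memory, so double-check it against the image's README; the shared secret is a placeholder:

```bash
# Minimal picoshare test instance (from memory - verify against the mtlynch/picoshare docs).
docker run -d \
  --name picoshare \
  -e PORT=4001 \
  -e PS_SHARED_SECRET=change-this-passphrase \
  -p 4001:4001 \
  -v "$PWD/picoshare-data:/data" \
  mtlynch/picoshare
```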