• 6 Posts
  • 98 Comments
Joined 1Y ago
Cake day: Jul 11, 2023



The boot drive is an SSD, which is not in any RAID. I have another HDD connected via SATA, and another HDD connected via USB.


I did reset it. It did not help. I ran memtest86 for over 2 hours and did a CPU stress test for over 15 hours. Nothing crashed during the testing.


I cleaned everything and reapplied the thermal paste. That did not solve the problem. Also, the CPU is only 35 W TDP and never goes over 55°C.




I just ran it. It took over 2 hours to finish and showed no errors. Is there a benefit to running it for a few days?


I bought an Optiplex 5040 with an [i5-6500TE](https://ark.intel.com/content/www/us/en/ark/products/88186/intel-core-i5-6500te-processor-6m-cache-up-to-3-30-ghz.html) and 8 GB of DDR3L RAM. When I bought it, I installed Fedora Server on it. It got stuck every few days, but I could never see the error: the services just stopped working, I couldn't SSH into it, and connecting it to a monitor showed a black screen. So I thought I'd install Ubuntu Server, in case Fedora wasn't compatible with all of its hardware. The same thing is happening now, but I can see this error. Even when there's nothing installed on it, no containers, nothing other than base packages, this happens.

I have updated the BIOS. I have tried setting `nouveau.modeset=0` in the GRUB config file. I have tried disabling and enabling C-states. No luck so far. I would really appreciate it if anyone could help me with this.

UPDATE:

- I cleaned everything and reapplied the thermal paste. I did not see any change in the thermals. It never goes over 55°C, even under full load.
- I reset the motherboard by removing the jumper.
- I ran `memtest86`, which took over 2½ hours. It did not show any errors.
- I ran a CPU stress test for over 15 hours, and nothing crashed.
- I also ran Dell's diagnostic tool, available in the boot menu of the motherboard. The whole test took over 2 hours but did not show any errors. It tested the memory, CPU, fans, storage drives, etc.
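For reference, this is roughly how I set that kernel parameter (a sketch; the exact contents of `GRUB_CMDLINE_LINUX_DEFAULT` depend on the distro's defaults):

```
# /etc/default/grub -- append the parameter to the existing kernel line
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nouveau.modeset=0"

# then regenerate the GRUB config:
#   sudo update-grub                                # Ubuntu/Debian
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg     # Fedora
```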

I use Fedora Server with Podman (instead of Docker). I am not a noob either, but Cockpit provides a really useful GUI for managing the whole operating system.


I have written a small blog post about how to bypass CGNAT, and I have also mentioned why you should not use Cloudflare if you are hosting for privacy.



My Optiplex with an i5-6500TE can transcode 4K videos easily if the codec is AVC. HEVC is a different story, though. Anything newer than a 10th-generation CPU would be more than enough for your needs, I'd say.



Is an ARM mini PC with only 2GB RAM and 16GB storage worth buying?
My current setup is an old MacBook with 2 external HDDs, and I am almost happy with it, for now. I just saw [this mini PC](https://www.amazon.in/thinvent-Micro-Client-Operating-System/dp/B096KR2WHK) on Amazon and I am considering buying it, just to try out a new thing. I think it is cheap (~22 USD). What I am worried about is that this little memory and storage might make it almost unusable. I was thinking of hosting some minor services, like remark42, shynet or vaultwarden. What else do you think I can host? If my mind changes, I will also try it with a desktop environment and try to connect it to my 4K Android TV.

Here are some specs, if you don't want to visit the webpage:

| Spec | Value |
| --- | --- |
| Brand | thinvent |
| Personal computer design type | Mini PC |
| Operating System | Linux |
| Memory Storage Capacity | 16 GB |
| RAM Memory Installed Size | 2 GB |
| CPU Model | Cortex A5 |
| Special Feature | Memory Card Reader |
| CPU Manufacturer | ARM |
| Wireless network technology | Wi-Fi |
| CPU Speed | 2 GHz |
| Graphics Coprocessor | Integrated Graphics |
| RAM Memory Maximum Size | 16 GB |
| Hardware Interface | Ethernet |
| Memory Speed | 2 GHz |
| Item Dimensions LxWxH | 10 x 10 x 1.8 cm |
| Speaker Description | Built in |
| Video Output Interface | HDMI |
| Graphics Card Description | Integrated |
| Hard Disk Interface | Unknown |
| Style | With Wi-Fi |
| Form Factor | Small Form Factor |
| Item Height | 1.8 cm |
| Item Width | 10 cm |
| Product Dimensions | 10 x 10 x 1.8 cm; 460 g |
| Item model number | Micro 5_2021 |
| Processor Count | 1 |
| RAM Size | 2 GB |
| Computer Memory Type | DDR4 SDRAM |
| Hard Drive Size | 16 GB |
| Hardware Platform | Linux |
| Lithium Battery Energy Content | 5 Watt Hours |
| Manufacturer | Thinvent Technologes Pvt Ltd |
| Country of Origin | India |
| Item Weight | 460 g |

All ports are forwarded. If your SMTP server is listening on, say, port 587 on your local machine, then your-VPS-ip:587 will be your SMTP endpoint.
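A quick way to verify the forwarded port from outside (a sketch; the host and port are placeholders):

```
# Test an SMTP submission port through the VPS
openssl s_client -starttls smtp -connect your-VPS-ip:587
```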



I am not sure what you mean.

The issue is that when you use Cloudflare, they terminate your TLS, then encrypt the data again with their own certificate, which is sent to the visitor. When the visitor interacts, their data is decrypted on Cloudflare's servers, which Cloudflare then encrypts again with our original certificate and sends back to us.

Sure, hackers or sniffers might not be able to look at the sensitive data, but Cloudflare can. Whether they do or not comes down to whether you trust them.


If you are using the exact rules mentioned in my post, only the ports of your machine will be forwarded, not your entire local network. If you want to forward ports of more than one machine, look at the GitHub link in the sources; it contains detailed documentation on how to achieve that. Since I do not know a lot about iptables, I may not be the best person to guide you in this case. However, feel free to DM me; I might be able to help.
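For reference, the rules in the post boil down to something like this (a minimal sketch; `eth0`, `wg0` and the 10.0.0.2 WireGuard address are assumptions, and the full rule set is in the blog/GitHub sources):

```
# On the VPS: allow forwarding, then DNAT a single public port to the
# home server's WireGuard address
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2:443
# Rewrite the source so replies go back through the tunnel
iptables -t nat -A POSTROUTING -o wg0 -p tcp -d 10.0.0.2 --dport 443 -j MASQUERADE
```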


Yes, it is fairly easy. You just have to forward the HTTP headers. I am using HAProxy, and you can look at my configuration file in the blog. If you're using something like Nginx, look up how to forward HTTP headers. Some applications, like Nextcloud, require extra steps, but they also provide their own documentation.
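In HAProxy it is just a couple of lines per backend (a sketch; the backend name and address are placeholders):

```
backend app_backend
    mode http                                  # header forwarding only works in HTTP mode
    option forwardfor                          # appends the client IP as X-Forwarded-For
    http-request set-header X-Forwarded-Proto https
    server app1 127.0.0.1:8080
```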


I am not sure, actually. Look at the sources and you'll find the original GitHub link from where I took it. I am not very well versed in iptables.


I wrote a small blog post about bypassing CGNAT using TLS passthrough. Cloudflare uses TLS termination, which means they can see all the data passing through, and that defeats the purpose if privacy is the goal.

https://blog.aiquiral.me/bypass-cgnat


I’ve tried hosting an nginx server. It is fun, but I wouldn’t rely on it for production use cases.

I’ve also seen some people run Docker on their Android devices.


Try olamovies (dot) top. They have a lot of OpenMatte versions of many films as well. You might find IMAX too.


Proton VPN works for me.

But we should not have to pay another company to watch the content owned by a company we are already paying.


Thank you for these suggestions. But I have a few questions.

How can I do the 2nd and 3rd points if I am using Docker/Podman containers?

Why is ClamAV useless?


Joplin can be a multi-user solution as well. I use Joplin with Nextcloud. If you don’t want to share notes, each user can use the same Nextcloud instance, but with a different user account, to save their notes. If you want to share all the notes, all the users can synchronise with the same Nextcloud user. You can make different notebooks for different users; all the users, however, can see and edit all notes. Joplin cannot be a solution if you want to share only some notes. It is all or nothing.

Logseq can be another solution, with the same technique. Alternatively, you can use Git to synchronise different databases, where one database is used for shared notes and personal databases for non-shared notes. I host my own Gitea (and will soon shift to Forgejo) to synchronise my Logseq databases.
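The Git side is nothing fancy; a sketch of the workflow, with a placeholder remote URL:

```
# One-time setup for the shared graph
cd ~/Logseq/shared-graph
git init
git remote add origin https://git.example.com/you/shared-graph.git

# Routine sync (each user pulls, commits, pushes)
git pull --rebase
git add -A
git commit -m "sync notes"
git push -u origin main
```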


AFAIK, Piped always proxies the videos through a server.

I am more familiar with Invidious. Find an Invidious server that lets you enable proxying; some examples are yewtu.be, invidious.protokolla.fi and inv.nadeko.net. Then find an RSS app that lets you download the content and supports cookies. Use the Invidious server’s cookies in your RSS app to proxy the content you download. Invidious servers can provide RSS feeds for individual channels, as well as your complete subscription feed.
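The feed URLs look roughly like this (using yewtu.be as an example; the channel ID and token are placeholders, and the exact token location can vary by instance):

```
# RSS feed for a single channel
https://yewtu.be/feed/channel/CHANNEL_ID

# Your whole subscription feed, authenticated with a token from your account
https://yewtu.be/feed/private?token=YOUR_TOKEN
```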

And if possible, donate a dollar or two regularly to the Invidious server that you use, since it takes up a lot of bandwidth, and it motivates the host to keep doing what they are doing.


If the music available on YouTube is enough, you can use a Beatbump server or host your own. beatbump.io is what I use occasionally.


I use Jellyfin to stream my music library (which is over 20 GiB) everywhere.

Also, there is this unofficial way to install Jellyfin on BSD.

Edit – I am willing to give you temporary access to my public server, if you want to try it out.


  1. If you are self-hosting for privacy, I recommend staying away from Cloudflare; that is, do not use their proxying service or Cloudflare Tunnel.

More information here - https://blog.aiquiral.me/bypass-cgnat




I was paying $7/month for their mail, VPN and drive services. One of my major reasons to switch was their lack of Linux support; they claim that it is hard to find Linux developers. The second reason was that their drive’s download and upload speeds were terrible from where I am sitting. Their VPN service is great, and I always got great speeds, but their Linux apps have always been terrible. Their mail service is also great, but I would like more control over it, like Mailbox.org offers. On Mailbox, I can encrypt my inbox using a different key while also having the SMTP submission feature; I really need that to integrate emails with my websites and services. Mailbox can also encrypt their cloud drive with our own key, while also providing WebDAV support (how cool is that?). Their mail app on Android is open source but is not available on F-Droid, and the APK they provide on their website neither has notification functionality nor auto-updates. Another reason was that I was limited to 3 custom domains unless I bought their business plan; Mailbox has no such limit.

One final reason was that I did not want to keep all my eggs in one basket. So, for mail, I am using Mailbox; for storage, a personal Nextcloud and a Hetzner managed Nextcloud; for VPN, I started using Mullvad, but their speeds are terrible and connections are unreliable; and for passwords, I am using self-hosted Vaultwarden.

There are a few more reasons that I do not remember now. Proton is great, and I still trust them. But these small things really go a long way.


I moved from Gmail to ProtonMail, then to Mailbox.org. You can set up a mail server on your home server, but you would need a VPS to forward the traffic to and from your home server without you needing to open any ports. This guide can help you with TLS passthrough.

But setting up your own mail server is a big hassle. Just pay a trusted provider and keep your inbox, and preferably all emails, encrypted with GPG.
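Encrypting a message at rest is a one-liner with GPG (a sketch; the address and filename are placeholders):

```
# Encrypt a message to your own public key; only your private key can read it
gpg --encrypt --armor --recipient you@example.com message.txt
# Decrypt later
gpg --decrypt message.txt.asc
```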


Some people even use Raspberry Pis as their NAS. I use an old MacBook (5th-gen i5) as a home server, with 2 external hard drives as a NAS, and it also runs a few Docker containers like Jellyfin. Before that, I was using an old PC with a 1st-gen i3 for all these things.


The only thing preventing me from moving from Photoview to Immich is the lack of sorting/viewing photos by folder hierarchy. I love the UI and the machine learning customisation options. They recently added the “external albums” feature, so I am hoping this folder hierarchy thing, too, will soon be implemented.


I have been trying to find such a solution, but I couldn’t. I have scoured almost every Reddit post I could find on this topic, but I could not find a solution that works for me. So I ended up making a simple table in Nextcloud Notes. Along with that, I used the Organic Maps app, which is based on OSM; I just downloaded the maps I needed onto my device and pinned some locations that I wanted to visit.

All the work was done manually. I would really appreciate it if someone could develop such a solution. I am even willing to donate a few dollars.


I think I remember reading such a post, where the person said they were trying to develop such a solution, but it was a work in progress. It was an old post, and I think they have abandoned it.



I am in India. Here, broadband providers give 3.33 TB of high-speed usage, after which the speed is limited to 1 Mbps. Some postpaid mobile plans also have “unlimited” usage, but they give only 100 GB of data, and if you exceed it, they convert your regular plan to a commercial plan and bill you a relatively huge amount.




Cost-cutting tips?
What are your favourite, or least favourite but necessary, cost-cutting methods? I feel I am spending too many resources on unnecessary stuff. Edit: I feel the need to reduce both: resource usage, by hosting multiple things on one system, and cost, by not buying/paying for multiple systems. Currently, I have 2 ARM VPSes and 1 old MacBook Air as a home server.

Forward IP headers in HAProxy to get the real IP of the client
**TL;DR - `option forwardfor` and `http-request set-header X-Real-IP %[src]` are not working.**

My setup is slightly complicated. I have a home server with HAProxy installed and some docker containers. My home server is then connected to a VPS via WireGuard, which also has HAProxy installed. HAProxy on the home server forwards the docker containers with an SSL certificate to the VPS. The VPS then just does TLS passthrough to the clients.

The issue is that if I do not use `option forwardfor` in either of the 2 HAProxy configurations, I get the internal IP address of the docker container (172.XX.XX.1). If I add `option forwardfor` to the home server's HAProxy config, I get the internal WireGuard IP of the home server (10.0.0.2). And if I add `option forwardfor` to the HAProxy config of the VPS as well, I get the internal IP of the WireGuard tunnel (10.0.0.1). As far as I can tell, `http-request set-header X-Real-IP %[src]` has no impact. I have also tried using `send-proxy` and `send-proxy-v2`, but then the whole setup stops working.

**HAProxy config on home server:**

```
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

    # Default SSL material locations
    ca-base /etc/ssl/certs
    crt-base /etc/ssl/private

    # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
    ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20>
    ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
    ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen rp
    bind *:443 ssl crt /path/to/cert.pem
    acl service1 hdr_sub(host) -i service1.domain.me
    acl service2 hdr_sub(host) -i service2.domain.me
    use_backend service1_backend if service1
    use_backend service2_backend if service2

backend service1_backend
    server service1_server 127.0.0.1:8080

backend service2_backend
    # option forwardfor
    # http-request set-header X-Real-IP %[src]
    server service2_server 127.0.0.1:9090
```

**HAProxy config on VPS:**

```
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    tune.ssl.default-dh-param 4096

defaults
    log global
    mode tcp
    # option forwardfor
    timeout connect 5000
    timeout client 50000
    timeout server 50000

listen http
    bind *:80
    mode tcp
    server default 10.0.0.2:80

listen https
    bind *:443 alpn h2,http/1.1
    mode tcp
    # option forwardfor header X-Real-IP
    # http-request set-header X-Real-IP %[src]
    server main 10.0.0.2:443
```

I have to resort to this because I am behind CGNAT, and I want TLS passthrough on the VPS for privacy. What am I doing wrong?
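If I understand the docs correctly, `send-proxy-v2` on the VPS only works if the home server's `bind` line is paired with `accept-proxy`; changing one side alone breaks the handshake, which would match what I saw. A sketch of the pairing (same addresses and paths as above):

```
# VPS (mode tcp): wrap the forwarded TCP stream in PROXY protocol v2
listen https
    bind *:443 alpn h2,http/1.1
    mode tcp
    server main 10.0.0.2:443 send-proxy-v2

# Home server: the bind must expect the PROXY header, or TLS breaks
listen rp
    bind *:443 ssl crt /path/to/cert.pem accept-proxy
```

With that pairing in place, `option forwardfor` on the home server should then insert the real client IP.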

NGINX config for TLS passthrough with multiple services?
I am trying to set up a reverse proxy server with TLS passthrough. I am behind CGNAT, so I cannot forward any ports from my home server. My current workaround is that I connected my home server to a VPS via WireGuard and used Nginx Proxy Manager (NPM) to proxy services running in different docker containers to the VPS, so that they are accessible publicly. But now I want to use TLS passthrough for better privacy, and I cannot find any guides for my case. I need help with 2 issues, basically.

Let's take a look at my `passthrough.conf` file, which I have included in the `nginx.conf` file:

```
stream {
    # Listen for incoming TLS connections on service1.domain.me
    server {
        listen 443;
        proxy_pass service1.domain.me;
        proxy_ssl on;
        proxy_ssl_protocols TLSv1.2 TLSv1.3;
        proxy_ssl_name $ssl_preread_server_name;
    }

    # Listen for incoming TLS connections on service2.domain.me
    # server {
    #     listen 443;
    #     proxy_pass service2.domain.me;
    #     proxy_ssl on;
    #     proxy_ssl_protocols TLSv1.2 TLSv1.3;
    #     proxy_ssl_name $ssl_preread_server_name;
    # }

    # Define the backend server for service1.domain.me
    upstream service1.domain.me {
        server homeserverIP:port;
    }

    # Define the backend server for service2.domain.me
    # upstream service2.domain.me {
    #     server homeserverIP:port;
    # }
}
```

The services are running in docker containers on different ports. When I used two server blocks and two upstream blocks, I got this error while testing the NGINX config: `nginx: [emerg] duplicate "0.0.0.0:443" address and port pair in /etc/nginx/passthrough.conf:13`. So, I commented out the second server block and tested again. The test was successful, but NGINX failed to restart. When I checked `systemctl status`, I saw: `nginx[2480644]: nginx: [emerg] bind() to 0.0.0.0:443 failed (98: Address already in use)`. This is because I am already hosting multiple WordPress sites on this VPS.

Here's my `nginx.conf` file:

```
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;

events {
    worker_connections 768;
}

http {
    sendfile on;
    tcp_nopush on;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    client_max_body_size 100M;
    server_tokens off;
}

#include /etc/nginx/passthrough.conf;
```

I do not know much about NGINX configuration; any help or article links would be appreciated.
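From what I have read so far, SNI routing in the stream module needs a single `server` block with `ssl_preread on` and a `map`, and no `proxy_ssl` at all (passthrough means the TLS stream is not re-encrypted). Something like this sketch, with `homeserverIP` and the ports as placeholders; the clash with the WordPress sites on port 443 would still need to be solved separately:

```
stream {
    # Route by the SNI name read from the ClientHello
    map $ssl_preread_server_name $backend {
        service1.domain.me service1_up;
        service2.domain.me service2_up;
    }

    upstream service1_up { server homeserverIP:port1; }
    upstream service2_up { server homeserverIP:port2; }

    server {
        listen 443;           # one listener for all passthrough hosts
        ssl_preread on;       # makes $ssl_preread_server_name available
        proxy_pass $backend;  # no proxy_ssl: TLS is passed through untouched
    }
}
```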

Questions about TLS Passthrough.
Hi. I have been into self-hosting for about 2 years now. My current setup is a home server and a VPS. My ISP does not let me forward any ports (I am behind CGNAT, I think), so I have connected my home server to a VPS via a WireGuard tunnel and am using Nginx Proxy Manager (NPM) to proxy the services hosted on my home server to the public.

Now, the traffic that goes from my home server to the VPS, and from the VPS to the public, is encrypted, but theoretically the VPS provider can look at the data passing through, since this is technically TLS termination. Although I trust my VPS provider more than I trust my ISP, I am thinking about setting up TLS passthrough for additional privacy. But I have a few questions, and I would be grateful if anyone can help me.

1. Do I need to put the SSL certificates on my home server, or can they remain on the VPS if I set up TLS passthrough?
2. Is port forwarding required to set up TLS passthrough?
3. Does NPM support TLS passthrough, or should I shift to HAProxy?

If there are any issues with my current setup, or with the assumptions I am making, please guide me.
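For context, this is the shape of the HAProxy setup I am considering on the VPS, pieced together from what I have read (a sketch; 10.0.0.2 is my WireGuard peer and the hostname is a placeholder):

```
frontend tls_in
    bind *:443
    mode tcp
    # Wait for the TLS ClientHello so the SNI name can be inspected
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend homeserver if { req_ssl_sni -i service1.domain.me }

backend homeserver
    mode tcp
    # TLS is never terminated here, so the certificates would stay on the home server
    server home 10.0.0.2:443
```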