Looking to start self-hosting by going through Louis Rossmann's recently released guide. Any pointers for a newbie are most welcome.
First, a hardware question. I'm looking for a computer to use as a... router? Louis calls it a router, but it's a computer that sits upstream of my whole network and has two ethernet ports. Any suggestions on this? Ideal amount of RAM? Ideal processor/speed? I have fiber internet, 10 Gbps up and 10 Gbps down, so I'm willing to spend a little more on higher-bandwidth components. I'm assuming I won't need a GPU. Anyways, has anyone had a chance to look at [his guide](https://wiki.futo.org/index.php/Introduction_to_a_Self_Managed_Life:_a_13_hour_%26_28_minute_presentation_by_FUTO_software)? It's accompanied by two YouTube videos that are about 7 hours each. I don't expect to do everything in his guide. I'd like to be able to VPN into my home network and SSH into some of my projects, use Immich, check out Plex or similar, and set up a NAS. Maybe other stuff after that, but those are my main interests. Any advice/links for a beginner are more than welcome.

Edit: thanks for all the info, lots of good stuff here. OpenWrt seems to be the most frequently recommended thing here, so I'm looking into that now. Unfortunately my current router/AP (Asus AX6600) is not supported. I was hoping not to have to replace it; it was kinda pricey, and I got it when I upgraded to fiber since it can do 6.6 Gbps. I'm currently looking into devices I can put upstream of my current hardware, but I might have to bite the bullet and replace it.

Edit 2: [This](https://www.qotom.net/product/RouterPC_Q20331G9S10.html) is looking pretty good right now.

What is a service you host that you never knew you needed?
I think everybody on here is constantly keeping an eye out for what to host next. Sometimes you spin up something that chugs along nicely, but sometimes you find out you've been missing out. For me it's nothing very fresh or new: Paperless-ngx. I never thought I would add all my administration to it, but it's great. I may not always find the exact thing I need, but I should have a record of every mail or letter I've gotten. A close second is Wanderer, though I'd like a few more features, like adding recorded routes so I can view my speed and compare with previous walks. But that's not what it is intended for. What is that service for you?

Taildrop is very convenient for sharing files between servers/devices
Someone mentioned it in a comment and I genuinely didn't know what I was setting up, but it's basically AirDrop for all your devices/servers. If you have an iPhone like me, you can go to any photo/file, tap share, then Taildrop, then pick the device; it's a pretty fast transfer. It shows up in the Downloads folder on my PC by default. I no longer have to upload files to iCloud to grab them; it's very convenient and seems to be free forever for personal use, up to 100 devices? I had no idea what I was even setting up until I saw the guide afterwards (I thought it was for monitoring server health), but it's made sharing files between devices/servers very convenient. (This was likely obvious, I just wanted to share with others who didn't know.)
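For anyone scripting this on desktops or servers, Taildrop is also exposed through the Tailscale CLI; a minimal sketch (the device name `my-laptop` is a placeholder):

```bash
# Send a file to another device on your tailnet (note the trailing colon)
tailscale file cp ./backup.tar.gz my-laptop:

# On the receiving machine, move pending Taildrop files into a directory
sudo tailscale file get ~/Downloads
```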

Recommendations: Internal Certificate Authority w/ CRL and/or OCSP
Title says it: I want a simple CA that doesn't overcomplicate things (looking at you, EJBCA). I need it to automatically serve at least CRLs, or better yet OCSP, for the certs it manages. If it comes with a web GUI, all the better, but it doesn't need to. Docker deployment would be sweet. Currently I'm handling this on an OPNsense box I happen to be running, but that thing is also serving stuff to the public 'net, so I'd rather not have my crown jewels on there.

I am trying to connect qBittorrent and WireGuard.
My solution uses qBittorrent with Gluetun and it works great. My Docker Compose file is based on this one: [https://github.com/TechHutTV/homelab/blob/main/media/arr-compose.yaml](https://github.com/TechHutTV/homelab/blob/main/media/arr-compose.yaml). I simply removed some of the services I didn't need.

---

I am trying to have a qBittorrent Docker container that is accessible on my local network and routes its traffic through WireGuard. I know this is a basic question, and I'm sorry if I'm wasting your time. I am using a separate user for this that I have added to the docker group. I can't access the web interface; what have I configured wrong? Here is my Docker Compose file:

```yaml
---
services:
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Europe/London
      - WEBUI_PORT=8080
      - TORRENTING_PORT=6881
    volumes:
      - /home/torrent/torrent/:/config
      - /home/torrent/download/:/downloads
    network_mode: service:wireguard
    depends_on:
      - wireguard
    restart: always

  wireguard:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1001
      - PGID=1001
      - TZ=Europe/London
    ports:
      - 51820:51820/udp
    volumes:
      - /home/torrent/wireguard/:/config
      - /home/torrent/wireguard/london.conf/:/config/wg0.conf
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: always
```
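A likely reason the web UI was unreachable in the original setup: with `network_mode: service:wireguard`, qBittorrent shares the wireguard container's network namespace, so any port you want to reach from the LAN has to be published on the *wireguard* container, not the qbittorrent one. A minimal sketch of that change (everything else stays the same):

```yaml
  wireguard:
    image: lscr.io/linuxserver/wireguard
    ports:
      - 51820:51820/udp
      - 8080:8080   # qBittorrent web UI, published here because qbittorrent shares this container's network
```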

Jellyfin Buffering & Slow Torrents
I have been having a few issues recently, and I can't quite figure out what is causing them. My setup:

- Gigabit WAN up and down. I run speed tests regularly and get 800+ Mbps both ways.
- OPNsense router VM (Proxmox) running on a Lenovo M920x, with an Intel 2x10GbE card installed.
- Sodola 10GbE switch.
- TrueNAS server (bare metal) with 10GbE, serving the media files over NFS, stored on a ZFS mirror.
- Jellyfin LXC.
- Debian LXC running the arr stack with qBittorrent.
- NVIDIA Shield with ethernet.

The first issue is extremely slow downloads in qBittorrent. Even an Ubuntu ISO with hundreds of seeders will sit around 1 MiB/s; media downloads with ~10 seeders sit around 200 KiB/s. This runs through Gluetun and ProtonVPN WireGuard with port forwarding enabled and functioning.

The second issue: if I am downloading anything in qBittorrent and attempt to play a 4K remux on Jellyfin, it buffers constantly. If I stop all downloads, the movie immediately plays without issue. 1080p files play without issue all the time.

I tried spinning up a new LXC with qBittorrent and can download Ubuntu ISOs at 30+ MiB/s when writing locally rather than over NFS. Any idea what could be causing this? Is this a read/write issue on my TrueNAS server? A networking issue making NFS slow? I've run iperf to the TrueNAS and get 9+ Gbps.
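Given that iperf is clean and local writes are fast, one suspect is the NFS I/O path itself: torrents generate lots of small random sync writes, which on an NFS-backed ZFS mirror can starve large sequential reads like a 4K remux. A quick way to separate read from write behaviour over the mount, a sketch assuming the share is mounted at /mnt/media (adjust paths):

```bash
# Sequential read from the NFS mount, bypassing the page cache
dd if=/mnt/media/movies/some-large-file.mkv of=/dev/null bs=1M iflag=direct status=progress

# Direct write to the NFS mount, to surface slow sync-write behaviour
dd if=/dev/zero of=/mnt/media/dd-test.bin bs=1M count=2048 oflag=direct status=progress
rm /mnt/media/dd-test.bin
```

If reads collapse only while a torrent is writing, pointing qBittorrent's incomplete-download directory at local disk (and only moving finished files to NFS) is a common fix.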

Guides for hosting sites in the Fediverse and federating?
Are they all gathered in one place somewhere, or does it all need to be found case by case? I'm interested in hosting a Mastodon, Pixelfed, and PeerTube instance. What are the VPS requirements? I do like the idea of hosting all my own posts/comments, etc.

Moved to Seafile. Anything I should be aware of?
I’ve used Nextcloud for a long while, but only for cloud storage and photo backup from my phone. I’ve moved the latter to Immich and just replaced the cloud storage with Seafile. So far everything is hunky-dory, but I was wondering if anyone who’s run Seafile for longer has any insights on things to watch out for. I’m following the documented backup procedure and I’m the only user (so db/file desync during backup isn’t an issue).

Proxmox backups to S3 (or similar)?
So, I've been pushing my photos to a local Immich instance and I'll need some kind of file storage too soon; the total amount of data is roughly 1.5 TB. Everything is running on a Proxmox server and that's running somewhat smoothly, but now I need to get it backed up offsite. I'm running a VPS at Hetzner, and they offer pretty decently priced S3 storage, or a 'storage box', which is just a raw disk you can connect via SMB/NFS and other protocols. Now, the question is, how do I set up automated backups from Proxmox to either of those? I suppose I could just mount something on the host locally and set up backup paths accordingly, but should the mount drop for whatever reason, is Proxmox smart enough to notice that the actual storage is missing and not fill the small local drive with backups? Encryption would be nice too, but that might be a bit too much to ask. I have enough bandwidth to manage everything, and after the initial upload the data doesn't change that much; the only question is what the best practice is for doing it.
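On the 'what if the mount drops' worry: if the mounted path is added as a directory storage, Proxmox can be told the path is supposed to be a mountpoint, and it will then refuse to write backups into the bare local directory when the mount is gone. A sketch of the relevant /etc/pve/storage.cfg entry, assuming the storage box is mounted at /mnt/storagebox (storage name and retention are placeholders):

```
dir: offsite-backup
	path /mnt/storagebox/pve-backups
	content backup
	is_mountpoint 1
	prune-backups keep-last=3
```

As for encryption at rest, native backup encryption is a Proxmox Backup Server feature; with a plain directory/SMB/NFS target you would have to encrypt the mount itself (e.g. gocryptfs over the share, or LUKS on a block device).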

Need a self-hosted solution to offload old photos from iPhones to make space in iCloud for new ones
Not sure if this is the right place, but let's give it a go. We have a family account on iCloud so all the iPhones (5 of them) can sync items to their laptops and so forth. One feature is eating all the storage space we have on iCloud: Photos. We ran out of space, so Backups, Photos, Contacts, etc. will not sync anymore. We can add more space in iCloud, but I am not keen on continually buying storage from Apple.

So my thought was to have all photos older than xyz days/months/years stored somewhere else to free up space in that iCloud account. I do not want to delete these older photos, just have them stored somewhere else but still accessible. Ideally I would be able to tell some app/solution to move photos from a phone to something self-hosted, and the user of that phone can then keep seeing the photos in either the Photos app or the app related to the self-hosted solution. Honestly, even more ideal would be to 'tell' Apple's Photos app to use the self-hosted storage and not iCloud storage. This would make the transition transparent to all the family members. Some features might no longer work (that 'memories' feature, perhaps?) but that is OK; being able to store photos is more important.

Apologies if this has been asked before, but my searching, which admittedly is not that great on my side, found no answer I could translate to my issue. Any help is appreciated! FYI, I am running Docker at home and can make services available on the internet with nginx in front as a proxy. I can also run a new service, of course; that's the self-hosting bit, as it were.

How do services like Mastohost work on a fundamental level?
This is all basically hypothetical, but it's something I want a better idea about to improve my understanding of networking, service providing, etc. Additionally, I think services like [Mastohost](https://masto.host/) are healthy for the growth of the fediverse, as they ease concerns for business use, enterprise use, or even broader "community" use. I think it will be important for the future of a federated internet that many of these types of services exist.

Let's get the obvious out of the way: you'd probably need a *lot* of hardware to achieve this type of service. We're talking about either having a micro datacenter or renting datacenter space from someone else. You're probably not going to get away with doing this on a bog-standard VPS regardless of how much storage you buy (though, if I'm wrong, feel free to correct me.)

I understand how virtualization via Proxmox works (kind of, on a surface level) and I imagine it would work similarly to that but with a preconfigured Docker image, but how exactly does someone integrate virtual machine creation with client requests? Normally I think about services running in a Docker container which communicate with other containers or the host server; so, for example, you can configure your Jellyfin to be visible to other containers that might be interested in sharing data between the two. But when it comes to requests for hosting *new* Docker images that need persistent space, how would you manage such a task? Additionally, if we're talking about a multi-computer environment, how do you funnel a request for a new instance to one-of-many machines?

This seems like a basic, fundamental server hosting question and may not be appropriate for "self hosting" as it's probably beyond the scale of what most of us are willing to do; but humor a man who simply wants to understand a bit more about modern enterprise compute problems. Feel free to share any literature or online documentation that talks about solving these types of tasks.
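To make the 'client request → new instance' step concrete: at small scale, a signup handler can literally shell out to Docker with per-customer names, volumes, and routing labels, and a label-driven reverse proxy picks up the new hostname. A toy sketch (every name here is hypothetical, and a real Mastodon deployment also needs its database, Redis, and Sidekiq):

```bash
#!/usr/bin/env bash
# provision.sh <customer> <domain>: what a hosting service's API might run per signup.
set -euo pipefail
CUSTOMER="$1"
DOMAIN="$2"

# Persistent space: one named volume per customer
docker volume create "${CUSTOMER}-data"

# One container per customer; a label-driven proxy (e.g. Traefik) routes the hostname
docker run -d \
  --name "mastodon-${CUSTOMER}" \
  --restart unless-stopped \
  -v "${CUSTOMER}-data:/mastodon/public/system" \
  -e LOCAL_DOMAIN="${DOMAIN}" \
  --label "traefik.http.routers.${CUSTOMER}.rule=Host(\`${DOMAIN}\`)" \
  ghcr.io/mastodon/mastodon   # image reference illustrative
```

The multi-machine version of this is exactly what schedulers solve: Kubernetes, Nomad, or Docker Swarm take "run this workload somewhere with capacity", pick a node, and the ingress layer routes the customer's hostname to wherever it landed.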

A collection of 150+ self-hosted alternatives to popular software
Hey, community :) I run a website that showcases the best open-source companies. Recently, I've added a new feature that filters self-hosted tools and presents them in a searchable format. Although there are other options available, like Awesome-Selfhosted, I found it difficult to find what I needed there, so I decided to display the information in a more digestible format. You can check out the list here: [https://openalternative.co/self-hosted](https://openalternative.co/self-hosted) Let me know if there’s anything else I should add to the list. Thanks!

I'm proud to share a major development status update of [XPipe](https://github.com/xpipe-io/xpipe), a new connection hub that allows you to access your entire server infrastructure from your local desktop. XPipe 14 is the biggest rework so far and provides an improved user experience, better team features, performance and memory improvements, and fixes to many existing bugs and limitations. If you haven't seen it before, XPipe works on top of your installed command-line programs and does not require any setup on your remote systems. It integrates with your tools such as your favourite text/code editors, terminals, shells, command-line tools and more. Here is what it looks like:

![Hub](https://i.imgur.com/i7xQ3t8.png)

![Browser](https://i.imgur.com/00Sp1J0.png)

## Reusable identities + Team vaults

You can now create reusable identities for connections instead of having to enter authentication information for each connection separately. This will also make it easier to handle any authentication changes later on, as only one config has to be changed. Furthermore, there is a new encryption mechanism for git vaults, allowing multiple users to have their own private identities in a shared git vault by encrypting them with the personal key of your user.

## Incus support

- There is now full support for Incus
- The newly added features for Incus have also been ported to the LXD integration

## Webtop

For users who also want to have access to XPipe when not on their desktop, there exists the [XPipe Webtop](https://github.com/xpipe-io/xpipe-webtop) docker image, which is a web-based desktop environment that can be run in a container and accessed from a browser. This docker image has seen numerous improvements and is considered stable now. There is now support for ARM systems to host the container as well. If you use [Kasm Workspaces](https://kasmweb.com/), you can now integrate the webtop into your workspace environment via the [XPipe Kasm Registry](https://github.com/xpipe-io/kasm-registry).

## Terminals

- Launched terminals are now automatically focused after launch
- Added support for the new [Ghostty terminal](https://ghostty.org/download) on Linux
- There is now support for [Wave terminal](https://www.waveterm.dev/) on all platforms
- The Windows Terminal integration will now create and use its own profile to prevent certain settings from breaking the terminal integration

## Performance updates

- Many improvements have been made to RAM usage and memory efficiency, making XPipe much less demanding on available main memory
- Various performance improvements have also been implemented for local shells, making almost any task in XPipe faster

## Services

- There is now the option to specify a URL path for services that will be appended when opened in the browser
- You can now specify the service type instead of always having to choose between http and https when opening it
- There is now a new service type to run commands on a tunneled connection after it is established
- Services now show better whether they are active or inactive

## File transfers

- You can now abort an active file transfer. You can find the button for that on the bottom right of the browser status bar
- File transfers where the target write fails due to permission issues or missing disk space are now cancelled more cleanly

## Miscellaneous

- There are now translations for Swedish, Polish, and Indonesian
- There is now the option to censor all displayed contents, allowing for a simpler screen-sharing workflow with XPipe
- The Yubikey PIV and PKCS#11 SSH auth options have been made more resilient against PATH issues
- XPipe will now commit a dummy private key to your git sync repository so your git provider can potentially detect any leaks of your repository contents
- Fix password manager requests not being cached and requiring an unlock every time
- Fix Yubikey PIV and other PKCS#11 SSH libraries not asking for a PIN on macOS
- Fix some container shells not working due to some issues with /tmp
- Fix fish shells launching as sh in the file browser terminal
- Fix zsh terminals not launching in the current working directory in the file browser
- Fix permission denied errors for script files in some containers
- Fix some file names that required escapes not being displayed in the file browser
- Fix special Windows files like OneDrive links not being shown in the file browser

## A note on the open-source model

Since it has come up a few times, in addition to the note in the git repository, I would like to clarify that XPipe is not fully FOSS software. The core that you can find on GitHub is Apache 2.0 licensed, but the distribution you download ships with closed-source extensions. There's also a licensing system in place, as I am trying to make a living out of this. I understand that this is a deal-breaker for some, so I wanted to give a heads-up.

## Outlook

If this project sounds interesting to you, you can check it out on [GitHub](https://github.com/xpipe-io/xpipe) or visit the [website](https://xpipe.io/) for more information. Enjoy!


[SOLVED] Nextcloud can’t see config.php in new install directory
Update: Turned out I had like 3 versions of PHP and 2 versions of Postgres all installed in different places and fighting like animals. Cleaned up the mess, did a fresh install of PHP and Postgres, restored the Postgres data to the database, and Bob's your uncle. What a mess. Thanks to everyone who commented. Your input is always so helpful.

----- Original Post -----

Hey everyone, [it's me again.](https://lemmy.world/post/24523007) I'm now on NGINX; surprisingly simple. I'm not here with a webserver issue today though, rather a Nextcloud-specific issue. I removed my last post about migrating from Apache to Caddy after multiple users pointed out security issues with what I was sharing, as well as suggesting Caddy would be unable to meet my complex hosting needs. Thank you, if that was you.

During the NGINX setup, which has gone shockingly smoothly, I moved all of my site root directories from /usr/local/apache2/secure to /var/www/. Everything so far has moved over nicely... that is, until Nextcloud. It shows an "Internal Server Error" when loading up. When I check the logs in nextcloud/data/nextcloud.log, it tells me Nextcloud can't find the config.php file and is still looking in the old Apache webroot. I have googled relentlessly for about four hours now, and everything I find is about people moving data directories, which is completely irrelevant.

Does anyone know how to get F*%KING Nextcloud to realize that config.php is in /var/www/nextcloud/config where it belongs? I'm assuming Nextcloud has an internal variable for its own document root, but I can't seem to find it. Thanks for any tips. Cheers

[nextcloud.log](https://privatebin.io/?b18c2032de4f5f59#HB9Aq6frnYQJSEuW6Nu2xB2XGc6qeVM5XN6g8GknBAr6) <- you can click me

Migrate from YunoHost to Docker?
I’m still a newcomer to self hosting, and I could use some guidance on how to best accomplish what I’m trying to do. Right now, I’ve got AdGuard, Jellyfin, and Nextcloud running on a Raspberry Pi 4 with a 500 GB external hard drive, using YunoHost. Those services are all available at my free domain name provided by YunoHost. I’d like to run all of those services on the same Pi they’re on now, but using Docker, so I have more control and access to more applications. I would also like to configure a reverse proxy so I can access them at, for example, nextcloud.mydomain.com. (YunoHost doesn’t support custom domains from Porkbun, which is the registrar I’m using.) What would be the least painful way to go about this? I understand how Docker works conceptually, but I admittedly don’t really know how to use it in practice. Are there any resources available that would get me up to speed quickly? Appreciate the help - thanks!
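For a sense of what the end state looks like, a minimal sketch of the compose + reverse proxy pattern using Caddy, which handles Let's Encrypt certificates automatically (service versions, paths, and the domain are placeholders, and you'd copy your existing data out of YunoHost into the mounted directories):

```yaml
# docker-compose.yml
services:
  caddy:
    image: caddy:2
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data

  jellyfin:
    image: jellyfin/jellyfin:latest
    volumes:
      - ./jellyfin/config:/config
      - /mnt/external/media:/media

volumes:
  caddy_data:
```

```
# Caddyfile: one block per service; Caddy fetches and renews the certs
jellyfin.mydomain.com {
    reverse_proxy jellyfin:8096
}
```

Migrating one service at a time (Jellyfin first, since it holds the least irreplaceable state) is usually the least painful order.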

Using Jenkins to deploy Docker containers?
I want to automate some homelab things, specifically deploying new and updating existing Docker containers. I would like to publish my entire Docker Compose stacks (minus env vars) to a public Git repo, and then use something to select a specific compose file from that repo, on a specific branch (so I can have a physically separate server for testing), and automatically deploy the container. I thought of Jenkins, as it is quite flexible and I am very willing to code it together, but are there any tools like this that I should look into instead? I've heard Ansible is not ideal for Docker Compose.
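For what it's worth, the Jenkins route can stay very thin: the job is basically a checkout plus one `docker compose` invocation, with secrets kept on the host. A sketch of the script such a job might run (paths and names are placeholders):

```bash
#!/usr/bin/env bash
# deploy.sh <stack> [branch]: run by a Jenkins job, webhook handler, or cron.
set -euo pipefail
STACK="$1"
BRANCH="${2:-main}"

# Sync the public compose repo to the branch this server tracks
git -C /opt/stacks fetch origin
git -C /opt/stacks checkout "origin/${BRANCH}" -- "${STACK}/docker-compose.yml"

# Env vars live on the host, outside the public repo
docker compose -f "/opt/stacks/${STACK}/docker-compose.yml" \
  --env-file "/opt/secrets/${STACK}.env" \
  up -d --pull always
```

The branch parameter gives you the prod/test split: the test box runs the same script pointed at a testing branch.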

App that syncs email accounts?
So all of these apps like Gmail, Outlook, etc. let me log in to all of my different email accounts, but none of them seem to sync across devices. Is there an app that lets me log in to all my inboxes once and then sync that login info across PC, iPhone, and Android? Right now I have to manually add all the email accounts on each device; none of the mobile apps sync with their PC counterparts.

AppFlowy Web is Live
See the video description for details on what it supports. From the email: > 🆕 Self-hosters, you can now configure web server URLs in our desktop and mobile applications to enable features like Publish, Copy Link to Share, Custom URLs, and more. Download the latest version to give it a try!

Breaking Changes to latest Element server
Those who run an Element (Synapse) server on a PostgreSQL version older than 13 will need to update to a newer PostgreSQL major version. I found [these instructions by 'maxkratz' on their GitHub page](https://github.com/element-hq/synapse/issues/18085#issuecomment-2593432237), which worked perfectly for me to go from 11 to 16. Hopefully this helps someone!
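On Debian/Ubuntu-style installs, the whole major-version jump can be done with the postgresql-common cluster tools; a condensed sketch of what such an upgrade amounts to (versions are examples; stop Synapse first and take a backup):

```bash
sudo systemctl stop matrix-synapse

# Install the new major version; the package creates an empty 16/main cluster
sudo apt install postgresql-16
sudo pg_dropcluster --stop 16 main

# Migrate the old cluster's data and config to the new version
sudo pg_upgradecluster 11 main

sudo systemctl start matrix-synapse

# Once everything checks out, remove the old cluster
sudo pg_dropcluster 11 main
```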

Help Needed: Homepage Configuration – Missing Widgets & API Errors
Hi everyone,

I'm running Homepage (v0.10.9) in Docker on Arch Linux ARM (Stormux) and encountering issues with missing widgets and API errors. Some widgets are showing as "Missing" on the dashboard, and I'm seeing repeated HTTP 401 errors for Portainer and Tailscale in the logs.

Setup details:

- Homepage version: v0.10.9
- Host OS: Arch Linux ARM (Stormux)
- Host IP: 192.168.1.137
- Docker network: all containers are on homepage_net (gateway: 172.23.0.1)
- Docker containers: Homepage, Portainer, Miniflux, Uptime Kuma, Glances, etc.

Issues:

1. Several widgets showing as "Missing": AdGuard (running on the host, not in Docker), Netdata, Uptime Kuma, Docker, Portainer, Miniflux, Tailscale.
2. Repeated HTTP 401 errors for Portainer and Tailscale in the logs.

What I've tried:

1. Separated service definitions (services.yaml) and widget configurations (widgets.yaml).
2. Updated widget URLs to use appropriate addresses (host IP for AdGuard, container names or Docker network IPs for containerized services).
3. Regenerated API keys for Portainer and Tailscale.
4. Verified all containers are on the same network (homepage_net).
5. Enabled debug logging in Homepage.

Configuration files: I've uploaded them here: [https://gist.github.com/Lanie-Carmelo/e01d973bc3b208e5082011e4b76532f6](https://gist.github.com/Lanie-Carmelo/e01d973bc3b208e5082011e4b76532f6). API keys and passwords have been redacted.

Any help troubleshooting this would be greatly appreciated! Let me know if you need additional details.

[#SelfHosting](https://caneandable.social/tags/SelfHosting) [#Linux](https://caneandable.social/tags/Linux) [#ArchLinux](https://caneandable.social/tags/ArchLinux) [#Docker](https://caneandable.social/tags/Docker) [#HomeLab](https://caneandable.social/tags/HomeLab) [#OpenSource](https://caneandable.social/tags/OpenSource) [#WebDashboard](https://caneandable.social/tags/WebDashboard) [#ArchLinuxARM](https://caneandable.social/tags/ArchLinuxARM) [@selfhosted](https://lemmy.world/c/selfhosted) [@linux](https://lemmy.ml/u/linux) [@docker](https://lemmy.world/c/docker) [@opensource](https://a.gup.pe/u/opensource) [@selfhosting](https://a.gup.pe/u/selfhosting) [@selfhost](https://lemmy.ml/c/selfhost)
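For reference, 401s usually point at the widget credentials rather than the network: Homepage's Portainer widget expects an API access token plus the numeric environment ID, and the Tailscale widget expects a device ID and API key. A sketch of what those entries typically look like in services.yaml (all values are placeholders; check them against the gethomepage widget docs):

```yaml
- Infrastructure:
    - Portainer:
        href: https://192.168.1.137:9443
        widget:
          type: portainer
          url: https://192.168.1.137:9443
          env: 1                      # environment ID from the Portainer URL
          key: ptr_xxxxxxxxxxxxxxxx   # access token, not your login password

    - Tailscale:
        widget:
          type: tailscale
          deviceid: device-id-from-admin-console
          key: tskey-api-xxxxxxxxxxxx
```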

Looking for Recommendations - FOSS WAF
Hey everyone! I'm looking into spinning up a WAF as the number of services I'm hosting slowly grows. I want to have a better understanding of the traffic, and also the relative peace of mind that if there is a flaw in one of the services I'm hosting, the WAF could help mitigate it. I've seen two big names come up while searching:

- SafeLine
- BunkerWeb

They are popular and look quite good all around, but I don't want to just mindlessly pick the project with the most GitHub stars. What WAF are you using or have you used? Which ones do you recommend?

[SOLVED] Can’t renew cert on a self-hosted lemmy instance D:
EDIT: Thanks everyone for your time and responses. To break as little as possible while attempting to fix this, I've opted to go with ZeroSSL's DNS process to acquire a new cert. I wish I could use this process for all of my certs, as it was very quick and easy. Now I just have to figure out the error message Lemmy is throwing about not being able to run scripts. Thank you all sincerely for your time. I understand a lot more than I did last night.

-------- Original Post --------

As the title says, I'm unable to renew a cert on a self-hosted Lemmy instance. A friend of mine just passed away, and he had his hands all up in this and had it working like magic. I'm not an idiot and have done a ton of the legwork to get our server running and working, but Lemmy specifically required a bit of fadanglin' to get working correctly. Unfortunately he's not here to ask for help, so I'm turning to you guys. I haven't had a problem with any of my other software, such as Nextcloud or Pixelfed, but for some reason Lemmy just refuses to cooperate. I'm using acme.sh to renew the cert because that's what my buddy was using when he set this all up. I'm running apache2 on a bare-metal Ubuntu server.

Here's my httpd-ssl.conf: https://pastebin.com/YehfTPNV

Here's some recent output from my acme.sh/acme.log: https://pastebin.com/PESVVNg4

Here's the terminal readout and what I'm attempting to execute: https://pastebin.com/jfHfiaE0

If you can make any suggestions at all on what I might be missing or what may be configured incorrectly, I'd greatly appreciate a nudge in the right direction, as I'm ripping my hair out. Thank you kindly for your time.
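For anyone in a similar spot: the manual DNS dance from the edit can be automated with acme.sh itself, since it supports DNS-01 validation through most DNS providers' APIs, which avoids touching the webserver at all. A sketch assuming Cloudflare-managed DNS (the token variable is per acme.sh's dnsapi docs; other providers use different variables):

```bash
# API token with DNS edit permission for the zone
export CF_Token="your-cloudflare-api-token"

# Issue (and later auto-renew) via a DNS TXT record instead of HTTP validation
acme.sh --issue --dns dns_cf -d lemmy.example.org

# Install the cert where Apache expects it and reload on renewal
acme.sh --install-cert -d lemmy.example.org \
  --fullchain-file /etc/ssl/lemmy/fullchain.pem \
  --key-file /etc/ssl/lemmy/privkey.pem \
  --reloadcmd "systemctl reload apache2"
```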

I’m not asking for much! Android Music app (requirements below)
UPDATE: Thank you guys for all the suggestions! I got Navidrome installed on my NAS in a matter of minutes, got to test like a half dozen Subsonic-compatible apps (both FOSS and Play Store), and it looks like Symfonium + Navidrome meets my needs. I'll keep testing before my free trial of Symfonium ends, but I really appreciate the nudge to try a new music server!

____

I'm self-hosting my music collection (Synology NAS), and while I've liked Poweramp, it only reads local music files, which means I have to copy many GB of music to my phone, even if I'm not particularly listening to it. The Synology DS Audio app actually does what I want: it caches music locally as you're streaming it, but it reads directly from the NAS. The only problem with DS Audio is that it sucks as an actual music player. Are there any Android music players, preferably FOSS or at least privacy-friendly, that will read from the NAS and cache in an intelligent way but also work well as an actual music player? I did try Symfonium, but couldn't get it to work with WebDAV or SMB, plus the dev comes off as a real asshole, so I'd rather not give them money.

EDIT: To clarify what I'm looking for:

- The app must be able to connect to my NAS music collection (through my local network is fine).
- Most importantly, the app must be able to cache my music either as I'm streaming it, or in advance when I'm running through a playlist; future plays of the song should then come from the cache.
- I do NOT want to have to manually download or sync files, which is how I've been doing it, and I don't like it at all.

If you've used the Synology DS Audio app, then you'll know exactly the behaviour I'm looking for. It really is a shame that DS Audio sucks as a music player, or else it would be exactly what I'm looking for.

WebDAV on Windows 11 - HTTPS Not Working & Sync Issues (Local Network Only)
Hello everyone! I'm very new to WebDAV and I'm pulling my hair out trying to set it up on Windows 11 for **local network use only** (no internet access needed). I've hit two major roadblocks, and I'm hoping someone here can save me from this nightmare.

### The problems:

1. **HTTPS connection fails:** I can only get WebDAV to work over **HTTP**, not **HTTPS**. I've created a self-signed certificate, but it's still not working. Am I missing something obvious?
2. **Sync issues with Android apps and another computer:** I've tried syncing with apps like **Joplin**, **EasySync**, **DataBackup**, and **Diarium**. While they can **push data to the WebDAV server**, they can't **pull data back**. It's like the `PUT` method works, but `GET` doesn't. Is this a certificate issue, a permissions problem, or something else entirely?

---

### What I've done so far:

Here's my setup process in case it helps diagnose the issue:

#### 1. Windows Features:

- Enabled **Internet Information Services** (IIS) (which auto-enabled **Web Management Tools** and **World Wide Web Services**).
- Enabled **WebDAV Publishing** under _World Wide Web Services > Common HTTP Features_.
- Enabled **Basic Authentication** under _World Wide Web Services > Security_.

#### 2. IIS Manager:

- In **Default Web Site > WebDAV Authoring Rules**, I enabled WebDAV and added an authoring rule for **All users** with **Read**, **Source**, and **Write** permissions.
- Enabled **Basic Authentication** and disabled **Anonymous Authentication** and **ASP.NET Impersonation**.
- Created a **self-signed certificate** under _Server Certificates_ and bound it to the **Default Web Site** for HTTPS.

#### 3. Folder Setup:

- Created a folder (e.g. `C:\WebDAVShare`) and added it as a **Virtual Directory** in IIS with an alias (e.g. `webdav`).
- Set permissions for a local user (`DESKTOP-PC\webdavuser`) with **Full Control**.

#### 4. Directory Browsing:

- Enabled **Directory Browsing** in IIS.

#### 5. Accessing WebDAV:

- Accessed the server via `https://192.168.1.10/webdav` in my browser.
- Entered credentials (`DESKTOP-PC\webdavuser` + password) and could see the files, but the connection was **HTTP**, not **HTTPS**.

---

### Additional info:

- I've exported and installed the self-signed certificate on both my Android devices (Android 13 & 15) as **VPN and app user certificates**. I couldn't install them as **CA certificates**; not sure if that's the issue.

---

### What am I missing?

- Why isn't HTTPS working despite the self-signed certificate?
- Why can't my Android apps pull data from the WebDAV server (nor another computer on the same network)?
- Is there a specific Windows feature, permission, or setting I've overlooked?

I'm at my wit's end here, so any help would be **hugely appreciated**. If you've dealt with WebDAV on Windows 11 or have any insights, please chime in! Thanks in advance, and I'm sorry if this is not the right place to ask this :(

Email server help: postfix and spamassassin with user prefs in MySQL
Hi, I'm trying and failing to get SpamAssassin to load user prefs from a MySQL database. I'm using spamass-milter and I can't find any way in the docs to pass anything through: spamass-milter fails to parse the recipient as the user and just uses its own running user in its call to spamd. The database is properly configured; I can connect and set settings from Roundcube, and the SQL config is added to local.cf. I know that you can use spamd as a pipe, and then you can pass more variables, but I can't figure out the correct config for this setup.

This is what I have in /etc/default/spamd:

`OPTIONS="-Q -x --max-children 5 -D sql,bayes -H /etc/mail/spamassassin/"`

I've also tried multiple combinations of flags: with -q, without -x...

And this is what I have in /etc/default/spamass-milter:

`OPTIONS="-u spamass-milter -x -i 127.0.0.1"`

where again I've tried without -u, and with `-e domain.com` to explicitly set the domain. If anyone has any advice or can point me to a recent tutorial for Ubuntu 24.04, I would be really grateful!

Hello. Notesnook is an end-to-end encrypted note-taking alternative to Evernote. I wanted to self-host a Notesnook sync server really badly, but I'm a noob. So, I worked hard on it and came up with this noob-proof tutorial on how to set up a Notesnook sync server with local file storage, taking inspiration from the docker-compose provided in the repository. That's my way of giving back to the self-hosting community. I hope it can help some people.

***

## Overview

This guide will help you set up a self-hosted instance of [Notesnook](https://github.com/streetwriters/notesnook-sync-server) using Docker Compose.

---

## Prerequisites

- **Linux server** with Docker and Docker Compose installed.
- **Domain name** with the ability to create subdomains.
- Basic understanding of terminal commands.
- Ports **5264**, **6264**, **7264**, **8264**, **9090** and **9009** available. Or you can change them, but take good note of your changes.

---

## 1. Directory Structure Setup

Create the required directories:

```
# Create data directories
mkdir -p /srv/Files/Notesnook/db
mkdir -p /srv/Files/Notesnook/s3
mkdir -p /srv/Files/Notesnook/setup
```

---

## 2. Configuration Files

### 2.1. Environment File

Create the `.env` file:

```
cd /srv/Files/Notesnook/setup
nano .env
```

Add the following content (modify the values accordingly):

```
# Instance Configuration
INSTANCE_NAME=My Notesnook
DISABLE_SIGNUPS=false
NOTESNOOK_API_SECRET=your_secure_api_secret_here

# SMTP Configuration
SMTP_USERNAME=your_email@domain.com
SMTP_PASSWORD=your_smtp_password
SMTP_HOST=smtp.your-server.com
SMTP_PORT=587

# Public URLs (replace domain.com with your domain)
AUTH_SERVER_PUBLIC_URL=https://auth.domain.com/
NOTESNOOK_APP_PUBLIC_URL=https://notes.domain.com/
MONOGRAPH_PUBLIC_URL=https://mono.domain.com/
ATTACHMENTS_SERVER_PUBLIC_URL=https://files.domain.com/

# MinIO Configuration
MINIO_ROOT_USER=admin
MINIO_ROOT_PASSWORD=your_secure_password_here
```

---

### 2.2. Docker Compose File

Create the `docker-compose.yml` file:

```
nano docker-compose.yml
```

Paste the following content:

```
x-server-discovery: &server-discovery
  NOTESNOOK_SERVER_PORT: 5264
  NOTESNOOK_SERVER_HOST: notesnook-server
  IDENTITY_SERVER_PORT: 8264
  IDENTITY_SERVER_HOST: identity-server
  SSE_SERVER_PORT: 7264
  SSE_SERVER_HOST: sse-server
  SELF_HOSTED: 1
  IDENTITY_SERVER_URL: ${AUTH_SERVER_PUBLIC_URL}
  NOTESNOOK_APP_HOST: ${NOTESNOOK_APP_PUBLIC_URL}

x-env-files: &env-files
  - .env

services:
  validate:
    image: vandot/alpine-bash
    entrypoint: /bin/bash
    env_file: *env-files
    command:
      - -c
      - |
        required_vars=(
          "INSTANCE_NAME"
          "NOTESNOOK_API_SECRET"
          "DISABLE_SIGNUPS"
          "SMTP_USERNAME"
          "SMTP_PASSWORD"
          "SMTP_HOST"
          "SMTP_PORT"
          "AUTH_SERVER_PUBLIC_URL"
          "NOTESNOOK_APP_PUBLIC_URL"
          "MONOGRAPH_PUBLIC_URL"
          "ATTACHMENTS_SERVER_PUBLIC_URL"
        )
        for var in "$${required_vars[@]}"; do
          if [ -z "$${!var}" ]; then
            echo "Error: Required environment variable $$var is not set."
            exit 1
          fi
        done
        echo "All required environment variables are set."
    restart: "no"

  notesnook-db:
    image: mongo:7.0.12
    hostname: notesnook-db
    volumes:
      - /srv/Files/Notesnook/db:/data/db
      - /srv/Files/Notesnook/db:/data/configdb
    networks:
      - notesnook
    command: --replSet rs0 --bind_ip_all
    depends_on:
      validate:
        condition: service_completed_successfully
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongosh mongodb://localhost:27017 --quiet
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s

  initiate-rs0:
    image: mongo:7.0.12
    networks:
      - notesnook
    depends_on:
      - notesnook-db
    entrypoint: /bin/sh
    command:
      - -c
      - |
        mongosh mongodb://notesnook-db:27017 <<EOF
        rs.initiate();
        rs.status();
        EOF

  notesnook-s3:
    image: minio/minio:RELEASE.2024-07-29T22-14-52Z
    ports:
      - 9009:9000
      - 9090:9090
    networks:
      - notesnook
    volumes:
      - /srv/Files/Notesnook/s3:/data/s3
    environment:
      MINIO_BROWSER: "on"
    depends_on:
      validate:
        condition: service_completed_successfully
    env_file: *env-files
    command: server /data/s3 --console-address :9090
    healthcheck:
      test: timeout 5s bash -c ':> /dev/tcp/127.0.0.1/9000' || exit 1
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s

  setup-s3:
    image: minio/mc:RELEASE.2024-07-26T13-08-44Z
    depends_on:
      - notesnook-s3
    networks:
      - notesnook
    entrypoint: /bin/bash
    env_file: *env-files
    command:
      - -c
      - |
        until mc alias set minio http://notesnook-s3:9000/ ${MINIO_ROOT_USER:-minioadmin} ${MINIO_ROOT_PASSWORD:-minioadmin}; do
          sleep 1;
        done;
        mc mb minio/attachments -p

  identity-server:
    image: streetwriters/identity:latest
    ports:
      - 8264:8264
    networks:
      - notesnook
    env_file: *env-files
    depends_on:
      - notesnook-db
    healthcheck:
      test: wget --tries=1 -nv -q http://localhost:8264/health -O- || exit 1
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s
    environment:
      <<: *server-discovery
      MONGODB_CONNECTION_STRING: mongodb://notesnook-db:27017/identity?replSet=rs0
      MONGODB_DATABASE_NAME: identity

  notesnook-server:
    image: streetwriters/notesnook-sync:latest
    ports:
      - 5264:5264
    networks:
      - notesnook
    env_file: *env-files
    depends_on:
      - notesnook-s3
      - setup-s3
      - identity-server
    healthcheck:
      test: wget --tries=1 -nv -q http://localhost:5264/health -O- || exit 1
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s
    environment:
      <<: *server-discovery
      MONGODB_CONNECTION_STRING: mongodb://notesnook-db:27017/?replSet=rs0
      MONGODB_DATABASE_NAME: notesnook
      S3_INTERNAL_SERVICE_URL: "http://notesnook-s3:9000/"
      S3_INTERNAL_BUCKET_NAME: "attachments"
      S3_ACCESS_KEY_ID: "${MINIO_ROOT_USER:-minioadmin}"
      S3_ACCESS_KEY: "${MINIO_ROOT_PASSWORD:-minioadmin}"
      S3_SERVICE_URL: "${ATTACHMENTS_SERVER_PUBLIC_URL}"
      S3_REGION: "us-east-1"
      S3_BUCKET_NAME: "attachments"

  sse-server:
    image: streetwriters/sse:latest
    ports:
      - 7264:7264
    env_file: *env-files
    depends_on:
      - identity-server
      - notesnook-server
    networks:
      - notesnook
    healthcheck:
      test: wget --tries=1 -nv -q http://localhost:7264/health -O- || exit 1
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s
    environment:
      <<: *server-discovery

  monograph-server:
    image: streetwriters/monograph:latest
    ports:
      - 6264:3000
    env_file: *env-files
    depends_on:
      - notesnook-server
    networks:
      - notesnook
    healthcheck:
      test: wget --tries=1 -nv -q http://localhost:3000/api/health -O- || exit 1
      interval: 40s
      timeout: 30s
      retries: 3
      start_period: 60s
    environment:
      <<: *server-discovery
      API_HOST: http://notesnook-server:5264/
      PUBLIC_URL: ${MONOGRAPH_PUBLIC_URL}

networks:
  notesnook:
```

---

## 3. Docker Images Preparation

Pull all required images to avoid timeout issues:

```
cd /srv/Files/Notesnook/setup
docker pull mongo:7.0.12
docker pull minio/minio:RELEASE.2024-07-29T22-14-52Z
docker pull streetwriters/identity:latest
docker pull streetwriters/notesnook-sync:latest
docker pull streetwriters/sse:latest
docker pull streetwriters/monograph:latest
docker pull vandot/alpine-bash
```

or just

```
cd /srv/Files/Notesnook/setup
docker compose pull
```

---

## 4. Deployment

Start the services:

```
cd /srv/Files/Notesnook/setup
docker compose up -d
```

---

## 5. Service Verification

### 5.1. Check Container Status

```
docker compose ps
```

Expected status:

- **Running containers**:
  - `notesnook-db`
  - `notesnook-s3`
  - `identity-server`
  - `notesnook-server`
  - `sse-server`
  - `monograph-server`
- **Completed containers** (should show `Exit 0`):
  - `validate`
  - `initiate-rs0`
  - `setup-s3`

### 5.2. Check Logs

```
docker compose logs
```

### 5.3. Test MinIO Access

Visit: `http://your-server:9009/`

---

## 6. Reverse Proxy Configuration with Nginx and SSL

**Enable WebSocket support for:**

- notes.domain.com (port 5264): for real-time synchronization
- events.domain.com (port 7264): for real-time notifications

**Enable asset caching for:**

- mono.domain.com (port 6264): for optimizing public notes loading

### Step 1: Install Certbot

```bash
sudo apt-get update
sudo apt-get install certbot python3-certbot-nginx
```

### Step 2: Obtain SSL Certificates

```bash
sudo certbot --nginx -d auth.domain.com -d notes.domain.com -d events.domain.com -d mono.domain.com
```

### Step 3: Modify Nginx Configuration

Use the following example configurations for each subdomain:

```nginx
# Auth Server - Basic (no cache/websocket needed)
server {
    listen 80;
    server_name auth.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name auth.domain.com;

    ssl_certificate /etc/letsencrypt/live/auth.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/auth.domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:8264/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}

# Notes Server - With WebSocket
server {
    listen 80;
    server_name notes.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name notes.domain.com;

    ssl_certificate /etc/letsencrypt/live/notes.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/notes.domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:5264/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
}

# Events Server - With WebSocket
server {
    listen 80;
    server_name events.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name events.domain.com;

    ssl_certificate /etc/letsencrypt/live/events.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/events.domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:7264/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_read_timeout 3600;
        proxy_send_timeout 3600;
    }
}

# Monograph Server - With Cache
server {
    listen 80;
    server_name mono.domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name mono.domain.com;

    ssl_certificate /etc/letsencrypt/live/mono.domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mono.domain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    location / {
        proxy_pass http://localhost:6264/;
        proxy_cache_use_stale error timeout http_500 http_502 http_503 http_504;
        proxy_cache_valid 200 60m;
        add_header X-Cache-Status $upstream_cache_status;
        expires 1h;
        add_header Cache-Control "public, no-transform";
    }
}
```

---

## 7. Useful Commands

### Service Management

```
# View real-time logs
docker compose logs -f

# View logs for a specific service
docker compose logs [service-name]

# Restart a specific service
docker compose restart [service-name]

# Stop all services
docker compose down

# Update services
docker compose pull
docker compose up -d
```

---

## 8. Maintenance

### 8.1. Backup

Regularly back up these directories:

- `/srv/Files/Notesnook/db/` (MongoDB data)
- `/srv/Files/Notesnook/s3/` (MinIO data)
- `/srv/Files/Notesnook/setup/.env` (Configuration)

### 8.2. Updates

To update all services:

```
cd /srv/Files/Notesnook/setup
docker compose pull
docker compose down
docker compose up -d
```

---

## 9. Troubleshooting

### Common Issues:

#### Service won't start

- Check logs: `docker compose logs [service-name]`
- Verify port availability.
- Check directory permissions.
- Verify environment variables.

#### Database Connection Issues

- Ensure the MongoDB replica set is initialized.
- Check MongoDB logs: `docker compose logs notesnook-db`.

#### Storage Issues

- Verify MinIO credentials.
- Check MinIO logs: `docker compose logs notesnook-s3`.

#### Email Not Working

- Verify SMTP settings in `.env`.
- Check identity-server logs.

---

## Security Notes

- Change default passwords in `.env`.
- Use strong passwords for MinIO and the API secret.
- Keep your `.env` file secure.
- Regularly update all services.
- Enable HTTPS on your reverse proxy.
- Consider implementing `fail2ban`.
- Regularly monitor logs for suspicious activity.

---

## Support

If you encounter issues:

- Check the logs.
- Visit the [Notesnook GitHub repository](https://github.com/streetwriters/notesnook).
- Join the [Notesnook Discord](https://discord.gg/5davZnhw3V) for support.

Multi system synced/living OS possible?
Hopefully someone can shed some light on this idea, or explain something that kind of fits the use case and need. I am looking for a basic operating system that can be updated across multiple devices, like a living OS. For instance, I'd have a desktop PC with high-end specs running the same operating system as a laptop or tablet, but live-synced: apps, files, and changes made on one system would be the same on all devices. I've looked at cloning drives and have done it; it's far too slow and cumbersome. The idea is essentially switching devices based on hardware power requirements while having the same living operating system synced across all of them, so all data and capabilities remain the same whenever something is needed.

Maybe I'm being far-fetched, and this might be the wrong sub, but I assumed it would almost fall under self-hosted. I've considered a NAS, and I'm open to other ways to structure the concept. ALL IDEAS WELCOME; feel free to expand on it in any way. Dealing with the different operating systems and architectures of various devices is wildly difficult sometimes: software, mobility, processing power (not watts), cross-compatibility. I've seen apps that sync across devices, but some desktop and mobile apps aren't cross-compatible, and when you self-host many services that have worked across networks and devices for years of uptime, you sort of forget the configs of everything; it's a nightmare when a single app or container update causes a domino effect.

Thanks everyone; hopefully this is helpful to others with similar needs as well.

![](https://lemmy.world/pictrs/image/dae62396-f7e9-4c21-86de-7390cd59a1c1.jpeg) I only use Jellyfin. I have a list of things I want to update... but it works for now. Yes, that is a laptop USB cooler used as supplemental placebo cooling, plus a PC fan propped up against the hard drive, feeding into the Pi. Can't recall the last time I used the PS4 or Switch, but they're there.


Secure Way to Expose Docker Containers to the Internet?
I've been researching different ways to expose Docker containers to the internet. I have three services I want to expose: [Jellyfin](https://jellyfin.org/), [Omnivore](https://github.com/omnivore-app/omnivore) (a read-it-later app), and [Overseerr](https://overseerr.dev/). I've come across lots of suggestions, like using Nginx with Cloudflared, but some people mention that streaming media violates the Cloudflare Tunnel TOS, and instead recommend Tailscale, or Traefik, or setting up a WireGuard VPN, or using Nginx with a WireGuard VPN. The amount of conflicting advice has left me confused. So, what would be the best approach to securely expose these containers?
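If the services are only for you and a few trusted people, the VPN route is the lowest-risk of the options listed: nothing is reachable from the internet except one UDP port, and the web UIs stay LAN-only. A minimal sketch of the server side of a WireGuard setup (keys and addresses are placeholders):

```ini
# /etc/wireguard/wg0.conf on the home server; forward UDP 51820 on the router
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# Your phone/laptop; add one [Peer] block per device
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

The catch is that something like Overseerr is often meant for friends and family who won't install a VPN; for that case, a reverse proxy with TLS, strong auth, and something like fail2ban in front is the usual compromise.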

Hey everyone, wanderer recently celebrated its 10th anniversary. Well, as far as minor versions go, at least.

First and foremost: what is wanderer? wanderer is a self-hosted GPS track database. You can upload your recorded GPS tracks or create new ones and add various metadata to build an easily searchable catalogue. Think of it as a fully FOSS alternative to sites like AllTrails, Komoot or Strava.

Next: thank you for almost 1.2k stars on [GitHub](https://github.com/Flomp/wanderer). It's a great motivation to see how well-received wanderer is.

By far the most requested feature since my last post was the possibility to track your activities. This is now possible on the new profile page, which shows various statistics to help you gain better insights into your trailing/running/biking habits. Lists have also received a major upgrade, allowing you to easily bundle a multiday hike and share it with other users.

If you want to give wanderer a try without installing it, you can try the [demo](https://demo.wanderer.to). When you are ready to self-host it, head over to [wanderer.to](https://wanderer.to) for the full documentation and installation guide. If you really like wanderer and would like to support its development directly, you can [buy me a coffee](https://www.buymeacoffee.com/wanderertrails).

Thanks again!

Cheers
Flomp

Hi guys! Postiz is an open-source social media scheduling tool. After much digging, I finally got Lemmy to work with Postiz. And, of course, it's available in the open source! Let me know if it works for you! And if you have suggestions for more Fediverses, I am happy to hear :)

Hi, community :) Long time no see. It's been some challenging weeks. There are some new updates for Postiz, but just a small recap: **Postiz is a social media scheduling tool supporting 17 social media channels:** **Instagram, Facebook, TikTok, Reddit, LinkedIn, X, Threads, BlueSky, Mastodon, YouTube, Pinterest, Dribbble, Slack, Discord, Warpcast, Lemmy and Telegram.** [**https://github.com/gitroomhq/postiz-app**](https://github.com/gitroomhq/postiz-app) Here are the latest updates :) * We added a stand-alone Instagram provider that doesn't require you to have Facebook business. * Added Lemmy :) * We have added short-linking. By default, it uses DUB, but we have added a nice infrastructure to easily create new providers (currently working on Bitly and short.io). When you add links, once you schedule the post, it asks if you want to shorten them. * I added a Telegram provider, which was really challenging because the way you add a Telegram bot is a bit different. * A big step into web3 - Postiz now supports scheduling to Warpcast using Neynar. * We also added a web3 login with Farcaster. Of course, everything available in the open source :) **Future**: * I started to get more into web3 and am thinking of adding Nostr also. * Default hashtags and signatures to platforms. * Post templates to write faster. * WordPress integration. * Digest - sometimes people schedule like 10 posts at once, and get 10 emails. Funny enough, Postiz got a lot of cancellations because of the TikTok ban (bummer.) Let me know what else I should add to the roadmap.


JetKVM is much like [nanoKVM](https://github.com/sipeed/NanoKVM), but a slightly more polished version.

**What is JetKVM?**

JetKVM is a high-performance, open-source KVM over IP (Keyboard, Video, Mouse) solution designed for efficient remote management of computers, servers, and workstations. Whether you're dealing with boot failures, installing a new operating system, adjusting BIOS settings, or simply taking control of a machine from afar, JetKVM provides the tools to get it done effectively.

As far as I know, these units are not available at retail yet, but can be bought via their Kickstarter.

Link to the **source code**: https://github.com/jetkvm/kvm
Link to their **website**: https://jetkvm.com/
Link to their **kickstarter**: https://www.kickstarter.com/projects/jetkvm/

Picture of a JetKVM mounted in a homelab, credits to [Jeff Geerling](https://bsky.app/profile/jeffgeerling.com/post/3leki5wftm22m).

![](https://slrpnk.net/pictrs/image/8aeff736-fda0-4aeb-8091-14fbf7e25ed9.jpeg)

Hi c/selfhosted, I am the developer of PdfDing. As this feature was requested quite often, I wanted to let you know that it is now possible to edit PDFs by adding annotations, highlighting and drawings. You can find the repo [here](https://github.com/mrmn2/PdfDing). I also got feedback that organizing PDFs with simple tags does not work for many people, so it is now possible to organize PDFs with multi-level tags. I hope this will improve the user experience. If you like PdfDing, I would be really happy about a star on [GitHub](https://github.com/mrmn2/PdfDing). The project is open source, so if anyone wants to contribute, you are welcome to do so!

Self-hostable gif database?
This might sound silly, but I really do miss sending gifs. I think it adds lighthearted fun. However, I do not want to use the Giphy or Tenor APIs; I would rather have a bunch of gifs self-hosted and accessible on my home server. Matrix has its sticker solution, but from what I have gathered, they can see that data in your chats. You can get a gif plugin, but I am sure Matrix and Giphy can see your requests with it as well. My solution as of right now is hosting Immich with a gif album that is accessible by my users; the AI search may prove useful for finding the perfect gif/reaction. I have run into a problem though: I have no idea how to batch-download gifs from Giphy and Tenor. It seems people don't just share their gif collections all willy-nilly like they do memes. Is this the best solution? How would you go about self-hosting such a service? (And if you have large amounts of gifs... can I have some 👉👈 🥺) Sorry for the silly request for help 😂 Thank you all so much.
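On the batch-download part: once you have a list of direct gif URLs (from chat exports, bookmarks, or an API's search results), plain wget handles the bulk fetching; a small sketch:

```bash
# urls.txt: one direct .gif URL per line
wget --input-file=urls.txt \
     --directory-prefix=./gifs \
     --no-clobber \
     --wait=1   # be polite to the host

# then point Immich (or any indexer) at ./gifs as a library
```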

My server won’t turn on 🫤
I decided to clean out my CPU fan as it was clogged; when I assembled everything again, it wouldn't turn on 🙁 It's an old desktop PC. There are no lights glowing on the motherboard at all, though there is none specifically labelled "power", just CPU, RAM, BOOT. None of these light up, not even a flash when it starts. I have reseated the RAM, CPU, and power cables, and removed the GPU to check. The cord leading into the PSU works, but I don't have a way to test the PSU itself or the output cables, though I have reseated them at each end. This PC was working fine before. But with no lights on the motherboard, I suspect either the mobo or the PSU?

Mobo is an ASRock X570. PSU is a SilverStone 650W Strider Gold S series.

Any help appreciated!

Edit: I made a [new post](https://lemmyverse.link/lemmy.nz/post/16775633) asking for hardware recommendations.

Edit 2: I managed to get a light on the motherboard; going to buy some more thermal paste and keep tinkering to see if I can get it started!

Edit 3: I never got that light to come on again. In the end, the comments on the other post convinced me that I already had all I needed for what I wanted (no upgrade needed), so I changed tack to fixing it. I still had suspicions about the power connection, so I bought a cheap PSU and tested it; no change. Then I bought a new motherboard (also a pretty cheap one, the cheapest that had what I needed and was also in a local store), and in the end that was the issue. Everything is up and running again now! Thanks for all the help everyone; you can now settle your bets.

[SOLVED] Noob stuck on port forwarding while trying to host own raw-HTML website. Pls help
Edit: Solution. Thanks to u/postnataldrip@lemmy.world I contacted my ISP and found out that they were in fact blocking my port forwarding capabilities. I gave them a call, I had to pay for a public IP address plan, and now it's just a matter of testing again. Thank you very much to everyone involved. I love you. It was Megacable, by the way. If anyone from my country ever encounters the same problem, I hope this post is useful to you.

Here's the original post:

Hey! Ok, so I'm trying to figure this internet thing out. I may be stupid, but I want to learn. What I'm essentially doing is trying to host my own raw HTML website on my own hardware and get it out to the internet for everyone to see (temporarily of course, I don't want to get in trouble with hackers and bots). I just want to cross that off my bucket list.

What I've done so far:

* I set up a qemu/kvm virtual machine with Debian as my server
* I configured a bridge so that it's available to my local network
* I got my raw HTML document
* I'm serving it locally with nginx
* I tried to set up port forwarding (I get stuck here)

Right now everyone in my home can see my ugly website if they go to 192.168.1.114:8080 (since I'm serving it through port 8080). However, I want to be able to go outside (I'm testing it with my mobile network using mobile data) to see my website. I've configured port forwarding on my ZTE router (ISP-issued) with the following parameters:

![](https://lemy.lol/pictrs/image/7a2be179-d564-4e68-b44b-1f63c92f439a.png)

But now, if I search for my public IP address on my phone I don't get anything. Even if I go to my.public.ip.address:8080 (did you think I was gon-give you my public IP?) I don't get anything. I've tried ping and curl: ping doesn't even transmit the packets, and curl says "Could not connect to server". So, if you guys would be so kind as to point me in the right direction, I pose the following questions:

* How do I even diagnose this?
* What am I missing?
* Am I being too stupid?
* What do I do now?

(Here's a preview of my ugly website)

![](https://lemy.lol/pictrs/image/3d37e1f2-62f1-4db1-9a3a-4a270239163d.png)

I also own a domain (with Cloudflare), so the next step is getting that set up with DNS or something. Thank youuuuuuu <3
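For anyone else hitting this: the quickest check for the ISP/CGNAT situation described in the edit is to compare the WAN address the router reports with the address the internet sees; if they differ, no amount of port forwarding will help. A sketch:

```bash
# Address the internet sees for you
curl -4 ifconfig.me

# Compare with the WAN IP on the router's status page.
# If the router's WAN IP is in 100.64.0.0/10 (or any private range),
# you're behind carrier-grade NAT and need a public IP from the ISP
# or a tunnel (VPS + WireGuard, Cloudflare Tunnel, etc.).

# Also confirm the service is reachable from the LAN side first:
curl -m 5 http://192.168.1.114:8080/
```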

