I saw this post today on Reddit and was curious to see if views are similar here as they are there.

  1. What are the best benefits of self-hosting?
  2. What do you wish you would have known as a beginner starting out?
  3. What resources do you know of to help a non-computer-scientist/engineer get started in self-hosting?
@gamermanh@lemmy.dbzer0.com

For me #2 would be “you have ADHD and won’t be able to be medicated so just don’t”

I’ve mentioned elsewhere my server upgrade project took longer than expected.

Just last night I threw it all into the trash because I just can’t anymore.

@jimmy90@lemmy.world

NixOS is awesome!

@jimmy90@lemmy.world

Although maybe not for beginners. For beginners: use Docker Compose and do backups however you like.

Can you clarify how NixOS is great for self-hosting? I was going to go with Mint.

@jimmy90@lemmy.world

You configure your whole server in one file (including Docker/Podman services), installation and configuration are taken care of by the package manager, and you pretty much only need to know one file to admin your system.

And no extra stuff is installed, only what you specify, so you get minimal resource usage.

I think this is awesome.
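
A minimal sketch of what that one file can look like; the hostname, service choices, and paths here are illustrative examples, not anything from this thread:

    # /etc/nixos/configuration.nix -- the single file that defines the server
    { config, pkgs, ... }:
    {
      networking.hostName = "homeserver";

      # Services are declared, not installed by hand
      services.openssh.enable = true;

      # Containers are config too: oci-containers wraps podman/docker
      virtualisation.podman.enable = true;
      virtualisation.oci-containers.backend = "podman";
      virtualisation.oci-containers.containers.jellyfin = {
        image = "jellyfin/jellyfin:latest";
        ports = [ "8096:8096" ];
        volumes = [ "/srv/jellyfin/config:/config" ];
      };

      system.stateVersion = "24.05";
    }

Apply it with nixos-rebuild switch and the package manager installs exactly what is declared and nothing else.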

@Kaufman5000@feddit.org

As far as operating systems go, I would recommend Debian or Ubuntu. These are very widely used and there are many resources. And if you are brave, you can start without a desktop.

Possibly linux

Not as good as Ansible, although they are different tools.

Last

deleted by creator

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

  • CGNAT: Carrier-Grade NAT
  • DNS: Domain Name Service/System
  • Git: popular version control system, primarily for code
  • HTTP: Hypertext Transfer Protocol, the Web
  • IP: Internet Protocol
  • NAS: Network-Attached Storage
  • NAT: Network Address Translation
  • NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
  • PiHole: network-wide ad-blocker (DNS sinkhole)
  • Plex: brand of media server package
  • RAID: Redundant Array of Independent Disks for mass storage
  • SMB: Server Message Block protocol for file and printer sharing; Windows-native
  • SSD: Solid State Drive mass storage
  • SSH: Secure Shell for remote terminal access
  • SSL: Secure Sockets Layer, for transparent encryption
  • TLS: Transport Layer Security, supersedes SSL
  • VPN: Virtual Private Network
  • VPS: Virtual Private Server (as opposed to shared hosting)
  • ZFS: Solaris/Linux filesystem focusing on data integrity
  • k8s: Kubernetes container management package
  • nginx: popular HTTP server

20 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.


Things I wish I’d known:

  • Don’t rush things into production.
  • Don’t offer a service to a friend without really having the knowledge and experience to keep it up when needed.
  • Don’t make it your life. The services are there to help you, not to be your life.
  • Use Docker. Podman is not yet ready for mainstream, in my experience. When services officially move to Podman, it’s time to move. Just because Jellyfin offers official documentation for it doesn’t mean it’ll work with Podman (my experience).
  • Test all services with the base Docker install first. If something isn’t working, there may be a bug or two. Report it if it is a bug. Hunt a bug down if you can. Maybe it’s just something that isn’t documented (well enough) for a beginner.
  • Start on your own machine before getting a server. A Pi is enough for lightweight stuff, but probably not for a fast and smooth experience with e.g. Nextcloud.
  • Back up.
  • Search for help. If nothing turns up in a forum, ask for help. Don’t waste many, many hours when something isn’t working, but research it first and read the documentation.
@xantoxis@lemmy.world

Podman is not yet ready for mainstream, in my experience

My experience varies wildly from yours, so please don’t take this bit as gospel.

I have yet to find a container that doesn’t work perfectly well in Podman. The options may not be the same. Most issues I’ve found with running containers boil down to things that would be equally a problem in Docker. A sample:

  • “Rootless” containers are hard to configure. It can almost always be fixed with --privileged or some combination of permission flags. This would be equally true for Docker; the only meaningful difference is that Podman tries to push everything into rootless. You don’t have to.
  • Network filesystems cause headaches, especially smbfs plus any app that uses SQLite. I’ve had to use NFS or ext4 inside a network-mounted image for some apps. This problem is identical for Docker.
  • Container networking, for specific cases, needs to be managed carefully. These cases are identical for Docker.

And that’s it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can’t do with Docker. If you are having any trouble running containers under Podman, try the --privileged shortcut, see that it works, and then double back if you think you really need rootless.
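
Roughly, that workflow looks like this, assuming a rootless setup and a podlet recent enough to have the generate subcommand; the Navidrome container is just an example:

    # Run once from the CLI to get the flags right
    podman run -d --name navidrome -p 4533:4533 \
      -v /srv/music:/music:ro docker.io/deluan/navidrome

    # Turn the running container's configuration into a quadlet unit
    podlet generate container navidrome \
      > ~/.config/containers/systemd/navidrome.container

    # systemd picks the quadlet up on reload and manages it like any unit
    systemctl --user daemon-reload
    systemctl --user start navidrome.service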

@kitnaht@lemmy.world
    • Learning. If you ever found yourself tired of learning new things, your life is basically done.
    • Cost. You already have an internet connection at home; it’s practically a necessity these days, and the connection is likely fast enough for most things. Renting even the most piddly of VPSes is wildly expensive by comparison. Just throw a spare machine at it and go wild.
    • Freedom. Your own data is constantly being collected, regurgitated, and sold back to you. More people need to care about this incessant invasion of our lives.
    • Backups. 3 copies, on different forms of storage, in multiple PHYSICALLY distinct locations. Just when you have that teeny little imp in the back of your mind say “hmm, I should probably back up soon” – stop everything you’re doing and run a backup.
    • Test your recovery! Backups are only good if you can recover from them. Many have lost data because they failed to ever fail-test their backups.
    • Google. Legitimately the best skill you can ever attain is simply being able to search effectively and be able to learn jargon quickly. Once you have the lingo down, searches become clearer, quicker, more precise.
@ChapulinColorado@lemmy.world

For #1 I would say don’t force yourself to keep learning more of the same kind of thing you already do all day. It took me a few months to get my local setup going, since I would do it after work (which exercises similar skills) and get tired of poking around.

At some point I gave up and started doing other things that brought me joy (video games, paint nights with YouTube tutorials, movies/TV). When I finally decided to get back to it, it was enjoyable again. If I had to re-do it from scratch, it could be done in probably a few hours, or at most a few nights after work, and it would be enjoyable, since the annoying “gotcha” lessons are either in memory or a few searches away and can be filtered much more quickly.

@ikidd@lemmy.world

That sex isn’t love.

And love isn’t sex

Ebby
  1. Data stays local for the most part. Every file you send to the cloud becomes property of the cloud. Yeah, you get access, but so does the hosting provider, their third-party partners, and whatever government compliance requires. Hard drives are cheap and fast enough.

  2. Not quite answering this right, but I very much enjoy learning and evolving. But technology changes, and sometimes implementing new software like Caddy/Traefik on existing setups is a PITA! I suppose if I went back in time, I would tell myself to do it the hard way and save a headache later. I wouldn’t have listened to me, though.

  3. Portainer is so nice, but it has quirks. It’s no replacement for the command line, but wow, does it save time. The console is nerdy, but when time is on the line, find a good GUI.

For item #1, self-hosted solutions like Home Assistant also allow using “smart” devices without the cloud in some instances. You are not at the mercy of a vendor going out of business or dropping support and your devices becoming bricks.

Not all devices are compatible, but from what I’ve learned, I would never buy another device with so-called “smart” features if it is not compatible with Home Assistant.

The big thing for #2 would be to separate out what you actually need vs. what people keep recommending.

General guidance is useful, but there’s a lot of ‘You need ZFS!’ and ‘You should use K8s!’ and ‘Use X software!’

My life got immensely easier when I figured out I did not need any features ZFS brought to the table, and I did not need any of the features K8s brought to the table, and that less is absolutely more. I ended up doing MergerFS with a proper offsite backup method because, well, it’s shockingly low-complexity.

And I ended up doing Docker with a bunch of compose files and bind mounts, because it’s shockingly low-complexity. And it’s just running on Debian, instead of some OS that has a couple of layers of additional software to make things “easier” because, again, it’s low-complexity.

I can re-deploy the entire stack on new hardware in about 10 minutes (I’ve tested this a few times just to make sure my backup scripts work), and there’s basically zero vendor tie-in or dependencies that you’d have to get working first, since it’s just a pile of tarballs and packages from the distro’s package manager on, well, ANY distro.

@Eximius@lemmy.world

Btrfs with its send/receive (incremental fs-level backups) is already stable enough for mostly everything (it just has some issues with RAID 5/6), and it is much more performant than ZFS. It is also in the Linux kernel tree (hugely useful). That is, of course, if ZFS-like functionality is what you’re looking for.
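
For concreteness, the send/receive flow looks roughly like this; paths are examples, /data has to be a subvolume, and /mnt/backup has to be a btrfs mount:

    # Read-only snapshots are the unit of transfer
    btrfs subvolume snapshot -r /data /data/.snap/monday

    # First run: full send
    btrfs send /data/.snap/monday | btrfs receive /mnt/backup

    # Later runs: send only the delta against a common parent snapshot
    btrfs subvolume snapshot -r /data /data/.snap/tuesday
    btrfs send -p /data/.snap/monday /data/.snap/tuesday | btrfs receive /mnt/backup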

lemmyvore

IMHO 99% of the time btrfs features are used as a band-aid for things that would be much better done otherwise, generally by using a stable distro and a decent backup solution (like Debian + Borg). And you get to use a truly stable, proven, boring fs like ext4 or XFS.
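
A bare-bones Borg routine on top of boring ext4 might look like this; the repo path and source dirs are examples:

    # One-time repository setup, encrypted at rest
    borg init --encryption=repokey /mnt/backup/repo

    # Nightly: a deduplicated, timestamped archive
    borg create --stats /mnt/backup/repo::'{hostname}-{now}' /etc /home

    # Keep a sane retention window
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/backup/repo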

@Eximius@lemmy.world

Stable, yes, but with no protection from bitrot; and ext4’s journal is the band-aid, compared to a CoW fs like ZFS or btrfs.

lemmyvore

You can protect important data with backups, which you should do anyway, and in practice I feel like the added complexity of BTRFS and ZFS is not worth the CoW.

BTRFS is cool, but they tried to cram way too much into it too fast; it added a ton of complexity, and it’s still not 100% done after all these years. A CoW mode for ext4 would have been adopted much faster.

“Already stable enough”

  1. No, it isn’t.
  2. It fucking should be, it’s been around 15 years!
@spechter@lemmy.ml

My only experience with btrfs was when trying out openSUSE Tumbleweed. Within a couple of days my home partition was busted; the next time, it was another partition. No idea if the problems could have been fixed, as these were fairly new installations to give openSUSE a try, and I couldn’t be bothered to fix a system that was giving me trouble from the very beginning.

Among all the options that Just Work™, btrfs is the one I’ve learned to stay away from.

EDIT: that was four or five years ago

@thomasloven@lemmy.world

And I’ve been using it for six of those 15 years in RAID 5/6 with zero issues, so YMMV I guess. Sorry you experienced problems.

Honestly, it’s not; BTRFS has been on my “that’s neat, but it’s still got a non-zero chance of deciding to light everything on fire because it’s bored” list for, uh, a decade now?

The NAS build is old enough to more or less predate BTRFS being usable (closing in on a decade since I did the initial OS install, jeez), and none of the features matter for what I’m storing: if every drive in my NAS died today, I’d be very annoyed for a couple of hours during the rebuild, and would lose terabytes of Linux ISOs that I can just download again, if I wanted to use Jellyfin to install them a second time. (Any data I care about is pulled offsite at least once a day, so I’ve got pretty comprehensive backups minus the ISOs.)

I know EXT4 and mergerfs and SnapRAID are not cool and don’t have shiny features, but I’ve also had zero problems with them over the last decade, even across Ubuntu upgrades (16.04, 18.04, 20.04, 22.04), hardware platform upgrades (6600k, 8700k, 10950k), the entire replacement of all the system drives (HDD -> SSD -> NVMe), and the expansion and replacement of dead HDDs of varying sizes (4TB drives to 8TB drives to 16TB drives to some 20TB drives).

It all just… worked, and at no point was I concerned about the filesystem not working if I replaced or upgraded or changed something, which is not something ZFS or BTRFS would have guaranteed during that same time window.

Can you elaborate on how your backup script re-deploys on new hardware? Sounds very nice to have.

elaborate

It’s a really simple script.

Everything is deployed with a docker compose, and all the docker volume data are bind mounts and, for example, a Jellyfin install would have everything in /stacks/jellyfin.

The backup script makes a tarball of each service individually (and stops the stack if there’s anything in there doing database things, or anything else that might end up inconsistent by just archiving the filesystem), then uploads them to an S3 storage provider AND burns them to a Blu-ray.

The recovery script does the opposite: it downloads and unarchives the data.

As long as you’re on Linux and have Docker, it should just magically work.
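
For anyone curious, a hypothetical sketch of that kind of script; the paths, stack layout, and the rclone remote are assumptions, not the actual code:

    #!/usr/bin/env bash
    set -euo pipefail

    STACKS=/stacks
    OUT=/backups/$(date +%F)
    mkdir -p "$OUT"

    for dir in "$STACKS"/*/; do
      name=$(basename "$dir")
      (cd "$dir" && docker compose stop)   # quiesce databases first
      tar czf "$OUT/$name.tar.gz" -C "$STACKS" "$name"
      (cd "$dir" && docker compose start)
    done

    # Ship offsite to any S3-compatible target, e.g. via rclone
    rclone copy "$OUT" s3remote:selfhost-backups/

Recovery is the mirror image: fetch the tarballs, unpack each one back into /stacks, and docker compose up -d.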

I see! Thanks, will try to back up my docker compose services this way.

If you write the script yourself, just make sure you test it a couple of times, and preferably with different datasets from different runs.

I found some edge-case stuff that would have prevented a restore even after I had tested it successfully a couple of times (permission issues due to changes in containers and whatnot resulted in less than the expected data being archived and restored).

I have made that migration myself, going from a Raspberry Pi 4 to an N100-based NAS. It was 10 minutes for the software stack, as you said. That’s not counting the media migration, which was done in the background over a few hours on WiFi (I had everything on an external hard drive at the time).

That last part is the only thing I would change about my self-hosting setup. Yes, the NAS has a nice form factor, is power-efficient, and has so far been a very good fit for my needs (no lag like the RPi 4). However, I have seen that they don’t really sell motherboards or parts to repair them; they want you to replace it with another one. Reason 2 along the same lines is vendor lock-in: depending on the options you select when creating the storage groups/pools (whatever they are called), you could be stuck needing something from the same vendor to read your data if the device stops working but the disks are salvageable. Reason 3 is that they’ve had security incidents, so I would never recommend using a lot of the “features”, to avoid exposing your data to ransomware over the internet. I don’t trust their competitors either; I know how commercial software is made, with the smallest amount of care for security best practices.

Yeah, I just use plain boring desktop hardware. (Oh no! I’m experiencing data corruption due to the lack of ECC!) It’s cheap, it’s available, it’s trivial to upgrade and expand, and there are very few little gotchas in there: you get pretty much exactly what it looks like you get.

Also nice is that you can have a Ship of Theseus NAS by upgrading what needs upgrading as you go along, and you aren’t tied into entire platform swaps unless it makes sense: my last big rebuild was 3 years ago, but this is basically a 10-year-old NAS at this point.

@sem@lemmy.blahaj.zone

So did you buy ECC RAM?

Last

deleted by creator

IMO a homelab for learning and a server that you’re self-hosting services on really aren’t the same thing and maybe shouldn’t be treated that way, if you can swing it.

I’d rather my password manager or jellyfin or my peertube instance or whatever not be relying on a tech stack I don’t entirely understand and might not be able to easily fix if it breaks.

I guess a lot of it is the new-to-this vs. greybeard split, since the longer I’ve done sysadmin work, the less I care about the cool new thing and the more I prefer old, stable, documented, bugfixed, supported software with a clear roadmap.

I should probably get a job doing sysadmin work for a bank, lmao.

Last

deleted by creator

This is going to be a bit of grumpy greybeard from me, but again: if you’re learning, then something like Docker and docker-compose is much simpler and less prone to fuckups than a bunch of K8s.

If you don’t know ANYTHING about what you’re doing, starting with the simplest tools and then deciding if you want to learn the more complicated ones is probably a less insane path than jumping right into the configuration-as-code DevOps pipeline.

And, at that point, you should have your “production” and “testing” environments set up in such a way they won’t eat each other when you do an oops.

Last

deleted by creator

nfh
  1. I’ve learned a number of tools I’d never used before, and refreshed my skills from when I used to be a sysadmin back in college. I can also do things other people don’t loudly recommend but that fit my style (Proxmox + Puppet for VMs), which is nice. If you have the right skills, it’s arbitrarily flexible.

  2. What electricity costs in my area: $0.32/kWh at the wrong time of day. Pricier hardware could have saved me money in the long run. Bigger drives could also mean fewer drives, and thus less power consumption.

  3. Google, selfhosting communities like this one, and tutorial-oriented YouTubers like NetworkChuck. Get ideas from people, learn enough to make it happen, then tweak it so you understand it. Repeat, and you’ll eventually know a lot.

@tburkhol@lemmy.world
  1. What electricity costs in my area: $0.32/kWh at the wrong time of day.

I assume you have this on a UPS. What about using a smart plug to switch to the UPS during the expensive part of the day, then back to mains to charge when it’s cheaper? I imagine that needs a bigger UPS than one would ordinarily spec, and that its cost would probably outweigh the electric bill savings, but you never know.

That’s not really what a UPS is designed for; they’re meant to last minutes, long enough for a clean shutdown or to start a generator.

You’d want something like a whole house battery backup instead.

Last

deleted by creator

I wish I knew not to trust closed source self-hosted applications, such as Plex. Would have saved a lot of time and money.

@warlaan@feddit.org

Can you elaborate?

@zutto@lemmy.fedi.zutto.fi

Plex is a great example here. I’ve been a Hetzner customer for many, many years, and I bought a lifetime license to Plex. Only to receive, a few months later, a notification from Plex that I was no longer allowed to self-host Plex for myself (and only myself) at Hetzner, and that they would block all access to my self-hosted Plex instance. I tried to ask for leniency or a refund, but that was wasted effort as well.

In short, I was caught in the crossfire when a for-profit company tried to please Hollywood by attempting to reduce piracy, so they could get new VC funding.

I am now a happy Jellyfin user, and I warmly recommend that all Plex users try it; the Jellyfin community is awesome!

(Use your favourite search engine to look up “Hetzner Plex ban” for more details)

@zutto @warlaan Searching about, this was Plex banning the use of Plex on Hetzner’s IP block, right? Not a decision made by Hetzner?

@zutto@lemmy.fedi.zutto.fi

Yes, correct.

I apologize if someone misunderstood my reply, Plex was the bad actor here.

@0x0@programming.dev

Are you still on Hetzner? How’s their customer support in general?

Still with Hetzner, yeah. I haven’t had to deal with Hetzner customer support in recent years at all, but they have been great in the past.

@JustMarkov@lemmy.ml

2. What do you wish you would have known as a beginner starting out?

Caddy. Once you try Caddy there’s no turning back to Nginx or Apache.

LOL, as a noob I went with Caddy, then Traefik, before settling on NPM. Ironically, all the “QoL” features people brag about just made base configs harder and led to shit randomly failing.

NPM has been solid as a rock. Even if I have to do slightly more work, it’s more reliable and does what I want more quickly and easily than the alternatives.

poVoq

That’s what everyone thinks for a while, and then they go back to Nginx.

Eh, my main reason for switching is that Caddy has Let’s Encrypt built in. My Caddyfile is really simple; it’s just a reverse proxy that handles TLS and proxies regular HTTP to my services. I don’t have it serving any files or really knowing anything about the services. Here’s my setup:

  1. HAProxy - directs subdomains to devices (in the VPN) based on SNI
  2. Caddy - manages TLS and Let’s Encrypt, and communicates with services over HTTP
  3. Nginx - serves files for things like Nextcloud, if needed (most services have their own HTTP server)

Each of these is a separate Docker container, which makes it really easy to manage and diagnose problems. The Nginx syntax for steps 1 and 2 is more complex, and the performance benefit of managing it all in one service just isn’t relevant for a self-hosted system, so I use this layered approach that makes each level as simple as possible.
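
For scale, the Caddy layer in a setup like this can be a few lines per service; the hostnames and ports below are examples, not my actual config:

    # Caddyfile: TLS certificates are obtained and renewed automatically
    jellyfin.example.com {
        reverse_proxy 127.0.0.1:8096
    }

    cloud.example.com {
        reverse_proxy 127.0.0.1:11000
    }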

lemmyvore

I’m currently in the process of separating the certificate renewal service from the reverse proxy completely.

But if you’re just starting out Nginx Proxy Manager makes it so easy.

@ahal@lemmy.ca

Out of curiosity, what’s the benefit of splitting those?

lemmyvore

It lets you change the reverse proxy or run a website with TLS completely independently of the certbot. The certbot deals with obtaining certs and leaves them in a dir, and the proxies or webservers just take them from that dir. If the proxy container breaks, the certbot still does its thing, etc.

It also makes it easier to do things like run different proxies in parallel for different purposes, chain proxies (for instance, if you need to use a VPS because you can’t forward ports), and so on.

But it’s all for advanced setups; for basic stuff I’d still go with NPM.
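
The decoupling itself is small. A sketch of the certbot half, with the domain and webroot path as examples:

    # Renewal runs on its own (cron or a systemd timer), no proxy involved
    certbot certonly --webroot -w /var/www/letsencrypt -d example.com

    # Certs land in a stable location that any proxy can mount read-only:
    #   /etc/letsencrypt/live/example.com/fullchain.pem
    #   /etc/letsencrypt/live/example.com/privkey.pem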

@ahal@lemmy.ca

Cool, makes sense, thanks for the reply! And yeah, I don’t think I’m quite there yet.

kadotux

As someone who just learned about Caddy, could you elaborate?

poVoq

You usually want less integration, not more; simple, self-contained things. Nginx is good at that. That’s also why you don’t want to use Nginx Proxy Manager or Certbot’s Nginx integration, etc. At first they look like they make things easier, but there is too much hidden complexity under the hood.

Also, sooner or later you will run into some software that you would really like to try, which is only documented for Nginx and uses some sort of image caching or similar that is hard to replicate with Caddy.

Deemo

Certbot’s Nginx integration etc

How do you obtain SSL certs, then?

poVoq

I switched to Dehydrated (with the dns-01 challenge), but Certbot itself is fine; the problem is the Nginx integration that tries to automatically change your Nginx config files.

Deemo

Do you manually update the config every 3 months?

poVoq

??? The location and the file name of the certificates don’t change, so why would I have to do that?

On the contrary: before I disabled certbot’s Nginx integration, every three months certbot would “manage” to break my Nginx and I had to manually repair it.

I think we are not talking about the same thing. I mean the Certbot extension that automatically modifies the Nginx config files. A telltale sign is usually the “# managed by certbot” comments it likes to leave behind all over your config files.

@Zeoic@lemmy.world

Not sure I agree about Proxy Manager. Anything you need to access is in the GUI, and you can easily add advanced configs to the entries. I’ve been using it for 5 or so years, and there hasn’t been anything I missed from using straight Nginx before that.

The benefit of using config files is easy version management via Git. It makes it easy to rebuild from scratch, and easy to roll back a change that breaks something.

@Zeoic@lemmy.world

Fair enough. I manage the same by backing up the VM it’s on.

@keyez@lemmy.world

I went from Nginx to HAProxy for 5 years, and have now been on Caddy for 2 and am loving it. So much simpler and more efficient.

Deemo

Silly question: how safe are Caddy plugins? (Especially DNS challenge modules like Cloudflare, DuckDNS, etc.)

https://github.com/caddy-dns https://github.com/caddy-dns/cloudflare/tree/master

Not sure if those plugins are covered by caddy’s security disclosure policy

https://github.com/caddyserver/caddy/security

@xinayder@infosec.pub

I maintain the DNS plugin for Vultr, and I can say that it’s “safe”, but if you’re worried you should check the plugins’ source code.

I believe a vulnerability is more likely in the external provider’s API client (for example, caddy-dns/vultr uses govultr) than in Caddy itself. But I wouldn’t take things for granted if I were skeptical about these plugins.

@farcaller@fstab.sh

Apparently Traefik might be better if you run Docker Compose and such, as it does auto-discovery, which reduces the amount of manual configuration required.

@ahal@lemmy.ca

I’ve been meaning to try Caddy, but I just can’t even imagine something simpler than Nginx Proxy Manager.

Presi300

Benefits:

  • Cheap storage that I can use both locally and as a private cloud. Very convenient for piracy… I mean, storing all my legally obtained files.

  • Network-wide ad-blocking. Massive for mobile games/apps.

  • Private VPN. Really useful on public networks and for bypassing network restrictions.

  • Gives me an excuse to buy really cool old server and networking hardware.

As for things I wish I knew… don’t use Windows for servers. Just don’t.

SMB sucks, try NFS.

Use Docker; managing 5 or 10 different apps without containers is a nightmare.

Bold of you to assume I’m a computer scientist or engineer, or that I have a degree, lmao. I just hate ads, subscriptions, and network restrictions, so I learned how to avoid those things. As for resources to get started… look up TrueNAS Scale. It basically does all of the work for you.

@rekorse@lemmy.world

How does the network-wide ad blocking work? That would solve a big issue with my kids.

icedterminal

You either set the DNS settings per device to point at the system running Pi-hole / AdGuard Home, or, if your router allows it, set the DNS there. It’s ideal to set it on the router.

Any time a device makes a DNS request for a domain, the domain is checked against the blocklist. If found, it’s stopped. If not found, the request gets sent upstream to the public DNS of your choice, configured during setup. I use Cloudflare (1.1.1.1, 1.0.0.1).
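
If you’d rather run it in a container, a minimal Pi-hole looks something like this; the TZ and WEBPASSWORD variables follow Pi-hole’s Docker docs, and remapping the admin UI to 8080 is an arbitrary choice:

    # DNS on 53; point your router's (or each device's) DNS at this host
    docker run -d --name pihole \
      -p 53:53/tcp -p 53:53/udp \
      -p 8080:80/tcp \
      -e TZ=Europe/Berlin \
      -e WEBPASSWORD=changeme \
      pihole/pihole:latest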

@subtext@lemmy.world

For #2 and #3: it’s probably exceedingly obvious, but I wish I had truly understood SSH, remote VS Code, and enough Git to put my configs on a Git server.

So much easier to manage things now that I’m not trying to edit docker compose files with nano and hoping and praying I find the issue when I mess something up.

I know this is coming up on my radar, but I am not quite sure where to start. Might you have any resources on hand to point me in the right direction?

Especially once I have everything dialed in the way I want, I’d love to be able to pull from my own repo to get stuff running again/spin up a new instance

@subtext@lemmy.world

Honestly, I learned a ton from these guys: https://www.smarthomebeginner.com/

I’ve diverged a good bit since then in the services I’ve added and the specifics of how I configure things (I still use Traefik, whereas I think they’ve shifted to Nginx), but they have a great example of a GitHub repo and what it looks like to manage a self-hosted server.
