• 0 Posts
  • 65 Comments
Joined 1Y ago
Cake day: Jun 13, 2023


Kibana/ES is overkill and not worth it. I have a Loki, Promtail, and Grafana setup for my 4 VMs and 2 systems. It took about a week to get dashboards and everything going (plus GeoIP and worldmap plugin config for my public servers), but I haven’t had to touch them in the roughly two years since.
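For anyone curious how little config that setup needs, here’s a minimal Promtail sketch that ships local logs to Loki — the Loki URL and log path are assumptions for a single-host setup, adjust to your environment:

```yaml
# promtail-config.yml — minimal sketch (URL/paths are placeholders)
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml   # where Promtail tracks read offsets

clients:
  - url: http://loki:3100/loki/api/v1/push   # your Loki instance

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*log   # glob of files to tail
```

Once logs are flowing, the dashboards in Grafana are just LogQL queries against the `job` label.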


I had it open for a web server for 2.5 years because I was lazy, my IP changed a lot, I traveled, and I didn’t have a VPN set up — and I never had any issues as far as I could tell. I disabled password and root auth, but was also fine with wiping that server if there were issues. It’s certainly not recommended, but it isn’t always going to be an immediate issue.
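The two hardening settings mentioned above are a couple of lines in sshd_config — a sketch, assuming key-based auth is already working before you lock out passwords:

```
# /etc/ssh/sshd_config — key-only auth, no root login
# (test a key login in a second session BEFORE restarting sshd)
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin no
```

With that in place, internet-wide password spraying against the open port is mostly just log noise.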


Interesting that the CVEs don’t have information yet and didn’t appear to affect Bitwarden and its containers. I haven’t seen a security release from them since around March.


I went from Nginx to HAProxy, ran that for 5 years, and have now been on Caddy for 2 and am loving it. So much simpler and more efficient.
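The "so much simpler" part is easy to show — a whole reverse-proxy site in Caddy is a few lines, with TLS certificates provisioned and renewed automatically (domain and upstream port below are placeholders):

```
# Caddyfile — minimal sketch; Caddy handles HTTPS certs for you
example.com {
    reverse_proxy localhost:8080
}
```

The equivalent in Nginx or HAProxy means managing certbot/ACME, cert paths, and TLS settings yourself.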


Heavy disagree on the storage statement from what I’ve used and seen, but it works for lots of people so I’m not going to detract from it. NFS is always a pain, but Longhorn seems to have advantages.


Does the app support push notifications? I’d be interested in this, but I already use tasks.org since it supports push notifications — otherwise I won’t take the trash out until right before bed instead of before it gets dark.


They switched away from donations to implement this when they got acquired by FUTO.


The cheapest option is the monthly one with no security updates; there are still regular Pro and higher plans which are one-and-done, with no grandfathering.


Once you learn it, it isn’t super crazy, but it obviously takes a lot of effort. I think most people who use k3s and k8s at home are people who use them for work, so they already know how and where things should work and be. That said, I work with Kubernetes every day managing a handful of giant production clusters, and at home I use Unraid to keep it simple.


I use Kavita since I don’t get audiobooks and found out about it before ABS. I convert Amazon-bought ebooks to EPUB using Calibre and put them on my Unraid server to get picked up by Kavita. Any EPUB can be emailed from Kavita to my device.
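The conversion step can also be scripted with Calibre’s command-line tool if you have a batch of books — filenames here are placeholders, and this only works on DRM-free files:

```shell
# Calibre's ebook-convert infers the output format from the extension
ebook-convert "my-book.azw3" "my-book.epub"
```

Drop the results into the folder Kavita watches and it picks them up on the next scan.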


I have an all-Ubiquiti setup and only use local accounts for everything: a UDM Pro, two 8-port switches, and 2 APs (a U6 Mesh and another older AP). One of my accounts had me turn on MFA, but every device still lets me use a local account with a password and SSH key. Do you know which devices are forcing that?



May have to explore this. I still run InfluxDB and Telegraf for push-based metrics instead of pull like Prometheus. Things had been smooth for a while, but a couple months ago disk temps and metrics stopped working with no errors or missing plugins.


Hey! Finally gave it a go this morning but ran into some headaches pointing it at existing dockerized MySQL and Postgres containers on Unraid. I reached out on Discord this afternoon, but setting up auth according to the docker-compose on the site and GitHub, I get lots of errors about missing tables or properties during database initialization.


Only two I’ve thought through naming are

Roshar - Unraid server where 90% of apps/services live.

Cobalt Guard - Ubiquiti UDMPro

Maybe Knight Radiant, a character who is one, or even one of the orders would have fit better for protecting Roshar, but I like how Cobalt Guard sounds for a firewall.


Good to see this works with AntennaPod; I just need to get a gpodder service set up on my Unraid server and give this a go. I’ve been using AntennaPod on my phone for the last several years, but didn’t do backups and exports often enough, and when my Samsung dropped and died I lost 8 months of data. This would also make it a lot easier to stream on my desktop during work. Will be giving it a go here soon!


I have not used Fedora Server yet, but I like their desktop. Currently my two VMs in Unraid are Rocky Linux. I’ve been using CentOS and now Rocky for the last 5–6 years and haven’t had any complaints.


Been trying to read through to understand how all this is supposed to work. I guess it’s so you can use the Beeper app, infra, and APIs to talk to your Matrix server; the encryption/decryption/handshake happens here between Matrix and Beeper, and then messages are sent to their servers for delivery.


Ooo, definitely going to give this a shot — thanks for linking it. Their docs and guides say all of these bridges are encrypted and that, though things go through their app/services, they cannot see or save anything. It will be good to verify with my own bridge/instance, however.


I’ve had one realm with 5 clients and nothing crazy set up, running for about 3 years across 3 major versions, and haven’t had many problems.


They do have a doc for this. https://immich.app/docs/administration/backup-and-restore/

I dump my immich db weekly and every 2 weeks sync the media folders to a remote destination
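That weekly dump plus periodic sync can be a small cron script — the container name, database user, and paths below are assumptions for a typical docker-compose Immich install, adjust to yours:

```shell
#!/bin/sh
# Dump the Immich Postgres database from its container
docker exec -t immich_postgres pg_dumpall --clean --if-exists -U postgres \
  | gzip > /mnt/backups/immich/db-$(date +%F).sql.gz

# Sync the media library to a remote destination over SSH
rsync -a --delete /mnt/user/immich/library/ backup-host:/backups/immich/library/
```

Restoring is the reverse: load the dump into a fresh Postgres before starting the Immich containers, per the doc linked above.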


Also posted it because Unraid is not moving solely to annual subscriptions, as your title and others have indicated. Previous Pro and other fully included lifetime licenses are just increasing in price, and a lower tier is coming into place.



Grafana, fronting information from Prometheus, Loki, and Telegraf/InfluxDB, since I’m used to that from work and it has been a bit more set-and-forget compared to node_exporter. It’s also easier to add plugins instead of standing up a new container/service to scrape.


I have Rokus and use a Pi-hole plus NAT routing rules to keep them from using their hard-coded DNS, so they can’t reach their APIs and most ad domains. While not perfect, I don’t see many ads — maybe the odd poster while scrolling around to get to Plex or Netflix.
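The NAT trick is usually a DNAT rule on the router that rewrites any outbound port-53 traffic to the Pi-hole, which defeats devices with hard-coded DNS servers. An iptables sketch — interface name and IPs are placeholders:

```shell
# Redirect all LAN DNS (UDP/TCP 53) to the Pi-hole at 192.168.1.10,
# excluding the Pi-hole itself so it can still resolve upstream
iptables -t nat -A PREROUTING -i br0 ! -s 192.168.1.10 -p udp --dport 53 -j DNAT --to 192.168.1.10
iptables -t nat -A PREROUTING -i br0 ! -s 192.168.1.10 -p tcp --dport 53 -j DNAT --to 192.168.1.10
```

Devices doing DNS-over-HTTPS can still slip past this, since that traffic looks like ordinary port-443 HTTPS.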


I think the simplest setup is keeping all the apps and services on the local network and doing something like this guide so they are always behind a VPN, then setting up another VPN on Unraid or another device to access them from outside the local network. There are plenty of other guides for Unraid, Plex, and the arr stack out there; Unraid is just what I use, but you can use whatever OS you prefer.

https://unraid-guides.com/2021/05/19/how-to-route-any-docker-container-on-unraid-through-a-vpn/


I use Kavita and KavitaEmail: the former to organize and provide a frontend for my books, and the latter to email them to my Kindle if they’re not on there yet. My Kavita container is stopped most of the time, because I already know what I’m going to read next and just need it up to sync or send new books.

I used to just have the library I exported from Amazon and ebooks.com in a single folder on my NAS; Kavita helped clean it up a bit.

I also tried Audiobookshelf, but it’s mostly for audiobooks and podcasts and didn’t quite fit the workflow I already had and liked with Kavita and AntennaPod.


Fair — I was thinking more from the server side, not the client side where Cloudflare’s certs are the ones seen first.


I have a Cloudflare Tunnel set up for one service in my homelab, and it connects to my reverse proxy so the data between Cloudflare and my backend is encrypted separately. I get no malformed requests and no issues from Cloudflare — I even get the remote public IP data in the headers.

Everyone mentions this as an issue, and I’m sure it happens with the default of pointing cloudflared at a plain-HTTP local service, but that’s not the ONLY option.
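Pointing the tunnel at an HTTPS origin instead of plain HTTP is just an ingress setting. A cloudflared config.yml sketch — the hostname, tunnel ID, and proxy address are placeholders:

```yaml
# cloudflared config.yml — terminate the tunnel at an HTTPS reverse proxy
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  - hostname: app.example.com
    service: https://reverse-proxy.local:443
    originRequest:
      originServerName: app.example.com  # SNI to present to the origin
      # noTLSVerify: true  # only if the proxy uses a self-signed cert
  - service: http_status:404  # catch-all rule (required)
```

With this, the hop between cloudflared and the backend is its own TLS connection rather than cleartext on the LAN.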


Anything that actually works would help me keep in better touch with my aunts, cousins, and parents, who all have iPhones; I miss out on group chats since they won’t install or use anything else. Apple isn’t going to get rid of their garden unless forced, so I’m glad someone is trying something.


My work environments use Prometheus, node-exporter, and Grafana. At home I use Telegraf, InfluxDB, and Grafana (plus Prometheus for other app-specific metrics). The biggest reason I went with Telegraf and InfluxDB at home is that Prometheus scrapes data from the configured clients (pull), while Telegraf sends data to InfluxDB on a configured interval (push). Starting my homelab adventure I had 2 VMs in the cloud and 2 Pis at home, and having Telegraf send the data in to my Pis, rather than going out and scraping, made that remote setup a lot easier. I had InfluxDB behind a reverse proxy and auth, so Telegraf was sending data over TLS and only needed to authenticate to a single endpoint. That is the major difference to me, but there are also subsets of other exporters and plugins to tailor the data for each one, depending on what you want.
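The push side is just Telegraf’s InfluxDB output. A telegraf.conf sketch — the URL, organization, bucket, and token are placeholders for an InfluxDB 2.x setup:

```toml
# telegraf.conf — each agent pushes to one authenticated HTTPS endpoint
[agent]
  interval = "30s"          # how often inputs are collected and pushed

[[outputs.influxdb_v2]]
  urls = ["https://influx.example.com"]
  token = "$INFLUX_TOKEN"   # read from the environment
  organization = "homelab"
  bucket = "telegraf"

# Basic host metrics
[[inputs.cpu]]
[[inputs.mem]]
[[inputs.disk]]
```

The remote hosts only need outbound HTTPS to that one endpoint — no inbound scrape ports to expose, which is the whole appeal of push for machines outside your network.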


I self-host Kavita for about 30 ebooks and use KavitaEmail to send EPUBs to my Kindle. I also tried out Audiobookshelf, but only for podcasts, and it wasn’t quite up to my current workflow, which AntennaPod running only on my phone excels at. I also recently saw Audiobookshelf can host EPUBs and send them via SMTP from one container, so once its Android app comes out of beta and has better local-file and Android Auto support, I may give it a shot again.


I use Kavita and tried Audiobookshelf a bit after. All Kavita requires is a folder per author named “Last, First”, and you can toss any JPGs or EPUBs into those folders; that’s how I have mine structured. I didn’t have any structure set up before, so adopting this one made sense to me.


I appreciate the thoughtful reply, but my issue with their explanation is not the concepts or how it operates — it’s that they stated Cloudflare Tunnels were not an option to choose while showing they have no knowledge of how they are used or operate.


I’m self-hosting cloudflared right now; the TLS from Cloudflare terminates in a container on my network and then goes to my reverse proxy container for my local network. I’m definitely going to poke around Tailscale and their Funnels in the future — I’m just playing devil’s advocate for those replying who don’t know anything about Cloudflare Tunnels yet say they’re the wrong choice.


Got any info on how Cloudflare MITMs and decrypts all traffic but Tailscale doesn’t? Playing devil’s advocate and pointing out that not much of what you’re saying makes sense.


Just my two cents: I’d prefer my traffic going through Cloudflare vs Tailscale if it’s all the same, since I’ve heard a lot about Tailscale but know nothing. I’ve interacted on GitHub threads with people from Cloudflare and they’re all super nice, and their blog posts and post-mortems are very insightful. I was curious to see if people had actual insight, but it appears it’s just automatic “Cloudflare = bad”.


I’m curious how, if they’re functionally the same, one has all the data and the other “shouldn’t be getting your data anyway”. I was mostly curious to hear about informed differences between the products, but clearly I’m not going to get that. Cheers.


Thanks for the info. Though I fail to see how it’s much different from Cloudflare Tunnels, I’ll probably stick with those for the near future, but will try out Tailscale Funnel down the road.


Is there a specific reason that Tailscale having all the same traffic, as opposed to Cloudflare, is a better option? I use Cloudflare Tunnels right now and figured them handling some of the data is better than me handling it all by myself.