• 3 Posts
  • 147 Comments
Joined 1Y ago
Cake day: Jun 11, 2023


Does Actual support investment accounts / stocks? I was using beancount/fava for tracking, but have been lazy and haven’t updated it in a long time.


It’s already been recommended, but I think Grist or a low-code/no-code thing like Baserow or NocoDB might work for you.

Also, I’d love to see what you come up with! My cats are picky eaters and I’ve been wanting to keep track of what wet food they like or not.


I use the Nexus free version. You can cache docker registries and other repos like apt/yum/pypi/etc.

It works pretty well, but could be overkill compared to some of the other options.


If the operator doesn’t allow it for some reason, uninstall it and try with the helm chart instead?

Or is there a reason to use the operator?



You can use docker exec with the garage docker image.

I’m on mobile but I think you just need something like: docker exec containerid ./garage stats
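Something like this should work, assuming the container is named "garage" and you're on the official image where the binary sits at /garage (the name and path are placeholders, check your own setup):

```shell
docker exec garage /garage status   # node/cluster overview
docker exec garage /garage stats    # storage usage statistics
```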


Garage is the simplest of the three imo.

I’ve only used it in a cluster, but it should be even simpler for one instance.



What mobile issues do you have? I use it both on desktop and mobile with sync mode turned on in the PWA.


I like it, it seems pretty stable to me. I didn’t use it much before the query/template stuff was changed. I think both are fine right now, but don’t really know what it looked like before.

There’s also “space-script” now which is basically like mini javascript plugins you can write inside your notes. It’s what drew me away from trilium in the end.

I don’t blame you for taking a break if you ran into breaking changes though. That’s one benefit to keeping your notes in regular markdown files too.



Do you use garage for backups by any chance? I was wanting to deploy it in kubernetes, but one of my uses would be to back up volumes, and… that doesn’t really help me if the kubernetes cluster itself is broken somehow and I have to rebuild it.

I kind of want to avoid a separate cluster for storage or even separate vms. I’m still thinking of deploying garage in k8s, and then just using rclone or something to copy the contents from garage s3 to my NAS.
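If I go that route, the copy step could be as simple as this (remote names are hypothetical, defined beforehand via rclone config — garage as an S3-compatible remote, nas as sftp or a local mount):

```shell
# Pull everything out of the garage bucket onto the NAS on a schedule
rclone sync garage:backups /mnt/nas/garage-backups --transfers 8 --checksum
```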


I really like mine too, I also have a tube and a pro. Both of them have a weird issue with the TV I use most often though. Both shields won’t display anything unless I boot them in safe mode.

They both work on a different tv that is 4k. This one is an older 1080p plasma. But it’s weird that it used to work just fine. It might be related to the TV, but no other devices have issues so it’s cheaper to replace one of the shields than buy a new tv lol.


I’m still using an Nvidia shield which I guess counts as an android box. I thought they’d release a new version by now, but I’m considering building an HTPC instead.

I used to use a raspberry pi 2 or 3 and it worked fine for 1080p content. Not sure if the newer pis support 4k, but it’s on my list to look into eventually.


This is an option, my main reason for not wanting to use a hosted k8s service is cost. I already have the hardware, so I’d rather use it first if possible.

Though I have been thinking of converting some sites to be statically-generated and hosted externally.


Network Policies are a good idea, thanks.

I was more worried about escaping the container, but maybe I shouldn’t be. I’m using Talos now as the OS and there isn’t much on the OS as it is. I can probably also enforce all of my public services to run as non-root users and not allow privileged containers/etc.

Thanks for recommending crowdsec/falco too. I’ll look into those.
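For reference, a default-deny ingress policy is only a few lines — this is a sketch assuming a namespace named "public" for the exposed services, adjust selectors to taste:

```shell
# Default-deny: pods in the namespace accept no ingress traffic unless
# another NetworkPolicy explicitly allows it
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: public
spec:
  podSelector: {}
  policyTypes:
    - Ingress
EOF

# Non-root / unprivileged pods can be enforced namespace-wide with the
# Pod Security "restricted" profile
kubectl label namespace public pod-security.kubernetes.io/enforce=restricted
```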


It’s mostly working fine for me.

An alternative I tried before was just whitelisting which IPs are allowed to access specific ingresses, but having the ingress listen on both public/private networks. I like having a separate ingress controller better because I know the ingress isn’t accessible at all from a public ip. It keeps the logs separated as well.

Another alternative would be an external load balancer or reverse proxy that can access your cluster. It’d act as the “public” ingress, but would need to be configured to allow specific hostnames/services through.
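For anyone wanting the IP-whitelisting variant: with ingress-nginx it’s a single annotation (the ingress name and CIDRs here are placeholders):

```shell
kubectl annotate ingress myapp \
  nginx.ingress.kubernetes.io/whitelist-source-range="10.0.0.0/8,192.168.0.0/16"
```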


I did actually consider a 3rd cluster for infra stuff like dns/monitoring/etc, but at the moment I have those things in separate vms so that they don’t depend on me not breaking kubernetes.

Do you have your actual public services running in the public cluster, or only the load balancer/ingress for those public resources?

Also how are you liking garage so far? I was looking at it (instead of minio) to set up backups for a few things.


Quadlet

I haven’t heard of Quadlet before this, thanks I’ll take a look at it.
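From what I’ve read, a Quadlet unit is just an ini file that podman’s systemd generator turns into a service. A minimal sketch, with the image and names as placeholders:

```shell
# Drop the file in ~/.config/containers/systemd/ (rootless) or
# /etc/containers/systemd/ (rootful), then reload systemd
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/web.container <<'EOF'
[Unit]
Description=Example nginx container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80

[Service]
Restart=always

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload      # quadlet generates web.service
systemctl --user start web.service
```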


Unraid has this with their cache pools. ZFS can also be configured to have a cache drive for writes.

You can also DIY with something like mergerfs and separate file systems.
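A rough sketch of both, assuming a pool named tank and placeholder device names — note a ZFS SLOG only accelerates synchronous writes, and L2ARC is a read cache:

```shell
# ZFS: add a log (SLOG) and a cache (L2ARC) device to an existing pool
zpool add tank log /dev/disk/by-id/ssd1
zpool add tank cache /dev/disk/by-id/ssd2

# mergerfs: pool an SSD and HDD; category.create=ff writes new files to
# the first branch (the SSD), which a cron job can later move to the HDD
mergerfs -o category.create=ff /mnt/ssd:/mnt/hdd /mnt/pool
```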


What you read online may have been referring to how cloudflare itself can always see the unencrypted traffic?

Cloudflare tunnels are encrypted, but inside of that encrypted tunnel could be a regular http stream.


Do you need to search inside of files for text, or just file names?

If inside of files, something simple like ripgrep/ag/grep like someone else mentioned would be an easy option.

If just file names, why not create an index of filenames and search that instead?

If you need an advanced search, maybe ElasticSearch would work for you? You’d have to upload each file to the elasticsearch server though.
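A toy sketch of the first two approaches (paths are throwaway examples):

```shell
# Set up a couple of sample files
mkdir -p /tmp/search-demo
echo "hello world" > /tmp/search-demo/notes.txt
echo "other stuff" > /tmp/search-demo/misc.txt

# Content search: grep -rl lists files whose contents match
grep -rl "hello" /tmp/search-demo

# Filename search: build an index once, then search the index instead
find /tmp/search-demo -type f > /tmp/search-demo-index.txt
grep "notes" /tmp/search-demo-index.txt
```

ripgrep (`rg`) takes the same shape and is much faster on big trees.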


Should I keep shared or separate k8s clusters?
I've been in the process of migrating a lot of things back to kubernetes, and I'm debating whether I should have separate private and public clusters.

Some stuff I'll keep out of kubernetes and leave in separate vms, like nextcloud/immich/etc. Basically anything I think would be more likely to have sensitive data in it. I also have a few public-facing things like public websites, a matrix server, etc. Right now I'm solving this by having two separate ingress controllers in one cluster - one for private stuff only available over a vpn, and one only available over public ips.

The main concern I'd have is reducing the blast radius if something gets compromised. But I also don't know if I really want to maintain multiple personal clusters. I am using Omni+Talos for kubernetes, so it's not _too_ difficult to maintain two clusters. It would be more inefficient as far as resources go, since some of the nodes are baremetal servers and others are only vms. I wouldn't be able to share a large baremetal server anymore, unless I split it into vms.

What are y'all's opinions on whether to keep everything in one cluster or not?

I have not had any issues with Kopia so far, but I have also only used it for maybe a year? My main reason for trying it was that I wanted to be able to give something to family members to use as a backup client with a reasonable ui. I can also control the default exclude list and default policies for compression/etc pretty easily.

I don’t know how many years of restic backups I have, but I still rely on it for my most important data. Anything really important on my desktop/laptop gets backed up via kopia, but also gets copied (usually via nextcloud) to a server that has hourly zfs snapshots and daily restic snapshots. Both the restic and kopia snapshots get stored on a local nas and then synced to rsync.net.


I was talking about dumping the database as an alternative to backing up the raw database files without stopping the database first. Taking a filesystem-level snapshot of the raw database without stopping the database first also isn’t guaranteed to be consistent. Most databases are fairly resilient now though and can recover themselves even if the raw files aren’t completely consistent. Stopping the database first and then backing up the raw files should be fine.

The important thing is to test restoring :)


If you’re worried about a database being corrupt, I’d recommend doing an actual backup dump of the database and not only backing up the raw disk files for it.

That should help provide some consistency. Of course it takes longer too if it’s a big db.
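With PostgreSQL, for example (the database name is a placeholder; MySQL/MariaDB has the analogous mysqldump --single-transaction):

```shell
# Logical dump in custom format: consistent snapshot even while the DB runs
pg_dump -Fc mydb > /backups/mydb.dump

# The part people skip: actually test the restore
pg_restore --clean --if-exists -d mydb /backups/mydb.dump
```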


Restic with rest-server is great.

Kopia is a little newer and has an actual web ui, so may be a good choice too.

I still use restic on all of my servers, but have started using Kopia for my non-server machines.

Both support compression, encryption, and deduplication.
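The basic restic-against-rest-server flow, for anyone curious (host and repo names are placeholders):

```shell
export RESTIC_REPOSITORY=rest:http://backup-host:8000/myrepo
export RESTIC_PASSWORD=changeme

restic init                    # once, to create the repo
restic backup /home/me         # encrypted, deduplicated snapshot
restic snapshots               # list what's there
```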


What’re you wanting to use it for, and what are your main concerns?

I’d recommend searxng, but mostly because I don’t have experience with very many others. I’ve never had any issues with it either.

Check out a public instance of the engines you’re interested in and see which you like more?


Sounds pretty cool, thanks for the details! Any chance of some pictures? My worry would be the same, I don’t know if I trust myself not to flood the house lol

I did think about using a mechanical float like in the back of a toilet, and an overflow drain in case it never stops filling


Snipe-IT and Grocy are the ones I see pretty often. I haven’t tried Homebox to compare yet though


Kanboard is pretty great even if it does feel dated. I tried a lot of the newer alternatives and they all had either weird bugs or quirks I didn’t appreciate.


Why mtls support specifically? You could use any web based notes app (with PWA) and have the web server / reverse proxy handle the mtls part.


Eh, while it sucks, registrars and web hosts get so many abuse reports that sometimes they just err on the side of caution and don’t investigate as thoroughly as you’d like.

Of course it also depends a lot on various things like what type of complaint, how much money you spend with them, account history, complaint source, etc.

They should be able to tell you what they had a problem with and give you a chance to fix it.


Self cleaning? Is it something you made, or what’s it called? I’d be interested in details either way

I really want a fancier water fountain for my cats but never found a self cleaning one :(


Uptime Kuma is great for simple up/down and web checks. Librenms is worth looking at too for other metrics.


The only time rsync is really slow is when you’re dealing with millions of small files, since it only transfers a single file at a time.

rclone is better in that respect since it transfers multiple files in parallel. I don’t think the speed of a single transfer is going to differ much.
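The parallelism in rclone is just a flag ("remote:" is a placeholder for any configured remote):

```shell
# 16 files in flight at once -- this is where rclone beats rsync on
# directories full of small files
rclone copy /data remote:data --transfers 16 --progress
```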


If you’re wanting something that keeps historical data, vnstat is another good one for network usage



Dumb question but what do you mean you cycled them a few times?


Thanks for linking it, that’s a pretty cool idea.



Simple authentication for homelab?
What's everyone's recommendations for a self-hosted authentication system? My requirements are basically something lightweight that can handle logins for both regular users and google. I only have 4-5 total users.

So far, I've looked at and tested:

- Authentik - Seems okay, but also really slow for some reason. I'm also not a fan of the username on one page, password on the next screen flow
- Keycloak - Looks like it might be lighter in resources these days, but definitely complicated to use
- LLDAP - I'd be happy to use it for the ldap backend, but it doesn't solve the whole problem
- Authelia - No web ui, which is fine, but also doesn't support social logins as far as I can tell. I think it would be my choice if it did support oidc
- Zitadel - Sounds promising, but I spent a couple hours troubleshooting it just to get it working. I might go back to it, but I've had the most trouble with it so far and can't even compare the actual config yet

Centralized backup server (with clients)?
Does anyone have recommendations for centralized backup servers that use the server/client model?

My backups are relatively simple in that I use rsync to pull everything from remote machines to a single server and then run restic on that server to back them up and also copy that backup to cloud storage.

I've been looking at some other software again like Bacula/Bareos/UrBackup and wondering if anyone's currently using one of them or something like it that they like? Ideally I'm looking for a more user-friendly polished interface for managing backups across multiple servers and desktops/laptops.

I'm testing Bareos now, but it'll probably not work out since the web ui doesn't allow adding new jobs/volumes/etc.