I think “shared hosting” there refers more to the older “upload your files in webmin and we’ll shove them in /cgi-bin/ with everybody else’s” style of hosting, where multiple users’ sites run on a single instance of a webserver, versus a VPS giving you a whole VM with SSH access?


Where the metadata goes I think is important as well.

All Signal metadata necessarily goes through Signal’s servers and is tied to your phone number, but not all Matrix metadata ever gets near Matrix.org if you are using a different homeserver.

I think both are less than ideal in that regard, and I think Briar (strictly P2P) has a much better model for dealing with this at the expense of generally being a UX disaster.


The server software appears to be available and updated now, which they’ve been spotty about in the past. I’ve updated to remove the closed-source part since that is not correct.

As for phone number: Signal still requires me to enter a phone number to create an account as of about 5 minutes ago.


Signal is centralized, not self-hostable (edit: in any meaningful way), and requires being attached to a phone number. (Edit: server source is available, but self-hosting requires recompiling and distributing a custom app to all of your contacts to actually use it.)

Matrix is decentralized, federated, fully open source with multiple client and server implementations, self-hostable, and does not require being attached to a phone number.



If you are dead set on a specifically certificate-backed access control scheme, a VPN with the ability to use the hardware-backed certificate store (such as OpenVPN) is likely easier to set up, as it is better supported on mobile devices and doesn’t require application-level support (i.e., everything is protected, not just the apps with mTLS support).

https://openvpn.net/faq/how-do-i-use-a-client-certificate-and-private-key-from-the-android-keychain/
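
To give an idea of scale, here’s a minimal client profile sketch (hostname, port, and file names are made up for illustration); on Android you’d typically drop the cert/key lines and point the OpenVPN for Android app at a keychain alias instead, per the FAQ above:

  client
  dev tun
  proto udp
  remote vpn.example.com 1194
  remote-cert-tls server
  ca ca.crt
  cert client.crt
  key client.key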


I do find rclone to be a bit more comprehensible for that purpose; rsync always makes me feel like I’m in https://xkcd.com/1168/
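
For example (remote name and paths made up), a one-way sync with a dry run first is about as complicated as my usage ever gets:

  rclone sync /srv/photos b2remote:my-bucket/photos --dry-run
  rclone sync /srv/photos b2remote:my-bucket/photos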


Must have an Android client, support mTLS, support attachments and card layout.

ps: pls don’t suggest saving to local storage and syncing that.

pls don’t suggest an app that can’t do that but is otherwise great.

Anyways, anyone aware of any app that can do that?

Nope, you seem to be well aware of the options available to you and there isn’t any one single app that meets all of your requirements, so unfortunately we can’t recommend anything at all to you, per your specific request.

You’ll have to build it yourself either from scratch or by taking one of the existing open-source tools and adding the missing functionality.

Looking forward to your pull requests!


Restic and borg are both sorta considered ‘standard’ for doing incremental backups beyond filesystem snapshotting.

I use restic and it automatically handles stuff like snapshotting, compression, deduplication, and encryption for you.
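
As a rough sketch of what day-to-day usage looks like (repo path and retention numbers are placeholders, not a recommendation):

  restic -r /srv/restic-repo init
  restic -r /srv/restic-repo backup /home /etc
  restic -r /srv/restic-repo forget --keep-daily 7 --keep-weekly 5 --keep-monthly 12 --prune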


DigitalOcean and Vultr are options that “just work” and have reasonable options available in $5-6/month category.

DO is more established and I’ve used them for nearly 10 years now for a $6/mo VPS and for managing DNS for my domains. Vultr has some much closer datacenter options if you happen to be in the southeast US, rather than basically just covering California and NYC like DO does.


Given how common it is for people to use the ‘reset password’ link for this exact purpose, it does make it seem kinda redundant to even implement passwords on many services to begin with.


People recommend backblaze B2 as a restic/rclone/borg backend because it works extremely well and is an excellent value compared to other available options, at a near-flat $6 per TB-month rate.

The reason they ‘force linux users to use their b2 product’ is very specifically done, on purpose, to avoid the exact kind of abuse you want to commit, which is uploading 18TB of near-incompressible data for them to store for $9/month or less.
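
For anyone curious, pointing restic at a B2 bucket is about two lines of setup (bucket name and credentials here are placeholders):

  export B2_ACCOUNT_ID=xxxxxxxx
  export B2_ACCOUNT_KEY=xxxxxxxx
  restic -r b2:my-backup-bucket:server init
  restic -r b2:my-backup-bucket:server backup /srv/data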

Buy a 20TB hard drive and keep it in a fireproof filebox, and maybe another to keep at a friend’s house. You don’t need cloud backups for media you can reacquire relatively easily; save that for the stuff you can’t trivially replace.


What CPU governor are you using? I saved about 40W of idle power draw switching to powersave from the default on a Ryzen 9 3900X.
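
If you want to check or change it at runtime rather than in config, something along these lines works on most setups (cpupower comes from your distro’s linux-tools package; the sysfs path can vary by driver):

  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  sudo cpupower frequency-set -g powersave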


I ran RAID-Z2 across 4x14TB and a (4+8)TB LVM LV for close to a year before finally swapping the (4+8)TB LV for a 5th 14TB drive via zpool replace, without issue. I did, however, make sure to use RAID-Z2 rather than Z1 to account for said shenanigans out of an abundance of caution, and I would highly recommend doing the same. That is to say, the extra 2x2TB would be good additional parity, but I would only consider it as additional parity, not the only parity.
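
The swap itself was a one-liner; a sketch with made-up pool and device names:

  zpool replace tank /dev/mapper/vg0-biglv /dev/disk/by-id/ata-EXAMPLE_14TB
  zpool status tank    # watch the resilver progress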

Based on fairly unscientific testing from before and after, it did not appear to meaningfully affect performance.


125W (less than $15/month) or so for:

  • Ryzen 9 3900X
  • 64GB RAM
  • 2x4TB NVMe (ZFS Mirror)
  • 5x14TB HDD (ZFS RAID-Z2)
  • 2.5GbE Network Card
  • 5-port 2.5GbE Network Switch
  • 5-port 1GbE PoE Network Switch w/ one Reolink camera attached

I generally leave powerManagement.cpuFreqGovernor = "powersave" in my Nix config as well, which saves about 40W ($4/mo or so) for my typical load as best as I can tell, and I disable it if I’m doing bulk data processing on a time crunch.


My partner and I use a git repository on our self-hosted gitea instance for household management.

Issue tracker and kanban boards for task management, wiki for documentation, and some infrastructure components are version controlled in the repo itself. You could almost certainly get away with just the issue tracker.

Home Assistant (also self-hosted) provides the ability to easily and automatically create issues based on schedules and sensor data, like creating a git issue when weather conditions tomorrow may necessitate checking this afternoon that nothing gets left out in the rain.

Matrix (also self-hosted) lets Gitea and Home Assistant bully us into remembering to do things we might have forgotten. (Send a second notification if the washer finished 15 minutes ago, but the dryer never started)

It’s been fantastic being able to create git issues for honey-dos as well as having the automations for creating issues for recurring tasks. “Hey we need to take X to the vet for Y sometime next week” “Oh yeah, can you go ahead and put in a ticket?” And vice versa.
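
For the plumbing-curious: the automation side is just Home Assistant’s rest_command integration talking to Gitea’s issues API. A hedged sketch, with the URL, repo, and token as placeholders:

  rest_command:
    gitea_create_issue:
      url: "https://git.example.com/api/v1/repos/home/household/issues"
      method: POST
      headers:
        Authorization: "token REPLACE_ME"
      content_type: "application/json"
      payload: '{"title": "{{ title }}", "body": "{{ body }}"}'

Any automation can then call rest_command.gitea_create_issue with a templated title and body.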


what does industry do when they need to automate provisioning of thousands of devices for POS, retail, barcode scanning, delivery drivers, etc.

MDM doesn’t help with the kind of stuff OP is trying to automate, but it does usually cover most business use cases and if you need more than that, you generally either have a contract to get the manufacturer to do it for you or just put what you need into the org-specific superapp you already have to have.


Oh nice, a nicely-formatted list of reasons I don’t switch phones more frequently than once every 5 years: I loathe setting them up as specifically as I want them to behave.


I’ve read many, many discussions over the years about why manufacturers would list such a pessimistic number on their datasheets, and I haven’t really come any closer to understanding why it would be listed that way. You can trivially prove how pessimistic it is by repeatedly running badblocks on a dozen large (20TB+) enterprise drives: nearly all of them will dutifully accept hundreds of TBs written and read back with no issues, when the URE rate suggests that workload would produce a dozen UREs on average.

I conjecture, without any specific evidence, that it might be an accurate value with respect to some inherent physical property of the platters themselves, one that manufacturers can and do measure and that hasn’t improved considerably, but which has long been abstracted away by increased redundancy and error correction at the sector level. That yields much more reliable effective performance, while the raw quantity is still used for some internal historical/comparative reason rather than being replaced by the effective value that matters more directly to users.


If the actual error rate were anywhere near that high, modern enterprise hard drives wouldn’t be usable as a storage medium at all.

A 65%-filled array of 10x20TB drives would average at least one bit failure on every single scrub (which is a full read of all data present in the array), but that doesn’t actually happen with any real degree of regularity.
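
Back-of-the-envelope with the commonly quoted 1-per-10^14-bits URE rate: 65% of 10x20TB is 130TB read per scrub, which is about 1.04x10^15 bits, so you’d expect on the order of ten UREs every scrub at that rate. Real arrays simply don’t behave that way.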


I think it’s worth pointing out that this article is 11 years old, so that 1TB rule of thumb probably needs to be adjusted for modern disks.

If you have 2 full backups (18TB drives being more than sufficient) of the array, especially if one of those is offsite, then I’d say you’re really not at a high enough risk of losing data during a rebuild to justify proactively rebuilding the array until you have at least 2 or more disks to add.


Still a few Ubuntu Server stragglers here and there, but it works quite well as long as you keep your base config fairly lean and push the complexity into the containers.

Documentation tends to be either good or nonexistent depending on what you’re doing; for anything beyond standard configuration, it can usually be pieced together from the ArchWiki and the systemd docs.

All in all, powerful and repeatable (and a lot less tedious than Ansible, etc), but perhaps not super beginner-friendly once you start getting into the weeds. Ubuntu Server is just better documented and supported if you need something super quick and easy.
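
As a concrete example of what pushing complexity into the containers looks like, a minimal sketch using the oci-containers module (image, ports, and paths are placeholders):

  virtualisation.oci-containers.containers.jellyfin = {
    image = "jellyfin/jellyfin:latest";
    ports = [ "8096:8096" ];
    volumes = [ "/srv/jellyfin/config:/config" ];
  };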


NextCloud main use is file synchronization

Is it? Interesting. I don’t think I’ve ever even considered using it for that purpose.

I mostly use it as an easily web-accessible interface for a variety of unified productivity and organization software (file upload/download, office suite, notes, calendar, etc.), with the easy ability to do stuff like create password-protected shared folders of pictures/documents for friends and family who don’t have accounts, so they can upload/download/organize/edit files with me and each other from a browser without having to install additional software on client devices.


Which I’m not sure why it gets mentioned so often, since it seems to serve a very different purpose than NextCloud does; they aren’t even similar niches.

Nothing against it, of course, it just doesn’t feel like an ‘alternative’ to NC.


I would strongly suggest not using 900GB 10kRPM drives (and especially not 10 of them) in [current year] when brand-new 8TB hard drives cost $120, and 14+TB recertified drives aren’t much more than that. The power costs of 7 more drives than you need for the capacity definitely add up over several years of runtime.
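
Back-of-the-envelope, assuming roughly 8W per spinning drive and $0.15/kWh (both rough guesses that vary by model and region): 7 unnecessary drives is ~56W, or about 490kWh and $70+ per year, every year, just to spin disks you didn’t need.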


Washing machine is a threshold sensor in Home Assistant on the power draw entity of a Sonoff S31 smart outlet flashed w/ ESPHome.

Dryer is another threshold sensor on a current clamp connected to an ESP32 running ESPHome.
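
If anyone wants to replicate it, the washer side is just a threshold binary sensor in the Home Assistant config; a sketch with a made-up entity name and wattage cutoff:

  binary_sensor:
    - platform: threshold
      name: "Washer Running"
      entity_id: sensor.washer_plug_power
      upper: 5
      hysteresis: 1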


We’ve both got a software dev background, so it wasn’t a particularly difficult solution to sell. As soon as we came up with it, it was very much an “oh duh, why didn’t one of us think of that way earlier.”



I have looked at the ROI for getting more efficient kit and ended up discovering that going for something like a low-idle-power-draw system like a NUC or thin client and a disk enclosure has a return period on the order of multiple years.

Based on that information, I’ve instead put that money towards lower hanging fruit in the form of upgrading older inefficient appliances and adding multi-zone temperature control for power savings.

The energy savings I’ve been able to make based on long-term energy use data collected via Home Assistant have more than offset all of the electricity I’ve ever used to power the system itself.


~120W with an old server motherboard and 6 spinning drives (42TB of storage overall).

Currently running Nextcloud, Home Assistant, Gitea, Matrix, Jellyfin, Lemmy, Mastodon, Vaultwarden, and a bunch of other smaller stuff alongside storing a few months worth of surveillance footage, so ~$12/month in power certainly ain’t a bad deal versus paying for hosted versions of even a fraction of those services.


they need to be using shared storage for disks

You can perform a live migration without shared storage with libvirt.
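
For example (hostnames and domain name made up), something like this mirrors the disks over as part of the migration, though you may need to pre-create same-sized images on the destination first:

  virsh migrate --live --persistent --copy-storage-all myvm qemu+ssh://desthost/system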



Stop using a rolling release distro for something that you actually rely on day-to-day.


Yeah, rolling release on a server sounds horrifying. You couldn’t pay me enough to live that nightmare.

There’s a reason “enterprise” server distros exist. Install an LTS release once every 2, 4, or 5 years depending on taste, log in to update whenever you remember the machine is even running an OS, and just generally forget the machine exists for years at a time.


OpenWRT, because it has a nice interface, runs on half a toaster, and I’ve yet to find something I need it to do that it couldn’t do but OPNsense could.

I did try pfSense many years back and it just seemed overly complicated and generally flaky. I had trouble setting it up as a tinc VPN client despite that being a trivial task in OpenWRT, so I switched back.


Mastodon is a helluva lot easier to self-host than Lemmy, so if you got Lemmy running reliably, Mastodon would be a breeze.


My partner and I use a pinned issue as our grocery list on our git repo for managing our household. All running on top of a self-hosted gitea instance.

Great for being able to create git issues for honey-dos as well as having automations for creating issues for recurring tasks.

“Hey we need to take X to the vet for Y sometime next week” “Oh yeah, can you go ahead and put in a ticket?” And vice versa.


SBCs like the RPi sit awkwardly in between a microcontroller like an Arduino or ESP32, which you can actually trust with handling GPIO and data logging, and a real Linux system that can actually do meaningful computational work.

Pretty much the only tasks I’ve found them reliably appropriate for are running OctoPrint, really light computer vision tasks for robotics, or hooking up an RTL-SDR to use as a police/ham scanner. Outside of those, it’s so much easier to use either a cheaper and more reliable MCU or a much more powerful old laptop or desktop.