

There are a bunch of other static site generators as well. They’re mostly targeted at blogs and whatnot, but maybe that’s a good thing if you want to leave some instructions/documentation about each one.


Or if you want to learn a JS framework, you can also do it that way.


Mine is okay, but maybe I just have high standards. I’m using Redis and PostgreSQL, so I’m probably about as optimized as I can be. Pages load in 2-3s, but I wish they were faster.

If there was an alternative to Nextcloud that could replace Google Docs and wasn’t written in PHP, I’d switch. I don’t need much, I just want to access documents and spreadsheets in the browser.

But Nextcloud is good enough.



Yeah, flash memory doesn’t hold data well in storage, hence the recommendation to keep checking on it. This article claims 10 years, but I think checking on it every year or two is a good idea.

And yeah, M-DISC looks like a good option, especially if you rarely need to read from it. I would make multiple copies though, because discs break, get lost, etc. And get an extra drive so your successors don’t need to go find one; people are lazy and you want as few obstacles to them using it as possible.

I personally don’t like using optical media because players can be finicky and storage can be annoying. But it’s probably a good solution for your stated needs.


I don’t know much about custom fonts, but there are two main options for self-hosted “word” replacements: Collabora Online (CODE) and OnlyOffice.

I use Collabora with Nextcloud (hence the link).


I use Collabora CODE, which is an online version of LibreOffice. I don’t know a ton about the technical details, but I’m pretty sure it does server side rendering.

Here’s a guide to configure it with Nextcloud.


I haven’t used Netflix on my Pi for a few years, but at least in the past it worked fine by pulling the DRM lib from Android. I used Netflix and Disney Plus on Kodi (with a plugin) for a couple years until we stopped watching on that TV (in the bedroom).


I’m not talking about USB sticks, I’m talking about USB drives, like an HDD or SSD. If you want to go with flash memory, I recommend SD cards because they’re small and cheap, so keeping a few copies isn’t particularly burdensome.

I wouldn’t trust any of these options to last a long time on a shelf though. Check them every year or two and replace every 5-10 years, maybe a little longer if you buy higher quality.

So I might use a pair of mirrored hard drives with a SATA->USB cable, then include instructions along the lines of “plug into my Linux laptop to access, or take to a computer repair shop if you can’t work it out”.

That’s basically what I’m planning too. But my use case is disaster recovery, essentially as a cheaper alternative to paying for hosted backup of important but recoverable data (e.g. ripped media). Everything truly important (pictures and documents) goes to hosted backup as well.

I’m largely relying on documents explaining how to access the backups. If I pass, I expect my survivors to either figure it out themselves or hire someone who can figure it out from my documentation.


Then I’d go with FAT32 on a USB drive, which should be plenty portable into the future. You’ll want to replace it every 5-10 years, and check on it every other year or so.
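If it helps, formatting the drive that way is a one-liner on Linux; a minimal sketch, assuming the drive shows up as /dev/sdX1 (check with lsblk first):

    # WARNING: wipes the target partition - /dev/sdX1 is a placeholder
    sudo mkfs.vfat -F 32 -n BACKUP /dev/sdX1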

That’s about as easy to use as I can think of. Decades down the road, physical media like DVDs and tapes may be difficult to find readers for, but USB is versatile enough that someone is bound to have access. Micro SD cards may also be a good option, as long as you keep a couple USB readers around.


How can I tell if individual files get corrupted?

Checksums. A good filesystem will do this for you, but you can do it yourself if you want.

If you periodically sync a drive with rsync (with --checksum, so it compares file contents instead of just size and timestamps), it’ll replace files whose checksums differ, fixing any corruption as you go. SMART tests should tell you about any corruption the drive itself is aware of. I’m sure automated backup tools have options for this.
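Concretely, here’s a rough sketch of the do-it-yourself version (all paths are examples):

    # record checksums once
    find /mnt/backup -type f -exec sha256sum {} + > ~/backup.sha256

    # later: verify, printing only files that changed or went missing
    sha256sum --quiet -c ~/backup.sha256

    # sync by file content instead of timestamps, replacing mismatched files
    rsync -a --checksum /srv/data/ /mnt/backup/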


Exactly. I have a document for my SO that describes what to do if I pass (where the money is, how the WiFi is set up, various important accounts, etc). It’s not a will (nothing about who gets what, though that’s assumed by the state to be my SO, or my kids equally if we pass together), just a document that explains the stuff I handle.


But I’d notice, because it won’t work the next time I take it home to sync. The chance that it’ll fail during the few months between a sync and an emergency is incredibly low.

I wouldn’t leave it on a shelf for years, just a few months at a time (approximately quarterly).


I’m thinking of using a HDD and keeping it at work, which is climate controlled. I’d bring it back every few months to sync the latest.

Since it’s constantly being used, I’m pretty confident it’ll be usable as a backup if my NAS fails, so it only needs to be “shelf stable” for a few months at a time. If you’re retired or something, a safe deposit box at your local bank should do the trick.


  1. Absolutely!
  2. Yes, but they get cleaned up with prune, so you could accidentally blow all your data away

I use BTRFS w/ RAID 1 (mirror) with two drives (both 8TB), because that’s all I’ve needed so far. If I had four, I’d probably do two separate RAID 1 pairs and combine them into a logical volume, instead of the typical RAID 10 setup where blocks are striped across mirrored sets.

RAID 5 makes sense if you really want the extra capacity and are willing to take on a little more risk of cascading failure when resilvering a new drive.

ZFS is also a great choice, I just went w/ BTRFS because it’s natively supported by my OS (openSUSE Leap) with snapshots and rollbacks. I technically only need that for my root FS (SSD), but I figured I might as well use the same filesystem for the RAID array as well.
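Setting it up is pretty painless too; a minimal sketch with placeholder device names:

    # mirror both data and metadata across two disks
    sudo mkfs.btrfs -L nas -m raid1 -d raid1 /dev/sdb /dev/sdc
    sudo mount /dev/sdb /srv/nas

    # run a scrub occasionally; it verifies checksums and repairs
    # bad copies from the healthy mirror
    sudo btrfs scrub start /srv/nas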

Here’s what I’d do:

  1. 4x 16TB HDDs either in a RAID 10 or two RAID 1 pairs in one logical volume - total space is 32TB
  2. 500GB SSD -> boot drive and maybe disk cache
  3. 8TB HDD - load w/ critical data and store at work as an off-site backup, and do this a few times/year; the 4x HDDs are for bulk, recoverable data

That said, RAID 5 is a great option as well, as long as you’re comfortable with the (relatively unlikely) risk of losing the whole array. If you have decent backups, having an extra 16TB could be worth the risk.


That video is about hardware RAID. Software RAID is still alive and well (e.g. mdadm).
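A four-disk software RAID 10 is basically one command; a sketch with placeholder device names:

    # stripe across two mirrored pairs
    sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc /dev/sdd
    sudo mkfs.ext4 /dev/md0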

I personally use BTRFS w/ RAID 1, and if I had OP’s setup, I’d probably do RAID 10. Just don’t use RAID 5/6 w/ BTRFS.

ZFS isn’t the only sane option.


Watchtower

Glad it works for you.

Automatic updates of software with potential breaking changes scare me. I’m not familiar with Watchtower, since I don’t use it or anything like it, but I have several services that I don’t use very often, and it would suck if they silently stopped working properly.

When I think of a service, I think of something like Nextcloud, Immich, etc, even if they consist of multiple containers. For example, I have separate containers for LibreOffice Online and Nextcloud, but I upgrade them together. I don’t want automated upgrades of either because I never know if future builds will be compatible. So I update things when I remember, and I make sure everything works afterward.

That said, it seems watchtower can be used to merely notify, so maybe I’ll use it for that. I certainly want to be around for any automatic updates though.
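If I read the docs right, notify-only mode is just a flag; a rough sketch (the notification URL is a placeholder, and I haven’t tested this):

    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -e WATCHTOWER_NOTIFICATION_URL="discord://token@channel" \
      containrrr/watchtower --monitor-only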


Automatically upgrading docker images sounds like a recipe for disaster because:

  • it could pull down a change that requires manual intervention, so things “randomly” break
  • docker holds on to everything, so you’d need to prune old images or you’ll eventually run out of disk space; if a container is stopped, that prune could leave it unable to start again (good luck if the newer images are incompatible with the version it last ran)

That’s why I refuse to automate updates. I sometimes go weeks or months between using a given service, so I’d rather use vulnerable containers than have to go fix it when I need it.

I run OS updates every month or two, and honestly I’d be okay automating those. I run docker pulls every few months, and there’s no way I’d automate that.
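The manual version is only a few commands anyway; something like this (the compose directory is an example):

    cd /srv/compose         # wherever the docker-compose.yml lives
    docker compose pull     # fetch newer images
    docker compose up -d    # recreate only containers with new images
    docker image prune -f   # drop dangling old images once everything works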


Oh yeah, as a hobby, it’s absolutely fun. I like tinkering with all kinds of things.

My point was to just be careful since it’s not necessarily going to be worth the expense and time.

I’ve been considering getting a breaker-level power monitor to watch for spikes. It’s a bit more expensive (hundreds of dollars), but it measures the types of things I’m interested in. My kid flipped on our gutter heaters (I never use them) and shot our electricity bill to the moon for a couple months until I noticed. If I had a home energy monitor, I would’ve noticed a crazy energy spike and that might have paid for itself.


Cool!

Just be cautious that you don’t over-optimize for power. I ran around my house w/ a Kill-a-watt meter checking everything and made some tweaks, and I still don’t think it has paid for itself since power costs are so low here ($0.12-0.13/kWh, so a constant 10W draw costs < $1/month), and some of the things I tried made my life kinda suck. So I backed off a bit and found a good middle ground where I got 80% of the benefit w/o any real compromises.
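For anyone checking the math on that parenthetical:

    # constant 10W draw at $0.13/kWh:
    # 10 W * 24 h * 30 days = 7.2 kWh/month
    echo '10 * 24 * 30 / 1000 * 0.13' | bc -l    # ~0.94 USD/month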

For example, here’s what I ended up with:

  • put desktop to sleep - power draw is negligible, and I don’t need to keep typing my FDE password to use it
  • “upgraded” NAS from old 2009 HW to my old gaming PC HW (1st gen Ryzen) - cut power draw in half, but I had to buy some RAM; will take years to pay off w/ electricity savings, but it has much better performance in the meantime
  • turn off work laptop - it was drawing ~20W; I WFH Mon/Thu/Fri, so I leave it on Thursday night for convenience, have it sleep Mon-Wed, and turn it off Friday

I could probably cut a bit more if I really try, but that would be annoying.


Same, but openSUSE. Tumbleweed on my desktop and laptop, Leap on my servers.

And yeah, if I need to babysit something, I’ll use an alternative. I’ll upgrade when I’m ready to, which is usually over holidays when I’m bored and looking for a project.


Constant maintenance? What’s that?

Here’s my setup:

  • OS - openSUSE Leap - I upgrade when I remember
  • software - Docker images in a docker compose file; upgrading is a simple docker command, and I’ll only do it if I need something in the update
  • hardware - old desktop; I’ll upgrade when I have extra hardware

I honestly don’t think about it. I run updates when I get to it (every month or so), and I’ll do an OS upgrade a little while after a new release is available (every couple years?). Software gets updated periodically if I’m bored and avoiding more important things. And I upgrade one thing at a time, so I don’t end up with everything breaking at once. BTRFS snapshots mean I can always roll back if I break something.
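On openSUSE, a rollback is just a couple of snapper commands (the snapshot number is a placeholder):

    sudo snapper list            # find the pre-update snapshot
    sudo snapper rollback 42     # make it the default on next boot
    sudo reboot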

I don’t even know what TrueCharts is. Maybe that’s your issue?


I use my old desktop. It’s totally overkill, but it’s also free since I was going to throw it out instead.


I also like browsing folders of data, which makes backups easy. I only use volumes for sharing incidental data between containers (e.g. certificates before I switched to Caddy, or build pipelines for my coding projects).

Use volumes if you don’t care about the data long term, but you may need to share it with other containers. Otherwise, or if in doubt, use bind mounts.
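The difference in practice, with placeholder names and paths:

    # named volume: docker manages the storage (under /var/lib/docker/volumes),
    # easy to share between containers, annoying to browse
    docker run -d -v appdata:/data my-image

    # bind mount: a plain folder you can browse, back up, and rsync
    docker run -d -v /srv/app/data:/data my-image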


Can confirm. I’d host my own, but I’m lazy and The Dude hasn’t given me a reason to.


Yup. Paying $50 or whatever for an extra checked bag is probably worth it.


To add to this, you should practice good security elsewhere as well:

  • host everything in containers, and only let them access what they need
  • manage TLS behind your firewall, so a vulnerability doesn’t expose packets for other services
  • run your containers with minimal privileges (look into podman, for example), so an attacker is limited if they escape the container
  • use a strong root password (or no root), and put passwords on any SSH keys you use there (e.g. for git repos, accessing other servers, etc)

Once you expose something inside your network, you need to ramp up security.
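As a sketch of that third bullet, a locked-down container with podman might look like this (the image and port are just examples):

    # rootless, no capabilities, read-only root; tmpfs covers the
    # few paths this particular image needs to write
    podman run -d --name web \
      --cap-drop=ALL --read-only \
      --tmpfs /tmp --tmpfs /run \
      -p 8080:8080 \
      docker.io/nginxinc/nginx-unprivileged:alpine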


Surely you could run Syncthing in a docker container or flatpak or something to force it to work on the same machine. I don’t know what mechanism is used, but you can spoof a lot on Linux.


Then you’d be wrong. Unless you pick SQLite and that’s all you need.


Generally speaking, if a professor recommends something, it probably sucks. Their information is incredibly outdated and is usually whatever they used in their own undergrad program.

At school I learned:

  • Java
  • PHP
  • MySQL
  • C#
  • C++
  • Racket (Lisp)

Each of those has a better alternative, with C# being the least bad. For example:

  • Java -> Kotlin
  • PHP -> Python
  • MySQL -> SQLite or Postgres
  • C# -> Python (desktop Qt GUIs) or a web stack (e.g. Tauri for desktop)
  • C++ -> Rust (non-games) or a game engine
  • Lisp -> Haskell

Formal education is for learning concepts; learn programming languages and tools on your own.


Postgres. It’s more strict by default, which leads to a lot fewer surprises.

Here’s my rule of thumb:

  1. SQLite - if it’s enough
  2. Postgres
  3. MariaDB - if you don’t care about your data and just want the thing to work
  4. MySQL - if you sold your soul to Oracle, but still can’t afford their license fee
  5. Something else - you’re a hipster or have very unique requirements

Persistence and reading comprehension.

There’s no need to learn Python or any programming language to self-host stuff; you just need to be able to follow blog posts and run some Docker commands.

I’m a software dev and haven’t touched a single line of code on my NAS. Everything is docker compose and other config files.


I’m behind CGNAT, so I have a local DNS server that resolves to the internal IP, and regular DNS resolves to my VPS, which tunnels into my home network through Wireguard.

If you’re not behind CGNAT, you’ll just hit your router after DNS resolution and you’re golden.
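The local DNS half is a single override record; with dnsmasq, for example (the name and address are placeholders):

    # LAN clients resolve straight to the internal IP...
    echo 'address=/cloud.example.com/192.168.1.10' | \
      sudo tee /etc/dnsmasq.d/internal.conf
    sudo systemctl restart dnsmasq
    # ...while public DNS keeps pointing at the VPS, which forwards
    # traffic home over the Wireguard tunnel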


You’re doing fine. Have a wonderful day.


My apologies.

In the west, we have an informal concept called “wife approval factor,” which is how supportive your wife would be of something. Then there are the sayings “happy wife, happy life” and “if momma ain’t happy, ain’t nobody happy,” so it’s in the husband’s interest to keep the wife happy.

I thought this was pretty universally true. I have coworkers from very different parts of India (one Muslim from the north, the other Hindu from the very south), and if we have a surprise work-provided lunch, they’ll still eat the lunch they brought from home at the end of the day so their wives don’t get mad that the food they prepared went uneaten. So even in a very patriarchal society, they’ll go out of their way to keep their wives happy.

It’s not that women call the shots (men get away with a lot of nonsense here); the “permission” is largely about keeping the wife happy.



I’m curious too. Hopefully I can come up with a design that feels natural and encourages the kind of interaction I’m after.


About 10k power-on hours. That’s honestly a little surprising since I’ve had them for 7 years or so, but the NAS has only been on 24/7 for the last year or two (it used to just get turned on when watching a movie or something).

From those hours, I should expect a few more trouble-free years.

My OS drive is >30k hours since it used to be my desktop boot drive (tiny 120GB SATA SSD). I’ve been thinking about upgrading to NVMe, since my desktop NVMe is getting a little full (500GB), and it could also make for a nice cache. It’s nowhere near dying though, with ~16TB written, so I’m in no hurry.
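For anyone wanting to check their own drives, smartctl reports both numbers (device names are examples):

    sudo smartctl -a /dev/sda | grep -i power_on    # power-on hours
    sudo smartctl -a /dev/nvme0 | grep -i written   # data written (TBW)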


It’s not top of the line, but my Ryzen 1700 is way overkill for my NAS. I’ll probably add a build server, not because I need it, but because I can.


Looking for HW recommendations for DIY NAS/Homelab
Here's what I currently have:

  • Ryzen 1700 w/ 16GB RAM
  • GTX 750 ti
  • 1x SATA SSD - 120GB, currently use <50GB
  • 2x 8TB SATA HDD
  • runs openSUSE Leap, considering a switch to microOS

And the main services I run (total disk usage for OS+services - data is ...):

  • NextCloud - possibly switch to ownCloud Infinite Scale
  • Jellyfin - transcoding is nice to have, but not required
  • samba
  • various small services (Unifi Controller, vaultwarden, etc)

And services I plan to run:

  • CI/CD for Rust projects - infrequent builds
  • HomeAssistant - maybe speech to text? I'm looking to build an Alexa replacement
  • Minecraft server - small scale, only like 2-3 players, very few mods

HW wishlist:

  • 16GB RAM - 8GB may be a little low longer term
  • 4x SATA - may add 2 more HDDs
  • m.2 - replace my SATA SSD; ideally 2x for RAID, but I can do backups; performance isn't the concern here (1x SATA + PCIe would work)
  • dual NIC - not required, but would simplify router config for a private network; could use a USB to Eth dongle, this is just for security cameras and whatnot
  • very small - mini-ITX at the largest; I want to shove this under my bed
  • very quiet
  • very low power - my Ryzen 1700 is overkill; this is mostly for the "quiet" req, but paying less is also nice

I've heard good things about N100 devices, but I haven't seen anything w/ 4x SATA or an accessible PCIe slot for a SATA adapter. The closest I've seen is a ZimaBlade, but I'm worried about:

  • performance, especially as a CI server
  • power supply - why couldn't they just do regular USB-C?
  • access to extra USB ports - it's hidden in the case

I don't need x86 for anything; ARM would be fine, but I'm having trouble finding anything with >8GB RAM, and SATA/PCIe options are a bit... limited.

Anyway, thoughts?