Just an Aussie tech guy - home automation, ESP gadgets, networking. Also love my camping and 4WDing.

Be a good motherfucker. Peace.

  • 4 Posts
  • 240 Comments
Joined 1Y ago
Cake day: Jun 15, 2023


Jesus - they don’t even name the report so interested people can search for it. Lame.

Also, least-intrusive doesn’t mean most-trustworthy. Just don’t use any of them or, if you do, take every step at your disposal to avoid giving them any personal information.



lol - I love that I canned all my paid subs that were fucking me up the arse like this, and then used the savings to setup a half-decent Plex server for my family. Fuck those greedy cunts.


Well, that’s probably because he’s hit the cap on base salary. Past a certain point at Amazon, the majority of your income is derived from shares.

That said, once the signing shares have all vested after the first 4-5 years, you’re down to the yearly grants they hand out, which vest the year after they’re granted, in quarterly amounts.

Also, if your brother is high up, he probably got more shares this year than usual, as Amazon announced that only certain levels and below were getting salary increases. Higher-ups only got shares.



How are they retaining staff?

They retain them for the 4-5 years it takes for signing cash and signing stock units to all run out, at which point many people start to get itchy feet.


Yeah - they call it URA, for “unregretted attrition”. Tell me that doesn’t sound like a shitty way to manage your people.


You know, at some point, you gotta assume they’ll eventually hire and fire/lose all the usable talent they have access to, and shit like this will prevent them finding new talent. Until some exec “invents” WFH as a perk…


Oh, sure. I get that. Sending yourself reminders is absolutely understandable. Sending yourself documented evidence of your plans to defraud someone is entirely different.


In a 2017 email to himself, Smith calculated that he could stream his songs 661,440 times daily, potentially earning $3,307.20 per day and up to $1.2 million annually.

Great idea, but why would you email yourself about it?


Isn’t the picture from Logan?

Edit: oh, it’s called johntucker.jpg.


Later in the same comment I mention how I think social media only benefits the corporations that run it.

It’s pretty clear what I meant.


My own belief is that all social media is a cancer, and to be avoided entirely. I’m able to do that for myself, but I’m also realistic about the chances of keeping my kids away from it. So, I focus my energy on trying to equip them with the mental skills to neutralise the toxic aspects of social media.

For my 9yo, that means teaching her to employ natural skepticism and critical thinking. I’m also trying to drum into her the understanding that social media is inherently untrustworthy and unreliable, and exists solely for the benefit of the corporations that run it.

That said, I’ve blocked TikTok on my home network, much to the older kids’ chagrin. They have to use mobile data if they want to access that shit on their phones.


The casting bit is the missing piece for me.

I’ve built a RasPi with Kodi for our caravan, to use Plex and stream our free-to-air TV here in Australia (using Musk’s space innernets). I just miss being able to cast from my phone, for the occasional thing I can’t do with a Kodi add-on.


Shit like this is why I intend to keep my (currently) 9yo as far away from social media as I can, for as long as I can. This fucking terrifies me, as it should any parent.


RAID5 and unlimited downloads on my 1Gbps fibre. All I back up is the library metadata itself, using a 2N+C strategy.




Seriously, fuck all these “subscription” ideas.

Why in the ever-loving fuck would I want to pay a subscription for a goddam computer mouse? Some techbro fuckwit is probably chest-bumping his own reflection in the mirror for coming up with this dumb idea.

Here’s a novel idea to help you keep revenue going the right direction: try innovating something truly useful and new, rather than selling the same, regurgitated Hotel California bullshit to hapless users.


  • About 1,400 movies: 6.7TB
  • About 15,100 episodes: 10.9TB

Spread across a couple of NASes, each with 4 x 4TB drives in RAID5.


+1 to everything you just said - I’ve been using Immich for a little less (370 days, thanks to the same button). It’s feature rich and rock solid.

Only thing I hope they add to the mobile app is the Years/Months/Days option, to make it easy to quickly group, then find, your photos. It’s the one thing that keeps me using my phone’s own Photos app (locally - no cloud sync).


Time and time again, we’ve proven the best weapon we have against corporate greed is our ability (and willingness) to share knowledge.


Do you have IDS/IPS turned on in pfSense? My OPNsense on my 1Gbps fibre will easily drop from an average of 900Mbps down to around 300Mbps-500Mbps, if I turn on IDS.


I’m still using it via mbasic. It looks like shit, but I can get to my messages and reply, etc.


It won’t be on-prem, but it will be dedicated data centres, built and run by Amazon, so it’s almost the same thing. Why? Because AWS runs better data centres than the gov ever could.

Gov is outsourcing the physical infrastructure risk, just like any other company that puts their stuff in the cloud.



In your mobile browser, instead of m[dot]facebook[dot]com, try mbasic[dot]facebook[dot]com.

A very no-frills FB for mobile that lets you access Messenger. It looks like arse, but it beats using their spyware.


Yes - I do this with Pi-hole. It happens to be the same domain name that I host (very few) public services on too, so those DNS names work both inside and outside my network.
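If it helps anyone: on Pi-hole v5 at least, those local records just live in a hosts-style file, so adding one is basically this (the IP and hostname here are made-up examples):

echo "192.168.1.50 photos.example.com" | sudo tee -a /etc/pihole/custom.list   # hosts format: IP then FQDN (example values)
pihole restartdns   # reload Pi-hole's DNS so the new record is served straight away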


It all depends on how you want to homelab.

I was into low power homelabbing for a while - half a dozen Raspberry Pis - and it was great. But I’m an incessant tinkerer. I like to experiment with new tech all the time, and am always cloning various repos to try out new stuff. I was reaching a limit with how much I could achieve with just Docker alone, and I really wanted to virtualise my firewall/router. There were other drivers too. I wanted to cut the streaming cord, and saving that monthly spend helped justify what came next.

I bought a pair of ex enterprise servers (HP DL360s) and jumped into Proxmox. I now have an OPNsense VM for my firewall/router, and host over 40 Proxmox CTs, running (at a guess) around 60-70 different services across them.

I love it, because Proxmox gives me full separation of each service. Each one has its own CT. Think of that as me running dozens of Raspberry Pis, without the headache of managing all that hardware. On top of that, Docker gives me complete portability and recoverability. I can move services around quite easily, and can update/rollback with ease.

Finally, the combination of the two gives me a huge advantage over bare metal for rapid prototyping.

Let’s say there’s a new contender that competes with Immich. They offer the promise of a really cool feature no one else has thought of in a self-hosted personal photo library. I have Immich hosted on a CT, using Docker, and hiding behind Nginx Proxy Manager (also on a CT), accessible via photos.domain on my home network.

I can spin up a Proxmox CT from my custom Debian template, use my Ansible playbook to provision Docker and all the other bits, access it in Portainer and spin up the latest and greatest Immich competitor, all within mere minutes. Like, literally 10 minutes max.
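To give a rough idea of what that spin-up looks like on my nodes - the VMID, hostname, template and playbook names below are just made-up examples:

pct create 150 local:vztmpl/debian-12-custom.tar.zst --hostname new-toy --cores 2 --memory 2048 --net0 name=eth0,bridge=vmbr0,ip=dhcp --rootfs local-lvm:16 --unprivileged 1 --start 1   # new CT from the Debian template (example IDs/names)
ansible-playbook -i inventory.yml docker-baseline.yml --limit new-toy   # example playbook: installs Docker and the usual bits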

I have a play with the competitor for a bit. If I don’t like it, I just delete the CT and move on. If I do, I can point my photos.domain hostname (via Nginx Proxy Manager) to the new service and start using it full-time. Importantly, I can still keep my original Immich CT in place - maybe shutdown, maybe not - just in case I discover something I don’t like about the new kid on the block.

That’s a simplified example, but hopefully illustrates at least what I get out of using Proxmox the way I do.

The con for me is the cost: the initial cost of the hardware, and the cost of powering beefier kit like this. I’m about to invest in some decent centralised storage (been surviving with a couple of li’l ARM-based NASes) so I can get true HA with my OPNsense firewall (and a few other services), so that’s more cost again.


I’ve written my wiki so that, if I end up shuffling off this mortal coil, my wife can give access to one of my brothers and they can help her by unpicking all the smart home stuff.


I’m using self-hosted wiki.js and draw.io. Works a treat, and trivial to back up with everything in Postgres.


It doesn’t have to be hard - you just need to think methodically through each of your services and assess the cost of creating/storing the backups you want versus the cost (in time, effort, inconvenience, etc) of rebuilding that service from scratch.

For me, that means my photo and video library (currently Immich) and my digital records (Paperless) are backed up using a 2N+C strategy: a copy on each of 2 NASes locally, and another copy stored in the cloud.
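The plumbing behind that is nothing fancy - a couple of rclone jobs running on the primary NAS, roughly like this (paths and remote names are made up for the example):

rclone sync /data/backups nas2:backups   # second local copy, pushed to the other NAS (example remote)
rclone sync /data/backups s3remote:my-backups   # the +C cloud copy (example remote/bucket)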

Ditto for backups of my important homelab data. I have some important services (like Home Assistant, Node-RED, etc) that push their configs into a personal Gitlab instance each time there’s a change. So, I simply back that Gitlab instance up using the same strategy. It’s mainly raw text in files and a small database of git metadata, so it all compresses really nicely.

For other services/data that I’m less attached to, I only back up the metadata.

Say, for example, I’m hosting a media library that might replace my personal use of services that rhyme with “GetDicks” and “Slime Video”. I won’t necessarily back up the media files themselves - that would take way more space than I’m prepared to pay for. But I do back up the databases for that service, which tell me what media files I had, and even the exact names of those files when I “found” them.

In a total loss of all local data, the inconvenience factor would be quite high, but the cost of storing full backups would far outweigh that. Using the metadata I do back up, I could theoretically just set about rebuilding the media library from there. If I were hosting something like that, that is…


The whole point of this particular comment thread here is that we’re already starting to see what’s happening: people are taking back control. You’re here on Lemmy, proving that exact point.

I never said we needed Cory to tell us what comes next. Just come up with another colourfully descriptive term like he did with enshittification.

You sound like that insufferable ponytail from Good Will Hunting.



We need Cory to coin a term for what comes after enshittification. Perhaps we can call it the Great Wipening, where we all stop paying to be treated like serfs and start taking back control of our content and data.


I pay for Usenet - not my fault if they don’t pass it on.

Joking aside, like some others have said, I support many artists via Bandcamp.


lol - I’m the same, and frequently wonder if I’m allowing tech debt to creep in. My last update took me to 8.0.3, and that was only because I built a new node and couldn’t get an older version for the architecture I wanted to run it on.


I just have a one-liner in crontab that keeps the last 7 nightly database dumps. That destination location is on one of my NASes, which rclones everything to my secondary NAS and an S3 bucket.

ls -tp /storage/proxmox-data/paperless/backups/*.sql.gz | grep -v '/$' | tail -n +7 | xargs -I {} rm -- {}; docker exec -t paperless-db-1 pg_dumpall -c -U paperless | gzip > /storage/proxmox-data/paperless/backups/paperless_$( date +\%Y\%m\%d )T$( date +\%H\%M\%S ).sql.gz


Yep - they introduced paid subscription tiers and put multi-user support into those: https://www.photoprism.app/editions#compare


You do need to be able to reach your public IP to be able to VPN back in. I have a static IP, so no real concerns there. But, even if I didn’t, I have a Python script that updates a Route53 DNS record for me in my own domain - a self-hosted dynamic DNS really.
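Mine’s a Python script, but the same UPSERT done via the aws CLI shows the shape of it (the zone ID and hostname below are placeholders):

IP=$(curl -s https://api.ipify.org)   # discover the current public IP
aws route53 change-resource-record-sets --hosted-zone-id Z0123456789EXAMPLE --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"vpn.example.com","Type":"A","TTL":300,"ResourceRecords":[{"Value":"'"$IP"'"}]}}]}'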

You certainly can run Wireguard server in a docker container - the good folks over at Linuxserver have just the repo for you.


What are your homelab stats?
I just spent a good chunk of today migrating some services onto new docker containers in Proxmox LXCs. As I was updating my network diagram, I was struck by just how many services, hosts, and LXCs I'm running, so counted everything up.

- 116 docker containers
- Running on 25 docker hosts
- 50 are the same on each docker host - Watchtower and Portainer agent
- 38 Proxmox LXCs (19 are docker hosts)
- 8 physical servers
- 7 VLANs
- 5 SSIDs
- 2 NASes

So, it got me wondering about the size of other people's homelabs. What are your stats?

How are you keeping on top of fleet updates?
Just wondering what tools and techniques people are using to keep on top of updates, particularly security-related updates, for their self-hosting fleet.

I'm not talking about docker containers - that's relatively easy. I have Watchtower pull (not update) latest images once per week (rough sketch at the bottom of this post). My Saturday mornings are usually spent combing through Portainer and hitting the recreate button for those containers with updated images. After checking the service is good, I manually delete the old images.

But, I don't have a centralised, automated solution for all my Linux hosts. I have a few RasPis and a bunch of LXCs on a pair of Proxmox nodes, all running their respective variation of Debian. Not a lot of this stuff is exposed direct to the internet - less than a handful of services, with the rest only accessible over Wireguard. I'm also running OPNsense with IPS enabled, so this problem isn't exactly keeping me up at night right now. But, as we all know, security is about layers.

Some time ago, on one of my RasPis, I did set up Unattended Upgrades and it works OK, but there was a little bit of work involved in getting it set up just right. I don't relish the idea of doing that another 40 or so times for the rest of my fleet. I also don't want all of those hosts grabbing updates at around the same time, smashing my internet link (yes, I could randomise the cron job within a time range, but I'd rather not have to).

I have a fledgling Ansible setup that I'm just starting to wrap my head around. Is that the answer? Is there something better? Would love to hear how others are dealing with this.

Cheers!
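For reference, the Watchtower side of that is just monitor-only mode on a weekly schedule - something like this (the schedule below is an example):

docker run -d --name watchtower -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower --monitor-only --schedule "0 0 3 * * 6"   # checks/pulls new images (3am Saturdays in this example) but never recreates containers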

[advice sought] NAS for Proxmox HA
So I recently (a couple of months ago) moved my fragmented docker-on-raspberry-pi architecture over to a Proxmox cluster. I'm running it on a pair of HP DL360 G6s, and I couldn't be happier. Except, well, I could be happier with just one more thing: **high availability**.

In particular, I want HA for my OPNsense firewall/router, but eventually for more of the workloads my family are depending on for life in general - Home Assistant, Plex, Overseerr, Immich, etc etc.

My current storage setup is a couple of ratty old ARM-based NASes - an ancient Netgear ReadyNAS and an even more ancient QNAP TS-410. They're both populated with 4 x 4TB (max raw size they can take) using RAID5, so I get about 22TB usable across the pair of them. They mostly store media for my Plex setup, but also support my 2N+C backup strategy for stuff like Immich, Paperless, and other important data.

My high-level plan is to grab another DL360, so I can have a quorum, then introduce a new storage system that:

- provides an iSCSI target for my Proxmox cluster; and
- can eventually grow to replace my old NASes.

The two solutions I'm pondering are:

1. Build a TrueNAS setup from scratch - mini ITX case, board - the lot
2. Pick up something tried, true and proven in the market, like a Synology

Up-front cost is a consideration - I have a family to feed, so I can't just run out and buy an 8-bay enclosure and fully populate it with 16TB disks. Whatever I get, I'm likely to want to start with, say, 3 disks and grow it over time.

So, I guess this is a call out to the community to share any and all successes, war stories, and other advice. The more technical, the better. I want to make a sound, data-based decision here, and anecdotes from others who think like me are the best way to set my compass.

Cheers for anything you can offer!

OPNsense on Proxmox WAN speeds
This weekend, I cut my home network over to OPNsense on Proxmox. So far, it's been... OK. I'm having some issues with state tracking on a couple of VLANs, so need to dig into some pcaps from my switch and see what's going on there.

But one question I have is how to get the best out of my hardware, as it seems my WAN speed is a lot less than it should be.

I'm running Proxmox on a HP DL360 G6, with the pair of built-in 1Gbps NICs. One NIC is dedicated to my WAN connection, using a bridge in Proxmox, and it's plugged in directly to my 1Gbps fibre internet. The OPNsense VM has 4 cores, 8GB of RAM, and a 40GB volume.

Using my previous hardware router/firewall (Draytek Vigor 2865), I was easily getting some decent speeds - 500Mbps to 700Mbps+. But, I'm lucky if I can get speeds any higher than about 120Mbps right now through OPNsense.

I've disabled hardware checksum offload and hardware TCP segmentation offload in the OPNsense firewall. Then I found [this post](https://forum.opnsense.org/index.php?topic=16531.0) that suggested doing the same to the NIC and bridge in Proxmox as well (example commands at the bottom of this post). I've even tried rate limiting the interfaces on the OPNsense VM to 1000Mbps (OPNsense says they're 10Gbps), but nothing's made a difference.

So, throwing out to my newfound Lemmy network: does anyone have any suggestions on what to try, or look at, next, please? Kinda worried I might have to go back to the Draytek, which would be a real shame. OPNsense has already proven to be far superior in every other way.
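For anyone who finds this later: the Proxmox-host side of that suggestion boils down to something like the below - eno1 and vmbr1 are just example names for the WAN NIC and its bridge:

ethtool -K eno1 tx off rx off tso off gso off gro off   # disable checksum/segmentation offloads on the physical WAN NIC (example name)
ethtool -K vmbr1 tx off tso off gso off gro off   # and on the Proxmox bridge it feeds (example name)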