• 1 Post
  • 83 Comments
Joined 1Y ago
Cake day: Jun 19, 2023


Something like

“A line with exactly 0 or 1 characters, or a line with a sequence of 1, or 3 or more, characters, repeated at least twice”
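
For the curious, my guess at how that description maps back to an actual pattern - the exact regex is an assumption on my part:

```bash
# Hypothetical PCRE transcription of the description above:
#   ^.?$             - a line with exactly 0 or 1 characters
#   ^(.|.{3,})\1+$   - a sequence of 1, or 3 or more, characters,
#                      repeated at least twice
# (somefile.txt is a placeholder)
grep -P '^.?$|^(.|.{3,})\1+$' somefile.txt
```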



Yup, this - batteries are consumables. They have a service life of ~2-5 years depending on load. If the manual doesn’t tell you how to replace them, it’s basically e-waste already


Depends on what you need:

  • As cheap as possible, but actually want a VM: OCI free tier will be way bigger than you will probably need
  • Happy paying money but still want to learn about Linux things: I’ve had good experiences with Scaleway
  • I just want something I can set up and not think about: don’t use a VPS. Architect your site as a pure-static site and stick it in an S3 bucket. You’ll probably be within the free tier unless you do absolutely bonkers traffic, and once it’s running you can leave it alone for literal years without worrying about patches or upgrades (sketch below)
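
A rough sketch of that last option with the AWS CLI - the bucket name is made up, and the public-read bucket policy you’d also need is omitted:

```bash
# Create a bucket, enable static website hosting, upload the site
aws s3 mb s3://my-static-site-example
aws s3 website s3://my-static-site-example --index-document index.html
aws s3 sync ./public s3://my-static-site-example --delete
```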

If only we lived in a world so simple as to allow the whims of managers, customers and third parties to be completely definable in UML


Keycloak to provide OIDC, although in hindsight I should have gone with ~~Authelia~~ Authentik


There are very few things more obnoxious than an asshole with unsolicited parenting advice



I moved just about everything to Route53 for registration - I run my own DNS so I don’t need to pay for that, and it’s ~40% cheaper than Gandi for better service.

Now I just need to move my .nz domain (R53 supports .{co,net,org}.nz, but not .nz itself?) and the 2 .xyz domains that are “premium” for some reason, so R53 won’t touch them


For anything that is related to my backup scheme, it’s printed out in hard copy and put in an envelope in a fire safe in my house. I can tell you from experience there is nothing more stressful than “oh fuck I need my backups but the key to unlock the backups is in the backups fuck fuck fuck”.

And for future reference, anyone thinking about breaking into my house to get access to my backups just DM me, I’m sure we can come to an arrangement that’s less hassle for both of us


I was in the same place as you a few years ago - I liked swarm, and was a bit intimidated by kubernetes - so I’d encourage you to take a stab at kubernetes. Everything you like about swarm, kubernetes does better, and tools like k3s make it super simple to get set up. There _is_ a learning curve, but I’d say it’s worth it. Swarm is more or less a dead-end tech at this point, and there are a lot more resources about kubernetes out there.
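
To give a sense of how simple: this is the official k3s quick-start one-liner (read the script first if piping curl to sh bothers you):

```bash
# Install a single-node k3s cluster
curl -sfL https://get.k3s.io | sh -
# Sanity check that the node registered
sudo k3s kubectl get nodes
```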


The early twenties intermediate dev on my team was explaining the other week that if you remember a time before smartphones and broadband, you are old


I personally am familiar with 2 organisations with millions of dollars in annual revenue that deploy critical line-of-business applications like this in 2024


They are, but I think the question was more “does the increased speed of an SSD make a practical difference in user experience for immich specifically”

I suspect that the biggest difference would be running the Postgres DB on an SSD, where the fast random access is going to make queries significantly faster (unless you have enough RAM that Postgres can keep the entire DB in memory, where it makes less of a difference).

Putting the actual image storage on SSD might improve latency slightly, but your hard drive is probably already faster than your internet connection so unless you’ve got lots of concurrent users or other things accessing the hard drive a bunch it’ll probably be fast enough.

These are all Reckons without data to back them up, so maybe do some testing
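
One quick test along those lines: check Postgres’ buffer cache hit ratio, since a ratio near 1.0 means reads rarely touch the disk at all. The database name here is an assumption - substitute your immich DB:

```bash
psql -d immich -c "
  SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0)
    AS cache_hit_ratio
  FROM pg_stat_database;"
```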


Pretty much - I try and time it so the dumps happen ~an hour before restic runs, but it’s not super critical


pg_dumpall on a schedule, then restic to back up the dumps. I’m running Zalando Postgres in kubernetes, so scheduled tasks and intercontainer networking are a bit simpler, but you should be able to run a sidecar container in your compose file
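
For the compose case, the sidecar would boil down to something like this on a cron schedule - hosts, paths and the repo location are all assumptions, and restic will want RESTIC_PASSWORD or --password-file set:

```bash
#!/usr/bin/env bash
set -euo pipefail
# Dump every database, then back the dumps up with restic
pg_dumpall -h localhost -U postgres > /backups/dump.sql
restic -r /srv/restic-repo backup /backups
# Trim old snapshots so the repo doesn't grow forever
restic -r /srv/restic-repo forget --keep-daily 7 --keep-weekly 4 --prune
```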


If you figure it out, I know several companies that would be more than willing to drop 7 figures a year to license the tech from you


Yeah, they are mostly designed for classification and inference tasks; given a piece of input data, decide which of these categories it belongs to - the sort of things you are going to want to do in near real time, where it isn’t really practical to ship off to a data centre somewhere for processing.


Dealing with this at the moment - in an org that’s been pretty lax about writing down anything about the what and why of its internal software, trying (with support from the C-suite) to get people to actually write up any amount of detail in their requests is like pulling teeth.

I tend to take that position as well; if it’s not defined, I get to define it. If I ask for feedback or review and get silence, that means you approve.


Seems pretty reasonable. At the end of the day people have to eat, so projects like this either trundle on as hobby-and-spare-time projects for a few years until people get bored and burnt out, or you find a way to make working on the project a paid gig for the core people


This is an “x-y question” - what are you actually trying to achieve?

Clearly you are concerned about… someone… knowing your home IP address - who, and why?


Jia Tan probably wasn’t one person - most likely the identity was operated by a team of people at an intelligence agency, probably Russian or Chinese


As in, hardware RAID is a terrible idea and should never be used. Ever.

With hardware RAID, you are moving your single point of failure from your drive to your RAID controller - when the controller fails, and they fail more often than you would expect, you are fucked: your data is gone, nice try, play again some time. In theory you could swap the controller out, but in practice it’s a coin flip whether that will actually work, unless you can find exactly the same model controller with exactly the same firmware manufactured on the same production line while the moon was in the same phase - and even then your odds are still only 2 in 3.

Do yourself a favour: look at an external disk shelf/DAS/drive enclosure that connects over SAS, and do RAID in software. Hardware RAID made sense when CPUs were hewn from granite and had clock rates measured in tens of megahertz, so offloading things to dedicated silicon made things faster, but that hasn’t been the case this century.
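
By way of contrast, a minimal software mirror with mdadm - the device names are assumptions, check lsblk before copying this:

```bash
# Create a 2-disk RAID1 array in software
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# Watch the initial sync
cat /proc/mdstat
```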


I think the conclusion is that as a population of people grows the average behaviour stays pretty much fine, but the extremes of the bell curve become more apparent


Yeah, not even slightly true. I know a few people who work support for a major piece of financials software. The company has a written procedure for dealing with death threats that gets exercised multiple times per year


It’s not just Let’s Encrypt - the common names of any SSL cert issued by a public CA have to be recorded in a public certificate transparency log. You can use tools like https://crt.sh to search the logs
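
crt.sh also exposes the same search as JSON, e.g. to list the logged certificate names for a domain (example.com is a placeholder):

```bash
curl -s 'https://crt.sh/?q=example.com&output=json' | jq -r '.[].common_name' | sort -u
```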


Previously Gandi, but they’ve jacked up their prices and cut features, so I’m in the process of moving to AWS Route53.

My main requirements are:

  • Competitively priced (doesn’t need to be the absolute cheapest, but the feature set better justify the price)
  • Able to manage domains with Terraform (I’ve got 10 domains, and copy-pasting DNSSEC keys around gets old really fast)
  • Not be CloudFlare (fuck those guys in particular)

I’d considered doing something similar at some point but couldn’t quite figure out what the likely behaviour was if the workers lost connection back to the control plane. I guess containers keep running, but does kubelet restart failed containers without a controller to tell it to do so? Obviously connections to pods on other machines will fail if there is no connectivity between machines, but I’m also guessing connections between pods on the same machine will be an issue if the machine can’t reach coredns?


Do this specifically so a judge has to rule if someone is being a dick or not. File amicus briefs on the definition of being a dick. Assemble a jury of peers to decide if the defendants are being a dick. Appeal to the supreme court to rule on whether the lower court erred in its judgement of the dickishness in question in this matter.


I’ve started a similar process to yours and am moving domains as they come up for renewal, with a slightly different technical approach:

  • I’m using AWS Route 53 as my registrar. They aren’t the cheapest, but still work out at about half the price of Gandi and one of my key requirements was to be able to use Terraform to configure DS records for DNSSEC and NS records in the parent zone
  • I run an authoritative nameserver on an OCI free tier VM using PowerDNS, and replicate the zones to https://ns-global.zone/ for redundancy. I’m investigating setting up another authoritative server on a different cloud provider in case OCI yank the free tier or something
  • I use https://migadu.com/ for email

I have one .nz domain which I’ll need to find a different registrar for, cos for some reason route53 doesn’t support .nz domains, but otherwise the move is going pretty smoothly. Kinda sad to see where Gandi has gone - I opened a support ticket to ask how they can justify being twice the price of their competitors and got a non-answer


This is relevant to my interests, thanks. Looks like it’s pretty early stages though?


  • An HP ML350p w/ 2x HT 8-core Xeons (I forget the model number) and 256GB DDR3, running Ubuntu and K3s as the primary application host
  • A pair of Raspberry Pi’s (one 3, one 4) as anycast DNS resolvers
  • A random minipc I got for free from work, running VyOS as my border router
  • A Brocade ICX 6610-48p as core switch

Hardware is total overkill. Software wise everything is running in containers, deployed into kubernetes using helmfile, Jenkins and gitea


Something like odoo (https://www.odoo.com/) might work?

You probably aren’t going to find something that works for your specific needs right out of the box, so your best bet would be finding a platform that gets you 80% of the way there and provides enough of a plugin mechanism that you can develop the remaining 20% of the functionality yourself


This is something I’m also interested in; if you find something please update us


Ubuntu LTS, but in the process of replacing it with Debian:

  • There have been some technical decisions over the last few years that I don’t think fit my needs terribly well; chief among these is the push for Snaps - they are a proprietary distribution format that adds significant overhead without any real benefit, and Canonical has been pushing more and more functionality into Snaps
  • I previously chose Ubuntu over Debian because I needed more up-to-date versions of things like Python and PHP; with Docker this isn’t really a concern any more, so the slower, more conservative approach Debian takes isn’t as big of an issue


Look, it’s all about authorial intent - if the author had wanted their book to be easy to reference or accessible to people who use screen readers, they would have published a DRM free PDF in the first place. Gotta respect the artist’s vision.


From the previous issue it sounds like the developer has proper legal representation, but in his place I wouldn’t even begin talking with Haier until they formally revoke the C&D, and provide enforceable assurances that they won’t sue in the future.

Also, I don’t know what their margins are like, but even if this cost them an extra $1000 in AWS fees on top of what their official app would have cost them (I seriously doubt it would be that much unless their infrastructure is absolute bananas), it would probably only take losing a single-digit number of sales for them to come out worse off from this.



https://glitchtip.com/

API compatible with Sentry, but with lower resource consumption - it’s missing some of the newer features (the big one for me is tracing, but you can just install Tempo).

Not actually tried it, but looks promising


Tool to manage CLI tools
I'm trying to find a thing, and I'm not turning up anything in my web searches, so I figured I'd ask the cool people for help.

I've got several projects, tracked in Git, that rely on having a set of command line tools installed to work on locally - as an example, one requires Helm, Helmfile, sops, several Helm plugins, Pluto, Kubeval and the Kubernetes CLI. Because I don't hate future me, I want to ensure that I'm installing specific versions of these tools rather than just grabbing whatever happens to be the latest version. I _also_ want to ensure that my CI runner grabs the same versions, so I can be reasonably sure that what I've tried locally will actually work when I go to deploy it.

My current solution to this is a big ol' Bash script, which _works_, but is kind of a pain to maintain. What I'm trying to find is a tool where I:

  • Can write a definition, ideally somewhere shared between projects, of what it means to "install tool X"
  • Include a file in my project that lists the tools and versions I want
  • Run the tool on my machine and let it go grab the platform- and architecture-specific binaries from wherever, and install them somewhere that I can add to my $PATH for this specific project
  • Run the tool in CI and do the same - if it can cache stuff then awesome

Linux support is a must; other platforms would be nice as well. Basically I'm looking for Python's pip + virtualenv workflow, but for prebuilt tools like helm, terraform, sops, etc. Anyone know of anything? I've looked at homebrew (seems to want to install system-wide) and VSCode dev containers (doesn't solve the CI need, and I'd still need to solve installing the tools myself).
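
For reference, roughly the kind of thing the script does for each tool - pin a version and install it project-locally (the version and paths here are illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail
HELM_VERSION="3.14.4"            # pinned per project
BIN_DIR="$PWD/.tools/bin"
mkdir -p "$BIN_DIR"
# Fetch the pinned release and extract just the helm binary
curl -fsSL "https://get.helm.sh/helm-v${HELM_VERSION}-linux-amd64.tar.gz" \
  | tar -xz -C "$BIN_DIR" --strip-components=1 linux-amd64/helm
export PATH="$BIN_DIR:$PATH"
helm version --short             # confirm the right version is on PATH
```

Now imagine that times a dozen tools, each with a slightly different download URL scheme, and you can see why I'd rather not maintain it.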