• 2 Posts
  • 23 Comments
Joined 1Y ago
Cake day: Jul 05, 2023

  1. It might be a card grabber.
  2. Don’t put in real card details, of course.

For me the value of podman is how easily it works without root. Just install and run: no need for sudo or adding myself to the docker group.

I use it for testing and dev work, not for running any services.
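To illustrate the "just install and run" point, a quick sketch (the image is just an example; this assumes a stock podman install with no extra setup):

```shell
# No sudo, no docker group: podman talks to a per-user runtime
# instead of a root-owned daemon socket.
podman run --rm -p 8080:80 docker.io/library/nginx:alpine

# Root inside the container is mapped to your own UID via user
# namespaces; this shows the mapping for the current user.
podman unshare cat /proc/self/uid_map
```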


The TPM seals the encryption key against the secure boot state. That way, if an attacker disables or alters secure boot, the TPM won’t unseal the key. I use clevis to decrypt the drive.
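For reference, the bind step looks roughly like this (the device path is illustrative; PCR 7 is the register that tracks secure boot state):

```shell
# Seal a LUKS keyslot to the TPM, bound to PCR 7 (secure boot state).
# If secure boot is disabled or altered, PCR 7 changes, the TPM
# refuses to unseal, and the LUKS passphrase prompt is the fallback.
clevis luks bind -d /dev/nvme0n1p2 tpm2 '{"pcr_bank":"sha256","pcr_ids":"7"}'
```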


Thank you… I had to learn kubernetes for work. It was around 2 weeks of time investment, and then I figured out I could use it to fix my docker-compose pains at home.

If you run a lot of services, I can attest that kubernetes is definitely not overkill; it is a good tool for managing complexity. I have 8 services on a single-node kubernetes cluster, and I like how I can manage the configuration for each service independently of the others and of the underlying infrastructure.


don’t create one network with Gitlab, Redmine and OpenLDAP - do two, one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.

This was the setup I had, but now I am already on kubernetes with no intention of switching back.
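For anyone curious, the quoted two-network layout looks roughly like this in compose terms (service and network names are made up):

```yaml
# Two isolated networks that both include OpenLDAP, so Gitlab and
# Redmine can each resolve the LDAP container but never each other.
services:
  gitlab:
    image: gitlab/gitlab-ce
    networks: [gitlab-net]
  redmine:
    image: redmine
    networks: [redmine-net]
  openldap:
    image: osixia/openldap
    networks: [gitlab-net, redmine-net]
networks:
  gitlab-net:
  redmine-net:
```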


I was writing my own compose files, but see my response to a sibling comment for the issue I had.


If one service needs to connect to another, I have to add a shared network between them. In that case, the services essentially share a common DNS namespace. DNS resolution would routinely leak from one service to another and cause outages: e.g. when I connected the Gitlab and Redmine containers to the OpenLDAP container, sometimes Redmine’s nginx container would reach the Gitlab container instead of the Redmine container, and the Gitlab container would access Redmine’s DB instead of its own.

I maintained some workarounds; for example, starting Gitlab after starting Redmine would work fine, but starting them the other way round would trigger the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
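A rough sketch of the kind of policy that replaced the shared network, assuming each service lives in its own namespace (all names here are illustrative):

```yaml
# Allow only the Gitlab and Redmine namespaces to reach OpenLDAP on
# port 389; each service keeps its own DNS view, so nothing leaks.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ldap-clients
  namespace: openldap
spec:
  podSelector:
    matchLabels:
      app: openldap
  policyTypes: [Ingress]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: gitlab
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: redmine
      ports:
        - protocol: TCP
          port: 389
```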


As someone who has been operating kubernetes on my home server for 2 years, I can say that using containers is much more maintainable than installing everything directly on the server.

I tried using docker-compose first to manage my services. It works well for 2-3 services, but as the number of services grew they started to interfere with each other; at that point I switched to kubernetes.



I run a crude automation on top of OpenSSL CA. It checks for certain labels attached to kubernetes services. Based on that it creates kubernetes secrets containing the generated certificates.
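The certificate-minting part can be sketched with plain openssl (names and lifetimes here are made up; the label-watching and secret creation are the parts the automation wraps around this):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Work in a scratch dir; the real automation keeps a long-lived CA key.
workdir=$(mktemp -d)
cd "$workdir"

# One-off CA key and self-signed CA certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
  -subj "/CN=homelab-ca" -days 365 2>/dev/null

# Per-service key and CSR; the CN would come from the service's label.
openssl req -newkey rsa:2048 -nodes -keyout svc.key -out svc.csr \
  -subj "/CN=myservice.default.svc" 2>/dev/null

# Sign the CSR with the CA.
openssl x509 -req -in svc.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -out svc.crt -days 90 2>/dev/null

# The automation would then create the secret, roughly:
#   kubectl create secret tls myservice-tls --cert=svc.crt --key=svc.key
openssl verify -CAfile ca.crt svc.crt
```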


It might be a failing fan. I have an Intel NUC whose fan started sounding like an air raid siren, so I took the fan out, drilled a hole into its bearing and added coconut oil. It is still working fine to this day, but buying a new fan is probably the better option.


It is for a challenge. The goal is to build a cloud where the workload is decoupled from the servers, and the servers are decoupled from the users who deploy the workload, with redundant network and storage and no single choke point for network traffic, and I am trying to achieve all this on a small budget.


The level1 video shows thunderbolt networking, though. It is an interesting concept, but it requires nodes with at least 2 thunderbolt ports in order to have more than 2 nodes.


If redundant everything is important then you need to change your planning toward proper rack servers and switches

I ain’t got that budget, man.


Yes, the entire network is supposed to be redundant and load-balanced, except for some clients that can only connect to one switch (but if a switch fails it should be trivial to just move them to another switch.)

I am choosing dell optiplex boxes because they are the smallest x86 nodes I can get my hands on. There is no pcie slot in them other than the m.2 slot, which will be used for the SSD.


I plan to have 2 switches.

Of course, if a switch fails, client devices connected to the switch would drop out, but any computer connected to both switches should have link redundancy.
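For the dual-uplink machines, Linux bonding in active-backup mode gives exactly that: one logical interface with failover if either switch dies. A minimal systemd-networkd sketch (interface names are examples):

```shell
# /etc/systemd/network/10-bond0.netdev
[NetDev]
Name=bond0
Kind=bond

[Bond]
# active-backup needs no cooperation from the switches, unlike
# 802.3ad LACP, which would require the two switches to act as one
# logical switch (MLAG) to bond across them.
Mode=active-backup
MIIMonitorSec=100ms

# /etc/systemd/network/20-uplinks.network
[Match]
Name=enp1s0 enp2s0

[Network]
Bond=bond0

# /etc/systemd/network/30-bond0.network
[Match]
Name=bond0

[Network]
DHCP=yes
```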


There would be some quality-of-life improvements, like being able to replace a switch without pulling down the entire cluster, but it is mostly for the challenge.


I am building my personal private cloud. I am considering second-hand dell optiplexes as worker nodes, but they only have 1 NIC, and I’d need a contraption like this for my redundant network. Then this wish came to my mind. Theoretically, such a one-box solution could be faster than gigabit too.

sleep 120 #TODO: actually solve a problem
echo "Sorry, we could not solve this problem."

Gosh, if I ever get into the business of writing software for spacecraft with long-duration missions, I’ll have to test for such cases.


Go: Why is your every second sentence a caution?





Based on the title, I misunderstood as “oh shit I messed up real bad while booted into this ISO”. (I have that one too.)

Until recently I had been switching between the Ubuntu ISO and a custom Arch ISO. Now I keep a regular Arch install on a fast USB drive for repairs (not an ISO).