You don’t. That’s not what Caddy is for. Use a bastion host for SSH.

Edit: link https://www.redhat.com/sysadmin/ssh-proxy-bastion-proxyjump
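
A minimal sketch of the ProxyJump setup from that article; the host names and addresses here are placeholders:

```
# ~/.ssh/config
# The bastion is the only host exposed to the internet.
Host bastion
    HostName bastion.example.com
    User admin

# Internal hosts are reached by jumping through the bastion.
Host internal-box
    HostName 10.0.0.12
    User admin
    ProxyJump bastion
```

With that in place, `ssh internal-box` transparently hops through the bastion; nothing else needs an SSH port open to the world.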


The answer to your overarching question is not “common maintenance procedures” but “change management processes”.

When things change, things can break. Immutable OSes and declarative configuration notwithstanding.

OS and configuration drift only actually matter if you’ve got a documented baseline. That’s what your declaratives can solve. However, they don’t help when you’re tinkering on a home server and drifting your declaratives right along with it.

I’m pretty certain every service I want to run has a docker image already, so does it matter?

This right here is the attitude that’s going to undermine everything you’re asking. There’s nothing about containers that is inherently “safer” than running native OS packages or even building your own. Containerization is about scalability and repeatability, not availability or reliability. It’s still up to you to monitor changelogs and determine exactly what is going to break when you pull the latest Docker image. That’s no different from a native package.
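
As a sketch of what that looks like in practice (the image and tag are just examples): pin your images to an explicit tag or digest instead of :latest, so nothing changes until you’ve read the changelog and bumped the line yourself.

```yaml
# docker-compose.yml (excerpt)
services:
  web:
    # Pin an explicit tag (or better, a digest) rather than :latest,
    # so the image only changes when you deliberately edit this line.
    image: nginx:1.27.1
    restart: unless-stopped
```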


Just because you’ve never seen them doesn’t make it not true.

Try using Quadlet and a .container file on current Debian stable. It doesn’t work; the podman there is too old to know about it. The architecture changed, and Quadlet is now the recommended approach.

Try setting device permissions in the container after updating to Debian testing. Also doesn’t work the same way. Architecture changed.
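
For context, this is roughly what a Quadlet .container unit looks like on a podman new enough to support it (the image, device path, and file name are made up for illustration):

```ini
# /etc/containers/systemd/myapp.container
# Quadlet (podman >= 4.4) turns this file into a generated systemd service.
[Unit]
Description=Example container managed by Quadlet

[Container]
Image=docker.io/library/nginx:1.27
# Device pass-through happens here instead of via raw podman flags.
AddDevice=/dev/ttyUSB0

[Install]
WantedBy=multi-user.target
```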

Red Hat hasn’t ruined it yet, but Ansible should give you a pretty good idea of the potential trajectory.


It isn’t. Its architecture changes pretty significantly with each version, which is annoying when you need it to be stable. It’s also dominated by Red Hat, which is a legitimate concern since they’ll likely start paywalling capabilities eventually.


Every complaint here is PEBKAC.

It’s a legit argument that Docker has a stable architecture while podman is still evolving, but that’s how software do. I haven’t seen anything that isn’t backward compatible, or very strongly deprecated with notice.

Complaining about SELinux in 2024? setenforce 0, audit2allow, and get on with it.

Docker doing that while SELinux is enforcing is an actual bad thing that you don’t want.
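
The workflow being referenced, roughly (the module name is arbitrary):

```sh
# Temporarily go permissive so the workload runs while AVC denials are logged
sudo setenforce 0

# ...reproduce the failure, then build and load a local policy module from the denials
sudo ausearch -m AVC -ts recent | audit2allow -M my-local-policy
sudo semodule -i my-local-policy.pp

# Back to enforcing
sudo setenforce 1
```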


Function/class/variables are bricks, you stack those bricks together and you are a programmer.

I just hired a team to work on a bunch of Power Platform stuff, and this “low/no-code” SaaS platform paradigm has made that mentality almost literal.


Yup. That’s treating VMs like containers. The alternative, older-school method is cold snapshots of the VM: apply patches/updates (after pre-prod testing and validation), usually in an A/B or red/green phased rollout, and roll back to the snapshots when things go tits up.


If you are in a position to ask this question, it means you have no actual uptime requirements, and the question is largely irrelevant. However, in the “real” world where seconds of downtime matter:

Things not changing means less maintenance, and nothing will break compatibility all of the sudden.

This is a bit of a misconception. You have just as many maintenance cycles (e.g. “Patch Tuesdays”) because packages constantly need security updates. What it actually means is fewer, better-documented changes per maintenance cycle. That makes it easier and faster to determine what’s likely to break before you even enter your testing cycle.

Less chance to break.*

Sort of. Security changes frequently break running software, especially third-party software that just happened to need a certain security flaw or out-of-date library to function. The world has gotten much better about this, but it’s still a huge headache.

Services are up to date anyway, since they are usually containerized (e.g. Docker).

Assuming that the containerized software doesn’t need maintenance is a great way to run broken, insecure containers. Containerization helps to limit attack surfaces and outage impacts, but it isn’t inherently more secure. The biggest benefit of containerization is the abstraction of software maintenance from OS maintenance. It’s a lot of what makes Dev(Sec)Ops really valuable.

Edit since it’s on my mind: containers are great, but amateurs always seem to forget they’re all sharing the host kernel. One container causing a kernel panic, or hosing things through misconfigured shared-memory (SHM) settings, can take down the entire host. Virtual machines are much, much safer in this regard, but have their own downsides.
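
If you want to blunt that failure mode, per-container resource limits help; a sketch using Docker’s flags (the values and image are arbitrary):

```sh
# Cap shared memory, RAM, and process count so one misbehaving container
# can't starve the host whose kernel it shares.
docker run -d \
  --shm-size=256m \
  --memory=1g \
  --pids-limit=512 \
  nginx:1.27
```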

And, for Debian especially, there’s one of the biggest availability of services and documentation, since it’s THE server OS.

No it isn’t. THE server OS is the one that fits your specific use case best. For us self-hosted types, sure, we use Debian a lot. Maybe. For critical software applications, organizations want a vendor to support them, if for no other reason than to offload liability when something goes wrong.

It is running only rarely. Most of the time, the device is powered off. I only power it on a few times per month when I want to print something.

This isn’t a server. It’s a printing appliance. You’re going to have a similar experience of needing updates with every power-on, but with CoreOS, you’re going to have many more updates. When something breaks, you’re going to have a much longer list of things to track down as the culprit.

And, last but not least, I’ve lost my password.

JFC, uptime and stability aren’t your problem. You also very probably don’t need to wipe the OS to recover a password.

My Raspberry Pi on the other hand is only used as print server, running Octoprint for my 3D-printer. I have installed Octoprint there in the form of Octopi, which is a Raspian fork distro where Octoprint is pre-installed, which is the recommended way.

That is the answer to your question. You’re running this RPi as a “server” for your 3d printing. If you want your printing to work reliably, then do what Octoprint recommends.

What it sounds like is you’re curious about CoreOS and how to run other distributions. Since breakage is basically a minor inconvenience for you, have at it. Unstable distros are great learning experiences and will keep you up to date on modern software better than “safer” things like Debian Stable. Once you get it doing what you want, it’ll usually keep doing that. Until it doesn’t, and then learning how to fix it is another great way to get smarter about running computers.

E: Reformatting


This is an XY problem: you’re eventually going to solve the actual problem, which isn’t actually systemd, after looking real hard at ways to replace systemd.

Or else you’re going to find yourself in an increasingly painful maintenance process trying to retrofit rc scripts into constantly evolving distributions.

There’s a lot I prefer about the old SysV init, and I’m still not thrilled that everything is becoming more dependent on these large monolithic daemons. But I’ve yet to find a systemd problem that wasn’t just me not knowing how to use systemd.
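
For comparison, the systemd replacement for a typical rc script is short; a minimal sketch (the service name and binary path are placeholders):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=Example service
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl enable --now myapp` and you’re done; no PID files or start/stop/status boilerplate.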


.local is reserved for mDNS responses; don’t use it.

It’s more than best practice. Your Active Directory domain controllers want to be the resolvers for their members, separate from other zones holding things like external MX records. Your AD domain should always be a separate zone, i.e. a subdomain: “ad.example.com”.

If your DCs are controlling members at the top level, you’ll eventually run into problems with internet-facing services and public NS records.
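
A sketch of the split, using the usual placeholder domain:

```
example.com       public zone: website, MX, public NS records
ad.example.com    AD-integrated zone: DCs authoritative, internal-only
```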

Also, per below: you can’t get commercially signed certificates for fake domains. Self-hosting a certificate authority is a massive pain in the ass. Don’t try unless you have a real need, like work-related learning.



Instead of paying for multiple services, I am now renting a decently sized VPS on Scaleway, and hosting all my projects on them.

That’s not self-hosting. That’s moving your managed services down the stack from PaaS to IaaS.

It’s an unserious take on the impacts as well. No discussion of availability? Backups? Server hardening and general security? Access and authentication models? Sysadmin work on a VPS is more than “running a bunch of commands now and then”, and the author ignores that entire workload.


I had no real idea how to phrase it, but all these posts have helped. What I was actually focused on when I posted was mainly hardware that can do what the Arlo cameras do:

  • Wifi + battery/solar; my house is old and hardwiring is a pain in the ass.
  • High def, preferably 4k, but 1080 is ok.
  • Night vision, color or not doesn’t matter
  • Motion-activated, and preferably some way to filter out, and not trigger on, things like passing cars.
  • As small a form-factor as possible.

The Reolink hardware mentioned below seems to fit the bill.

I hadn’t even really considered the software, as I don’t need a lot of features. All I need is motion-activated capture streamed to some local storage, and the ability to view a live stream when I want one. But it looks like there are a lot of options I need to consider.


Reolink looks like a solid answer, thanks.


I already hate Ubiquiti’s UniFi networking that I got myself stuck with. I won’t do any of their other products.


I’m somewhat stuck on UniFi for wifi APs and routers, because all the other consumer-grade devices can’t handle the number of small IoT devices I’ve got. Netgear and Asus just lose connections with ESP devices and refuse to let them connect after about a dozen. The commercial-grade stuff, in addition to being too expensive, is all rack-mounted, high power draw, and noisy af.

Aside from the fact that my stuff seems stable on the Ubiquiti hardware, I hate the products. The interface is terrible, UniFi insists on hiding the advanced networking behind a half-assed GUI, the SSH console lacks half the features of even that terrible GUI, and every time I try to create a new routed network, the wifi devices stop connecting.


Anyone know of self-hostable security cameras?
Edit: ideally wifi cameras that I can solar power. Looking to replace my Arlo cameras with something self-hostable. Arlo lets you store on a USB stick, but there's no way to get out from under their cloud, which gets more expensive all the time.

Depends on your specific VPN, but look for a feature or setting called “split tunnel.” It should create a separate non-VPN route for the local network.

It’s usually a client-side setting, but not always, if the tunnel is built at connection time.
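
As one concrete example, a WireGuard client config controls the split with AllowedIPs (the keys, endpoint, and subnets here are placeholders):

```ini
# wg0.conf (client side)
[Interface]
PrivateKey = <client private key>
Address = 10.8.0.2/32

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
# Split tunnel: only these subnets route through the VPN;
# everything else, including the local LAN, stays on the normal route.
# (AllowedIPs = 0.0.0.0/0 would be a full tunnel instead.)
AllowedIPs = 10.8.0.0/24, 192.168.50.0/24
```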


Analogies are inherently false equivalences.

It’s illustrating the problem with the argument, not equating DRM technology with puppy kicking.