
Podman is not yet ready for mainstream, in my experience

My experience varies wildly from yours, so please don’t take this bit as gospel.

I have yet to find a container that doesn’t work perfectly well in podman. The options may not be identical to docker’s, but most issues I’ve found with running containers boil down to things that would be equally a problem in docker. A sample:

  • “rootless” containers are hard to configure. It can almost always be fixed with --privileged or some combination of permission flags. This would be equally true for docker; the only meaningful difference is that podman tries to push everything into rootless. You don’t have to.
  • network filesystems cause headaches, especially smbfs with apps that keep sqlite databases. I’ve had to use NFS, or ext4 inside a network-mounted image, for some apps. This problem is identical for docker.
  • container networking, for specific cases, needs to be managed carefully. These cases are identical for docker.

And that’s it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can’t do with docker. If you’re having trouble running containers under podman, try the --privileged shortcut, confirm that it works, and then double back if you think you really need rootless.
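
For example, here’s a sketch of that workflow. The image, name, and port are placeholders, and podlet flags may differ slightly between versions, so treat it as the shape of the thing rather than gospel:

```
# 1. Get it working interactively first
podman run -d --name myapp -p 8080:80 docker.io/library/nginx

# 2. Turn the same command line into a quadlet unit with podlet
podlet podman run --name myapp -p 8080:80 docker.io/library/nginx \
  > ~/.config/containers/systemd/myapp.container

# 3. Let systemd generate and start the service from the quadlet
systemctl --user daemon-reload
systemctl --user start myapp.service
```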


I haven’t deployed Cloudflare but I’ve deployed Tailscale, which has many similarities to the CF tunnel.

  • Is the tunnel solution appropriate for Jellyfin?

I assume you’re talking about speed/performance here. The connection-establishment overhead happens mostly once, up front, and it’s not much. In the case of Tailscale there’s additional wireguard encryption overhead on active connections, but it remains fast enough for high-bandwidth video streams. (I download torrents over wireguard, and they download much faster than realtime.) Cloudflare’s solution only adds encryption in the form of TLS to their edge. Everything these days uses TLS; you don’t have to sweat that performance-wise.

(You might want to sweat a little over the fact that cloudflare terminates TLS itself, meaning your data is transiting its network without encryption. Depending on your use case that might be okay.)

  • I suppose it’s OK for vaultwarden as there isn’t much data being transferred?

Performance-wise, vaultwarden won’t care at all. But please note the above caveat about cloudflare, and be sure you really want your vaultwarden TLS terminated by Cloudflare.

  • Would it be better to run nginx proxy manager for everything or can I run both of the solutions?

There’s no conflict between the two technologies. A reverse proxy like nginx or caddy can run quite happily inside your network, fronting all of your homelab applications. Think of a reverse proxy as a special website that branches out to every other website. With that model in mind, the tunnel provides access to the reverse proxy, which provides access to everything else on its own. This is exactly what I’m doing with tailscale and caddy.
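
To make that concrete, here’s a minimal sketch of that kind of Caddyfile; the hostnames and upstream addresses are made up for illustration:

```
# Caddyfile: one reverse proxy fronting several homelab apps.
# Caddy terminates TLS per site block; the tunnel just gets you here.
jellyfin.home.example.net {
    reverse_proxy 10.0.0.10:8096
}

vaultwarden.home.example.net {
    reverse_proxy 10.0.0.11:8080
}
```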

  • General recs

Consider tailscale? Especially if you’re using vaultwarden from outside your home network. There are ways to set it up like cloudflare, but the usual way is to install tailscale on the devices you are going to use to access your network. Either way it’s fully encrypted in transit through tailscale’s network.


Wouldn’t even take a month; just prepay for those reserved instances.


Thanks! I’ll try this and report back. This sounds like a version of (#1) - merge accounts.


immich SSO migration path for existing user?
I'm an immich user, switching from a standard u/p login to an SSO-based login. I've tested the SSO login successfully; it seems to work, and I'm not having any issues with that. However, the account generated by SSO login has a different email address and identifiers from the account I created earlier. I don't want to start from scratch with my photos, as I've spent countless hours updating metadata. I think I need one of the following:

1. A supported, tested way to merge one account into another. I don't know if this would be similar to the "partner sharing" feature, but I don't want to simply share the photos; I want full control over them, including: if I delete a photo, it's gone forever.
2. A tested way to manually update the database to change all identifiers over to the new account.
3. A way to log in to the existing account via my SSO portal. I can create any SSO user I want, for example.
4. A way to export the entire library with metadata and re-import it to the new SSO account, structured exactly the same way. Ideally this would also restore anything ML has done with my photos, but it's not a disaster if I have to wait for ML to recreate what it already did in the new account.

Does anyone have information on how to achieve one of the above?

Followup question:

- Can anyone confirm with certainty that metadata changes I made in immich have been saved in the image files in `/library/upload/*`? I am already making backups (both pg_dump and the entire contents of the library), but it would be nice to know where the metadata is actually kept, in case I have to do DR.
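
For anyone who wants to check the same thing on their own install, this is how I've been spot-checking a file (exiftool assumed; the path components are placeholders):

```
# Show all embedded tags, grouped by source, for one uploaded original
exiftool -a -G1 -s /library/upload/<user-id>/<photo>.jpg

# Or compare just the fields I edited in immich
exiftool -Description -DateTimeOriginal -GPSLatitude -GPSLongitude \
  /library/upload/<user-id>/<photo>.jpg
```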

Home Assistant’s main use case is showing you where your house is on a single map, though. Not sure how immich works, but if it’s one tile per photo with location data, that would be a MUCH bigger ask.


  1. Seems like a very reasonable objection to me. I’d guess that most of us Immich users are using it in the first place because it improves the privacy of our photos, and a third party seeing our location data certainly undermines that.
  2. I would have complained had I noticed, so you might be the first one to notice. Immich’s userbase isn’t huge right now, so it’s definitely possible.
  3. Featurewise, I’d like: a) a clearly documented way to disable map data leaving my server; b) a set of well-integrated choices (maybe even just two, as long as one of them is something like openstreetmap); c) the current configurability to be well documented.
  4. I’d love it if all such outbound data streams were also documented. Many security- and privacy-focused products give you a “quiet” mode of some kind, where you can turn off everything that sends your data somewhere else. It’s a requirement in many enterprise installations.


Some troubleshooting thoughts:

What do you mean when you say SSH is “down”:

  1. Connection refused. (fail2ban’s activity could result in a connection refused, but a VPN should have avoided that problem, as you said.)
  2. Connection timeout. Probably a failure at the port forwarding level.
  3. Connection succeeded but closed. This can happen for a few reasons, such as the system being in an early boot-up state; there’s usually a message in this case.
  4. Connection succeeded but auth rejected. This can happen if your OS failed to boot but came up in a fallback state of some kind.

Knowing which one of these it is can give you a lot more information about what’s wrong:

System can’t get past initial boot = Maybe your NAS is unplugged? Maybe your home DNS cache is down?

Connection refused = either fail2ban, or possibly your home IP has moved and you’re trying to connect to somebody else’s computer (nginx is very popular, after all; it’s not impossible somebody else at your ISP has it running). This can also be a port forwarding failure = something’s wrong with your router.

Connection succeeded + closed is similar to “can’t get past initial boot”.

Auth rejected might give you a fallback option if you can figure out a default username/password, although you should hope that’s not the case because it means anyone else can also get in when your system is in fallback.
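
If you can run anything at all against your home IP, two quick probes will tell you which case you’re in. A sketch, with a placeholder hostname:

```
# Does the TCP port answer at all? "refused" vs. a hang tells you a lot.
nc -vz home.example.com 22

# If the port answers, watch where the SSH handshake dies.
ssh -v you@home.example.com
# - banner exchange then disconnect -> early-boot / fallback state (case 3)
# - "Permission denied"             -> auth rejected (case 4)
```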

Very few of these things are actually fixable remotely, btw. I suggest having your sister unplug everything related to your setup, one device at a time. Internet router, raspberry pi, NAS, your VM host, etc. Make sure to give them a minute to cool down. Hardware, particularly cheap hardware, tends to fail when it gets hot, and this can take a while to happen, and, well, it’s been hot.

Here are a few things with a high likelihood of failing when you’re away from home:

  • heat, as previously mentioned.
  • running out of disk space. Maybe you’re logging too much; throw some more disk in there and tune down the logging. This can definitely affect SSH, and definitely won’t be fixed by a reboot. (See the checks after this list.)
  • OOM failures (or other resource leaks). This isn’t likely to affect your bare metal ssh, but it could. Some things leak memory, and this can lead to cascading process destruction by the OS. In this scenario you’d probably be able to connect to things in the first few minutes after a reboot, though.
  • shitty cabling. Sometimes stuff just falls out of the socket, if it wasn’t plugged in perfectly to begin with. (Heat can also contribute to this one.)
  • reliance on a cloud service that’s currently down. (This can include: you didn’t pay the bill.) Hopefully your OS boot doesn’t fail due to a cloud service, but I’ve definitely seen setups that could.
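
For the disk-space and OOM bullets, the checks are one-liners once you do get a shell; these are all standard tools:

```
df -h                                  # any filesystem at 100%?
journalctl --disk-usage                # how much space the journal is eating
journalctl --vacuum-size=200M          # trim it down if so
free -h                                # current memory headroom
dmesg -T | grep -i "killed process"    # did the OOM killer fire?
```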

This kid’s full name is Fortran Sucksdontlearnit Johnson. His parents actually hated Fortran. Imagine the disappointment they’re about to experience.


I probably won’t switch to Plex because of what they did with sharing all your activity without your consent, but I’m curious what you liked better about it as a music backend?


Good suggestion! I intend to mess with finamp and symfonium. I had no idea jellyfin was so popular as a music backend so I’ll just keep using that.


Yeah, I’ll probably just buy a few more albums than I used to. Streaming payments have always been a way to wring dollars out of artists, so I’d rather find other ways anyhow.


Gluetun is kind of a wrapper around wireguard or openvpn that greatly simplifies setup and configuration.

I have a VM that runs wireguard to airvpn, in a gluetun container. Then you share that container’s network with a qbittorrent container (or pick your torrent client) and an nzbget container (or pick your nzb downloader). Tada, your downloaders are VPN’d forever.
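
Roughly, the shape of it looks like this, compose-style (convert to quadlets if that’s your setup). The AirVPN wireguard variables are from memory of gluetun’s docs, so double-check them against your version:

```yaml
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=airvpn
      - VPN_TYPE=wireguard
      - WIREGUARD_PRIVATE_KEY=<your key>
      - WIREGUARD_ADDRESSES=<your address>
    ports:
      - 8080:8080   # qbittorrent web UI, published via gluetun's netns

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"   # all traffic rides the VPN
    depends_on:
      - gluetun
```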


Thanks! Yeah, figuring out how to get gluetun working properly with a vpn and downloaders was a chore and a half. Glad I got that sorted, now I feel pretty confident I can punch a mobile app through into the network pretty easily.


Personal music servarr with a mobile app?
I've been on Tidal for years, but it's frustrating to use for lots of reasons (they only pay their artists slightly better than Spotify, streaming services are flaky, it works poorly with my DLNA home speakers). I'm looking for something I can selfhost with the following features, and I would appreciate any suggestions in this direction:

- Integrates with downloading services (nzbget and qbittorrent; or better yet, prowlarr).
- Has a suggestions/radio/mix feature, or integrates well with something that does. I currently use jellyseerr for other kinds of media, so something in that vein.
- Has a mobile app which lets me download all the tracks I want, or integrates with one that does. Big bonus points if the mobile app can play to DLNA speakers.

A bit about my lab:

- Proxmox-based, lots of VMs and containers on 2 different cluster nodes. Lots of underprovisioned RAM in the cluster. Nodes run Fedora and I'm partial to quadlets, but I can convert anything to a quadlet if I need to.
- Airvpn port tunneling is available to me.

TIA!

See, that’s a cool symbol. Make the right-angle part of that symbol into a snake and you’re done. 1000% better than the AI’s mess.


When we say LLMs don’t know or understand anything, this is what we mean. This is a perfect example of an “AI” just not having any idea what it’s doing.

  • I’ll start with a bit of praise: It does do a fairly good job of decomposing the elements of Python and the actuary profession into bits that would be representative of those realms.

But:

  • In the text version of the response, there are already far too many elements for a good tattoo, demonstrating it doesn’t understand tattoo design or even just design
  • In the drawn version, the design uses big blocks of color with no detail, which (even if they looked good on a white background, and they don’t) would look like shit inked on someone’s skin. So again, no understanding of tattoo art.
  • It produces a “simplified version” of the python logo. I assume those elements are the blue and yellow hexagons, which are at least the correct colors. But it doesn’t understand that, for this to be PART OF THE SAME DESIGN, they must be visually connected, not just near each other. It also doesn’t understand that the design is more like a plus; nor that the design is composed of two snakes; nor that the Python logo is ALREADY VERY SIMPLE, nor that the logo, lacking snakes, loses any meaning in its role of representing Python.
  • It says there’s a briefcase and glasses in there. Maybe the brown rectangle? Or is the gray rectangle meant to be a briefcase lying on its side so the handle is visible? No understanding here of how humans process visual information, or what makes a visual representation recognizable to a human brain.
  • Math stuff can be very visually interesting. Lots of mathematical constructs have compelling visuals that go with them. A competent designer could even tie them into the Python stuff in a unified way; like, imagine a bar graph where the bars were snakes, twining around each other in a double helix. You got math, you got Python, you got data analysis. None of this ties together, or is even made to look good on its own. No understanding of what makes something interesting.
  • Everything is just randomly scattered. Once again, no understanding of what design is.

AIs do not understand anything. They just regurgitate in ways that the algorithm chooses. There’s no attempt to make the algorithm right, or smart, or relevant, or anything except an algorithm that’s just mashing up strings and vectors.


Fair enough. I guess I’m not saying “there’s no point in --” because I know people do these things. (Man, I wish I had the attention span to read as much as you.) I’m just saying I’m not going to host something just to keep track with no recommendations or interaction because that doesn’t click for me personally.


Same. I don’t really see the point of tracking what you read if you’re not interested in connecting it to other people’s readings. Storygraph has been great.


Sorry, I didn’t know we might be hurting the LLM’s feelings.

Seriously, why be an apologist for the software? There’s no effective difference between blaming the technology and blaming the companies who are using it uncritically. I could just as easily be an apologist for the company: not their fault they’re using software they were told would produce accurate information out of nonsense on the Internet.

Neither the tech nor the corps deploying it are blameless here. I’m well aware that an algorithm only does exactly what it’s told to do, but the people who made it are also lying to us about it.


Leaving my chicken for 10 minutes near a window on a warm summer day and then digging in


I guess I’m not surprised that programmers don’t know how to follow meme standards.

The three panels following the first one are supposed to be helping the first one.


These bugs are always opened by IC developers who need help and have little agency. So,

Closed “won’t fix” with note

Contributions accepted if you want to deliver the fix. If you are not in a position to dictate to your employer how your time is spent (and, if so, I understand your problem) please report to your manager that you will be unable to use this software without contributing the fix. Alternately, switch to [competitor]. Your manager should understand that the cost to the company of contributing a fix for this bug is less than the switching cost for [competitor]. I wish you luck, either way.

And then make the above text a template response, so you don’t have to spend your time typing it more than once.


So, I’m curious.

What do you think happens in the infinite loop that “runs you” moment to moment? Passing the same instance of consciousness to itself, over and over?

Consciousness isn’t an instance. It isn’t static; it’s a constantly self-modifying waveform that remembers bits about its former self from moment to moment.

You can upload it without destroying the original if you can find a way for it to meaningfully interact with processing architecture and media that are digital in nature; and if you can do that without shutting you off. Here’s the kinky part: We can already do this. You can make a device that takes a brain signal and stimulates a remote device; and you can stimulate a brain with a digital signal. Set it up for feedback in a manner similar to the ongoing continuous feedback of our neural structures and you have now extended yourself into a digital device in a meaningful way.

Then you just keep adding to that architecture gradually, and gradually peeling away redundant bits of the original brain hardware, until most or all of you is being kept alive in the digital device instead of the meat body. To you, it’s continuous and it’s still you on the other end. Tada, consciousness uploaded.


Well, I think they should revoke that guy’s PGP key


I feel like when you title your post “Hilarious”, you’re being sarcastic. Are you, perhaps, aware that this is actually pretty unfunny? Yet you posted it here nonetheless.



So in the, let’s say, top one-third tier of the options, something like this: https://www.amazon.com/Beelink-SER5-Mini-PC-Desktop-Computer/dp/B0C286SR8V/ref=pd_ci_mcx_pspc_dp_d_2_i_1?pd_rd_i=B0C286SR8V

Or, similarly, this, which is my current mediaserver: https://www.amazon.com/gp/product/B0C1X191NR/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1

I went with the second one, for the extra RAM. Anecdotally, some people think beelink is more reliable, but this is not a universal opinion, and my experience has been that the minisforum is extremely reliable.

If you search for similar models you can find options both up and down depending on your actual budget. You probably don’t want to swap components on these things, apart from maybe putting in a bigger m.2 nvme.



Samwise was a fellow who was always there with a ready smile and a network security recommendation


Well, people use ansible for a wide variety of things, so there’s no straightforward answer. It’s a Python program; it can in theory do anything, and you’ll find people trying to do anything with it. That said, some common ways to replace it include:

  • You need terraform or pulumi or something for provisioning infrastructure anyway, so a ton of stuff can be done that way instead of using ansible. Infra tools aren’t really the same thing, but there are definitely a few neat tricks you can do with them that might save you from reaching for ansible.
  • Kubernetes + helm is a big bear to wrestle, but if your company is also a big bear, it’s worth doing. K8s will also solve a lot of the same problems as ansible in a more maintainable way.
  • Containerization of components is great even if you don’t use kubernetes.
  • If you’re working at the VM level instead of the container level, cloud-init can let you take your generic multipurpose image and make it configure itself into whatever you need at boot (see the sketch after this list). Teams sometimes use ansible in the cloud-init architecture, but it’s usually doing only a tiny amount of localhost work and no dynamic inventory in that role, so it’s a lot nicer there.
  • Maybe just write a Python program or even a shell script? If your team has development skills at all, a simple bespoke tool to solve a specific problem can be way nicer.
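
The cloud-init option, for instance, can be this small. A minimal sketch; the package list, file path, and command are illustrative, not a recommendation:

```yaml
#cloud-config
# A generic image configures itself into a specific box at first boot.
packages:
  - podman
  - git
write_files:
  - path: /etc/myapp.env        # hypothetical app config
    content: |
      ROLE=media-server
runcmd:
  - systemctl enable --now podman.socket
```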

Really, all of these ansible problems have workarounds, but they’re constantly biting you and slowing down development and requiring people to be constantly trained on the gotchas. So it’s not that you can’t make it work; it’s that the cost of keeping it working eats away at all the productive things you could be doing, and that problem accelerates.

The last bullet is perhaps unfair; any decent system would be a maintainable system, and any unmaintainable system becomes less maintainable the bigger your investment in it. Still, it’s why I urge teams to stop using it as soon as they can, because the problem only gets worse.


Sure, I mean, we could talk about

  • dynamic inventory on AWS means the ansible interpreter will end up with three completely separate sets of hostnames for your architecture, not even including the actual DNS name. If you also need dynamic inventory on GCP, that’s three more completely different sets of hostnames, i.e. they are derived from different properties of the instances than the AWS names.
  • Btw, those names are exposed to the ansible runtime graph under different variables (ansible_inventory vs. some other thing, based on who even fuckin knows), and sometimes the way you access the name will completely change from one role to the next.
  • ansible-vault’s semantics for when things can and can’t be decrypted lead to complete nonsense like a yaml file with normal contents where individual strings are encrypted and base64-encoded inline within the yaml, and others are not (see the sketch after this list). This syntax doesn’t work everywhere, and the opaque contents of the encrypted strings can sometimes be treated as traversable yaml and sometimes cannot.
  • ansible uses the system python interpreter, so if you need it to do anything that uses a different Python interpreter (because that’s where your apps are installed), you have to force it to switch back and forth between interpreters. Also, the python interpreter setting in ansible is effectively global, meaning you could end up leaking the wrong interpreter into the role that follows the one you were trying to tweak, causing almost invisible problems.
  • ansible output and error reporting is just a goddamn mess. I mean look at this shit. Care to guess which one of those gives you a stream which is parseable as json? Just kidding, none of them do, because ansible always prefixes each line.
  • Tags are a joke. Do you want to run just part of a playbook? --start-at-task. But oops: because not every single task in your playbook is idempotent, that will not work, ever, because something was supposed to happen earlier on that didn’t. So if you start at a particular task, or run only the tasks that have a particular tag, your playbook will fail. Or worse, it will work, but it will work completely differently than in production because of some value that leaked into the role you were skipping into.
  • Last but not least, using ansible in production means your engineers will keep building onto it, making it more and more complex, “just one more task bro”. The bigger it gets, the more fragile it gets, and the more all of these problems rear their heads.
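
For the vault bullet, this is the kind of file I mean: one key is plain yaml, its sibling is an inline-encrypted blob. The ciphertext here is truncated and made up for illustration:

```yaml
db_user: myapp                 # plain string, works everywhere
db_password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365396662343061393464336163383764373764613633…
```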

Heh. I am, as I said, a cloud sw eng, which is why I would never touch any solution that mentioned ansible, outside of the work I am required to do professionally. Too many scars. It’s like owning a pet raccoon, you can maybe get it to do clever things if you give it enough treats, but it will eventually kill your dog.


I can answer this one, but mainly only in reference to the other popular solutions:

  • nginx. Solid, reliable, uncomplicated, but. Reverse proxy semantics have a weird dependency on manually setting up a dns resolver (why??), and you have to restart the instance if your upstream gets replaced (see the sketch after this list).
  • traefik. I am literally a cloud software engineer, I’ve been doing Linux networking since 1994 and I’ve made 3 separate attempts to configure traefik to work according to its promises. It has never worked correctly. Traefik’s main selling point to me is its automatic docker proxying via labels, but this doesn’t even help you if you also have multiple VMs. Basically a non-starter due to poor docs and complexity.
  • caddy. Solid, reliable, uncomplicated. It will do acme cert provisioning out of the box for you if you want (I don’t use that feature because I have a wildcard cert, but it seems nice). Also doesn’t suffer from the problems I’ve listed above.
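
To be concrete about the nginx complaint, this is the workaround dance it forces on you. A sketch assuming nginx sits next to docker; 127.0.0.11 is docker’s embedded DNS, so adjust if your layout differs:

```nginx
# Without a variable, nginx resolves "myapp" once at startup and caches
# it forever, so replacing the container breaks the proxy until restart.
resolver 127.0.0.11 valid=30s;

location / {
    set $upstream_app http://myapp:8080;   # variable forces re-resolution
    proxy_pass $upstream_app;
}
```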


one three seven seven
vs
one three three seven

And you’ll notice that the entry under “Torrents” does not actually match the name you typed into your description text, as yours has two sevens. Crafty indeed, and they could stand to make this a little more obvious in the document.



Does that make it not a substantive complaint about nextcloud, if it can’t run well in docker?

I have a dozen apps all running perfectly happily in Docker; I don’t see why Nextcloud should get a pass for this.