Podman is not yet ready for mainstream, in my experience
My experience varies wildly from yours, so please don’t take this bit as gospel.
I've yet to find a container that doesn't work perfectly well in podman. The command-line options aren't always identical to docker's, though. Most issues I've found with running containers boil down to things that would be equally a problem in docker. A sample:
And that's it. I generally run things once from the podman command line, then use podlet to create a quadlet out of that configuration, something you can't do with docker. If you're having any trouble running containers under podman, try the --privileged shortcut, confirm that it works, and then double back if you decide you really need rootless.
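As a rough sketch of that workflow (the image, ports, and paths here are just placeholders, and I'm going from memory on podlet's invocation):

```
# run it once by hand until the options are right
podman run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media:Z \
  docker.io/jellyfin/jellyfin:latest

# hand the same arguments to podlet to generate a quadlet unit
podlet podman run --name jellyfin \
  -p 8096:8096 \
  -v /srv/media:/media:Z \
  docker.io/jellyfin/jellyfin:latest \
  > ~/.config/containers/systemd/jellyfin.container

# let the quadlet generator pick it up; from here on it's just a service
systemctl --user daemon-reload
systemctl --user start jellyfin.service
```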
I haven’t deployed Cloudflare but I’ve deployed Tailscale, which has many similarities to the CF tunnel.
I assume you're talking about speed/performance here. The overhead is mostly paid once, when the connection is established, and it's not much. In the case of Tailscale there's additional wireguard encryption overhead on active connections, but it remains fast enough for high-bandwidth video streams. (I download torrents over wireguard, and they download much faster than realtime.) Cloudflare's solution only adds encryption, in the form of TLS, to their edge. Everything these days uses TLS; you don't have to sweat that performance-wise.
(You might want to sweat a little over the fact that Cloudflare terminates TLS itself, meaning your data transits its network without encryption. Depending on your use case that might be okay.)
Performance-wise, vaultwarden won't care at all. But please note the above caveat about Cloudflare, and be sure you really want your vaultwarden TLS terminated by Cloudflare.
There’s no conflict between the two technologies. A reverse proxy like nginx or caddy can run quite happily inside your network, fronting all of your homelab applications; this is how I do it, with caddy. Think of a reverse proxy as just a special website that branches out to every other website. With that model in mind, the tunnel is providing access to the reverse proxy, which is providing access to everything else on its own. This is what I’m doing with tailscale and caddy.
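To make the "special website that branches out" model concrete, here's a minimal caddy sketch (hostnames and ports are made up, and this assumes caddy is installed as a system service; caddy will try to fetch certificates for whatever names you give it):

```
# one front door, many internal services
sudo tee /etc/caddy/Caddyfile >/dev/null <<'EOF'
vault.example.com {
    reverse_proxy localhost:8080
}
media.example.com {
    reverse_proxy localhost:8096
}
EOF
sudo systemctl reload caddy
```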
Consider tailscale? Especially if you’re using vaultwarden from outside your home network. There are ways to set it up like cloudflare, but the usual way is to install tailscale on the devices you are going to use to access your network. Either way it’s fully encrypted in transit through tailscale’s network.
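The usual setup really is about this small (the node name is a placeholder, and this assumes Linux on both ends with MagicDNS enabled):

```
# on the vaultwarden box and on each device you'll connect from
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# the box is now reachable over the tailnet, wireguard-encrypted
tailscale status     # lists every node and its tailnet address
ping vaultbox        # MagicDNS name for the server
```

One note: vaultwarden's web vault wants HTTPS regardless, so you'll still want a reverse proxy like caddy in front of it even inside the tailnet.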
Some troubleshooting thoughts:
What do you mean when you say SSH is "down"?
Knowing which one of these it is can give you a lot more information about what's wrong (there's a quick diagnostic sketch after the list):
- System can't get past initial boot = maybe your NAS is unplugged? Maybe your home DNS cache is down?
- Connection refused = either fail2ban, or possibly your home IP has moved and you're trying to connect to somebody else's computer (nginx is very popular after all; it's not impossible somebody else at your ISP has it running). This can also be a port forwarding failure = something's wrong with your router.
- Connection succeeded + closed is similar to "can't get past initial boot".
- Auth rejected might give you a fallback option if you can figure out a default username/password, although you should hope that's not the case, because it means anyone else can also get in while your system is in fallback.
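A quick way to tell those cases apart from where you're sitting (hostname and port are placeholders):

```
# does the name still resolve, and to the IP you expect?
dig +short home.example.com

# is anything answering on the SSH port at all?
nc -vz home.example.com 22

# what does the SSH handshake itself say?
ssh -v you@home.example.com
```

A timeout, a "connection refused", a "connection closed", and an auth failure each point at a different item in the list above.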
Very few of these things are actually fixable remotely, btw. I suggest having your sister unplug and replug everything related to your setup, one device at a time: internet router, Raspberry Pi, NAS, your VM host, etc. Make sure to give each one a minute to cool down. Hardware, particularly cheap hardware, tends to fail when it gets hot, that can take a while to happen, and, well, it's been hot.
Here are a few things with a high likelihood of failing when you're away from home:
Gluetun is essentially a wrapper around wireguard or openvpn that greatly simplifies setup and configuration.
I have a VM that runs wireguard to airvpn inside a gluetun container. Then you share that container's network with a qbittorrent container (or your torrent client of choice) and an nzbget container (or your nzb downloader of choice). Tada: your downloaders are VPN'd forever.
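Sketched with podman (provider, keys, and images are placeholders; gluetun's docs have the docker equivalent):

```
# gluetun owns the network namespace and the VPN tunnel
podman run -d --name gluetun \
  --cap-add NET_ADMIN \
  --device /dev/net/tun \
  -e VPN_SERVICE_PROVIDER=airvpn \
  -e VPN_TYPE=wireguard \
  -e WIREGUARD_PRIVATE_KEY=<your key> \
  -e WIREGUARD_ADDRESSES=<your assigned address> \
  -p 8080:8080 \
  docker.io/qmcgaw/gluetun

# the downloader joins gluetun's network namespace, so its traffic
# goes through the VPN or nowhere at all; note the web UI port is
# published on gluetun, not here
podman run -d --name qbittorrent \
  --network container:gluetun \
  docker.io/linuxserver/qbittorrent
```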
When we say LLMs don’t know or understand anything, this is what we mean. This is a perfect example of an “AI” just not having any idea what it’s doing.
But:
AIs do not understand anything. They just regurgitate in ways that the algorithm chooses. There’s no attempt to make the algorithm right, or smart, or relevant, or anything except an algorithm that’s just mashing up strings and vectors.
Fair enough. I guess I'm not saying "there's no point in --", because I know people do these things. (Man, I wish I had the attention span to read as much as you.) I'm just saying I'm not going to host something just to keep track, with no recommendations or interaction, because that doesn't click for me personally.
Sorry, I didn’t know we might be hurting the LLM’s feelings.
Seriously, why be an apologist for the software? There’s no effective difference between blaming the technology and blaming the companies who are using it uncritically. I could just as easily be an apologist for the company: not their fault they’re using software they were told would produce accurate information out of nonsense on the Internet.
Neither the tech nor the corps deploying it are blameless here. I'm well aware that an algorithm only does exactly what it's told to do, but the people who made it are also lying to us about it.
These bugs are always opened by IC developers who need help and have little agency. So:
Close it "won't fix", with a note:
Contributions accepted if you want to deliver the fix. If you are not in a position to dictate to your employer how your time is spent (and, if so, I understand your problem) please report to your manager that you will be unable to use this software without contributing the fix. Alternately, switch to [competitor]. Your manager should understand that the cost to the company of contributing a fix for this bug is less than the switching cost for [competitor]. I wish you luck, either way.
And then make the above text a template response, so you don’t have to spend your time typing it more than once.
So, I’m curious.
What do you think happens in the infinite loop that “runs you” moment to moment? Passing the same instance of consciousness to itself, over and over?
Consciousness isn't an instance. It isn't static; it's a constantly self-modifying waveform that remembers bits about its former self from moment to moment.
You can upload it without destroying the original if you can find a way for it to meaningfully interact with processing architecture and media that are digital in nature; and if you can do that without shutting you off. Here’s the kinky part: We can already do this. You can make a device that takes a brain signal and stimulates a remote device; and you can stimulate a brain with a digital signal. Set it up for feedback in a manner similar to the ongoing continuous feedback of our neural structures and you have now extended yourself into a digital device in a meaningful way.
Then you just keep adding to that architecture gradually, and gradually peeling away redundant bits of the original brain hardware, until most or all of you is being kept alive in the digital device instead of the meat body. To you, it’s continuous and it’s still you on the other end. Tada, consciousness uploaded.
So in the, let’s say, top one-third tier of the options, something like this: https://www.amazon.com/Beelink-SER5-Mini-PC-Desktop-Computer/dp/B0C286SR8V/ref=pd_ci_mcx_pspc_dp_d_2_i_1?pd_rd_i=B0C286SR8V
Or, similarly, this, which is my current mediaserver: https://www.amazon.com/gp/product/B0C1X191NR/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1
I went with the second one, for the extra RAM. Anecdotally, some people think Beelink is more reliable, but that's not a universal opinion, and my experience has been that the Minisforum is extremely reliable.
If you search for similar machines you can find options both up and down from there, depending on your actual budget. You probably don't want to swap components in these things, apart from maybe putting in a bigger m.2 nvme.
Well, people use ansible for a wide variety of things, so there's no straightforward answer. It's a Python program; it can in theory do anything, and you'll find people trying to do anything with it. That said, some common ways to replace it include
Really, all of these have solutions, but they're constantly biting you, slowing down development, and requiring people to be trained over and over on the gotchas. So it's not that you can't make it work; it's that the cost of keeping it working eats away at all the productive things you could be doing, and that problem accelerates.
The last bullet is perhaps unfair; any decent system would be a maintainable system, and any unmaintainable system becomes less maintainable the bigger your investment in it. Still, it’s why I urge teams to stop using it as soon as they can, because the problem only gets worse.
Sure, I mean, we could talk about `ansible_inventory` vs some other thing, based on who even fuckin knows, but sometimes the way you access the name will completely change from one role to the next.

Heh. I am, as I said, a cloud sw eng, which is why I would never touch any solution that mentioned ansible outside of the work I am required to do professionally. Too many scars. It's like owning a pet raccoon: you can maybe get it to do clever things if you give it enough treats, but it will eventually kill your dog.
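For anyone who hasn't hit this, the kind of thing I mean, sketched with ad-hoc commands (the inventory path and host pattern are hypothetical): there are several equally "correct" names for a machine, and every role seems to pick a different one.

```
ansible all -i inventory.ini -m debug -a "var=inventory_hostname"   # the name from your inventory
ansible all -i inventory.ini -m setup -a "filter=ansible_hostname"  # the name the host calls itself
ansible all -i inventory.ini -m setup -a "filter=ansible_fqdn"      # and then there's the FQDN
```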
I can answer this one, but mainly only in reference to the other popular solutions:
I’ve been eyeing Wizard with a Gun for a minute, maybe this is a sign
I like how menacing this headline is.