I use Backblaze B2, but stored in an encrypted Restic container, set up using this guide:
Restic has been great for automating backups, and even letting me mount the encrypted storage to grab individual files. I like doing it this way since I don’t have to trust that Backblaze isn’t reading my data; I know for sure that they can’t.
Performance of storage that is both remote and encrypted is about what you would expect, but I don’t need access to the data unless something bad happens.
Are there any “open” solutions to mesh networking that can compare to TP-Link Omada? I don’t think any open source hardware or software can come close, especially not for the newer Wi-Fi standards.
I haven’t bought them yet, but I’m seriously thinking about some Omadas. I imagine I can prevent them from phoning home, and the management software can run locally in a Docker container. Running it like that would be good enough for me even though they’re not “open.”
I’m planning a rework of my home Wi-Fi, and my current plan is an OPNsense box from Protectli, and a few EAP772s:
https://www.tp-link.com/us/business-networking/omada-wifi-ceiling-mount/eap772/
If there’s something comparable/better that’s more of an open ecosystem, you definitely have my attention while I’m shopping around for different options.
Definitely recommend Motrix:
If the Google download link supports resuming, the download should be fairly resistant to interruptions. If it doesn’t, this might not help much, but you should still use this instead of just a browser.
I haven’t tried to download a Google Takeout archive, so you might need to get clever with how you add the download link to it.
If you just can’t get it to work, you can try getting the browser extension to automatically send all downloads to Motrix. There is some setup required, though:
https://github.com/gautamkrishnar/motrix-webextension
Good luck!
Before it got enshittified with an update a few years ago, I used the RealVNC Android app to connect to a few of my own VNC servers. Wasn’t interested in any of the fancy features, I just wanted a good VNC app.
Now I use AVNC. It’s solid, performs better than RealVNC used to, and it’s open source! You can get it on F-Droid.
It should still work!
I only go back and make changes to LED if something breaks with a major Lemmy update, but Lemmy hasn’t had a major update since January. Lemmy v0.19.4 isn’t released yet, but when it is, I’ll make sure the deployment is up to date.
Note that it does not have any advanced features that a major instance might want, such as storing images on S3, exporting data, or image moderation. If you intend for your instance to grow to 100+ users, this isn’t for you. This is only intended for beginners who are overwhelmed by the other Lemmy hosting options and want an easy way to host a small single-user or few-user instance.
I’m scratching my head to think what Vultr could do better in this case.
There was substantial room for improvement in the way they spoke publicly about this issue. See my comment above.
I still don’t like how flippant they’ve been in every public communication. I read the ToS. It’s short for a ToS; everyone should read it. They claim it was taken “out of context,” but there wasn’t much context to take it out of. The ToS didn’t make the distinction they’re claiming; there was no separation of Vultr forum data from cloud service data. It was just a bad, poorly written ToS, plain and simple.
They haven’t taken an ounce of responsibility for that, and have instead placed the blame on “a Reddit post” (when this was being discussed in way more detail on other tech forums, Vultr even chimed in on LowEndTalk).
As for this:
Section 12.1(a) of our ToS, which was added in 2021, ends with “for purposes of providing the Services to you.” This is intended to make it clear that any rights referenced are solely for the purposes of providing the Services to you.
This means nothing. A simple “we are enhancing your user experience by mining your data and giving you a better quality service” would have covered them on this.
We only got an explanation behind the ToS ransom dialog after their CMO whined in a CRN article. That information should have been right in the dialog on the website.
In both places, they’ve been vague in ways that actively cause confusion, and then act offended when people interpret it incorrectly.
You are giving it the `-d` flag. `-d` means “detached.” There are logs, you are just preventing yourself from seeing them.

Replace the `-d` with an `-i` (for interactive) and try again.
Have you completed the podman rootless setup in order to be able to use it? You may need to edit `/etc/subuid` and `/etc/subgid` to get containers to run:
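For reference, those files map your user to a block of subordinate IDs, one mapping per line. A typical rootless entry looks like this (the username and range here are just examples):

```
# /etc/subuid and /etc/subgid use the same format: <user>:<start>:<count>
youruser:100000:65536
```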
More than likely, this has something to do with podman being unprivileged while the container wants to bind to port `80` (a privileged port). You may need to specify a `--userns` flag to podman.
Running in interactive mode will give you the logs you want and will hopefully point you in the right direction.
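Putting that together, the kind of invocation I mean looks something like this (the image name and port mapping are placeholders for your own):

```shell
# -i -t attach your terminal so you can actually see the container's output;
# mapping to a high host port (8080) sidesteps the privileged-port issue,
# and --userns=keep-id maps your user into the container's user namespace.
podman run -it -p 8080:80 --userns=keep-id your-image:latest
```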
Hi :)
If you’re already running an instance, you’re not going to have a good time of this on the same server unfortunately. The webserver config I ship assumes a single instance, and all of the handling assumes only one domain. You would have to basically modify my entire script to support something like this.
You can take a look at my advanced configuration page to figure out what files you can edit, but this would be a very manual process for what you want to do.
Apologies, but you would be better off deploying a new server.
In this case, it sure does sound like abuse. Considering the careful wording, combined with the seemingly kneejerk reaction of requiring authentication, there was likely illegal activity going on:
Earlier this year we saw an increase in the number of reports we received about some people using our service in ways that we cannot tolerate. To be more clear, this was not about some people merely saying things that others disliked.
Over the past several months we tried multiple strategies in order to end the violations of our terms of service. However in the end, we determined that requiring authentication was a necessary step to continue operating meet.jit.si.
It was a free, anonymous service that let people stream video and send messages. Consider for a moment if that “video” was actually non-video data encoded to be streamed through Jitsi and sent to another location. Or, consider if the video was video, but was so egregious and illegal that Jitsi had to take action. It doesn’t take a lot of thinking to imagine the kinds of activities that could have been going on.
Why is everyone up in arms about this? The abuse of their free service was rampant. This isn’t a core project change, this is just a measure to keep a version of the project up for free without completely taking it down. They don’t even have a way to monetize this. An alternative was to simply shut it down and only allow you to self host it.
I self host my Jitsi instance, but as a privacy nut, I don’t see a problem with this. Absolute privacy cannot always coexist with free anonymous services. Don’t blame Jitsi, blame the people who ruined it for everyone else.
It’s really hard to take calls to action like this seriously, when they unironically talk like this:
You cannot pass this invasive “browser check” without enabling JavaScript. This is a waste of five(or more) seconds of your valuable life.
Most of the other points are either grasping, misleading, or make the classic FOSS-centric assumption that we live in a fantasy land where all hosting is free and companies don’t need to exist.
I’m not out here trying to say Cloudflare is vital to society, but come on, these arguments are toothless.
This was fixed already, but a new release of Lutris has not been published with the fix included. The exact line in your screenshot was specifically removed in this commit:
https://github.com/lutris/lutris/commit/3b64e70e2a2a4f90e2679b12f9f2bf56cb0a5986
I’ve seen people have similar issues on my issue tracker. It turns out it was caused by Cloudflare’s JS minification (Auto Minify) or Rocket Loader being enabled. Something changed in 0.18.3 that made it incompatible with those Cloudflare features. If you use the Cloudflare proxy to serve your site, you will need to turn those off.
I plan to support this for as long as I’m using Lemmy, which should be a good while.
All the script really does is generate a docker-compose.yml stack that’s best for your desired setup. So even if I do stop supporting the script, you’re not locked into using it. You can always manage the Docker Compose stack manually and do your own updates, which is what people not using my script will have to do anyway.
Also, I don’t bake Lemmy versions directly into this script, I just pull the latest Lemmy version from GitHub and deploy that. So in theory, unless the Lemmy team changes something major, this should continue working for a long time after I stop supporting it.
If you want to be prepared, I would recommend reading up on Docker Compose and getting familiar with it!
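To sketch what that manual management looks like, these are the standard Docker Compose commands you’d run from the folder containing the generated docker-compose.yml (my script keeps it in ./live):

```shell
cd ./live
# See which services are running
docker compose -p lemmy-easy-deploy ps
# Pull newer images and recreate the stack with them
docker compose -p lemmy-easy-deploy pull
docker compose -p lemmy-easy-deploy up -d
# Tail the logs if something misbehaves
docker compose -p lemmy-easy-deploy logs -f
```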
Shameless self plug:
https://github.com/ubergeek77/Lemmy-Easy-Deploy
All you need is a server, a domain, and your DNS records set to your server’s IP address. After that, my script takes care of the rest!
Please let me know if you have any issues! I am constantly keeping this updated based on people’s feedback!
Before this week, I would have told you no. But I have big plans for the 0.18.1 update.
The Lemmy team has completely broken ARM support with seemingly no plan to support it again. They switched to a base Docker image that only supports x86_64. This is why your build fails. I still don’t understand why they would move from a multiarch image to an x86_64-only one.
I’ve been working on this for about a week, and just yesterday I finished a GitHub Actions pipeline that builds multiarch images for x64/arm/arm64. I currently have successful builds for 0.18.1-rc.2. In a future update for my script, I will have it use these, that way ARM users don’t need to compile it anymore. I just ask for a little patience, I haven’t been able to do any work on Lemmy Easy Deploy since I’ve been working on this pipeline :)
I also do want to qualify - don’t get your hopes up until you see it running for yourself. Ultimately, I am just a DevOps guy, not a Lemmy maintainer. I haven’t tested my ARM images yet, and while I did my best to get these to build properly, I can’t fix everything. If anything else breaks due to running on ARM, it will be up to the Lemmy team to fix those issues (which is not likely anytime soon, if their updated x86_64 Dockerfiles are any indication).
But, fingers crossed everything goes smoothly! Keep an eye out for an update, I’m working hard on it, hopefully I can get it out in time for 0.18.1!
EDIT:
Putting my notes on this progress here:
I haven’t actually used the embedded postfix server at all; I keep mine disabled. I only include it because it’s “included” in the official Docker deployment files, and I try to keep this deployment as close to that as possible.
I’m considering adding support for an external email service, as you mentioned, but I have nearly zero experience in using managed email services, and I’m not sure if non-technical users would be able to navigate the configuration of things I can’t do for them (i.e. on a web dashboard somewhere). And if I can’t do it for them, it means more issues for me, so I hesitate to add support for it at all.
I’d love to hear your experience in setting up sendgrid and how easy that was. And the tracking stuff you mentioned as well.
I didn’t put my actual inquiry in the comment since it would have made it too long. But I wasn’t asking them about moving to Squarespace, I was very clear that I am burning a bridge with both of them and have no interest in being a customer of either of them. I told them I’ve already moved my domains out of Google Domains, and I wanted to clarify if any historical data about me and my domains (domain ownership history, purchase history, receipts, etc) would go to Squarespace. And they replied with what I put in my comment.
If I consider their reply to me, and the stuff I’m reading in the link OP posted, this isn’t really a “transition,” Squarespace is just buying the rights to all 10M+ domains Google Domains owns. But if Google Domains doesn’t own a domain anymore, it won’t be part of that transaction.
That’s what I gathered, anyway. Hopefully they can be less ambiguous before the transaction actually happens. It will probably take the better part of a year, so there is plenty of time.
The article covers this a little bit, but I thought I’d share my email response from Google when I asked them “how can I prevent Squarespace from receiving any of my data?” They responded with:
Based on the summary you have shared, I understand that you need help with your general inquiry about the Google Domains transition to Squarespace. To answer this, if you will be transferring your domains out of Google, all of the data will also be removed. This means that once the transition between Squarespace and Google happens, your data will also be removed.
I responded to this and basically said, that wording is ambiguous. Will my data be removed before or after the transition? They replied:
I’m sorry for the confusion. To be clear, Squarespace will not receive any of your Google Domains data. Only the active domain names, excluding the domain names that have been deleted or transferred out, will be affected by the data shift to Squarespace.
So if I trust their word, it means, if I’ve already transferred out my domains (which I have), Squarespace shouldn’t receive any of my customer information, or even have a record of who I am. Hopefully that’s true.
Will you only be supporting yourself and maybe a small subset of users? If you don’t need your instance to scale, you can (shameless self plug) try my deployment script to get yourself running.
It just uses the recommended Postgres configuration as seen in the deployment files in Lemmy’s official repo. It would just be in a Docker volume on disk, so if you had thoughts of scaling in the future, and wanted to use a managed Postgres service, I would not recommend using my script.
I run an instance just for myself, and CPU resources are so low that pretty much anything you can get in the cloud will be good. Disk space is a much more important factor. In terms of just Lemmy-created data, my personal instance has stored about 6.2GB of data over 10 days; 2.4GB of this is just thumbnails. Note that this does not include other things that consume resources, such as my Docker images or my Docker build cache, which I clear manually.
So, that is roughly 640MB of new data generated per day. Your experience will vary depending on how many communities you subscribe to, but that’s a good rough estimate. Round it up to 700MB to have a safer estimate. But remember, this is with Lemmy’s current rate of activity. If the amount of posts and comments doubles, triples in the future, my storage requirements will likely go up considerably.
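As a quick sanity check on those numbers, projecting the 640MB/day estimate out to a year (just arithmetic on my rough figures above):

```shell
DAILY_MB=640   # rough per-day estimate from my instance
YEARLY_GB=$(awk -v mb="$DAILY_MB" 'BEGIN { printf "%.0f", mb * 365 / 1024 }')
echo "Projected new data per year: ~${YEARLY_GB} GB"
```

So even at today’s activity level, you’d want to budget a couple hundred GB of disk per year.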
I am genuinely not sure what long-term Lemmy maintenance looks like in terms of releasing disk space. I can clear my thumbnail data and be fine, but I wonder what’s going to happen with the postgres database. Is there some way to prune old data out of it to save space? Will my cloud storage costs become so unreasonable in a year, that I’ll have to stop hosting Lemmy? These are the questions I don’t have answers to yet.
If there is something clever you can do to plan ahead and save yourself disk space costs in the future (like, are managed Postgres services cheaper to host than on disk ones?), I’d recommend doing that.
Sorry, I don’t have access to an unRaid system to test it with.
However, I know most NAS systems at least support CLI-style Docker and Docker Compose, so if you can manage to get Docker running, it might work? The script has some Docker detection if you’re not sure.
However, I know Synology hogs ports 80 and 443. I’m not sure if unRaid is the same way. If it is, this might not be the best solution for you. But if you want to give it a shot, I do have some advanced options in my config that let you change to different ports and turn off HTTPS (so you can run a proxy in front of it). I can’t really help people who run it behind a webserver like this, but the template files in my repo can be freely modified, so you’re welcome to hack at my script any way you like to get it working!
And that is why I don’t advertise this as supporting email out of the box, and why it’s an advanced option without any support from me. The embedded postfix server is part of the official Docker Compose deployment from upstream Lemmy, and it’s part of the officially supported Ansible deployment too. Those deployment methods are what this is modeled after. That is as far as I go on email support. If upstream Lemmy started including some automatic AWS SNS configuration, I would adopt it, but they have not done so.
Everyone who has reported success to me so far is running a single-user instance for themselves. That is my target audience, and for that audience (and myself), email is not even close to being a hard requirement.
However, if you would like to improve this script by adding support for more robust and secure email systems, I would be happy if you submitted a PR to do just that :)
If you are bypassing my Caddy service, you will need to expose `lemmy-ui` as well. Look at my Caddyfile to see how things are supposed to be routed. Don’t forget the `@`-prefixed handles. Those are important.
Unfortunately, if you have a specific use case involving a webserver, Lemmy Easy Deploy may not be for you. However, you can also take a look at Lemmy’s own Docker files:
Sorry, combining this with an already-running webserver is not a use case I support for this easy deployment script. My script is intended for new deployments for people not already running servers.
The best thing you can do is change the ports in `docker-compose.yml.template`, and today I will make an update that gives you environment variables for them.
Unfortunately I do not have time to help you dig deeper into the issue, but hopefully these tips help you:

- Change the ports in `docker-compose.yml.template` to something that won’t conflict with your webserver. Take note of what port you used for `80`
- Edit `config.env` and set `CADDY_DISABLE_TLS` to `true`
Since you’re using your own webserver, doing it this way will not automatically retrieve certificates for you. Hopefully you have a system in place for that already.
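If your existing webserver happens to be nginx, the reverse-proxy side of that might look roughly like this. Note that the port (8080), the domain, and the certificate handling are all assumptions you’d adapt to your own setup:

```nginx
server {
    listen 443 ssl;
    server_name lemmy.example.com;
    # your existing certificate configuration goes here

    location / {
        # 8080 is whatever port you moved the HTTP listener to
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```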
Good luck!
I’m not sure what you mean? Most people are just self hosting instances for themselves, where email isn’t needed. My instance doesn’t have an email service.
And as I explained, if email is something you want, I have an advanced option for this. It’s not the default because there is not a public VPS host out there that lets you use port 25 without special approval.
I’ll add some better instructions for this to the readme.
You can do any Docker Compose commands by changing to the `./live` folder, then running:

`docker compose -p lemmy-easy-deploy <command>`

`<command>` can be whatever Docker Compose supports: `up`, `down`, `ps`, etc.
I don’t have config options for the ports, but you can just change them in `docker-compose.yml.template` to change what they’re listening on. As long as yourdomain.com:80 is reachable from the public internet, it shouldn’t matter what routing shenanigans are going on behind it.
I haven’t tested a local-only use case, but you can probably set these options in `config.env`:

- `LEMMY_HOSTNAME` to `localhost`
- `CADDY_DISABLE_TLS` to `true`
- `TLS_ENABLED` to `false`

This will disable any HTTPS certificate generation and only run Lemmy on port 80. I don’t know if Caddy or Lemmy will act weird if the hostname is `localhost`, but this should work for you. Let me know if it doesn’t.
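Put together, a config.env for that local-only sketch would contain (assuming these are the only options you need to change):

```
LEMMY_HOSTNAME=localhost
CADDY_DISABLE_TLS=true
TLS_ENABLED=false
```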
People need to understand what this will mean from a developer perspective before getting all up in arms. This initiative is more of a kneejerk emotional reaction than a realistic plan.
If you’re going to watch only one of these videos, watch the second one:
https://youtu.be/ioqSvLqB46Y
https://youtu.be/x3jMKeg9S-s