To err is human. To propagate errors automatically is #devops
A VPS + VPN is, I believe, the cheapest option for those services. It doesn’t have to be elaborate.
You can port-forward public VPS ports to your private addresses/ports. If you don’t want to use iptables, you can use firewalld.
The only “but” is latency. For gaming it may not perform as well as you need.
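As a concrete sketch of the firewalld approach, assuming the VPN peer sits at 10.0.0.2 (both the address and port 443 are placeholders for your setup):

```shell
# Forward public TCP port 443 on the VPS to the private peer over the VPN
firewall-cmd --zone=public --add-forward-port=port=443:proto=tcp:toport=443:toaddr=10.0.0.2
# NAT the forwarded traffic so replies come back through the VPS
firewall-cmd --zone=public --add-masquerade
# Once it works, persist the runtime config
firewall-cmd --runtime-to-permanent
```

Test it with the runtime config first; `--runtime-to-permanent` saves you from locking yourself out with a bad permanent rule.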
If your comments have been federated to other instances, they will stay there until they are deleted locally. If someone clicks on your user profile, they will get a DNS error once the domain is gone. Images in the comments pointing to your instance will be broken too. Nothing terrible actually happens.
Migrating accounts à la Mastodon is not happening in Lemmy any time soon.
My advice is: Go on and save some money.
Some security tips:
Your firewall should block everything by default; you then allow incoming and outgoing connections as you need them or when something fails.
Disable password authentication and root login in the SSH daemon.
Use fail2ban or something similar to block bots that fail to log in.
Use long random passwords for everything (e.g. databases), and keep them in a password manager. If you can remember the database password, it’s not strong enough. If you can remember the admin password for a public web service, it’s weak.
Don’t reuse passwords. Everything should have its own long random password.
.env files and other files with secrets should be readable only by their service user. chmod them to 400.
Monitor logs from time to time to see if something funny is happening.
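Two of those tips as concrete commands (`.env.demo` is just a placeholder filename):

```shell
# Secrets file readable only by its owner (mode 400)
touch .env.demo
chmod 400 .env.demo
stat -c '%a' .env.demo    # prints: 400

# A long random password you are not supposed to remember
# (32 random bytes -> a 44-character base64 string for your password manager)
openssl rand -base64 32
```

The point of `openssl rand` is that the password never passes through your head; it goes straight into the password manager.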
Random ports are easy to discover and there are tools to discover what service is behind a port.
It’s annoying for the legitimate user and easy to bypass by an actual attacker.
Also, using a random port above 1024 can itself be a security issue: any user could start listening on it if the legitimate process crashes.
See this
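A quick way to see why high ports are up for grabs: any unprivileged user can bind one. This sketch drives `python3` from the shell and asks the kernel for a free port (which is always above 1024 for an ephemeral bind):

```shell
# No root needed: bind an ephemeral port as a regular user.
# If a service crashes, another local user could claim its high port like this.
python3 - <<'EOF'
import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))          # port 0 = let the kernel pick a free port
port = s.getsockname()[1]
print("bound" if port > 1024 else "privileged")
EOF
```

Binding a port below 1024 would instead fail with “Permission denied” for a non-root user, which is exactly the protection random high ports give up.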
Wow! This is exactly what I needed, although I didn’t exactly ask for it.
Thank you very much
It’s not a good idea to let children go to whatever part of the city they want, especially the no-go zones.
The Internet should be treated like the streets. If you trust a teenager to go outside with certain restrictions on time and place, the same should apply to the Internet.
But a minor who can barely read shouldn’t be out in the streets alone all day. The same goes for the Internet; similar dangers are involved.
You could run one PostgreSQL server per region and then use Bucardo to synchronize them.
I’ve never done this in production, so take my advice with a grain of salt.
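From memory (so verify against the Bucardo docs before trusting it), the multi-source setup looks roughly like this; the database names and hostnames are placeholders:

```shell
# Register each region's PostgreSQL server with Bucardo
bucardo add db east dbname=app host=pg-east.example.com
bucardo add db west dbname=app host=pg-west.example.com

# Put the application's tables in a replication group
bucardo add all tables db=east relgroup=app_tables

# Sync in both directions: both databases act as sources
bucardo add sync app_sync relgroup=app_tables dbs=east:source,west:source
bucardo start
```

Multi-source (master-master) syncs mean you also have to think about conflict resolution when the same row changes in both regions.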
I don’t know of any product that matches your requirements.
If I had to deal with that today I’d buy a Raspberry Pi, a USB SIM card dongle, and a Raspberry Pi HAT with a GPS receiver.
You can write a small API that listens to the Raspberry Pis, which periodically send their positions, and saves them to a database.
But it’s quite a large project. There are a lot of aspects to consider: the GUI, security, batteries, and a way to attach it to an animal without it getting lost or destroyed.
Sorry for not giving a useful answer lol. If you come up with an actual solution I’ll be glad to hear it, so I can track my cats in case they get lost.
In Chile, I recall Microsoft sending a notification to my former workplace because someone used torrents to download a game from inside the company network. That person didn’t notice that all traffic was being routed through the company’s VPN, hosted in MS Azure.
ISPs don’t give a shit. The government has laws against piracy that are never applied (you know: South America, the lawlessness). But gringo companies do care.
My advice is to keep Google, MS, and the rest of big tech away from your pirate activities. They may suspend your services or notify some local authority.
Use a different browser or machine for your big tech interactions, and you’ll be fine.
Edit: typos.
Are you using Docker Desktop? It uses a headless virtual machine inside the host, so connecting to the host is tricky.
You may use the hostname host.docker.internal from the container to access the host.
edit: link to the docs https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
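As a sketch (the port 8000 and the service behind it are placeholders): Docker Desktop resolves `host.docker.internal` automatically, while on plain Linux Docker Engine you have to map it yourself with `host-gateway`:

```shell
# Works out of the box on Docker Desktop; on Linux Engine the --add-host
# mapping below makes host.docker.internal resolve to the host gateway.
docker run --rm --add-host=host.docker.internal:host-gateway \
  curlimages/curl curl -s http://host.docker.internal:8000/
```

The `--add-host=host.docker.internal:host-gateway` flag needs Docker 20.10 or later.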
Kubernetes is useful if you have gone full cattle over pets, and that is very uncommon in home setups. If you only own one or two small machines you cannot destroy infra easily in a “cattle” way, and the bloatware that comes with Kubernetes doesn’t help either.
In homelabs and home servers the pros of Kubernetes (high availability, auto-scaling, GitOps integrations, etc.) are not very useful. Why would you need autoscaling and HA for an SFTP server used only by you? Instead, you write a docker-compose.yml and call it a day.
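For the SFTP example, the whole “deployment” can be one compose file. This sketch writes one from the shell, using the community `atmoz/sftp` image as an illustrative choice (the user, password, and port are placeholders):

```shell
# A single compose file instead of a cluster
cat > docker-compose.yml <<'EOF'
services:
  sftp:
    image: atmoz/sftp            # example SFTP image; pick whatever you trust
    ports:
      - "2222:22"                # SFTP reachable on host port 2222
    command: alice:change_me:1001   # user:password:uid
EOF
# then: docker compose up -d
```

That’s the entire stack: no control plane, no etcd, no node pools.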
It looks like the system is thrashing, given the high disk usage and the very low amount of physical memory available right before the incident.
Look at what dmesg says. Maybe you’ll see some OOM errors.
The solution, I believe, is to limit the amount of resources your services can use: in their configs, by putting them inside containers with a limited amount of memory, or by migrating one of the services to another machine.
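The container option is the easiest one-liner; a sketch, assuming the service runs under Docker (`my-service:latest` and the 512m cap are placeholders):

```shell
# Cap the container's memory so the kernel's OOM killer targets only this
# container instead of taking down the whole host.
# Setting --memory-swap equal to --memory also disables swap for it,
# which avoids the thrashing in the first place.
docker run --rm --memory=512m --memory-swap=512m my-service:latest
```

In a compose file the equivalent keys are `mem_limit` and `memswap_limit`.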
A possible attack from an untrusted client is to create a lot of VMs in a short period of time.
1440 VMs running for a minute cost the same as a single one running for a day. 43200 VMs running for a minute cost the same as a single one running for a month.
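That arithmetic is easy to verify: a day has 60 × 24 minutes, and a 30-day month has 30 times that.

```shell
# Minutes in a day and in a 30-day month:
echo $((60 * 24))        # 1440
echo $((60 * 24 * 30))   # 43200
```

So with pure per-minute billing, a burst of 43200 one-minute VMs costs no more than one VM-month, which is why a hard cap on concurrent VMs matters.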
Therefore attacks are kinda cheap, especially if you’re being paid by the competition.
So, for an untrusted client, the best approach is to limit the maximum number of VMs she can create.
AWS does something similar. I recall something like 20 VMs as the limit for a new client.
Edit: Here are AWS docs about that: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html
yay! I can’t wait to have a virtual machine with Windows and Chrome just to get an appointment for public services. It will be nice when other OSes and browsers are only useful for posting memes. I do miss the days when I needed IE, because my shithole country made a lot of public stuff compatible only with it.
/s
On a completely unrelated side note: I like to see parallels between the SOLID principles of OOP development and system administration.
A container may have one responsibility. Or a service config (like nginx) may be closed to modification but open to extension, to avoid breaking some automated client elsewhere, etc., etc.
Sometimes I like to think about system administration as a kind of very high-level development.
To mods: I have no problem deleting this comment if it doesn’t fit this community.
In my opinion, for home selfhosted stuff you don’t have to go for complex solutions. In the industry, the problem is that secrets need to be served to different systems, by different people, with some kind of audit log. Unless you are working with lots of people, environment variables are OK. Your GitHub/GitLab may have all the scripts with variables, and your disk may have a .env file with mode 400. If each machine or container has a single responsibility, no secrets should leak between them.
For example, let’s say your wordpress instance gets pwned. It should only have the secrets it needs (like its db credentials), so your wikimedia instance is still fine.
Do you really need the RAID online all the time? If you can afford to shut it down for a few hours, it is far less work to do a backup and then build a new RAID with your SSDs.
I’m not sure the RAID controller will like two different kinds of drives. I’d check the docs to see if they say anything about it.
me before reading this: I know the basics of CSS.
me after reading this: I know nothing about CSS.