Just some Internet guy

He/him/them 🏳️‍🌈

  • 0 Posts
  • 242 Comments
Joined 2Y ago
Cake day: Jun 25, 2023


A lot of those identify as Christian because of cultural heritage and because it’s the “not some brown people’s religion”, but are non-practicing or straight up non-believers otherwise. Those that do practice maybe go to church once a year for the Christmas stuff.

The churches are packed with mostly tourists and the parking lot is filled with Ontario plates.

You’re just not gonna find many nutjobs here like in the rest of Canada and the US. Even my grandparents pretty much just go out of habit from the old times. I haven’t once been in a religious argument in Québec my whole life. It’s basically unavoidable in the US.

The Quiet Revolution is a fairly interesting piece of history.


A good chunk of them have already been converted into condos and shops. I even hooked up with a guy that lived in one of those.

Christianity died in the 70s in Québec; you won’t find many people under like 40 that still give a crap about religion here.


It’s not impossible. I’ve been running my own email server for about 10 years and I land in the inbox pretty much everywhere. I even emailed my work address and it went straight to the inbox. I do have the full SPF, DKIM and DMARC stuff set up, for which I get notices from several email providers of failed spoof attempts.
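
For reference, a rough sketch of what those three records look like in a zone file (the domain and DKIM selector are hypothetical placeholders; the rua address is where providers send those failed-spoof reports):

```
; SPF, DKIM and DMARC for a hypothetical example.com
example.com.                  IN TXT "v=spf1 mx -all"
mail._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<your public key>"
_dmarc.example.com.           IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```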

Takes a while and effort to gain that reputation, but it’s doable. And OVH’s IPs don’t exactly have a great reputation either. Once you’re delisted from most spam databases / old spam reputation is expired, it’s not that bad.

Although I do agree it’s possibly one of the hardest services to self host. The software to run email servers is ancient and weird, and takes a lot to set up right. If you get it wrong you relay spam and start over, it’s rough.


Ordered two drives from them, came in very well packaged and even included the PWDIS adapter. Very good deals. Could throw the box across the yard and the drives would probably survive.


As a starting point, are there any hardware recommendations for a toy home server?

Whatever you already have. An old desktop, even an old laptop (those come with a built-in battery backup!). Failing that, Raspberry Pis are pretty popular, cheap and low power, which makes them great if you’re not sure how much you want to spend.

Otherwise, ideally enough to run everything you need based on rough napkin math. Literally the only requirement is that the stuff you intend to run fits on it. For reference, my primary server which hosts my Lemmy instance (and emails and NextCloud and IRC and Matrix and Minecraft) is an old Xeon processor close to a third gen Intel i7 with 32GB of DDR3 memory, there’s 5 virtual machines on it (one of which is the Lemmy one), and it feels perfectly sufficient for my needs. I could make it work with half of that no problem. My home lab machine is my wife’s old Dell OptiPlex.

Speaking of virtual machines, you can test the waters on your regular PC by just loading whatever OS you choose into a virtual machine (libvirt if you’re on Linux, VirtualBox or VMware otherwise). Then play with it. When it works, make a snapshot. Continue playing with it, break it, revert to the last good snapshot. A real home server will basically be the same, but as a real machine that’s on 24/7. It’s also useful for testing things out as a practice run before putting them on your real server machine. It also gives you a rough idea of how much resources something uses, and you can always grow your VM until everything fits and then you know how much you need for the real thing.
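
If you go the libvirt route, that snapshot workflow is just a couple of virsh commands (the VM name here is a made-up example):

```sh
virsh snapshot-create-as homeserver-test known-good   # save the current state
# ...play with it, break something...
virsh snapshot-revert homeserver-test known-good      # roll back to the saved state
virsh snapshot-list homeserver-test                   # list your snapshots
```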

Don’t worry too much about getting it right (except the backups, get those right, and verify and test them regularly). You will get it wrong, and eventually tear it down and rebuild it better with what you learn (or want to learn). Once you gain more experience it’ll start looking more and more like a real server setup, out of your own desire and needs.


I feel like a lot of the answers in this thread are throwing out a lot of things with a lot of moving parts: Unraid, Docker, YunoHost, all that stuff. Those all still require generally knowing what the hell a Docker container is, how to use one, and such.

I wouldn’t worry about any of that and start much simpler than that: just grab any old computer you want to be your home server or rent a VPS and start messing with it. Just pick something you think would be cool to run at home. Anything you run on your personal computer you wish was up 24/7? Start with that.

Ultimately there’s no right or wrong way to do things. It’s all about that learning experience and building it up over time. You get good by trying things out, failing and learning. Don’t want to learn Linux? Put Windows on it. You’ll maybe get a lot of flak for it, but at the very least over time you’ll probably learn why people generally don’t use Windows for server stuff. Or maybe you’ll like it, that happens too.

Just pick a project and see it through to completion. Although if you start with NextCloud and expose it publicly, maybe wait until you’re more comfortable with the security aspect before you start putting copies of your taxes and personal documents on it, just in case.

What would you like to self host to get started?


Making the news and people unable to keep up is part of the strategy.


He’s going to create a crisis and make demands, threaten the tariffs until Canada makes some form of deal, postpone it, and come back with a new crisis. Rinse and repeat.


Maybe if he had an actual platform and an actual plan that’s not based entirely on undoing what Trudeau did…


but I’m curious if it’s hitting the server, then going to the router, only to be routed back to the same machine again. 10.0.0.3 is the same machine as 192.168.1.14

No, when you talk to yourself it doesn’t go out over the network. But you can always check using utilities like tracepath, traceroute and mtr. They’ll show you the exact path taken.

Technically you could make the 172.18.0.0/16 subnet accessible directly to the VPS over WireGuard and skip the double DNAT on the game server’s side but that’s about it. The extra DNAT really won’t matter at that scale though.

It’s possible to do without any connection tracking or NAT, but at the expense of significantly more complicated routing for the containers. I would only do that on a busy 10Gbit router, or if I somehow really needed the public IP of the connecting client to not get mangled. The biggest downside of your setup is that the game server will see every player as coming from 192.168.1.14 or 172.18.0.1. With the subnet routed over WireGuard, traffic would instead appear to come from the VPN IP of the VPS (guessing 10.0.0.2). It’s possible to get the real IP forwarded, but then the routing needs to be adjusted so that replies don’t go Client -> VPS -> VPN -> Game Server -> Home router -> Client.
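
A minimal sketch of what routing the subnet over WireGuard would look like, assuming the addresses from your post (VPS at 10.0.0.2, home server at 10.0.0.3, containers on 172.18.0.0/16):

```ini
# On the VPS: the container subnet lives behind the home server peer
[Peer]
PublicKey = <home server public key>
AllowedIPs = 10.0.0.3/32, 172.18.0.0/16

# The home server also needs net.ipv4.ip_forward=1 so it routes
# between wg0 and the Docker bridge.
```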


“We’re going to be demanding respect from other nations,” Trump said.

Respect needs to be mutual. He sounds like the typical asshole uncle that always acts like he’s owed respect but has never shown any in return.


LineageOS’s default music app, Twelve, supports Jellyfin as a source.


People went there days before the actual ban, but I do wonder how many more came due to those influencers. It would explain why it’s gotten kinda meh since Sunday.


Weren’t Facebook and Twitter also parroting all this, including members of Congress? Vaccine deniers were everywhere during COVID, not just on TikTok.


As someone that admins hundreds of MySQL servers at work, I’d go with PostgreSQL.


The idea that GPT has a mind and wants to self-preserve is insane. It’s still just text prediction, and all the literature it’s trained on was written by humans with a sense of self-preservation, so of course it’ll show patterns of talking about self-preservation.

It has no idea what self-preservation is, and it only even knows it’s an AI because we told it so. It doesn’t run continuously anyway; it literally shuts down after every reply, and its context is fed back in for the next query.
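
To make that concrete, here’s a sketch of what a chat “session” actually is: the client keeps the transcript and re-sends all of it every turn, and the model holds nothing in between (generate() here is a stand-in, not any real API):

```python
history = []

def generate(messages):
    # Stand-in for a stateless LLM call: the entire transcript goes in
    # every single time, and nothing persists in the model afterwards.
    return f"(reply predicted from {len(messages)} messages of context)"

def chat(user_message):
    history.append({"role": "user", "content": user_message})
    reply = generate(history)  # model "wakes up", predicts, "shuts down"
    history.append({"role": "assistant", "content": reply})
    return reply

chat("hello")
chat("do you remember what I said?")  # only because the client re-sent it
```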

I’m tired of this particular kind of AI clickbait, it needlessly scares people.


To kind of visually see it, I found this thread from a guy who took oscilloscope captures of the output of his UPS, and they’re all pseudo-sines: https://forums.anandtech.com/threads/so-i-bought-an-oscilloscope.2413789/

As you can see, the power isn’t very smooth at all. It’s good enough for a lot of use cases and for lower-end power supplies, because those just shove it into a bridge rectifier and capacitors. Higher-end power supplies have tighter margins, and are also more likely to have safety features to protect the PC, so they can go into protection mode and shut off. Bad power can mean dips in power to the system, which can cause calculation errors, which is very undesirable, especially on a server. It probably also messes with power factor correction circuits, something cheap PSUs often skip but a good high-quality one would have, and it may shut down because of that.

As you can see in those images too, it spends a significant amount of time at 0V (no power, that’s the middle of the screen) whereas a sine wave spends an infinitely short time at 0; it goes positive and then negative immediately. All the time spent at 0, you rely on big capacitors in the PSU holding enough charge to make it to the next burst of power. With a sine wave they’d hold just long enough (we’re going down to 12V and 5V from a 120/240V input, so the amount of time normally spent at or below ±12V is actually fairly short).
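
Napkin math for how short that actually is, assuming a 120V RMS sine:

```python
import math

peak = 120 * math.sqrt(2)                        # ~170V peak on a 120V RMS sine
fraction = (2 / math.pi) * math.asin(12 / peak)  # share of each cycle with |v| < 12V
print(f"{fraction:.1%}")                         # ~4.5% of the time
```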

It’s technically the same average power, so most devices don’t really care. It really depends on the design of the particular unit, some can deal with some really bad power inputs and manage just fine and some will get damaged over long term use. Old linear ones with an AC transformer on the input in particular can be unhappy because of magnetic field saturation and other crazy inductor shenanigans.

Pure sine UPSes are better because they’re basically the same as what comes out of the wall outlet. Line interactive ones are even better because they’re ready to take over the moment power goes out and exactly at the same spot in the sine wave so the jitter isn’t quite as bad during the transition. Double conversion is the top tier because they always run off the battery, so there’s no interruption for the connected computer at all. Losing power just means the battery isn’t being charged/kept topped off from the wall anymore so it starts discharging.


If you look at my username you’ll see I do run my own instance so I’ve gone through the process :)


I would probably skip Lemmy Easy Deploy and just do a regular deployment so it doesn’t mess with your existing setup. Getting it running with just Docker is not that much harder, and you just need to point your NGINX at it. Easy Deploy kind of assumes it’s got the whole machine to itself, so it’ll try to bind on the same ports as your existing NGINX, as does the official Ansible.

You really just need a Postgres instance, the backend, pictrs, the frontend and some NGINX glue to make it work. I recommend stealing the files from the official Ansible, as there are a few gotchas in the NGINX config since the frontend and backend share the same host and one is just layered on top of the other.
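
A rough sketch of that glue, assuming the default ports from the official configs (backend on 8536, lemmy-ui on 1234); the map goes in the http context:

```nginx
# Send POSTs and ActivityPub requests to the backend, everything else to the UI
map "$request_method:$http_accept" $proxpass {
    default                                    "http://127.0.0.1:1234";
    "~^POST:"                                  "http://127.0.0.1:8536";
    "~^GET:.*application/(activity|ld)\+json"  "http://127.0.0.1:8536";
}

server {
    server_name lemmy.example.com;  # hypothetical

    location / {
        proxy_pass $proxpass;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # API, media and feeds always go to the backend
    location ~ ^/(api|pictrs|feeds|nodeinfo|\.well-known) {
        proxy_pass http://127.0.0.1:8536;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```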


Hasn’t cost me a penny, hurray for unmetered bandwidth


To add: a lot of cert providers also offer ACME, so while LetsEncrypt is the biggest user of ACME, you can use the same tooling and validation methods with other vendors too.
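
With certbot, for instance, you just swap the directory URL (Buypass shown as an assumed example; some vendors also want EAB credentials via --eab-kid/--eab-hmac-key, check their docs):

```sh
certbot certonly --standalone -d example.com \
    --server https://api.buypass.com/acme/directory
```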


What often happens next is the realization that the existing system was handling far more edge cases than it initially appears. You often discover these edge cases when the new system is deployed and someone complains about their use case breaking.

The reverse is also sometimes true, and that’s when a rewrite is justifiable.

I’ve worked with many systems that piled up a ton of edge-case handling for things that are no longer possible, and it makes the code way harder to follow than it should be.

I’ve had successful rewrites that used 10x+ less code for more features and significantly better reliability, and completely eliminated many of the edge cases by design.


IMO a lot of what makes nice self-hostable software is clean and sane software in general. A lot of stuff ends up either trying to be so easy that you can’t scale it up, or so unbelievably complicated that you can’t scale it down. Don’t make me install an email server and API keys for services needed by features I won’t even use.

I don’t particularly mind needing a database and Redis and the likes, but if you need MySQL and PostgreSQL and Redis and memcached and an ElasticSearch cluster and some of it is Go, some of it is Ruby and some of it is Java with a sprinkle of someone’s erlang phase, … no, just no, screw that.

What really sucks is when Docker is used as a bandaid to hide all that insanity under the guise of easy self-hosting. It works, but it’s still a pain to maintain and debug, and it often uses way more resources than it really needs. Well-written software is flexible and sane.

My stuff at work runs equally fine locally in under a gig of RAM and barely any CPU at idle, and yet spans dozens of servers and microservices in production. That’s sane software.


I think it’s not so much that we expect everyone to host their own, but that it’s possible at all, so multiple companies can compete without having to start from scratch.

Sure there will be hobbyists that do it, but just on Lemmy, users already have the freedom of going with lemmy.ml, lemmy.world, SJW, lemm.ee and plenty more.

It’s about spreading the risk and having alternatives to run to.


You just put both in the server_name line and you’re good to go.
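
For example, with hypothetical domains:

```nginx
server {
    listen 443 ssl;
    server_name example.com www.example.com;  # both names, one server block
    # certs and locations as usual
}
```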


With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

  • Containers have their own namespaces, including a network namespace, so each container has a blank iptables ruleset just for it.
  • Traffic between containers and the outside goes through the FORWARD chain, not the INPUT/OUTPUT ones.
  • Docker adds its own rules to ensure that this works as expected.

The only thing that should be affected by the host firewall is the proxy service Docker uses to listen on a port on the host and send it to the container.

When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers, the rules just need to be in the right place to work.
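
If you want to do that, the documented place for your own rules is the DOCKER-USER chain, which sits in the forwarding path ahead of Docker’s own rules. A hedged example:

```sh
# Only allow 10.0.0.0/24 to reach a container's port 8080 via eth0.
# (The DNAT happens before DOCKER-USER, so --dport matches the
# container-side port, not the published host port.)
iptables -I DOCKER-USER -i eth0 -p tcp --dport 8080 ! -s 10.0.0.0/24 -j DROP
```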


It’s sitting at around 46GB at the moment, not too bad.

The instance is a year and a few months old, so I could probably trim the storage down a bit if needed by purging stuff older than 6 months or something.

I think it initially grows as your users table fills up and pictrs caches the profile pictures, and then it stabilizes a bit. I definitely saw much more growth initially.


I subscribe to a few more communities and my DB dump is about 3GB plain text, but same story, box sits at 5-15% most of the time.


You mean you’re not actually supposed to spend 2 hours daily unfucking everyone’s shit during the standup turn by turn?


Having the web server be able to overwrite its own app code is such a good feature for security. Very safe. Only need a path traversal exploit to backdoor config.php!


Yep, and I’d guess there’s probably a huge component of “it must be as easy as possible” because the primary target is selfhosters that don’t really even want to learn how to set up Docker containers properly.

The AIO Docker image is an abomination. The other ones are slightly more sane but they still fundamentally mix code and data in the same folder so it’s not trivial to just replace the app.

In Docker, the auto updater should be completely neutered, it’s the wrong way to update the app.

The packages in the Arch repo are legit saner than the Docker version.


I’ve heard very good things about resold HGST Helium enterprise drives; they can be found fairly cheap for what they are on eBay.

I’m looking for something from 4TB upwards. I think I remember that drives with very high capacity are more likely to fail sooner - is that correct?

4TB isn’t even close to “very high capacity” these days; there are like 32TB HDDs out there, just avoid the shingled archival drives. I believe the concern about higher-capacity drives is really a question of the maturity of the technology rather than the capacity itself: 4TB drives made today are much better than the very first 4TB drives from back when they were pushing the limits of the technology.

Backblaze has pretty good drive reviews as well, with real world failure rate data and all.


Ethernet splitter

What kind of splitter? Not a hub or switch, just a passive splitter?

Those do exist: they run two 100M links over a single cable (two pairs each), but you can’t just plug one into a router or switch and get extra ports. It still needs to terminate as two separate ports on both ends.


If you’re behind Cloudflare, don’t. Just get an origin certificate from CF; it’s a cert that CF trusts between itself and your server. By using Cloudflare you’re making Cloudflare responsible for your cert.


There’s also Cockpit if you just want a basic UI


Air Canada has everything it needs to end the strike, no need for government intervention: all they have to do is give pilots their raise.

The last thing I want is to be on a plane with a pilot who has to work a side job and is more likely to be tired and make mistakes.


No, but it does show how much capitalism relies on the absolute exploitation of the labor market, and the double standards from the US in that regard. Free market good, but only when US companies are the ones fucking everyone over.

  • US companies buying cheap stuff from China and marking it up 500%: good, American values
  • China cuts out the middleman and sells the same product for the same price they would sell it to the reseller: noooooo we can’t compete with that, China bad, it’s so unfair! Waaaaaaa

At least the EU doesn’t constantly brag about muh freedom and how the free market is the best thing ever and you’re a commie if you don’t agree that capitalism is the best.


I believe you, but I also very much believe that there are security vendors out there demonizing LE and free stuff in general. The “more expensive equals better and more serious” thinking is unfortunately still quite present, especially in big corps. Big corps also seem to like the concept of having to prove yourself with a high price of entry; they just can’t believe a tiny company could possibly have a better product.

That doesn’t make it any less ridiculous, but I believe it. I’ve definitely heard my share of “we must use $sketchyVendor because $dubiousReason”. I’ve had to install ClamAV on readonly diskless VMs at work because otherwise customers refuse to sign because “we have no security systems”. Everything has to be TLS encrypted, even if it goes to localhost. Box checkers vs common sense.


LetsEncrypt certs are DV certs. Whether you put up a TXT record for LetsEncrypt or a TXT record for a paid DigiCert cert makes no difference whatsoever.

I just checked and Shopify uses a LetsEncrypt cert, so that’s a big one that uses the plebian certs.
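
For reference, the DNS-01 validation record looks identical no matter which CA issues the challenge (hypothetical domain and token):

```
_acme-challenge.example.com. 300 IN TXT "<token provided by the CA>"
```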


Neither does Google Trust Services or DigiCert. They’re all HTTP validation on Cloudflare and we have Fortune 100 companies served with LetsEncrypt certs.

I haven’t seen an EV cert in years, browsers stopped caring ages ago. It’s all been domain validated.

LetsEncrypt publicly logs which IP requested a certificate, that’s a lot more than what regular CAs do.

I guess one more to the pile of why everyone hates Zscaler.