• 2 Posts
  • 203 Comments
Joined 1Y ago
Cake day: Jun 19, 2023


Amazing stuff. Thank you so much!


It is easier to think of the SSL termination in legs.

  1. Client to Cloudflare; if you’re behind the orange cloud, you get this for free. Don’t turn the orange cloud off unless you want direct exposure.
  2. Cloudflare to your server; use their origin cert, which is the easiest option and secure. You can even get one made specific to your subdomains, or a wildcard for your subdomain. Unless you have specific compliance needs, you shouldn’t need to turn this off, and you don’t need to roll your own cert.
  3. Your reverse proxy to your apps; honestly, it’s already on your machine, so you could use a self-signed cert if it really bothers you, but at the end of the day it’s probably not worth the hassle.

If, however, you want to directly expose your service without orange cloud (running a game server on the same subdomain for example), then you’d disable the orange cloud and do Let’s Encrypt or deploy your own certificate on your reverse proxy.
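
If the reverse proxy happens to be nginx, leg 2 with an origin certificate can be sketched roughly like this (paths, domain, and upstream port are placeholders, not anyone’s actual setup):

```
server {
    listen 443 ssl;
    server_name app.example.com;

    # Cloudflare origin certificate: trusted by Cloudflare's edge, not by browsers,
    # which is fine because browsers only ever talk to the orange-clouded edge.
    ssl_certificate     /etc/ssl/cloudflare/origin.pem;
    ssl_certificate_key /etc/ssl/cloudflare/origin.key;

    location / {
        proxy_pass http://127.0.0.1:8080;   # leg 3: plain HTTP to the app on the same box
    }
}
```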


Looking great! I think it would be amazing if there were filters for processor generation as well as form factor. Thanks for sharing this tool!


In the old days, it used to be a problem because everyone just connected their Windows 98 desktop, with all their services directly exposed to the internet, because they were on dial-up internet without the concept of a gateway that prevents the internet from accessing internal resources. Nowadays, you’re most likely behind your ISP router that doesn’t forward ports by default, and you’re only exposing the things you actually want to expose.

For things you actually want to expose, having the service on the default port is fine, and it reduces the chance that other systems fail to interact with it because they expect it on the default port. Moving them to a different port is just security through obscurity, and honestly doesn’t add much value. You can port scan the entire public IPv4 space fairly quickly and fairly cheaply. In fact, it has most likely already been mapped:

https://www.shodan.io/host/<your-ip-here>

Keeping the service up to date regularly and applying best practices around it would be much more important and beneficial. For SSH, make sure you’re using key-based authentication and have password-based authentication disabled; add fail2ban to automatically ban those trying to brute force. For Minecraft, enable online mode and whitelist-only access unless you’re running a public server for everyone.
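
As a rough sketch of what that looks like for SSH (these are standard OpenSSH and fail2ban options; exact file locations and defaults vary by distro):

```
# /etc/ssh/sshd_config: keys only, no passwords
PubkeyAuthentication yes
PasswordAuthentication no
PermitRootLogin prohibit-password

# /etc/fail2ban/jail.local: temporarily ban IPs that keep failing SSH logins
[sshd]
enabled  = true
maxretry = 5
bantime  = 3600   # seconds
```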


I’m not saying you’re wrong — I’ve even upvoted your earlier comments because I’m generally in agreement; you’re an instance admin judging by your handle, go and check the vote history yourself lol.

I’m saying people shouldn’t force their janky, unproven solo solution onto someone else who doesn’t share their level of distrust and would rather just trust the multibillion-dollar multinational corporation, when all they want is something that keeps working fine for them.


There’s always the “add more of everything so something can fail without impacting stability” aspect, and that’s great for a corporation needing the redundancy; but it’s probably prudent not to forget there’s also the “I’m interested in learning” aspect, where people run a home server to play with the software side of things.

You’re spot on in that we’d need to know what it is that OP would like to do with the system, but I’m getting the feeling that stability isn’t that high of a concern just yet.


Until the basement floods and the server goes offline for a few days; or a botched upgrade fails quietly; or an overzealous SpamAssassin configuration; etc., etc.

It sounded like they were trying to archive things from Gmail to their own server, so just cut the middleman jank out, and let the wife continue to use her Gmail as intended.


Or better yet, let her keep her Gmail. Don’t force any lab instability onto others… especially email. One lost important email (even if it’s not your fault) and you’ll never hear the end of it.


The answer depends on how you’re serving your content. Based on what you’ve described about your setup, your content is likely served over HTTP through the secured tunnel. The tunnel acts like an encrypted VPN, which allows unencrypted content to be sent securely over the wire. This means although your web server is serving unencrypted content, it gets encrypted before it goes to Cloudflare, so no one along the path could snoop on it.
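
If the tunnel is Cloudflare Tunnel (cloudflared), this is roughly what that looks like in its config; the hostname and port here are placeholders, not taken from your setup:

```
# ~/.cloudflared/config.yml
tunnel: my-tunnel
credentials-file: /root/.cloudflared/my-tunnel.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080   # plain HTTP locally; cloudflared encrypts it on the way to Cloudflare
  - service: http_status:404         # required catch-all rule
```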


No PRs means no automated tests/CI/CD, which means you’d slow down the release train. It might typically be just a quick two-minute cycle, but that one time it runs long because of a botched update from upstream means you’re never going to do that again during business hours.


Must be very unique sector. Good luck with your explorations!


I’m aware this is the selfhost community, but for a company of 20 engineers, it is probably best to use something commercial in the cloud.

Biggest pain point was for our ops guy, who constantly had to stay behind to perform upgrades and maintenance, as they couldn’t do it during business hours when the engineers are working. With a team of at least 20, scheduling downtimes could get increasingly more difficult.

It also adds an entire system to be audited by the auditors.

The self-host vs. buy-commercial decision kind of bounces back and forth. For smaller teams, fewer than 5 to 10 engineers, it might be a fun endeavour; but from that point on, until you get to mega-corp scale with a dedicated ops department maintaining your entire infrastructure, it is probably more effective to just pay for a solution from a major vendor in the cloud instead.



You really should have separate services for registration, DNS and hosting. That way you’re not held hostage by a single provider.



Are you by chance using something like Cloudflare? It may be possible that the (supposedly static) IP changed during the reboot, so your “gateway” can no longer reach your router on the old IP?

In other words: it’s always DNS?


I think it would be a good idea to take a step back and ask what it is that you’re trying to achieve.

Userbase, the service linked, is a backend-as-a-service platform that offers you authentication and a basic database you can access via their API. You’d then code your own front-end web app to interact with their service and store data there. You pay only for the storage used, per their storage tiers, which are frankly fairly priced. If that is something you need, it’s a good option, but you’d be coding the front end yourself.

If you’re only looking for authentication with OAuth, and then coding your own API backend, then something like Authentik would be a nice self-hosted authentication provider. Others that commonly get mentioned, but that I’ve got limited/no experience with, would be Keycloak or FusionAuth. Managed services here would be your Auth0, Okta, etc.

If you’ve got a specific use case in mind, then it may be a good idea to say what service you’re thinking about, and the community may be able to suggest prebuilt solutions that fit better and require less lift.


Strictly speaking, they’re leveraging free users to increase the number of domains they have under their DNS service. This gives them a larger end-user reach, as it in turn makes ISPs hit their DNS servers more frequently. The increased usage better positions them to lead peering agreement discussions with ISPs. More peering agreements lead to overall cheaper bandwidth for their CDN and faster responses, which they can use as a selling point for their enterprise clients. The benefits are pretty universal, so it’s actually a good thing for everyone all around… that is, unless you’re trying to become a competitor and get your own peering agreements set up, as it’d be quite a bit harder for you to acquire customers at the same scale/pace.


I tend to recommend sticking with more reputable providers, even if it means a couple of dollars extra on a recurring basis. Way too many kiddie hosts pop up trying to make a quick buck during spring break/summer, and then fail to provide adequate service when it actually comes time to deliver.

It may also be a good idea to check LET/WHT before committing to anything longer than a month-to-month term with a provider.


OP currently has 2 drives in their possession.

OP has confirmed they’re 12TB each, and in total there is 19TB of data across the two drives.

Assuming there is only one partition, each one might look something like this:

Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 12345678-9abc-def0-1234-56789abcdef0

Device         Start        End            Sectors        Size      Type
/dev/sda1      2048         23437499966    23437497919    12.0T     Linux filesystem

OP wants to buy a new drive (also 12TB) and make a RAID5 array without losing the existing data. Kind of madness, but it is achievable. OP buys a new drive and sets it up as such:

Device         Start        End            Sectors        Size      Type
/dev/sdc1      2048         3906252047     3906250000     2.0T      Linux RAID

Unallocated space:
3906252048      23437500000   19531247953    10.0T

Then, OP must shrink the existing partition to something smaller, say 10TB for example, and then make use of the rest of the space as part of their RAID5:

Device         Start        End            Sectors        Size      Type
/dev/sda1      2048         19531250000    19531247953    10.0T     Linux filesystem
/dev/sda2      19531250001  23437499999    3906250000     2.0T      Linux RAID

Now with the 3x 2TB partitions, they can create their RAID5 initially:

sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc1

Make an ext4 filesystem on md0, copy 4TB of data (2TB from sda1 and 2TB from sdb1) into it, and verify the RAID5 is working properly. Once OP is happy with the data on md0, they can delete the copied data from sda1 and sdb1, shrink the filesystems there (resize2fs), expand sda2 and sdb2, expand sdc1, and resize the RAID (mdadm --grow ...).
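
A rough sketch of one such cycle, with illustrative device names and sizes; every step assumes the partition table is re-laid-out carefully in between, and none of it should be run without a verified backup:

```
sudo umount /mnt/old                       # the old data filesystem (e.g. /dev/sda1) must be offline to shrink
sudo e2fsck -f /dev/sda1                   # filesystem must be clean before resize2fs will shrink it
sudo resize2fs /dev/sda1 8T                # shrink the filesystem below the new, smaller partition size
# ...shrink sda1 and grow sda2 in fdisk/parted accordingly, then once all members are larger...
sudo mdadm --grow /dev/md0 --size=max      # let the array use the now-larger component partitions
sudo resize2fs /dev/md0                    # grow the ext4 filesystem on the array to fill the new space
```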

Rinse and repeat, at the end of the process, they’d end up having all their data in the newly created md0, which is a RAID5 volume spanning across all three disks.

Hope this is clear enough and that there is no more disconnect.


Fun story but I’m most impressed with the earbud part of the story. WOW. Absolutely amazing and unexpected.


This is smart! Should help reduce the number of loops they’d need to go through and could reduce the stress on the older drives.


I’m afraid I don’t have an answer for that.

It is heavily dependent on drive speed and the number of times you’d need to repeat the cycle. Each time you copy data into the RAID, the array needs to write the data plus work out the parity; then, when you expand the array, it needs to be rebuilt, which takes more time again.

My only tangentially relatable experience at a similar scale is the expansion of my RAID6 (so two parity drives here, compared to one on yours) from 5x8TB, using 20 out of 24TB, to 8x8TB. These are shucked white-label WD Red equivalents, so 5k RPM, 256MB-cache SATA drives. Since it was a direct expansion, I didn’t need to do multiple passes of shrinking and expanding, but the expansion itself, I think, took my server a couple of days to rebuild.

Someone else mentioned you could potentially move some data into the third drive and start with a larger initial chunk… I think that could help reduce the number of passes you’d need to do as well, may be worth considering.


They’re going for RAID5, not 6, so with the third drive there’s no additional requirement.

Say, for example, they have 2x 12T drives with 10T used on each (they mentioned they’ve got 20T of data currently). They can acquire a 3rd 12T drive and create a RAID5 volume with 3x 1TB partitions, thereby giving them 2TB of space on the RAID volume. They can then copy 2TB of data into the RAID volume (1TB from each of the existing drives), verify the copy worked as intended, delete the originals outside, shrink the filesystem outside on each of the drives by 1TB, add the newly available 1TB into the RAID, rebuild the array, and rinse and repeat.

At the very end, there’d be no data left outside and the RAID volume can be expanded to the full capacity available… assuming the older drives don’t fail during this high stress maneuver.


Even if you could free up only 1GB on each of the drives, you could start the process with a RAID5 of 1GB per disk, migrate 2GB of data into it, free up 2GB on the old disks, expand the RAID, and rinse and repeat. It will take a very long time, and run a lot of risk due to the increased stress on the old drives, but it is certainly something that’s theoretically achievable.


Pretty sure UniFi Access can also control the lock mechanism they’re describing. So it’d be a nicely integrated solution.


You aren’t wrong, but that’s also the point… It makes no difference if they’re securing a VPS or their own network. In fact, they’d need to secure both systems — and I’ve seen so many neglected VPS’s in my time… I’ll be the first to admit: myself included.

There are very valid reasons to need a tunnel; CGNAT, ISP level port blocking, network policies (ie campus dorm), etc etc etc. However, if you read the other replies, this doesn’t seem to be the case here, and OP doesn’t seem to even know why they’re hiding their IP. They just wanted to do it because of some loose notion that it may be nice since they’re opening up their port.

For someone in that situation, introducing a whole stack that punches through the firewall via a VPN or the like introduces way more risk than just locking down the gateway directly and handling the other issues as they come up.


Say someone wants to take your service down; you’ve got a 500Mbit line from your home ISP and 10Gbit on your VPS. They send 1Gbit of traffic to your VPS, your VPS happily tries to forward 1Gbit, fully saturating your home line. Now you’re knocked offline.

Say someone discovers the actual IP; dropping traffic from anything other than the VPS doesn’t help if they just, again, flood your line with 500Mbit of traffic. The traffic still flows from the ISP to your gateway before it can be dropped.

Say someone wants to perform SQL injection on your website, there is no WAF in this stack to prevent that.

Say someone abuses a remote code execution bug in the application you’re hosting to create a reverse shell into your system; this complex stack doesn’t protect against that either.

You’ve provided a comprehensive guide, and I don’t want to single you out for being helpful, but I must ask: what problem does this solve, and does OP actually have the problem this stack can solve? From the replies we’ve seen in this thread, OP doesn’t have a sufficient understanding of the full scope of the situation. Prescribing a well-intended solution might be helpful, but it gives a false sense of security that doesn’t really help with the full picture.


You do not strictly need to open a port – tunnelling through another server could be a solution, but let’s park this for a moment.

What you are describing as “open a port in my firewall” actually involves many smaller parts; some key ones that may be relevant are:

  1. (Firewall) Telling your gateway not to drop traffic when someone outside requests a connection to the specified port; and
  2. (Port Forwarding) Telling your gateway to forward traffic from that port to a specific computer’s specific port within the network (i.e.: your computer, port 80); and
  3. (Running a service) Having a service (say, for example, a web server) running on that specific computer’s specific port, answering requests.

All three things (amongst others that aren’t immediately relevant here) must be properly set up for any network request to happen. What do I mean by that? I can have a port not drop traffic (i.e.: firewall down). When someone from outside of my network tries to access the port, they get to my router, but nothing happens because there’s nowhere for the packet to go. I can have my firewall down and port forwarding enabled, but the web server isn’t running. When someone from outside of my network tries to access the port, they get to my router and get forwarded to my computer, but because the web server isn’t running, nothing happens. Someone from outside of my network can gain access to my service (and only that service) only when all three are set up and working together.
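
To make the three parts concrete, here’s a sketch for a Linux box acting as the gateway (consumer routers expose the same ideas as “firewall rules” and “port forwarding” in their web UI; the interface name and addresses below are purely illustrative):

```
# 2. port forwarding: send TCP 80 arriving on the WAN interface to 192.168.1.50:8080
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.168.1.50:8080
# 1. firewall: don't drop the forwarded traffic
iptables -A FORWARD -p tcp -d 192.168.1.50 --dport 8080 -j ACCEPT
# 3. running a service: on 192.168.1.50, something has to actually answer on that port
python3 -m http.server 8080
```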

“But what about the hackers?”

Yes, the untrusted networks, such as the internet, could be a bad place with people with bad intentions. There are many different things they could do to make things undesirable; let’s explore some of them together.

Say we want to run an instance of Lemmy using new experimental server software (i.e.: not the official Lemmy server). Now, unfortunately, some racist people decide to come and make racist posts on our instance. A tunnel / proxy doesn’t solve this. Instead, we have to ban their accounts. It may not seem like much, and it was completely innocuous to our system, but we’ve just dealt with our first attack.

One of those racist people happens to be the “scary hacker” type, so they come back and try to brute-force our admin account’s password to unban themselves. This is not too bad, but we need to address it somehow. A tunnel / proxy doesn’t solve this; but something like Fail2Ban might be able to look at the login failures and put a temporary IP ban on the attacker.

They’re back! And this time, they decide to repeatedly hammer the search function, thereby taking all the resources from our database, so our instance cannot serve other users. A tunnel / proxy doesn’t solve this; but some rate limiting configurations in the server application might help.

They’re not happy about getting rate limited there. So this time, they decide to continuously post garbage to our instance: not even normal requests, just connecting to our web server and spamming AAAAAAAAAAAAAA… non-stop, at such a quick pace that it fully saturates our network connection and we cannot do anything else on the network. A tunnel / proxy doesn’t solve this; we’d need to block them at the firewall. (This is not entirely true: blocking them at the firewall doesn’t solve the problem, because the traffic still goes from the ISP to the firewall, so the line is saturated before the firewall can drop the traffic; but as an example it narrates the problem well enough.)

They’re angry now, and they pay a few bucks to botnets to have many many many thousands of infected computers to spam AAAAAAAAA… non stop at our service. Again, a tunnel / proxy doesn’t solve this; we’d need to have something smarter than just our firewall and individually ban the IP addresses. This is where we’d need the professionals with typically commercial offerings.

It could escalate in the other direction. Instead of attacking with the aim of taking the service down, they could do other damaging things. Say they found a problem with our server software. Instead of giving /post/<postid> a numeric id, they can do something fancy like /post/1 AND 1 ==1; UPDATE users SET banned = FALSE WHERE username = 'racist-user' and unban themselves. A tunnel / proxy doesn’t solve this; but a Web Application Firewall (WAF) might.

Now it escalates further. Through a complex chain (an intentionally malformed image uploaded to the instance, the image resizer tripping over it while attempting to resize it, which causes remote code execution), they create a remote access trojan (RAT) shell so they can connect to our server and run commands. This is usually the “big bad” most people are scared of: someone from outside of their network having access to their system, and thus gaining the ability to extract their documents or encrypt their photos, etc. A tunnel / proxy doesn’t solve this; but a WAF or an anti-virus on the server itself might.

Through this albeit simplified but lengthy exploration, we see that none of these attacks would actually be addressed by a tunnel / proxy. There are other possible attacks, and they’d require other solutions.

So, going back to what I was saying earlier… it is important to know why you’re trying to do something. Blindly prescribing a tunnel / proxy doesn’t actually solve the problem.


What kind of attacks, against what service?

DDoS? It’s cheaper to hire botnets to attack than to defend. You’d most likely still be knocked offline just by the amount of traffic that leaks through your proxy before the VM gets cut off at the data centre. Specifically: data centres are likely to tolerate much higher thresholds before null-routing your VM than your residential ISP would be willing to.

Brute force on shell? SQL injection? Remote shell execution? Deploying the extra layer will not protect you from these, as your own proxy will not give you a WAF.

It is always important to know why you’re doing something, before anyone can prescribe a solution.


Again, that’s what you’d like to achieve, but why?

Without the reason, there is no way to provide a useful answer that would adequately address the underlying reason.


What is your objective for ‘hide server IP’?

Privacy to disconnect your identity from the service? There is no solution to this. Full stop. Even with Tor, the state backed acronym entities will figure it out if you get on their radar.

If your objective is to keep your service online, you’re going to be hard pressed to find cost effective alternatives… Commercial solutions are expensive, like, “if you have to ask about the price, you can’t afford it” expensive.

Alternatively, you can try to roll your own by having many many proxy servers yourself… but if you’ve got a target on your back, you’ll never have enough instances; DDOS-as-a-Service is much cheaper than the amount of reverse proxies required to keep your service online.

There’s probably other use cases, but chances are, you’d still be hard pressed to find a solution that’s cost effective.


Locks can happen at the registrar (e.g. Njalla, Cloudflare, Namecheap, etc.) or at the registry (e.g. gen.xyz, Identity Digital, Verisign, etc.).

Typically, registry locks cannot be resolved through your registrar, and the registrant may need to work with the registry to see about resolving the problem. This could be complicated with Whois privacy as you may not be considered the registrant of the domain.

In all cases, most registries do not take domain suspensions lightly, and generally tend to lock only on legal issues. Check your Whois record’s EPP status codes to get hints as to what may be happening.
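
If you want a quick way to read those codes (assuming the `whois` CLI is installed; the codes below are just common examples):

```
whois example.com | grep -i 'domain status'
#   clientHold / clientTransferProhibited  -> applied by the registrar
#   serverHold / serverUpdateProhibited    -> applied by the registry
```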


So just because they don’t know technology like you do, they should be left behind the times instead of taking advantage of advancements? A bit elitist and gatekeeping there, don’t you think?

Everyone has their own choices to make, and for most, they’ve already decided they’d rather benefit from advancements than care about what you care about.


And here’s the reason why laymen should not: they’re much more likely to make that one wrong move and suffer irrecoverable data loss than to be harmed by some faceless corporation selling their data.

At the end of the day, those of us who are technical enough will take the risk and learn, but for the vast majority of people, it is and will remain a non-starter for the foreseeable future.


If memory serves, the default docker compose file also exposes the database port with a basic hard-coded password. So imagine using the compose file without reading it too closely; next thing you know, you’re running a free Postgres database for the world.

Edit: yep, still publishing the db port with hard coded password…
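
A hedged illustration of the difference (generic service and image names, not the project’s actual compose file): publishing the port hands the database to the internet, while an internal network keeps it reachable only by the app containers.

```
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me   # a hard-coded default like this must be changed
    # ports:
    #   - "5432:5432"                # publishing the port exposes Postgres to the world
    networks:
      - backend                      # only containers on this network can reach it

networks:
  backend:
    internal: true
```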


No multi-region unless you roll it yourself. Their offerings are primarily web hosting centric, so you’d need to do the heavy lifting yourself if you want more infra. Also worth noting that they’re definitely not in the same league as the big players, they’re just an old vendor that isn’t likely to disappear on you.


BuyVM has a $24/yr KVM server that you can attach storage to at $5/TB/month, so 5TB should set you back about $325/yr all in. They’ve been around for quite some time (I’ve been a client since 2011), so they’re not likely to disappear anytime soon.


There are two ways around the symptoms you’re trying to treat:

  1. Don’t bother with internal vs external. Always route through the external path, which gets encrypted by the origin cert to Cloudflare and then from Cloudflare to your browser. This is simplest in that you don’t need to manage two sets of DNS records, and you don’t end up with different certificates for the same domain (in the odd event where you end up needing to do something like certificate pinning). Or;
  2. Just add the origin cert to your systems’ trust store. You know the certificate, it will encrypt the traffic anyway, also you’re accessing the service via intranet so there’s really no attack vector here.

Probably worth calling out that although option 1 feels like there are more hops (and there absolutely are), with any decent internet connection you’re probably not going to feel it. The edge server is likely situated very close to your ISP (that’s how they make sure everything responds quickly), so your overall round trip should only be affected by a negligible amount of time that you most likely won’t notice.
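
For option 2, a minimal sketch on a Debian/Ubuntu-style host; the file name is a placeholder, and the certificate you’d trust is the Cloudflare Origin CA root (or your own self-signed cert), not the leaf itself:

```
sudo cp cloudflare-origin-ca.pem /usr/local/share/ca-certificates/cloudflare-origin-ca.crt
sudo update-ca-certificates
```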


The RAID rebuild time is going to be longer than the OEM warranty… love it!


Self hosted SSH key repository?
I have too many machines floating around, some virtual, some physical, and they're getting added and removed semi-frequently as I play around with different tools and try out ideas. One recurring pain point is that I have no easy way to manage SSH keys across them, and it's a pain to deal with adding/removing/cycling keys. I know I can use `AuthorizedKeysCommand` in sshd_config to make the system fetch a remote key for validation, and I know I could theoretically publish my pub key to GitHub or the like, but I'm wondering if there's something more flexible/powerful where I can manage multiple users (essentially roles), such that each machine can be assigned a role and automatically allow access accordingly. I've seen [Keyper](https://keyper.dbsentry.com) before, but the container hasn't been updated in years, and the support Discord owner actively kicks everyone from the server, even after they ask questions. Is there any other solution out there that would streamline this process a bit?
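
For reference, a minimal sketch of the `AuthorizedKeysCommand` approach mentioned above; the key server URL and helper script are hypothetical placeholders for whatever central source ends up being used:

```
# /etc/ssh/sshd_config: sshd also asks this command for acceptable public keys
AuthorizedKeysCommand /usr/local/bin/fetch-authorized-keys %u
AuthorizedKeysCommandUser nobody

# /usr/local/bin/fetch-authorized-keys (root-owned, not group/world-writable):
#!/bin/sh
# Ask a hypothetical central key server for the keys allowed for this host's role and user.
exec curl -fsS "https://keys.internal.example/$(hostname -s)/${1}.keys"
```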

Lemmy via Docker Compose, using Traefik and CloudFlare
**Disclaimers:** First things first, I'm new to the whole Fediverse and Lemmy thing, so please don't hesitate to point out any problems you're foreseeing. Secondly, I'm by no means saying this is the ideal implementation, something something see above. Please don't hesitate to make recommendations for improvements. Lastly, I'm not sure if it is completely working. I'm still noticing a few issues that I will document and monitor towards the end of the post. If you know of the cause or how to debug further, please do let me know!

**Notes and Assumptions:**

1. I am using an ARM server, so I'm using ARM images; you will need to make sure you're using the correct architecture image.
2. I assume you have Traefik up and running in a separate network. I used docker compose to bring Traefik up with minimal configuration, and I'm just hijacking the `default` network there (the project folder was `gateway`, so the complete network name is `gateway_default`)... there are probably better ways to do this.
3. On the note of networks, I really don't like the fact that the default postgres was left wide open on the `lemmyexternalproxy` network. I think I've locked mine down, but you may wish to double check my work.
4. I'm not sure if what I am doing with the hostnames is correct, but it seems to work for the most part, so I'm not complaining. If there is a better way, please do advise!
5. I used an [override file for docker compose](https://docs.docker.com/compose/extends/) to apply extra settings. This allows me to keep the original `docker-compose.yml` untouched, and I can just pull in new changes (theoretically).
6. Since I'm using Traefik, I don't need nginx running doing nothing. I replaced it with a lightweight alpine image that just shuts down successfully, so it doesn't use resources.

Without further delay, here are my files.

`docker-compose.override.yml`:

```
version: "3.3"

networks:
  lemmyexternalproxy:
    internal: true
  lemmygateway:
    name: gateway_default
    external: true

services:
  lemmy:
    image: dessalines/lemmy:0.17-linux-arm64
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.lemmy.entrypoints=websecure"
      - "traefik.http.routers.lemmy.rule=Host(`lemmy.chiisana.net`) && HeadersRegexp(`Accept`, `^application/`) || Host(`lemmy.chiisana.net`) && Method(`POST`) || Host(`lemmy.chiisana.net`) && PathPrefix(`/{path:(api|pictrs|feeds|nodeinfo|.well-known)}`)"
      - "traefik.http.routers.lemmy.tls=true"
      - "traefik.http.services.lemmy-svc.loadbalancer.server.port=8536"
      - "traefik.docker.network=gateway_default"
    networks:
      - lemmygateway
  lemmy-ui:
    image: dessalines/lemmy-ui:0.17-linux-arm64
    environment:
      - LEMMY_UI_HOST=0.0.0.0:1234
      - LEMMY_UI_LEMMY_INTERNAL_HOST=lemmy:8536
      - LEMMY_UI_LEMMY_EXTERNAL_HOST=lemmy.chiisana.net
      - LEMMY_UI_HTTPS=true
      - LEMMY_UI_DEBUG=false
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.lemmy-ui.entrypoints=websecure"
      - "traefik.http.routers.lemmy-ui.rule=Host(`lemmy.chiisana.net`)"
      - "traefik.http.routers.lemmy-ui.tls=true"
      - "traefik.http.services.lemmy-ui-svc.loadbalancer.server.port=1234"
      - "traefik.docker.network=gateway_default"
    networks:
      - lemmygateway
  proxy:
    image: alpine:latest
    command: "true"
    entrypoint: "true"
    restart: "no"
  pictrs:
    image: asonix/pictrs:0.4.0-rc.3
```

`lemmy.hjson`:

```
{
  setup: {
    admin_username: "chiisana"
    admin_password: "password-redacted-duh"
    site_name: "chiisana lemmy site"
  }
  database: {
    host: "postgres"
    user: "lemmy"
    password: "password-redacted-duh"
    database: "lemmy"
  }
  email: {
    smtp_server: "smtp.mailgun.org:587"
    smtp_login: "lemmy@chiisana.net"
    smtp_password: "password-redacted-duh"
    smtp_from_address: "lemmy@chiisana.net"
    tls_type: "tls"
  }
  pictrs: {
    url: "http://pictrs:8080/"
    api_key: "API_KEY"
  }
  hostname: "lemmy.chiisana.net"
  bind: "0.0.0.0"
  port: 8536
  tls_enabled: true
}
```

----

**Known issue(s)?**

1. ~~I have my registration disabled, as the instance is supposed to be just for my own use, with auth not dependent on other instances. In my `/admin` section, I'm seeing a ton of users from `endlesstalk.org` pop up as banned users. I have no idea what that is about, as `endlesstalk.org` seems to also be used by only one user. I'll be monitoring this and see what comes of it.~~ Edit: Looks like this is just the way the system is designed, and not a configuration error on my part! All good here. Thanks for clarifying it @lemmy@endlesstalk.org !
2. I'm not sure if I'm getting all the messages federated. In this community, for example, I can see most if not all recent threads. However, most threads have no comments in them. In some newer threads I see comments, but they seem to be incomplete. I'm not sure if I'm only supposed to receive new messages, or if something else is happening. I'll be monitoring this, and hoping the federation will just catch up over time.
3. Edit: It would appear this post itself is not federating to [!selfhosted@lemmy.world](https://lemmy.world/c/selfhosted) for some reason... I'm partially hoping it is just caught in some kind of moderation queue, but seeing other posts made after this appear on the list leads me to believe there's still something amiss.

If you encounter any other issue, please do post back so we can try to debug it together. Hope this helps someone!