• 0 Posts
  • 13 Comments
Joined 1Y ago
Cake day: Jun 30, 2023


I think what would be interesting is to get everyone who self-hosts this to do part of the indexing. As in, find some way to split the indexing work across the self-hosted instances running this search engine, and then make sure “the internet” is divided up somewhat reasonably between them. Kind of like what crypto does, except producing indexes instead of nothing.
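To make that concrete, here’s a rough sketch of the kind of partitioning I mean (everything here is hypothetical; it’s not how this project actually works): hash the domain name, and whichever instance the hash points at is responsible for crawling and indexing that domain.

```kotlin
import java.security.MessageDigest

// Hypothetical sketch: deterministically assign each domain to one of the
// participating self-hosted instances by hashing the domain name.
// Every instance that agrees on the same peer list computes the same owner.
fun ownerOf(domain: String, instances: List<String>): String {
    require(instances.isNotEmpty()) { "need at least one instance" }
    val digest = MessageDigest.getInstance("SHA-256").digest(domain.toByteArray())
    // Interpret the first 4 bytes of the hash as a bucket number.
    val bucket = digest.take(4).fold(0) { acc, b -> (acc shl 8) or (b.toInt() and 0xFF) }
    val index = ((bucket % instances.size) + instances.size) % instances.size
    return instances[index]
}

fun main() {
    val peers = listOf("instance-a.example", "instance-b.example", "instance-c.example")
    for (domain in listOf("lemmy.world", "kernel.org", "wikipedia.org")) {
        println("$domain -> indexed by ${ownerOf(domain, peers)}")
    }
}
```

In practice you’d probably want consistent hashing instead of a plain modulo, so that an instance joining or leaving only reshuffles a small slice of the domain space.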


It’s a security thing. An HttpOnly cookie can’t be read by JavaScript, so it can’t be stolen through XSS or something like that, while a bearer token has to be stored somewhere JavaScript can see it.
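For illustration, here’s a minimal sketch of the server side using the JDK’s built-in HTTP server (endpoint and cookie name are made up): the session token is handed out as an HttpOnly cookie instead of being returned for the frontend to store.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress
import java.util.UUID

// Sketch: a tiny login endpoint that issues the session as an HttpOnly cookie
// instead of returning a bearer token for the frontend to keep.
fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/login") { exchange ->
        val sessionToken = UUID.randomUUID().toString()
        // HttpOnly: document.cookie can't read it, so XSS can't exfiltrate it.
        // Secure: only sent over HTTPS. The browser attaches it automatically.
        exchange.responseHeaders.add(
            "Set-Cookie",
            "SESSION=$sessionToken; HttpOnly; Secure; Path=/; Max-Age=3600; SameSite=Lax"
        )
        val body = "logged in".toByteArray()
        exchange.sendResponseHeaders(200, body.size.toLong())
        exchange.responseBody.use { it.write(body) }
    }
    server.start()
}
```

Because of the HttpOnly flag, the token never shows up in `document.cookie`; a bearer token, by contrast, has to live in localStorage, a readable cookie, or script memory, which is exactly what XSS can reach.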


Where my Java/Kotlin frameworks at?


I’ve recently switched from Notion to Obsidian (almost, anyway). But I still have to find a good way to sync. I tried Nextcloud, but I couldn’t get two-way sync to work on mobile. I feel like €10 a month for just sync is a bit much, and it (partly) defeats the idea of “the files are mine”.

I wish the official sync software were available for self-hosting (i.e. as a Docker container). Maybe even for a one-time fee?

What solutions do others here use for syncing?


For Google, yes, that’s true. YouTube’s subscription model is working, so that seems fine. But for other websites, such as news sites, I wonder whether there is a feasible alternative, because I don’t want those sites to go away.


Will ungoogled-chromium be able to patch this out?

Besides this, I’m still a bit worried about the state of the internet. Currently, ad revenue is what keeps a lot of sites online and free to use. Within the current economic system, is it even feasible to have privacy online?


I do see it the other way around as well: the games don’t get updated and then stop supporting newer versions of Android. I see the same with RCT Classic.


I agree for serious/business applications. In my case, it’s a home server that I and some close friends and family access. But it is indeed exposed (basically, my router forwards ports 80 and 443 to the server behind the NAT).

To discuss the header-based firewall: the Host header is used by Traefik to decide which Docker container to reverse-proxy the traffic to. So I don’t see any obvious way an attacker could manipulate that to reach (in my case) the Bitwarden instance. The main attack vector this protects against is some vulnerability in Bitwarden with bots going around exploiting it; at least Traefik would block them before they ever get to Bitwarden.
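To illustrate what that Host rule boils down to, here’s a toy sketch using the JDK’s built-in HTTP server (made-up hostnames and backend names, and obviously not how Traefik is actually implemented): the router looks only at the Host header, and a request whose Host doesn’t match any configured rule is rejected before it ever reaches a backend.

```kotlin
import com.sun.net.httpserver.HttpServer
import java.net.InetSocketAddress

// Toy sketch of host-based routing (conceptually what a Traefik Host(`...`) rule does).
// Hostnames and backends here are made up for illustration.
val backends = mapOf(
    "vault.example.com" to "bitwarden container",
    "music.example.com" to "subsonic container",
)

fun main() {
    val server = HttpServer.create(InetSocketAddress(8080), 0)
    server.createContext("/") { exchange ->
        val host = exchange.requestHeaders.getFirst("Host")?.substringBefore(':') ?: ""
        val backend = backends[host]
        val body = if (backend != null) {
            "would reverse-proxy this request to: $backend"
        } else {
            // Unknown Host header: rejected before touching any service.
            "404 - no router matches this host"
        }
        val bytes = body.toByteArray()
        exchange.sendResponseHeaders(if (backend != null) 200 else 404, bytes.size.toLong())
        exchange.responseBody.use { it.write(bytes) }
    }
    server.start()
}
```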

But yes, another service with a vulnerability could be exploited; escape that Docker container and you’re on the whole server.


Something to keep in mind here: I’m also using an IP whitelist, but only on some of the services Traefik hosts.

So, for example, Bitwarden is behind an IP whitelist, but Subsonic is reachable from any IP.

In that case I don’t think it’s really possible (or practical) to write L2 firewall rules for it.

But if you want all traffic to port 80 on your server to be IP-whitelisted, then a regular firewall would be a good fit.
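For what it’s worth, here’s a rough sketch of why that per-service check has to live in the reverse proxy: the decision depends on the requested hostname as well as the client IP, and a port-level firewall only ever sees “something on port 80/443”. Hostnames and subnets below are made-up examples, not my actual setup.

```kotlin
import java.math.BigInteger
import java.net.InetAddress

// Sketch of a per-service allow-list decision. It needs both the client IP
// *and* the requested hostname, which is why it sits in the reverse proxy.
// Hostnames and subnets are made-up examples.
val allowList = listOf("192.168.1.0/24", "10.8.0.0/24")  // e.g. LAN + VPN subnet
val protectedHosts = setOf("vault.example.com")          // Bitwarden stays restricted
// everything else (say, music.example.com / Subsonic) is open to any IP

fun inCidr(address: InetAddress, cidr: String): Boolean {
    val (network, prefix) = cidr.split("/").let { it[0] to it[1].toInt() }
    val net = InetAddress.getByName(network)
    if (net.address.size != address.address.size) return false  // don't mix IPv4 and IPv6
    val bits = net.address.size * 8
    val mask = BigInteger.ONE.shiftLeft(prefix).subtract(BigInteger.ONE).shiftLeft(bits - prefix)
    return BigInteger(1, address.address).and(mask) == BigInteger(1, net.address).and(mask)
}

fun shouldBlock(host: String, clientIp: String): Boolean =
    host in protectedHosts && allowList.none { inCidr(InetAddress.getByName(clientIp), it) }

fun main() {
    println(shouldBlock("vault.example.com", "203.0.113.7"))   // true: blocked
    println(shouldBlock("vault.example.com", "192.168.1.42"))  // false: allowed
    println(shouldBlock("music.example.com", "203.0.113.7"))   // false: not behind the allow-list
}
```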


Well, to be fair, transitions like the switch to 64-bit are always very slow (especially when they’re not forced by completely blocking 32-bit). But I don’t think it was overhyped; it just takes time, and more RAM was definitely needed to achieve the kinds of games/apps we have now.