
If you have the impression that there’s a dominant, homogeneous “mass” sharing the same opinion, you are right there in the middle of an information bubble and a victim of those “algorithms”.


Matrix seemed interesting right until I got to self-hosting it. Then, getting to know it up close, and seeing what an absolute trainwreck the protocol is, made me love XMPP. Matrix has no excuse for being so messy and fragile at this point. You do you, but I decided that it isn’t worth my sysadmin time (especially when something like ejabberd is practically fire and forget).
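To give an idea of what I mean by “fire and forget”, here is a rough, hypothetical sketch of an ejabberd.yml (the hostname and certificate path are made up, and a real deployment would want to review the defaults):

```yaml
# Hypothetical minimal ejabberd.yml sketch; hostname and paths are placeholders.
hosts:
  - xmpp.example.org

# TLS certificates (path is an assumption; point it at your own certs)
certfiles:
  - /etc/letsencrypt/live/xmpp.example.org/*.pem

listen:
  - port: 5222              # client-to-server connections
    module: ejabberd_c2s
    starttls_required: true
  - port: 5269              # server-to-server federation
    module: ejabberd_s2s_in

acl:
  admin:
    user:
      - admin@xmpp.example.org
```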


I don’t think our views are so incompatible, I just think there are two conflicting paradigms supporting a false dichotomy: one is prevalent in the business world, where the cost of labour dwarfs the cost of hardware and where it’s acceptable to trade some (= a lot of) efficiency for convenience/saved manhours. But this is the “self-hosted” community, where people are running things on their own hardware, often in their own house, paying the high price of inefficiency very directly (electricity costs, less living space, more heat/noise, etc).

And docker is absolutely fine and relevant in this space, but only when “done right”, i.e. when containers are not just spun up as isolated black boxes, but carefully organized so as to avoid overlapping services and resource wastage, in which case managing containers ends up requiring more effort, not less.

But this is absolutely not what you suggest. What you suggest would have a much greater wastage impact than “a few percent of CPU usage or a little bit of RAM”, because you essentially propose that every container ship its own web server, application server, database, etc… We are no longer talking about a “few percent” of overhead from the container stack, we are talking about whole new machines’ worth of software and compute requirements.

So, in short, I don’t think there’s a very large overlap between the business world throwing money at its problems and the self-hosting community, and so the behaviours are different (there’s more than one way to use containers, and my observation is that it goes very differently in each). I’m also not hostile to containers in general, but they cannot be recommended in good faith to self-hosters as a solution that is both efficient and convenient (you must pick one).
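To make the “done right” part concrete, here is a hedged sketch of what sharing services across containers can look like, instead of every stack shipping its own copy (the service names, images and credentials below are made up for illustration):

```yaml
# docker-compose sketch: one shared Postgres and one shared Redis,
# reused by two application containers instead of each shipping its own.
services:
  postgres:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: change-me      # placeholder
    volumes:
      - db-data:/var/lib/postgresql/data

  redis:
    image: redis:7

  app-one:                              # hypothetical application
    image: example/app-one
    environment:
      DB_HOST: postgres                 # points at the shared instance
      REDIS_HOST: redis
    depends_on: [postgres, redis]

  app-two:                              # another hypothetical application
    image: example/app-two
    environment:
      DB_HOST: postgres
      REDIS_HOST: redis
    depends_on: [postgres, redis]

volumes:
  db-data:
```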



I don’t care […] because it’s in the container or stack and doesn’t impact anything else running on the system.

This is obviously not how any of this works: down the line those stacks will very much add up and compete against each other for CPU/memory/IO/…. That’s inherent to the physical nature of the hardware, its architecture and the finiteness of its resources. And here comes the balancing act; it’s just unavoidable.

You may not notice it, as a result of having too much hardware thrown at it, but I wouldn’t exactly call this a winning strategy long term, and especially not in the context of self-hosting where you directly foot the bill.

Moreover, those server components which you are needlessly multiplying (web servers, databases, application runtimes, …) have spent decades optimizing for resource pooling (with shared buffers, caching, event scheduling, …). These efforts are all thrown away when each instance runs for a single client/container, further lowering (and quite drastically at that) the headroom for optimization and scaling.


That’s… one tool in the toolbox for that. But I’m not really sure that’s the point here?


I don’t think containers are bad, nor that the performance lost in abstractions really is significant. I just think that running multiple services on a physical machine is a delicate balancing act that requires knowledge of what’s truly going on, and careful sharing of resources, sometimes across containers. By the time you’ve reached that point (and know what every container does and how its services are set up), you’ve defeated the main reason why many people use containers in the first place (to fire and forget black boxes that mostly just work), and only added layers of tooling and complexity between yourself and what’s going on.



With only one having your interests at heart. An easy choice.


I’d like to share your optimism, but what you suggest leaving us to “deal with” isn’t “AI” (which has been present in web search for decades as increasingly clever summarization techniques…) but LLMs, a very specific and especially inscrutable class of AI designed to “sound convincing”, without care for correctness or truthfulness. Effectively, more human time will be wasted reading invented or counterfeit stories (with no easy way to tell); first-hand information will be harder to source and verify as it gets increasingly diluted into AI-generated noise.

I also haven’t seen any practical advantage to using LLM prompts vs. traditional search engines in the general case: you end up typing more, for the sake of “babysitting” the LLM, and get more to read as a result (which is, again, aggravated by the fact that you are now given a single-source/one-sided view on the matter, without citations, references or reproducible steps leading to the conclusion).

Last but not least, LLMs are an environmental disaster in the making: the computational cost is enormous (in new hardware and electricity), and we are at a point where all the companies partaking in this new gold rush are selling us a solution in need of a problem, every one of them still having to justify the expenditure (so far, none is making a profit out of it, which would be the first step towards offsetting the incurred pollution).


You can always give a shot at using a third-party client (possibly acting as a bridge for other/better protocols, like e.g. slidge.im>xmpp or the buggy Matrix equivalent), but you need to keep in mind that they will all require you to authenticate (and remain authenticated) using a smartphone, and that usage of third-party clients is forbidden by WA’s terms and conditions (which may lead to your account being blocked/deleted).


How about Nextcloud with only the bare minimum amount of plugins? Files alone is pretty snappy.
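Stripping it down is mostly a matter of disabling apps with occ; a rough sketch (run from the Nextcloud installation directory; the app names below are only examples, check app:list for what you actually have):

```sh
# List installed/enabled apps, then disable the ones you don't need.
# (Run as the web server user; the app names are just examples.)
sudo -u www-data php occ app:list
sudo -u www-data php occ app:disable photos
sudo -u www-data php occ app:disable activity
sudo -u www-data php occ app:disable dashboard
```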


Pydio used to be called AjaXplorer and was a pretty solid and lightweight (although featureful) solution, but then they rewrote the UI with lots of misguided choices (touch controls and Android-inspired interactions on desktop devices) and it became so horrendous, heavy and clunky that I almost forgot about it. I wonder if they have reversed the trend (but from the screenshots it doesn’t look like it).



I agree with the sentiment and everything, but the whole gaming console industry has gone to crap after they started putting hard drives/storage in them, with the goal of keeping you online and making sure you no longer really own anything. They are all equally despicable for that. Which makes emulation even more essential, if only for preserving those games into the future, when the online services will inexorably shut down.


I’m with you. Hg-git still is to this day the best git UI I know…


I have no idea what this is about, but was kotlin native considered here? And what ruled it out in favour of rust?

I’ve seen multiple JVM languages going the route of AOT/native compilation and now taking the spot of systems languages in some use cases (CLI utils, low footprint “cloud native” stacks, things requiring tight os-level integration) with often outstanding performance.


The problem I’ve observed with XMPP as an outsider is the lack of a standard. Each server or client has its own supported features and I’m not sure which one to choose.

That’s a valid concern, but I wouldn’t call it a problem. There are practically 2 types of clients/servers: the ones which are maintained, and which work absolutely fine and well together, and the rest, the unmaintained/abandoned part of the ecosystem.

And with the protocol being so stable and backwards/forwards compatible in large part, those unmaintained clients will still just work, just not with the latest and greatest features (XMPP has the machinery to let clients and servers advertise their supported features, so the experience is at least cohesive).
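That machinery is Service Discovery (XEP-0030): any client can simply ask an entity what it supports, roughly like this (the addresses are made up, and the features shown are just two common examples):

```xml
<!-- Ask a server (or another client) which features it supports (XEP-0030) -->
<iq type='get' from='romeo@example.net/orchard' to='example.net' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'/>
</iq>

<!-- The reply lists supported features, e.g. message archiving and carbons -->
<iq type='result' from='example.net' to='romeo@example.net/orchard' id='disco1'>
  <query xmlns='http://jabber.org/protocol/disco#info'>
    <feature var='urn:xmpp:mam:2'/>
    <feature var='urn:xmpp:carbons:2'/>
  </query>
</iq>
```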

Which client would you recommend?

Depends on which platform you are on and the type of usage. You should be able to pick one as advertised on https://joinjabber.org , that should keep you away from the fringe/unmaintained stuff. Personally I use gajim and monocles.


They both qualify as “open, federated messaging protocols”, with XMPP being the oldest (about 25 years old) and an internet standard (IETF), but at this point we can consider Matrix to be quite old, too (10 years old). On paper they are quite interchangeable; they both focus on bridging with established protocols, etc.

Where things differ, though, is that Matrix is practically a single-vendor implementation: the same organization (Element/New Vector/however it’s called these days) develops both the reference client and the reference server. Which incidentally is super complex, not well documented (the code is the documentation), and practically not compatible with the other (semi-official) implementations. This is a red flag, because it also happens that this organization was built on venture capital money with no financial stability in sight. XMPP is a much more diverse and accessible ecosystem: there are multiple independent teams and corporations implementing servers and clients, and the protocol itself is very stable, versatile and extensible. This is how you can find XMPP today running the backbone of the modern internet, dispatching notifications to all Android devices, being the signaling system behind millions of IoT devices, and providing messaging to billions of users (WhatsApp is, by the way, based on XMPP).

Another significant difference is that, despite 10 years of existence and millions invested into it, Matrix still has not reached stability (and probably never will): the organization recently announced Matrix 2 as the (yet another) definitive answer to the protocol’s shortcomings, without changing anything about what makes the protocol so painful to work with, and the requirements (compute, memory, bandwidth) to run Matrix at even a small scale are still orders of magnitude higher than XMPP’s. This discouraged many organizations (even serious ones, like Mozilla, KDE, …) from running Matrix themselves and further contributes to the de-facto centralization and single point of control that federated protocols are meant to prevent.


Sorry if this isn't the right venue for that, I thought it'd be in the tone of "self-hosting" and "federation" :) tl;dr: some XMPP servers started to deploy a mod to report back about how they federate with the rest of the network, and now there is a pretty graph to show for it at https://xmppnetwork.goodbytes.im/webgl.html

public Matrix server

Let’s see how long before it bankrupts you


It’s part of the reason why I think decentralized services could be the future. Lemmy or Mastodon can have a lot of small servers with reasonable costs spread across many admins, instead of one centralized service that costs a significant amount to run.

Ohh, absolutely, or rather, it is the past. I mean, the internet was built that way, as a resilient federation of networks and protocols. Lemmy could be seen as us just rediscovering email after the tech giants almost succeeded in killing it. We should approach all the services we use by asking ourselves basic sustainability questions:

  • is that thing opensource?

  • self hostable?

  • does it federate/interoperate with equivalent services?

  • can I pull my data out of it/relocate to another provider on a whim?

  • if not, is this a trustworthy and ethical business?

  • is it profitable?

  • are there open financial records available showing where/for what the money is going?

  • is it at risk of being acquired?

  • is it subject to foreign/unlawful interference?

Etc Etc


Until I can give a laptop with Linux to my neighbour without also needing to provide support, it’s not there yet.

I mean, isn’t your neighbour already getting Windows support from his son or nephew anyway? Let’s not pretend that there exists a magical and perfect OS for those who don’t want to learn one. Some learning is required, whichever the OS, and it would be hard to convince me that a current preinstalled Linux is more difficult to handle than a current preinstalled Windows.

Windows has going for it that it’s a devil most people know/got exposure to (thanks to Microsoft’s schemes and monopolistic practices); there is nothing inherently better or easier about it (and arguably quite the opposite).


What I found compelling about the sync is that you can have your other machines’ histories there with you, but in the background, behind a different shortcut, just in case you need to re-run or check that command you ran somewhere else a few years ago…

As I said, I haven’t used that yet, but that’s in many ways more appealing than having to SSH onto said machine (assuming it’s even possible).




Been using it for months, haven’t gotten to use the sync yet, my only regret so far is that it doesn’t support case-insensitive search, which is a pretty big deal for me, unfortunately.


Mercurial* and no, GitHub never supported hg, that was kind of the distinguishing feature of bitbucket back in the glory days of VCS plurality.

Now if you need mercurial hosting, heptapod (a friendly fork of gitlab with mercurial support) is a great way to go


Most containers don’t package DB programs. Precisely so you don’t have to run 10 different database programs. You can have one Postgres container or whatever.

Well, that’s not the case for the official Nextcloud image: https://hub.docker.com/_/nextcloud (it defaults to sqlite, which might well be the reason for so many complaints), and the point about service duplication still holds: https://github.com/docker-library/repo-info/tree/master/repos/nextcloud
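For what it’s worth, pointing that image at a proper (possibly shared) database is a few environment variables away; something along these lines, based on the image’s documented variables (the hostname and credentials are placeholders):

```yaml
# docker-compose sketch: official nextcloud image pointed at an existing,
# shared Postgres instead of the default sqlite (all values are placeholders).
services:
  nextcloud:
    image: nextcloud
    ports:
      - "8080:80"
    environment:
      POSTGRES_HOST: postgres.internal   # an existing/shared database host
      POSTGRES_DB: nextcloud
      POSTGRES_USER: nextcloud
      POSTGRES_PASSWORD: change-me       # placeholder
    volumes:
      - nextcloud:/var/www/html

volumes:
  nextcloud:
```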

You can typically configure the software in a docker container just as much as you could if you installed it on your host OS…

True, but how large do you estimate the intersection of “users using docker by default because it’s convenient” and “users using docker, having the knowledge, and putting in the effort to fine-tune each and every container, optimizing/rebuilding/recomposing images as needed”?

I’m not saying it’s not feasible, I’m saying that nextcloud’s packaging can be quite tricky due to the breadth of its scope, and by the time you’ve given yourself fair chances for success, you’ve already thrown away most of the convenience docker brings.


See my reply to a sibling post. Nextcloud can do a great many things, are your dozen other containers really comparable? Would throwing in another “heavy” container like Gitlab not also result in the same outcome?


Well, that is boldly assuming:

  • that endlessly duplicating services across containers causes no overhead: you probably already have a SQL server, a Redis server, a PHP daemon, a web server, … but a docker image doesn’t know, and indeed doesn’t care, about that redundancy, wasting storage and memory in the process

  • that the sum of those individual components works as well and as efficiently as a single (highly-optimized) pooled instance: every service/database in its own container duplicates tight event loops, socket communications, JITs, caches, … instead of pooling them and optimizing globally for the whole server, wasting threads, causing CPU cache misses, missing optimization paths, and increasing CPU load in the process

  • that those images are configured according to your actual end-users needs, and not to some packager’s conception of a “typical user”: do you do mailing? A/V calling? collaborative document editing? … Your container probably includes (and runs) those things, and more, whether you want it or not

  • that those images are properly tuned for your hardware, by somehow betting on the packager to know in advance (and for every deployment) about your usable memory, storage layout, available cores/threads, baseline load and service prioritization

And this is even before assuming that docker abstractions are free (which they are not)


and why would that be? More abstraction thrown in for the sake of sysadmin convenience doesn’t magically make things more efficient…


Take that as you want but a vast majority of the complaints I hear about nextcloud are from people running it through docker.


Would be nice to be able to run WG on the NAS directly and not need a server, wouldn’t it? I believe there are a few go/rust userspace WG servers out there but I don’t know if anyone’s using them for anything like that.
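If the NAS can run arbitrary binaries, one of those userspace implementations (e.g. wireguard-go) can in principle be driven much like the kernel module; a rough sketch, assuming the standard wg/ip tools are available and that the config file only contains keys wg(8) understands:

```sh
# Rough sketch: bring up a userspace WireGuard interface with wireguard-go.
# Assumes wireguard-go plus the wg and ip tools are installed on the NAS, and
# that /etc/wireguard/wg0.conf holds [Interface]/[Peer] keys wg(8) accepts
# (PrivateKey, ListenPort, PublicKey, AllowedIPs, Endpoint, ...).
wireguard-go wg0                          # create the tun interface in userspace
wg setconf wg0 /etc/wireguard/wg0.conf    # load keys and peer definitions
ip address add 10.0.0.2/24 dev wg0        # example tunnel address
ip link set wg0 up
```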


What is “old arse” to you might be blazing fast and great for someone else (potentially in a less fortunate area of this world), and besides, no matter your or my sensibilities, if it works, it works, and should be kept that way as long as it has a purpose and the hardware permits it.


Except for a marginal fraction of the top YouTubers, aren’t most of them getting paid to inject sponsored links and from donations/patronage these days? It seems that the deal you are referring to has been off the table for a majority of YouTubers for a very long time now, and I don’t see why other platforms wouldn’t be as good, or even healthier than YouTube to provide them that kind of revenue.


unison is currently the closest to showing how it is actually done

What makes you say that? As far as I’m aware, even the theoretical soundness of it isn’t a done deal (this is a harder nut to crack than e.g. rust’s borrow checker)

Overall, I think one of 2 things will happen:

In this niche, perhaps, I don’t believe any of those will gain mainstream adoption (though I hope I’m wrong)



functional languages aren’t battle tested or imply they aren’t useful in real world problem solving

Yup, I never said that, though? What I was getting at was drawing a parallel between functional programming languages and the explorations from several decades ago vs. the new languages and explorations going into effect typing/capabilities programming now (and the long way ahead for those).

What I find interesting is that those pioneering FP languages never came to top the popularity charts, which is why I’m not expecting Unison to be different (but the good parts might make it into Java/C#/Python/… many years from now).


“Capabilities” is the new “Functional Programming” of decades prior,

Scala is also expanding in this area via the Caprese project: https://docs.scala-lang.org/scala3/reference/experimental/cc.html and it promises Safe Exceptions, Safe Nullability, Safe Asynchronicity in direct style/without the “what color is your function” dilemma, delineation of pure vs impure functions, … even Rust’s borrow checker (and memory guarantees) becomes a special case of Capabilities.
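As a taste of the “Safe Exceptions” part, here is a rough sketch in the spirit of the experimental Scala 3 feature (it needs a recent compiler and the experimental import; the class and limit are made up):

```scala
import language.experimental.saferExceptions

class LimitExceeded extends Exception

val limit = 10e9

// The `throws` clause is sugar for taking a CanThrow[LimitExceeded]
// capability: callers must either handle the exception or declare it too.
def f(x: Double): Double throws LimitExceeded =
  if x < limit then x * x else throw LimitExceeded()

@main def demo(): Unit =
  try println(f(3.0))
  catch case _: LimitExceeded => println("too large")
```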

I believe this is a major paradigm shift, but the ergonomics have yet to be figured out and battle-tested in the real world. Ultimately, like for functional programming languages (OCaml, F#, Haskell, …), I don’t expect pioneers like Unison/Koka/Scala to ever become mainstream, but rather the “good parts” to be ported to the ever more complex and clunky “general purpose” programming languages (or, why I love Scala, which is multiparadigm and still very thin/clean at its core).


I can’t pretend to know the future, but if you read between the lines and the justifications provided, this isn’t really about AGPL per se, but about Element brokering AGPL exceptions. Practically, we can expect all kinds of forks with open-core options that might enshittify the user experience in different ways, and a further solidification of Element’s single-handed control over Matrix (which has been a prime concern for many years). Matrix is getting closer by the day to the closed-source centralized silos it first pretended to oppose.