Matrix and its implementations like Synapse have a very intimidating architecture (I’d go as far as to call most of the implementations somewhat overengineered), and the documentation ranges from inconsistent to horrific. I ran into this exact situation myself. Fortunately, this is one step you’re overthinking: you can use any string you want. It doesn’t even have to be random, as long as what you put in the config file matches what you use later. It’s basically just a temporary admin password.
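If you’d rather use a properly random string anyway, the Python standard library will happily generate one; this is just a minimal sketch, assuming the setting in question is a shared secret along the lines of Synapse’s registration_shared_secret:

```python
# Generate a random, URL-safe string to paste into the config file.
# 32 bytes of entropy is more than enough for a temporary shared secret.
import secrets

print(secrets.token_urlsafe(32))
```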
Matrix was by far the worst thing I’ve ever tried to self-host. It’s a hot mess. Good luck, I think you’re close to the finish line.
While it sounds a bit hacky, I think this is an underrated solution. It’s actually quite a clever way to bypass the whole problem. Physics is your enemy here, not economics.
This is kind of like trying to find an electric motor with the highest efficiency and torque at 1 RPM. It’s not theoretically impossible, but it’s not just a matter of price or design; you’re asking the equipment to do something it’s simply not good at, and asking it to do that really well. It can’t, certainly not affordably or without significant compromises in other areas. In the case of a motor, you’d be better off letting it spin at its much higher optimal RPM and gearing it down. Even though there will be a little loss in the geartrain, it’s still a much better solution overall, and that’s why essentially every low-speed motor is designed this way.
In the case of an ammeter, it seems totally reasonable to bring it up to a more ideal operating range by adding a constant artificial load. In fact, high-precision/low-range multimeters and oscilloscopes usually do almost exactly the same thing internally with their probes, just in a somewhat more complex way behind the scenes.
The end result is exactly the same.
The difference is that you can install from the full ISO on a computer without an internet connection. The normal ISO contains copies of most or all relevant packages; maybe not all of the latest and most up-to-date ones, but the bulk is enough to get you started. The net install, like the name suggests, requires an internet connection to download packages for anything beyond the most minimal, bare-bones configuration. That connection will hopefully be nearly as fast as reading the ISO, if not faster, and is guaranteed to have the latest updates available, which the ISO may not. But while a fast connection is usually taken for granted nowadays, it is not always available in some situations and locations, it is not always convenient, and some hardware has network-stack problems that are difficult to resolve before a full system is installed, or that require specialized tools to configure or diagnose that are only available as packages.
In almost all cases, the netinst works great and is the more efficient and sensible way to install. If it doesn’t work well in your particular situation, though, the full ISO will be more reliable, at the cost of some redundancy that wastes disk space and time.
Things like Windows updates and some large, complex software packages often come with similar “web” and “offline” installers that make the same distinction for the same reasons. The tradeoff is the same, and both options have valid use cases.
To be fair, in the case of something like a Linux ISO, you are only a tiny fraction of the target, or you may not need to be the target at all to become collateral damage. You only need to be worth $1 to the attacker if there are 99,999 other people downloading it too, or if there’s one other guy who is worth $99,999, and you don’t need to be worth anything if the person or organization they’re actually targeting is worth $10 million. Obviously there are other challenges involved in attacking a torrent swarm, like the fact that you’re not likely to have a sole seeder with corrupted checksums, and a naive implementation will almost certainly end up with a corrupted file instead of a working attack. But to someone with the resources and motivation to plan something like this, it could get dangerous pretty quickly.
Supply chain attacks are becoming a serious risk, and we need to start looking at upgrading and hardening things like the checksums we rely on, because attackers are realizing that this can be a very effective and relatively cheap way to distribute malware widely.
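At a minimum, actually verifying the published checksum goes a long way. A rough sketch of what that looks like (the filename and hash are placeholders; take the real values from the distro’s official, HTTPS-served checksum file, ideally one that’s also signed):

```python
# Verify a downloaded ISO against a published SHA-256 checksum.
import hashlib

EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"  # placeholder
ISO_PATH = "some-distro.iso"  # placeholder

def sha256_of(path: str) -> str:
    """Hash the file in chunks so huge ISOs don't need to fit in RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

actual = sha256_of(ISO_PATH)
print("OK" if actual == EXPECTED_SHA256 else f"MISMATCH: {actual}")
```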
It is mostly a myth (and a scare tactic invented by copyright trolls and encouraged by overzealous virus scanners) that pirated games are always riddled with viruses. They certainly can be, if you download them from untrustworthy sources, but if you’re familiar with the actual piracy scene, you have to understand that trust is, and always will be, a huge part of it. Ways to build trust are built into the community; that’s why trust and reputation are valued even higher than the software itself. The names embedded in the torrents, the people and release groups they come from, the sources where they’re distributed, all have meaning to the community, and this is why. Nobody’s going to blow 20 years of reputation to try to sneak a virus into their keygen.

All the virus scans that say “Virus detected! ALARM! ALARM!” on every keygen you download? If you look at the actual detection information about what was detected, and you dig deep enough through the obfuscated scary-severity-risks-wall-of-text, you’ll find that in almost all cases it’s just a generic, non-specific detection of “tools associated with piracy or hacking” or something along those lines. They all have their own ways of spinning it, but in every case it’s literally detecting the fact that it’s a keygen and saying “that’s scary! You don’t want pirated, illegal software on your computer, right?! Don’t worry, I, your noble antivirus program, will helpfully delete it for you!”
It’s not as scary as you think; they just want you to think it is, because it helps drive people back to paying for their software. It’s classic FUD tactics, and they’re all part of it. Antivirus companies are part of the same racket; they want you paying for their software too.
It is. The web was eventually corporatized, and the corporations sucked all the air out of the room, suffocating anything too small to compete. The fediverse is, if not taking it back, at least opening a space for those who don’t want to consume from a fully corporatized web. That includes many of the people who used to make “websites” instead of “apps” or “platforms”. When people complain that it doesn’t have as much content as, say, Reddit, I see that as a benefit: it helps solve the (massive) discovery problem by self-selecting for thoughtful people who can curate content intelligently and provide real opinions and meaningful thoughts. The signal-to-noise ratio is much higher, and it’s refreshing.
Never had a single functional problem with Nextcloud, other than the fact that it’s oppressively slow with the number of files I’ve shoved into it. Mind you, I also don’t use MySQL/MariaDB, which I consider a garbage-tier DB. Despite Postgres not being the “recommended DB” for Nextcloud, it works perfectly for me. Maybe that’s the difference.
I would need to factory reset the whole server for that, which would be … highly inconvenient for me. It took me quite a long time to get everything working, and I don’t wanna lose my configuration.
This is the actual problem you need to solve. Reinstalling your server should be as convenient as installing a basic OS and running a configuration script. It needs to be reproducible and documented, not some mystery black box of subtle configurations you forgot about ages ago. A nice, idempotent configuration script is both convenient and a self-documenting record of every change you’ve ever made to your server.
Once you can do that, adding whatever encryption you want is just a matter of finding the right sequence of commands, testing it (in another docker perhaps) and then running your configuration script to migrate your server into the desired state.
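To make the “idempotent” part concrete, here’s a minimal sketch of the idea in Python; the path and setting are hypothetical, and the point is simply that running it a second time leaves the system exactly as the first run did:

```python
# Minimal sketch of an idempotent configuration step: ensure a line exists
# in a config file without duplicating it on repeated runs.
from pathlib import Path

def ensure_line(path: Path, line: str) -> None:
    """Append `line` to `path` only if it isn't already there."""
    path.parent.mkdir(parents=True, exist_ok=True)
    existing = path.read_text().splitlines() if path.exists() else []
    if line not in existing:
        with path.open("a") as f:
            f.write(line + "\n")

if __name__ == "__main__":
    # Hypothetical example setting; run this twice and the file is unchanged.
    ensure_line(Path("/tmp/demo-sshd_config"), "PasswordAuthentication no")
```

A real configuration script is just a long list of steps like this (install package X, write config Y, enable service Z), each written so that re-running it is harmless.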
Just like tobacco companies were (and still are) fighting to deny the harm caused by their products, it’s no surprise that we see the same from the plastic, chemical and oil industries. They will scream even louder every time we try to prevent them from killing us, and they will never feel a twinge of remorse about it. They will murder millions to get at our wallets; they truly don’t care about the consequences as long as they make money.
CANDU is one of the best reactor designs currently running, in my opinion. The problem is that it’s expensive to build and requires expensive maintenance, and in a world that loves to cut corners and find efficiencies, that is not popular. But it is solid, it is safe, and I support many more of them being refurbished, maintained and built in this country despite the cost.
Advanced CANDU, on the other hand, had little in common with CANDU despite the name, and bowed to all of the previously mentioned pressure to cut corners and find efficiencies, resulting in a dangerous and ultimately non-viable design that basically killed Canada’s nuclear industry. It was a classic boondoggle, and while I was and am infuriated with the way Harper killed it and put it out of its misery, the real mistake was pursuing it down that path in the first place, a decision that came well before his time and was based on global circumstances that made it simply impossible to justify.
I would love to see even more advanced reactors being researched, designed and built here, modular, pebble bed, sodium, thorium, all of it. But sadly I think that is mostly unrealistic given the current state of our nuclear industry. CANDU is however at least one proven technology that we can and should continue to take advantage of. Even if we will probably never be nuclear leaders again thanks to the mismanagement and sabotage of our nuclear industry, at least we can cling to its legacy.
A Dockerfile is basically just a script that starts from a base container image (anything from standard Linux OS installs like Ubuntu, Debian or Alpine to very specialized pre-made containers with every piece of software you want already installed and configured and everything unnecessary stripped away) and then does various stuff to it (copies files/dirs from your local machine, runs commands, configures networking). It’s all very straightforward: if you know how to write a bash script, or even just a basic batch file, that’s pretty much all it’s doing. The end result is a container image, which is basically a miniature Linux virtual machine (one that is supposed to be “single purpose”, but there’s no technical limitation forcing it to be).
The simplest way to create a container is to start from a standard OS image as I mentioned and install the software you want exactly as you normally would in that OS: using the OS package manager if you want, following tutorials for that OS, or installing manually using the instructions from the software itself. Either way should work fine. Again, it’s basically not much different from having a virtual machine running that OS. You can even start up a root bash prompt inside the container and install things that way if you prefer, or even connect over ssh by running an sshd server in it (although that’s totally unnecessary and requires extra work).
For basic Dockerfile syntax, look at other people’s Dockerfiles and realize you probably don’t need 90% of what the more complex ones do. There are millions of them out there; you should be able to find some simple, straightforward ones and just mimic those. Will you run into “gotchas”? Sure you will, Docker is full of them, and when you do, your Dockerfile will get a little more complex as you find a way to deal with the problem Docker has created for you. A good starting point is a Dockerfile that just installs a bunch of packages from Debian and doesn’t even run any specific services (see the sketch below); at the other extreme, you can write a Dockerfile that does nothing but run and configure an ssh server like I mentioned above (again, that’s totally unnecessary normally, but the point is you can certainly do it if you want to!)
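As a concrete sketch of that first kind, something like this is about as simple as it gets (the Debian tag and package names are just placeholders, not anything you specifically need):

```dockerfile
# Start from a stock Debian base image.
FROM debian:bookworm

# Install a few packages exactly as you would on a normal Debian system.
# --no-install-recommends and the cache cleanup just keep the image smaller.
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl vim htop && \
    rm -rf /var/lib/apt/lists/*

# No particular service to run, so just drop into a shell when started interactively.
CMD ["/bin/bash"]
```

Build it with `docker build -t mytest .` and poke around inside it with `docker run -it mytest`.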
Counter rant: This is why we built encryption and VPNs many years ago. This is a solved problem, but instead of using the solution you’d rather just complain ineffectually about it. The solution, the product of years of work by technical people and privacy people, is sitting right there staring you in the face, available as a free service, a paid service, or your own self-hosted service. Use a VPN, that’s what it’s for.
GL.iNet has a dizzying array of products and some are designed for exactly this (you can find them for sale on the usual scumbags). For a niche brand you’ve likely never heard of, the firmware is surprisingly robust and has a small but loyal community around it. Expensive, and you might not want to carry an extra piece of hardware just to get a bulletproof VPN connection, but worth it for the security IMO.
“Welcome to moon city! Please ensure your oxygen regulator is fully loaded with credits at all times, we are not responsible for any respiratory failure caused by lack of payment. Consult your residency permit terms and conditions if you have any questions about this policy, Breathe Easy™ and enjoy your stay!”
Yeah, even the article admits they’re not even trying to produce as much durum wheat as they have in the past. Yes, there’s some drought and stuff, but it’s not really to blame for this, because our agriculture system knows the risks of drought and should be able to compensate. What’s happening is artificial scarcity. Our whole agricultural system is broken. Its priorities are fucked by quotas and subsidies, and it’s in large companies’ interests to keep things fucked so they can profit. The days of the family farmer are gone; the whole food supply chain has turned into yet another oligopoly that wants to bleed everyone of as much money as it possibly can.
Owncloud is not fully open source. Nextcloud is. They have developed in different directions since the split, but that remains the fundamental difference that drove them apart in the first place. If that matters to you, Nextcloud is the right choice. If it doesn’t, use whichever you prefer and whichever has the features you need.
There’s going to be a bunch of caveats here, but basically…
Assuming you’re using a NAT router to connect to the internet (basically everyone is nowadays): if you’re using a local LAN IP address (10.x.x.x, 192.168.x.x, or 172.16.x.x through 172.31.x.x), then nobody on the internet can access any services on that IP unless you specifically port forward them through your router. Assuming there’s nobody dangerous on your local network (and nobody gets a remote-access virus), and your router itself is not hackable, then yes, it’s entirely safe.
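If you’re ever unsure whether an address falls inside one of those reserved LAN-only ranges, Python’s standard ipaddress module can check for you (the addresses below are just examples):

```python
# Which of these addresses are in the private (LAN-only) ranges?
import ipaddress

for addr in ["10.0.0.5", "192.168.1.50", "172.20.0.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "-> private (LAN)" if ip.is_private else "-> public")
```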
You don’t technically need a public domain name to set up an SSL certificate, but you do if you want the process streamlined in a way that modern software trusts automatically. A self-signed certificate can be created for any IP address, and it will provide full encryption and prevent interception of traffic between established clients, but you will get a scary warning that the certificate is self-signed every time you connect a new client or browser, because it cannot be verified. It still works, it’s just (intentionally) scary, because the browser doesn’t know what you’re doing and doesn’t know how to establish trust. You probably don’t need this, but it is an option. Setting up a self-signed certificate involves varying degrees of complexity depending on what web server you’re using; I would recommend following the simplest guide you can find for the relevant web server if you choose to go that route, because you don’t need anything complex for this. The keywords you’re looking for are “self-signed certificate”.
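If you do go that route and would rather script it than follow a web-server-specific guide, here’s a rough sketch using the Python cryptography package (pip install cryptography); the IP address, key size and lifetime are placeholders for whatever your setup actually uses:

```python
# Generate a self-signed certificate for a LAN IP address.
import datetime
import ipaddress

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

SERVER_IP = "192.168.1.50"  # placeholder LAN address

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, SERVER_IP)])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.IPAddress(ipaddress.ip_address(SERVER_IP))]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

# Write out the PEM files your web server will point at.
with open("selfsigned.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
with open("selfsigned.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))

print("Wrote selfsigned.crt and selfsigned.key")
```

Point your web server at those two files and you’ll get the encrypted connection, along with the expected “untrusted certificate” warning the first time each client connects.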
Welcome to self-hosting. Nextcloud is a great thing to self-host, too. Hope you enjoy.
On the first offense, depending on circumstances. On the second offense, without exception.