• 8 Posts
  • 719 Comments
Joined 1Y ago
Cake day: Jun 15, 2023


How do you avoid interaction if it’s being done automatically by your machine when you open up a print dialog, and if malicious servers can use the same names as legit printers?


You don’t have to install drivers or CUPS on client devices. Linux and Android support IPP out of the box. Just make sure your CUPS on the server is multicasting to the LAN.

You may need to install Avahi on the server if it's not already installed (that's what does the actual multicasting). The printer(s) should then automagically appear in the print dialogs of apps on Linux clients and in the printer service on Android.

On Linux it may take a few seconds to appear after you turn the printer on, and it may not appear while it's off. On Android it shows up anyway as long as the CUPS server is on.
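
In practice the server side boils down to something like this (a rough sketch; Debian-ish package names, and "MyPrinter" is whatever your queue is called):

```bash
# On the CUPS server: make sure the queue is shared and Avahi is announcing it.
sudo apt install cups avahi-daemon            # avahi does the mDNS/DNS-SD announcements
sudo cupsctl --share-printers                 # allow sharing of local printers on the LAN
sudo lpadmin -p MyPrinter -o printer-is-shared=true   # share an existing queue
lpstat -p                                     # sanity check: CUPS knows about the printer
```

Clients that speak IPP Everywhere (Linux print dialogs, Android's print service) should then discover it without any drivers.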


From what I understand OP’s images aren’t the same image, just very similar.


Any PC can do that, it’s called “status after power off” or something like that.



Mozilla has already shipped strict privacy mode by default in recent versions of Firefox so they’re already a leg up on this.

Google is currently trying to transition people to its own proprietary method of tracking (where the browser itself tracks you) so they would love it if third party cookies were no longer usable for that.

Mozilla has also added a direct tracking feature (anonymized) to Firefox btw. Not sure what their agenda is.

Websites are irrelevant: if third party cookies stop working in major browsers there's no point in setting them anymore; they'll just be ignored.


TBF in most cases forced app obsolescence is on the developers. Some of them are super aggressive and will force you to update without really needing it. Like, come on, package tracking app, I really don’t believe you’re unable to show me the package pick-up barcode without updating. 🙄

But yeah, on iOS it's completely impossible to get older versions; once you've updated something, that's it. And even on Android I've noticed that it's become impossible to downgrade some apps even if I have the old APK; the Google installer simply fails to install it if I've ever had a newer version installed.


In the olden days software used to be sold by individual major versions. You paid for version 9, you paid for version 10. Or you skipped versions you didn’t need. You could use versions side by side. The newest installed would import its data from the older ones. etc.

App stores have made this very awkward or almost impossible. There’s no concept of separating major versions. You’d have to buy and install completely different apps to be able to pay for them separately and to use them side by side, but if they’re separate apps they can’t import your data from each other. Not to mention that people seem to hate having “too many apps” for some reason.

Software subscriptions switch the model from "support per major version" to "support per time of use". It's obviously shittier, but it's more realistic than a one-time price with the expectation of using the app in all future versions in perpetuity. The one-time price would have to be very large to be realistic.


It’s impossible to tell how meaningful Backblaze’s numbers are because we don’t know the global failure rate for each model they test, so we can’t calculate the statistical significance. Also there are other factors involved like the age of the drives and the type of workload they were used for.

> buying more reliable devices can definitely save you time and headache in the future by having to deal with failures less frequently.

That’s a recipe for sorrow. Don’t waste time on “reliability” research, just plan for failure. All HDDs fail. Assume they will and backup or replicate your data.


Any difference you personally experience between the three big brands is meaningless. For any failed HDD you have there’s going to be another person who swears by them and has had five of them running for 10 years without a hitch.

Buy whatever's cheaper in your area and stop worrying. Your reliability should be assured by backups anyway, not by betting on a single drive. Any drive can fail.


For a home setup you don't care, because you should have either redundancy or backup (preferably both).

So that typically means buying the cheapest new HDD from one of the established brands (Seagate, Western Digital, Toshiba), in the right size for your needs, that you can afford to buy at least twice (for the aforementioned backups or redundancy) or even three times, and replacing it as soon as needed.

In other words there’s no need to speculate on how long an HDD will last, you simply replace it when needed.

Please also note that HDDs over 10 TB are increasingly being replaced by enterprise models, which run hotter and make more noise.


I doubt they intend to mine it. For the Russian state it's easier to acquire Bitcoin by hacking wallets than by mining, and the plebs can't afford the electricity.


Trading is trading and they’d be risking sanctions whether they take payment with Swift or Bitcoin.


They had some sort of point-based scheme, nuf said. It was so monumentally stupid I can’t even describe it without getting upset.



This is not a new problem; .internal is just a new gimmick, but people have been using .lan and whatnot for ages.

Certificates are a web-specific problem but there’s more to intranets than HTTPS. All devices on my network get a .lan name but not all of them run a web app.


7 was actually surprisingly well optimized. It ran OK on an office PC with 512 MB of RAM and a 512 MHz CPU.

You wouldn't use it like that, because by that time apps like browsers and office were starting to feel restricted by that little RAM, to the point where you could only run one or the other. But the OS itself stayed out of the way as much as possible, and if you gave it just a little more RAM (like 1 GB) suddenly you had a usable office machine.


But you only have two kidneys, how will you buy a third Mac?

Macs are outrageously priced for the hardware you get.

Non-Apple laptops can be just as reliable and last just as long nowadays, and you get to upgrade them at a fraction of the cost. Actually I should say you get to upgrade them, period.


Everybody should be using DNS over HTTPS (DoH) or DNS over TLS (DoT) nowadays. Clear DNS is way too easy to subvert, and even when it's not being tampered with most ISPs snoop on it to compile statistics about what their customers visit.
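
On a Linux client that uses systemd-resolved, for example, switching to DoT is just a drop-in config (a minimal sketch; the resolvers are examples, use whichever you trust):

```bash
# Point systemd-resolved at DoT-capable resolvers and require TLS.
sudo mkdir -p /etc/systemd/resolved.conf.d
sudo tee /etc/systemd/resolved.conf.d/dot.conf >/dev/null <<'EOF'
[Resolve]
DNS=9.9.9.9#dns.quad9.net 1.1.1.1#cloudflare-dns.com
DNSOverTLS=yes
EOF
sudo systemctl restart systemd-resolved
resolvectl status    # should list the servers and show the DNSOverTLS setting
```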

DoH and DoT aren't a foolproof solution though. HTTPS connections still leak domain names when the target server doesn't use Encrypted Client Hello (ECH), and you need to be using DoH for ECH to work.

Even if all that is in place, a determined ISP, workplace or state actor can identify DoH/DoT servers and compile block lists, perform deep packet inspection to detect such connections regardless of server, or set up their own honey trap servers.

There’s also the negative side of DoH/DoT, when appliances and IoT devices on your network use it to bypass your control over your LAN.


As opposed to what, the domain certificate? Which can’t be air-gapped because it needs to be used by services and reverse proxies.


If you mean properly signed certificates (as opposed to self-signed) you’ll need a domain name, and you’ll need your LAN DNS server to resolve a made-up subdomain like lan.domain.com. With that you can get a wildcard Let’s Encrypt certificate for *.lan.domain.com and all your https://whatever.lan.domain.com URLs will work normally in any browser (for as long as you’re on the LAN).
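
For example, with certbot's manual DNS-01 challenge you can get that wildcard by pasting a TXT record at your DNS provider when prompted (a sketch; there are DNS API plugins that automate this, and the domain/email are placeholders):

```bash
sudo certbot certonly --manual --preferred-challenges dns \
  -d '*.lan.domain.com' --agree-tos -m admin@domain.com
# Certs end up under /etc/letsencrypt/live/lan.domain.com/
# With --manual you'll have to repeat the DNS step at renewal time,
# which is why a DNS API plugin (or acme.sh / lego) is nicer long term.
```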


One day Proton will retire their bridge and there will be a lot of Pikachu faces.


Unfortunately all the volume-based email providers I know (PurelyMail, MXroute, Migadu) are one- or two-person operations. Doesn't stop them from being excellent, of course.

I wish the volume-based pricing model were more popular, but unfortunately very few people know about it, and of course the large providers prefer to charge by account or add all kinds of artificial limitations because they make much more money that way. Having multiple mailboxes for the same domain costs the provider nothing, and yet you get charged per mailbox.


Use a volume-based email provider like MXroute, where you pay strictly for the resources you consume (storage space and mails sent) not made-up limitations like number of accounts, aliases, domains etc. that cost the provider nothing.


Then why do they offer a separate, distinct DDoS mitigation feature on the enterprise plans? And did you notice they call them “mitigation” and not “protection”? 🙂

Look at the description of each one, the free one “stops illegitimate traffic at the edge”. Meaning they’ll serve from cache, it’s not getting through to your actual site. You can get caching from any CDN service, it doesn’t have to be CF. All CDN services are distributed and will try to serve for as long as possible because their whole purpose is to deal with traffic spikes.

And if you want to know for how long CF (or any service) will serve from cache and how far they’ll go for an account (especially a free account), you want to check the terms of service not the plans. The plans are made to sell to you, the fine print is in the terms.

Anyway, I really don't understand people's obsession with DDoS, particularly self-hosting people. The chances of their little website ever being the target of a DDoS are astronomically small. Many of them don't take proper backups, and don't worry about theft or fire or electric spikes, which are far more likely, but go frantic when they hear about features they'll never use.


Use your common sense. They’re not going to expend any significant resources to keep up a free website.

They have a small capacity available for mitigating DoS across all free accounts, while resources last. If you happen to fit in that capacity at any given time that's nice; if you don't, you go down.


If anything ever happens that involves [the lack of] DNSSEC or CAA you’ll have to buy another domain because the old one will be on every block list.


As a workaround for Windows you can sync files to a Linux machine with SyncThing for example, and use Borg there.
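
Roughly like this, assuming Syncthing already drops the Windows files somewhere like `/srv/sync/windows-pc` (paths are examples):

```bash
borg init --encryption=repokey /srv/backups/windows-pc     # one-time repo setup
borg create --stats --compression zstd \
    /srv/backups/windows-pc::'{hostname}-{now:%Y-%m-%d}' \
    /srv/sync/windows-pc                                   # run daily from cron/systemd
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /srv/backups/windows-pc
```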


Make your website all static files (if you can) and host on a CDN like Bunny.net. It’s $1/month and your website might actually be able to get through some large traffic spikes. It won’t work against a targeted sustained DDoS but like the other comments said that’s not likely to happen.


You don’t have to worry about DDoS:

  • DDoS is an advanced technique, and the people who can do it spend a lot of time and effort putting malware on machines that can be ordered to perform DDoS on command. They usually sell that attack capability, and it ends up getting used against worthy targets; we're talking attacks that disrupt entire industries, elections, warfare etc. Do you really think what you'll be hosting will attract that kind of attention and be impossible to take down with simpler methods?
  • To survive a DDoS attack you need a lot of resources, from a professional platform (like CloudFlare). The stuff they offer for free is not going to get you through a DDoS. If you read their terms you'll see they're worded just ambiguously enough to mean nothing. If you ever actually get targeted by a real DDoS and you haven't paid a lot of money to a platform like that, everybody will simply drop you instantly (your ISP, your VPS provider, your tunnel provider, your VPN provider etc.) and possibly kick you off their service too.

If the stuff you'll be hosting is static files you can use a CDN service. CDNs are designed to be distributed and redundant so they're somewhat resilient to DoS attacks by default. They'll still kick you off if it gets to be too much, but maybe you can weather shorter/moderate attacks.

If you’re hosting a dynamic/interactive service forget about it.


Does Kore make the Netflix app stream to Kodi?

Even if it does, I can’t exactly make everybody who comes by install an app to be able to stream to my TV. Everybody (who’s on Wifi) can stream to the Chromecast.


CAA and DNSSEC aren’t obscure. I would not even consider managing any domain nowadays without them.

Neither are ALIAS/DNAME/HTTPS, which you’ll be running into more and more in the future if you haven’t already. You could argue there are multiple competing standards at work there but Afraid doesn’t implement any of them.
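
You can check what a DNS host actually serves with dig, assuming a reasonably recent version (older digs need `TYPE65` instead of the `HTTPS` mnemonic):

```bash
dig +short example.com CAA        # CAA policy, if any
dig +short example.com HTTPS      # HTTPS/SVCB record at the apex
dig +dnssec example.com SOA       # RRSIG records in the answer mean the zone is signed
```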


If you don't need CI/CD I'm not sure why you need a centralized frontend at all. Git itself is distributed and you can set up any code flow you can think of. It has hooks that can be used to set up code quality checks on select branches. There are local history browser apps for every platform, and IDE plugins.
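
For example, a hook on the shared bare repo can gate pushes to a branch (a sketch that assumes a `make test` target; adapt the branch name and checks):

```bash
#!/bin/sh
# hooks/pre-receive -- reject pushes to main that don't pass the test suite
while read oldrev newrev refname; do
    [ "$refname" = "refs/heads/main" ] || continue
    tmp=$(mktemp -d)
    git archive "$newrev" | tar -x -C "$tmp"    # export the pushed tree
    if ! (cd "$tmp" && make test); then
        echo "push to main rejected: tests failed" >&2
        rm -rf "$tmp"
        exit 1
    fi
    rm -rf "$tmp"
done
exit 0
```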

A frontend is no substitute for developer communication — usually what the "PR" thing does is sugarcoat the fact that the devs don't know how to use Git and/or don't talk to each other.


I liked the puzzle battle style of games like Disgaea 2.

Not a big fan of the classic “let’s all stand face to face and take turns bashing each other” approach.


> what record types are you referring to not being supported?

AFAIK it only supports a small subset of all the types currently in use.


…I thought that was the whole point of Spez blocking other spiders.


It lets you change reverse proxy or run a website with TLS completely independently of the certbot. The certbot deals with obtaining certs and leaves them in a dir, and the proxies or webservers just take them from that dir. If the proxy container breaks the certbot still does its thing etc.

It also makes it easier to do stuff like run different proxies in parallel for different things, chain proxies (for instance if you need to use a VPS because you can't forward ports) and so on.

But it’s all for advanced setups, for basic stuff I’d still go with NPM.
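
A rough sketch of that split with plain `docker run` (the certbot/nginx image names are real, everything else is an example):

```bash
docker volume create letsencrypt

# Obtain certs; run this periodically (cron) -- subsequent runs can use "renew".
docker run --rm -p 80:80 \
    -v letsencrypt:/etc/letsencrypt \
    certbot/certbot certonly --standalone -d example.com

# The proxy only reads the certs; replacing or restarting it never touches certbot.
docker run -d --name proxy -p 443:443 \
    -v letsencrypt:/etc/letsencrypt:ro \
    -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
    nginx:alpine
```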


You don't run your own DNS; these are services hosted by someone else, just like Afraid. The difference, on top of the interface, is that they support modern record types, they have redundant servers all over the world, there's a team working on them instead of just one guy, they have APIs that let you manage your many domains more easily, they have zone backup and restore etc.

I've used Afraid too, back when I was starting out and didn't know any better, but once I saw some of the other services out there I never looked back. You'll never know what extra features you could want if your current service doesn't offer you any.


You can protect important data with backups, which you should do anyway, and in practice I feel like the added complexity of BTRFS and ZFS is not worth the COW.

BTRFS is cool, but they tried to cram way too much into it too fast, which added a ton of complexity, and it's still not 100% done after all these years. A COW mode for ext4 would have been adopted much faster.


They’re all “standards”, yet no two TVs will work the same (if at all) — even when they’re the same make as the phone.


How do you guys use Tailscale (or other VPN) with containers
I wanted to run my VPN/Tailscale setup past you, to see if anybody has suggestions on how I could do things better.

Setup:

* Home LAN (`10.0.0.0/24`), router+DNS on `10.0.0.1`, server running docker containers on `10.0.0.2`.
* LAN DNS points `*.local.dom.tld` to the server, public DNS points `*.dom.tld` to my dynamic public IP.
* Containers run in bridge mode with host, expose ports on host IPs via "ports:" mapping.
* NPM with LE certs also in container, exposes `10.0.0.2:443`, forwards to various other services.

Goals for Tailscale:

* Accessing HTTP services via NPM from my phone when away from home.
* Exposing select UDP and TCP non-HTTP services such as syncthing (:22000) or the deluge RPC admin (:58846) to other tailnet devices or to my phone on the go.

Goals in general:

* Some containers need to expose ports on the LAN.
* Some containers need to expose ports via Tailscale.
* Some containers need to broadcast on the LAN (DLNA stuff) – but I don't want them broadcasting to Tailscale.
* Generally speaking I'd like to explicitly control what's exposed from each container on either LAN or Tailscale.
* I'd like to avoid hacking images with Dockerfile. I can make my own images to do stuff, I just don't want to keep up with hacking other images.

How I progressed with Tailscale:

1. First tried running it directly on the host. Good: the tailnet IP (let's call it `100.64.0.2`) is available on the host's default network stack, so containers can use "ports:" to map to `100.64.0.2` (tailscale) and/or `10.0.0.2` (LAN). Bad: tailscale would mess with `/etc/resolv.conf` on the host. Also bad: tailscale0 on the host picked up everything that binds to `0.0.0.0`.
2. Moved tailscale to a container running on the host network stack (`network_mode: host`). Made it leave `/etc/resolv.conf` alone. tailscale0 on the host stack still picks up everything on `0.0.0.0`.

This is kinda where I'm stuck. I can make the tailscale container bridged, which would put the tailscale0 interface inside the container. It wouldn't pick up `0.0.0.0` from the host, but then how would I publish ports to it?

* The tailscale-recommended way of doing it is putting other containers in the tailscale container's network stack (`network_mode: container:tailscale`). This would prevent said containers from using "ports:" to map to the host anymore. Also, everything they publish locally would end up on tailscale0 whether I like it or not.
* Tailscale has an env var TS_DEST_IP that can mirror another IP. I could allocate an IP on host eth0 like `10.1.1.1`, mirror that from the tailscale container, and target it from other containers explicitly with "ports:" when I want to publish a port to tailscale (rough sketch below). Downside: `10.1.1.1` would be in the host's network stack so it still picks up `0.0.0.0`.
* I could bridge the tailscale container with other containers on a private subnet, say `192.168.1.0/24`, and use `tailscale serve` to forward specific ports to other containers over that subnet. Unfortunately `serve` is fairly limited; it can't do UDP and technically it refuses to forward TCP to anything but localhost (though you can dump the serve config to JSON, hack that config, and use it with `TS_SERVE_CONFIG=` 🤮).
* I could bridge tailscale with other containers and create a special container with a fixed IP on that subnet, mirror that IP from tailscale, and use iptables on that container to forward specific ports to other containers. This would actually solve everything I want except...
* ...if I ever want to use another VPN which doesn't have the mirror feature. I don't know how I'd deal with that.
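
For reference, here's roughly what the TS_DEST_IP variant would look like with plain `docker run` (the compose version is equivalent; the auth key is a placeholder and `10.1.1.1` is the extra host IP from above):

```bash
sudo ip addr add 10.1.1.1/32 dev eth0          # the IP the tailscale container will mirror

docker run -d --name tailscale \
    --cap-add NET_ADMIN --cap-add NET_RAW \
    --device /dev/net/tun:/dev/net/tun \
    -v tailscale-state:/var/lib/tailscale \
    -e TS_AUTHKEY=tskey-auth-XXXXXXXXXXXX \
    -e TS_STATE_DIR=/var/lib/tailscale \
    -e TS_DEST_IP=10.1.1.1 \
    tailscale/tailscale:latest

# A service is then published to tailnet clients only by binding it to that IP, e.g.
#   ports:
#     - "10.1.1.1:8112:8112"
```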

Migrating away from Gandi, 9 months later
I'm posting this in selfhosted because Gandi increasing prices actually helped me a lot with being more serious about selfhosting, made me look into things like DNS and reverse proxies and VPN and docker and also ended up saving me money by re-evaluating my service needs.

For background, Gandi.net is a large and old (25 years) domain registrar and hosting provider in the EU, who after two successive rounds of being acquired by investment funds have hiked up prices across the board for all their services. In July 2023 when they announced the changes for November I was using their services for pretty much everything because I manage domains for friends and family. That means a wide selection of domains registered with them (both TLDs and European ccTLDs), LAMP hosting, and was taking advantage of their free email hosting for multiple domains.

For the record I don't hold the price hike against them, it was just unsustainable for us. Their email prices (~5€/mailbox/mo) are in line with market prices and so are hosting prices. Their domain prices are however exaggerated (€25-30/yr is their lower price now). I also think they could've been smarter about email, they could've offered lower prices if you keep domains registered with them. [These prices include the VAT for my country btw. They will appear lower in USD.]

What I did:

**Domains:** looked into alternative registrars with decent prices, support for all the ccTLDs I needed, DNSSEC, enforced whois privacy, and representative services (some ccTLDs require a local contact). Went with INWX.com (Germany) and Netim.com (France). Saved about €70/yr. Could have saved more for .org/.net/.com domains with an American registrar but didn't want to spread too thin.

**DNS:** learned to use a dedicated DNS service, especially now that I was using multiple registrars since I didn't want to manage DNS in multiple places. Wanted something with support for DNSSEC and API. Went with deSEC.io (Germany) as main service and Bunny.net (Slovenia) as backup. deSEC is free, more on Bunny pricing below. Learned a lot about DNS in the process.

**Email:** having multiple low-volume mailboxes forced me to look into volume-based providers who charge for storage and emails sent/received not mailboxes. I've found Migadu (Swiss with servers in France at OVH), MXRoute (self-hosted in Texas) and PurelyMail (don't know). Fair warning, they're all 1-2 man operations. But their prices are amazing because you pay a flat fee per year and can have any number of domains and mailboxes instead of monthly fees for one mailbox at one domain. Saved €130/yr. Learned a lot about MX records and SPF/DKIM/DMARC.

**Hosting:** had a revelation that none of the webpages I was hosting actually needed live dynamic services (like PHP and MySQL). Those that were using a CMS like WordPress or PHP photo galleries could be self-hosted in docker containers because only one person was using each, and the static output hosted on a CDN. Enter Bunny.net, who also offer CDN and static storage services. For Europe and North America it costs 1 cent per GB with a $1 minimum/mo, so basically $12/yr since all websites are low traffic personal websites. Saved another €130/yr. Learned a lot about Docker, reverse proxies and self-hosting in general.

Keep in mind that I already had a decent PC for self-hosting, but at €330 saved per year I could've afforded buying a decent machine and some storage either way.

I think separating registrars, DNS, email and hosting was a good decision because it allows a lot of flexibility should any of them have any issues, price hikes etc. It does complicate things if I should kick the bucket – compared to having everything in one place – which is something I'll have to consider. I've put together written details for now. Any comments or questions are welcome. If there are others that have gone through similar migrations I'd be curious what you chose.

Webmail client with decent search and large mailbox support?
I'm thinking of putting all my email archive (55k messages, about 6 GB) on a private IMAP server but I'm wondering how to access it remotely when needed. Obviously I'd need a webmail client but is there any that can deal with that amount of data and also be able to search through To, From, Subject and body efficiently? I can also set up a standalone search engine of some sort (the messages are stored one per file in regular folders) but then how do I view the message once I locate it? I can also expose the IMAP server itself and see if I can find a mobile app that fits the bill but I'd rather not do that. A webmail client would be much easier to reverse proxy and protect.
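
If the archive is one-file-per-message, a local indexer like notmuch might cover the search part on its own, and then the webmail/IMAP client only has to display whatever you've located (a sketch; paths and queries are examples):

```bash
sudo apt install notmuch
notmuch setup                          # point it at the directory holding the messages
notmuch new                            # initial index of the ~55k messages
notmuch search from:alice subject:invoice
notmuch search date:2019..2021 'project report'   # free text terms match the body
```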

Subtitles for the despecialized Star Wars fan remakes?
Hi, I'm trying to find the subtitles for Harmy's "Despecialized" Star Wars remakes and I was wondering if anybody has any ideas. The original website for Project Threepio points at a blog that seems abandoned and an old private tracker (MySpleen) that never opens to public anymore. Even just the English subs would be great (the original pack contained extensive language coverage in DVD format so I was given to understand it was quite large). TIA for any hints.

Upgrading a self-hosted server (episode 3)
# Upgrading a self-hosted server (3)

* [Episode 1: Introduction and plans](https://feddit.nl/post/370973)
* [Episode 2: Hardware upgrades and installing Debian stable](https://feddit.nl/post/2610711)
* **Episode 3: Installing Docker and basic containers (multimedia, files, printing)**

## A short intro to Docker

Docker is a lot less complicated than it was made out to be. Docker is a way of taking a service (something like Plex) and making it work in a sort of "slice" cut out of the real machine's resources (CPU, RAM and disk space). These slices are called **containers**.

There are several benefits:

* If someone breaks into one of your services, they only reach one container, not the real machine and not any of the other containers.
* It's very easy to restore a container in case of machine reinstall, using "magical" recipe files called "docker compose yaml". If the main OS blows up you just need to reinstall stock Debian stable and Docker, then use the magical recipes.
* The containers share similar files among themselves, so if you have 10 containers that use the same files they will only be stored once, not 10 times.
* You can try out any server software without worrying you'll mess up your host machine. Or you can use a second configuration of the same service in a second container without worrying you'll mess up the first one.

## Basic Docker commands

* `docker-compose up -d`: run this in the same dir as a magical yaml file to create a container for the first time.
* `docker stop cups`, `docker start cups`, `docker restart cups` will stop/start/restart the **cups** container.
* `docker container list` shows all containers you've created.
* `docker rm cups` removes the cups container (if it's not running).
* `docker image list` shows the software images that the containers are using.
* `docker rmi olbat/cupsd` will remove the **olbat/cupsd** image, but only if the **cups** container that is based on it has been removed (and stopped).
* `docker exec cups ls /etc/cups` will execute a command inside the container. You can execute `/bin/sh` or `/bin/bash` to explore inside the container machine.
* `ctop` is a nice tool that will show you all containers and let you start/stop/restart them.

## Preparing for using Docker

There are a couple of things you need to add to a fresh Debian install in order to use Docker:

* docker, obviously; the package on Debian is called `docker.io`.
* `ctop` is a nice CLI tool that shows your containers and lets you do stuff with them (stop, start, restart, enter shell, view logs etc.) It's basically a simple CLI version of Portainer (which I never bothered installing after all).

The following tools are indirectly related to services inside docker containers:

* `vainfo` will verify that GPU-accelerated video encoding/decoding is working for AMD and Intel GPUs. This will be useful for many media streaming containers. [See the Arch wiki for more.](https://wiki.archlinux.org/title/Hardware_video_acceleration)
* `avahi` (which is `avahi-daemon` on Debian) and `avahi-dnsconfd` will help autodiscover some services on the LAN between Linux machines. Only applicable if you have more than one Linux machine on your LAN, of course, and it's only relevant to some services (eg. CUPS).

## Some tips about Docker

Should you use Docker or Podman? If you're a beginner just use Docker. It's a lot simpler. Yes, there are good reasons to use Podman but you don't need the extra headache when you're starting out. You will be able to transition into Podman more easily later.

Use `restart: "always"` in your compose yamls and save yourself unnecessary trouble. Some people try to micromanage their containers and end up writing sysctl scripts for each of them and so on and so forth. With this restart policy your containers will stay stopped if stopped manually, but will start each time the docker daemon [re]starts, which most likely means at boot, which is probably all you want right now.

The one docker issue that will give you the most trouble is mapping users from the real machine to the container machine and back. You want the service in the container to run as a certain user, and maybe you want to give it access to some files or devices on the real machine too. But some docker images were made by people who apparently don't understand how Linux permissions work.

A good docker image will let you specify what users and groups it needs to work with (`emby/embyserver` is a very good example). A bad image will make up some UID and GID that's completely unrelated to anything on your machine and give you no way to change them; for such images you can try to force them to use root (UID and GID 0) but that negates some of the benefits a container was supposed to give you. So when looking for images see if they have a description, and if it mentions any UID and GID mapping. Otherwise you will probably have a bad time.

How do you find (good) docker images? On [hub.docker.com](https://hub.docker.com), just search for what you need (eg. "cups"). I suggest sorting images by most recently updated, and have a look at how many downloads and stars they have, too. It's also very helpful if they have a description and instructions.

One last thing before we get to the good stuff. I made a dir on my RAID array, `/mnt/array/docker`, where I make one subdir for each service (eg. `/mnt/array/docker/cups`), where I keep the magical yaml (compose.yaml) for each service, and sometimes I map config files from the container so they persist even if the container is deleted. I also use `git` in those dirs to keep track of the changes to the yaml files.

## Using Emby in a Docker container

Emby is a media server that you use to index movies and series and watch them remotely on TV or your phone and tablet. Some other popular alternatives are Plex, Jellyfin and Serviio. [See this comparison chart.](https://github.com/Protektor-Desura/Archon/wiki/Compare-Media-Servers)

Here's the `docker-compose.yaml`, explanations below:

```yaml
version: "2.3"
services:
  emby:
    image: emby/embyserver
    container_name: emby
    #runtime: nvidia      # for NVIDIA GPUs
    #network_mode: host   # if you need DLNA or Wake-on-Lan
    environment:
      - UID=1000            # The UID to run emby as
      - GID=100             # The GID to run emby as
      - GIDLIST=100,44,105  # extra groups for /dev/dri/* devices
    volumes:
      - "./data:/config"    # emby data dir (note the dot at the start)
      - "/mnt/nas/array/multimedia:/mnt/nas/array/multimedia"
    ports:
      - "8096:8096/tcp"     # HTTP port
      - "8920:8920/tcp"     # HTTPS port
    devices:
      - "/dev/dri:/dev/dri" # VAAPI/NVDEC/NVENC render nodes
    restart: always
```

* **version** is the [compose yaml specification](https://docs.docker.com/compose/compose-file/compose-file-v3/) version. Don't worry about this.
* **services** and **emby** define the service for this container.
* **image** indicates what image to download from the docker hub.
* **container_name** will name your container; normally you'd want this to match your service (and for me the dir I put this in).
* **runtime** is only relevant if you have an Nvidia GPU, for accelerated transcoding.
Mine is Intel so... More details on the emby image description.
* **network_mode: host** will expose the container networking directly to the host machine. In this case you don't need to manually map the ports anymore. As it says, this is only needed for some special stuff like DLNA or WoL (and not even then; I achieve DLNA for example with BubbleUPnP Server without resorting to host mode).
* **environment** does what I mentioned before. This is a very nicely behaved and well written docker image that not only lets you map the primary UID and GID but also adds a list of extra GIDs, because it knows we need to access `/dev` devices that are owned by 3rd party users like `video` and `render`. Very nice.
* **volumes** maps dirs or files from the local real machine to the container. The config dir holds *everything* about Emby (settings, cache, data) so I map it outside of the container to keep it. When I installed this container I pointed it to the location of my old Emby stuff from the previous install and It Just Worked.
* **devices** similarly maps device files.
* **ports** maps the network ports that the app is listening on.
* **restart**: remember what I said about this above.

## Using Deluge in a Docker container

Let's look at another nicely made Docker image. Deluge is a BitTorrent client; what we put in the Docker container is actually just the server part. The server can deal with the uploads/downloads but needs a UI app to manage it. The UI apps can be installed on your phone for example (I like Transdroid) but the Deluge server also includes a web interface on port 8112.

```yaml
version: "2.1"
services:
  deluge:
    image: lscr.io/linuxserver/deluge:latest
    container_name: deluge
    environment:
      - PUID=1000
      - PGID=1000
      - DELUGE_LOGLEVEL=error
    volumes:
      - "./config:/config"   # mind the dot at the start
      - "/mnt/nas/array/deluge:/downloads"
    ports:
      - "8112:8112/tcp"   # web UI
      - "60000:60000/tcp" # BT transfers
      - "60000:60000/udp" # BT transfers
      - "58846:58846/tcp" # daemon remote control (for Transdroid)
    restart: always
```

Most of this is covered above with Emby so I won't repeat everything, just the important distinctions:

* Notice how **environment** lets you choose what UID and GID to work as.
* I use **volumes** to map out the dir with the actual downloads, as well as map all the Deluge config dir locally so I can save it across container resets/reinstalls.
* The **ports** need to be defined in the deluge config (which you can do via the web UI or edit the config directly) before you map them here. IIRC these are the defaults but please check.

## Using Navidrome in a Docker container

Navidrome is a music indexer and streaming server (sort of like your own Spotify). It follows the Subsonic spec so any client app that works with Subsonic will work with Navidrome (I like Substreamer). It also includes a web UI.

```yaml
version: "3"
services:
  navidrome:
    image: deluan/navidrome:latest
    container_name: navidrome
    environment:
      ND_SCANSCHEDULE: 1h
      ND_LOGLEVEL: info
      ND_BASEURL: ""
      ND_PORT: 4533
      ND_DATAFOLDER: /data
      ND_MUSICFOLDER: /music
    volumes:
      - "./data:/data"
      - "/mnt/nas/array/music:/music:ro"
    ports:
      - "4533:4533/tcp"
    restart: "always"
```

Again, mostly self-explanatory:

* Environment settings are nice but this image stopped short of allowing UID customization and just said fuck it and ran as root by default. Nothing to do here, other than go look for a nicer image.
* I mapped the data dir locally so I preserve it between resets, and the music is shared read-only.

## Using BubbleUPnPServer in a Docker container

This server can do some interesting things. Its bread and butter is DLNA. It has a companion Android app called, you guessed it, BubbleUPnP, which acts as a DLNA controller. The server part here can do local transcoding so the Android phone doesn't have to (subject to some caveats; for example the phone, the DLNA source and the Bubble server need to be on the same LAN, and it can only transcode one stream at a time). It can also identify media providers (like Emby or Plex) on the LAN and media renderers (like Chromecast or Home Mini speaker) and DLNA-enables them so they appear in the Bubble app as well as on other DLNA-aware devices.

```yaml
version: "3.3"
services:
  bubbleupnpserver:
    image: bubblesoftapps/bubbleupnpserver-openj9-leap
    container_name: bubbleupnpserver
    network_mode: "host"
    user: "0:0"
    devices:
      - "/dev/dri:/dev/dri:rw"
    volumes:
      - "./data/configuration.xml:/opt/bubbleupnpserver/configuration.xml:rw"
    restart: "always"
```

* **network_mode** is "host" because this server needs to interact with lots of things on the LAN automagically.
* **user** forces the server to run as root. The image uses a completely made up UID and GID, there's no way to customize it, and it needs to access `/dev/dri` which is restricted to the `video` and `render` groups for GPU-accelerated transcoding. So using root is the only solution here (short of looking for a nicer image).
* I map the configuration file outside the container so it's saved across resets/reinstalls.

## Using Samba in a Docker container

Normally I'd install samba on the host machine but Debian wanted me to install like 30 packages for it so I think that's a valid reason to use a container.

```yaml
version: "2.3"
services:
  samba:
    image: twistify/anonymous-samba
    container_name: samba
    volumes:
      - "./etc/samba:/etc/samba:ro"
      - "/mnt/nas/array:/mnt/nas/array"
    ports:
      - "445:445/tcp" # SMB
      - "139:139/tcp" # NetBIOS
    restart: "always"
```

Normally this should be a simple enough setup, and it is simple as far as docker is concerned. Map some ports, map the config files, map the array so you can give out shares, done. But the image doesn't offer UID customization and just runs as root.

For reference I give the `/etc/samba/smb.conf` here too because I know it's tricky. This one only offers anonymous read-only shares, which mainly worked out of the box (hence why I stuck to this image in spite of the root thing).

```
[global]
  workgroup = WORKGROUP
  log file = /dev/stdout
  security = user
  map to guest = Bad User
  log level = 2
  browseable = yes
  read only = yes
  create mask = 666
  directory mask = 555
  guest ok = yes
  guest only = yes
  force user = root

[iso]
  path=/mnt/nas/multimedia/iso
```

You can add your own shares aside from [iso], as long as the paths are mapped in the yaml. Notice the ugly use of root in the Samba config too.

## Using CUPS inside a Docker container

CUPS is a printer server, which I need because my printer is connected via USB to the server and I want to be able to print from my desktop machine.

```yaml
version: "2.3"
services:
  cups:
    image: aguslr/cups:latest
    container_name: cups
    privileged: "yes"
    environment:
      - CUPS_USER=admin
      - CUPS_PASS=admin
    volumes:
      - "/dev/bus/usb:/dev/bus/usb" # keep this under volumes, not devices
      - "/run/dbus:/run/dbus"
    ports:
      - "631:631/tcp" # CUPS
    restart: "always"
```

The docker setup is not terribly complicated if you overlook things like `/dev/bus/usb` needing to be a volume mapping not a device mapping, or the privileged mode.

CUPS is complicated because it's a complex beast, so I'll try to add some pointers below (mostly unrelated to docker):

* You can use `lpstat -p` inside the host to check if CUPS knows about your printer, and `/usr/lib/cups/backend/usb` to check if it knows about the USB printer in particular.
* You need CUPS on both the server and the desktop machine you want to print from. You need to add the printer on both of them. The CUPS interface will be at :631 on both of them; for printer management on the server the user+pass is admin:admin as you can see above, on the desktop machine God only knows (typically "root" and its password, or the password of the main user).
* So the server CUPS will probably detect the USB printer and have drivers for it; this image did for mine (after I figured out the USB bus snafu). You need to mark the printer as shared!
* ...but in order for the *desktop* machine to detect the printer you need to do one more thing: install the Avahi daemon and dnsconfd packages on *both* machines, because that's the stuff that actually makes it easy for the desktop machine to autodetect the remote printer.
* ...and don't rely on the drivers from the server, the desktop machine needs its own drivers, which it may or may not have. For my printer (Brother HL-2030) I had to install an AUR package on the desktop – and then the driver showed up when setting up the printer in the desktop CUPS.

**See you next time** with more docker recipes! As usual any and all comments and suggestions are welcome, including "omg you're so dumb, that thing could be done easier like this".

Upgrading a self-hosted server (episode II)
[<< Episode i: outline](https://feddit.nl/post/370973)
[>> Episode iii: docker recipes](https://feddit.nl/post/2774335)

**In this episode:** installing a secondary disk in the machine, making a system backup, installing Debian, and configuring essential services.

**Adding a secondary disk to the machine**

Why? Because I'd like to keep the old system around while tinkering with the new one, just in case something goes south. Also the old system is full of config files that are still useful. To this end I grabbed a spare SSD I had lying around and popped it into the machine.

...except it was actually a bit more involved than I expected. (Skip ahead if you're not interested in this.) The machine has two M.2 slots with the old system disk occupying one of them, and 6 SATA ports on the motherboard, being used by the 6 HDDs. The spare SSD would need a 7th SATA port. I could go get another M.2 but filling the second M.2 slot takes away one SATA channel, so I would be back to being one port short. 🤦

The solution was a PCI SATA expansion card which I happened to have around. Alternatively I could've disconnected one of the less immediately useful arrays and freed up some SATA ports that way.

**Taking a system backup**

This was simplified by the fact I used a single partition for the old system, so there are no multiple partitions for things like /home, /var, /tmp, swap etc. So:

* Use `fdisk` to wipe the partition table on the backup SSD and create one primary Linux partition across the whole disk. In fdisk that basically means hitting Enter-Enter-Enter and accepting all defaults.
* Use `mkfs.ext4 -m5` to create an Ext4 filesystem on the SSD.
* Mount the SSD partition and use `cp -avx` to copy the root filesystem to it.

Regarding this last point: I've seen many people recommend `dd` or clonezilla but that also duplicates the block ID, which you then need to change or there would be trouble. Others go for `rsync` or piping through `tar`, with complicated options, but `cp -ax` is perfectly fine for this.

A backup is not complete without making the SSD bootable as well.

* Use `blkid` to find the UUID of the SSD partition and change it *in the SSD's* `/etc/fstab`.
* If you have a swap partition and others like /home, /var etc. you would do the same for them. I use a `/swapfile` in the root which I created with `dd` and formatted with `mkswap`, which was copied by `cp`. I mount it with `/swapfile swap swap defaults 0 0` so that stays the same.
* Install grub on the SSD. You need to chroot into the SSD for this one, as well as `mount --bind` /dev, /sys and /proc.

Here's an example for that last point, taken from [this nice reddit post](https://www.reddit.com/r/linuxquestions/comments/f6xdrk/move_entire_linux_partition_to_another_drive/):

```
mkdir /mnt/chroot
mount /dev/SDD-partition-1 /mnt/chroot
mount --bind /dev /mnt/chroot/dev
mount --bind /proc /mnt/chroot/proc
mount --bind /sys /mnt/chroot/sys
mount --bind /tmp /mnt/chroot/tmp
chroot /mnt/chroot

Verify /etc/grub.d/30-os_prober or similar is installed and executable
Update GRUB: update-grub
Install GRUB: grub-install /dev/SDD-whole-disk
Update the initrd: update-initramfs -u
```

I verified that I could boot the backup disk before proceeding.

**Installing Debian stable**

Grabbed an amd64 ISO off their website and put it on a flash stick. I used the graphical gnome-disk-utility application for that but you can also do `dd bs=4M if=image.iso of=/dev/stick-device`. Usual warnings apply, make sure it's the right /dev, umount any pre-existing flash partition first etc.

Booting the flash stick should have been uneventful but the machine would not see it in BIOS or in the quick-boot BIOS menu, so I had to poke around and disable some kind of secure feature (which will be of no use to you on your BIOS so good luck with that).

During the install I selected the usual things like keymap, timezone, the disk to install to (the M.2, not the SSD backup with the old system), chose to use the whole disk in one partition as before, created a user, disallowed login as root, and requested explicit installation of an SSH server. Networking works via DHCP so I didn't need to configure anything.

**After install**

SSH'd into the machine using user + password, copied the public SSH key from my desktop machine into the Debian `~/.ssh/authorized_keys`, and then disabled password logins in `/etc/ssh/sshd_config` and `service ssh restart`. Made sure I could login with public key.

Installed `ntpdate` and added `0 * * * * /usr/sbin/ntpdate router.lan &>/dev/null` to the root crontab.

**Mounting RAID arrays**

The RAID arrays are MD (the Linux software driver) so they should have been already detected by the Linux kernel. A quick look at `/proc/mdstat` and `/etc/mdadm/mdadm.conf` confirms that the arrays have indeed been detected and configured and are running fine.

All that's left is to mount the arrays. After creating `/mnt` directories I added them to `/etc/fstab` with entries such as this:

`UUID=array-uuid-here /mnt/nas/array ext4 rw,nosuid,nodev 0 0`

...and then `systemctl daemon-reload` to pick up the new fstab right away, followed by a `mount -a` as root to mount the arrays.

**Publishing the arrays over NFS**

The last step in restoring basic functionality to my LAN is to publish the mounted arrays over NFS. I installed and used `aptitude` to poke around Debian packages for a suitable NFS server, then installed `nfs-kernel-server`.

Next I added the arrays to `/etc/exports` with entries such as this:

`/mnt/nas/array desktop.lan(rw,async,secure,no_subtree_check,mp,no_root_squash)`

And after a `service nfs-kernel-server restart` the desktop machine was able to mount the NFS shares without any issues.

For completion's sake, the desktop machine is Manjaro, uses `nfs-utils` and mounts NFS shares in `/etc/fstab` like this:

`nas:/mnt/nas/array /mnt/nas/array nfs vers=4,rw,hard,intr,noatime,timeo=10,retrans=2,retry=0,rsize=1048576,wsize=1048576 0 0`

The first one (with the nas: prefix) is the remote dir, the second is the local dir. I usually mirror the locations on the server and clients but you can of course use different ones.

**All done**

That's essential functionality restored, with SSH and NFS + RAID working. In the next episode I will attempt to install something non-essential, like Emby or Deluge, in a docker container. I intend to keep the system installed on the metal very basic: basically just ssh, nfs and docker over the barebones Debian install. Everything else should go into docker containers.

My goals when working with docker containers:

* Keep the docker images on the system disk but all configurations on a RAID array.
* Map all file access to a RAID array and network ports to the host. This includes things like databases and other persistent files, as long as they're important and not just cache or runtime data.

Basically the idea is that if I ever lose the system disk I should be able to simply reinstall a barebones Debian and redo the containers using the stored configs, and all the important mutable files would be on RAID.

See you next time, and meanwhile I appreciate any comments about what I could've done better as well as suggestions for next steps!

Why I’m leaving Gandi and where I’m going
I've been using Gandi for over 20 years, almost since it was founded. Since being acquired in 2019 by Montefiore Investment and this year by Total Webhosting Solutions their services have become more and more expensive and have finally priced me out. For context, I administer a bunch of domains, mailboxes and HTML websites for my family and extended family, and I prefer services hosted in the EU because of GDPR and local availability.

This post is meant as a list of practical decisions in 2023 for the small-time selfhoster. If anybody wants to comment on what Gandi (or rather TWS) is doing feel free to do so in the comments, I'm curious myself. Prices I've mentioned use my country's VAT so will vary slightly for you.

**Domain names**

Domain names have always been a bit on the expensive side with Gandi but they used to include a lot of features for free with them (SSL, DNSSEC, mailboxes, a small static website, WHOIS privacy, local contact for TLDs that need it etc.) and what they added extra was proportional to the base TLD cost. For the next renewal all my domains were slated to jump to €28 across the board. If you have domains with Gandi try adding some renewals to the cart and check in advance.

I had to look for a European registrar because I have lots of European ccTLDs that the usual suspects like Cloudflare and Porkbun don't support. I'm moving to INWX.de and will be saving 25-60% per domain. This takes into account WHOIS privacy where needed for an extra 5€/domain (EU ccTLDs are private due to GDPR but we own a couple of TLDs too) as well as local contact services where required (price varies by country).

**Email**

I manage multiple mailboxes but they have low traffic and low storage requirements. Gandi will be offering them at €55/mailbox/year. I'm not questioning their pricing, 3-4€/month for email is common, but it's typically charged by email-focused services. Anyway, this per-mailbox model would price us into hundreds of euros for resources that go 99% unused.

I'm switching to Migadu.com, who allow unlimited domains and mailboxes (within common sense) under a single account and charge for the conflated storage space and emails sent/received across all mailboxes. Migadu tiers start at 20€/year for 5GB and 200/20/day (soft limits).

**Webhosting**

We were using Gandi's smallest hosting package for about 100€/year, which was slated to jump to €135. Not an outlandish price for your typical PHP + MySQL hosting, especially since it had some VPS-like features. Then again the typical webhosting service would include a couple of mailboxes and some other goodies.

This was a good opportunity for us to reevaluate our hosting needs and realize we can ditch PHP+MySQL (if we really have to revisit it we'll consider VPS offers in the future). It's mostly static sites, image galleries and a bit of blogging. We've cached all our stuff as plain HTML/CSS/images and moved it to BunnyCDN.

Bunny lets you define a file bundle, gives you FTP access with a unique username+password, lets you pick the extent of replication, puts a CDN on top of it, and lets you point a domain name to it. Also throws a bunch of web server-ish features on top like rules/rewrites and Let's Encrypt SSL. They actually offer more features than that but I've just mentioned the minimum you need for serving a bunch of static websites. Bunny pricing starts at $0.01/GB (with a minimum of $1/month) and you pay as you go.

**Nameservers**

Since we're doing this I've taken the opportunity to dabble in DNS. Turns out it's not that hard.
There's only like half a dozen commonly used DNS record types and everybody's helping you with them – email services like Migadu generate the email-related ones for you, registrars and managed DNS services generate the SOA for you, they have forms that tell you what fields are needed etc.

There are lots of managed DNS options. Registrars usually include nameservers and let you mess with the records, so INWX was one choice. Bunny offers a DNS service that integrates with their CDN. deSEC is a completely free service I'll be using as backup. All of the above also offer APIs so a bash script will be taking care of dynamic DNS.
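
The dynamic DNS part really is just a few lines of shell on a timer; the update URL and auth below are placeholders since every provider (deSEC, INWX, Bunny) documents its own endpoint:

```bash
#!/bin/sh
# Push the current public IP to the DNS provider's dynamic-update endpoint.
TOKEN="your-api-token"
HOST="home.example.org"
IP=$(curl -fsS https://ifconfig.me)    # any "what's my IP" service works
curl -fsS -u "$HOST:$TOKEN" \
    "https://dyndns.example.net/update?hostname=$HOST&myip=$IP"
```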

Upgrading a self-hosted server (episode 1)
# Upgrading a self-hosted server (1)

* **Episode 1: Introduction and plans**
* [Episode 2: Hardware upgrades and installing Debian stable](https://feddit.nl/post/2610711)
* [Episode 3: Installing Docker and basic containers (multimedia, files, printing)](https://feddit.nl/post/2774335)

## Welcome

Hi, I'm starting a series of posts that will follow the upgrades I'll be doing to a self-hosted machine that serves as NAS and also runs all kinds of self-hosted software. I'm lazy so it will probably take time, don't expect me to post too often.

About me: I've been using Linux exclusively for personal use (both desktop and servers) for about 20 years now. I've used several distributions over the years, I've built my own stuff from source (including kernels) and I've done Linux From Scratch. I'm not a Linux expert or professional sysadmin but I know my way around it, and I can learn what I don't know. So don't be afraid to make any suggestions no matter how complicated.

## The current state of the machine

* It's a PC using an i5 7400 CPU, has a built-in GPU with support for h264 hardware encoding and MPEG2, VP8, VP9 and HEVC hardware decoding (this will come in handy for video transcoding).
* Only 4 GB of RAM, I have ordered a dual 2x16 GB kit.
* The system drive is a Transcend M.2 SSD (32 GB). SATA rather than PCIe unfortunately but it will do fine for the time being.
* The OS is Ubuntu Server 16.04 LTS using Expanded Security Maintenance for updates.
* It's currently running SSH, NFS, Samba, CUPS, OpenVPN, Emby and Deluge on bare metal. Some of them come from distro packages, some from binary releases straight from the developer.
* There are 6 HDDs forming 3 pairs of RAID 1 arrays. 6 drives was a limit I chose from the beginning, and the case and motherboard were chosen accordingly (cage for 6 drives and 6 SATA connectors).
* My ISP provides a public dynamic IP and allows port forwards.
* I have a router that I've recently upgraded to the latest OpenWRT so it also runs Linux, can install packages, it has a web admin interface etc. and can do some interesting stuff.

## What I'd like to do

* Increase the RAM to 32 GB.
* Stick with a Linux distro, as opposed to a NAS-tailored OS, Unraid etc.
* Install Debian Stable on a SSD, most likely via debootstrap from the Ubuntu system (there's a rough sketch of this at the end of the post).
* Add a GRUB menu entry that makes a passthrough to the other system, so I can keep them both around for a while.
* Use `docker-compose` and possibly Portainer for as many of the services as it makes sense. Not sure if it's worth bothering to make containers for things like SSH, NFS, Samba.
* Add more services. I'd like to try Jellyfin, NextCloud and other stuff (trying to degoogle for example).
* I'd like to find a better solution for accessing services from outside the LAN. Currently using OpenVPN, which is nice for individual devices but gets complicated when you want an entire remote LAN to be able to access them (to allow smart TVs or Chromecast to use Emby/Jellyfin for example). I'm hoping Authelia + reverse proxy will be able to help with this.

## What I'm not interested in

* Not interested in using Plex. I've used it for a couple of years, it's a fine piece of software, but I don't like the fact they now mandate access through their server or inject ads.
* Not interested in changing the filesystem or the RAID setup for the HDDs. RAID 1 pairs give me enough redundancy. The HDD upgrades are very simple. I'm fine with losing 50% of capacity.

Any and all suggestions and comments are welcome!
Even if they're about things I said I'm not interested in. It's always possible there are things I haven't considered.
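
For anybody curious about the debootstrap route mentioned in the plans, it would look roughly like this (a sketch; device names and mountpoints are examples, and you'd still do fstab, kernel, grub and sshd inside the chroot):

```bash
sudo apt install debootstrap
sudo mkfs.ext4 /dev/sdX1 && sudo mount /dev/sdX1 /mnt/debian
sudo debootstrap stable /mnt/debian http://deb.debian.org/debian
sudo chroot /mnt/debian /bin/bash   # then: passwd, /etc/fstab, apt install linux-image-amd64 grub-pc openssh-server
```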