
Because Forgejo’s SSH isn’t a normal SSH service; it exists so that users can access git over SSH.

Now technically, a bastion should work, but it’s not really what people want when they are trying to set up git over SSH. Since git over SSH is a service, rather than an administrative tool, why shouldn’t it be configured alongside the other tools used for exposing services (reverse proxy/Caddy)?

And on top of that, people most probably want git over SSH to be available publicly, which a bastion host doesn’t give you.


So based on what you’ve said in the comments, I am guessing you are managing all your users with NixOS, in the NixOS config, and want to share these users with other services?

Yeah, I don’t even know if sharing Unix users is possible. EDIT: It seems to be, based on comments below.

But what I do know is possible is for Unix/Linux to get its users from LDAP. Even sudo is able to read from LDAP and use LDAP groups to decide who is allowed to sudo.

Setting these up on NixOS is trivial. You can use the users.ldap set of options on NixOS to configure authentication against an external LDAP server. Then, you can configure sudo to check LDAP groups as well.
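
A minimal sketch of what that could look like (option names from the NixOS users.ldap and security.sudo modules; the server URI, base DN, and group name are placeholders):

users.ldap = {
  enable = true;
  server = "ldaps://ldap.example.internal"; # placeholder LDAP server URI
  base = "dc=example,dc=internal";          # placeholder search base
  useTLS = true;
};

# hypothetical rule letting members of an LDAP "admins" group use sudo
security.sudo.extraRules = [
  { groups = [ "admins" ]; commands = [ "ALL" ]; }
];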

After all of that, you could declaratively configure an LDAP server using NixOS, including setting up users. For example, it looks like you can configure users and groups for the Kanidm LDAP server.

Or you could have a config file for the openldap server

RE: Manage auth at the reverse proxy: If you use Authentik as your LDAP server, it can reverse proxy services and auth users at that step. A common setup I’ve seen is to run another reverse proxy in front of Authentik, point that reverse proxy at Authentik, and then let Authentik reverse proxy just the services you want behind a login page.


Google put an API into Chrome that sends extra system info, but only to *.google.com domains. In every Chromium browser.

Only Vivaldi caught this issue. Brave had this API enabled, most likely by accident.

But the problem is that Chromium is such big and complex software that, combined with development being driven by Google, it’s just impossible for any significant changes or auditing to be done by third parties. Google is capable of exerting control over Brave, simply by hiding changes like the one above, or by making massive changes like Manifest V3, which are expensive for third parties to work around and maintain.

Brave can maintain 1 big change to chromium, but for how long? What about 2, 3, etc.

My other big problem with brave is that I see them somewhat mimicking Google’s beginnings. Google started out with 3 things: an ad network, a browser, and a search engine.

Right now, Brave has those same three things. It feels very ominous to me, and I would rather not repeat the cycle of enshittification that drove me away from Chrome and Google.


What was it? I’m planning to do a nextcloud deployment via helm soon.


sn1per is not open source, according to the OSI’s definition

The license for sn1per can be found here: https://github.com/1N3/Sn1per/blob/master/LICENSE.md

It’s more of a EULA than an actual license. It prohibits a lot of stuff, and is basically source-available.

You agree not to create any product or service from any part of the Code from this Project, paid or free

There is also:

Sn1perSecurity LLC reserves the right to change the licensing terms at any time, without advance notice. Sn1perSecurity LLC reserves the right to terminate your license at any time.

So yeah. I decided to test it out anyways… but what I see… is not promising.

FROM docker.io/blackarchlinux/blackarch:latest

# Upgrade system
RUN pacman -Syu --noconfirm

# Install sn1per from official repository
RUN pacman -Sy sn1per --noconfirm

CMD ["sn1per"]

The two pacman commands are redundant. You only need to run pacman -Syu sn1per --noconfirm once. This also goes against docker best practice, as it creates two layers where only one would be necessary. In addition to that, best practice also includes deleting cache files, which isn’t done here. The final docker image is probably significantly larger than it needs to be.
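
Something closer to best practice would do it all in one layer and clean the package cache afterwards (a sketch, keeping their base image and package name):

FROM docker.io/blackarchlinux/blackarch:latest

# Upgrade the system and install sn1per in a single layer, then drop the package cache
RUN pacman -Syu --noconfirm sn1per \
    && pacman -Scc --noconfirm

CMD ["sn1per"]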

Their kali image has similar issues:

RUN set -x \
        && apt -yqq update \
        && apt -yqq full-upgrade \
        && apt clean
RUN apt install --yes metasploit-framework

https://www.docker.com/blog/intro-guide-to-dockerfile-best-practices/

It’s still building right now. I might edit this post with more info if it’s worth it. I really just want a command-line vulnerability scanner, and sn1per seems to offer that with greenbone/openvas as a backend.

I could modify the dockerfiles with something better, but I don’t know if I’m legally allowed to do so outside of their repo, and I don’t feel comfortable contributing to a repo that’s not FOSS.



Why? In case authentik goes down, so you can recover data? Or something else?

I am setting up Authentik and other selfhosted services right now, and my plan was for Authentik to have all the accounts.


I use this too, and it should be noted that this does not require wireguard or any VPN solution. Rathole can be served publicly, allowing a machine behind a NAT or firewall to connect.


LXD/Incus. It’s truly free/open

Please stop saying this about LXD. It hasn’t been true ever since they started requiring a CLA.

LXD is literally less free than proxmox, looking at those terms, since Canonical isn’t required to open source any custom lxd versions they host.

Also, I’ve literally brought this up to you before, and you acknowledged it. But you continue to spread this despite the fact that you should know better.

Anyway, Incus currently isn’t packaged in debian bookworm, only trixie.

The version of LXD that Debian packages predates the license change, so that one is still free. But for people on other distros, it’s better to clarify that Incus is the truly FOSS option.


Edge WebView2

I’m like 90% sure this requires edge to be installed, even though the EU mandated that they make edge uninstallable. So that might be their game here.


Dockers manipulation of nftables is pretty well defined in their documentation

Documentation people don’t read. People expect that, like most other services, docker binds to ports/addresses behind the firewall. Literally no other container runtime/engine does this, including, notably, podman.

As to the usage of the docker socket that is widely advised against unless you really know what you’re doing.

Too bad people don’t read that advice. They just deploy the webtop docker compose without understanding what any of it is. I like (hate?) linuxserver’s webtop, because it’s an example of two of the worst footguns in docker rolled into one.

To include the rest of my comment that I linked to:

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.
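
And deliberately exposing a podman container’s forwarded port is a one-liner anyway (assuming ufw as the firewall; the port is just an example):

ufw allow 8080/tcp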

You originally stated:

I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rules for a new container.

And I’m trying to say that even if that were true, a flood of support issues would still be better than a footgun where people expose stuff that’s not supposed to be exposed.

But that isn’t the case for podman. A quick look through the GitHub issues for podman, and I don’t see it inundated with newbies asking “how do I expose services?” because a firewall port needed to be opened first. Instead, there are bug reports in the opposite direction, like this one, where services are being exposed despite the firewall being up.

(I don’t have anything against you, I just really hate the way docker does things.)


Probably not an issue, but you should check. If the port opened is something like 127.0.0.1:portnumber, then it’s only bound to localhost, and only that local machine can access it. If no address is specified, then anyone with access to the server can access that service.
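
For example, in a compose file you can make the binding explicit (the ports here are placeholders):

ports:
  - 127.0.0.1:8080:80 # only reachable from the server itself
  - 8081:81           # reachable by anything that can reach the server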

An easy way to see running containers is docker ps, where you can look at the forwarded ports.

Alternatively, you can use the nmap tool to scan your own server for exposed ports. nmap -A serverip does the slowest, but most in-depth, scan.


Yes it is a security risk, but if you don’t have all ports forwarded, someone would still have to breach your internal network IIRC, so you would have many many more problems than docker.

I think from the dev’s point of view (not that it is right or wrong), this is intended behavior simply because if docker didn’t do this, they would get 1,000 issues opened per day of people saying containers don’t work when they forgot to add a firewall rules for a new container.

My problem with this is that when running a public-facing server, it ends up with people exposing containers that really, really shouldn’t be exposed.

Excerpt from another comment of mine:

It’s only docker where you have to deal with something like this:

---
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped

Originally from here, edited for brevity.

The result is exposed services. Feel free to look at shodan or zoomeye, internet-connected search engines, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.


If you need public access:

https://github.com/anderspitman/awesome-tunneling

From this list, I use rathole. One rathole container runs on my VPS, and another runs on my home server, and it exposes my reverse proxy (caddy) to the public.
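
Roughly, the rathole config pair looks like this (a sketch following rathole’s documented TOML format; hostnames, ports, and the token are placeholders):

# server.toml, on the VPS
[server]
bind_addr = "0.0.0.0:2333"

[server.services.caddy]
token = "use_a_long_secret"
bind_addr = "0.0.0.0:443" # public port exposed on the VPS

# client.toml, on the home server
[client]
remote_addr = "vps.example.com:2333"

[client.services.caddy]
token = "use_a_long_secret"
local_addr = "127.0.0.1:443" # where caddy listens locally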


I used to spend a ton of time helping people on reddit with linux and related things, and the “why” matters immensely in that case.

The XY problem was extremely common: someone trying to achieve a goal through “incorrect” means.

I also saw many, many people’s issues where they wanted something, but were referring to it by a different name, ending up confused and lost. All I had to do was say “you actually want Y” and point them on their way, and they would be happy.

And then of course, sometimes people try to do something that’s simply not possible (or more usually, not implemented in software.).

But in general, it’s very difficult to help people who don’t make it easy for you to help them, and part of that is explaining the “why”, in addition to their issue.


I usually use nix to manage my development environments.

At the root of the git repo for my blog, there is a shell.nix file. This file, shell.nix, declares an entire shell environment, giving me tools, environment variables, and other things I need. I just run nix-shell while in the same directory as the shell.nix file, and it creates that shell environment.
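
A minimal sketch of such a shell.nix (the package list here is a placeholder, not my actual one):

{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  # tools available inside the shell
  packages = [ pkgs.quarto pkgs.git ];

  # extra attributes become environment variables in the shell
  BLOG_ENV = "development"; # hypothetical variable, just to illustrate
}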

There are other options; VSCode, for example, has support for developing in a docker container (only docker, not podman or lxc).

I think lxc/incus (same thing) containers are kinda excessive for this case, because those containers are a full linux system, complete with an init system and whatnot. Such a thing is going to use more resources (ram, cpu, and storage space), and it’s also going to be more to manage compared to application containers (docker, podman), which are typically very stripped down and come with only what is needed to run the application.

I used to use anaconda, but switched away because it didn’t have all the packages I wanted and couldn’t control the versions of installed packages very well, whereas nix does both very well. Anaconda is very similar in usage though, especially once you start setting up multiple virtual anaconda environments for separate projects. However, I don’t know if anaconda is as portable as nix, able to recreate an entire environment from a single file of code.


I’m not too well versed in rustdesk, but it seems that they use end to end encryption (is it good? Idk).

https://github.com/rustdesk/rustdesk/discussions/2239#discussioncomment-5647075

I have experience with a similar software that uses relays: syncthing. With syncthing, everything is e2ee, so there’s no concern about whether or not the relays are trustworthy, and you can even host your own public relay server.

I find it hard to believe that rustdesk, another relay based software, wouldn’t have a similar architecture.

edit: typo



I use https://quarto.org

Pros: Markdown, easy to use. Docs are very good. Also, despite producing a static site, it comes with full-text site search, all done locally and enabled by default:

https://quarto.org/docs/websites/website-search.html

It uses pandoc under the hood, so anything that works with pandoc works there.

Cons: No support for any kind of template engine beyond simple variable replacement, as far as I know.


Nothing that is more questionable than LXD, which now requires a contributor license agreement, allowing Canonical to keep their hosted versions closed source, despite LXD being AGPL.

Thankfully, it’s been forked as incus, and debian is encouraging users to migrate.

But yeah. They haven’t said what makes proxmox’s license questionable.



Someone recommended ssh, which is good, but it can’t do udp connections.

https://github.com/anderspitman/awesome-tunneling

From this list, I selected rathole since they claimed to be more performant than frp, the most popular solution.



Don’t do unattended upgrades. Neither host nor containers. Do blind or automated updates if you want but check up on them and be ready to roll back if something is wrong.

Those issues are only common on rolling releases. On stable distros, the maintainers patch around breaking changes, test those patches, and then roll out updates.

Debian and many other distros support it officially: https://wiki.debian.org/UnattendedUpgrades. It’s not just a cronjob running “apt upgrade”, but an actual process, including automated checks. You can configure it to not upgrade specific packages, or to stick to security updates.
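
For example, the stock Debian config (/etc/apt/apt.conf.d/50unattended-upgrades) lets you limit it to security origins and blacklist packages; a trimmed sketch:

Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
// packages that should never be upgraded automatically
Unattended-Upgrade::Package-Blacklist {
    "libc6";
};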

As for containers, it is trivial to roll back versions, which is why unattended upgrades are OK. Although, if data or configuration is corrupted by a bug, then you would probably have to restore from backup (probably something I should have suggested in my initial reply).

It should be noted that unattended upgrades don’t always mean “upgrade to the latest version”. For docker/podman containers, you can pin them to a stable release, and then it will do unattended upgrades within that release, preventing any major breaking changes.
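
For example, pinning a compose image to a major release tag (the image here is just an example) means an auto-updater only pulls patch releases within that line:

services:
  db:
    image: docker.io/library/postgres:16 # follows 16.x, never jumps to 17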

Similarly, on many distros, you can configure them to only do the minimum security updates, while leaving other packages untouched.

People should use what distro they know best. A rolling distro they know how to handle is much better than a non-rolling one they don’t.

I don’t really feel like reinstalling the bootloader over ssh to a machine that doesn’t have a monitor, but you do you. There are real, significant differences between stable and rolling-release distros that make a stable release more suited for a server, especially one you don’t want to baby remotely.

I use arch. But the only reason I can afford to baby a rolling release distro is because I have two laptops (both running arch). I can feel confident that if one breaks, I can use the other. All my data is replicated to each laptop, and backed up to a remote server running syncthing, so I can even reinstall and not lose anything. But I still panicked when I saw that message suggesting that I should reinstall grub.

That remote server? Ubuntu with unattended upgrades, by the way. Most VPS providers will give you a linux distro image with unattended security upgrades enabled because it removes a footgun from the customer. On Contabo with Rocky 9, it even seems to do automatic reboots. This ensures that their customers don’t have insecure, outdated binaries or libraries.

Docker doesn’t “bypass” the firewall. It manages rules so the ports that you pass to host will work. Because there’s no point in mapping blocked ports. You want to add and remove firewall rules by hand every time a container starts or stops, and look up container interfaces yourself? Be my guest.

Docker is a way for me to run services on my server. Literally every other service application respects the firewall. Sometimes I want services exposed on my home network but not on public wifi, something docker isn’t capable of but the firewall is. Sometimes I want to configure a service while keeping it running. Or test it locally. Or just use it locally.

It’s only docker where you have to deal with something like this:

---
services:
  webtop:
    image: lscr.io/linuxserver/webtop:latest
    container_name: webtop
    security_opt:
      - seccomp:unconfined #optional
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
      - SUBFOLDER=/ #optional
      - TITLE=Webtop #optional
    volumes:
      - /path/to/data:/config
      - /var/run/docker.sock:/var/run/docker.sock #optional
    ports:
      - 3000:3000
      - 3001:3001
    restart: unless-stopped

Originally from here, edited for brevity.

The result is exposed services. Feel free to look at shodan or zoomeye, internet-connected search engines, for exposed versions of this service. This service is highly dangerous to expose, as it gives people an in to your system via the docker socket.

Do any of those poor saps on zoomeye expect that I can pwn them by literally opening a webpage?

No. They expect their firewall to protect them by not allowing remote traffic to those ports. You can argue semantics all you want, but not informing people of this gives them another footgun to shoot themselves with. Hence, docker “bypasses” the firewall.

On the other hand, podman respects your firewall rules. Yes, you have to edit the rules yourself. But that’s better than a footgun. The literal point of a firewall is to ensure that any services you accidentally have running aren’t exposed to the internet, and docker throws that out the window.


A tip I have is to move away from manjaro.

When you use a rolling release, you lose one of the main features of stable-release distros: automatic, unattended upgrades. AFAIK, every stable-release distro has those, and none of the rolling releases do (except maybe openSUSE’s new Slowroll and CentOS’s rolling release, but I wouldn’t recommend or use them).

Manjaro has other issues too, but that’s the big one.

Although I use arch on my laptop, I run debian on my server because I don’t want to have to baby it, especially since I primarily access it remotely. Automatic upgrades are one less complication, allowing me to focus on the server itself.

As for application deployment itself, I recommend using application containers, either via docker or podman. There are many premade containers for those platforms, for apps like jellyfin, or the various music streaming apps people use to replace spotify (I can’t remember any off the top of my head, but I know you have lots of options).

However, there are two caveats to docker (not podman) people should know:

  • Docker containers don’t auto-update, although you can use something like watchtower to update them automatically. As for podman, it has an auto-update command you can configure to run regularly (see the sketch after this list).
  • Docker bypasses your firewall. If you forward port 80, docker will go around the firewall and publish it. The reason for this is that most linux firewalls work by using iptables or nftables under the hood, but docker also edits those directly… this has security implications: I’ve seen many container services on the public internet that people never intended to expose.
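
A sketch of how the podman side of that can work (podman auto-update expects containers managed by systemd units; the image and names are placeholders):

# label the container so podman tracks its registry image
podman run -d --name myapp --label io.containers.autoupdate=registry docker.io/library/nginx:latest

# generate a systemd unit for it, then enable podman's built-in update timer
podman generate systemd --new --name myapp > ~/.config/systemd/user/container-myapp.service
systemctl --user daemon-reload
systemctl --user enable --now container-myapp.service
systemctl --user enable --now podman-auto-update.timer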

Podman, however, respects your firewall rules. Podman isn’t perfect though; there are some apps that won’t run in podman containers, although my use case is a little more niche (the Greenbone vulnerability scanner).

As for where to start, projects like linuxserver provide podman/docker containers, which you can use to deploy many apps fairly easily once you learn how to launch apps with a compose file. Check out the dockerized Nextcloud they provide. Nextcloud is a google drive alternative, although sometimes people complain about it being slow… I don’t know about the quality of linuxserver’s nextcloud, so you’d have to do some research for that and find a good docker container.


your typical manga/light novel weebo

No chinese support :(

I read a ton of web novels translated from Chinese, and reading the untranslated versions would be a fun way to learn Chinese. Or Korean.

I don’t really like the Japanese light novels as much.

Edit: hmmm, it seems like there are similar projects, and some have custom language support. I may need to look into those in the future.


The TL;DR as I understand it is that Mac M1/M2 devices are unique in that the VRAM (GPU RAM) is the same pool as the normal RAM. This sharing allows LLM models to run on the GPU of those chips, and in their “VRAM” as well, allowing you to run bigger models on smaller devices.

Llama.cpp was the software users originally did this with. I can’t find the original guide/article I looked at, but here is a github gist, where the commenters have done benchmarks:

https://gist.github.com/cedrickchee/e8d4cb0c4b1df6cc47ce8b18457ebde0


If I run two mysql containers, it won’t necessarily take twice the resources of a single mysql containers

It’s complicated, but essentially, no.

Docker images are built in layers. Each layer is a step in the build process. Layers that are identical are shared between containers, to the point that a shared layer only takes up RAM once.

Although, it should be noted that docker doesn’t load the whole container into memory; just like on a normal linux OS, unused stuff will just sit on your disk. Rather, binaries or libraries loaded by two docker containers will only use up the RAM of one instance. This is similar to how shared libraries reduce RAM usage.

Docker only has this deduplication if you are using overlayfs or aufs as the storage driver, but I think overlayfs is the default.

https://moonpiedumplings.github.io/projects/setting-up-kasm/#turns-out-memory-deduplication-is-on-by-default-for-docker-containers

Should you run more than one database container? Well, I dunno how mysql scales. If there is a performance benefit from having only one mysqld instance, then it’s probably worth it. Like, if mysql uses up that much RAM regardless of what databases you have loaded, in a way that can’t be deduplicated, then you’d definitely see a benefit from a single container.

What if your services need different database versions, or even software? Then different database containers is probably better.


Nginx and nginx proxy manager are two different things, although nginx proxy manager uses nginx underneath the hood.

Nginx is a lightweight reverse proxy and http(s) server configured via config files.

https://nginx.org/en/

Nginx proxy manager is a docker container that runs nginx, but also has a webui on top of it to make it much, much easier to configure.

Sometimes abbreviated as NPM.

https://nginxproxymanager.com/

That’s why people keep asking you for your nginx config: when you just say nginx, people expect that you are using plain nginx and configuring it through text files.
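
For reference, a plain nginx reverse proxy config is just a text file along these lines (hostname and port are placeholders):

server {
    listen 80;
    server_name app.example.com;

    location / {
        # forward requests to the backing service
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}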


cross-posted from: https://programming.dev/post/5669401

[docker-tcp-switchboard](https://github.com/OverTheWireOrg/docker-tcp-switchboard/) is pretty good, but it has two problems for me:

  • Doesn’t support non-ssh connections
  • Containers, not virtual machines

I am setting up a simple CTF for my college’s cybersecurity club, and I want each competitor to be isolated to their own virtual machine. Normally I’d use containers, but they don’t really work for this, because it’s a container escape CTF…

My idea is to deploy [linuxserver/webtop](https://docs.linuxserver.io/images/docker-webtop) as the entry point for the CTF (with the insecure option enabled, if you know what I mean), but it only supports one user at a time; if multiple users attempt to connect, they all see the same X session.

I don’t have too much time, so I don’t want to write a custom solution. If worst comes to worst, I will just put a virtual machine on each of the desktops in the shared lab.

Any ideas?

I heard obfsproxy

Yeah, tor obfs4 bridges.

But somehow, my high school managed to block those. My high school was literally more locked down than the great firewall of China.

I set up: https://github.com/cognetwork-dev/Metallic

At first, anyway; I eventually switched to https://github.com/v2ray/v2ray-core as Metallic struggled with some things. Both v2ray and xray are built to get around the great firewall of China, and iirc, they use the same tech.

It’s not too fast though. That privacy comes at a price. This may be the slowest proxy/vpn out there (although it’s speedy enough for normal web browsing), whereas wireguard is the fastest. Maybe you want something in between? It depends on your threat model.


You can just run it from your local computer. I did that because I wanted it to be available offline.


Somewhat related, there is a site I follow called royalroad. Royalroad is a site for web serials, which are basically books uploaded to the internet chapter by chapter.

Although royalroad used to have only google ads, at some point they started accepting user-submitted ads. (Also, ads on that site have always been unobtrusive.)

I like these ads much better because they are more privacy respecting (literally just an image and a link).

Also, they are really funny. Users with no art skills will make memes or doodle stick figures; I clicked on one of those anyway, and the story was soooo good.


You want webtop: https://docs.linuxserver.io/images/docker-webtop

But just like with kasm, not all software will work, although I think most will.

About kasm:

Not really. I don’t think the default kasm images come with sudo or a root password, so you can’t “sudo apt” or the like.

If you do create an image with sudo, then yes, but only for a single session, if you keep it long-running. Every time you destroy the session, it will be completely reset.

Although, if you need software in your images, it’s better to just build your own docker images for use with kasm that have everything you want.


Also relevant: https://wiki.archlinux.org/title/Internet_sharing

Important to note from that article: docker itself (but not podman) edits iptables rules, so you have to run different iptables commands if you want internet sharing to work right.
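
For example, Docker’s own docs point at the DOCKER-USER chain for rules that should take effect before Docker’s; a sketch for the internet-sharing case (interface names are placeholders):

# allow forwarding between the LAN and upstream interfaces despite Docker's FORWARD drop policy
iptables -I DOCKER-USER -i eth1 -o eth0 -j ACCEPT
iptables -I DOCKER-USER -i eth0 -o eth1 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT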


Can you elaborate on what you found lacking in kasm? Because afaik, kasm is one of the best solutions for this, giving you a full desktop session inside a docker container.


The chances I am going to manage a linux distro without systemd are low, but some systems (arch for example) don’t have cron out of the box.

Not that big of a deal since it’s easy to translate them all, but that’s one of the reasons why I default to systemd/timer units.
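
For anyone unfamiliar, a timer is just a pair of unit files; a sketch (names, schedule, and script path are placeholders):

# /etc/systemd/system/backup.service
[Unit]
Description=Nightly backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/backup.sh

# /etc/systemd/system/backup.timer
[Unit]
Description=Schedule the nightly backup

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Then systemctl enable --now backup.timer takes the place of the crontab entry.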


Once federation gets added to one of the FOSS, self-hosted alternatives, I’ll probably switch. I’ll probably mirror stuff to github for resume/recruiter purposes, but the CI/CD, website deployment, and main development will happen on whatever alternative I choose.




rclone, but I don’t know if there is a desktop application for it that does everything (is that what you meant by interface?)

There is https://github.com/kapitainsky/RcloneBrowser, but it seems to be unmaintained, so I don’t know if it supports rclone’s “crypt” feature.

However, there is a web gui: https://rclone.org/gui/
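
If I remember right, that GUI is launched with a single command (the experimental web GUI documented at that link):

rclone rcd --rc-web-gui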