• 0 Posts
  • 268 Comments
Joined 1Y ago
Cake day: Jul 07, 2023


It is literally the product OP is struggling to host and understand. Nothing wrong with saving yourself the struggle and recovery time by just buying the official product.




You’re probably going to need logs to rule out any permissions errors or the like.


Focus on DNS for the host machine and its port mappings, not the individual containers.

If you’re instead asking “How can I easily map a DNS name to service and port?”, then you want a reverse proxy on your host machine, like nginx (simplest) or Traefik (more complex, but geared towards service discovery and containers).

In the latter scenario you set up a named virtual host for each service that maps back to the service port exposed by your containers. Example: a request for jellyfin.localdomain.com points to the host machine, nginx answers the request, matches the host name in the request, then proxies your session to the container.
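A minimal sketch of that virtual host for the jellyfin example (hostname is the one from above; 8096 is Jellyfin's default port, adjust to whatever your container publishes):

```nginx
# /etc/nginx/conf.d/jellyfin.conf — answer for the name, proxy to the container port
server {
    listen 80;
    server_name jellyfin.localdomain.com;

    location / {
        proxy_pass http://127.0.0.1:8096;   # the container's published port on the host
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Copy the file, change `server_name` and the port, reload nginx, and that's the "copy and paste for the most part" loop for each new service.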

It’s copy and paste for the most part once you get the first one going unless you’re dealing with streaming.

If you’re running a flexible platform on your router like OpenWRT, you could also do some port forwarding as a means to achieve the same thing.




Then I’m not sure what the product you’re selling is though. Tech Support? That’s going to be a hard sell.


Oooohhhhh boy. Another one of these 🤣

It’s not like a package thing you can sell if you’re not supporting it. Then you’re just selling hardware at an inflated price. It’s not even self-hosting at that point. Why wouldn’t you just pay a regular company for a product?


You’re thinking too hard about this.

There needs to be a source of truth. LDAP is just a simple protocol that can be backed by whatever. You’re worried about the LDAP server going down, but guess what? It’s all in flat files. Go ahead and put the server/protocol config in a git repo for config management, and back up the DB. Easy peasy.

You can also cluster your LDAP service amongst your nodes if you have three or more of them (an odd number, to ensure consensus amongst them). You can even back LDAP with etcd if you really want to go down that road.

You’re being paranoid about what happens if LDAP goes down, so solve for that. Any consumer of LDAP should be smart enough to work on cached info, and if not, it’s badly implemented. Solve for the problem you have, not for what MIGHT happen, or else you’re going to paranoid spiral like you are now because there is no such thing as a 100% effective solution to anything.


Then it’s the same situation. Find a box, set up an LDAP service, populate it, and you’re good to go. That’s it.



It’s not fine. The easiest way to rack up utilization on your server is getting hits on all the default service ports. Change the service to a nonstandard unprivileged port to avoid that somewhat. Not every bot crawler is doing port scans on random non-commercial and unidentified IP space.

What you’re describing is security through obscurity, but switching from the default port has other benefits like the above.


Not sure I can expand on it a ton more in a way that will make sense if it already doesn’t sound familiar.

Basically, there are various methods to authenticate yourself to most services. Password is usually the weakest and most susceptible to brute force and social engineering. There are certificates, key pairs, RBAC, etc. You can even set up TOTP/MFA really easily for anything that supports it these days. Just don’t leave a service hanging out on the Internet to get brute-forced by password though.

If you’re unfamiliar with this, start with SSH and key pairs. It’s probably the simplest intro and you can be up and running to try it out in seconds.
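The whole intro really is a couple of commands. A sketch (paths here are demo-only; in practice let `ssh-keygen` default to `~/.ssh/id_ed25519`, and use your real `user@host`):

```shell
# Generate a modern Ed25519 key pair with no passphrase (demo only;
# set a passphrase for real use)
ssh-keygen -t ed25519 -N '' -f ./demo_key -C 'demo key'

# Then install the public key on the server (hypothetical host):
#   ssh-copy-id -i ./demo_key.pub user@yourserver
# ...and log in with the key instead of a password:
#   ssh -i ./demo_key user@yourserver

ls -l demo_key demo_key.pub
```

Once key auth works, you can disable password auth entirely on the server side.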


You can’t. You can only do your best to make it as secure as possible, but given enough time, someone can break it.

Basic tips:

  • don’t run any services on their default ports
  • don’t allow password auth for any exposed service. Ever.
  • run intrusion detection (fail2ban for simple ssh / Crowdsec for something a little beefier)
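For the fail2ban option, a minimal jail sketch (assumes the stock `sshd` filter that ships with fail2ban; port and timings are illustrative):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
port     = 2222        # match whatever nonstandard port sshd listens on
maxretry = 3
findtime = 10m
bantime  = 1h
```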

For ssh specifically, lock down your sshd config, make sure only key-based auth is enabled, and maybe as an extra step, create a dedicated user and jail it by only allowing access to the commands you need.
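The sshd side of that boils down to a handful of directives (sketch; the port number and the `deploy` user are placeholders):

```
# /etc/ssh/sshd_config (relevant directives)
Port 2222                         # nonstandard port
PermitRootLogin no
PasswordAuthentication no         # key-based auth only
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowUsers deploy                 # hypothetical dedicated user
```

Restart sshd after editing, and test the new config from a second session before closing your current one so you don’t lock yourself out.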


Your local auth services are configured to use LDAP as a source; whatever your local auth mechanism is checks credentials, and then you’re auth’d or not. Some distros have easy-to-use interfaces to configure this, some don’t, but mostly it’s just configuring pam.d (for Linux), plus a caching daemon of some sort that keeps locally cached copies of the shadow info so you can still auth when the LDAP server can’t be contacted (if you’ve previously authenticated once). You can set up many different authentication sources and backends as well, and set their preferences, restrictions, options, etc.
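As one concrete flavor of the caching-daemon piece, an sssd.conf sketch for an LDAP backend with credential caching (server URI and base DN are placeholders):

```ini
# /etc/sssd/sssd.conf
[sssd]
services = nss, pam
domains = example

[domain/example]
id_provider = ldap
auth_provider = ldap
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
cache_credentials = true   # lets previously-authed users log in when LDAP is unreachable
```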

RHEL/Fedora examples: https://www.redhat.com/sysadmin/pam-authconfig

Debian examples: https://wiki.debian.org/LDAP/PAM


Then you don’t understand how it works with local auth services.


I think you’re missing the point of LDAP then. It’s a centralized directory used for querying information. It’s not necessarily about user information, but can be anything.

What you’re asking for is akin to locally hosting a SQL server that other machines can talk to. At that point it’s just a server. Start an LDAP server somewhere, then talk to it. That’s how it works.

If you don’t want a network service for this purpose, then don’t use LDAP. If you want a bunch of users to exist on many machines without having to manually create them, then use LDAP, or a system configuration tool that creates and keeps them all eventually consistent.
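To make the "users existing on many machines" part concrete, a user in LDAP is just a directory record like this (LDIF; DN and values are placeholders):

```ldif
dn: uid=jdoe,ou=people,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jdoe
cn: Jane Doe
sn: Doe
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jdoe
loginShell: /bin/bash
```

Every machine pointed at the directory resolves that one record, so the user "exists" everywhere without being created anywhere locally.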


Look into a GL.iNet device. They ship with OpenWRT and can run whatever you want as an integration.



Well, if you’re talking about isolated networks, that’s a different story, and a completely different scenario than what you posted about.

In that case, you could also use port forwarding and IPP via CUPS to achieve the same result without needing to build a web form. If you’re unfamiliar with CUPS, try enabling the WebUI and setting it up from there. There is an option to allow printing from the internet, which enables IPP and accepts requests from outside the source network it’s hosted on (not the global internet, because surely you have a firewall on the edge router of your home network). That effectively creates a bridge between your two networks for this specific purpose, using just that one port for printing.
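The CUPS side of that is mostly a couple of cupsd.conf directives (sketch; the allowed subnet is a placeholder for your other network):

```
# /etc/cups/cupsd.conf (relevant parts)
Listen 0.0.0.0:631              # accept IPP from other networks, not just localhost
<Location />
  Order allow,deny
  Allow from 192.168.2.0/24     # the isolated network's subnet (placeholder)
</Location>
```

With that plus a port forward for 631 on the router between the networks, clients on the other side can submit IPP jobs directly.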


Just going to say it again: IPP (Internet Printing Protocol) via CUPS solves for all of this, but you seem set on a specific thing you want to do and don’t actually need help with your current issues, so I’m not sure why you posted here.


🙄

“Pedantic Asshole tries the whole ‘You seem upset’ but on the Internet and proceeds to try and explain their way out of being embarrassed about being wrong, so throws some idiotic semantics into a further argument while wrong.”

Great headline.

Computers also don’t learn, or change state. Apparently you didn’t read the CS101 link after all.

Also, another newsflash is coming in here, one sec:

“Textbooks and course plans written by educators and professors in the fields they are experts in are not ‘peer reviewed’ and worded for your amusement, dipshit.”

Whoa, that was a big one.


I am saying that CUPS requires zero drivers or anything else from clients. It advertises the printer on the network, a device sees it, and submits a job. That’s it. Exactly what you are describing doing with a web form, except CUPS already does all of this.

Sounds like you’re not sure how it works.


CUPS already does this though, and that’s where I’m getting confused. This is the entire point of CUPS. If your issue is with drivers, then you need to configure it to just print from its own driver via a spooled queue like PostScript, or maybe IPP.

https://wiki.archlinux.org/title/CUPS/Printer_sharing



Gotta say, this question and the process explained threw me for a loop.

You have a network print server advertising an available printer, but instead of using the native printing system on a client device, you want to NOT use the CUPS server to print? That’s what it’s there for. I’m confused about why you have it then.

If your goal is just to have clients print as directly as possible to a printer…you already have that with CUPS running. I guess I’m not getting why submitting via web form is useful in this case.



It’s Boolean. This isn’t an opinion, it’s a fact. Feel free to get informed though.


Everything you just described is instruction. Everything from an input path and desired result can be tracked and followed to a conclusory instruction. That is not decision making.

Again. Computers do not make decisions.



Just use whatever works best for you. Just make sure there is a way you can easily export data from the tool in case you need to migrate to something else in the future.


The problem is that OP is asking for something to automatically make decisions for him. Computers don’t make decisions, they follow instructions.

If you have 10 similar images and want a script to delete 9 you don’t want, then how would it know what to delete and keep?

If it doesn’t matter, or if you’ve already chosen the one out of the set you want, just go delete the rest. Easy.

As far as identifying similar images, this is high-school-level programming at best with a CV model. You just run a pass through something like YOLO and have it output similarity confidences for a set of images. The problem is you need a source image to compare against. If you’re running through thousands of files comprising dozens or hundreds of sets of similar images, you need a source for comparison.


I think you’re doing this on hard mode. K8s and Longhorn get you what you want.

You could try Ceph in your setup, but you’re going to run into issues if your environments aren’t all exactly the same.


Well how would you know which ones you’d be okay with a program deleting or not? You’re the one taking the pictures.

Deduplication checking is about files that have exactly the same data payload contents. Filesystems don’t have a concept of images versus other files. They just store data objects.
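Exact-duplicate detection really is just content hashing. A sketch (shell, GNU coreutils assumed; this catches byte-identical files only, not re-encoded or resized copies of the same photo):

```shell
# List groups of byte-identical files under the current directory.
# md5sum hashes contents, sort groups identical hashes together, and
# uniq keeps only lines whose first 32 chars (the hash) repeat.
find . -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate
```

Anything this prints is safe to treat as a true duplicate; deciding which copy of a *similar-but-different* set to keep is the part no filesystem tool can do for you.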



How would you describe the “reduced attack surface” of something running in a container?


VLANs are for organizing traffic, not authorization of traffic.

VLAN tags can be pretty easily spoofed per packet.


There are entire books dating back to the ’80s that go into this, and they are still fairly valid to this day.

If you want to take things further at your own risk, look into how to use TPM and Secure Boot to your advantage. It’s tricky, but worth a delve.

For network security, you’re only going to be as effective as the attack hitting you, and self-hosting is not where you want to get tested. Cloudflare is a fine and cheap solution for that. VLANs won’t save you here, and neither will on-prem gear alone. Look into Crowdsec.

Disable any wireless comms. Use your BIOS to ensure things like Bluetooth are disabled…you get the idea. Use RFKill to ensure the OS respects the disablement of wireless devices.

At the end of the day, every single OS in existence is only as secure as the attack vectors you allow it to have. Eventually, somebody can get in. Just removing the obvious entry points is the best you can do.


Depending on your NAS ports, you may get a better direct connection over USB vs. Ethernet, but otherwise no issues.