• 0 Posts
  • 51 Comments
Joined 1Y ago
Cake day: Jul 03, 2023



If you really think someone is wrong, don’t ask them “why, why, why” incessantly like a toddler; grow a pair of balls and just speak your mind.

And in this case I meant “your IP” as in, in the grand scheme of things, “an IP address that you own”, a VPS for instance, not necessarily the destination. Obviously you wouldn’t need to tell a firewall what its own public IP is. Have I clarified my thought to your standards?


No fucking shit? In that scenario your friend could use DDNS and you’d point your access rule at his FQDN to allow access.

Did you really ask me a billion fucking “why” questions just to come back and, what, fucking prove me wrong? Is this a good use of your time? I literally thought you were a newbie looking to understand.

Fuck off.


An access rule, for instance, to allow all traffic or specific types of traffic from a public IP address. This could be useful if you wanted to allow access to some media server from your friend’s house or something.


If OP needs a firewall rule to do any number of things that a firewall does.


Because you’re not going to set up any rules pointing to a dynamic public IP address. Otherwise you’re going to have to find a way to change the rule every time the IP changes.

The DDNS client automatically updates an A record with your public IP address any time it changes, so yeah, the rules would use the FQDN for that A record.
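
If it helps, here’s roughly what a DDNS client is doing under the hood. This is just a sketch: the update endpoint, token, and hostname are all made up, and every provider (DuckDNS, Cloudflare, etc.) has its own actual API.

```python
# Rough sketch of a DDNS update: find the current public IP, then tell the
# provider to point the A record at it. Endpoint/token/hostname are hypothetical.
import urllib.request

UPDATE_URL = "https://ddns.example.com/update"  # hypothetical provider endpoint
TOKEN = "my-secret-token"                       # hypothetical credential
HOSTNAME = "home.example.com"                   # the A record to keep current

# Ask a public "what's my IP" service for the current WAN address.
current_ip = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()

# Push the new address to the DDNS provider.
req = urllib.request.Request(
    f"{UPDATE_URL}?hostname={HOSTNAME}&ip={current_ip}&token={TOKEN}"
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

Run something like that on a schedule (or let your router’s DDNS client do it) and the FQDN stays pointed at whatever your public IP happens to be.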


To resolve whatever hostname you’ve set up for DDNS.


As long as whatever firewall you’re using is capable of resolving FQDNs in its rules, I don’t see an advantage to doing this. Maybe in the off chance that your IP changes, someone else gets the old IP, and they exploit it before the DDNS setup has a chance to update. I think that’s really unlikely.

Edit: just to add to this, I do think static IPs are preferable to DDNS, just because they’re easier, but they also typically cost money.


So here’s my two cents:

I think that if you have a bunch of services, then you should use Caddy or Apache or nginx. Doing this in Caddy and Apache is not that difficult, but I understand the hesitation (I don’t have much experience with nginx).

If you just want to get something working, you could do bookmarks with http://host.whatever.com:port and that would be Gucci.

You could also use another registrar or name server besides Cloudflare to make URL redirect records. This is like an A record, but it also includes a port. It’s not a standard type of record, but some places, like Namecheap, will do it.

Again, if you want to do it the right and best way, then I do think a reverse proxy is the way to go.
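
For anyone wondering what the reverse proxy is actually doing for you, here’s a stripped-down sketch of the idea. The paths, IPs, and ports are made up, and a real proxy like Caddy/nginx/Apache also handles TLS, headers, websockets, and so on; this is purely to show the concept.

```python
# Toy reverse proxy: one public entry point that forwards requests to internal
# host:port backends based on the path prefix. Backends here are hypothetical.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKENDS = {
    "/jellyfin": "http://192.168.1.20:8096",   # made-up internal services
    "/dashboard": "http://192.168.1.20:4000",
}

class ToyProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in BACKENDS.items():
            if self.path.startswith(prefix):
                # Strip the prefix and hand the rest of the path to the backend.
                upstream = backend + (self.path[len(prefix):] or "/")
                with urlopen(upstream) as resp:
                    body = resp.read()
                    status = resp.status
                    ctype = resp.headers.get("Content-Type", "text/html")
                self.send_response(status)
                self.send_header("Content-Type", ctype)
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "No backend mapped for this path")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), ToyProxy).serve_forever()
```

A Caddyfile site block or an nginx location block does the same mapping, just with way more polish.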


-1 for Netdata. I used it for a bit, but the configuration is not very intuitive and the docs for alerts were basically “rest of the fucking owl”, at least for the non-cloud version. I ended up just switching to Glances, which is pretty barebones but easy.

Though for OP I’d probably recommend Prometheus.


It’s at /app/public/conf.yml within the container. But I suppose you’re asking how you would pull it out? I’d probably just exec into the container interactively and copy the contents of that file. I would suggest using volumes in the future for persistent data.
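
Or skip the interactive part and copy it straight out with docker cp. The container name here is a guess (check docker ps for yours):

```python
# Copy the config out of the running container with `docker cp`.
# "dashy" is a hypothetical container name; find the real one with `docker ps`.
import subprocess

subprocess.run(
    ["docker", "cp", "dashy:/app/public/conf.yml", "./conf.yml"],
    check=True,
)
print("Config copied to ./conf.yml")
```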



Sure, and I know what you meant. But I also think that with a little creativity and compromise it’s not difficult at all to get something that’s not that long and is also easily said.


Even if they are finite, the number would be so impossibly large that for all practical purposes this would not be the case.


I use Portainer for this, though it doesn’t aggregate logs or anything. It just makes them easy to get to and read.


You don’t need a Protectli; even an old OptiPlex should be able to handle OPNsense and/or a Pi-hole. You would just want to have 2+ NICs.

Or if it needs to be low-powered, there are definitely other options.


Look, I never said you were wrong, man. Clearly you probably have a lot more experience than I do. Which is why I said what I said. Because I personally believe Proxmox is way easier for someone who is a casual like me. That’s all.

Edit: Also, though it doesn’t really matter, I don’t use LXC.


I’m going to disagree with this. I’ve set up everything on one Debian server before, and it became unwieldy to keep in check when trying new things, because you can end up with all kinds of dependencies and leftover files from shit that you didn’t like.

I’m sure this can be avoided with forethought and more so if you’re experienced with Debian, but I’m going to assume that OP is not some guru and is also interested in trying new things, and that’s why he’s asked this question.

Proxmox is perfectly fine. For many years I had an OMV VM for my file server and another server for my containers. If you don’t like what you’ve done it is much easier to just remove one VM doing one thing and switch to some other solution.


It seems cool but it’s just going to be a big headache man. I would just spin up a domain controller and maybe some workstations to play around with.


I would check out serverpartdeals as they’re pretty reputable. But for any used drive, I would make sure that you have a limited warranty or at least some sort of return policy. Once you get the drive, run badblocks on it, which will check for… bad blocks.
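
If you want to script the check, this is the usual way to run it; the device path is a placeholder, and note that the -w write test is destructive, so only do it on an empty drive:

```python
# Run a destructive badblocks write test on a brand-new/empty drive.
# /dev/sdX is a placeholder -- confirm the device with `lsblk` first, because
# the -w write test wipes everything on it. Needs root.
import subprocess

DEVICE = "/dev/sdX"  # placeholder, not a real device path

subprocess.run(
    ["badblocks", "-wsv", DEVICE],  # -w write test, -s show progress, -v verbose
    check=True,
)
```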


Looks like Jellyfin has an API. I’m sure that could be leveraged; you’d just need a way to send the API requests. You mentioned JavaScript, but I could see this being done in maybe Django instead if you’re familiar with Python. Though the learning curve for Django is a beast in itself, imo.
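
Just to show how low the barrier is, hitting the API is basically this. The URL is made up, and the endpoint and auth header are my assumptions about how the Jellyfin API looks, so check its API docs for the real ones:

```python
# Poke the Jellyfin API with plain requests. BASE_URL is hypothetical, and the
# endpoint/header names are assumptions -- verify against the Jellyfin API docs.
import requests

BASE_URL = "http://jellyfin.local:8096"   # hypothetical server address
API_KEY = "your-api-key-here"             # generated in the admin dashboard

resp = requests.get(
    f"{BASE_URL}/System/Info",            # assumed endpoint
    headers={"X-Emby-Token": API_KEY},    # assumed auth header
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```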


Seems like you could just make a simple web page for this.


.local is definitely local, but it’s primarily used with mDNS. To the second part of your question, yes, that’s correct: since it will be reserved, it will not be on any public DNS server, so even if it did look outside, it wouldn’t find anything.


Sure. Though I’m not an expert on mDNS or anything. It stands for multicast DNS. In a normal scenario, when your PC tries to connect to a local resource by its hostname, it will use a local DNS server (or its own cache). It’s like a phone book: I know who I’m looking for, I just need to look in the phone book and see what their IP is. With mDNS there is no server. You’ll have a service that responds to a particular .local hostname, like jellyfin.local (this is just an example, I don’t know if it has mDNS), but that isn’t registered on a server. Instead, when your PC wants to reach jellyfin, it will send a multicast to the other local devices and say “ok, I’m looking for some guy named jellyfin.local, which one of y’all is that?” And the jellyfin server will respond and say “yo what up, this is my IP address.”

So anyway, that only works with .local addresses. You could use .local with a regular DNS server, but then you may run into a conflict. So that would be the benefit of reserving .internal.
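
You can actually see this from the client side: the lookup call looks like any other DNS lookup, but for a .local name the OS’s mDNS resolver (Avahi on Linux, Bonjour on macOS/Windows) answers it via multicast instead of asking a DNS server. Quick sketch, with jellyfin.local again as a made-up hostname:

```python
# Resolve a .local name. The call is a normal hostname lookup, but if the OS has
# an mDNS resolver (Avahi/Bonjour), the query goes out as multicast on the LAN
# rather than to your configured DNS server. "jellyfin.local" is just an example.
import socket

try:
    addr = socket.gethostbyname("jellyfin.local")
    print(f"jellyfin.local answered from {addr}")
except socket.gaierror:
    print("No device on the LAN claimed that name (or mDNS isn't set up here)")
```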


If you want more confidence, run badblocks on the drives right after you get them. It will test the drive for any… bad blocks. Will take a while depending on your drive size.



It’s for internal resources. You can really use whatever domain you want internally, but this decision basically says to registrars: this TLD is reserved, we will never sell it to anyone to use. That way you know that if you use it internally, there’s no way a whoopsie happens where your DNS server finds a public record for this TLD.


I do the same. I just have it do a transcode job every Sunday.


I don’t have any answers to your questions; I would just like to mention that you can get complete images that do both of these things together. I use this one, but there appear to be a bunch of different ones.

https://github.com/MarkusMcNugen/docker-qBittorrentvpn

It was very easy to set up.


I don’t use Jellyfin, but I do use Emby with my Roku. The problem seemed to be with .mp4 files. I transcode all my movies to MKV and have no problems now.
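
If the streams themselves are fine and it’s only the .mp4 container the Roku doesn’t like, a straight remux (no re-encode) is usually enough. File names here are placeholders:

```python
# Remux an .mp4 into an .mkv container without re-encoding (stream copy).
# Paths are placeholders. If the codecs are the actual problem, you'd need a
# real transcode instead of "-c copy".
import subprocess

src = "movie.mp4"   # placeholder input
dst = "movie.mkv"   # placeholder output

subprocess.run(["ffmpeg", "-i", src, "-c", "copy", dst], check=True)
```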





Does self-host have to be computer-based? I would self-host a whiteboard with some dividers on it and get some cute magnets. Have something on the wall that the kiddos can look at when they walk by and be proud of and also to get them off the damn tablet for once! ( I don’t have kids)


It sounds like raid may already be configured through the bios. Have you checked that?


Does the A record for your domain match what your IP is right now?
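
Quick way to check, with a placeholder domain:

```python
# Compare what the domain's A record resolves to against the current public IP.
# "home.example.com" is a placeholder -- use the actual domain.
import socket
import urllib.request

domain = "home.example.com"
resolved = socket.gethostbyname(domain)
public_ip = urllib.request.urlopen("https://api.ipify.org").read().decode().strip()

print(f"A record points at: {resolved}")
print(f"Current public IP : {public_ip}")
print("Match" if resolved == public_ip else "Mismatch -- the record is stale")
```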


I have been running OMV for years and it is super stable. I rarely have to go in there. It has a lot of functionality through the UI. My biggest gripe is that all of the permissions options/ACLs combined with normal Linux permissions can be kind of confusing.

Unraid is also super simple, but maybe a bit too simple for some people. I don’t use anything but the core functionality in either one of these products. If you’re on the fence, you can do an Unraid trial for 30 days (technically you can stay on the trial longer, as long as your disk array does not have to be restarted).


Based on what you described, I really don’t think you need a VLAN.


I run my dockers all in one VM, with persistent volumes over NFS. That way the entire thing could take a dump and, as long as I have the NFS volume, we’re Gucci.