• 4 Posts
  • 24 Comments
Joined 1Y ago
Cake day: Jun 07, 2023


Is keeping everything inside of a local “walled garden”, then exposing the minimum amount of services needed to a WireGuard VPN not sufficient?

There would be no attack surface from the WAN other than the single port opened for WireGuard.
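As a sketch of what that looks like: a minimal WireGuard server config needs only one UDP port open on the WAN (the keys, addresses, and port below are placeholders, not real values).

```ini
# /etc/wireguard/wg0.conf -- hypothetical server config; keys,
# addresses, and port are placeholders
[Interface]
Address = 10.8.0.1/24
ListenPort = 51820          # the single UDP port forwarded from WAN
PrivateKey = <server-private-key>

[Peer]
# one roaming client allowed into the tunnel
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32    # only this tunnel IP is accepted from the peer
```

Everything behind the tunnel stays invisible to the internet; firewall rules on the wg0 interface can then narrow which internal services the peer is allowed to reach.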



Please do. I totally stole it >:D




I try to balance things between what I find enjoyable/worth the effort, and what ends up becoming more of a recurring headache.


I have a somewhat dated (but decently spec'd) NUC running Proxmox, and it's the backbone of my home lab. No issues to date.



I was using a WD PR4100, but I upgraded to a Synology RS1221+ and it’s been fantastic :)


I have a beefed-up Intel NUC running Proxmox (with my self-hosted services inside its VMs) and a standalone NAS that I mount on the necessary VMs via fstab.

I really like this approach, as it decouples my storage and compute servers.
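A sketch of such an fstab entry, assuming the NAS exports the share over NFS (the IP, export path, and mountpoint here are hypothetical):

```
# /etc/fstab -- hypothetical NFS mount of the NAS media share
192.168.1.50:/volume1/media  /mnt/media  nfs  defaults,_netdev,noatime  0  0
```

The `_netdev` option delays the mount until networking is up, which avoids boot hangs when the NAS comes up slower than the VM.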


4 currently, with 8GB RAM and no passthrough for transcoding (direct play only)


That’s a good point. My virtualization server is running on a (fairly beefy) Intel NUC, and it has 2 Ethernet ports. One is for management, and I plug my VLAN trunk into the other, which is where all the traffic goes. I will limit the connection speed of the client that is pulling large video files in hopes the line does not saturate, and long term I’ll try to get a different box where I can separate the VLANs onto their own ports instead of glomming them all onto one port.


Very nice of you to offer. I made a few changes (routing my problem Jellyfin client directly to the Jellyfin server and cutting out the NGINX hop, as well as limiting the bandwidth of that client in case the line is getting saturated).

I’ll try to report back if there are any updates.
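For clients that still go through the NGINX hop, a per-connection bandwidth cap can be sketched like this (the vhost name, upstream IP, and rate are assumptions for illustration, not the actual config):

```nginx
# Hypothetical vhost: cap streaming throughput per connection so a
# single client can't saturate the trunk. All values are placeholders.
server {
    listen 80;
    server_name jellyfin.homelab;

    location / {
        limit_rate 12m;  # ~12 MB/s per connection (bytes per second)
        proxy_pass http://192.168.10.20:8096;
        proxy_set_header Host $host;
    }
}
```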



Good point. I just checked and streaming something to my TV causes IO delay to spike to like 70%. I’m also wondering if maybe me routing my Jellyfin (and some other things) through NGINX (also hosted on Proxmox) has something to do with it… Maybe I need to allocate more resources to NGINX(?)

The system running Proxmox has a couple Samsung Evo 980s in it, so I don’t think they would be the issue.


I typically prefer VMs just because I can change the kernel as I please (containers such as LXC use the host kernel). I know it’s overkill, but I have the storage/memory to spare. Typically I’m at about 80% memory utilization under full load.


Yeah, I’ve been looking into it for some time. It seems to normally be an issue on the client side (Nvidia Shield): the playback will stop randomly and then restart, and this may happen a couple of times (no one really seems to know why). I recently reinstalled that server on a new VM with a fresh OS (Debian) and nothing else running on it, and the only client that seems able to cause the crash is the TV running the Shield. It’s hard to find a good Jellyfin client for the TV, it seems :(


Proxmox VMs hanging when one has issues
I've noticed that sometimes when a particular VM/service is having issues, they all seem to hang. For example, I have a VM hosting my DNS (Pi-hole) and another hosting my media server (Jellyfin). If Jellyfin crashes for some reason, the internet in the entire house also goes down, because DNS seems to be unreachable for a minute or so while the Jellyfin VM recovers. Is this expected, and is there a way to prevent it?


I mean, both of them have common ownership. Maybe just correlation and not causation, but I’ve definitely noticed these types of changes post acquisition.




Best practice for external facing service
On my network, I have quite a few VLANs. One for work, one for IoT devices, one for security cameras and home automation, one for guests, etc. I typically keep everything inward-facing, with the only way to access them being my OpenVPN connection (which can only see specific services on specific VLANs). Recently, I thought of hosting a little Lemmy instance, since I have a couple domains I'm not doing much with. I know I can just expose that one system/NGINX proxy and the necessary ports via WAN, but is it best practice to put external-facing things on their own VLAN? I was thinking of just throwing it on my IoT VLAN, but if it were compromised, it would have access to other devices on that VLAN, because (to my knowledge) you cannot prevent communication between clients within the same VLAN.

I have a beefy-spec'd Intel NUC that’s running Proxmox. A few of the VMs mount shares from my RS1221+ for things like media (Jellyfin), etc.

On Proxmox I run

  • Jellyfin (media server)
  • Home Assistant (home automation)
  • PiHole (DNS)
  • Ansible (for keeping everything up to date and applying bulk actions)
  • NGINX Proxy Manager (so I can access things locally with a nice URL)
  • VM to host my Discord bots
  • Whoogle (search engine)
  • AMP game server

Probably missing a few, but that’s the gist.
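The Ansible update job mentioned above could be sketched as a minimal playbook (the `homelab` group name and apt-based guests are assumptions):

```yaml
# Hypothetical playbook: bulk-update all Debian-based VMs in one run.
- hosts: homelab
  become: true
  tasks:
    - name: Update apt cache and upgrade all packages
      ansible.builtin.apt:
        update_cache: true
        upgrade: dist
```

Run against an inventory listing the VMs, this replaces logging into each guest to apt upgrade by hand.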


The safest (but not as convenient) way is to run a VPN, so that the services are only exposed to the VPN interface and not the whole world.

In pfSense I specify which services my OpenVPN connections can access (just an internal-facing NGINX for the most part), and then I can just go to jellyfin.homelab, etc., when connected.

Not as smooth as having NGINX outward-facing, but it gives me peace of mind knowing my network is locked down.
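Getting jellyfin.homelab to resolve only inside the network comes down to local DNS records; in Pi-hole these can be dnsmasq-style entries pointing the friendly names at the internal NGINX box (the file path, IP, and hostnames below are placeholders):

```
# /etc/dnsmasq.d/02-homelab.conf -- hypothetical local records
address=/jellyfin.homelab/192.168.10.5
address=/amp.homelab/192.168.10.5
```

Since these records exist only on the internal resolver, the names simply don't resolve for anyone outside the VPN.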


Self hosted services with SSL cert
So, I have a few services (Jellyfin, Home Assistant, etc.) that I am running, and I have been accessing them via their IPs and port numbers. Recently, I started using NGINX so that I could set up entries in my Pi-hole and access my services via some made-up hostname (jellyfin.home, homeassistant.home, etc). This is working great, but I also own a few domains and thought of adding an SSL cert to them as well, which I have seen several tutorials on, and it seems straightforward. My questions:

- Will there be any issues running SSL certs if all of my internal services are inward-facing, with no WAN access? My understanding is that when I try to go to jellyfin.mydomainname.com, it will do the DNS lookup, which will point to a local address for NGINX on my network, which the requesting device will then point to and get the IP of the actual server.
- Are there risks of anything being exposed externally if I use an actual CA for my cert? My main goal is to keep my home setup off of the internet.
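One relevant detail for the first question: because the services are inward-facing, an HTTP-01 challenge can't reach them, but a DNS-01 challenge can get a real CA cert without opening any inbound ports. A hedged sketch using certbot's Cloudflare DNS plugin (the provider, credentials path, and domain are assumptions, not a prescription):

```shell
# Hypothetical: issue a wildcard cert via DNS-01; no WAN exposure needed.
# Requires the certbot-dns-cloudflare plugin and an API token on disk.
certbot certonly \
  --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d '*.mydomainname.com'
```

A wildcard cert also keeps individual internal hostnames out of public Certificate Transparency logs; only the wildcard entry appears there.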

Understanding proxies
Hey all,

So I've been playing with NGINX so that I can reference my self-hosted services internally by hostname rather than by IP and port. I set some custom entries in my Pi-hole, set up the proxies in NGINX, and boom, all is working as expected. I can access Jellyfin via jellyfin.homelab, AMP via amp.homelab, etc. I wanted to have all of these internally facing, because I don't really have a need for them outside of my network, and really just wanted the convenience of referencing them.

Question 1) If I wanted to add SSL certs to my made-up `homelab` domain, how hard would that be?

Question 2) When accessing something like Jellyfin via jellyfin.homelab, is all traffic then going through my NGINX VM? Or is NGINX just acting as a sort of lookup which passes on the correct IP and port information?
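The two possibilities in question 2 can be sketched side by side (hostnames and IPs are placeholders). With a standard reverse proxy (`proxy_pass`), every byte flows through the NGINX VM; NGINX only acts as a "lookup" if it issues a redirect instead:

```nginx
# Reverse proxy: every request and response passes through nginx.
server {
    listen 80;
    server_name jellyfin.homelab;
    location / {
        proxy_pass http://192.168.1.30:8096;
    }
}

# Redirect: nginx answers once, then the client talks to the target
# directly (and the browser's address bar changes to the real IP).
server {
    listen 80;
    server_name amp.homelab;
    return 301 http://192.168.1.31:8080$request_uri;
}
```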