• 6 Posts
  • 7 Comments
Joined 7M ago
Cake day: Apr 10, 2024

Best Guest VM Filesystem for NTFS Host
I am setting up a Linux server (probably NixOS) where my VM disk files will be stored on top of an NTFS partition. (Yes, I know NTFS sucks, but it has to be this way.) I am asking which guest filesystem will have the best performance for a very mixed workload. If I had access to the extra features of Btrfs or ZFS I would use them, but I have no idea how copy-on-write interacts with an NTFS-backed disk image; that is why I am asking here. I would also like some NTFS performance tuning pointers.
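
For concreteness, this is roughly the layout I have in mind; a minimal sketch, assuming the in-kernel ntfs3 driver, with made-up paths and sizes:

```
# Sketch only: paths, sizes, and options are placeholders, not a tested setup.
# Mount the existing NTFS partition with the in-kernel ntfs3 driver;
# noatime avoids an extra metadata write on every read.
mount -t ntfs3 -o noatime /dev/sdb1 /mnt/vmstore

# A fully preallocated raw image avoids growing the file on NTFS over time:
qemu-img create -f raw -o preallocation=full /mnt/vmstore/guest.img 100G

# qcow2 keeps snapshots but adds a second allocation layer on top of NTFS:
qemu-img create -f qcow2 -o preallocation=metadata /mnt/vmstore/guest.qcow2 100G
```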

Can a System Handle Brownouts/Blackouts on Only the GPU?
I am planning to build a multipurpose home server. It will be a NAS, a virtualization host, and run the typical selfhosted services. I want all of these services to have high uptime and be protected from power surges/blackouts, so I will put my server on a UPS.

I also want to run an LLM server on this machine, so I plan to add one or more GPUs and pass them through to a VM. I do not care about high uptime on the LLM server. However, this of course means that I would need a more powerful UPS, which I do not have the space for. My plan is to get a second power supply to power only the GPUs, and not put this PSU on the UPS. I will turn on the second PSU via an Add2PSU.

In the event of a blackout, this means the base system will get full power and the GPUs will get power via the PCIe slot, but they will lose the power from the dedicated power connectors. Obviously this will slow down or kill the LLM server, but will it have an effect on the rest of the system?
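
In case it helps with answers: when I test this, I plan to watch how the cards behave the moment the aux power drops. A rough sketch of the checks I would run, assuming NVIDIA cards:

```
# Rough diagnostic sketch, assuming NVIDIA GPUs passed through to the VM.
# Watch reported power draw and clocks from inside the guest every 5 seconds:
nvidia-smi --query-gpu=name,power.draw,clocks.sm --format=csv -l 5

# After cutting the second PSU, check whether the cards logged errors
# or dropped off the PCIe bus entirely:
dmesg | grep -iE 'xid|nvrm|pcie'
lspci | grep -i nvidia
```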

In the past I have used Proxmox with ZFS RAID on a basic mini PC. With ZFS RAID it syncs everything except /boot. Proxmox has a tool, “proxmox-boot-tool refresh”, which syncs /boot between drives. The ZFS kernel module can be loaded in the initramfs, so the system will boot fine even if a drive is missing.

For this project I do not plan to use ZFS, but AFAIK software RAID is now standard. Here is a popular video from Level1Techs talking about the flaws of hardware RAID: https://youtu.be/l55GfAwa8RI
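
For reference, the boot-sync workflow on that Proxmox/ZFS box looked roughly like this (a sketch from memory, not copied from the machine):

```
# Sketch from memory of the Proxmox ZFS mirror maintenance, not exact output.
# Check that every ESP is registered and in sync:
proxmox-boot-tool status

# After a kernel update or after replacing a drive, re-sync /boot to all ESPs:
proxmox-boot-tool refresh

# ZFS handles mirroring everything else; rpool is the default pool name:
zpool status rpool
```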


Is RAID1 over USB Reliable?
I have an 11th gen Framework mainboard which I would like to repurpose as a server. Unfortunately (unless I do some super janky stuff), I can only connect one drive to it over M.2, and any additional ones must be over USB. I am thinking of just using some portable hard drives and plugging them in over USB. I plan to RAID1 them and use them as boot drives and data storage, and use the M.2 slot for something unrelated. In your experience, is USB reliable enough nowadays to run a RAID array for a server like this? If it is, does it depend on the specific drive used?
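
For concreteness, the array I have in mind would look something like the sketch below; device names are placeholders and I have not tested this on the Framework board yet.

```
# Sketch only: /dev/sda and /dev/sdb are placeholders for the USB drives.
# Create a two-disk RAID1 array with mdadm:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Put a filesystem on it and persist the array definition:
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf

# USB resets and dropouts are the usual failure mode, so watch for them:
dmesg | grep -iE 'usb.*reset|i/o error'
```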

Thank you for the detailed reply. You seem very knowledgeable. I will implement your suggestions as I redesign my network.


Thanks. Some of these entries (maybe 20%) have IOMMU groups listed under “lspci_all”, but it is extremely awkward to search through. Maybe I will put a feature request in the forum to make the IOMMU groups more searchable. Still, this is likely the largest database of IOMMU groupings on the web, even if it is not easily searchable.


Thanks, but these are only lists of CPUs and motherboards that support IOMMU, not of the IOMMU groups themselves. For me (and many others) the groupings are just as important as whether there is support at all.

The groupings are defined by the motherboard. In my experience, every motherboard that supports IOMMU will put at least one PCIe slot in its own group, which is good for graphics card passthrough. However, the grouping of other devices like SATA controllers and NICs varies wildly between boards, and that is what I am interested in.
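
For anyone who wants to compare or contribute entries, dumping the groups on hardware you already have is straightforward; this is the usual community snippet for it (adjust as needed):

```
#!/usr/bin/env bash
# Common snippet: list every IOMMU group and the PCI devices inside it.
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```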


Best IOMMU Group Database?
I am looking to buy a new mini PC home server, and I want to be able to pass through my iGPU and NIC to different VMs. Where can I find a well-maintained database of IOMMU groups so that I can pick a good match for my needs? There is iommu.info, but it barely has any entries.

Thank you, that is a very good point; I never thought of that. Just to confirm: is it best practice for every connection, even one as simple as a Nextcloud server accessing an NFS server, to go through the firewall?

Then I could just have one interface per host but use the Proxmox host ID as the VLAN tag so they are all unique. I would then configure a trunk on the guest OPNsense VM, so it acts as a router on a stick.
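
As a sketch of what that would look like on the Proxmox side (VM ID 105 used as both the guest ID and the VLAN tag, purely as an example):

```
# Sketch: give VM 105 a single interface on the VLAN-aware bridge,
# tagged with its own VMID as the VLAN ID.
qm set 105 --net0 virtio,bridge=vmbr0,tag=105,firewall=1

# The OPNsense guest gets the trunk instead: same bridge, no tag,
# so it sees all VLANs and routes between them (router on a stick).
qm set 100 --net1 virtio,bridge=vmbr0
```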

I was a bit hesitant to write firewall rules based on IP addresses, as a compromised host could change its IP address. However, if each host is on its own VLAN, then I could add a firewall rule to only allow the one “legitimate” IP per VLAN. Rules per subnet would still work too.

I feel like I may have to allow a couple of CT/VMs to communicate without going through the firewall, simply for performance reasons, since none of the routing or switching would be hardware accelerated. Has that ever been a concern for you?


Search eBay for used gaming laptops. They come with a built-in UPS.


Many Network Interfaces per VM/CT - Good Practice?
I am currently setting up a Proxmox box that has the usual selfhosted stuff (Nextcloud, Jellyfin, etc) and I want all of these services in different containers/VMs. I am planning to start sharing this with family/friends who are not tech savvy, so I want excellent security. I was thinking of restricting certain services to certain VLANs, and only plugging those VLANs into the CT/VMs that need them.

Currently, each CT/VM has a network interface (for example eth0) which gives it internet access (for updates and whatnot) and an interface that I use for SSH and management (for example eth1). These interfaces are on different VLANs and I must use Wireguard to get onto the management network. I am thinking of adding another interface just for “consumption” which my users would get onto via a separate Wireguard server, and they would use this to actually use the services. I could also add another network just to connect to an internal NFS server to share files between CT/VMs, and this would have its own VLAN and require an additional interface per host that connects to it. I have lots of other ideas for networks which would require additional interfaces per CT/VM that uses them.

From my experience, using a “VLAN-aware” bridge and assigning VLANs per interface via the GUI is best practice. However, Proxmox does not support multiple VLANs per interface using this method. I have an IPv6-only network, so I could theoretically assign multiple IPs per interface and then use Linux VLANs from within the guest OS. However, this is a huge pain and I do not want to do it. It is also less secure, because a compromised VM/CT could change its VLAN tag itself.

I am asking if adding many virtual interfaces per CT/VM is good practice, or if there is a better way to separate internal networks. Or maybe I should rethink the whole thing and not use one network per use case. I am especially curious about the performance impact of multiple interfaces.
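
For reference, the bridge underneath all of this is the stock Proxmox VLAN-aware setup, roughly like the sketch below; the NIC name is a placeholder.

```
# Sketch: stock VLAN-aware bridge in /etc/network/interfaces on the Proxmox host.
# eno1 is a placeholder for the physical NIC.
cat >> /etc/network/interfaces <<'EOF'
auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
EOF
ifreload -a
```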

Thanks for the wise words. However, I have some questions:

> If you’re worried about someone malicious having access to your network connection, ssh is going to do a DNS lookup to map the hostname to an IP for the client.

Are you sure that this is true for Tor? .onion addresses never resolve to an IP address, even for the end-user client. If I were on an untrusted network, on both the client and the server side, the attacker could find out that I was using Tor, but would not learn literally anything more than that.
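
Concretely, the client never does a normal DNS lookup, because the connection goes straight into the local Tor SOCKS proxy. My ssh_config entry is roughly the sketch below; the onion hostname and user are placeholders, 9050 is the default Tor SOCKS port, and I am assuming OpenBSD netcat for the proxy command.

```
# Sketch of the client-side SSH config for reaching a bastion's onion address.
cat >> ~/.ssh/config <<'EOF'
Host bastion-onion
    HostName replace-with-the-real-address.onion
    User admin
    # OpenBSD netcat: -X 5 = SOCKS5, -x = proxy address; the hostname is
    # handed to Tor for resolution, so no DNS query leaves the machine.
    ProxyCommand nc -X 5 -x 127.0.0.1:9050 %h %p
EOF

ssh bastion-onion
```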

> And attackers have aimed to exploit things like buffer overflows in IDSes before – this is a real thing.

I would expect an IDS to be an order of magnitude larger attack surface than WireGuard, and significantly less tested. Although I could also say that about SSH, and we had the recent backdoor. However, I think it is a lot more likely that a bug will cause a security measure to be ineffective than actively turn it into a method for exfiltration or remote access. For example, with the recent SSH backdoor, if those servers had protected SSH behind WireGuard then they would have been safe even if SSH was compromised.
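
That is basically all the hardening I mean in practice: bind sshd to the WireGuard address only, so even a compromised sshd is unreachable from outside the tunnel. A minimal sketch, with a placeholder wg0 address:

```
# Sketch: make sshd listen only on the WireGuard interface address.
# 10.100.0.1 is a placeholder for the host's wg0 address.
cat >> /etc/ssh/sshd_config <<'EOF'
ListenAddress 10.100.0.1
EOF
systemctl restart sshd
```

In practice you would also make sure wg0 comes up before sshd starts, or enforce the same restriction with a firewall rule instead.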


Do I Need to Harden SSH over Tor?
cross-posted from: https://infosec.pub/post/10908807

> TLDR:
>
> If I use SSH as a Tor hidden service and do not share the public hostname of that service, do I need any more hardening?
>
> Full Post:
>
> I am planning to set up a clearnet service on a server where my normal "in-band" management will be over SSH tunneled through Wireguard. I also want "out-of-band" management in case the incoming ports I am using get blocked and I cannot access my Wireguard tunnel. This is selfhosted on a home network.
>
> I was thinking that I could have an SSH bastion host as a virtual machine, which would expose SSH as a hidden service. I would SSH into this VM over Tor and then proxy SSH into the host OS from there. As I would only be using this rarely as a backup connection, I do not care about speed or convenience of connecting to it, only that it is always available and secure. Also, I would treat the public hostname like any other secret, as only I need access to it.
>
> Other than setting up secure configs for SSH and Tor themselves, is it worth doing other hardening like running Wireguard over Tor? I know that extra layers of security can't hurt, but I want this backup connection to be as reliable as possible, so I want to avoid unneeded complexity.
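
For context, the Tor side of the bastion VM would just be the standard hidden service stanza, roughly as below; the directory name is arbitrary and the ports are the defaults.

```
# Sketch of the bastion VM's torrc: a v3 hidden service forwarding
# the onion's port 22 to the local sshd.
cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/ssh_bastion/
HiddenServicePort 22 127.0.0.1:22
EOF
systemctl restart tor

# The generated onion address (the part I would treat as a secret) lands in:
cat /var/lib/tor/ssh_bastion/hostname
```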