𝓢𝓮𝓮𝓙𝓪𝔂𝓔𝓶𝓶
  • 14 Posts
  • 276 Comments
Joined 1Y ago
Cake day: Jun 26, 2023


I’d feel a lot better about this if it was a “supporter” tag, not this “unlicensed” crap.


That’s the business edition, not the community one. There’s no limit I’m aware of in the community edition.



Enough people have already commented on the “proxy at the VPS” solution. Another option is to configure routing and NAT on the VPS and have it route over the WG tunnel.

Requires PostUp/PreDown scripts that modify your routing tables on the WG endpoint, something like the sketch below.
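A minimal wg-quick config for the VPS side might look like this. The addresses, interface names, and forwarded port are all placeholders, not details from any real setup, and you’d also need `net.ipv4.ip_forward=1` on the VPS:

```
[Interface]
Address = 10.0.0.1/24
PrivateKey = <vps-private-key>
ListenPort = 51820
# Accept forwarded traffic from the tunnel, DNAT an inbound service port down
# to the home peer, and masquerade traffic heading into the tunnel.
PostUp = iptables -A FORWARD -i wg0 -j ACCEPT; iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2; iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PreDown = iptables -D FORWARD -i wg0 -j ACCEPT; iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2; iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

[Peer]
PublicKey = <home-public-key>
AllowedIPs = 10.0.0.2/32
```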


I made the plunge about a year ago. Spectrum assigns me a prefix, but routing was spotty at best. In the end, after all the troubleshooting pointed to the ISP as the problem, I gave up and stuck with what works: IPv4.


DDoS protection is going to depend on the VPS. But for most services you could spin up a pretty lean Debian VM running a proxy like Nginx Proxy Manager and run that over the tunnel. Something like opnsense seems like overkill.



I feel this post so hard. I’m always about 5 seconds from going Office Space on my printer.



However, if my VPS is compromised, wouldn’t the attacker still be able to access my local network?

That depends on your setup. I terminate my wireguard tunnels on my opnsense router, where I have explicit fw rules for what the vps hosts can talk to.


I’m using CheckMk for pretty much all of that. Personally I found zabbix to have too much overhead.


No, but less power-hungry than a full desktop. It’s a good trade-off between power and performance.


If you want the small footprint and power costs are a concern, look for a second-hand mini computer. Dell, Lenovo, Intel NUC.

Something like this as an example.





Thanks I may give it a try if I’m feeling daring.


Media should exist in its own dataset with a tuned record size of 1MB

Should the vm storage block size also be set to 1MB or just the ZFS record size?
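For reference, the recordsize tuning being quoted is a per-dataset ZFS property — a sketch with placeholder pool/dataset names:

```
# Create a dedicated media dataset with a 1M record size:
zfs create -o recordsize=1M tank/media
# Or tune an existing dataset; only data written afterwards uses the new size:
zfs set recordsize=1M tank/media
```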


That cheat sheet is getting bookmarked. Thanks.


I’m referring to this.

… using grub to directly boot from ZFS - such setups are in general not safe to run zpool upgrade on!

$ sudo proxmox-boot-tool status
Re-executing '/usr/sbin/proxmox-boot-tool' in new private mount namespace..
System currently booted with legacy bios
8357-FBD5 is configured with: grub (versions: 6.5.11-7-pve, 6.5.13-5-pve, 6.8.4-2-pve)

Unless I’m misunderstanding the guidance.


Proxmox is using ZFS. Opnsense is using UFS. Regarding the record size I assume you’re referring to the same thing this comment is?

You can always find some settings in your opnsense vm to migrate log files to tmpfs which places them in memory.

I’ll look into this.


I’ve done a bit of research on that and I believe upgrading the zpool would make my system unbootable.


I didn’t pass any physical disks through, if that’s what you mean. I’m using that system for more than OMV. I created disks for the VM like I would any other VM.


This was really interesting, thanks for the info.


Thanks for all the info. I’ll keep this in mind if I replace the drive. I am using refurb enterprise HDDs in my main server. Didn’t think I’d need to go enterprise grade for this box but you make a lot of sense.


I’ve been happily running Open Media Vault in a Proxmox VM for some time now.


I may end up having to go that route. I’m no expert but aren’t you supposed to use different parameters when using SSDs on ZFS vs an HDD?
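From what I’ve read, the SSD-specific knobs are mostly sector alignment and TRIM — a hedged sketch with placeholder pool name and device path:

```
# ashift=12 aligns the pool to 4K sectors; autotrim passes frees through to the SSD.
zpool create -o ashift=12 -o autotrim=on tank /dev/disk/by-id/<ssd-id>
# autotrim can also be flipped on for an existing pool:
zpool set autotrim=on tank
```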


I thought cheap SSDs and ZFS didn’t play well together?


I’m starting to lean towards this being an I/O issue but I haven’t figured out what or why yet. I don’t often make changes to this environment since it’s running my Opnsense router.

root@proxmox-02:~# zpool status
  pool: rpool
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
  scan: scrub repaired 0B in 00:56:10 with 0 errors on Sun Apr 28 17:24:59 2024
config:

        NAME                                    STATE     READ WRITE CKSUM
        rpool                                   ONLINE       0     0     0
          ata-ST500LM021-1KJ152_W62HRJ1A-part3  ONLINE       0     0     0

errors: No known data errors

I’m trying to think of anything I may have changed since the last time I rebooted the opnsense VM. But I try to keep up on updates and end up rebooting pretty regularly. The only things on this system are the opnsense VM and a small pihole VM. At the time of the screenshot above, the opnsense VM was the only thing running.

If it’s not a failing HDD, my next step is to try and dig into what’s generating the I/O to see if there’s something misbehaving.
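If it comes to that, my plan is something along these lines (package names noted where the tool isn’t in a default install):

```
# Per-vdev I/O on the pool, refreshed every 5 seconds:
zpool iostat -v rpool 5
# Processes ranked by accumulated disk I/O (iotop package):
iotop -oPa
# Device-level utilization and wait times (sysstat package):
iostat -x 5
```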


It’s an old Optiplex SFF with a single HDD. Again, my concern isn’t that it’s “slow”. It’s that performance has rather suddenly tanked and the only changes I’ve made are regular OS updates.


While you’re waiting for that, I’d also look at the smart data and write the output to a file, then check it again later to see if any of the numbers have changed, especially reallocated sectors, pending sectors, corrected and uncorrected errors, stuff like that.

That’s a good idea. Thanks.
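Something like this, assuming the disk shows up as /dev/sda (filenames here are just examples):

```
# Snapshot the SMART attributes now...
smartctl -A /dev/sda > smart-$(date +%F).txt
# ...then repeat later and compare:
diff smart-2024-04-28.txt smart-2024-05-05.txt
```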


I would start by making sure you have good recent backups ASAP.

I do.

Could be as simple as a service logging some warnings due to junk incoming traffic, or an update that added some more info logs, etc.

Possible. It’s a really consistent (and stark) degradation in performance tho and is repeatable even when the opnsense VM is the only one running.



Kinda feel dumb that my answer is no. Let me do that and report back.


Proxmox Disk Performance Problems
I've started encountering a problem that I could use some assistance troubleshooting. I've got a Proxmox system that hosts, primarily, my Opnsense router. I've had this specific setup for about a year. Recently, I've been experiencing sluggishness and noticed that the I/O wait is through the roof. Rebooting the Opnsense VM, which normally takes only a few minutes, is now taking upwards of 15-20. The entire time, my I/O wait sits between 50-80%. The system has 1 disk in it that is formatted ZFS.

I've checked dmesg and the syslog for indications of disk errors (this feels like a failing disk) and found none. I also checked the SMART statistics and they all "PASSED". Any pointers would be appreciated.

![Example of my most recent host reboot.](https://lemmy.procrastinati.org/pictrs/image/968a942c-1083-4950-9771-4041b8b0a253.png)

Edit: I believe I've found the root cause of the change in performance and it was a bit of shooting myself in the foot. I've been experimenting with different tools for log collection and the most recent one is a SIEM tool called Wazuh. I didn't realize that upon reboot it runs an integrity check that generates a ton of disk I/O. So when I rebooted this Proxmox server, that integrity check was running on Proxmox, my pihole, and (I think) Opnsense concurrently. All against a single consumer-grade HDD.

Thanks to everyone who responded. I *really* appreciate all the performance tuning guidance. I've also made the following changes:

1. Added a 2nd drive (I have several of these lying around, don't ask), converting the ZFS pool into a mirror (sketch below). This gives me both redundancy and should improve read performance.
2. Configured a 2nd storage target on the same zpool with compression enabled and a 64k block size in Proxmox. I then migrated the 2 VMs to that storage.
3. Since I'm collecting logs in Wazuh, I set Opnsense to use RAM disks for /tmp and /var/log.

Rebooted Opnsense and it was back up in 1:42 min.
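For anyone curious, converting a single-disk pool to a mirror is roughly a one-liner; the new device path is a placeholder, and for a boot disk you'd also need to replicate the partition layout and bootloader first:

```
# Attach a second disk to the existing vdev; ZFS resilvers automatically.
zpool attach rpool ata-ST500LM021-1KJ152_W62HRJ1A-part3 /dev/disk/by-id/<new-disk>
# Watch the resilver progress:
zpool status rpool
```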

Yes. That’s why it’s called the Internet of Things. Every “smart”, wifi-connected device you have uses that connection to communicate with a remote server. The app on your phone does the same to control the light.

Check out Zigbee for an example of local control.


Zabbix & Grafana for supervision

@foremanguy92_@lemmy.ml personally I prefer CheckMk over Zabbix. I found Zabbix to be an absolute pig. Both are on the complex side. But really, you probably just need something like Uptime Kuma.


That very much depends on what you want to do.

The self hosted mailing list has a directory of apps they track.

There’s also the Awesome Self-Hosted list.


I’ve got PBS setup to keep 7 daily backups and 4 weekly backups. I used to have it retaining multiple monthly backups but realized I never need those and since I sync my backups volume to B2 it was costing me $$.

What I need to do is shop around for a storage VM in the cloud that I could install PBS on. Then I could have more granular control over what’s synced instead of the current all-or-nothing approach. I just don’t think I’m going to find something that comes in at B2 pricing and reliability.
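For illustration, that kind of all-or-nothing sync can be done with rclone — not claiming this is the exact tooling in use, and the paths and bucket name are placeholders:

```
# Preview what would change, then mirror the backups volume to B2:
rclone sync /mnt/backups b2:my-backup-bucket/backups --dry-run
rclone sync /mnt/backups b2:my-backup-bucket/backups --transfers 8
```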


A newbie should be running AIO in Docker, which, in my experience, has been pretty solid.
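At the time of writing, the AIO README documents a single docker run for the master container, roughly along these lines — check the project docs for the current flags before copying anything:

```
docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 80:80 --publish 8080:8080 --publish 8443:8443 \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
  nextcloud/all-in-one:latest
```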


Change tracking ideas
I'd like to start doing a better job of tracking the changes I make to my homelab environment. Hardware, software, network, etc. I'm just not sure what path I want to take and was hoping to get some recommendations. So far the thoughts I have are:

- A change history sub-section of my wiki. (I'm not a fan of this idea.)
- A ticketing system of some sort. (I tried this one and it was too heavy. I'd need to find a simple solution.)
- A Nextcloud task list.
- Self-host a GitLab instance, make a project for changes, and track them with issues. Move what stuff I have in GitHub to this instance and kill my GitHub projects. (It's all private stuff.)

I know that several of you are going to say "config as code" and I get it. But I'm not there yet and I want to track the changes I'm making today.

Thanks

Backblaze B2 Reporting
I can't seem to find anything so I was hoping someone here has run into this. Does anyone know if there's a way to get reporting on a per-application-key or per-bucket basis? I periodically get threshold alerts (usually the download cap) but those don't give me any idea of what utilization is triggering the alert. The reporting I can find is pretty rudimentary and account-wide.

How to backup object storage for NextCloud
I'm experimenting with running NextCloud (AIO) on a VPS with a B2 bucket as the primary storage. I want to compare performance against running it on my home server (esp. when I'm remote) and get an idea of the kinds of costs I'd rack up doing it. As part of the setup I have configured the built-in borg backup, but it has this caveat:

> Be aware that this solution does not back up files and folders that are mounted into Nextcloud using the external storage app - but you can add further Docker volumes and host paths that you want to back up after the initial backup is done.

The primary storage is external but I'm not using the "external storage" app. So, I have 2 questions.

1. Does it back up object storage if it's primary (my gut says no)?
2. If no, what's a good way to back up the B2 bucket?

I've done some research on this topic and I'm kinda coming up empty. I would normally use restic, but restic doesn't work in that direction (B2 -> local backup). It looks like rclone can be used to mount a B2 bucket. One idea I had was to mount it, read-only, and let AIO/borg back up that path with the container backups. Has anyone done this before? Any thoughts?
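Rough shape of that idea, if anyone wants to sanity-check it — the bucket name and mountpoint are placeholders:

```
# Mount the B2 bucket read-only so borg can treat it as a local path:
rclone mount b2:my-nextcloud-bucket /mnt/b2 --read-only --daemon
```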

Retain source IP when proxying through VPS
So, I'm experimenting with running a Mailu instance on my home server but proxying all of the relevant traffic through a WireGuard tunnel to my VPS. I'm currently using NGINX Proxy Manager streams to redirect the traffic and it all seems to be working. The only problem is that all connections appear to come from the VPS. It's really screwing with the spam filter. I'm trying to figure out if there's a way to retain the source IP while still tunneling the traffic.

The only idea I have, and I don't know if it's a bad one, is to use iptables to NAT the ports inbound on the VPS and, on my home router (opnsense), route all outbound traffic from that IP back through the VPS instead of the default gateway. This way I shouldn't *need* to rewrite the destination port on the VPS side. It sounds a bit hacky tho, and I'm open to better suggestions. Thanks

**Edit:** I think I need to clarify my post as there's some confusion in the comments. I would like the VPS to masquerade/NAT for my Mailu system, accessible over a WG tunnel, so that inbound traffic to the SMTP server reports its actual public IP instead of the IP of the VPS host that's currently proxying. After giving that some thought, I think the only way this could work would be if I treated the VPS as the upstream gateway for all traffic. My current setup is below:

[VPS] <-- wg --> [opnsense] <--eth--> [mailu]

I can source-route all traffic from Mailu to the VPS, via WG, but I don't know how to properly configure iptables to do the masquerading, as I'd only want to masquerade that one IP. I'm not concerned about Mailu not having internet access when WG is down, and frankly, I think I'd prefer it didn't.

**Edit 2:** I got the basic masquerading working. I can ping public IPs and traceroute verifies it's taking the correct path.

```
iptables -A FORWARD -i wg0 -s <mailu-ip> -j ACCEPT
iptables -t nat -A POSTROUTING -o eth0 -s <mailu-ip> -j MASQUERADE
```

I think I got the port forwarding working.

```
iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 25 -j DNAT --to-destination <mailu-ip>
iptables -A FORWARD -p tcp -d <mailu-ip> --dport 25 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
```

- tcpdump on the VPS eth0 shows traffic in.
- tcpdump on the VPS wg0 shows the NATted traffic.
- tcpdump on mailu shows both inbound and outbound traffic.
- tcpdump on opnsense shows 2-way traffic on the VLAN interface mailu is on.
- tcpdump on opnsense only shows inbound, but not outbound, traffic on the WG interface.

I think the problem is now in opnsense but I'm trying to suss out why. If I initiate traffic on mailu (i.e. a ping or a web request) I see it traversing the opnsense WG interface, but I do not see any of the return SMTP traffic.

**Edit 3:** I found the missing packets. They're going out the WAN interface on the router; I do not know why. Traffic I initiate from the mailu box gets routed through the WG tunnel as expected, but replies to traffic sourced from the internet and routed over the WG tunnel are going out the WAN. The opnsense rule is pretty basic. Source: <mailu>, Dest: any, gateway: wg.

**Edit 4:** I ran out of patience trying to figure out what was going on in opnsense and configured a direct tunnel between the mailu VM and the VPS. That immediately solved my problems, although it's not the solution I was striving for. It was pointed out to me in the comments that my source routing rule likely wasn't configured properly. I'll need to revisit that later. If I was misconfiguring it, I'd like to know that.
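As an aside for anyone hitting the same wall: on a plain Linux router, the source-based routing that Edit 3 is after is usually two lines of policy routing. This is a generic sketch of the technique (IP, interface, and table number are placeholders), not the opnsense fix:

```
# Route everything sourced from the mailu address out the WG tunnel,
# independent of the main routing table:
ip route add default dev wg0 table 100
ip rule add from 192.168.10.25 table 100
```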

[Fixed] Weird Wireguard issues I could use some help with.
I've hit a wall with a weird WireGuard issue. I'm trying to connect my phone (over cell) to my home router using WireGuard and it will not connect.

- The keys are all correct.
- The IPs are all correct.
- The ports are open on the firewall.
- My router has a public IP, no CGNAT.

The router is opnsense. I have a tcpdump session going and when I attempt a connection from the phone I see 0 packets on that port. I am able to ping the router and reach the web server sitting behind it from the phone.

I have a VPS that I configured WG on and the phone connects fine to that. I also tested configuring the VPS to connect to my home router and that also works fine. I'm really at a loss as to where to go next.

Edit 2: I completely blew out the config on both sides and rebuilt it from scratch, using a different UDP port, and it all appears to be working now. Thanks for everyone's help in tracking this down.

Edit: It was requested I provide my configs.

## opnsense:

```
####################################################
# Interface settings, not used by `wg`             #
# Only used for reference and detection of changes #
# in the configuration                             #
####################################################
# Address = 172.31.254.1/24
# DNS =
# MTU =
# disableroutes = 0
# gateway =

[Interface]
PrivateKey =
ListenPort = 51821

[Peer]
# friendly_name = note20
PublicKey =
AllowedIPs = 172.31.254.100/32
```

## Android:

```
[Interface]
Address = 172.31.254.100/32
PrivateKey =

[Peer]
AllowedIPs = 0.0.0.0/32
Endpoint = :51821
PublicKey =
```

Non-Realtek NIC Recommendations
Since switching to Proxmox I've noticed an issue with intermittent network connectivity on my VMs. I've narrowed it down to the Realtek-based PCI NIC (Rosewill RNG-407-Dualv2) I currently have installed. Basically, when I see a ton of these in my syslog:

> Dec 14 13:55:37 server kernel: r8169 0000:09:00.0 enp9s0: rtl_rxtx_empty_cond == 0 (loop: 42, delay: 100).

It means it's time to reboot. I did some digging on it and it appears to be a kernel driver issue. Unless someone in this community has encountered this and knows of a good fix (other than rebooting), I'd rather just ditch Realtek and replace the NIC.

Can anyone recommend a 2-port PCIe (x1) card that has good driver support under Linux and (hopefully) won't cost me a small fortune? Bonus points if it's 2.5GbE capable.

Self-Hosting Email - Software Recommendations?
I'm going to start off by saying I know that self-hosting email can be a bad idea. That being said, I'm trying to de-googlify my life and would like to experiment. I have a VPS and a domain that doesn't get used for much at the moment. I'd like to try configuring a full mail suite on that domain and see if I can make it work.

I've been looking into the various options on [this list](https://awesome-selfhosted.net/tags/communication---email---complete-solutions.html) and was hoping for some feedback on options that people have used. If this works out it would be fairly low volume. Ideally I'd like a full solution that includes web administration if at all possible. I think I'm leaning towards mailcow but it might be overkill.

I'd appreciate any input on what has or hasn't worked for people. Thanks.

[Help] Dropped connections to VM with multiple interfaces.
I'm not sure where to start troubleshooting this. I segregated my network into a few different VLANs (servers, workstations, wifi, etc...). I have VMs and LXC containers running in Proxmox, routing is handled by Opnsense, and I have a couple tplink managed switches. All of this is working fine except for 1 problem.

I have a couple systems (VM and LXC) that have interfaces on multiple VLANs. If I SSH to one of these systems, on the IP that's on the same VLAN as the client, it works fine. If I SSH to one of the other IPs it'll initially connect and work, but within a minute or so the connection hangs and times out. I tried running ssh in verbose mode and got this, which seems fairly generic:

```
debug3: recv - from CB ERROR:10060, io:00000210BBFC6810
debug3: send packet: type 1
debug3: send - WSASend() ERROR:10054, io:00000210BBFC6810
client_loop: send disconnect: Connection reset
debug3: Successfully set console output code page from 65001 to 65001
debug3: Successfully set console input code page from 65001 to 65001
```

I realize the simple solution is to just use the IP on the same subnet, but my current DNS setup doesn't allow me to provide responses based on client subnet. I'd also like to better understand (and potentially solve) this problem. Thanks

Log Collection
I'm in the process of re-configuring my home lab and would like to get some help figuring out log collection. My setup was a hodgepodge of systems/OSes using rsyslog to send syslogs to a syslog listener on my qnap, but that's not going to work anymore (partly because the qnap is gone).

My end goal is going to be as homogeneous as I can manage: mostly Debian 12 systems (physical and VM) and Docker containers. Does anyone know of a FOSS solution that can ingest journald and syslog? And is it even possible to send Docker logs to a log collector? Thanks
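On the Docker half of the question: the daemon can ship container logs natively via a logging driver. A minimal /etc/docker/daemon.json sketch pointing the syslog driver at a collector (the address is a placeholder):

```
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.lan:514"
  }
}
```

Note the driver only applies to containers created after the change; existing containers keep their original driver until recreated.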

What do you use to document your home lab?
My home lab has a mild amount of complexity and I'd like to practice some good habits about documenting it. Stuff like: what each system does, the OS, any notable software installed and, most importantly, any documentation around configuration or troubleshooting.

i.e. I have an internal SMTP relay that uses a letsencrypt SSL cert that I need to use the DNS challenge to renew. I've got the steps around that sitting in a Google Doc. I've got a couple more Google Docs like that.

I don't want to get super complicated but I'd like something a bit more structured than a folder full of Google Docs. I'd also like to pull it in-house. Thanks

Edit: I appreciate all the feedback I've gotten on this post so far. There have been a lot of tools suggested and some great discussion about methods. This will probably be my weekend now.
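For context, the kind of snippet worth capturing here is tiny — assuming certbot is doing the renewal (an assumption; the post doesn't say) and with a placeholder domain:

```
# Manual DNS-01 renewal: certbot prints the TXT record to publish, then verifies it.
certbot certonly --manual --preferred-challenges dns -d smtp.example.com
```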

[Help] Redeploy Portainer Edge Agent without losing config?
cross-posted from: https://lemmy.procrastinati.org/post/27277

> According to the documentation, to change the Portainer address an Edge agent talks to, you have to redeploy the Edge agent. If I understand properly, this is going to assign the agent a new ID and will blow away the configuration.
>
> Does anyone know how to do this while retaining the stack configurations?

Advice/poll on switching away from Ubuntu for my VM host.
First off, I know ultimately I'm the only person who can decide if it's worth it. But I was hoping for some input from your collective experience.

I have a server I built currently running Ubuntu 22.04. I'm using KVM/QEMU to host VMs and have recently started exploring the exciting world of Docker, with a VM dedicated to Portainer. I manage the VMs with a mix of virt-manager via xRDP, CLI tools, and (if I'm feeling extra lazy) Cockpit. Disks are spindles currently in software RAID 10 (md), and I use LVM to assign volumes to the KVM VMs. Backups are via a script I wrote that snapshots the LVM volume and backs it up to B2 via restic (sketched below). It all works. Rather smoothly, except when it doesn't 😀.

I've been planning an HD upgrade and was considering using that as an excuse to start over. My thoughts are to either install Debian and continue with my status quo, or to give Proxmox a try. I've been reading a lot of positive comments about it here and I have longed for one unified web interface to manage my VMs. My main concerns are:

1. Backups. I want to be able to back up to B2 but from what I've read I don't see a way to do that. I don't mean backup to a local repository and then sync that to B2. I'm talking direct to B2.
2. Performance. People rave about ZFS, but I have no experience. Will I get at least equivalent performance out of ZFS, and how much RAM will that cost me? Do I even need ZFS or can I just continue to store VMs the way I do today? Having never used Proxmox to compare, I'm really on the fence about this one.

I'd appreciate any input. Thanks.
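The backup script mentioned above boils down to something like this sketch — VG/LV names, snapshot size, and the bucket are placeholders, and B2 credentials come from the usual restic environment variables:

```
#!/bin/sh
# Snapshot the VM's LVM volume, stream the image into restic's B2 backend, clean up.
lvcreate --snapshot --size 5G --name vm1-snap /dev/vg0/vm1
dd if=/dev/vg0/vm1-snap bs=4M status=progress \
  | restic -r b2:my-bucket:vms backup --stdin --stdin-filename vm1.img
lvremove -f /dev/vg0/vm1-snap
```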

Need some advice on my NAS situation.
I need some advice on my NAS situation. I've got a hand-me-down QNap TS-453BU-RP that's been working fine for the majority of the time I've had it. It has 4x2TB spindles and 2 M.2 SSDs. The M.2s are on a PCI expansion card.

The past couple months I've been having problems where one of the M.2 drives will randomly disconnect. A cold restart doesn't seem to help (power down, remove power, wait, power back up). If I pull the thing open and reseat the drives, that fixes it the majority of the time. It's usually slot 2, but slot 1 has disappeared before as well. I've tried swapping the drives around and I've tried replacement drives. None of it seems to make a difference. I've also reseated the PCI card several times.

So my questions are 2-fold:

First, any ideas on this issue that I maybe haven't thought of? I don't *think* it's the drives. Could be the PCI card or the riser but I'm not sure how to go about identifying that.

Second, assuming I'm SoL with the QNap, what are my replacement options? One of the things I like about the qnap is it can do tiered storage between the SSDs and the spinning disks, presenting that all as one pool. Do any of the alternatives (TrueNAS, Unraid, etc...) have a similar feature?

This thing is a 1U chassis and I don't have a rack. Space is a premium so any suggestions that have a minimal footprint would be ideal (i.e. not a full tower case sitting next to my PC). Price is also going to be a major consideration. Thanks