• 0 Posts
  • 47 Comments
Joined 1Y ago
Cake day: Jun 09, 2023


Nextcloud Photos performs okay, but the interface is very ‘meh’. Plus, the mobile client’s sync is a little unstable. On iOS, there’s no background sync at all.


This seems like the correct advice. If the container is on the same host as the data, there’s no need to access the data via Samba. In fact, the container image likely doesn’t include the Samba client needed for that kind of connection.

Assuming TrueNAS allows the containers to see local data, a bind mount is the way to go.
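
For what it’s worth, here’s a minimal sketch of what that looks like with a plain Docker-style runtime (the image name and paths are placeholders, not from your setup):

# bind-mount the local dataset straight into the container instead of going over Samba
docker run -d \
  --name media-app \
  -v /mnt/tank/media:/data \
  some/image:latest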


This is good stuff. Has it been posted to the project’s GitHub (issue, discussion, etc.)?


Have you considered searching the GitHub issues?


IMO, this is a discussion that should be taking place on the project’s GitHub. I’m going to lock the comments so I don’t get any more reports about commenters’ behavior.


I imagine this would be up to the application. What you’re describing would be seen by the OS as the device becoming unavailable. That won’t really affect the OS itself, but it could cause problems with drivers and/or applications that expect the device to be available. The effect could range from “hm, the GPU isn’t responding, oh well” to a kernel panic.


I still use the label ‘homelab’ for everything in my house, including the production services. It’s just a convenient term and not something I’ve seen anyone split hairs about until now.

A home lab is an unimportant, transient environment, one that’s only a lab if nothing on it is permanent. You can have a home lab where the things you’re testing are self hosted apps. But if the server in question is meant to be permanent, like if you’re backing up the data on it, or you’ve got it on a UPS to make sure it stays available, or you’d be upset if somebody came by and accidentally unplugged it during the day, it’s not a home lab.


Tailscale is an overlay network. It will use whatever networking is available. If only one of those NICs is a gateway, then that’s what will be used to reach remote Tailnet resources.
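
If you want to confirm which path is actually carrying the traffic, something like this works (the endpoint IP is just an example):

# list peers and the direct endpoint (IP:port) each connection is using
tailscale status
# then ask the kernel which local interface/gateway reaches that endpoint
ip route get 203.0.113.45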


Leaving this post here since it’s an interesting project to keep an eye on, but the conversation isn’t constructive. So, locking the comments.


Would they have to be VLAN-aware if the switch port was already tagged AND if OP doesn’t care about untagged traffic?


With the disclaimer that Proxmox has nothing to do with this question, I’m forced to assume this is just a networking issue that happens to use OPNsense as the router. Because of that, I must advise that you seek help from a networking-focused community. There’s no clear link to self-hosting in this post, which is required per Rule 3.


If the connections are already tagged as they come into the Proxmox server, then you need only create interfaces for them in Proxmox (vmbr1, vmbr2, etc.). EDIT: if you’re doing PCI passthrough of the physical NICs, ignore this step.

Then, in OPNsense, you just add the individual interfaces. No need to assign a VLAN inside OPNsense because the traffic is already tagged on the network (per your earlier statement).

Whether or not the managed switch that tagged each port is also providing VLAN isolation, you can simply use the OPNsense firewall to provide isolation, which it does by default. You’ll also use it to allow those connections access to the fiber WAN gateway.
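
Roughly, the Proxmox side can be as simple as one bridge per physical NIC in /etc/network/interfaces (the NIC names here are examples), then attach those bridges to the OPNsense VM as additional network devices:

# /etc/network/interfaces on the Proxmox host (example NIC names)
auto vmbr1
iface vmbr1 inet manual
    bridge-ports enp2s0
    bridge-stp off
    bridge-fd 0

auto vmbr2
iface vmbr2 inet manual
    bridge-ports enp3s0
    bridge-stp off
    bridge-fd 0

Apply with “ifreload -a” (or reboot), then add vmbr1/vmbr2 as NICs on the OPNsense VM.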


You’ll need to be far more descriptive than “I can’t get it to work.” I can almost guarantee you that Fedora is not the problem.



I’m a little lost on how a container would mess with your boot loader (GRUB). That aside, most of what you’re describing has to do with the containers themselves, and those are OS-agnostic. What do the container logs tell you?
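
If it’s Docker under the hood (an assumption on my part), I’d start with:

docker ps -a                    # is the container running, or restarting/exited?
docker logs --tail 100 <name>   # last 100 lines of the container’s output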


This is really more of a home networking issue than anything having to do with self-hosting, especially since it centers on a consumer router. Please consider posting this in one of the many Lemmy home networking communities.


I’m going to allow this post, despite its age and likely obsolescence. I encourage community members to use up and down votes to judge its value to the community.


I am with you on the advantages of running it in a VM. The isolation a VM provides is really nice. Snapshots FTW.


That’s not a definitive statement that Docker is unsupported. In fact, even the Admin Guide only provides recommendations. The comment I replied to said Docker is unsupported by Proxmox. I maintain that there is no such statement from Proxmox.


Proxmox is Debian at its core, which is supported by Docker. There’s no good reason to not run Docker on the bare metal in a homelab. I’d be curious to know what statement Proxmox has made about supporting Docker. I’ve found nothing.


This community is not unmoderated, nor is it micromanaged. As has been shared in these comments, some members of this community appreciate these new release postings. If you don’t, ignore/hide it and/or downvote it and move on.


Check the ZFS pool status. You could be seeing lots of errors that ZFS is correcting.
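
Something like this (the pool name is a placeholder):

zpool status -v     # per-device read/write/checksum error counters
zpool scrub tank    # kick off a full verify of the pool named “tank”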


Quick and easy fix attempt would be to replace the HDD with an SSD. As others have said, the drive may just be failing. Replacing with an SSD would not only get rid of the suspect hardware, but would be an upgrade to boot. You can clone the drive, or just start fresh with the backups you have.
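
If you go the cloning route, ddrescue is the usual tool for copying a suspect disk (device names below are placeholders; triple-check them before running):

# copy the old HDD to the new SSD, with a map file so it can resume past bad sectors
ddrescue -f /dev/sdX /dev/sdY rescue.map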




Yeah, and it’s so comprehensive.

yarn install
yarn dev

My point stands.


If you really want to serve the self-hosting community, please improve your documentation. As someone unfamiliar with this product, I have no idea what to do with this once I clone the repo. I hunted and found a compose.yaml file, but it’s not clear if this is all I need.
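
To illustrate: the usual pattern with a compose.yaml is roughly the following, but that’s a guess on my part, which is exactly why the docs need to spell it out (any required env vars or ports are whatever your file expects):

docker compose up -d      # build/pull and start the stack in the background
docker compose logs -f    # watch the logs to confirm it came up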


Per rule #3, this seems to be a general home computing question and not centered around self-hosting. Please consider adding details to clarify how this involves self-hosting.


Except when the ONLY pi-hole is down, which was the original OP’s whole question.


Yes, your experience will be different if your DNS is being provided by another kind of DNS resolver. If you want a consistent pi-hole experience (and you can’t avoid downtime of your current pi-hole), add another pi-hole to your network and let that be your secondary DNS resolver.


Add another DNS server (1.1.1.1, for instance) to your DHCP options. Your DHCP clients will use 1.1.1.1 when the pi-hole isn’t responsive.
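
For example, if your DHCP server happens to be dnsmasq-based (an assumption; the exact setting depends on your router), it looks roughly like this, with the pi-hole’s example IP first and 1.1.1.1 as the fallback:

dhcp-option=option:dns-server,192.168.1.2,1.1.1.1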


VLANs all the way. I have several VLANs, including:

  • Virtual Servers
  • Bare metal
  • Trusted devices
  • IoT devices
  • Guest network etc.

EDIT: An alternative would be to replace or supplement Proxmox with Docker/Podman on the bare metal of the server. The container networking would be isolated by default. If you can replace your VM needs with containers, that may get you what you want.
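
To illustrate that EDIT: with Docker, each user-defined network is its own isolated bridge, so something like this (names and images are arbitrary) keeps groups of containers from reaching each other unless they share a network:

docker network create iot_net
docker network create trusted_net
docker run -d --network iot_net --name sensor-app some/iot-image
docker run -d --network trusted_net --name web-app some/web-image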


When you mention Postgres, are you saying PG specifically is better, or are you implying that the default SQLite db is what really slows things down? I ask because I’m on mariadb with no complaints, but might switch if NC is faster on Postgres.


I’ll consider it a drop-in replacement when Kubernetes can use it.


Locking the thread. Information relevant to self-hosters has already been shared. Too many reports of off-topic comments to leave this open.


Add “-vvv” to your mount command and see what else it tells you.
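
For example, for a CIFS share (server, share, and options are placeholders):

sudo mount -vvv -t cifs //nas.local/share /mnt/share -o username=me,vers=3.0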


Based on the vaultwarden wiki, the default DB engine is SQLite. Therefore, all the data is in the sqlite file(s) contained in your data volume. This backup utility seems to take that into account and only focuses on the data volume.
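
If you ever want to script it yourself, the safe way to copy a live SQLite database is SQLite’s own backup command (the path below is the typical vaultwarden data layout, so verify it against your volume):

sqlite3 /path/to/data/db.sqlite3 ".backup '/backups/vaultwarden-db.sqlite3'"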



Seriously? Do we have to create a “no posts about what’s happening on Reddit” rule?