I am currently running most of my stuff from an Unraid box built from spare parts I had lying around. It seems like I am hitting my limit on it and just want to turn it into a NAS. Micro PCs/USFF machines are what I am planning on moving stuff to (probably a cluster of two for now, but I might expand later). Just a few quick questions:

  1. Running the arr services on a Proxmox cluster and downloading to a device on the same network: I don’t think there would be any problems, but I wanted to see what changes need to be made.

  2. Which micro PCs are you running? I am leaning towards the HP ProDesk or Lenovo 7xx/9xx series at around $200 each. I don’t really plan on getting more than 2-3 and don’t run too many things, but I would want enough overhead in case I move over to Home Assistant and spin up Windows and Linux VMs if needed.

  3. Any best practices you recommend when starting a Proxmox cluster? I’ve learned over time that it’s better to set things up correctly from the start than to try to fix them once they’re running. I wish I could coach myself from 7 years ago. It would have saved a lot of headaches lol.

@TCB13@lemmy.world

It’s 2024, avoid Proxmox and save yourself a LOT of headaches down the line.

You most likely don’t need Proxmox and its pseudo-open-source bullshit. My suggestion is to simply go with Debian 12 + LXD/LXC; it runs VMs and containers very well. Proxmox ships with an old kernel that is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.

What I would suggest you use instead is LXD/Incus.

LXD/Incus provides a management and automation layer that really makes things work smoothly - essentially what Proxmox does, but properly done. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (sometimes even live).

Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

I draw your attention to containers - not Docker, but LXC containers - because for most people full virtualization isn’t even required. In a small homelab, if you can have containers that behave like full operating systems (minus the kernel), including persistence, VMs might not be required. Either way, LXD/Incus allows for both, and you can easily mix and match, using whatever each use case requires.

For example, I virtualize the official Home Assistant image with LXD because we all know how hard that thing is to get running, while my NAS / Samba shares are just an LXD Debian 12 container with Samba4, Nginx and FileBrowser. Same goes for my torrent client, which has its own container. Another service I’ve exposed to the internet runs in a full VM for isolation.
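To give a rough idea of what that unified workflow looks like (a sketch only - the instance names are made up, and Incus uses the same syntax with incus in place of lxc):

    lxc launch images:debian/12 files            # system container for Samba/Nginx
    lxc launch images:debian/12 isolated --vm    # same command, but a full VM
    lxc list                                     # both show up in the same tooling
    lxc snapshot files before-upgrade            # snapshots work the same on both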

Like Proxmox, LXD/Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it’s about augmenting them so they become easier to manage at scale and overall more efficient. I can guarantee you that most people running Proxmox today will eventually move to Incus and never look back. It works way better: truly open source, no bugs, no BS licenses and way less overhead.

Yes, there’s a WebUI for LXD as well!

lazynooblet

Can someone explain the benefits of LXD without the opinionated crap?

@TCB13@lemmy.world

create clusters, download, manage and create OS images, run backups and restores, bootstrap things with cloud-init, move containers and VMs between servers (even live sometimes).

provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

What else do you need?

Possibly linux

Your comment is wrong in a few ways, and it suggests using LXC, which is way slower than Docker or Podman and lacks the easy setup.

Proxmox is good because it makes it easy to create VMs and set up least-privilege access. It also has a kernel as new as stable Debian’s, so no, it’s not terribly out of date.

Suggesting that someone install Debian + Docker Compose would make more sense, though that isn’t a good fit for more advanced setups and doesn’t allow for a lot of flexibility.

@TCB13@lemmy.world

This was a discussion about management solutions such as Proxmox and LXD, NOT about containerization technologies like Docker or LXC. Also, Proxmox uses the Proxmox VE kernel, which is derived from Ubuntu’s.

Your comment makes no sense whatsoever. I’m not even sure you know the difference between LXD and LXC…

@node815@lemmy.world

Since you didn’t include a link to the source for your recommendation:

https://github.com/canonical/lxd

I’ve been on Proxmox for six or so months with very few issues and have found it to work well in my case, but I do appreciate seeing another alternative and learning about it too! I specifically like Proxmox because it gives my machines, such as Home Assistant, an actual IP on my router’s subnet. So instead of something like 192.168.122.1, they get a nice 192.168.1.X/24 address that fits my range, which makes it easier for me to direct outside traffic to them. Does this do that as well? Based on your screenshots, maybe not, IDK.

@jgkawell@lemmy.world

Thanks for the link! I’ve been running Proxmox for years now without any of the issues like the previous commenter mentioned. Not that they don’t exist, just that I haven’t hit them. I really like Proxmox but love hearing about alternatives. One day I might get bored and want to set things up new with a different stack and anything that’s more free/open is better in my book.

@TCB13@lemmy.world

it gives me an actual IP on my router’s subnet for my machines

Yes - you configure LXD/Incus’ networking to use a bridge, and it will simply delegate the task to your router instead of providing IPs itself. One of my nodes actually runs the two setups at the same time: I’ve got a bunch of containers on an internal range, and then my Home Assistant VM gets an IP from my router.
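As a hedged example of the bridged setup (it assumes you’ve already created a host bridge - here called br0 - and the profile name is arbitrary):

    lxc profile create bridged
    lxc profile device add bridged eth0 nic nictype=bridged parent=br0 name=eth0
    lxc launch images:debian/12 lan-vm --vm --profile default --profile bridged

Instances launched with that profile get their DHCP lease straight from the router, while anything on the default profile stays on the internal bridge.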

@drkt@feddit.dk

I’m going to experiment with this! I would love to get rid of Proxmox, it has so many problems and I only run containers anyway.

Is there an easy way to migrate containers? I’m not well versed in LXC despite using it for years.

@TCB13@lemmy.world

I’ve no idea if there’s a reasonable migration path, but after running Proxmox for years I wouldn’t even want stuff that was tainted by it ever running on my pristine LXD nodes.

@drkt@feddit.dk

It’d be a pain in the rear to rebuild everything. This Proxmox machine is the center of everything, even housing the disk that all the config backups are on. I should probably not be doing that…

@TCB13@lemmy.world

If you’re on a recent Proxmox setup that uses LXC containers, you might be able to export those containers using lxc-snapshot or some other method and move them to the LXD node… It may work just fine, or it may require other adjustments.

Personally, the move was worth it. I’m not gonna lie, a ton of my most complex setups are built with cloud-init and Ansible, so moving from one solution to another was mostly a matter of running those again and watching the machines get re-created.
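For the curious, the cloud-init side of that is roughly this (a sketch - the file contents, image and instance name are just examples):

    cat > cloud-init.yaml <<'EOF'
    #cloud-config
    packages:
      - nginx
    runcmd:
      - systemctl enable --now nginx
    EOF

    # Launch a cloud-enabled image with that user-data; re-running this on a
    # new node recreates the same machine.
    lxc launch images:debian/12/cloud web01 \
      --config=cloud-init.user-data="$(cat cloud-init.yaml)"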

@drkt@feddit.dk

I’ll look at LXC snapshots after the hardware upgrade I’ve got lined up, thanks!

How well does it handle backups, and are they deduplicated incremental ones like Proxmox Backup Server makes?

@TCB13@lemmy.world

I take regular snapshots of my containers live and sometimes restore them - no issues there. De-duplication and incremental features are (mostly) provided by the storage backend: if you use BTRFS or ZFS for your storage pool, every container is a volume that you can snapshot, roll back or export at any time. LXD also provides tools for those operations: https://documentation.ubuntu.com/lxd/en/latest/howto/instances_backup/
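Roughly what that looks like from the CLI (instance and file names are just examples):

    lxc snapshot mycontainer nightly       # instant, copy-on-write on ZFS/BTRFS
    lxc restore mycontainer nightly        # roll back to that snapshot
    lxc export mycontainer backup.tar.gz   # full export as a tarball
    lxc import backup.tar.gz               # re-import here or on another node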

That makes sense, but no remote backups over the network? Local snapshots I don’t really count as backups.

Possibly linux

For your Proxmox cluster, shoot for three devices. With three devices you can do high availability, which is a bonus but not something I thought to do when I built my setup.

And you don’t have quorum issues any time a system is down. (I regret making mine a cluster.)

@nem@sopuli.xyz

You can set up a QDevice on a Pi or something.

Possibly linux

Can you? That would be really cool

@nem@sopuli.xyz

Yeah, you can run it on anything, and it’s great for even-numbered clusters.
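The usual recipe is something like this (hedged sketch - the Pi’s IP is an example; check the Proxmox docs for the details):

    # On the Pi (or any always-on box):
    apt install corosync-qnetd

    # On every Proxmox node:
    apt install corosync-qdevice

    # On one node, register the external vote (IP is an example):
    pvecm qdevice setup 192.168.1.50
    pvecm status    # should now show one extra expected vote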

Possibly linux

Can you explain how?

I need to re-IP both of my Proxmox hosts and ran into a wall due to quorum. This could get me over that hump.

That being said, it was a failed experiment to put them in a cluster. I don’t use any of the cluster functionality and would love to destroy the cluster config without having to rebuild the Proxmox hosts.

You don’t have to rebuild the Proxmox hosts to remove the cluster. I made the same mistake sometime last year and was able to remove the cluster, and each of the Proxmox machines now works as it should standalone. I don’t recall the exact steps, but it was very easy. A quick search for “proxmox remove cluster” gave me this result, and from what I recall these are the steps I followed as well: https://rostislavjadavan.com/posts/promox-delete-cluster
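From memory, the procedure boils down to roughly this on the node you want back as a standalone host (a hedged sketch - back up /etc/pve before touching anything):

    systemctl stop pve-cluster corosync
    pmxcfs -l                      # start the cluster filesystem in local mode
    rm /etc/pve/corosync.conf
    rm -rf /etc/corosync/*
    killall pmxcfs
    systemctl start pve-cluster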


I have looked high and low for how to delete a cluster and never stumbled on this page, thanks! Almost everything I found said I had to destroy Proxmox and reinstall it.

cooljimy84

With the arr services, try to limit network and disk throughput, because if either is maxed out for too long (like when moving big Linux ISO files) it can cause weird timeouts and failures.

Edgarallenpwn
creator

I believe I would be fine on the network part; I’m just guessing that writing them to an SSD cache drive on my NAS would be fine? I’m currently writing to the SSD and have a move script run twice a day to shift things to the HDDs.

cooljimy84

Should be fine. I’m writing to spinning rust, so if I were playing back a movie it could cause a few “dad, the TV is buffering again” problems.

@Lem453@lemmy.ca

I have a setup similar to what you want.

My NAS is a low-powered Atom board that runs Unraid.

My Docker containers run on a Ryzen box with Proxmox. I don’t have a cluster, just the one node.

In Proxmox I run a VM that hosts all my Docker containers.

I use Portainer to run all my services as stacks, so the arr stack has all the arrs together in one Docker Compose file. The Compose files are stored in Gitea (one of the few things I still run on Unraid), and every time I push a change to the repo, I press one button in Portainer and it pulls down the latest Compose file.

For storage on Proxmox I use ZFS with SSDs only. The only thing that needs HDDs is the media on my Unraid box.

When a container needs to access the media, it uses an NFS mount to the Unraid server.

Everything else lives on the ZFS pool on Proxmox. I have automatic ZFS snapshots every hour. Borg also takes hourly incremental backups of the ZFS pool and sends them to the Unraid server locally and to BorgBase for off-site backup.

The whole setup works very well and is very stable.

The flexibility of Proxmox means that things that work better in a VM (like HAOS) I can install as a VM. Everything else is Docker.

@tristan@aussie.zone

My current setup is 3x Lenovo M920q (soon to be 4), all in a Proxmox cluster, along with a QNAP NAS with 20GB of RAM and 4x 8TB drives in RAID 5.

The specs on each M920q are: i5-8500T, 32GB RAM, 256GB SATA SSD, 2TB NVMe SSD, 1GbE NIC.

On each Proxmox machine I have a Docker server VM in Swarm mode, and each of those VMs has the same NFS mounts pointing to the NAS.

On the NAS I have a normal Docker installation, which runs my databases.

On the swarm I have over 60 Docker containers, including the arr services, Overseerr and two Deluge instances.

I have no issues with performance or read/write or timeouts.

As one of the other posters said, point all of your arr services to the same mount point as it makes it far easier for the automated stuff to work.

Put all the arr services into a single stack (or at least on a single network); that way you can point them at each other by container name rather than IP. For example, to tell Overseerr where Sonarr is, you’d just say http://sonarr:8989. It makes life much easier.
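Something along these lines (a minimal sketch - images, ports and paths are examples, not my exact stack):

    # docker-compose.yml
    services:
      sonarr:
        image: lscr.io/linuxserver/sonarr:latest
        volumes:
          - /mnt/nas/media:/data       # same mount point for every service
      overseerr:
        image: lscr.io/linuxserver/overseerr:latest
        ports:
          - "5055:5055"
    # Both services sit on the stack's default network, so Overseerr reaches
    # Sonarr at http://sonarr:8989 by container name.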

As for Proxmox, the biggest thing I’ll say from my experience: if you’re just starting out, make sure you set its IP and hostname to what you want right from the start… It’s a pain in the ass to change them later. So if you’re planning to use VLANs or something, set them up first.

Pic of my setup

@phanto@lemmy.ca

Do two NICs. I have a bigger setup, it’s all running on one LAN, and it’s starting to run into problems. Going with a two-network setup from the outset would probably have saved me a lot of grief.

Possibly linux

Can you explain what benefit that would bring?

Edgarallenpwn
creator

So, dual NICs on each device, and set up another LAN on my router? Sorry if this seems like a dumb question, but I just want to make sure.

Why would you need two NICs unless you’re planning on having a Proxmox VM be your router?

This is exactly my setup on one of my Proxmox servers - a second NIC connected as my WAN adapter to my fibre internet. OPNsense firewall/router uses it.

I think two NICs is required to do VLANing properly? Not 100% sure.

@Live2day@lemmy.sdf.org

No, you can do more than 1 VLAN per port. It’s called a trunk

@DeltaTangoLima@reddrefuge.com

Nope - Proxmox lets you create VLAN trunks, just like a physical switch.

Edit: here’s one of my Proxmox server network configs.

@monkinto@lemmy.world

Is there a reason to do this over just giving the NIC for the VM/container a VLAN tag?

deleted by creator

You still need to do that, but you need the Linux bridge interface to have VLANs defined as well, as the physical switch port that trunks the traffic is going to tag the respective VLANs to/from the Proxmox server and virtual guests.

So, vmbr1 maps to physical interface enp2s0f0. On vmbr1, I have two VLAN interfaces defined - vmbr1.100 (Proxmox guest VLAN) and vmbr1.60 (physical infrastructure VLAN).

My Proxmox server has its own address in vlan60, and my Proxmox guests have addresses (and vlan tag) for vlan100.
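In /etc/network/interfaces that layout looks roughly like this (the addresses here are placeholders, not my real ones):

    auto vmbr1
    iface vmbr1 inet manual
            bridge-ports enp2s0f0
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 60 100

    auto vmbr1.60
    iface vmbr1.60 inet static
            address 192.168.60.2/24

    auto vmbr1.100
    iface vmbr1.100 inet static
            address 192.168.100.2/24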

The added headfuck (especially at setup) is that I also run an OPNsense VM on Proxmox, and it has its own vlan interfaces defined - essentially virtual interfaces on top of a virtual interface. So, I have:

  • switch trunk port
    • enp2s0f0 (physical)
      • vmbr1 (Linux bridge)
        • vmbr1.60 (Proxmox server interface)
        • vmbr1.100 (Proxmox VLAN interface)
          • virtual guest nic (w/ vlan tag and IP address)
        • vtnet1 (OPNsense “physical” nic, but actually virtual)
          • vtnet1_vlan[xxx] (OPNsense virtual nic per vlan)

All virtual guests default route via OPNsense’s IP address in vlan100, which maps to OPNsense virtual interface vtnet1_vlan100.

Like I said, it’s a headfuck when you first set it up. Interface-ception.

The only unnecessary bit in my setup is that my Proxmox server also has an IP address in vlan100 (via vmbr1.100). I had it there when I originally thought I’d use Proxmox firewalling as well, to effectively create a zero trust network for my Proxmox cluster. But, for me, that would’ve been overkill.

Huh, cool, thank you! I’m going to have to look into that. I’d love for some of my containers and VMs to be on a different VLAN from others. I appreciate the correction. 😊

No worries mate. Sing out if you get stuck - happy to provide more details about my setup if you think it’ll help.

Thanks for the kind offer! I won’t get to this for a while, but I may take you up on it if I get stuck.

You want at least three NICs if you’re going to do that. I usually use the one on the mobo for all the other services and management, then a dedicated port each for LAN and WAN on a separate NIC.

stown

Security. Keeping publicly accessible and locally accessible services on different networks.

@DeltaTangoLima@reddrefuge.com

Hmmm - not really, any more. I have everything on the same VLAN, with publicly accessible services sitting behind an nginx reverse proxy (using Authelia and 2FA).

The real separation I have is the separate physical interface I use for WAN connectivity to my virtualised firewall/router - OPNsense. But I could also easily achieve that with VLANs on my switch, if I only had a single interface.

The days of physical DMZs are almost gone - virtualisation has mostly superseded them. Not saying they’re not still a good idea, just less of an explicit requirement nowadays.

I haven’t done it, but I believe Proxmox allows for creating a “backplane” network which the servers can use to talk directly to each other. This would be used for Ceph and server migrations, so that the large amount of network traffic doesn’t interfere with traffic to the VMs and the rest of your network.

You’d just need a second NIC in each server and a switch to create the second network, then statically assign IPs. This network wouldn’t route anywhere else.
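Sketched out, it might look like this (interface name and subnet are examples; the migration option lives in datacenter.cfg):

    # /etc/network/interfaces on each node - second NIC, static IP, no gateway
    auto eno2
    iface eno2 inet static
            address 10.10.10.11/24

    # /etc/pve/datacenter.cfg - push migration traffic over that subnet
    migration: secure,network=10.10.10.0/24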

In Proxmox there’s no need to assign it to a physical NIC. If you want a virtual network that goes as fast as possible, you’d create a bridge and assign it to nothing. If you assign it to a NIC, then since it wants to use SR-IOV, it would only go as fast as the NIC can go.

@t3chskel@lemmy.world

Consider checking out XCP-ng. I’ve been testing it for a few days and I’m really enjoying it. Seems less complicated and more flexible than Proxmox but admittedly I’m still learning and haven’t even tried multiple servers yet. I would suggest watching some YouTube videos first. Good luck!

@jkrtn@lemmy.ml

I want to like XCP-ng. Unfortunately my primary use case is VMs or containers working with attached USB devices. On Xen it seems like an absolute nightmare to pass through USB or PCI devices other than GPUs (as vGPUs).

Even on Proxmox it has been frustratingly manual.

I’m planning to try out k8s generic device plugins. I don’t really need VMs if containers will cooperate with the host’s USB. I’m sure that will be a bit of a nightmare on its own and I will be right back to Proxmox.

I hope someone will tell me I am wrong and USB can be easy with Xen. I do prefer XCP-ng over Proxmox in many other ways.

@t3chskel@lemmy.world

Here’s their documentation. The tip suggests it may have been harder in the past but it doesn’t seem too bad now. Hopefully this is configurable in Xen Orchestra in the future.

https://docs.xcp-ng.org/compute/#️-usb-passthrough

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Git: Popular version control system, primarily for code
HTTP: Hypertext Transfer Protocol, the Web
IP: Internet Protocol
LXC: Linux Containers
NAS: Network-Attached Storage
SSD: Solid State Drive mass storage
ZFS: Solaris/Linux filesystem focusing on data integrity
k8s: Kubernetes container management package
nginx: Popular HTTP server

8 acronyms in this thread; the most compressed thread commented on today has 8 acronyms.


@eerongal@ttrpg.network

Running arr services on a proxmox cluster to download to a device on the same network. I don’t think there would be any problems but wanted to see what changes need to be done.

I’m essentially doing this with my setup. I have a box running Proxmox and a separate networked NAS device. There aren’t really any changes, per se, other than pointing the *arr installs at the correct mounts. One thing to note: I would make sure that your download, processing, and final locations are all within the same mount point, so that you can take advantage of atomic moves.
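For example, a layout like this (paths are just illustrative) keeps everything on one filesystem, so imports are instant renames/hardlinks instead of copies:

    /mnt/media/
    ├── downloads/    # download client writes here
    ├── tv/           # Sonarr imports land here
    └── movies/       # Radarr imports land here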

archomrade [he/him]

I second this. It took me a really long time to figure out how to properly mount network storage on Proxmox VMs/LXCs, so just be prepared and determine the configuration ahead of time. Unprivileged LXCs have different root user mappings, and you can’t mount an SMB share directly inside an unprivileged container (someone correct me if I’m wrong here), so if you go that route you will need to fuss a bit with user maps.

I personally have a VM running Docker for the arr suite and separate LXCs for my Samba share and streaming services. It’s easy to coordinate mount points with the compose.yml files, but it’s still tricky getting the network storage mounted for read/write within the Docker containers and LXCs.
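To give an idea of what that fussing looks like on Proxmox (a hedged sketch - the CT ID, paths and UID are examples): mount the share on the host, bind-mount it into the container, and map the one UID you care about straight through.

    # /etc/pve/lxc/101.conf
    mp0: /mnt/nas/media,mp=/data
    # Map uid/gid 1000 in the CT straight through to 1000 on the host, and keep
    # everything else shifted as usual for an unprivileged container.
    lxc.idmap: u 0 100000 1000
    lxc.idmap: g 0 100000 1000
    lxc.idmap: u 1000 1000 1
    lxc.idmap: g 1000 1000 1
    lxc.idmap: u 1001 101001 64535
    lxc.idmap: g 1001 101001 64535

The host also has to allow that mapping, i.e. root needs a matching entry (root:1000:1) in /etc/subuid and /etc/subgid.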

@DeltaTangoLima@reddrefuge.com

I have two Proxmox hosts and two NASes. All are connected at 1Gbps.

The Proxmox hosts maintain the real network mounts - NFS in my case - for the NAS shares. Inside each CT that requires them, these are mapped to mount points with identical paths, e.g. /storage/nas1 and /storage/nas2.

All my *arr (and downloader) CTs are configured to use the exact same paths.

It’s seamless. NZBGet or Deluge download to the same parent folders that my *arr CTs work with, which means atomic renames/moves are pretty much instant. The only real network traffic is from the download CTs to the NASes.

Edit: my downloader CTs download directly to the NAS paths - no intermediate disk at all.
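Mechanically it’s just a host mount plus a bind mount into each CT with the same target path (a sketch - the share, paths and CT ID are examples):

    # On the Proxmox host:
    mount -t nfs nas1:/volume1/media /storage/nas1   # or a storage/fstab entry
    # Bind it into a CT at the identical path:
    pct set 110 -mp0 /storage/nas1,mp=/storage/nas1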

Use ZFS when prompted - it opens up some features and is a bitch to change later. I don’t understand why it’s not the default.

From what I’ve read, disk wear-out on consumer drives is a concern when using ZFS for Proxmox boot drives. I don’t know if the issues are exaggerated, but to be safe I ended up picking up some used enterprise SSDs off eBay for that reason.

This seems to be a “widely believed fact” but I haven’t seen any real data to back it up.

Possibly linux

I personally use both Btrfs and ZFS. For the main install I went with Btrfs RAID 1, as it is simpler and doesn’t have as much overhead.

I was a little worried about stability, but I’ve had no issues and was able to swap a dead SSD without trouble. It’s been going for almost two years now.
