I’d expected this but it still sucks.

Really glad I made the transition from ESXi to Docker containers about a year ago. Easier to manage too and lighter on resources. Plus upgrades are a breeze. Should have done that years ago…

@kalpol@lemmy.world

I need full on segregated machines sometimes though. I’ve got stuff that only runs in Win98 or XP (old radio programming software).

Do you work for a railroad? That sounds too familiar.

@kalpol@lemmy.world

Lol no, just old radios. My point is just that my requirements are pretty widely varied.

tyablix

I’m curious what radio software you use that has these requirements?

@kalpol@lemmy.world

Old Motorolas, they really hate users.

@eerongal@ttrpg.network

I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox 7-8 years ago, and honestly it's been WAY better than ESXi. The migration process was pretty easy too; I was able to bring the images over from ESXi and load them directly into Proxmox.

If you're running a basic Linux install you can use KVM for some VMs. Or use Proxmox for a good ESXi replacement.
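For the plain-Linux KVM route mentioned above, here's a minimal sketch using libvirt's `virt-install`. Package names are for Debian/Ubuntu, and the VM name, sizes, and ISO path are placeholders; adjust for your distro and hardware.

```shell
# Install KVM/libvirt tooling (Debian/Ubuntu package names)
sudo apt install qemu-kvm libvirt-daemon-system virtinst

# Create a VM (all values here are illustrative placeholders)
sudo virt-install \
  --name testvm \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /path/to/debian.iso \
  --os-variant debian11

# Day-to-day management goes through virsh
sudo virsh list --all
sudo virsh start testvm
```

libvirt also gives you a default NAT network out of the box, so basic VM connectivity works without hand-writing iptables rules.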

@TCB13@lemmy.world

Or… LXD/Incus.

Might be time to look into Proxmox. There’s a fun weekend project for you!

@TCB13@lemmy.world

Save yourself time and future headaches and try LXD/Incus instead.

No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.

@TCB13@lemmy.world

If you're already using LXC containers, why are you stuck with their questionable open-source model and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience on a pure Debian system? It boots way faster, fails less, and is more open.

Proxmox will eventually kill the free / community version; it's just a question of time, and they don't offer anything particularly compelling over what LXD/Incus offers.

I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

@TCB13@lemmy.world

> comment history keeps taking aim at Proxmox. What did you find questionable about them?

Here's the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, multiple clusters of around 10-15 nodes each. I've been around for all the wins and fails of Proxmox; I've seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it is built upon Ubuntu's kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top. I've been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own breakage after they introduced bugs.

At some point not even simple things such as OVPN worked fine under Proxmox's kernel. Realtek networking was probably broken more often than it was working, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you'd get a half-broken system that boots and passes a few tests but randomly fails a few days later. Their startup is slow, slower than any other solution's; it even includes daemons that exist just to ensure other things are running (because most of them don't even start properly with the system on the first try).

Proxmox is considerably cheaper than ESXi, so people use it in some businesses like we did, but it's far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and company, was born. LXC got stable and widely used, and LXD added the higher-level hypervisor management, networking, clustering, etc. Since the Incus fork, we now have all that code truly open-source, with its creators working on the project free of Canonical's influence.

There's no reason to keep using Proxmox, as LXC/LXD got really good in the last few years. If you're already running on LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?

> I'm not uninterested in genuinely better alternatives, but I don't have a compelling reason to go to the level of effort required to replace Proxmox.

Well, if you have some time to spare on testing stuff, try LXD/Incus and you'll see. Maybe you won't replace all your Proxmox instances, but you'll end up running a mixed environment like I did for a long time.

@DeltaTangoLima@reddrefuge.com

OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).

I'm interested in having a play with LXD/Incus, but that'll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to roll back from).

Something I need to ponder…

XCP-ng it is for you, sir.

> why are you stuck with their questionable open-source model and ass of a kernel

Because you don’t care about it being open source? Just working (and continuing to work) is a pretty big motivating factor to stay with what you have.

@TCB13@lemmy.world

> Because you don't care about it being open source?

If you're okay with the risk of one day ending up like the people running ESXi are now, then you should be fine. Let's just say that not "ending up with your d* in your hand" when you least expect it is also a pretty big motivating factor to move away from Proxmox.

Though I don't see how, in a self-hosting community on Lemmy, someone would bluntly state what you just did.

@fuckwit_mcbumcrumble@lemmy.world

What makes you think that can't happen to something just because it's open source? And of all companies, this one comes from Canonical.

It’s “Selfhosted” not “SelfHostedOpenSourceFreeAsInFreedom/GNU”. Not everyone has drank the entire open source punch bowl.

@TCB13@lemmy.world

Fear not, my friend. Go get yourself into LXC/LXD/Incus, as it can do both containers and full virtual machines. It is available in Debian's repositories and is fully and truly open-source.
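For anyone wanting to try this, a minimal sketch of getting Incus running on Debian follows. Exact package availability depends on your release (it is in Debian 12 via backports and in newer releases natively), so treat this as illustrative.

```shell
# Install Incus from Debian's repositories
sudo apt install incus

# Interactive first-time setup: storage pool, network bridge, etc.
sudo incus admin init
```

After `incus admin init`, you get a managed bridge with NAT/DHCP by default, so containers and VMs have connectivity without manual firewall work.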

@TCB13@lemmy.world

So… you replaced a proprietary solution with a free one that depends on proprietary components and a proprietary distribution mechanism? Go get yourself into LXC/LXD/Incus, which does both containers and VMs and is available in Debian's repositories. Or Podman, if you really like the mess that Docker is.

@kalpol@lemmy.world

I’ve seen you recommending this here before - what’s its selling point vs say qemu-kvm? Does Incus do virtual networking without having to straight up learn iptables or whatever? (Not that there is anything wrong with iptables, I just have to choose what I can learn about)

@TCB13@lemmy.world

> Does Incus do virtual networking without having to straight up learn iptables or whatever?

That's just one of the things it does. It goes much further: it can create clusters; download, manage, and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Another big advantage is that it provides a unified experience for dealing with both containers and VMs: no need to learn two different tools / APIs, as the same commands and options are used to manage both. Even profiles defining storage, network resources, and other policies can be shared and applied across both containers and VMs.
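The unified-experience point above can be sketched with a few commands. Instance names, image aliases, and the bridge name here are placeholders; the commands themselves are standard Incus CLI usage.

```shell
# Same verb for containers and VMs; only the --vm flag differs
incus launch images:debian/12 web01
incus launch images:debian/12 db01 --vm

# Managed networking: Incus creates a bridge with NAT and DHCP for you,
# no hand-written iptables rules required
incus network create demobr0
incus launch images:debian/12 app01 --network demobr0

# One lifecycle toolset for both instance types
incus list
incus exec web01 -- apt update
incus snapshot create web01 before-upgrade
incus move web01 --target other-server   # relocate within a cluster
```

The same profile mechanism (`incus profile`) then lets you apply shared storage and network policies across containers and VMs alike.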
