I’d expected this but it still sucks.

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
ESXi VMWare virtual machine hypervisor
HA Home Assistant automation software
~ High Availability
LTS Long Term Support software version
LXC Linux Containers
NAS Network-Attached Storage
Plex Brand of media server package
RPi Raspberry Pi brand of SBC
SBC Single-Board Computer
ZFS Solaris/Linux filesystem focusing on data integrity

8 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

[Thread #506 for this sub, first seen 12th Feb 2024, 20:15] [FAQ] [Full list] [Contact] [Source code]

@zorflieg@lemmy.world

High Availability not Home Assistant.

yeehaw

Depends which community you ask 🤣. It was definitely high availability first though.

sj_zero

The most important thing for everyone to remember is that if you don’t fully own the thing such that you can install and run it without asking permission, or if it isn’t simply free and open source, then it can go away at any time.

Possibly linux

Hopefully everyone migrated.

Lettuce eat lettuce

XCP-ng or Proxmox if you need a bare-metal hypervisor. Both are open source, powerful, mature, and have large communities with lots of helpful documentation.

I think you can migrate ESXi VMs directly to XCP-ng. I moved to it about 6 months ago and it has been solid. Steep learning curve, but really great once you get the hang of it, and enterprise-grade if you need stuff like HA clustering and complex virtual networking solutions.

I managed to migrate all mine to libvirt when I dumped ESXi. They dropped support for the old Opteron I was running at the time, so I couldn’t upgrade to v7. Welp, Fedora Server does just as well and I’ve been moving the VM-hosted services into containers anyway.

Ofc… well, we’ll see what IBM does with Red Hat. Probably something like this eventually. They simply can’t help themselves.

Oh no!

Anyway…

Been on Proxmox for a couple of years and it’s been great.

@Xartle@lemmy.ml

I’m shocked I tell you; simply shocked…


@Damage@slrpnk.net

I wonder what’s the future of VMware Player.

Possibly linux

Not bright…

RedFox

What about virtualizing Windows?

Only thing I know of is Hyper-V, but I don’t think it’s widely used, and MS is pushing Azure $tack, right?

@Socket462@feddit.it

I tried virtualizing Windows on Proxmox and it went smoothly.

Anything based on KVM does great

@dan@upvote.au

Just make sure you install the virtio drivers.
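For anyone new to this: on Proxmox the VirtIO devices are picked in the VM config, and the Windows installer needs the driver ISO attached as a second CD drive before it can see VirtIO disks. A rough sketch; the VM ID 100 and the "local" storage name are hypothetical:

```shell
# Hypothetical VM ID 100; assumes virtio-win.iso was already
# downloaded into the "local" storage's ISO directory.
qm set 100 --scsihw virtio-scsi-pci                     # VirtIO SCSI controller
qm set 100 --net0 virtio,bridge=vmbr0                   # VirtIO network adapter
qm set 100 --ide2 local:iso/virtio-win.iso,media=cdrom  # driver ISO for the installer
```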

yeehaw

Hyper-V is definitely widely used…

Lots of hypervisors support Windows, e.g. Proxmox.

To be pedantic - KVM is the hypervisor. Proxmox is a wrapper to it.

Anarch157a

Being even more pedantic, KVM is the hypervisor, QEMU is a wrapper around it and Proxmox provides a management interface to it.

yeehaw

Fair enough

@Crogdor@lemmy.world

There are two kinds of datacenter admins, those who aren’t using VMWare, and those who are migrating away from VMWare.

What does this mean for me? I have a Lenovo 82k100lqus.

@kalpol@lemmy.world
creator

Doesn’t mean anything right now if you are running ESXi, except you can’t reinstall ESXi unless you kept the image and you won’t get ESXi updates.

I looked it up, and it’s part of VMware? I don’t run that, so *shrug*

Possibly linux

*proxmox*

@TCB13@lemmy.world

*proxmox*

*LXD/Incus*

Possibly linux

LXD is not really usable for anything as it is very slow

@TCB13@lemmy.world

LXD uses QEMU/KVM/libvirt for VMs, so the performance is at least the same as any other QEMU solution like Proxmox. The real difference is that LXD has a much smaller footprint and doesn’t depend on 400+ daemons, so it boots and runs management operations much faster. The virtualization tech is the same and the virtualization performance is the same.
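For example, the container/VM split is just a flag on the same command (image alias and instance names here are illustrative):

```shell
# Same tool, same workflow; --vm switches from an LXC system
# container to a QEMU/KVM virtual machine.
lxc launch images:debian/12 c1        # system container
lxc launch images:debian/12 vm1 --vm  # full virtual machine
```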

Possibly linux

Maybe I’m just doing it wrong. I’ve just found LXD to be lacking, as you can’t live-transfer to a different host. It is also slower than Docker and Podman, and I was unable to get Docker running in an unprivileged LXC container. I think it should be possible to run Docker in LXC, but by the time I’ve spent the effort, it is more secure and easier to use a full virtual machine.

Maybe I should revisit the idea though as it seems like many people stand by it.

Oh, fuck. Really? What if I have a 12-year-old copy?

Possibly linux

I meant you should switch to proxmox. What are you referring to?

@nrezcm@lemmy.world

No this is Patrick.

Possibly linux

Spongebob is that you?

@TCB13@lemmy.world

He should really switch to LXD/Incus, not Proxmox, as it will end up like ESXi one day.

Possibly linux

LXD is slow and doesn’t support HA.

@mindlight@lemm.ee

Along with the termination of perpetual licensing, Broadcom has also decided to discontinue the Free ESXi Hypervisor, marking it as EOGA (End of General Availability).

Wiktionary: Adjective perpetual (not comparable) Lasting forever, or for an indefinitely long time.

Hello ProxMox here I come!

@kn33@lemmy.world

They’re terminating it in the sense that they won’t sell it anymore. They’re not breaking the licensing they’ve already sold (mostly; there was some fuckery with activating licensing they sold through third parties).

@kalpol@lemmy.world

Sort of. The activation license will work as long as you have it. They won’t renew support though, which effectively kills it when the support contract runs out.

@kn33@lemmy.world

You won’t be able to upgrade to new versions when the support contract runs out, but you can install updates to the existing version as long as updates are made for it. This has always been the lifecycle for perpetual licensing. It’s good forever, but at a certain point it becomes a security risk to continue using. The difference here is they won’t sell you another perpetual license when the lifecycle is up.

@TCB13@lemmy.world

Hello ProxMox here I come!

Proxmox is questionable open-source, performs poorly and will most likely end up burning its free users at some point. Get yourself into LXC/LXD/Incus, which does both containers and VMs, is way more performant and clean, and is also available in Debian’s repositories.

You know, you can recommend lxd and whatever without putting out FUD about proxmox and other tech.

@TCB13@lemmy.world

While I get your point… I kind of can’t: https://lemmy.world/comment/7476411

What about Proxmox makes its license questionable?

@TCB13@lemmy.world

First they’re always nagging you to get a subscription. Then they make system upgrades harder for free customers. Then they gatekeep you from the enterprise repositories in true Red Hat fashion, and they have held important fixes back from the pve-no-subscription repository multiple times.

As long as the source code is freely available, that’s entirely congruent with the GPL, which is one of the most stringent licenses. You can lay a lot of criticism on their business practices, and I would not deploy this on my home server, but I haven’t seen any evidence that they’re infringing any licenses.

@TCB13@lemmy.world

Okay, if you want to look strictly at licenses, per se no issues there. But the rest of what I described, I believe we can agree, is very questionable, and that makes it questionable open-source.

Nothing more questionable than LXD, which now requires a contributor license agreement, allowing Canonical to not open-source their hosted versions, despite LXD being AGPL.

Thankfully, it’s been forked as Incus, and Debian is encouraging users to migrate.

But yeah. They haven’t said what makes Proxmox’s license questionable.

@TCB13@lemmy.world

Thankfully, it’s been forked as Incus, and Debian is encouraging users to migrate.

Yes, the people running the original LXC and LXD projects under Canonical now work on Incus under the Linux Containers initiative. Totally insulated from potential Canonical BS. :)

The move from LXD to Incus should be transparent as it guarantees compatibility for now. But even if you install Debian 12 today and LXD from the Debian repository you’re already insulated from Canonical.
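If I remember right, the Incus side ships a dedicated migration tool for exactly this, so the switch can look something like the following (package availability depends on your distro/repo setup):

```shell
# Assumes Incus is already installed from your distro's repositories
# and the old LXD daemon is still running on the same host.
lxd-to-incus   # inspects the LXD daemon and moves instances, images,
               # networks and profiles over to Incus
```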

Really glad I made the transition from ESXi to Docker containers about a year ago. Easier to manage too and lighter on resources. Plus upgrades are a breeze. Should have done that years ago…

@kalpol@lemmy.world

I need full on segregated machines sometimes though. I’ve got stuff that only runs in Win98 or XP (old radio programming software).

Might be time to look into Proxmox. There’s a fun weekend project for you!

@TCB13@lemmy.world

Save yourself time and future headaches and try LXD/Incus instead.

No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.

@TCB13@lemmy.world

If you’re already using LXC containers, why are you stuck with their questionable open-source and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience on a pure Debian system? It boots way faster, fails less, and is more open.

Proxmox will eventually kill the free / community version; it’s just a question of time, and they don’t offer anything particularly good over what LXD/Incus offers.

why are you stuck with their questionable open-source and ass of a kernel

Because you don’t care about it being open source? Just working (and continuing to work) is a pretty big motivating factor to stay with what you have.

@TCB13@lemmy.world

Because you don’t care about it being open source?

If you’re okay with the risk of one day ending up like the people running ESXi now then you should be okay. Let’s say that not “ending up with your d* in your hand” when you least expect it is also a pretty big motivating factor to move away from Proxmox.

Now I don’t see how, in a self-hosting community on Lemmy, someone would bluntly state what you just did.

I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

@TCB13@lemmy.world

comment history keeps taking aim at Proxmox. What did you find questionable about them?

Here’s the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, multiple clusters of around 10-15 nodes each. I’ve been around for all the wins and fails of Proxmox; I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it is built upon Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top. I’ve been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

At some point not even simple things such as OVPN worked fine under Proxmox’s kernel. Realtek networking was probably broken more times than it was working, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you would get a half-broken system that is able to boot and pass a few tests but will randomly fail a few days later. Their startup is slow, slower than any other solution; it even includes daemons that are there just to ensure that other things are running (because most of them don’t even start with the system properly on the first try).

Proxmox is considerably cheaper than ESXi, so people use it in some businesses like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a container solution much better than OpenVZ and co. was born. LXC got stable and widely used, and LXD came with the higher-level hypervisor management, networking, clustering etc. Now we have all that code truly open-source, with its creators working on the project without Canonical’s influence.

There’s no reason to keep using Proxmox, as LXC/LXD got really good in the last few years. Once you’re already running on LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated and free?

I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

Well, if you have some time to spare on testing stuff, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you’ll run a mixed environment like I did for a long time.

Do you work for a railroad? That sounds too familiar.

@kalpol@lemmy.world

Lol no, just old radios. My point is just that my requirements are pretty widely varied.

tyablix

I’m curious what radio software you use that has these requirements?

@kalpol@lemmy.world

Old Motorolas, they really hate users.

@TCB13@lemmy.world

Fear not, my friend. Get yourself into LXC/LXD/Incus, as it can do both containers and full virtual machines. It is available in Debian’s repositories and is fully and truly open-source.

@eerongal@ttrpg.network

I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox 7-8 years ago or so, and honestly it’s been WAY better than ESXi. The migration process was pretty easy too; I was able to bring over the images from ESXi and load them directly into Proxmox.

If you’re running a basic Linux install you can use KVM for some VMs, or use Proxmox for a good ESXi replacement.

@TCB13@lemmy.world

Or… LXD/Incus.

@TCB13@lemmy.world

So… you replaced a proprietary solution with a free one that depends on proprietary components and a proprietary distribution mechanism? Get yourself into LXC/LXD/Incus (which does both containers and VMs) and is available in Debian’s repositories. Or Podman, if you really like the mess that Docker is.

@kalpol@lemmy.world

I’ve seen you recommending this here before; what’s its selling point vs, say, qemu-kvm? Does Incus do virtual networking without having to straight-up learn iptables or whatever? (Not that there is anything wrong with iptables, I just have to choose what I can learn about.)

@TCB13@lemmy.world

Does Incus do virtual networking without having to straight up learn iptables or whatever?

That’s just one of the things it does. It goes much further: it can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Another big advantage is that it provides a unified experience for both containers and VMs; there’s no need to learn two different tools / APIs, as the same commands and options are used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
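To give a flavour of that unified workflow, a rough sketch; the instance names and cluster member name below are made up:

```shell
incus cluster list                      # show cluster members
incus launch images:debian/12 web1      # create a container
incus launch images:debian/12 db1 --vm  # create a VM with the same command
incus move db1 --target node2           # relocate an instance to another member
incus profile show default              # profiles apply to containers and VMs alike
```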

@ziviz@lemmy.sdf.org

Yay… Capitalism…

@TCB13@lemmy.world

This was totally expected, even before BCM bought them. This is the same thing we had with CentOS/Red Hat, and it will happen with Docker/DockerHub and all the people that moved from CentOS to Ubuntu.

Possibly linux

It’s not capitalism, it’s just business. No one is making you use it.

Well dang, I guess that “learn about proxmox” line on my to-do list just moved a little higher. For the most part, I’ve enjoyed using ESXi and am sad to see it go.

@LrdThndr@lemmy.world

FWIW, I run proxmox at home, and I friggin love it. It’s really not hard at all.

@dan@upvote.au

I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running on Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out-of-the-box)

Docker with Proxmox is a bit weird, since it doesn’t actually support Docker and you have to run Docker inside an LXC container or VM.
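For reference, the usual workaround on Proxmox is to enable nesting on the LXC container before starting Docker inside it; roughly like this (the container ID 101 is hypothetical):

```shell
# Nesting (and often keyctl) must be enabled before dockerd
# will start inside a Proxmox LXC container.
pct set 101 --features nesting=1,keyctl=1
pct reboot 101
```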

@___@lemm.ee

I’ve just learned about converting Docker containers to LXC natively, so that’s my next project.

@Auli@lemmy.ca

I moved from LXC to Docker. Much easier to manage.

@dan@upvote.au

I personally prefer Docker over LXC since the containers are essentially immutable. You can completely delete and recreate a container without causing issues. All your data is stored outside the container in a Docker volume, so deleting the container doesn’t delete your volume. Your docker-compose describes the exact state of the containers (as long as you use version numbers rather than tags like latest)
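As a minimal illustration of that pattern (the service name, image tag and volume name are made up):

```yaml
# The container is disposable: the image tag is pinned and all
# state lives in a named volume outside the container.
services:
  app:
    image: nginx:1.25.4          # explicit version, not "latest"
    volumes:
      - app-data:/usr/share/nginx/html
volumes:
  app-data: {}
```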

Good Docker containers are “distroless” which means it only contains the app and the bare minimum dependencies for the app to run, without any extraneous OS stuff in it. LXC containers aren’t as light since as far as I know they always contain an OS.

@___@lemm.ee

I’m with you for the most part, but I’m slowly moving over to podman over docker for security and simplicity. LXC is convenient for proxmox, and you can make a golden snapshot, store your data and config in a bind mount, and replicate some of docker’s features. Lately, I run a privileged lxc with rootless podman running dockge. Seems to work well for now.

LifeBandit666

I’m in the market for a nas or thinclient for these kinds of things, an upgrade for my RPi Home Assistant.

I’m stuck at hardware at the moment and think a cheap 2bay NAS is probably the way to go. My concern is that I won’t be able to run all the things on a NAS mainly because I’m clueless. This community talks in maths (as Radiohead say) so half the time I’m trying to decipher all the LXCs and other acronyms.

Anyway, I think I need to learn Proxmox or Unraid, so your comment has me interested.

My question to you is this: since your server is plugged in via ethernet, can you access the Windows VM via web interface? Or does it require a screen, keyboard, mouse, etc?

I think I’m gonna be running HA in a VM, along with Adguard and maybe LMS in docker containers, then probably a Windows VM for Arr and Plex. I assume all these things will have their own port but I’m just not 100% about the actual Windows VM

@dan@upvote.au

I’d recommend building your own server rather than buying an off-the-shelf NAS. The NAS will have limited upgrade options - usually, if you want to make it more powerful in the future, you’ll have to buy a new one. If you build your own, you can freely upgrade it in the future - add more memory (RAM), make it faster by replacing the CPU with a better one, etc.

If you want a small one, the Asus Prime AP201 is a pretty nice (and affordable!) case.

@Scrath@lemmy.dbzer0.com

I run a couple of containers on my lenovo mini pc. I have proxmox installed on bare metal and then one VM for truenas, one for docker containers and one for home assistant OS.

For me the limiting factor is definitely RAM. I have 20GB (because the machine came with a 2x4GB configuration and I bought a single 16GB upgrade stick) and am constantly at ~98% utilization.

To be fair, about half of that is eaten up by TrueNAS alone due to ZFS.

The point I’m trying to make is basically make sure you can put enough RAM into your machine. Some NAS have soldered memory you won’t be able to upgrade. The CPU performance you need highly depends on what you want to do.

In my case the only CPU-intensive task I have is media transcoding, which can often be offloaded to dedicated hardware like Intel Quick Sync. The only annoying exception is hardware transcoding of x265 media, which is apparently only supported from Intel 7th gen and upwards processors, and I have a 6th gen i5… Or maybe I configured something wrong. No clue.

Edit: I wrote that after reading the first half of your comment. Regarding connecting a screen, I think I had one connected once to set up proxmox. Afterwards I just log into the proxmox web interface. If required I can use that to get a GUI session of each VM as well.

LifeBandit666

Hey no you answered a bunch of questions I had there. So I’m looking for an i7 with lots of RAM. Thanks that’s excellent

Just to be sure there isn’t a misunderstanding: with 7th gen I mean any Intel iX-7xxx processor or higher.

The first (or first two) numbers of the second part of the processor name determine the generation of the processor. The number immediately following the i just denotes the performance tier within the processor’s own generation.

LifeBandit666

Thanks for the correction. I’ve lurked in here and the Reddit one back before the time we don’t talk about, but I have no clue when it comes to hardware. I got given a PC to game on and was talking to my mate about buying server bits, and mentioned getting i7 processors. He told me it would be more powerful than my gaming rig because that’s only i5s.

This makes more sense. So I can get an i3-7xxx quad-core mini PC and try to upgrade the RAM and storage.

I have a bunch of ram sticks in a bottom drawer and some HDDs I’ve never managed to boot yet, so I have things to play with… I just don’t know what they are or if they work.

I love to tinker though. This all sounds like lots of fun
