I placed a low bid on a government auction for 25 EliteDesk 800 G1s and unexpectedly won, ultimately paying less than $20 per computer.

In the long run I plan on selling 15 or so of them to friends and family for cheap. I’ll probably run 4 with Proxmox (3 for a lab cluster and 1 for the always-on home server) and keep a few for spares and random desktops around the house where I could use one.

But while I have all 25 of them what crazy clustering software/configurations should I run? Any fun benchmarks I should know about that I could run for the lolz?

Edit to add:

Specs, based on the auction listing and looking up the computer models:

  • 4th gen i5s (probably i5-4570s or similar)
  • 8GB of DDR3 RAM
  • 256GB SSDs
  • Windows 10 Pro (no mention of licenses, so that remains to be seen)
  • Looks like 4 PCIe slots (2 ×1 and 2 ×16 physically, presumably half-height)

Possible projects I plan on doing:

  • Proxmox cluster
  • Baremetal Kubernetes cluster
  • Harvester HCI cluster (which has the benefit of also being a Rancher cluster)
  • Automated Windows Image creation, deployment and testing
  • Pentesting lab
  • Multi-site enterprise network setup and maintenance
  • Linpack benchmark then compare to previous TOP500 lists
@requiem@lemmy.world

I think the only answer is “Doom”

@seaQueue@lemmy.world

But can they run Crysis?

Were you thinking like a Doom LAN party, or some weird supercluster with the pure focus of running Doom?

@mlg@lemmy.world

If OP actually does do this I recommend Odamex

Although he’d also need 25 monitors lol

@Trainguyrom@reddthat.com
creator

Although he’d also need 25 monitors lol

Back to the government auctions then!

@Boomkop3@reddthat.com

25 screens, 25 dancing gandalfs

Matthew Gasoline

Senior year of high school, I put Unreal Tournament on the school server. If it were me, I’d recreate that experience, including our teacher looking around the class. That was almost 20 years ago; I hope everyone is doing alright.

@Wojwo@lemmy.ml

I have a box with 10 old laptops that I keep around just for that. Unreal Tournament 2004, Insane, Brood War and all the id classics. I don’t get to set it up a lot, but when I do it’s always a hit.

@notfromhere@lemmy.ml

You could possibly run AI Horde if they have enough RAM or VRAM. You could run Kubernetes bare metal or inside Proxmox.

👍Maximum Derek👍

If I had 25 surprise desktops I imagine I’d discover a long dormant need for a Beowulf cluster.

@Trainguyrom@reddthat.com
creator

The thought did cross my mind to run Linpack and see where I’d fall on the TOP500 (or the TOP500 of 2000, for example, for a fairer comparison haha)
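
For a ballpark before actually running HPL, the cluster’s theoretical peak (Rpeak) is easy to sketch. This assumes i5-4570-class Haswell chips (4 cores, ~3.2 GHz base clock, 16 double-precision FLOPs per cycle per core with AVX2 + FMA); a real Rmax over gigabit Ethernet would come in far lower:

```python
# Back-of-the-envelope Rpeak for the whole lot, assuming i5-4570-class
# Haswell parts. The 16 FLOPs/cycle figure is 2 FMA units x 4-wide AVX2
# double precision (2 ops per FMA).
machines = 25
cores_per_machine = 4
clock_ghz = 3.2          # base clock; turbo would push this a bit higher
flops_per_cycle = 16     # Haswell double-precision peak per core

rpeak_gflops = machines * cores_per_machine * clock_ghz * flops_per_cycle
print(f"Theoretical peak: {rpeak_gflops / 1000:.2f} TFLOPS")
# Prints: Theoretical peak: 5.12 TFLOPS
```

Even a fraction of that would sit near the top of the November 2000 TOP500, where the #1 system (ASCI White) measured roughly 4.9 TFLOPS Rmax.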

@wewbull@feddit.uk
  • Slurm cluster
  • MPI development
@Doombot1@lemmy.one

Shitty k8s cluster/space heater?

NOT any kind of crypto mining bullshit.

Handles

There’s always a good reason not to put another crypto mining cluster into the world.

@Charadon@lemmy.sdf.org

distcc cluster?

@Decronym@lemmy.decronym.xyz
bot account

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

ESXi: VMware virtual machine hypervisor
IP: Internet Protocol
NAT: Network Address Translation
NVMe: Non-Volatile Memory Express interface for mass storage
PSU: Power Supply Unit
SSD: Solid State Drive mass storage
VPN: Virtual Private Network
k8s: Kubernetes container management package

8 acronyms in this thread; the most compressed thread commented on today has 12 acronyms.

[Thread #697 for this sub, first seen 21st Apr 2024, 15:25] [FAQ] [Full list] [Contact] [Source code]

I don’t understand why people want to use so many PCs rather than just running multiple VMs on a single server that has more cores.

Having multiple machines can protect against hardware failures.
If hardware fails, you have donor machines.
It’s good learning, both for provisioning, for the physical side (cleaning, customising, wiring, networking with multiple NICs), and for multi-node clusters.

Virtualization is convenient, but it doesn’t teach you everything.

@Linkerbaan@lemmy.world

I’m not sure if running multiple single-SSD machines would provide much redundancy over a server with multiple PSUs and drives. Sure, the CPU or mobo could fail, but that downtime would be less hassle than 25 old PCs.

Of course there is a learning experience in more hardware, but 25 PCs does seem slightly overkill. I can imagine 3-5 max.

I’m probably looking at this from the point of view of a homelabber who just wants to run stuff, though, not someone whose hobby is setting up the PCs themselves.

@___@lemm.ee

removed by mod

@LukyJay@lemmy.world

“I don’t understand why you’d run so many VMs when you can just run it on bare metal”

It’s fun! This is a hobby. It doesn’t have to be practical.

Of course, but installing everything on multiple bare metal machines which take IP addresses, versus just running it in VMs which have IP addresses… it just takes a lot of extra power and doesn’t achieve much. Of course that can be said about any hobby, but I just want OP to know that there is no real reason to do this, and I don’t understand so many people hyping it up.

@Trainguyrom@reddthat.com
creator

I already said in the original post that I plan on selling off and giving away ~15 of them, keeping a few as spares, and only actually leaving one on 24/7.

bare metal machines which take IP addresses, against just running it in VM’s which have IP addresses

Both bare metal and VMs require IPs; it’s just about what networks you toss them on. Thanks to NAT, IPs are free, and there are about 17.9 million of them to pick from in just the private IPv4 space.
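
That “about 18 million” figure checks out: summing the three RFC 1918 private ranges with Python’s `ipaddress` module gives just under 17.9 million addresses.

```python
import ipaddress

# The three RFC 1918 private IPv4 ranges usable behind NAT
private_ranges = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

total = sum(ipaddress.ip_network(r).num_addresses for r in private_ranges)
print(f"{total:,} private IPv4 addresses")
# Prints: 17,891,328 private IPv4 addresses
```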

The big reason for bare metal for clustering is that it takes the guesswork out of virtual networking, since there are physical cables to trace. I don’t have to guess whether a given virtual network has an L3 device that the hypervisor helpfully added or is all L2; I can see the blinky lights for an estimate of how much activity is going on on the network, and I can physically degrade a connection if I want to simulate an unreliable link to a remote site.

I can yank the power on a physical machine to simulate a power/host failure; with a VM, you have to hope the host actually yanks the virtual power and doesn’t do some pre-shutdown steps before killing the VM to protect you from yourself. Sure, you can ultimately do all of this virtually, but having a few physical machines in the mix takes the guesswork out of it and makes your labbing more “real world”.

I also want to invest the time and money into doing some real clustering technologies kinda close to right. Ever since I ran a Ceph cluster in college on DDR2-era hardware over gigabit links, I’ve been curious what level of investment is needed to make Ceph perform reasonably, and how Ceph compares to, say, GlusterFS. I also want to set up an OpenShift cluster to play with, and that calls for about five 4-8 core, 32GB RAM machines as a minimum (which happens to be the maximum hardware config of these machines). Similar with Harvester HCI.

It just takes a lot of extra power and doesn’t achieve much

I plan on running all of them just long enough to get some benchmark porn, then start selling them off. Most won’t even be plugged in for more than a few hours before they’re gone.

there is no real reason to do this and I don’t understand so many people hyping it up.

Because it’s fun? I got 25 computers for a bit more than the price of one (based on current eBay pricing). Why not do some stupid silly stuff while I have all of them? Why have an actual reason beyond “because I can!”

25 PC’s does seem slightly overkill. I can imagine 3-5 max.

25 computers is definitely overkill, but the auction wasn’t for 6 computers, it was for 25 of them. And again, I seriously expected to be outbid, with the winning bid over a grand. I didn’t expect to get 25 computers for about the price of one. But now I have them, so I’m gonna play with them.

@Linkerbaan@lemmy.world

I see, I was picturing a pile of 25 stacked PCs. This makes a lot more sense, thanks for the explanation.

Darth_Mew

Damn zuck meta is eating you up. Take a breather it’s just for fun. Bro doesn’t have to find the cure for cancer just to poke around on some new hardware

@solrize@lemmy.world

Do you have particularly cheap or free electricity?

@Trainguyrom@reddthat.com
creator

12 cents per kilowatt-hour. I certainly don’t plan on leaving more than a couple on long term. I might get lucky with the weather and need the heating though :)
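
For anyone curious what “leaving them on” would actually cost at that rate, here’s a quick sketch. The ~50 W average draw per machine is an assumption (plausible for a lightly loaded Haswell SFF desktop, but worth checking with a watt meter):

```python
RATE_PER_KWH = 0.12       # dollars, per the quoted electricity rate
WATTS_PER_MACHINE = 50    # assumed average draw; measure to be sure

def monthly_cost(machines: int) -> float:
    """Dollars per 30-day month for `machines` boxes running 24/7."""
    kwh = machines * WATTS_PER_MACHINE * 24 * 30 / 1000
    return kwh * RATE_PER_KWH

print(f"1 always-on machine: ${monthly_cost(1):.2f}/month")
print(f"All 25 running 24/7: ${monthly_cost(25):.2f}/month")
```

That works out to roughly $4 a month for the single always-on server and on the order of $100 a month if all 25 ran continuously, which is why only powering them up for benchmarks makes sense.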

@seaQueue@lemmy.world

Distcc, maybe Gluster. Run a Docker Swarm setup on Proxmox VE or something.

Models like those are a little hard to exploit well because of limited network bandwidth between them. Other mini PC models that have a PCIe slot are fun because you can jam high-speed networking into them along with NVMe, then do rapid failover between machines with very little impact when one goes offline.

If you do want to bump your bandwidth per machine, you might be able to repurpose the WLAN M.2 slot for a 2.5GbE port, but you’ll likely have to hang the module out the back through a serial port knockout or something. Aquantia USB modules work well too; those can provide 5GbE fairly stably.

Edit: Oh, you’re talking about the larger desktop EliteDesk G1, not the USFF tiny machines. Yeah, you can jam whatever half-height cards into these you want, so go wild.

@Trainguyrom@reddthat.com
creator

From the listing photos, these actually have half-height expansion slots! So GPU options are practically nonexistent, but networking and storage options are blown wide open compared to the mini PCs that are more prevalent now.

@seaQueue@lemmy.world

Yeah, you’ll be fairly limited as far as GPU solutions go. I have a handful of half-height AMD cards kicking around that originally shipped in t740s and similar, but they’re really only good for hardware transcoding or hanging extra monitors off the machine; it’s difficult to find a half-height board with a useful amount of VRAM for ML/AI tasks.

@someguy3@lemmy.world

God damn. What are the specs on those? I gotta check out some government auctions.

@Trainguyrom@reddthat.com
creator

4th gen Intel i5s, 8GB of RAM and 256GB SSDs, so not terrible for a basic Windows desktop even today (except of course that no supported Windows desktop operating system will officially support these systems come Q4 2025)

But don’t get your hopes up: when I’ve bid on auctions like this before, the lots have gone for closer to $80 per computer, so I was genuinely surprised I could win with such a low bid. Also, every state has an entirely different auction setup. When I’ve looked into it in the past, some just dump everything to a third-party auction, some only do an in-person auction annually at a central auction house, and some have a snazzy dedicated auction site. Oh, and because it’s the US, states do it differently from the federal government. So it might take some research and digging around to find the most convenient option for wherever you are (which could just be making a friend in an IT department somewhere who will let you dumpster dive).

@sabreW4K3@lazysoci.al

They’re actually decent. Congratulations!

Richard

I would personally attempt the Kubernetes cluster if I had that many physical machines!
