I’ve spent the past day working on my newest PowerEdge R620 acquisition, trying to nail down which boot-time checks I can do without. Google has shown me that everyone seems to have similar issues regardless of brand or model; gone are the days when a rack server could be fully booted in 90 seconds. A big part of my frustration has been updating firmware from USB memory sticks before putting this machine into production: with a stick inserted, it easily takes 15–20 minutes just to reach the point where I find out whether I have the right combination of BIOS/UEFI boot parameters for each individual drive image.

I currently have this machine down to 6:15 before it starts booting the OS, and a good deal of that time is spent at the very beginning, where the screen says it’s testing memory but in fact hasn’t actually started that process yet. It’s a mystery what it’s even doing during that stretch.

At this point I’ve turned off the Lifecycle Controller’s scanning for new hardware, disabled booting from the internal SATA and PCI ports and from the NICs, and disabled memory testing… and I’ve run out of leads. I don’t really see anything else available for turning off sensors and the like. This is going to be a fixed server running a bunch of VMs, so there’s no need for additional cards (although some day I may increase the RAM), and I really don’t need it rescanning for future hardware changes at every boot.
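For anyone who'd rather script these toggles than click through the setup screens on every experiment, here's a rough sketch of driving remote racadm from Python. The attribute names (`BIOS.MemSettings.MemTest`, `NIC.NICConfig.1.LegacyBootProto`) and the iDRAC address/credentials are assumptions — check your own `racadm get BIOS` output first, since names and valid values vary by generation and firmware.

```python
#!/usr/bin/env python3
"""Rough sketch: queue a couple of POST-shortening BIOS changes via remote racadm.

Assumptions (verify before running): remote racadm is installed, the iDRAC is
reachable at IDRAC_HOST, and the attribute names below exist on your box --
they do on many 12th-gen PowerEdges, but firmware revisions differ.
"""
import subprocess

IDRAC_HOST = "192.168.1.120"   # placeholder iDRAC address
IDRAC_USER = "root"
IDRAC_PASS = "calvin"          # placeholder -- use your real credentials

def racadm(*args: str) -> str:
    """Run one remote racadm command and return its stdout."""
    cmd = ["racadm", "-r", IDRAC_HOST, "-u", IDRAC_USER, "-p", IDRAC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

# See what's currently set before touching anything.
print(racadm("get", "BIOS.MemSettings"))

# Stage the changes; they stay "pending" until a BIOS setup job runs.
racadm("set", "BIOS.MemSettings.MemTest", "Disabled")
racadm("set", "NIC.NICConfig.1.LegacyBootProto", "NONE")   # skip PXE on NIC 1

# Create the config job, then power-cycle so it's applied on the next POST.
print(racadm("jobqueue", "create", "BIOS.Setup.1-1"))
print(racadm("serveraction", "powercycle"))
```

Having it scripted also makes it painless to flip something back on temporarily, e.g. re-enabling the memory test once after adding RAM.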

Anyway, this all got me thinking… it might be fun to compare notes and see what others have done to improve their boot times, especially if you’re also balancing power usage (I’ve read that allowing full CPU power during POST can shave a little off the time). I’m sure different brands have their own specific tricks, but maybe there are some common areas we can all take advantage of? And sure, ideally our machines would never need to reboot, but plenty of people only run their home machines while they’re in use and deal with this daily, or just want to get back online as quickly as possible after a power outage, so anything helps…
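Since “how long does it take” is half the discussion, a consistent way of measuring helps when comparing notes. Below is a minimal sketch that powers the box on over IPMI and times how long it takes before SSH answers — the addresses and credentials are placeholders, and it assumes ipmitool is installed and IPMI-over-LAN is enabled on the iDRAC.

```python
#!/usr/bin/env python3
"""Minimal sketch: time a cold boot from power-on command to "SSH answers".

Assumptions: ipmitool is installed, IPMI-over-LAN is enabled on the iDRAC,
the server is currently powered off, and the addresses/credentials below
are placeholders for your own.
"""
import socket
import subprocess
import time

IDRAC_HOST = "192.168.1.120"   # placeholder iDRAC/BMC address
SERVER_HOST = "192.168.1.20"   # placeholder OS address once it's up
IPMI_USER, IPMI_PASS = "root", "calvin"

def ssh_up(host: str, timeout: float = 2.0) -> bool:
    """True once something is listening on TCP/22 -- a rough 'OS is back' signal."""
    try:
        with socket.create_connection((host, 22), timeout=timeout):
            return True
    except OSError:
        return False

# Kick off the power-on over IPMI and start the clock.
subprocess.run(
    ["ipmitool", "-I", "lanplus", "-H", IDRAC_HOST,
     "-U", IPMI_USER, "-P", IPMI_PASS, "chassis", "power", "on"],
    check=True,
)
start = time.monotonic()

while not ssh_up(SERVER_HOST):
    time.sleep(5)

print(f"power-on to SSH: {time.monotonic() - start:.0f} seconds")
```

Run it a couple of times after each settings change and you get numbers that are at least comparable, rather than eyeballing the POST screen with a stopwatch.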

@SheeEttin@lemmy.world

I don’t. PowerEdges are slow to boot; there’s not much you can do about that. They’re designed to be very compatible, unlike desktops. Any time I need to reboot a physical server, I go do something else for a while and come back.

If you want to avoid outages, consider a UPS or a second server for HA.

@970372@sh.itjust.works

The new ones are actually reaaaaly fast with booting

I concur and it just gets worse the more hardware you have in them. 256G of memory and 24 disks? Might as well go have lunch while it boots.

@kalleboo@lemmy.world

And beyond the UEFI/boot stuff, it takes 10 minutes just for my ZFS pool to mount
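If the box runs a systemd-based distro, it’s at least easy to see how much of that is the pool import/mount versus everything else. A quick sketch that just wraps `systemd-analyze blame` (nothing ZFS-specific about it):

```python
#!/usr/bin/env python3
"""Sketch: show the slowest boot-time units, e.g. zfs-import-cache.service.

Assumes a systemd-based distro where `systemd-analyze blame` is available;
its output is already sorted slowest-first, so just print the top entries.
"""
import subprocess

blame = subprocess.run(
    ["systemd-analyze", "blame"], capture_output=True, text=True, check=True
).stdout

for line in blame.splitlines()[:10]:
    print(line.rstrip())
```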

@Shdwdrgn@mander.xyz (creator)

Damn are all 24 disks internal? That’s some rig! I have the hardware on my latest NAS to connect up to 56 drives in hot-swap bays, and at one point while migrating data to the new drives I had 27 active units. Now that I’ve cleaned it up I’m only running 17 drives but it still seems like quite a stack.

Yea they’re internal. That’s normal for a fully loaded 2u storage server. Some even have 2-4 extra disk slots in the rear to cram in a few more.

@Shdwdrgn@mander.xyz (creator)

Wow that’s packing a lot in 2u. I’ve only ever had 1u servers so eight 2.5" slots is a lot for these.

Norah - She/They

To be fair, that’s for something like the R720xd, which drops the disk drive and tape drive slots to fit an extra 8 disks in the front. I have a regular ole R720 and it only has 16 bays. I didn’t need that many bays, and wanted better thermals for the GPU in it.

Edit: and I went with the 2U because it’s so much quieter.

@Shdwdrgn@mander.xyz (creator)

The 2u (R720) is quieter than the 1u (R620)? Or quieter than the R720xd?

Unfortunately the 720 wouldn’t have worked for me as the majority of my drives are 3.5" (8x18TB + 5x6TB). What I ended up doing is designing a 3D-printed 16-drive rack using some cheap SATA2 backplanes. Speed tests showed the HDDs were slower than SATA2 anyway, so despite the apparent hardware limitation I actually still clock around 460MB/s transfer rates from those arrays. Then I use the internal 2.5" slots for SSDs. Seems to be working a hell of a lot better than my previous server (a PE 1950 which only had a PCIx 4x slot and topped out at about 75MB/s).
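For anyone wanting to reproduce that kind of number, a throwaway sequential-read test is enough to show whether the backplane link or the disks themselves are the ceiling. A sketch (the device path is a placeholder, it needs root, and `iflag=direct` keeps the page cache from flattering the result):

```python
#!/usr/bin/env python3
"""Sketch: rough sequential-read throughput of an array via dd.

DEVICE is a placeholder (point it at the md/RAID device under test); run as
root. iflag=direct bypasses the page cache so the number reflects the disks.
"""
import subprocess

DEVICE = "/dev/md0"   # placeholder device path

result = subprocess.run(
    ["dd", f"if={DEVICE}", "of=/dev/null", "bs=1M", "count=2048", "iflag=direct"],
    capture_output=True, text=True, check=True,
)
# dd prints its summary (bytes copied, seconds, MB/s) on stderr.
print(result.stderr.strip().splitlines()[-1])
```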
