I’m in the process of wiring a home before moving in and getting excited about running 10G from my server to my computer. Then I see 25G gear isn’t that much more expensive, so I might as well run at least one fiber line. But what kind of three-node Ceph monster will it take to make use of any of this bandwidth (plus run all my Proxmox VMs and LXCs in HA), and how much heat will I have to deal with? What’s your experience with high-speed homelab NAS builds and the electric bill shock that comes later? The Epyc 7002 series looks perfect but seems to idle high.

Saik0

5-node Proxmox cluster (each node on 40 Gbps networking [yes, Ceph…], ~80 TB of SSD storage, 180 cores, ~630 GB of RAM total)
1 slow storage node (~400 TB)
2x OPNsense servers in HA
2x ICX7750s
2x ICX7450s

PoE to all the things… and 8 Gbps internet.

Usually runs ~15-17 amps, so about 2,000 watts. It’s my baby datacenter.

Sometime this month I’ll be installing a 25,000 kWh solar system on my roof, plus batteries.

As far as heat goes… it’s in the garage with an insulated door, a heat pump water heater, and a Tripp Lite AC unit in the bottom of the rack. The waste air (from the A/C) exhausts outside through a direct vent in the wall. The garage is downright tolerable to me for extended periods of time. The servers don’t complain at all.

Reading about all you guys being under 200w or whatever makes me wonder if it’s worth it. Then I realize that the cost to do even a 1/4 of what I do in the cloud is more expensive than buying my solar.

Power costs for the rack would be about $100-120 a month, if it weren’t for solar.

Edit: 75 LXC containers, 22 VMs.

@486@lemmy.world

Edit: 75 LXC containers, 22 VMs.

That’s a lot of power draw for so few VMs and containers. Any particular applications running that justify such a setup?

Saik0

That’s the total draw of the whole rack, not indicative of power per VM/LXC container. If I pop onto the management interface on a particular box, it’s only running at an average of 164 watts. So all 5 processing nodes together are actually 953 watts (average over the past 7 days). If you want to quantify it that way, it’s about 10 W per container.
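The per-workload figure above can be reproduced with quick back-of-the-envelope math (the 953 W average and the 75 LXC / 22 VM counts are both from this thread):

```python
# Back-of-the-envelope power-per-workload estimate using the
# figures from the post: 953 W across 5 nodes, 75 LXCs + 22 VMs.
avg_node_watts = 953          # 7-day average across the 5 compute nodes
workloads = 75 + 22           # LXC containers + VMs

watts_per_workload = avg_node_watts / workloads
print(f"{watts_per_workload:.1f} W per container/VM")  # ≈ 9.8 W
```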

TrueNAS is using 420 watts (30 spinning disks, 400+ TiB raw storage, closer to 350 usable; assuming 7 watts per spinning drive, we’re at 210 W in disks alone, though the spec sheet says 5 W at idle and 10 W at full speed). About 70 watts per firewall. So roughly 1,515 W for all the compute itself.

The other 1,000-ish watts goes to switches and PoE (8 cameras, 2 HDHR units, a time server and clock module, and whatever happens to be plugged in around the house using PoE). Some power is also lost to the UPS, because conversions aren’t perfect. Oh, and the network KVM and pull-out monitor/keyboard.

I think the difference here is that I’m taking my whole rack into account, not just the power cost of a server in isolation, but also all the supporting stuff like networking. Max power draw on an ICX7750 is 586 W; typical is 274 W according to the spec sheet. I have two of them trunked. Similar story with my ICX7450s: two trunked, and max power load is 935 W each, but in that case specifically for PoE. Considering I’m using a little shy of 1 kW on networking, I have a lot of power overhead here that I’m not using. But I do have the 6x 40 Gbps modules on the 7750.

With this setup I’m using ~50% of the memory I have available. I’m 2-node redundant, and if I were down 2 nodes I’d be at 80% capacity: enough to add about 60 GB more of services before I’d have to worry about shedding load in a critical failure.

Mister Bean

Just out of curiosity, what do you use all that storage for?

Saik0

On the SATA SSD Ceph storage, that’s just live stuff on the containers/VMs. I’m at 20% usage of the 70 TiB at the moment; I don’t use it all that heavily. Because of the way Ceph works it’s really ~23 TiB of usable space and ~4.5 TiB written, since it writes 3 copies in my cluster.
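Ceph’s replicated pools divide raw capacity by the replica count, which is where the ~23 TiB figure comes from. A rough sketch (size=3 is taken from the “3 copies” above; real-world overhead and full-ratio safety margins are ignored):

```python
# Rough usable-capacity estimate for a Ceph replicated pool.
# With replication size=3, every object is stored three times,
# so usable space is roughly raw capacity divided by 3.
raw_tib = 70
replicas = 3

usable_tib = raw_tib / replicas
print(f"~{usable_tib:.1f} TiB usable")  # ~23.3 TiB
```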

The slow storage node is running TrueNAS with 28 spinning disks at 16 TB each, 2 hot spares, and 2 SSDs each for cache, log, and metadata (eating up 36 bays total). That’s 342.8 TiB usable after RAIDZ nonsense, and I’m at 56% usage. I have literally everything I’ve done that I cared to save since about 2005 or 2006. Backups for the Ceph storage (PBS). Backups for computers I’ve had over the years. Lots of Linux ISOs (105 TiB) archived, including complete sets of gaming (37 TiB) variants. Oh, and my full Steam library as well, which currently sits at 14 TiB. Flashpoint takes up a few TiB too…

Damn that’s a setup alright!

If you’re making use of the hardware it’s well worth it over anything cloud based for sure.

The load on my UPS is around 100-140 watts. That includes my server, firewall, switch, Starlink, and a UniFi access point. I would love to get that power consumption down; I only get 4-5 hours of runtime on battery. Also, the room it’s in is small and gets really hot in the summer.

@vividspecter@lemm.ee

Systems themselves are all around 5-20W, although the ones with mechanical HDDs obviously add their own idle usage.

mesamune

My pi costs probably around 20 a year lol.


If you’re just running home automation, you do not need an Epyc 🤣

Get a low power anything to just run what you need.

@johnnixon@lemmy.world (creator)

I looked at Epyc because I wanted the bandwidth to run U.2 drives at full speed, and it wasn’t until Epyc or Threadripper that you could get much more than 40 lanes in a single socket. I’ve got to find another way to saturate 10G and give up on 25G. My home automation runs on a Home Assistant Yellow and works perfectly, for what it does.

@just_another_person@lemmy.world

Some unsolicited advice then: don’t go LOOKING for reasons to use the absolute max of what your hardware is capable of just because you can. You just end up spending more money 🤑

For real though, just get an N100 or something that does what you need. You don’t need to waste money and power on an Epyc if it just sits idle 99% of the time.

@johnnixon@lemmy.world (creator)

What I need is 10G storage for my Adobe suite that I can access from my MacBook. I need redundant, fault-tolerant storage for my precious data. I need my self-hosted services to be highly available. What’s the minimum spec to reach that? I started down the U.2 path when I saw enterprise U.2 drives at a similar cost per GB as SATA SSDs, but faster and with crazy endurance. And when my kid wants to run a modded Minecraft server for him and his friends, I’d better have some spare CPU cycles and RAM to keep up.

@MangoPenguin@lemmy.blahaj.zone

You could technically do that with two ~$150 used business desktop PCs off eBay, 10th-gen Intel models or thereabouts with Core i3/i5 CPUs.

Throw some M.2 SSDs in each one in a mirrored array for storage, add a bit of additional RAM if needed, and a 10G NIC. They would probably use about 30-40 W total for both.

Minecraft servers are easy to run; they don’t need much, especially on a fairly modern CPU with high single-thread performance, and a modded one only uses maybe 6 GB of RAM.

You’re not asking for a whole lot out of the hardware, so you could do it cheap if you wanted to.

@just_another_person@lemmy.world

Get a Drobo if you’re that worried about that kind of access then. Make it simple.

Otherwise anything with two NICs is the same thing.

@wreckedcarzz@lemmy.world

I just moved my Home Assistant Docker container to a new-to-me Xeon system. It also runs a couple of basically idle tasks/containers, so I threw BOINC at it to put it to good use. All wrapped up with Debian 12 on Proxmox…

(I needed USB support for Zigbee in HA, and Synology yanked driver support from DSM with the latest major version, so “let’s just use the new machine”…)

I’ve got a 3-node Proxmox/Ceph cluster with 10G, plus a separate NAS. They are all rack-mount with dual PSUs. Add in the necessary switching and my average load is about 800 W. Throw my desktop (also on 10G) into the mix and it runs 1.1 kW.

That’s roughly $50-60 extra in electricity costs for me monthly.

@johnnixon@lemmy.world (creator)

I’m afraid of dumping 500+ watts into an (air-conditioned) closet. How are you able to saturate the 10G? I had some idea that Ceph speed is that of the slowest drive, so even SATA SSDs won’t fill the pipe; I imagine that’s because it replicates whole copies rather than striping data with parity. I’d like to stick to lower-power consumer gear, but Ceph looks CPU-, RAM-, and bandwidth-hungry (both storage and network), plus latency-sensitive.

I ran Proxmox/Ceph over 1GbE on e-waste mini PCs and it was… unreliable. Now my NAS is my HA storage, but I’m not thrilled about beating up QLC NAND for hobby VMs.

My 10G is far from saturated, but I do try to keep things in RAM where possible. I figure that with 100 GB of DDR4 in my main server, that should be able to provide enough speed for a 10G link.

I’ve got ceph running on Intel Enterprise SSDs, so they are pretty quick.

I also tried running ceph on 1G. I found it unreliable as well.

Would be around €300 in Germany on a cheap contract. I’m limiting myself to one combined NAS/application server at the moment, with the others turned on only if I want to try something out.


@tmjaea@lemmy.world

Average load of 800 W is 0.8 kW × 24 h × 30 d = 576 kWh/month.

Which is over €172 on a 30 ct/kWh contract.
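The same calculation as a tiny helper, for anyone who wants to plug in their own load and tariff (the 0.8 kW and 30 ct/kWh values are from the comment above):

```python
# Monthly energy use and cost for a constant average load.
def monthly_cost(avg_kw: float, price_per_kwh: float, days: int = 30):
    """Return (kWh per month, cost) for a constant average draw."""
    kwh = avg_kw * 24 * days
    return kwh, kwh * price_per_kwh

kwh, cost = monthly_cost(0.8, 0.30)
print(f"{kwh:.0f} kWh/month -> {cost:.2f} EUR")  # 576 kWh/month -> 172.80 EUR
```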

@jqubed@lemmy.world

Wow! I’m paying 10.5¢/kWh for electricity at home here in the US; it’s a little below the national average but not dramatically.

@tmjaea@lemmy.world

Yeah, we pay a lot. We also have one of the lowest electricity downtimes, on average approximately 10 minutes per year, so that’s kind of a (small) advantage you get for the premium price.

I use about the same, but that’s more due to my hardware being a bit older: two Dell R710s, one R510, and a custom-built server. Everything is still 1G. In my case electricity is not a big deal thanks to solar; we produce much more than we can use ourselves.


@Retiring@lemmy.ml

82.2 W average, for which I pay €144.60/year at the moment. That’s for a Ryzen 7 3700X, some hard drives and SSDs, and the fiber connection to my basement. I outsourced 90% of my media consumption to a VPS though; that’s another €84/year.

Suzune

I recently removed the 25 Gbps PCIe dual-port cards from my 2 servers because they were using 20 W more. My entire rack including 2 UniFi PoE connections uses 90 W now (so it was 110 W just for having 25 Gbps).

There is some heat from such cards, but usually it gets carried away fine. The ones I bought did not come with a fan; I think you cannot operate them without one, as the heat sinks get very hot.

thejevans

I use a Ryzen 5900X, an RTX 3080, a 2x 10 Gbit SFP+ NIC, 128 GB of ECC RAM, and only 2x 20 TB drives at the moment.

For my gateway I have an Intel N6005 box, plus a managed 2.5/10 Gbit switch and a Wi-Fi AP.

I have a ton of Proxmox VMs and containers.

All of that hovers between 140 W and 180 W.

@PieMePlenty@lemmy.world

I run a NUC11, so about 10 W, or €15-20 per annum assuming a single tariff at €0.17 per kWh. It can use up to 30 W, but only during heavy load, which may be like 8 hours a week. Electricity is also cheaper during off-peak hours, so it averages out to about that (we have 5 tariffs).

Load is NAS, media server, Home Assistant with a USB Zigbee router, and the *arr stack.

Power usage was my main concern; I wanted something eco-friendly.
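Averaging across multiple time-of-use tariffs like this works out to a load-weighted mean. A hypothetical sketch, assuming a constant load (the hours and prices below are invented for illustration, not the commenter’s actual 5-tariff schedule):

```python
# Load-weighted average electricity price across time-of-use tariffs,
# assuming a constant draw. Hours/prices are illustrative placeholders.
tariffs = [  # (hours per day in this tariff, EUR per kWh)
    (8, 0.12),
    (8, 0.17),
    (8, 0.22),
]

total_hours = sum(hours for hours, _ in tariffs)
avg_price = sum(hours * price for hours, price in tariffs) / total_hours
print(f"average: {avg_price:.3f} EUR/kWh")  # average: 0.170 EUR/kWh
```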

Kokesh

7W I think

@farcaller@fstab.sh

I run a 3900X with 40 Gbit fiber, packed with HDDs and NVMes. The box fluctuates around 90-110 W of use.

@johnnixon@lemmy.world (creator)

Where do you find the bandwidth to do all that? NVMe eats it up, and the 40G does too.

@farcaller@fstab.sh

I did run out of PCIe, yeah :-( The network peaks at about 26 Gbit/s, which is the most you can squeeze out of PCIe 3.0 x4. I could move the NVMes off the PCIe 4.0 x16 (I have two M.2 slots on the motherboard itself), but I planned to expand the NVMe storage to 4x SSDs, and I’m out of PCIe lanes on the other end of the fiber either way (that box has all x16 going to the GPU).
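For reference, the ~26 Gbit/s ceiling lines up with the PCIe 3.0 x4 math (8 GT/s per lane with 128b/130b encoding); a quick sanity check:

```python
# Theoretical PCIe 3.0 x4 line rate.
lanes = 4
gt_per_s = 8.0              # PCIe 3.0 raw rate per lane, GT/s
encoding = 128 / 130        # 128b/130b line encoding efficiency

gbit_per_s = lanes * gt_per_s * encoding
print(f"{gbit_per_s:.1f} Gbit/s theoretical")  # 31.5 Gbit/s theoretical
# Seeing ~26 Gbit/s after protocol/packet overhead is plausible.
```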

From the wall I’m pulling 120 W:

Ryzen 5700G
128 GB RAM
2 TB + 4 TB NVMe drives
2x 20 TB HDDs
UniFi Enterprise 24 PoE
MikroTik RB5009
2 access points
3 cameras

Fiber runs cooler than copper; all of my SFP+ modules are fiber.

I feel almost obliged to ask: what are you running on this monster of a setup?

@aStonedSanta@lemm.ee

You know he’s just running docker.

Mostly for Pi-hole.
