Sorry for the long post. tl;dr: I’ve already got a small home server and need more storage. Do I replace an existing server with one that has more hard drive bays, or do I get a separate NAS device?
I’ve got some storage VPSes “in the cloud”:
The 10TB VPS has various files on it - offsite storage of alert clips from my cameras, photos, music (which I use with Plex on the NVMe VPS via NFS), other miscellaneous files (using Seafile), backups from all my other VPSes, etc. The 3.5TB one is for a backup of the most important files from that.
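For reference, the Plex-over-NFS piece is just a remote mount. A hypothetical `/etc/fstab` line on the NVMe VPS, assuming the 10TB box exports the music directory over NFS (hostname and paths are made up here):

```
# /etc/fstab on the NVMe VPS: mount the music share from the 10TB storage VPS
# read-only, with soft mounts so Plex doesn't hang if the storage box is down
storage-vps:/srv/music  /mnt/music  nfs  ro,soft,timeo=30,_netdev  0  0
```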
The issue I have with the VPSes is that since they’re shared servers, there are limits on how much CPU I can use. For example, I want to run PhotoStructure for all my photos, but it needs to analyze all the files initially. I limit Plex to a maximum of 50% of one CPU, but limiting things like PhotoStructure the same way would make them way slower.
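The 50%-of-one-CPU cap is easy to express if Plex runs in Docker; this is a sketch, assuming a containerized setup (the OP didn’t say how Plex is actually run, so the service name and image are assumptions):

```yaml
# docker-compose.yml fragment: cap Plex at half of one CPU core
services:
  plex:
    image: plexinc/pms-docker
    cpus: "0.5"        # hard cap: 50% of one core total
    # Alternative: leave it uncapped but give it low scheduler priority,
    # so it only yields when other workloads want the CPU:
    # cpu_shares: 256
```

The `cpu_shares` approach is often the better fit for batch jobs like PhotoStructure’s initial analysis: it runs at full speed on an idle box but backs off under contention, instead of being permanently throttled.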
I’ve had these for a few years. I got them when I had an apartment with no space for a NAS, expensive power, and unreliable Comcast internet. Times change… Now I’ve got a house with space for home servers, solar panels so running a server is “free”, and 10Gbps symmetric internet thanks to a local ISP, Sonic.
Currently, at home I’ve got one server: an HP ProDesk SFF PC with a Core i5-9500, 32GB RAM, 1TB NVMe, and a single 14TB WD Purple Pro drive. It records my security cameras (using Blue Iris) and runs home automation stuff (Home Assistant, etc). It pulls around 41 watts under its regular load: 3 VMs, ~12% CPU usage, and a constant ~34Mbps of traffic from the security cameras, all being written to disk.
So, I want to move a lot of these files from the 10TB VPS into my house. 10TB is a good amount of space for me, maybe in RAID5 or whatever is recommended instead these days. I’d keep the 10TB VPS for offsite backups and camera alerts, and cancel the other two.
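On the “RAID5 or whatever is recommended instead” point: in the ZFS world the usual answer is RAIDZ1 (one-disk redundancy, RAID5’s rough equivalent) or RAIDZ2 if you want to survive two failures. A sketch with four hypothetical 4TB disks; the device names below are placeholders, and in practice you’d use stable `/dev/disk/by-id/...` paths rather than `sdX` names:

```
# Create a RAIDZ1 pool named "tank" from four disks (names are placeholders)
zpool create tank raidz1 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# One dataset per use case, so snapshots and quotas stay granular
zfs create -o compression=lz4 tank/photos
zfs create -o compression=lz4 tank/music
```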
Trying to work out the best approach:
Any thoughts? I’m leaning towards option 2 since it’ll use less space and power compared to having two separate systems, but maybe I should keep security camera stuff separate? Not sure.
Don’t buy a Synology. For less money you can build a better system. I use a cheap ITX board, a used 6600K, a Silverstone DS380, 8x4TB disks of spinning rust, and a 256GB NVMe as the current iteration of my NAS. It’s basically silent, and runs Ubuntu + ZFS + shit in containers. It’s excellent.
I am, however, considering 10G Ethernet cards for it and my desktop and just doing point-to-point. Not that 1G is too slow for my needs, but because it’d be fun.
I just bought a QNAP. Thoughts?
Can you just put stock ubuntu on it? Is the CPU worth a damn?
If it can’t do either of those, it is manufactured ewaste, imo.
I’ve used QNAPs for literally decades; I’m now on my third one. I love that they support their devices for a long time. But while their software keeps gaining features, its quality IMHO is going down. I would now build a NAS myself rather than buy a QNAP. Not having an option with ECC RAM is also disappointing, though probably OK for home usage.
Do you use ECC RAM? The Synology comes with ECC RAM, whereas it’s hard to find consumer motherboards that support ECC :/
Another reason to avoid a Synology. I had an HP MicroServer Gen8 that I ditched due to its CPU constraints and ECC RAM; I just got 32GB of cheap DDR4 instead.
What’s your power utilization with the 6600K? I have a spare one of those lying around and would convert my Ryzen 3950X AIO to just a server, with a 6600K NAS, if it doesn’t draw too much.
It and some other network appliance bits draw ~ 100W continuous.
I think a good chunk of that is the disks, but I could be wrong.
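For scale, ~100W continuous is easy to turn into a yearly figure; the electricity rate below is just a placeholder, not anyone’s actual tariff:

```shell
# ~100 W continuous -> kWh per year, and cost at a sample $0.15/kWh rate
awk 'BEGIN {
  watts = 100
  kwh_year = watts / 1000 * 24 * 365      # 876 kWh/year
  printf "%.0f kWh/yr, $%.2f/yr at $0.15/kWh\n", kwh_year, kwh_year * 0.15
}'
# -> 876 kWh/yr, $131.40/yr at $0.15/kWh
```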
Thanks for the input. Would you recommend having a separate NAS system, or replacing my current server with it?
Personally I like to keep my data on a separate system because it helps me keep it stable and secure compared to my more “fun” servers.
That said, being able to run compute on the same server as storage removes a bit of hassle.
Run your fun things in containers and you can’t make a mess of the host.
I’d consolidate to let it pay for itself over the longer term in electricity savings.
My single NAS runs everything I could ever want, though I regret not finding a used 6700K after learning the 6600K doesn’t have HT.
Also, I run Frigate on it inside a container and use a Google Coral accelerator to do people detection on 4x2K camera streams. It’s pretty swish, though it took some fiddling to get the kernel to be groovy with it and to do container device passthrough from PCIe.
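The PCIe passthrough bit is mostly a one-liner once the kernel side works. A compose sketch, assuming the gasket/apex driver is already loaded so the Coral shows up as `/dev/apex_0` (the config volume path is a placeholder):

```yaml
# docker-compose fragment for Frigate with a PCIe Coral
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    devices:
      - /dev/apex_0:/dev/apex_0   # PCIe Coral appears as /dev/apex_N
    shm_size: "256mb"             # Frigate needs extra shared memory for frames
    volumes:
      - ./config:/config
```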
In total, my single NAS runs the following in containers:
The whole shebang (NAS with permanently spinning rust, UPS, ISP modem, and Ubiquiti Dream Machine) runs at ~100W.
Edit: I’ve noticed ZFS is twitchier than most about disks failing. It fails a disk about once or twice a year; fortunately, disks keep getting cheaper. Most of the time the failed disk still looks fine as far as SMART is concerned, but I’m not gonna question the ZFS gods.
Are you running something like Unraid or TrueNAS, or are you just running a ‘regular’ Linux distro?
I’m doing something similar, except using Blue Iris and CodeProject.AI instead of Frigate. Works pretty well! CodeProject.AI just added Coral support recently.
How much power does just the NAS use?
The NAS is the bulk of the 100W.
Ubuntu + ZFS. I don’t see the appeal of running a non-mainline distribution. All I did was set it up so ZFS sends me emails, plus a crontab entry to run a weekly scrub.
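That setup is roughly two pieces: ZED for the mails and a cron entry for the periodic check. (The job you schedule is a scrub; resilvers happen on their own when a disk is replaced.) A sketch, assuming Ubuntu’s zfs-zed package and a pool named `tank` — the email address is a placeholder:

```
# /etc/zfs/zed.d/zed.rc -- have ZED email on pool events
ZED_EMAIL_ADDR="you@example.com"
ZED_NOTIFY_VERBOSE=1

# /etc/cron.d/zfs-scrub -- weekly scrub of the pool "tank", Sundays at 03:00
0 3 * * 0  root  /usr/sbin/zpool scrub tank
```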