Even the slowest SSD write speeds should be faster than an HDD's, and HDDs have been running systems perfectly fine for decades. I’ve never used enterprise SSDs (usually one little consumer SSD, or even a USB stick, for boot/cache and a bunch of HDDs for storage) and I’ve never had a problem.
What kind of hardware are you using?
Huh. Kind of surprised it supports up to four drives, but if that’s what it says, there you go. Shouldn’t be any risk of drawing too much current through the wire. At most the board or PSU would shut down.
Also, if you’re putting more drives in, see if the BIOS lets you enable staggered spin-up, so their startup current draw doesn’t all peak at the same time.
I have never known the RPM of a drive to affect its noise level; the fan(s) will be a far bigger contributor. Most drives run pretty quietly, though some can get noisy during I/O, like my HGST Ultrastar He6 drives.
Also, without knowing the model, I wouldn’t say they’re not made to run 24/7. But even on desktop drives, it’s rarely run time that kills them; it’s start-stop cycles. Everything will be fine until one day you shut it down and some drives won’t spin back up. That’s why power outages can be deadly to an old server.
It certainly could. A bit-flip in a core part of the kernel could easily cause a lockup: if an address is corrupted, the kernel might start writing garbage over its own code, execution might jump somewhere unexpected, or an instruction might be changed from something reasonable into a halt.
Yes, most of those should trigger a blue screen or kernel panic, but that’s not guaranteed when you’re making completely random changes.
https://github.com/go-shiori/shiori#documentation
I assume you’re running it as a web app, in which case the docs cover that. Just run multiple instances on different ports and with different storage.
Looks like they provide an official docker container, too, so running it in docker should be very easy.
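If you go the docker route, running two independent copies is just two containers with different host ports and data volumes. A rough sketch (container names, host ports, and host paths here are made up; double-check the image docs for the exact internal data path and port):

```shell
# Two independent shiori instances, each with its own database/storage.
# ghcr.io/go-shiori/shiori is the official image; adjust names/ports/paths to taste.
docker run -d --name shiori-alice -p 8080:8080 -v /srv/shiori-alice:/shiori ghcr.io/go-shiori/shiori
docker run -d --name shiori-bob   -p 8081:8080 -v /srv/shiori-bob:/shiori   ghcr.io/go-shiori/shiori
```

Then one person bookmarks at port 8080 and the other at 8081, and the two never see each other’s data.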
You need two Proxmox nodes for HA.
Virtual networking is also not a great idea in the homelab. It’s less risky if you do have HA, but even so, if you screw something up in Proxmox and break the network, you’ll be without any access to look for help online (except on your phone, and good luck retyping commands or transferring files from there).
I tried switching a while back, but I found a bunch of stuff didn’t work properly, and wasn’t considered supported. I don’t remember what it was exactly.
I might try it again once there’s been a bit more development and community use. Docker isn’t ideal, but at least it works and there’s a lot of community support.
And nothing of value was lost. OPNsense is still free and open source, and doesn’t start petty drama by insulting its competitors.
Yeah, it’s a rough number, but it can be used as a guide to estimate power consumption, maximum performance, and so on. In this case, running an identical software stack, you can expect slightly increased power usage. A newer CPU might be more efficient per clock, but you’re also running more GHz and more cores.
If you wanted hard data, you’d have to run benchmarks.
https://www.cpubenchmark.net/compare/2957vs3103/Intel-Celeron-G3930-vs-Intel-i3-8100
TDP goes from 51 to 65 W, not much of an increase. If you have it set to throttle down when not used, I don’t think you’ll see much of a change.
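Keep in mind TDP is a worst-case ceiling, not what the chip actually draws at idle, so this is purely back-of-the-envelope:

```shell
# Percentage increase in the TDP ceiling: 51 W (G3930) -> 65 W (i3-8100)
awk 'BEGIN { printf "%.0f%% higher TDP ceiling\n", (65 - 51) / 51 * 100 }'
```

That works out to roughly a quarter more headroom at full load; at idle, with modern C-states, both chips should sip power.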
Doubling the number of cores is a big performance boost. I would certainly upgrade if you’re considering 4K video, especially if you’re transcoding.
I’d also read the Jellyfin article about hardware acceleration: https://jellyfin.org/docs/general/administration/hardware-acceleration/
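If you run Jellyfin in docker and the CPU has Quick Sync (the i3-8100 does), hardware transcoding mostly comes down to passing the GPU device through. A sketch only (host paths are examples; the device node and per-GPU details are in the Jellyfin docs linked above):

```shell
# Containerized Jellyfin with the Intel iGPU passed through for Quick Sync.
# /dev/dri/renderD128 is the usual render node; verify yours with `ls /dev/dri`.
docker run -d --name jellyfin \
  --device /dev/dri/renderD128:/dev/dri/renderD128 \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin
```

You still have to enable QSV in the Jellyfin dashboard afterwards; the device passthrough just makes it available inside the container.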
Just give them access to it now? There shouldn’t be any issue with it continuing to be available for a while if you should get hit by a bus.