A person with way too many hobbies, but I still continue to learn new things.
If you want to do it right, try to get a static IP (you may need to get a business account). If your provider doesn’t offer IPv6 on static IPs, go to some place like Hurricane Electric and get a free IPv6 range tunneled to your static IPv4 address.
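If you go the tunnel route, Hurricane Electric’s tunnelbroker hands you a handful of iproute2 commands; roughly like this, with documentation-range placeholders standing in for the addresses on your tunnel details page:

```
# 6in4 tunnel to Hurricane Electric (all addresses below are placeholders;
# substitute the server/client values from your tunnelbroker account)
ip tunnel add he-ipv6 mode sit remote 192.0.2.1 local 203.0.113.10 ttl 255
ip link set he-ipv6 up
ip addr add 2001:db8:1f0a::2/64 dev he-ipv6
ip route add ::/0 dev he-ipv6
```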
Alternatively, you might do a search for any DDNS services that provide IPv6 (I’m not sure if any do?); that service would then follow your residential address as it changes. Either way, I think you’ll have some additional costs to weigh against your current hosting provider.
I think I missed something in your description, but what are you running on your local server? I think most people set up postfix to relay the emails over to gmail or whoever, and there are options in postfix for backwards compatibility with Outlook or even Microsoft Mail, so your wife could use whatever client she wants. If you don’t have a local mail server set up, then this is probably what you want to do. This method allows a local or remote connection from any client, so you could run K9 on your phone instead of a VPN.
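As a rough sketch of the relay side (the hostnames and paths are just examples, not taken from your setup), it mostly comes down to a few main.cf settings plus a credentials file:

```
# Relay outbound mail through an upstream provider with SASL auth
postconf -e 'relayhost = [smtp.gmail.com]:587'
postconf -e 'smtp_sasl_auth_enable = yes'
postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd'
postconf -e 'smtp_sasl_security_options = noanonymous'
postconf -e 'smtp_tls_security_level = encrypt'
# /etc/postfix/sasl_passwd contains: [smtp.gmail.com]:587 username:password
postmap /etc/postfix/sasl_passwd
systemctl reload postfix
```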
For opening such a setup to the internet (and allowing access from anywhere), make sure you have strong passwords on your accounts, require SASL authentication, and set up fail2ban to block repeated attempts to hack your mailboxes. Don’t run anything else on the same server (or use virtual machines or strong containers) to reduce the chance of your mail server getting compromised some other way, and you should be good to go.
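For the fail2ban piece, something along these lines in jail.local is the usual starting point (the jail names vary a little between fail2ban versions, and the times are arbitrary examples):

```
# /etc/fail2ban/jail.local
[postfix]
enabled = true

[postfix-sasl]
enabled  = true
maxretry = 3
findtime = 10m
bantime  = 1h
```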
At that capacity, I’ll cast another vote for SSD if at all possible, but you can certainly get small HDDs pretty cheap now.
If you want the easiest and cheapest way to add more drives, do a search for “sata port multiplier”. These cards go for around $25 US on Amazon or eBay. They are NOT fast! Each card uses a single SATA port to run up to five drives, so all the drives split the bandwidth, but long ago I ran some of them for a few years without any problems. You simply run a SATA and power cable from your motherboard to the card, then plug in your drives; it doesn’t even require a slot on your motherboard.
I’ve always relied on multiple layers of protection for my used drives (currently ZFS raidz2, which itself provides multiple checks, but I also do backups of the really important stuff). It doesn’t matter though: all drives, new or old, are going to fail, and you just have to be ready for it. The worst case is multiple drives failing at once, and I had that happen several times when using a weak power supply.
So far I’ve been really happy with the refurbs from Amazon though, plus the NAS is nothing to sneeze at. I upgraded the server to a newer machine, then realized that allowed me to step up to a newer family of SAS cards. Basically went from a machine that could push data at 70MB/s (and was constantly behind) to a new machine pushing 450MB/s or more with almost no lag. I run a lot of stuff on my home network so it’s been nice having the new speed, and the zfs pools are providing around 92TB on one set and 22TB on another set, so I have room to go crazy. If I had to buy new drives I’d have maybe half that amount of space.
Check the SMART info on the drives you receive; if they already show signs of failure, return them immediately.
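smartmontools makes this a quick check; replace sdX with the actual device:

```
smartctl -H /dev/sdX        # overall health verdict
smartctl -A /dev/sdX | grep -Ei 'reallocated|pending|uncorrect|power_on'
```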
For “reputable” sellers, I typically go with ones who are selling drives in bulk and have a history of more than five minutes with lots of recent good reviews. I took a chance on a good deal for a “new” drive once and received an obviously used drive where the previous person had cut out the SAS bridge (these pins are required to power on some SAS models like what I bought, so the drive was a paperweight). You’ll get some lemons, but I’ve been running mostly used drives on my fileserver for the past twenty years and had reasonably good luck from bulk sellers (and easy replacements when I got a bad drive from one of them).
Oh, you might also check refurbs from Amazon. My current fileserver is running a set of eight 18TB refurbs which were significantly cheaper at the time, but the drive model itself was only a year old so I knew there couldn’t be much wear on what I received. And Amazon has a good return policy.
Want to see shit really hit the fan? Imagine if Taiwan applied for NATO membership. They already operate as a de facto independent entity and I assume are aligned with many Western goals since they conduct joint military exercises with the US, but China would go crazy over any attempt at NATO membership.
Makes me wonder if there are any rules to prevent acceptance if a country is attacked because it asked to join? Like I know Ukraine can’t join right now because they’re already at war (despite whatever Russia wants to call it), but I think Taiwan is not officially at war with anyone.
I’ve never used TrueNAS, but my experience with ZFS is that it couldn’t care less what order the drives are detected by the operating system. You could always shut down the machine, swap two drives around, boot back up, and see if the pool comes back online. If it fails, shut it back down and put the drives in their original locations.
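If you’d rather take device ordering out of the equation entirely, you can also re-import the pool using persistent identifiers; a minimal sketch, with “tank” standing in for your pool name:

```
# Re-import using /dev/disk/by-id so drive letters no longer matter
zpool export tank
zpool import -d /dev/disk/by-id tank
zpool status tank
```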
If you are moving your data to new (larger) drives, before anything else you should take the opportunity to play with the new drives and find the ZFS settings that work well. I think recordsize is autodetected these days, but maybe for your use things like dedup, atime, and relatime can be turned off, and do you even need xattr? If you’re using 4096-byte block sizes, did you partition the drives starting at sector 2048? Did you turn off compression if you don’t need it? Also consider your hardware: if you have multiple connection ports, can you get a speed increase by spreading out the drives so you don’t saturate any particular channel?
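For reference, most of those knobs are one-liners; “tank/media” below is a hypothetical dataset, so adjust names and values to whatever your testing shows:

```
zfs set atime=off tank/media
zfs set relatime=off tank/media
zfs set dedup=off tank/media
zfs set xattr=sa tank/media          # common Linux performance tweak if you keep xattrs
zfs set compression=off tank/media   # lz4 is nearly free, so benchmark before disabling
zpool get ashift tank                # 12 means the pool is aligned for 4096-byte sectors
```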
Newer hardware by itself can make a huge difference too. My last upgrade took me from PCIe x4 to x16 slots, allowing me to upgrade to SAS3 cards, and overall went from around 70MB/s to 460MB/s transfer speeds with enough hardware to manage up to 40 drives. Turns out the new configuration also uses much less power, so a big win all around.
You got me thinking, so I did a search and ran across this page: https://www.hanssonit.se/nextcloud-vm/ I’m not sure how old these releases are, but at the very least it might provide some hints for building your own? I’m going to keep looking to see if I can find an image built on Debian, but at least now I know some options are out there.
[Edit] I also ran across this page, which builds a VM for you using an Ubuntu machine, so I’m guessing I could probably adjust it to a Debian setup fairly easily. https://github.com/nextcloud/vm
I was considering PoE as an option, and this camera does have an ethernet port (although I can’t tell yet whether that’s only for configuration or if the video will also stream over it directly). I don’t really need a constant stream, and this camera also provides motion options, so maybe it would only send video as needed (although during a heavy storm all of the cameras would probably fire at once).
I played with Zoneminder years ago but would like to get something set up for home security. I have a full internal network plus servers and about 60TB of free storage space, so there are really no limitations on what I could set up. Ideally I’d like to just hit a local IP from a cell phone to check the cameras (remote access isn’t really needed), so that’s where I was trying to go with my previous questions.
The software side seems easy enough, but finding compatible IP cameras has been stumping me. I see the Reolink 4K TrackMix wifi cameras on Amazon for $130, and other than a few hiccups it looks likely that this piece of hardware would work, unless anyone knows of any “gotchas” that I’ve missed? Otherwise I’ll do a bit more research and then order one of the cameras to see how far I can get with it.
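If I do order one, the first sanity check will probably be hitting its RTSP stream directly; the URL path below is just the commonly reported Reolink pattern, so treat it as a guess until verified against the manual (credentials and IP are placeholders too):

```
# Probe the camera's main RTSP stream
ffprobe -v error -show_streams "rtsp://admin:password@192.168.1.50:554/h264Preview_01_main"
```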
A lot of it will depend on what age of hardware you are looking for, especially the price. Last year I upgraded all my machines to PowerEdge R620 servers. These are old enough that you can find a lot of options for CPUs and memory dirt cheap, and you can find them with either 2.5" or 3.5" internal hot-swap bays. The 6xx series is a 1U chassis and the 7xx series is 2U. If power is a concern, I run some VM servers with around 10 VMs in use, 64GB of memory, and a pair of six-core Xeon E5-2630L v2 processors (2.4GHz, low power) at around 84W, but there are plenty of options to customize to your needs unless you need current-generation horsepower. The PERC controller in them can be flashed to IT mode for full control, and I run ZFS through it for some of my machines. I built these machines for around $150-200 each and picked them up from ebay (there’s a US seller I can recommend if you’re interested in going that route).
Keep in mind the R* series are rack servers, but Dell also made tower versions of the same generation – I think those are labeled T620?
If you can work from the command line (and assuming you have a linux server) then SSH is simple – really all it does is give you a secure connection to the command line. You should get familiar with it because if something goes wrong with your server that may be the only way you can connect to it.
Next you need tools to transfer files to the server. wget is useful for grabbing stuff from other web servers, while something like scp can copy files to and from any host that also accepts ssh. I use this all the time to transfer files between home and work. Or you might set up an SFTP service to accept a GUI connection from a client like FileZilla.
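For reference, day-to-day use is only a handful of commands (host and paths below are placeholders):

```
ssh user@server.example.com                               # shell on the server
scp ./site.tar.gz user@server.example.com:/var/www/       # push a file up
scp user@server.example.com:/var/log/syslog .             # pull a file down
sftp user@server.example.com                              # interactive transfers; GUI clients speak this too
```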
As for what you can put on your web server… Well, if you install php then you can run any php code. If you write javascript code then the web browser interprets that, so there’s nothing to add to your server, but NodeJS code would require some installation. You also want to take some time to learn about security practices. For example, if you have pages that use a database and the code simply accepts any random input, an attacker can craft a URL that gains access to your server. There’s not really any limit to what can be run, but some things (like the php example) require you to install more components on your server.
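To make the database example concrete: if a page pastes its query string straight into SQL, a request as simple as the one below can start leaking data, which is why parameterized queries and input validation matter (the URL is purely illustrative):

```
# Classic injection probe against a page that trusts its "id" parameter
curl "https://example.com/item.php?id=1%20OR%201=1"
```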
Alternatively, there are also functional services you can run that have nothing to do with web pages. For example, a caldav service would allow you to host your own calendar that can be shared between multiple people or locations. Or maybe you want to start up a chat server like IRC or Matrix? Maybe you want to start a Mosquitto server for your personal IoT content? Think of it this way – literally anything and everything that makes the internet run is something you can host yourself.
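The Mosquitto suggestion is an easy one to see working; with a broker running on localhost with default settings, two terminals are enough for a smoke test:

```
# Terminal 1: watch everything under test/
mosquitto_sub -h localhost -t 'test/#' -v
# Terminal 2: publish a message
mosquitto_pub -h localhost -t 'test/hello' -m 'hello from my server'
```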
There was no such thing as a default firewall, but even now when I set up a new Debian machine there are no firewall rules, just the base iptables installed so you CAN add rules. Back then we also had insecure things like telnet installed by default and exposed to the world, so there’s really no telling exactly how they managed to get into my machine. If you’re planning on self-hosting, it’s still good to learn about network security up front rather than relying on any default settings.
This was back in '99 and I didn’t know much about linux (or servers) at the time, so I’m not exactly sure what they did… but one morning I woke up and noticed my web service wasn’t working. I had an active login on the terminal but was just getting garbage from it, and I couldn’t log in remotely at all. My guess was that someone hacked in, but hacked the system so badly that they basically trashed it. I was able to recover a little data straight from the drive, but I didn’t know anything about analyzing the damage to figure out what happened, so I finally ended up wiping the drive and starting over.
At that point I did a speed-run of learning how to set up a firewall, and noticed right away all kinds of attempts to hit my IP. It took time to learn more about IDS and to not be too reckless in setting up my web pages, but apparently it was enough to thwart however that first attacker got in. Eventually I moved to a dedicated firewall in front of multiple servers.
Since then I’ve had a couple instances where someone cracked a user password and started sending spam through, but fail2ban stopped that. And boy are there a LOT of attempts at trying to get into the servers. I should probably bump up fail2ban to block IPs faster and for a longer period when they use invalid user names, since attacks these days come from such a wide range of IPs.
I see a number of comments to use a virtual server host, but I have not seen any mention of the main reason WHY this is advisable… If you want to host something from your home, people need a way to reach you. There are two options for this – use a DDNS service (generally frowned upon for permanent installations), or get a static IP address from your provider.
DDNS means you have to monitor whenever your local IP address changes, send out updated records, and wait for those changes to propagate across the internet. This generally will mean several minutes or more of down time where nobody can reach your server, and can happen at completely random times.
A static IP is reliable, but they cost money, and some providers won’t even give you the option unless you get a business-class connection, which costs even more money. However this cost is usually already rolled into the price of a virtual machine.
Keep in mind also that when hosting at home, simply using a laptop to stay online 24/7 is not enough; you also need a battery backup for your network equipment. You will want to learn about setting up a firewall and some kind of IDS to protect the front end of your services, but for starting out you can host this on the same machine as your other services. And if you really want to be safe, set up a second internal machine that you can perform regular backups to, so when your machine gets hacked you have a way to restore the information.
My first server was online for two whole weeks before someone blew it up. Learn security first, everything after that will be easy.
I dunno, like I said zfs is pretty damn good at recovery. If the drives simply drop out but there’s no hardware fault you should be able to clear the errors and bring the pool back up again. And the chances of two drives failing at the same time are pretty low. One of these days I do need to buy a spare to have on hand though. Maybe I’ll even swap out one drive just to see how long it takes to rebuild.
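For anyone following along, the recovery and replacement steps amount to a few commands (“tank” and the disk IDs are placeholders):

```
zpool status -v tank                          # see which devices faulted and why
zpool clear tank                              # clear transient errors, trigger a resilver
zpool replace tank ata-OLD_DISK ata-NEW_DISK  # swap in the spare when a drive actually dies
zpool status tank                             # watch the resilver progress
```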
My current setup is eight 18TB Exos drives, all purchased from Amazon’s refurb shop, and running in a RAIDz2. I’m pulling about 450MB/s through various tests on a system that is in use. I’ve been running this about a year now and smartd hasn’t detected any issues. I have almost never run new drives for my storage and the only time I’ve ever lost data was back when I was running mdadm and a power glitch broke the sync on multiple drives so the array couldn’t be recovered. With zfs I have even run a RAID0 with five drives which saw multiple power incidents (before I got a redundant power supply) and I never once lost anything because of zfs’ awesome error detection.
So yes, used drives can be just fine as long as you do your research on the drive models, have a very solid power supply, and are configured for hot-swapping so you can replace a drive when they fail. Of course that’s solid advice even for brand new drives, but my last set of used drives (also from ebay) lasted about a decade before it was time to upgrade. Sure, individual drives took a dump over that time, this was another set of eight and I replaced three of them, but the data was always safe.
No matter how you go about it, getting these drives set up to be reliable isn’t going to be cheap. If you want to run without an enclosure, at the very least (and assuming you are running Linux) you are going to want something like LSI SAS cards with external ports, preferably a 4-port card (around $50-$100, each port will run four drives) that you can flash into IT mode. You will need matching splitter cables (3x $25 each). And most importantly you need a VERY solid power supply, preferably something with redundancy (probably $100 or more). These prices are based on used hardware from ebay, except for the cables, and you’ll have to do some considerable research to learn how to flash the SAS cards, and which ones can be flashed.
Of course this is very bare-bones, you won’t have a case to mount the drives in, and splitter cables from the power supply can be finicky, but with time and experience it can be made to work very well. My current NAS is capable of handling up to 32 external and 8 internal drives and I’m using 3D-printed drive cages with some cheap SATA2 backplanes to finally get a rock-solid setup. It takes a lot of work and experience to do things cheaply.
This right here. As a member of the OpenNIC project, I used to run an open resolver and this required a lot of hands-on maintenance. Basically what happens is someone sends a very small packet requesting the lookup of something which returns a huge amount of data (like DNSSEC records). They can make thousands of these requests in a short period, attempting to flood out the target domain’s DNS servers and effectively take them offline, using your open server to do the dirty work.
At the very least, you need to have strict rate-limiting controls on DNS lookups. And since the requests come in through UDP, they can spoof their IP address so you can’t simply block an attacker. When I ran into this issue, I wrote up scripts to monitor for a lot of requests to the same domain name and outright block those until the attack stopped. It wasn’t a great solution, but it did at least make sure my system wasn’t contributing to an attack.
Your best bet is to only respond to DNS requests for your own domain(s). If you really want an open resolver, think about limiting it by creating some sort of sign-up method (for instance, ddns servers use a specific URL to register the changing IP of known users), but still keep the rate-limiting in place.
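If the server happens to be BIND, the “answer only for your own zones” plus rate-limiting advice translates to roughly this in named.conf; the numbers are arbitrary starting points, and Unbound or PowerDNS have their own equivalents:

```
options {
    recursion no;            // serve only zones you host; no open resolver
    allow-query { any; };
    rate-limit {
        responses-per-second 10;
        window 5;
    };
};
```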
You might want to use a code block instead of bullet points for your table, the way you presented it is unreadable but I found the info on your blog page.
One of my criteria for video formats is the portability. Like sometimes I might watch something through a web browser which natively supports x264. Yeah x265 provides better compression, and AV1 certainly looks interesting, but they both require the addition of codecs on most of my viewing devices and in some cases that’s not possible.
For most cases I’ve found that CRF25 with x264 works reasonably well. I tend to download 720p videos to watch on our 1080p TV and don’t notice the difference except in very minor situations like rapid motion on a solid-color background (usually only seen on movie studio logo screens). Any sort of animated shows can go even lower without noticeable degradation.
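For anyone who wants to try the same target, the ffmpeg invocation is roughly this (filenames are placeholders):

```
# H.264 at CRF 25, copy the audio untouched
ffmpeg -i input.mkv -c:v libx264 -crf 25 -preset slow -c:a copy output.mkv
```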
Wait, there’s an option to host Overleaf locally? Is there any cost associated with this, or any restrictions on the number of users?
[Edit] Found some more info on this, there’s a free community version, and then an enterprise version with a fee that lets you self-host but adds features like SSO and support from the company. I’ll definitely have to look more into both of these options. Thanks, OP, for making me aware of this!
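The community version looks to be deployed through their Docker-based toolkit (https://github.com/overleaf/toolkit); a rough sketch of the quick start, which you should verify against the repo before trusting it:

```
git clone https://github.com/overleaf/toolkit.git overleaf-toolkit
cd overleaf-toolkit
bin/init    # generate the default config
bin/up      # docker compose up; the first run pulls the images
```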
The key concept here is how valuable your time is to rebuild your collection. I have a ~92TB (8x16 raidz2) array with about 33TB of downloaded data that has never been backed up as it migrated from my original cluster of 250GB drives through to today. I think part of the key is to have a spare drive on hand and ready to go, to be swapped in as soon as a problem shows up, plus email alerts when a drive goes down so you’re aware right away.
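For the email alerts, smartd can handle that out of the box; a minimal /etc/smartd.conf along these lines (the address is a placeholder):

```
# Scan all drives, short self-test nightly at 2am, email on any problem
DEVICESCAN -a -o on -S on -s (S/../.././02) -m admin@example.com
```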
To add a little more perspective to my setup (and nightmare fuel for some people), I have always made my clusters from used drives, generally off ebay but the current batch comes from Amazon’s refurbished shop. Plus these drives all sit externally with cables from SAS cards. The good news is this year I finally built a 3D-printed rack to organize the drives, matched to some cheap backplane cards, so I have less chance of power issues. And power is key here, my own experience has shown that if you use a cheap desktop power supply for external drives, you WILL lose data. I now run a redundant PS from a server that puts out a lot more power than I need, and I haven’t lost anything since those original 250GB drives, nor have I had any concerns while rebuilding a failed drive or two. At one point during my last upgrade I had 27 HDDs spun up at once so I have a lot of confidence in this setup with the now-reduced drive count.
One promising item I found is a set of json files from Reuters…
This one provides info on the candidates and the key for state IDs: https://graphics.thomsonreuters.com/data/2024/us-elections/production/events/20241105/metadata.json
This one seems like it will provide the ballot counts(0) and possibly any declared winners(1): https://graphics.thomsonreuters.com/data/2024/us-elections/production/events/20241105/summary-votes/president.json
Of course I won’t know anything for sure until tomorrow evening when states start releasing their counts, but I went ahead and wrote up some code to use the files. It’s something at least, and the Reuters data should be fairly timely. I hope to play around with the collected info in real time, then maybe next election I can re-use the same code.
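In case anyone else wants to poke at the same feeds, curl and jq give a quick look at the structure before writing anything more involved (the field names are whatever Reuters uses, so explore rather than assume):

```
curl -s https://graphics.thomsonreuters.com/data/2024/us-elections/production/events/20241105/metadata.json | jq 'keys'
curl -s https://graphics.thomsonreuters.com/data/2024/us-elections/production/events/20241105/summary-votes/president.json | jq '.' | head -n 40
```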