fired up my IRC client a few weeks ago, very quiet there but still running! what servers are worth joining these days?
also worth noting that Matrix (commonly used via the Element client) is built with similar ideals to IRC but with E2EE included.
Also important to remember that IRC has some privacy issues you may want to address before connecting, since any IRC user can see your connecting IP and ISP info.
worrying my head off about security because in the old days IPv6 had some issues, esp with basically putting every device on your network on the public internet with no firewall.
learned that years ago hardware makers started defaulting to blocking all traffic from the outside when IPv6 is enabled. Once I felt comfortable just turning it on I found it pretty easy to grasp, esp once the addresses stopped looking like random junk to my eyes.
Once I knew how things worked, actually exposing a specific system or set of ports to the internet was super easy, much easier than NAT + firewall.
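For example, on a Linux-based router that pinhole is a single forward rule. This is an nftables sketch, assuming the usual `filter` table and `forward` chain already exist; the address uses the documentation prefix and the port is a placeholder:

```shell
# allow new inbound IPv6 connections to one host's HTTPS port;
# everything else stays blocked by the default drop policy
nft add rule ip6 filter forward ip6 daddr 2001:db8::10 tcp dport 443 ct state new accept
```

No address rewriting involved, which is exactly why it feels simpler than the NAT port-forward dance.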
With my ISP, v6 unexpectedly brought a new level of privacy we had not had before. When you geolocate the IPs they show up in ISP datacenters all over the country: one day it looks like we are in VA, the next we are coming out of Seattle. We have yet to notice any speed or routing issues. IPv4 and IPv6 play well together, though once you turn on v6 you might find yourself enabling it on more VLANs than you planned because you want the features!
trying to do simple things seemed like a fight, and working examples were hard to come by.
Nginx “just worked”
the most confusing thing was figuring out scriptable processing (and the Lua vs JS back and forth; go with njs), however there are entire repos of common examples and solutions, which made it much more manageable.
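For anyone weighing njs: a minimal sketch of what scriptable processing looks like there, assuming a stock nginx build with the js module available. File names and the port are my own choices, not anything standard:

```nginx
# nginx.conf — load the njs module and hand one location to a JS handler
load_module modules/ngx_http_js_module.so;

events {}

http {
    js_import main from http.js;   # imports the module defined below

    server {
        listen 8080;
        location /hello {
            js_content main.hello; # njs generates the whole response
        }
    }
}
```

```js
// http.js — the handler referenced above
function hello(r) {
    r.return(200, "handled by njs\n");
}

export default { hello };
```

The example repos mentioned above mostly follow this same shape: a `js_import` in the config and exported handler functions in a sidecar `.js` file.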
i put this down to maturity and age: older projects just often have more docs, and their code bases have been molded to fit more cases (esp the strange ones) better.
When it comes to Cloudflare, I'm not sure you have much of a choice; I ran across errata re: Caddy a fair bit when setting up my latest proxy through them.
I found the install to be cake using the ansible playbook https://github.com/LemmyNet/lemmy-ansible
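For reference, the deploy is roughly: clone the repo, fill in the inventory and config templates, then run the playbook. The file names below are the repo's convention as I remember it, so double-check the README before running:

```shell
git clone https://github.com/LemmyNet/lemmy-ansible.git
cd lemmy-ansible
# edit inventory/hosts and the host vars per the README, then:
ansible-playbook -i inventory/hosts lemmy.yml --become
```

It handles Docker, the database, and the TLS proxy on the target host in one run, which is why it felt like cake.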
i see those instructions already use some of the assets from the ansible deploy.
RAID 5 is not as complicated as it seems. People going RAID 10 prefer the performance increase, though that varies based on hardware. For general use RAID 5 is easy to set up and not hard to understand, as you don't really need to understand it beyond knowing how quickly you need to move once disks start to fail.
If you are backing up there is nothing to worry about other than being sure to buy drives suited to your usage. If you are going SSD, the type of memory used will matter quite a bit. Pick your hardware right and an array of any config will run reliably for years to come.
there are a bunch of things to consider, which is why it seems so complicated. Things like: do you prefer more storage or more live redundancy (aka how many disks can you lose before it can't recover)? There are also performance concerns you may or may not care about.
Hopefully someone familiar with QNAP’s idiosyncrasies chimes in as it sometimes matters when making these choices.
If you don't care about all that and just want solid redundancy without needing the most blistering performance, RAID 5 is always a good go-to. You will hear a lot of back and forth on other mixes that work as well, and they are worth considering if you care about any of the factors I've mentioned.
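To make the storage-vs-redundancy trade-off concrete, here's a quick sketch of usable capacity and fault tolerance per level. It assumes equal-sized disks and uses a hypothetical 4x5TB array; real controllers round down to the smallest member:

```python
# Rough usable-capacity and fault-tolerance comparison for common RAID levels.
# Assumes all disks are the same size.

def raid_summary(level: str, disks: int, size_tb: float):
    """Return (usable_tb, guaranteed_survivable_failures) for a layout."""
    if level == "raid0":
        return disks * size_tb, 0            # striping only, no redundancy
    if level == "raid5":
        return (disks - 1) * size_tb, 1      # one disk's worth of parity
    if level == "raid6":
        return (disks - 2) * size_tb, 2      # two disks' worth of parity
    if level == "raid10":
        return disks / 2 * size_tb, 1        # mirrors halve capacity;
                                             # worst case is both halves of one mirror
    raise ValueError(f"unknown level: {level}")

for level in ("raid0", "raid5", "raid6", "raid10"):
    usable, losses = raid_summary(level, disks=4, size_tb=5.0)
    print(f"{level:>6}: {usable:.0f} TB usable, survives {losses} disk failure(s)")
```

For a 4x5TB box that works out to 15 TB usable on RAID 5 vs 10 TB on RAID 6 or RAID 10, which is the storage-vs-redundancy choice in a nutshell.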
Also something to keep in mind: if you plan to do full cloud backups you can play with your arrangement a bit and figure out what you want. Simply rebuild your array and load the cloud backup. It's time consuming, so only go there if you really want to try other configs.
work yourself up to 4x5TB and configure redundant RAID. Also worth reading up on 3-2-1 backups (3 copies of your data, on 2 different media, with 1 offsite).
oldskool, for something like this you can throw an old nuc on the network
multiple cores are the norm even on budget hardware, so a surprising amount of cheap hardware is quite capable.
highly recommend looking into 1L systems. I moved in this direction after realizing i was headed down the same path as you.
agree on many of these points. the biggest thing I would want from a system like this is license portability. the binaries can be shuttled around relatively easily, and scaling static file delivery is practically point-and-click these days.
a way to track license ownership so i could play on whatever service (likely with a nominal fee to cover storage, compute, etc.) at the quality level i paid for would be amazing. it could also open up more direct customer opportunities for developers, since in a system like this i'd expect the devs to control their own sales, cuts, etc. With cloud gaming going the way it is, this would be a huge enabler for the market if the entrenched players could let it grow.
I think the short answer there is yes, but the qualified answer is "it might be harder or have a gotcha."
I've used a few tutorials from here and they are usually on-point when up-to-date, as this one is. They actually have walkthroughs on doing docker installs, so I can only imagine the reason here is something to do with how they wanted to set things up that made the container manager too much.
We are dealing with a setup where the general public will be hitting your NAS; this may be a way to secure things, but I'm only guessing. We could try reaching out.
switching to a khadas vim4 myself.