I can upload files outside of the docroot, but if they stay there for too long, I get a nasty email from Dreamhost reminding me that this is for web space and not offsite storage (something they also sell). I haven’t tried uploading something inside the docroot and just setting permissions to 400 or something!
Same. I have a MediaWiki install on the shared hosting still, but I haven’t updated it in forever. For the $10.99/month I’m paying for shared hosting, I could save a little and get a more powerful VPS to host similar stuff… Or just keep doing what I’m doing w/ my S12 Pro & Synology. Might look at some kind of failover down the road.
At the end of the day, it’s about what you’re comfortable working on. My daily driver is a MacBook Pro. I have a Beelink S12 Pro that runs most of my self-hosted stuff, and a Synology that runs a couple things. I also have an HP Z440 as a test-bed box (powered off unless I’m working on something). I’m comfortable working with Linux, and power draw was important for me in setting up my always-on server (my power bill is already high).
The only minor concern I would have with a Mac mini is that you’re limiting your support base. This isn’t to say there’s no support, there’s just less. Most self-hosters are using something like Unraid, a Beelink, or an old micro Dell/HP/Lenovo. Because of that, there’s a ton of stuff out there about getting various services running on those setups. The M-based mini environment is going to be a little more unique.
Just reread your comment, and I guess it’s the network that will cause issues. To be clear, I think I can make the Cloudflare portion work one way or another (I have a second domain I can use if necessary). If my thinking is correct, the tailnet communication would be over that IP space - not trying to route to my LAN. Unless I’m missing something.
So I learned today that I need to play with the Cloudflare tunnel if I want two systems using one domain. I’m hoping a second API key will help. Honestly, until I tested the second server on the tunnel, it’s been rock solid. Or are you saying using both networks will inject flakiness?
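For context, my (possibly wrong) understanding is that a single cloudflared tunnel can serve multiple hostnames on one domain via ingress rules in its config.yml. A rough sketch - hostnames, ports, and the tailnet IP below are all placeholders, not my real config:

tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/<tunnel-id>.json

ingress:
  # service on the box running cloudflared
  - hostname: app1.example.com
    service: http://localhost:8080
  # service on the second box, reached over the tailnet
  - hostname: app2.example.com
    service: http://100.64.0.2:8080
  # cloudflared requires a catch-all rule last
  - service: http_status:404

If that works, the second entry just points at the other box’s tailnet address, which is why I think traffic stays in the tailnet IP space rather than routing to my LAN.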
Also, I appreciate the suggestion of a clustered setup, but none of this is mission critical. If it’s down until I can log in and fix it, I’m ok with that. Only 2-3 people are using it.
I use nginx & docker-proxy because the model I copied used that setup. Having messed with it a bit, I’m understanding it more and more. Before that, the last time I messed with a web server (Apache), nginx wasn’t around. Lately, I’ve seen a docker setup similar to mine that doesn’t use docker-proxy. If I find time, I’ll probably play with that some on my dev rig.
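If I do try dropping docker-proxy, my rough plan (just a sketch with made-up names, not a tested config) is to put nginx and the app on the same compose network and point a plain server block at the service name:

services:
  nginx:
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      # default.conf holds a vanilla server block with:
      #   proxy_pass http://myapp:3000;
      - ./default.conf:/etc/nginx/conf.d/default.conf:ro
    networks:
      - web
  myapp:
    image: myapp:latest # hypothetical app image
    networks:
      - web

networks:
  web:
    driver: bridge

Docker’s embedded DNS resolves the service name (myapp) on that network, so nginx never needs to know the container’s IP.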
Whether or not they comply with law enforcement is not the issue. Any company will comply with their local law enforcement if they want to keep their doors open. What’s important is what data they keep on their users. Unless I’m mistaken, Nord, like many others, only keeps billing info and limited connection info for load-balancing purposes (deleted after something like 15 minutes). So if the Panamanian government (where they’re headquartered, and which IIRC has no data retention laws and isn’t part of Five Eyes) asks for logs, they will get something, but not much to tie a specific customer to anything.
Also, Nord has been independently audited multiple times in the past. Something quite a few other providers can’t say.
It’s popular to bash on Nord b/c they advertise a lot, but I haven’t seen a legit reason not to use them. If it exists, I’d love to see it.
There’s nothing wrong with the small PC/NAS route. Certainly more powerful and flexible. I’m currently running the *arr stuff in containers on a Synology 1520 (which also stores a bunch of other stuff), with Plex running on a Shield Pro. It’s pretty low power draw, and so far it does everything I need.
The main thing with running Plex on the NAS is transcoding - audio and/or video. Most NAS CPUs can’t transcode video in real time, so depending on what your Plex client is, you want to make sure everything you’re streaming can direct play.
Adding to this, there’s probably a general feeling that, especially with publicly traded companies (which Nord isn’t… yet), profit motive will inevitably cause a company to make decisions that don’t align with its customers’ best interests. The idealist in me thinks it’s possible for a company to be profitable without being shitty towards its customers. The cynic in me thinks there’s probably more profit in being shitty.
That said, profit keeps companies in business. If you’re getting it for free, you’re either the product, pirating it, or relying on others to keep it going. I won’t say paying for it guarantees future availability and development, but that profit motive also drives continued development. Kind of a double-edged sword, there.
Adobe and Microsoft only kinda care about you. You’re one person, and all the freelancers out there are still a fairly small slice of their revenue. If you’re a freelance worker, some of your customers might require you to show valid licenses for the software you use, because they want to make sure their partners are ethical (at least in this regard). Alternatively, you could use FOSS apps.
As someone else already said, if you are making money using commercial software, you really should be paying for it. The cost of your software should be factored into what you charge your customers. They should understand that.
FWIW, all of my *arr and VPN containers use the same network bridge. Prowlarr and the torrent client use the VPN service, though having Prowlarr on there is maybe overkill. They’re all able to access one another using the bridge gateway + port as the host, e.g. 172.20.0.1:5050
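That gateway address is predictable because the subnet is pinned in the compose file. Roughly - the network name and subnet here are just a sketch, use whatever yours are:

networks:
  servarrnetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16
          gateway: 172.20.0.1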
I mostly used this guide, where he suggests:
I have split out Prowlarr as you may want this running on a VPN connection if your ISP blocks certain indexers. If not copy this section into your compose as well. See my Gluetun guides for more information on adding to a VPN.
One thing I had to make sure of was that the ports for Prowlarr were included in the VPN container setup, rather than the Prowlarr section (b/c it’s just connecting to the VPN service):
ports:
  - 8888:8888/tcp # HTTP proxy
  - 8388:8388/tcp # Shadowsocks
  - 8388:8388/udp # Shadowsocks
  - 8090:8090 # port for qbittorrent
  - 9696:9696 # For Prowlarr
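For anyone who finds this later, the overall shape of it (a sketch, not my exact compose - image tags and env vars will vary by VPN provider) looks like:

services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=nordvpn # or whatever provider you use
      # provider credentials go here
    ports:
      - 8090:8090 # qbittorrent WebUI
      - 9696:9696 # Prowlarr

  prowlarr:
    image: lscr.io/linuxserver/prowlarr
    network_mode: "service:gluetun" # all traffic exits via the VPN container
    depends_on:
      - gluetun

Since Prowlarr shares gluetun’s network stack, its ports have to be published on the gluetun service, which is the gotcha I hit above.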
Damn! I missed that one. Working now. Thanks!