Sure thing - one thing I'll often do for stuff like this is spin up a VM. You can throw 4x1GiB virtual drives in it and play around with creating and managing a RAID using whatever you like. You can try out md, ZFS, and BTRFS without any risk - even Unraid.
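You can even skip the virtual drives and sandbox with loop devices inside the VM - a rough sketch (loop device names will vary on your system):

```
# Four 1GiB sparse files standing in for drives
for i in 1 2 3 4; do truncate -s 1G /tmp/disk$i.img; done

# Attach each file to the next free loop device
for i in 1 2 3 4; do sudo losetup -f /tmp/disk$i.img; done

# Play around - e.g. an md RAID5 across them
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3

# Tear it all down when you're done
sudo mdadm --stop /dev/md0
sudo losetup -D
```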
Another variable to consider as well - different RAID systems have different flexibility for reshaping the array. For example - if you wanted to add a disk later, or swap out old drives for new ones to increase space. It's yet another rabbit hole to go down, but something to keep in mind. Once we start talking about tens of terabytes of data, you run out of places to temporarily park it all if you ever need to recreate the array to change its layout. :-)
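With md, for instance, adding a fifth disk later looks roughly like this (hypothetical device names, assumes ext4 directly on the array; a reshape takes hours and you want backups first):

```
# Add the new disk, then reshape the array to include it
sudo mdadm --add /dev/md0 /dev/sdf
sudo mdadm --grow /dev/md0 --raid-devices=5

# Watch the reshape crawl along
cat /proc/mdstat

# When it finishes, grow the filesystem into the new space
sudo resize2fs /dev/md0
```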
Yeah - that's fair. I may have oversimplified a tad… The concepts behind RAID - the theory, the implementations, etc. - are pretty complicated. And there are many tools that do "RAID-like things," each with many options for RAID types… so the landscape has a lot of choices.
But once you've made a choice, the actual "setting it up" is usually pretty simple, and there's no real ongoing support or management you need to do beyond basic health monitoring, which you'd want even without a RAID (e.g. smartd). Any Linux system can create and use a RAID - you don't need anything special like Unraid. My old early-to-mid-2010s Debian box manages a RAID with NFS just fine.
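For the health-monitoring bit, a minimal sketch (device name and email are placeholders):

```
# One-off health check of a drive
sudo smartctl -H /dev/sda

# Or let smartd watch everything and email you on trouble.
# In /etc/smartd.conf:
#   DEVICESCAN -a -m you@example.com
sudo systemctl enable --now smartd   # the unit may be "smartmontools" on Debian
```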
If you decide you want a RAID, you first decide which "level" you want before talking about any specific implementations. This drives all of your future decisions, including which software you use. It basically comes down to 2 questions: how much budget do you have, and what is your fault tolerance?
e.g. I have a RAID5 because I'm cheap and wanted the biggest bang for the buck with some failure resiliency. RAID5 lets me lose one drive and recover, and I get the storage space of N-1 drives (one drive's worth of capacity goes to parity). The minimum size for a RAID5 is 3 drives. Wikipedia lists the standard RAID levels, which are "basically" standardized even though implementations vary.
I could have gone with RAID6 (minimum 4 disks), which can survive a 2-drive failure. I have off-site backups, so I decided the low probability of two drives failing at once means that option isn't necessary for me. If I'm that unlucky I'll restore from Backblaze. In 10+ years of managing my own fileserver I've never had more than 1 drive fail at a time. I've definitely had drives fail though (replaced one 2 weeks ago - it was basically a non-issue to fix).
Some folks are paranoid and go with mirroring (RAID1, RAID10, etc.), which involves basically full duplication of drives. Very safe, very expensive for the same amount of usable storage. But RAID1 can work with a minimum of 2 drives - it just mirrors them, so you get half the raw storage.
Next the question becomes - what RAID software to use? This is where there are lots of options and where things can get confusing. Many people have become oddly tribal about it as well. There's the traditional Linux "md" RAID, which I use; it operates underneath the filesystems. It basically takes my 4 disks and creates a new block device (/dev/md0) where I create my filesystems. It's "just a disk" as far as the OS is concerned, so you can put anything you want on it - I do LVM + ext4, but you could put BTRFS on it, ZFS, etc.
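As a sketch of that layering (hypothetical device/volume names - this destroys whatever is on the disks):

```
# Four disks become one block device
sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# LVM on top of the array...
sudo pvcreate /dev/md0
sudo vgcreate vg0 /dev/md0
sudo lvcreate -l 100%FREE -n data vg0

# ...and a plain filesystem on top of that
sudo mkfs.ext4 /dev/vg0/data
sudo mount /dev/vg0/data /mnt/data
```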
These days the trend is to let the filesystem handle your disk pooling rather than a separate layer. BTRFS will create a RAID (though its own docs caution against its RAID5/6 modes), as will ZFS. These filesystems basically fold the functionality I get from md and LVM into the filesystem itself.
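Roughly, the filesystem-level equivalents (pool names made up, devices hypothetical):

```
# ZFS: a raidz pool (RAID5-ish) across four disks
sudo zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde

# BTRFS: mirror data and metadata across two disks (RAID1)
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
```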
But there are also tools like Unraid that will provide a nice GUI and handle the details for you. I don’t know much about it though.
SSDs fail too. All storage is temporary…
Setting up a simple software RAID is so easy it's almost a no-brainer for the benefit, imho. There's nothing like seeing that a drive has problems, failing it out of the array, ordering a replacement, and just swapping it in and moving on. What would otherwise be hours of data copying, fixing things that broke, and discovering what wasn't backed up is now 10 minutes of swapping a disk.
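With md the whole dance is roughly (hypothetical devices):

```
# Kick the sick drive out of the array
sudo mdadm --manage /dev/md0 --fail /dev/sdc
sudo mdadm --manage /dev/md0 --remove /dev/sdc

# Swap the hardware, add the replacement, and let it rebuild
sudo mdadm --manage /dev/md0 --add /dev/sdc
watch cat /proc/mdstat
```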
A fairly common setup is something like this:
Internet -> nginx -> backend services.
nginx is the HTTPS endpoint and has all the certs. You can manage the certs with Let's Encrypt on that system. This box now handles all HTTPS traffic to and within your network.
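With certbot's nginx plugin that's close to a one-liner (domain is a placeholder):

```
# Get a cert and let certbot wire it into the nginx config
sudo certbot --nginx -d x.example.com

# Renewal is normally installed as a timer/cron job automatically; verify with
sudo certbot renew --dry-run
```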
The more paranoid will have parts of this setup all over the world, connected through VPNs so that “your IP is safe”. But it’s not necessary and costs more. Limit your exposure, ensure your services are up-to-date, and monitor logs.
fail2ban can give some peace of mind against SSH scanning and the like. If you're using keys to authenticate rather than passwords, though, you'll be okay either way.
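A minimal sketch, assuming the stock sshd jail:

```
# Enable the sshd jail with a few retries allowed
sudo tee /etc/fail2ban/jail.local >/dev/null <<'EOF'
[sshd]
enabled = true
maxretry = 5
bantime = 3600
EOF
sudo systemctl restart fail2ban

# See who's been caught
sudo fail2ban-client status sshd
```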
Update your servers daily. Automate it so you don't need to remember. Even a simple "doupdates" script that just does "apt-get update && apt-get upgrade && reboot" will be fine (though you can make it smarter about when it actually needs to reboot). Have its output mailed to you so you see any failures.
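Something along those lines (Debian-flavored; cron's MAILTO handles the "mailed to you" part, assuming mail delivery is set up):

```
#!/bin/sh
# /usr/local/bin/doupdates - run nightly from root's crontab, e.g.:
#   MAILTO=you@example.com
#   0 4 * * * /usr/local/bin/doupdates
set -e
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -y upgrade
# The "smarter" part: only reboot when something actually requires it
# (/var/run/reboot-required is a Debian/Ubuntu convention)
if [ -f /var/run/reboot-required ]; then
    reboot
fi
```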
You can register a cheap domain pretty easily, and then you can sub-domain the different services. nginx can point “x.example.com” to backend service X and “y.example.com” to backend service Y based on the hostname requested.
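A sketch of that routing (names and ports are placeholders; certbot --nginx will bolt the TLS bits onto each server block):

```
# /etc/nginx/conf.d/services.conf - route by requested hostname
server {
    listen 80;
    server_name x.example.com;
    location / { proxy_pass http://127.0.0.1:8080; }
}

server {
    listen 80;
    server_name y.example.com;
    location / { proxy_pass http://127.0.0.1:8081; }
}
```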
I'm at a bit of a loss as well… The official site talks more about Python, Rust, and open source than about what the project's point actually is.
> Pinepods is a complete podcast management system and allows you to play, download, and keep track of podcasts you (or any of your users) enjoy.
So does my podcast client - which is on my phone where I listen to podcasts.
> The likelihood of a risk in this proxy might be medium or even high according to you
It might be zero. It’s “unknown” (according to me I guess).
I've dug into the code a bit out of curiosity - it seems to me that "proxy" is a misnomer. It's a stripped-down "view" layer built on top of the API. It exposes the same endpoints as the main Immich app for shared things, so you can create links that work with it, which makes it kinda look like a proxy. But it's really just a "simplified public view" of sorts.
Meh.
> I like to judge software based on its actually merit and not on the theoretical possibility it is vulnerable
This is literally the entire justification for the project. It’s assuming theoretical vulnerabilities in Immich.
> I am not saying I would trust this software in a security critical situation
Which is the entire point of this software (a security-critical situation).
> just that your speculation means nothing
This project has zero community support. That’s not speculative, it’s a fact. “Every project starts somewhere” is just a tautology that means nothing. Every project that fails starts somewhere.
It’s some rando’s project that has existed for “nearly a month”, has no community, is unlikely to have any rapid response to any issues, and probably won’t be supported for more than a year.
But sure - go ahead and run it for “security purposes”.
You can "reduce surface area" by simply putting nginx or Apache (real, supported software) in front and blacklisting the endpoints you don't like.
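Something like this in the vhost (paths are illustrative; assumes Immich on its default port 2283):

```
# Cut off the endpoints you don't want reachable, pass the rest through
location /api/admin { return 403; }
location /metrics   { return 403; }
location /          { proxy_pass http://127.0.0.1:2283; }
```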
Proxies are not used for security by anyone but morons. Firewalls, WAFs, etc. all provide some sort of benefit. What is this application doing that is of use? Just “not exposing your server directly”? Well, it is being exposed directly now - so it’s a very secure application written by a security professional then? Or should I put it behind another proxy just to be sure? Maybe 7 proxies are enough?
OP is well-meaning - but this is a waste of time for anyone else to use. It's a solution in search of a problem.
> Like by reducing the attack surface on internal APIs?
This is my other favorite term the community has picked up and uses like it’s a mic drop without understanding it.
It’s a proxy my friend. It forwards requests to the other server. And you’ve added an untested personal project in front of it.
But wait! You don’t want to just expose your immich proxy to the internet do you? I’ll write DavesAwesomeProxy that you can put in front of that proxy! Will it be secure? Maybe. Will I support it? What’s with all the questions!
> Put it on a different server then. It prevents your Immich server from ever needing to be exposed publicly. That's the entire point.
This is stupid.
Repeat after me - proxies are not used for security.
This is a cargo-cult belief in this community. There's a weird sense that it's "dirty" to have a server exposed "directly" to the internet. But if I put it behind something else that forwards traffic to the server, then that's somehow safe!
Security is something you do, not something you have. The false sense of security from proxy bullshit like this crappy project is not giving you anything. You're taking a well-supported community project (Immich) and installing another app in front of it - one that appears to be some dude's personal project - and telling me that's more secure. As though that project is better written?
Install Immich. Forward ports to it (or proxy it with nginx if you need hostname routing - just don't expect that to be more secure), keep it up to date, and use good passwords.
At its simplest:
```
docker run -d --name servicename --restart unless-stopped image
```
That'll get you going. You'll have containers running, they restart on failure or reboot, etc. There are more sophisticated ways of doing things (create a systemd unit that starts/stops the container, use Kubernetes, etc.) but if you're just starting this will likely work fine.
Not sure what you heard, but I use ansible to push docker compose files to VMs and publish containers without issue. Works nicely.
I usually create systemd service files to start/stop the compose jobs and have ansible set it all up.
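A sketch of one of those units (name and path made up; assumes the compose v2 plugin):

```
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```

Then "systemctl enable --now myapp" and the stack comes up on boot like any other service.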