As others have mentioned, it's important to highlight the difference between a sync (basically a replica of the source) and a true backup, which retains historical data.
As far as tools go, if the device is running OMV you might want to start by looking at the options within OMV itself to achieve this. A quick Google search hinted at a backup plugin that some people seem to be using.
If you're going to be replicating to a remote NAS over the Internet, try to use a site-to-site VPN for this and do not expose file sharing services to the internet (for example by port forwarding). It's not safe to do so these days.
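Just to give a feel for what that looks like in practice, here's a minimal sketch of one end of a WireGuard site-to-site tunnel. Every key, address and subnet below is a placeholder I've made up for illustration, not a drop-in config:

```ini
[Interface]
# "Home" NAS side of the tunnel
PrivateKey = <home-site-private-key>
Address = 10.99.0.1/24
ListenPort = 51820

[Peer]
# Remote NAS site; route its LAN over the tunnel
PublicKey = <remote-site-public-key>
AllowedIPs = 10.99.0.2/32, 192.168.50.0/24
PersistentKeepalive = 25
```

With something like that in place, the replication job just targets the remote NAS's private address over the tunnel, and nothing file-sharing related ever has to be reachable from the open internet.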
The questions you need to ask first are:
Once you know that, you will be able to determine:
I hope I haven't overwhelmed, discouraged or confused you further, and feel free to ask as many questions as you need. Protecting your data isn't fun, but it is important, and it's a good choice you're making to look into it.
Back in the day when the self-hosted $10 license existed, I was using JIRA Service Desk to do this. As far as ticketing systems go, it was very easy to work with and didn't slow me down too much.
I know you don’t want a ticket system but I’m just curious what other people will suggest because I’m in the same boat as you.
Currently I haphazardly use Joplin to take very loose notes and sync them to Nextcloud.
If you want a very simple option with minimal setup and overhead, you could use Joplin to create separate notes for each "part" of your lab and just add a new line with a date, time and summary of the change.
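For example, a per-host note might end up looking something like this (the hostname, dates and changes are all made up, and the format is entirely up to you):

```
## pve-node-01
2024-03-12 21:40 - Upgraded Proxmox VE, rebooted
2024-02-03 09:15 - Replaced failed 4 TB drive in the ZFS mirror
```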
I do also use SnipeIT to track all my hardware and parts, which allows you to add notes and service history against the hardware asset.
Other than that, I'm keen to see what everyone else says.
Power
Network
Storage
Compute
A second prod host will join the R520 soon to add some redundancy and mirror the Virtual SAN.
All VMs are backed up and kept in an encrypted on-site data store for at least 4 weeks. They’re duplicated to tape (encrypted) once a month and taken off site. Those are kept for 1 year minimum. Cloud backup storage will never replace tape in my setup.
Services
As far as “public facing” goes, the list is very short:
Though I do run around 30-40 services all up on this setup (not including actual non-prod lab things that are on other servers or various SBCs around the place).
If I had unlimited free electricity and no functioning ears I’d be using my Cisco UCS chassis and Nexus 5K switch/fabric extenders. But it just isn’t meant to be (for now, haha).
Depends on your use case, but you can use some Group Policy Objects on Linux (at least with sssd). See: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/windows_integration_guide/sssd-gpo
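If you do go down that path, the relevant switch lives in sssd.conf. Roughly something like this, with the domain name as a placeholder and distro defaults varying:

```ini
[domain/example.com]
access_provider = ad
# Evaluate AD logon-rights GPOs when deciding who may log in;
# "permissive" only logs the result, "enforcing" actually blocks access
ad_gpo_access_control = enforcing
```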
You can also grant sudo to AD group members in the sudoers file, which is how I’ve done it in a corporate setting.
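As a rough sketch of that (group and domain names made up, and the exact group format depends on whether sssd is using fully qualified names), it can be as little as one line in a file under /etc/sudoers.d/:

```
# Members of the AD group "linux-admins" get full sudo
%linux-admins@example.com ALL=(ALL:ALL) ALL
```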
I believe there are 3rd party ADMX templates you can add to your domain controllers to get more granular as well as additions to the AD schema, but I haven’t gone that deep with it since between sssd and the sudoers file I can achieve what I need to.
Authelia is popular, as is Keycloak. I believe Red Hat develops Keycloak or at least has a hand in it.
I'm on this journey as well, figuring out what I'm going to use. Currently most of my services just use LDAP back to AD, but I'm looking to do something more modern like SAML, OAuth or OpenID Connect so that I can reduce the number of MFA tokens I have.
Just as an anecdote you may find useful: I personally used to run Active Directory for my Windows machines and FreeIPA for my Linux machines, and I've managed to simplify this down to just AD. Linux machines can be joined, you can still use sudo and all the other good stuff, while only having one source of truth for identity.
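For anyone curious, joining a Linux box is usually just realmd plus sssd; something along these lines, with the domain and admin account as placeholders:

```bash
# Discover the domain, then join it (this configures sssd behind the scenes)
sudo realm discover example.com
sudo realm join example.com -U Administrator

# Sanity check: resolve an AD user through sssd
id someuser@example.com
```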
Thanks for letting me know (and to the others that did as well). I might be able to jump sooner than anticipated; I'll check my client tonight for the feature. I'm using it on the Apple TV, I think it's the Swiftfin flavour of the client.
As a side note, it sure is a refreshing change to not be downvoted into oblivion for simply having out-of-date information, and to instead be respectfully corrected in the comments.
Zabbix can do everything you’re asking and can be connected to Grafana if you want custom visualisations. Most importantly, it contextualises what you need to know on the dashboard, as in it only tells you about things that require your attention.
You’re of course able to dive into the data and look at raw values or graphs if you wish, and can build custom dashboards too.
I've used it in both home lab and production scenarios monitoring small to mid-size private clouds, including Windows and Linux hosts, Docker, backups, SAN arrays, switches, VMware vSphere, firewalls, the lot. It's extremely powerful and not terribly manual to set up.
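To give a feel for the setup effort, pointing an agent at the server is typically only a few lines in zabbix_agent2.conf (the hostnames below are placeholders):

```ini
# /etc/zabbix/zabbix_agent2.conf
Server=zabbix.example.com        # which server may poll this agent (passive checks)
ServerActive=zabbix.example.com  # where the agent pushes data (active checks)
Hostname=docker-host-01          # must match the host name configured in Zabbix
```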
If metrics are all you want and you aren't too fussed about the proactive monitoring focus, Netdata is a great option for getting up and running quickly.
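As a sketch of how quick that can be, the container route is essentially one command. The official docs add extra mounts and capabilities for fuller metrics, so treat this as the bare minimum:

```bash
docker run -d --name=netdata \
  -p 19999:19999 \
  netdata/netdata
# Dashboard is then available at http://<host>:19999
```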
Can't argue with you there :P but I guess what I mean is that, from a service standpoint, Gmail is mail and ISPs provide internet.
For me personally, Google is not my friend and I run my own mail server on my own domain and have for years. It’s quite involved though if you want good deliverability.
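Most of the "involved" part is DNS and reputation. At a minimum you end up publishing records roughly like these, where example.com and the selector are placeholders and the DKIM public key comes from whatever signs your outbound mail (OpenDKIM, rspamd, etc.):

```
example.com.                  TXT  "v=spf1 mx -all"
_dmarc.example.com.           TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
mail._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<public key from your DKIM signer>"
```

On top of that you want matching forward and reverse DNS for the sending IP, and ideally an IP that isn't on a residential range, otherwise the big providers will bin your mail regardless.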
I think Proton is probably the happiest medium between privacy-respecting and all-out DIY mail server. Though I’m sure there are many others too :)
Whilst I agree and sympathise with people on how difficult it is to change your primary email address (been there), the outcome will be better for them. They will no longer be wedded to an ISP purely because all their mail goes there.
To liken it to something more tangible: when you move house, you need to change your mailing address. For renters, that can happen often and is just as painful. Or when your phone number changes and you have to update your contacts. The difference here is who is pulling the trigger: the end user versus the provider.
Gmail is a great option, as is Proton Mail for the security conscious and tech savvy.
This isn't to excuse the ISPs; it's a shitty move on their part, and the people using these mail accounts will likely be older, technically challenged folks, but it is a logical one from a technical perspective. They may also have inadvertently taken away the only thing creating stickiness between them and their customers, and driven them into the arms of another ISP.
I’m the admin of krabb.org, honestly I’m loving it. There is a learning curve, particularly for non-technical folks, but that will get easier as time goes on.
As an admin, it is far easier to “jump start” an empty Lemmy instance with content from other instances than it is to do with Mastodon and Pixelfed.
Where we need to improve is the mobile apps, documentation, and providing ways to make it easier for small instances to attract new users. These are all very much in the spotlight and improving every day (especially the apps), so I'm confident we can get there.
Tldr: it good, do like
Seconding this - for what it's worth (and I may be tarred and feathered for saying this here), I prefer commercial software for my backups.
I’ve used many, including:
What was important to me was:
Believe it or not, I landed on Backup Exec. Veeam was the only other one to even get close. I’ve been using BE for years now and it has never skipped a beat.
This most likely isn’t the solution for you, but I’m mentioning it just so you can get a feel for the sort of considerations I made when deciding how my setup would work.