I’d argue it’s better to use actual alternatives. Half of the issue with free and open source software is that its userbase is too small. If more people used it, it could actually improve in many ways.
Let’s take gaming on Linux as an example. The Linux userbase on Steam is somewhere around 5%, so there is almost no incentive for developers to make games that run natively on Linux. It’s actually easier to run the games in a compatibility layer than to get a Linux port of a game. And although Wine and Proton work incredibly well, sometimes even running a game better than on Windows, a Linux-native version of every game would be ideal, and that will never happen with such a small userbase.
Next you have the terrible business practices of these companies. Even if you use the pirated versions, you are in their ecosystem and their community. You increase their profitability and their stock price simply by perpetuating the industry standard.
Pirating software like this is excusable if you need it for work or something. But imagine if, instead of staying with the status quo, you used and helped improve actual free and open source alternatives: versions of software that don’t steal your data or monetize how you use it by selling your input to others or scraping it for “AI” datasets.
Imagine using free and open source software that gives you freedom because your data stays on your devices, your creations belong only to you or whoever you choose to share them with, and you work with others to improve it, even if that’s just by submitting bug reports. Imagine using something you find so altruistically beneficial that instead of pirating software that has no respect for you, you donate money to the devs of free and open source software. Yes, I’m a pirate. But I do donate money to the right causes, and something that protects my freedom is worth both my time and my money.
I just used Kagi to search for the conversion and thought the long decimal was funny.
But now that I think of it, does Canada make its own 4 L jugs so they can be accurately advertised, or do they just use the US 1 gal jugs, call them 4 L out of convenience, and then write in fine print on the bottom that it’s actually 3.79 L?
Unless that is actually a 4L jug of vodka, couldn’t someone sue for misrepresenting the amount of product being sold?
Someone’s liquid here is probably not precise. And I’m going to guess it’s the one claiming to be a larger volume with an additional manufacturing cost.
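For the curious, the US gallon is defined as exactly 3.785411784 L, so a jug molded for 1 US gal but sold as “4 L” would be short by roughly 215 mL. A quick sanity check of that arithmetic:

```shell
# 1 US gallon is defined as exactly 231 cubic inches = 3.785411784 liters.
gallon_in_liters=3.785411784

# Shortfall if a 1 gal jug is labeled as 4 L, in milliliters:
awk -v g="$gallon_in_liters" 'BEGIN { printf "shortfall: %.0f mL\n", (4 - g) * 1000 }'
# prints: shortfall: 215 mL
```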
I recommend adding ollama under the artificial intelligence tag.
If you are already dipping your toes into containers and virtualization with KVM and Proxmox, then perhaps you could jump into the deep end and look at Kubernetes (k8s).
Even though you say you don’t need production quality, it actually does a lot for you, and you only need to learn a single API framework, which has really great documentation.
Personally, if I am choosing a new service to host, one of my first metrics in that decision is how well it is documented.
You could also go the simple route and use Docker to make containers. However, making your own containers is optional, as most services have pre-built images that you can use.
You could even use autoscaling to run your cluster with just one node if you don’t need it to be highly available with a lot of nines in uptime.
The trickiest things with k8s are networking, certs, and DNS, but there are services you can host to take care of those for you. I use Istio for networking, cert-manager for certs, and external-dns for DNS.
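As a taste of what that looks like in practice, here is a minimal cert-manager ClusterIssuer sketch for Let’s Encrypt (the email address and the istio solver class are assumptions; adjust both for your own setup):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com            # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # secret where the ACME account key is stored
    solvers:
      - http01:
          ingress:
            class: istio              # assumes istio is your ingress class
```

Once this issuer exists, annotating an Ingress with it is enough for cert-manager to request and renew certificates on its own.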
I would recommend trying out k8s first on a cloud provider like DigitalOcean or Linode. Managing your own k8s control plane on bare metal has its own complications.
I would say that if you are going to host it at home, then Kubernetes is more complex; bare-metal Kubernetes control plane management has some pitfalls. But if you were to use a cloud provider like Linode or DigitalOcean and use their Kubernetes service, then the only real extra complexity is learning how to manage Kubernetes, which is minimal.
There is a decent hardware investment needed to run Kubernetes if you want it to be fully HA (which I would argue means a minimum of two clusters of three nodes each, on different continents), but you could run a single-node cluster with autoscaling at a cloud provider if you don’t need HA. I will say it’s nice not to have to worry about a service failing periodically, as it will just transfer to another node in a few seconds automatically.
Well, the Kubernetes API mostly has all the necessary parts built in, although sometimes you may want to install a custom resource, which often comes with complex service installs.
But I think the biggest strength of Kubernetes is all the FOSS projects that are available for it, specifically external-dns, cert-manager, and Istio. These are separate projects and will have to be installed after the cluster is up.
You can also look at the Cloud Native Computing Foundation’s list of projects. It’s a good list of things that work well.
Caution: not all cloud providers support Istio. I know that Google’s GKE doesn’t; they make you use their own fork of it.
I would also recommend you avoid Helm if possible, as it obfuscates what the cluster is doing and might make learning harder. Try to stick to using kubectl if possible.
I have heard good things about nomad too but I have yet to try it.
You should try out all the options you listed and the other recommendations and find what works best for you.
I personally use Kubernetes. It can be overwhelming, but if you’re willing to learn some new jargon, then try a managed Kubernetes cluster, like AKS or DigitalOcean Kubernetes. I would avoid managing a Kubernetes cluster yourself.
Kubernetes gets a lot of flak for being overly complicated, but that criticism overlooks all the things Kubernetes does for you.
If you can spin up Kubernetes with cert-manager, external-dns, and an ingress controller like Istio, then you’ve got a whole automated data center for your Docker containers.
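To illustrate that “automated data center” point, here is a sketch of a single Ingress (hostname, issuer, and service names are all made up): external-dns sees the hostname annotation and creates the DNS record, while cert-manager sees the issuer annotation and provisions the TLS certificate, with no manual steps.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp
  annotations:
    kubernetes.io/ingress.class: istio                    # route through istio
    cert-manager.io/cluster-issuer: letsencrypt-prod      # assumes this issuer exists
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com
spec:
  tls:
    - hosts:
        - myapp.example.com
      secretName: myapp-tls     # cert-manager writes the certificate here
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp     # hypothetical backend service
                port:
                  number: 80
```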
Check out ollama.
There are a lot of models you can pull from the official library.
Using ollama, you can also run external GGUF models found on places like Hugging Face if you use a modelfile, with something as simple as

```shell
echo "FROM ~/Documents/ollama/models/$model_filepath" >| ~/Documents/ollama/modelfiles/$model_name.modelfile
```
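Rounding out that sketch: after writing the modelfile, you register it with `ollama create` and then run it. The paths and model name below are hypothetical; substitute your own GGUF file.

```shell
# Hypothetical GGUF file and model name; substitute your own.
model_filepath="$HOME/Documents/ollama/models/example.Q4_K_M.gguf"
model_name="example-q4"
modelfile="$HOME/Documents/ollama/modelfiles/$model_name.modelfile"

# Write a minimal modelfile pointing at the local GGUF weights.
mkdir -p "$(dirname "$modelfile")"
printf 'FROM %s\n' "$model_filepath" > "$modelfile"

# Then register the model with ollama and chat with it:
#   ollama create "$model_name" -f "$modelfile"
#   ollama run "$model_name"
```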
Pass for personal use is great, especially if paired with a self-hosted private git repo like Gitea.
Pass works well on all platforms I’ve tried, even Android and WSL (although I’ve not tried it on iPhone).
In a corporate setting, the biggest questions are going to be: is there already a secret store with an API, will security let you roll your own, how is it allowed to be networked, who are the preferred vendors, and is there any enterprise support available?
I kinda love it in theory.
Will be trying this out.
I do find it funny, however, that awk is lumped together with small-use-case tools like sed, grep, tr, cut, and rev, since awk can be used to replace all of these tools and is its own language.
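As an illustrative sketch of that point, a typical grep-plus-cut pipeline collapses into one awk invocation (the data file here is made up):

```shell
# Toy data file for the example.
printf 'alice 30\nbob 25\n' > /tmp/people.txt

# grep + cut pipeline:
grep '^alice' /tmp/people.txt | cut -d' ' -f2
# prints: 30

# The same thing in awk alone:
awk '/^alice/ { print $2 }' /tmp/people.txt
# prints: 30
```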
I don’t think the emphasis should be on simplicity, but rather on understandability (which long awk commands are not either).
If you give someone a bash script, they should be able to know exactly what the code will do when they read the script without having to run it or cat out the source it might need to parse. Using ubiquitous tools that many people understand is a good step.
Sadly, awk is installed by default in most distros, while tools like jq and jc would require installation.
Wait till you guys use cert-manager on a Kubernetes cluster.
If you like Obsidian but want a FOSS alternative, you might want to try out Emacs org-mode and org-roam.
Here is an example video: https://www.youtube.com/watch?v=AyhPmypHDEw
There is a learning curve, but Emacs org-mode sounds exactly like what you want.
With org-mode you can have your docs and your code in the same place or use your docs to create and link to different files.
You can even run your code inside your docs and have it execute on a networked computer without ever leaving your doc.
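As a taste of that, an org-mode source block can be pointed at a remote machine over TRAMP (the host below is made up); pressing `C-c C-c` inside the block executes it there and inserts the results underneath:

```org
#+BEGIN_SRC shell :dir /ssh:user@remote-host: :results output
uname -a
#+END_SRC
```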
https://www.youtube.com/watch?v=34zODp_lhqg
And with org-roam, you can keep the same functionality you’re used to from Obsidian.
Emacs is a bit of a rabbit hole, however, so if you want to keep things simple you could just use git.
Git has its own learning curve but it’s pretty much a requirement everywhere code is developed and released professionally so it’s a good idea to have some experience with it.
I’d suggest having different repos for your different projects, and either a single README.md file in each repo for all your docs or using the wiki feature that is built into most free git web UIs like GitHub and GitLab.
Once on git, it’s trivial to link to specific files or even individual lines in your repo.
Matrix, so you can chat privately.
I agree. I only use Linux, including for gaming. And I game a lot.