I was using .local, but it conflicted too often with an mDNS service I host (and vice versa). I switched to .lan, but I’m certainly not going to switch to .internal unless another conflict surfaces.
I’ve also developed a host-monitoring solution that uses mDNS, so I’m not about to break my own software. 😅
Coincidentally, I just found this other thread that mentions EasyEffects: https://programming.dev/post/17612973
You might be able to use a virtual device to get it working for your use case.
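As a rough sketch (assuming a PipeWire or PulseAudio setup; the sink name here is made up), you can create a null sink and route audio through it:

```
# Create a virtual (null) sink; apps can play into it, and other
# software can capture from its monitor source (name is hypothetical)
pactl load-module module-null-sink sink_name=virtual_out \
    sink_properties=device.description=VirtualOutput

# Confirm the sink exists
pactl list short sinks
```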
I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)
https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to that guide, you don’t want to install a Linux Nvidia driver inside WSL; you only want to install the cuda-toolkit metapackage. I’d follow its instructions closely.
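A sketch of the apt route from that guide (assuming an Ubuntu distro under WSL2; the keyring version and package names drift over time, so defer to the guide for current ones):

```
# Inside the WSL2 distro: do NOT install a Linux Nvidia driver here;
# the Windows host driver is shared into WSL. Only the toolkit is needed.
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
sudo apt-get install -y cuda-toolkit
```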
You may also run into performance issues within WSL due to the virtual machine overhead.
There’s a container web UI called Portainer, but I’ve never used it. It may be what you’re looking for.
I also use a container called Watchtower to automatically update my services. Granted, there’s some risk in auto-updating, but I wrote a script that takes backup snapshots in case I need to revert, and Docker makes that easy with image tags.
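A minimal sketch (the Watchtower image is the upstream containrrr/watchtower; the image name and backup tag are made up):

```
# Watchtower watches the Docker socket and updates running containers
docker run -d \
  --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower

# Before an update, tag the current image so you can roll back to it later
docker tag myimage:latest myimage:backup-20240101
```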
There’s another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built-in healthcheck, but they’re easy to add with a custom Dockerfile or a docker-compose file.)
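A sketch of both pieces (assuming the willfarrell/autoheal image and its default label convention; the container names and health command are hypothetical):

```
# Autoheal watches for unhealthy containers carrying the autoheal label
docker run -d \
  --name autoheal \
  --restart=always \
  -e AUTOHEAL_CONTAINER_LABEL=autoheal \
  -v /var/run/docker.sock:/var/run/docker.sock \
  willfarrell/autoheal

# Give a container a healthcheck and opt it into autoheal; the health
# command runs inside the container, so curl must exist in the image
docker run -d \
  --name myapp \
  --label autoheal=true \
  --health-cmd='curl -f http://localhost:8080/ || exit 1' \
  --health-interval=30s \
  --health-timeout=5s \
  --health-retries=3 \
  myimage:latest
```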
It’s really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.
Granted, it’s a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.
I also orchestrate my containers further with Ansible, but that’s not necessary for everyone.
You can tinker inside a running container in a variety of ways, but make sure to preserve any state you care about outside the container:
```
docker exec -it containerName /bin/bash
```
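For example, a named volume keeps the data on the host even if the container is destroyed (names here are hypothetical):

```
# Anything the app writes to /data lands in the named volume,
# which outlives the container itself
docker volume create myapp-data
docker run -d --name myapp -v myapp-data:/data myimage:latest
```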
Yes, you can set a variety of resource constraints, including but not limited to processor and memory utilization.
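For example (hypothetical names), CPU and memory caps at run time:

```
# Limit the container to 1.5 CPUs and 512 MB of RAM
docker run -d --name myapp --cpus=1.5 --memory=512m myimage:latest
```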
There’s no reason to “freeze” a container: if your state is in a host or volume mount, you can destroy the container, migrate your data, and resume it with a run command or docker-compose file. Different terminology and concept, but the same result.
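A sketch of that cycle with a bind mount (paths and names are hypothetical):

```
# State lives at /srv/myapp on the host, so the container is disposable
docker stop myapp && docker rm myapp

# Migrate /srv/myapp wherever it needs to go, then resume
docker run -d --name myapp -v /srv/myapp:/data myimage:latest
```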
It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.
Honestly, taking the time to learn Docker and then learn more about the specific containers you want to use is probably the easiest way forward in your position. If you have any specific questions about Docker or the containers you’re looking at, I can try to help.
When it comes to network mounts, I’ve found it a lot easier to use rclone, and that’s currently what I use for the backend of my Plex server.
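A minimal sketch (the remote name and paths are hypothetical; the flags you want depend on your workload):

```
# Expose a configured rclone remote as a local path Plex can read
rclone mount media:library /mnt/media \
  --daemon \
  --read-only \
  --vfs-cache-mode full
```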
I’m using https://www.kavitareader.com/ with Moon+ Reader. Kavita supports OPDS feeds, which is perfect.
I’m using a combination of:
It doesn’t quite say that, but I think the meaning is essentially the same: “Don’t choose a name after a project unique to that machine.” - RFC 1178
For my homelab, I think that’s fine to do. I’m unlikely to have multiple Plex servers locally, for example, and if I did, numbering them would be fine - I provision with Ansible, and if I’m at the point where I have sequentially numbered hosts, they’ll be configured as cattle anyway. Also, having the names reflect the services a host provides makes it easier to match hosts in my playbooks.
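For example (the playbook and host names are hypothetical), service-based names make targeting trivial:

```
# Run the playbook against every host whose name starts with "plex"
ansible-playbook site.yml --limit 'plex*'
```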
I think it’s a better scheme than turning to mythology, fiction, or animal species, which oddly enough RFC 1178 does encourage you to do.
It would be extremely barebones, but you can do something like this with Pandoc.
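A minimal sketch, assuming a Markdown source and an EPUB target (the filenames and title are made up):

```
# Convert a Markdown file to a barebones EPUB
pandoc book.md -o book.epub --metadata title="My Book"
```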