• 0 Posts
  • 32 Comments
Joined 1Y ago
Cake day: Jul 13, 2023


It would be extremely barebones, but you can do something like this with Pandoc.


That I agree with. Microsoft drafted the recommendation to use it for local networks, and Apple ignored it or co-opted it for mDNS.


Macs aren’t the only things that use mDNS, either. I have a host-monitoring solution that I wrote that uses it.



I was using .local, but it ran into too many conflicts with an mDNS service I host and vice versa. I switched to .lan, but I’m certainly not going to switch to .internal unless another conflict surfaces.

I’ve also developed a host-monitoring solution that uses mDNS, so I’m not about to break my own software. 😅
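If anyone wants to see what is already answering on .local before picking a domain, avahi’s CLI tools make it easy to poke at mDNS traffic. A sketch assuming avahi-utils is installed; the hostname and address are placeholders:

```shell
# Browse everything currently advertised over mDNS on the LAN
avahi-browse --all --terminate

# Resolve a .local name to see which host claims it
avahi-resolve --name myhost.local

# Advertise an address yourself (roughly what a host monitor would do)
avahi-publish --address myhost.local 192.168.1.50
```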


Coincidentally, I just found this other thread that mentions EasyEffects: https://programming.dev/post/17612973

You might be able to use a virtual device to get it working for your use case.
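For the virtual-device route, a sketch using the PulseAudio-compatible tools (the sink name and description are placeholders; you’d then point EasyEffects or your application at the new sink):

```shell
# Create a virtual sink that applications can output audio to
pactl load-module module-null-sink sink_name=effects_in sink_properties=device.description=EffectsIn

# Confirm the sink exists
pactl list short sinks

# Remove it later by module ID (load-module prints the ID when it loads)
# pactl unload-module <ID>
```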


It depends on the model you run. Mistral, Gemma, or Phi are great for a majority of devices, even with CPU or integrated graphics inference.


Show me a music store I can purchase music from on my phone through an app, and I’ll purchase it.


We all mess up! I hope that helps - let me know if you see improvements!


I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)

https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to this, it looks like you don’t want to install the Nvidia drivers, and only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.

You may also run into performance issues within WSL due to the virtual machine overhead.
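From memory (and that guide), the sequence inside WSL looks roughly like this; treat the package name as an assumption and defer to NVIDIA’s instructions if they differ:

```shell
# Inside WSL: do NOT install a Linux GPU driver - the Windows driver is shared.
# Install only the CUDA toolkit metapackage (name per NVIDIA's WSL guide):
sudo apt-get update
sudo apt-get install -y cuda-toolkit

# Sanity checks: the Windows driver should be visible from inside WSL
nvidia-smi
nvcc --version
```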


Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)


It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not, and that’s what’s wrong?
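If it’s GGUF under llama.cpp (an assumption on my part - other frontends use different options), the VRAM/RAM split is controlled by how many layers you offload. Model path and layer count here are placeholders:

```shell
# Offload 35 transformer layers to the GPU; the rest stay in system RAM.
# Use -ngl 0 to force CPU-only, or a large value to offload everything that fits.
./llama-cli -m ./models/llama-3-70b.Q4_K_M.gguf -ngl 35 -p "Hello"
```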


Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?


Unfortunately, I don’t expect it to remain free forever.


No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

On my RTX 3060, I generally get responses in seconds.


My go-to solution for this is the Android FolderSync app with an SFTP connection.


Correction: migrated to GitLab, but I don’t expect they’ll want to keep it there.




The Docker client communicates with the daemon over a UNIX socket. If you mount that socket into a container that has a Docker client, the container can control the host’s Docker instance.

It’s entirely optional.
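As a compose sketch, mounting the socket looks like this (Portainer here is just one example of a client that uses it):

```yaml
services:
  portainer:
    image: portainer/portainer-ce
    volumes:
      # Gives the container control of the host's Docker daemon - treat it as root access
      - /var/run/docker.sock:/var/run/docker.sock
```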


There’s a container web UI called Portainer, but I’ve never used it. It may be what you’re looking for.

I also use a container called Watchtower to automatically update my services. Granted, there’s some risk there, but I wrote a script for backup snapshots in case I need to revert, and Docker makes that easy with image tags.

There’s another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built-in healthcheck, but they’re easy to add with a custom Dockerfile or a docker-compose file.)
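For reference, a hedged compose sketch of that setup - the image names are the commonly published ones, and the intervals, labels, and example service are illustrative:

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 86400   # check for image updates once a day

  autoheal:
    image: willfarrell/autoheal
    environment:
      - AUTOHEAL_CONTAINER_LABEL=all   # restart any container whose healthcheck fails
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

  myservice:   # placeholder service with a custom healthcheck
    image: nginx
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 30s
      retries: 3
```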


It’s really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.

Granted, it’s a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.

I also further orchestrate my containers using Ansible, but that’s not entirely necessary for everyone.


You can tinker with the image in a variety of ways, but make sure to preserve your state outside the container in some way:

  1. Extend the image you want to use with a custom Dockerfile.
  2. Execute an interactive shell session, for example docker exec -it containerName /bin/bash.
  3. Replace or expose filesystem resources using host or volume mounts.

Yes, you can set a variety of resource constraints, including but not limited to processor and memory utilization.

There’s no reason to “freeze” a container: if your state is in a host or volume mount, you can destroy the container, migrate your data, and recreate it with a run command or docker-compose file. Different terminology and concept, but same result.
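Putting the state-outside-the-container and resource-limit points together, a minimal compose sketch (the service, volume name, and limits are placeholders):

```yaml
services:
  app:
    image: postgres:16   # example service whose state matters
    volumes:
      - dbdata:/var/lib/postgresql/data   # state lives in the volume, not the container
    cpus: "1.5"        # cap CPU usage
    mem_limit: 512m    # cap memory usage

volumes:
  dbdata:   # survives container destruction and recreation
```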

It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.


Honestly, taking the time to learn Docker and then learning more about the specific containers you want to use is probably the easiest way forward in your position. If you have any specific questions about Docker or the containers you’re looking at, I can try to help.

When it comes to network mounts, I’ve found it a lot easier to use rclone for that purpose, and that’s currently what I use for the backend of my Plex server.
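A sketch of that rclone setup - the remote name and mount point are placeholders, and the flags are from rclone’s mount documentation:

```shell
# Mount a configured rclone remote as a local directory for Plex to read.
# --vfs-cache-mode full caches file data locally for smoother playback;
# --daemon backgrounds the mount process.
rclone mount media-remote:library /mnt/media --vfs-cache-mode full --daemon
```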


I’m using https://www.kavitareader.com/ with Moon+ Reader. Kavita supports OPDS feeds, which is perfect.


The DMCA supersedes that - it’s still a crime to bypass copy protection mechanisms, and there are very few exceptions to that rule.


It’s not even grey - in the US it is illegal under the DMCA.

I’m not up to date on ripping tools, though.


I’m using a combination of:

  • The Boox Palma reader, though they have larger tablets if you prefer. I’m not sure about the others, but the Palma runs Android with the Play Store.
  • Kavita to host my ebooks online.
  • FolderSync with SFTP to sync all of my books ahead of time to my SD card.
  • Moon Reader to add my Kavita server’s OPDS feed as an online catalog if I need to grab something manually.
  • Calibre to manage and embed metadata.

I was going to recommend Ansible as well - documentation as code can never be out of date if you continue using it.
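A tiny example of what I mean by documentation as code - the play, host, and packages are placeholders:

```yaml
# plex.yml - this playbook *is* the documentation for how the host is set up;
# re-running it verifies that the "documentation" still matches reality.
- name: Configure the Plex host
  hosts: plexbox
  become: true
  tasks:
    - name: Install base dependencies
      ansible.builtin.apt:
        name: [curl, ca-certificates]
        state: present
```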


This is a bit outdated with .NET Core. You can just compile it for a Linux target or install the .NET runtime from Microsoft.

I’m not sure Mono supports all the newer language features.


It doesn’t quite say that, but I think the meaning is essentially the same: “Don’t choose a name after a project unique to that machine.” - RFC 1178

For my homelab, I think that’s fine to do. I’m unlikely to have multiple Plex servers locally, for example, and if so, numerically naming them is fine - I provision with Ansible, and if I’m at the point where I’m having sequentially numbered hosts, they’ll be configured as cattle anyway. Also, having the names reflect the services a host provides makes it easier to match in my playbooks.

I think it’s a better scheme than turning to mythology, fiction, or animal species, which oddly enough RFC 1178 does encourage you to do.


I use significant hardware component or model:

  • Z390
  • AERO15

…or sometimes intended purpose:

  • USERV - Ubuntu SERVer
  • PlexBox - Plex Server
  • NAS - NAS
  • Runner - GitLab Runner
  • MDEV - Mobile DEVelopment
  • MDEV2 - Mobile DEVelopment, Version 2

I also have a Kubernetes cluster that ranges from K8S_0 to K8S_5.