Thank you for the discussion, folks. I'll try out llama.cpp and report back.
I also saw that the Neural Engine support hasn't been merged into the mainline kernel yet, but it's available as a separate out-of-tree patch. Hopefully merging that will help with broader model support? (Pure guesswork.)
I also saw that the PCL stuff isn't ready yet and u/marcan42 said it's a WIP. That might also help with better model support, because I read somewhere that Metal is never(?) going to be part of the Asahi kernel.
I'm no expert at any of this, but hopefully we'll be able to run some sort of GPT locally someday.
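For anyone else wanting to try it, here's roughly what building and running llama.cpp looks like on Asahi (a sketch, not a tested recipe; it's a plain CPU build since the GPU/ANE paths aren't usable on Linux yet, and the model filename is just a placeholder for whatever quantized model you download):

```shell
# Clone and build llama.cpp (CPU-only; no Metal backend on Linux)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make -j"$(nproc)"

# Run inference with a quantized model (path/filename is an example)
# -n limits generated tokens, -t sets the CPU thread count
./main -m models/model-q4_0.gguf -p "Hello" -n 64 -t "$(nproc)"
```

Smaller quantizations (q4 and similar) are the usual choice on CPU-only machines, since memory bandwidth ends up being the bottleneck.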
Hi,
I'd like to run some large language models locally on my Apple Silicon machine, along the lines of [Private GPT](https://github.com/imartinez/privateGPT) or this [Medium article](https://medium.com/@aadityaubhat/local-llms-on-apple-silicon-39194de71ab7), both to improve my privacy and to get some extra help.
Does anyone have recommendations or guides I could follow?
Thank you very much.
Hi,
I was trying to set up OPNsense with my AT&T BGW320-500 and had a few questions.
Configuration Questions:
1. [Dupuis.xyz](https://www.dupuis.xyz/bgw210-700-root-and-certs/) - this link has firmware for the older BGW210-700; can I use it with my BGW320-500?
2. [Prerequisites](https://github.com/owenthewizard/opnatt/blob/supplicant/README.md#prerequisites) says I need to figure out `ONT_IF`, `EAP_IDENTITY`, and `RG_ETHER`. How does one do that?
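My rough understanding (please correct me if wrong): `ONT_IF` is whichever interface on the OPNsense box is cabled to the ONT, and `RG_ETHER` (and typically `EAP_IDENTITY`) come from the BGW320's MAC address, which you can observe by sniffing an EAPOL authentication exchange with the gateway temporarily inline. A sketch of how I'd look for them (interface name `igb0` is just an example):

```shell
# List the interface names on the FreeBSD/OPNsense box; the one cabled
# to the ONT becomes ONT_IF
ifconfig -l

# With the BGW320 temporarily inline, capture EAPOL frames to learn the
# gateway's MAC address (EAPOL ethertype is 0x888e).
# -e prints link-level headers so the source MAC is visible.
tcpdump -i igb0 -e 'ether proto 0x888e'
```

The MAC is also usually printed on the gateway's label, which may be easier than sniffing.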
Setup questions:
1. Do I run the Ethernet cable from the `ONT` to the `WAN` port on my OPNsense box?
2. Step 5 in the prerequisites document asks me to test, but my box has neither `bash` nor internet access (to install it). How do I do that?
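In case it helps anyone: a basic connectivity check doesn't actually need bash; FreeBSD's stock `/bin/sh` can do it. Something like this (8.8.8.8 is just a well-known address to ping):

```shell
#!/bin/sh
# POSIX-sh WAN connectivity check, no bash required
if ping -c 3 8.8.8.8 > /dev/null 2>&1; then
  echo "WAN is up"
else
  echo "WAN is down"
fi
```

If a script from the guide really requires bash, it can be installed from the OPNsense package repo (`pkg install bash`) once the WAN link is working.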
Thanks.
EDIT: I'm using Fiber.
Good luck.