A while ago, I asked for help with using LLMs to manage all my teaching notes. I have since installed Ollama and have been playing with it to get a feel for the setup.

It was also suggested that I use RAG (Retrieval-Augmented Generation) and CA (cognitive architecture). However, I am unclear on good self-hosted options for these two tasks. Could you please suggest a few?

For example, I tried ragflow.io and installed it on my system, but it seems I need to set up an account with a username and password to use it. It remains unclear whether I can use the system fully offline like the base Ollama model, with no information sent from my computer.

@BaroqueInMind@lemmy.one

Why not use this and select whatever LLM you want to leverage for RAG? It literally allows you to self-host the model and select any model for both chat and RAG analysis. I have it set to Hermes 3 8B for chat and a 1.3B Llama 3 as the RAG model.

@Antiochus@lemmy.one

I’m not sure how well it would work in a self-hosted or server-type context, but GPT4All has built-in RAG functionality. There’s also a Flatpak in addition to the Windows, Mac, and .deb installs.

@Zelyios@lemmy.world

You can use h2oGPT, which lets you build a RAG over your chosen documents without writing any code.

exu

I’m aware of these options for doing RAG, though I’m not using any yet. Only SillyTavern for chat stuff.

@BitSound@lemmy.world

Not sure how Ollama integration works in general, but these are two good libraries for RAG:

https://github.com/facebookresearch/faiss

https://pypi.org/project/chromadb/
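
What both libraries fundamentally provide is vector similarity search over document embeddings. Here's a toy sketch of that retrieval step in plain Python, so you can see what you'd be plugging faiss or chromadb into — the 3-dimensional "embeddings" are stand-ins for the output of a real embedding model, and the document texts are made up:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query_vec, store, k=2):
    # store: list of (doc_text, embedding) pairs; return the k closest docs.
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Toy 3-dim "embeddings" standing in for a real embedding model's output.
store = [
    ("Lecture 1: intro to thermodynamics", [0.9, 0.1, 0.0]),
    ("Lecture 2: entropy and heat engines", [0.8, 0.3, 0.1]),
    ("Admin: exam schedule and grading",    [0.0, 0.2, 0.9]),
]
print(top_k([1.0, 0.2, 0.0], store, k=2))
```

In real use, faiss or chromadb replaces `top_k` with an indexed search that scales to millions of vectors, and an embedding model replaces the hand-written vectors.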

@chagall@lemmy.world

You should ask @brucethemoose@lemmy.world. He seems to know all about this stuff.

@brucethemoose@lemmy.world

> I have an old Lenovo laptop with an NVIDIA graphics card.

@Maroon@lemmy.world The biggest question I have for you is what graphics card, but generally speaking this is… less than ideal.

To answer your question, Open Web UI is the new hotness: https://github.com/open-webui/open-webui

I personally use exui for a lot of my LLM work, but that’s because I’m an uber minimalist.

And on your setup, I would host the best model you can with kobold.cpp or the built-in llama.cpp server (just not Ollama), and use Open Web UI as your front end. You can also use llama.cpp to host an embeddings model for RAG, if you wish.

This is a general ranking of the “best” models for document answering and summarization: https://huggingface.co/spaces/vectara/Hallucination-evaluation-leaderboard

…But generally, I prefer to not mess with RAG retrieval and just slap the context I want into the LLM myself, and for this, the performance of your machine is kind of critical (depending on just how much “context” you want it to cover). I know this is !selfhosted, but once you get your setup dialed in, you may consider making calls to an API like Groq, Cerebras or whatever, or even renting a Runpod GPU instance if that’s in your time/money budget.
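
That "slap the context in yourself" approach is really just prompt assembly. A minimal sketch — the function names are my own, and the character budget is a crude stand-in for counting actual tokens against the model's context window:

```python
def build_prompt(question, notes, max_chars=8000):
    # Concatenate as many notes as fit within a rough character budget,
    # then append the question. Real code would count tokens, not chars.
    picked, used = [], 0
    for note in notes:
        if used + len(note) > max_chars:
            break
        picked.append(note)
        used += len(note)
    context = "\n\n".join(picked)
    return f"Use only the notes below to answer.\n\n{context}\n\nQuestion: {question}"

print(build_prompt("When is entropy covered?",
                   ["Lecture 1 notes.", "Lecture 2 notes."]))
```

The trade-off versus RAG is exactly what's described above: no retrieval errors, but the whole context has to fit in (and be processed by) the model, which is where machine performance matters.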

@kwa@lemmy.zip

I’m new to this, and I was wondering why you don’t recommend Ollama? It’s the first one I managed to run, and it seemed decent, but if there are better alternatives I’m interested.

Edit: it seems the two others don’t have an API. What would you recommend if you need an API?

@brucethemoose@lemmy.world

Pretty much everything has an API :P

Ollama is OK because it’s easy and automated, but you can get higher performance, better VRAM efficiency, and better samplers from either kobold.cpp or TabbyAPI, with the catch that more manual configuration is required. But this is good, as it “forces” you to pick and test an optimal config for your system.

I’d recommend kobold.cpp for very short context (like 6K or less), or if you need to partially offload the model to the CPU because your GPU has relatively low VRAM. Use a good IQ quantization (IQ4_M, for instance).

Otherwise use TabbyAPI with an exl2 quantization, as it’s generally faster (but GPU-only) and much better at long context thanks to its excellent k/v cache quantization.

They all expose OpenAI-compatible APIs, though kobold.cpp also has its own web UI.
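
For reference, an OpenAI-compatible endpoint means the request shape below works against any of them. The URL and port here are assumptions — check your server's startup output for its actual address — and the model field is often ignored or loosely matched by local servers:

```python
import json

# Endpoint is an assumption; substitute whatever host:port your server reports.
BASE_URL = "http://localhost:5001/v1/chat/completions"

payload = {
    "model": "local-model",  # many local servers ignore or loosely match this field
    "messages": [
        {"role": "system", "content": "You answer questions about my teaching notes."},
        {"role": "user", "content": "Summarize lecture 2."},
    ],
    "max_tokens": 256,
    "temperature": 0.7,
}

body = json.dumps(payload)
# To actually send it (requires a running server):
#   import urllib.request
#   req = urllib.request.Request(BASE_URL, data=body.encode(),
#                                headers={"Content-Type": "application/json"})
#   resp = json.loads(urllib.request.urlopen(req).read())
#   print(resp["choices"][0]["message"]["content"])
```

Because the format is shared, swapping kobold.cpp for TabbyAPI (or a hosted API) only means changing `BASE_URL`.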

Scrubbles

I’m not 100% sure what you’re asking for, but I use text-generation-webui for all of my local generation needs.

https://github.com/oobabooga/text-generation-webui

Text-generation-webui is cool, but also kinda crufty. Honestly, a lot of it is holdovers from what’s now ancient history in LLM land, and it has (for me) major performance issues at longer context.

Scrubbles

Anything better you know of? Most of my usage now is through its API.

Uh, depends on your hardware and model, but probably TabbyAPI?

Scrubbles

Neat! I’ll check it out!

@filister@lemmy.world

Why don’t you build your own?
