I’m new to the field of large language models (LLMs) and I’m really interested in learning how to train and use my own models for qualitative analysis. However, I’m not sure where to start or what resources would be most helpful for a complete beginner. Could anyone provide some guidance and advice on the best way to get started with LLM training and usage? Specifically, I’d appreciate insights on learning resources or tutorials, tips on preparing datasets, common pitfalls or challenges, and any other general advice or words of wisdom for someone just embarking on this journey.

Thanks!

@halcyon@slrpnk.net

deleted by creator

Good recommendations! I’d suggest doing some spaCy tutorials as well, regarding the topics in the first paragraph. But arguably it’s possible nowadays to start straight at transformers without any NLP background, e.g. using Hugging Face’s AutoTrain or something similar. I wouldn’t recommend it, but you definitely could.
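To make the spaCy suggestion concrete, a first tutorial-style exercise usually looks something like this (just a minimal sketch; it assumes you’ve installed spaCy and downloaded the small English model with `pip install spacy` and `python -m spacy download en_core_web_sm`):

```python
import spacy

# Assumes the small English pipeline has been downloaded:
#   pip install spacy
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Participants said they trusted the new reporting process far more than the old one.")

# Tokenization, part-of-speech tags, and dependencies: the classic NLP basics.
for token in doc:
    print(token.text, token.pos_, token.dep_)

# Named entities, a useful warm-up before moving on to qualitative coding with LLMs.
for ent in doc.ents:
    print(ent.text, ent.label_)
```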

I’m also interested, so I hope you don’t mind me joining the ride. Personally, I’d like a self-hosted tool, but I’m happy to see what the community says.

deleted by creator

@Midnitte@beehaw.org

Using LM Studio would be an even easier way to get started.

xcjs

Unfortunately, I don’t expect it to remain free forever.

TehPers

I managed to get Ollama running through Docker easily. It’s by far the least painful of the options I tried, and I just make requests to the API it exposes. You can also give it GPU resources through Docker if you want to, and there’s a CLI tool for a quick chat interface if you want to play with that. I can get Llama 3 (8B) running on my 3070 without issues.
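In case it helps, “making requests to the API it exposes” is just plain HTTP. A rough sketch in Python (it assumes the container publishes Ollama’s default port 11434 and that the model has already been pulled, e.g. with `ollama pull llama3`):

```python
import requests

# Rough sketch: assumes Ollama is reachable on its default port (11434)
# and the llama3 model has already been pulled.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Write a haiku about qualitative research.",
        "stream": False,  # return a single JSON object instead of a token stream
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```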

Training an LLM is very difficult and expensive. I don’t think it’s a good place for anyone to start. Many of the popular models (Llama, GPT, etc.) are astronomically expensive to train and require an ungodly amount of resources.

deleted by creator

Month later update: This is the route I’ve gone down. I used WSL to get Ollama and Open WebUI working and started playing around with document analysis using Llama 3. I’m going to try a few other models and see what they output for the same document. Prompting the model to chat with the documents is… a learning experience, but I’m at the point where I can get it to spit out quotes and provide evidence for its interpretation, at least with Llama 3. Super fascinating stuff.

deleted by creator

@Zworf@beehaw.org

Training your own will be very difficult. You’d need to gather an enormous amount of data just to get a model with basic language understanding.

What I would do (and am doing) is take something like Llama 3 or Mistral and add your own content using RAG (retrieval-augmented generation) techniques.
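Roughly what I mean, as a minimal sketch: embed your chunks, retrieve the most relevant one, and stuff it into the prompt. This assumes Ollama is running locally with llama3 plus an embedding model (nomic-embed-text here) already pulled; a real setup would use a proper vector database instead of a Python list:

```python
import requests

OLLAMA = "http://localhost:11434"

def embed(text):
    # Assumes an embedding model such as nomic-embed-text has been pulled.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    r.raise_for_status()
    return r.json()["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

# Your own content, chunked however makes sense (paragraphs, pages, ...).
chunks = [
    "Interview 3: the participant described distrust of automated decisions.",
    "Interview 7: the participant praised the transparency of the new process.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

question = "What did participants say about trust?"
q_emb = embed(question)

# Retrieve the most similar chunk and hand it to the model as context.
best_chunk = max(index, key=lambda item: cosine(q_emb, item[1]))[0]

answer = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "llama3",
    "prompt": (f"Context:\n{best_chunk}\n\n"
               f"Question: {question}\n"
               "Answer using only the context above."),
    "stream": False,
}, timeout=300)
answer.raise_for_status()
print(answer.json()["response"])
```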

But fair play if you do manage to train a real model!

Ollama is so fucking slow. Even with a 16-core overclocked Intel CPU, 64 GB of RAM, and an Nvidia 3080 with 10 GB of VRAM, using a 22B parameter model, the token generation for a simple haiku takes 20 minutes.

@Zworf@beehaw.org

Hmmm weird. I have a 4090 / Ryzen 5800X3D and 64GB and it runs really well. Admittedly it’s the 8B model because the intermediate sizes aren’t out yet and 70B simply won’t fly on a single GPU.

But it really screams. Much faster than I can read. PS: Ollama is just llama.cpp under the hood.

Edit: Ah, wait, I know what’s going wrong here. The 22B parameter model is probably too big for your VRAM. Then it gets extremely slow, yes.

xcjs

It should be split between VRAM and regular RAM, at least if it’s a GGUF model. Maybe it’s not, and that’s what’s wrong?

What is the appropriate size for 10 GB of VRAM?

It depends on your prompt/context size too. The longer it is, the more memory you need. Try checking your GPU’s memory usage with GPU-Z across different models and scenarios.
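If you’d rather script it than keep GPU-Z open, something like pynvml (the `nvidia-ml-py` package) can report the same numbers. Just a sketch, not tied to Ollama in any way:

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"VRAM used: {mem.used / 1024**2:.0f} MiB / {mem.total / 1024**2:.0f} MiB")
pynvml.nvmlShutdown()
```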

xcjs

No offense intended, but are you sure it’s using your GPU? Twenty minutes is about how long my CPU-locked instance takes to run some 70B parameter models.

On my RTX 3060, I generally get responses in seconds.

I agree. My 3070 runs the 8B Llama 3 model and gets responses in about 250 ms, especially for short ones.

xcjs

Ok, so using my “older” 2070 Super, I was able to get a response from a 70B parameter model in 9-12 minutes. (Llama 3 in this case.)

I’m fairly certain that you’re using your CPU or having another issue. Would you like to try and debug your configuration together?

I think I fucked up my docker setup and will wipe and start over.

xcjs

Good luck! I’m definitely willing to spend a few minutes offering advice/double checking some configuration settings if things go awry again. Let me know how things go. :-)

My setup is Win 11 Pro ➡️ WSL2 / Debian ➡️ Docker Desktop (for Windows).

Should I install the Nvidia drivers within Debian even though the host OS already has drivers?

xcjs

I think there was a special process to get Nvidia working in WSL. Let me check… (I’m running natively on Linux, so my experience doing it with WSL is limited.)

https://docs.nvidia.com/cuda/wsl-user-guide/index.html - I’m sure you’ve followed this already, but according to this, it looks like you don’t want to install the Nvidia drivers, and only want to install the cuda-toolkit metapackage. I’d follow the instructions from that link closely.

You may also run into performance issues within WSL due to the virtual machine overhead.

I did indeed follow that guide already, thank you for the respect; I am an idiot and installed the Nvidia WSL driver on top of the host OS driver, as well as the CUDA driver. So I’ll try again with only that guide and see what breaks.

I really appreciate all the responses, but I’m overwhelmed by the amount of information and possible starting points. Could I ask you to explain, or point me to learning content that talks to me like I’m a curious five-year-old?

ELI5?
