@brucethemoose@lemmy.world

I’m self hosting LLMs for family use (cause screw OpenAI and corporate, closed AI), and I am dying for more VRAM and RAM now. Even if I had a 4090, it wouldn’t be nearly enough.

My 3090 is sitting at 23.9GB/24GB because I keep Qwen 32B QwQ loaded and use it all the time. I even have my display hooked up to my IGP to save VRAM.

Seriously looking at replacing my 7800X3D with Strix Halo when it comes out, maybe a 128GB board if they sell one. Or a 48GB Intel Arc if Intel is smart enough to sell that. And I would use every last megabyte, even if I had a 512GB board (which is the bare minimum to host Deepseek V3).

I know it’s a downvote earner on Lemmy, but my 64GB M1 Max with its unified memory runs these large-scale LLMs like a champ. My 4080 (which is ACHING for more VRAM) wishes it could. But when it comes to image generation, the 4080 smokes the Mac. The issue with image generation and VRAM size is that you can think of VRAM like an aperture: less VRAM narrows how much you can do in a single pass.

@brucethemoose@lemmy.world

The issue with Macs is that Apple does price gouge for memory, your software stack is effectively limited to llama.cpp or MLX, and 70B-class LLMs do start to chug, especially at high context.

Diffusion is kinda a different duck. It’s more compute-heavy, yes, but the “generally accessible” software stack is also much less optimized for Macs than it is for transformer LLMs.

I view AMD Strix Halo as a solution to this: it’s a big IGP with a wide memory bus like a Mac, but it can run the same GPU software stacks that discrete cards do (with ROCm filling the role CUDA plays on Nvidia) for that speed/feature advantage… albeit with some quirks. But I’m willing to put up with that if AMD doesn’t price gouge it.

Apple price gouges for memory, yes, but a theoretical 64GB 4090 would have cost as much in this market as the whole computer did. If you’re using it to its full capabilities, then I think it’s one of the best values on the market. I just run the 20B models because they meet my needs (and in Open WebUI I can combine a couple at that size), since I also use the Mac for personal stuff.

I’ll look into the AMD Strix Halo, though.

@brucethemoose@lemmy.world

GDDR is actually super cheap! I think it would only be like another $75 on paper to double the 4090’s VRAM to 48GB (like they do for pro cards already).

Nvidia just doesn’t do it for market segmentation. AMD doesn’t do it for… honestly, I have no idea why? They basically have no pro market to lose; the only explanation I can come up with is that their CEOs are colluding because they are cousins. And Intel doesn’t do it because they didn’t make a (consumer) GPU that was really worth it until the B580.

Oh I didn’t mean “should cost $4000” just “would cost $4000”. I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

@brucethemoose@lemmy.world

Oh I didn’t mean “should cost $4000” just “would cost $4000”

Ah, yeah. Absolutely. The situation sucks though.

I wish that the vram on video cards was modular, there’s so much ewaste generated by these bottlenecks.

Not possible; the speeds are so high that GDDR physically has to be soldered. Future CPUs will be that way too, unfortunately. SO-DIMMs have already topped out at DDR5-5600, with tons of wasted power/voltage, and I believe desktop DIMMs are bumping against their limits too.

But look into CAMM modules and LPCAMMs. My hope is that we get modular LPDDR5X-8533 on AMD Strix Halo boards.

@thebestaquaman@lemmy.world

Not running any LLMs, but I do a lot of mathematical modelling, and my 32GB RAM M1 Pro MacBook compiles code and crunches numbers like an absolute champ! After about a year, most of my colleagues ditched their old laptops for a MacBook themselves, after noticing that my machine outperformed theirs every day and saved me a bunch of time day-to-day.

Of course, be a bit careful when buying one: Apple cranks up the price like hell if you start speccing out the machine, especially for RAM.

Yeah, I got mine refurbished too, so someone else took the first hit on driving it off the lot (and waited for it to be built). I guess they didn’t use it to its full extent, though. That still didn’t make it “cheap”.

@uis@lemm.ee

You can always use system memory too. Not exactly UMA, but close enough.

Or just use the iGPU.

It fails whenever it exceeds the VRAM capacity; I’ve not been able to get it to spill over to system memory.

@brucethemoose@lemmy.world

You don’t want it to anyway, as “automatic” spillover with an LLM is painfully slow.

The RAM/VRAM split is manually configurable in llama.cpp, but if you have at least 10GB VRAM, generally you want to keep the whole model within that.
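
If it helps, here’s a rough sketch of what that looks like with llama-cpp-python; the filename is a placeholder, and the same knob exists as `-ngl` on the llama.cpp command line:

```python
from llama_cpp import Llama

# Sketch: offload as many layers as fit in VRAM; whatever doesn't fit
# stays on the CPU and runs much slower.
llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=-1,  # -1 = try to put every layer on the GPU; lower it to spill into RAM
    n_ctx=8192,       # context length; the KV cache eats VRAM too
)

out = llm("Explain VRAM in one sentence.", max_tokens=32)
print(out["choices"][0]["text"])
```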

Oh, I meant for image generation on the 4080; for LLM work I have the 64GB of the Mac available.

@brucethemoose@lemmy.world

Oh, 16GB should be plenty for SDXL.

For Flux, I actually use a script that quantizes it down to 8-bit (not FP8, but true quantization with Hugging Face quanto), but I would also highly recommend checking this project out. It should fit everything in VRAM and be dramatically faster: https://github.com/mit-han-lab/nunchaku
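
The quanto step is roughly this, as a sketch rather than my exact script (model ID, dtype, and step count are just what I’d reach for by default):

```python
import torch
from diffusers import FluxPipeline
from optimum.quanto import freeze, qint8, quantize

# Load Flux, then quantize the big transformer to true 8-bit weights with quanto.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
quantize(pipe.transformer, weights=qint8)
freeze(pipe.transformer)
pipe.to("cuda")

image = pipe("a lighthouse at dusk", num_inference_steps=28).images[0]
image.save("flux_int8.png")
```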

I just run SD 1.5 models; my process involves a lot of upscaling, since things come out around 512 base size. I don’t really fuck with SDXL because generating at 1024 halves, and halves again, the number of images I can generate in any pass (and I have a lot of 1.5-based LoRA models). I do really like SDXL’s general capabilities, but I really rarely dip into that world (I feel like I locked in my process like 1.5 years ago and it works for me; don’t know what you kids are doing with your fancy pony diffusions 😃)

Oh you should be able to batch the heck out of that on a 4080. Are you not using HF diffusers or something?

I’d check out stable-fast if you haven’t already:

https://github.com/chengzeyi/stable-fast

VoltaML is also old at this point, but it has a really fast AITemplate implementation for SD 1.5: https://github.com/VoltaML/voltaML-fast-stable-diffusion
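
For what it’s worth, batching in diffusers is a single argument; a minimal sketch (model ID and batch size are just examples):

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: generate a whole batch of 512x512 images per pass instead of one at a time.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

images = pipe(
    "a watercolor fox in a misty forest",
    num_images_per_prompt=8,  # batch size; raise it until VRAM runs out
    height=512,
    width=512,
).images

for i, img in enumerate(images):
    img.save(f"fox_{i}.png")
```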

@uis@lemm.ee

Aren’t LLMs external-memory algorithms at this point? As in, all the data will not fit in RAM.

@brucethemoose@lemmy.world

No, all the weights, all the “data”, essentially have to be in RAM. If you “talk to” an LLM on your GPU, it is not making any calls to the internet, but making a pass through all the weights every time a word is generated.

There are systems to augment the prompt with external data (RAG is one term for this), but fundamentally the system is closed.
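
To make that concrete, here’s a toy sketch of retrieval-augmented prompting; the embedding model and documents are placeholders, and the point is that retrieval only edits the prompt text, the weights never change:

```python
from sentence_transformers import SentenceTransformer, util

# Toy RAG: pick the most relevant snippet and paste it into the prompt.
docs = [
    "The backup job on the home server runs nightly at 02:00.",
    "Consumer GPUs currently top out at 24GB of VRAM.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_emb = embedder.encode(docs, convert_to_tensor=True)

question = "How much VRAM do consumer GPUs max out at?"
q_emb = embedder.encode(question, convert_to_tensor=True)
best = int(util.cos_sim(q_emb, doc_emb).argmax())

# This string is what gets fed to the LLM; the model itself stays closed.
prompt = f"Context: {docs[best]}\n\nQuestion: {question}\nAnswer:"
print(prompt)
```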

@Hackworth@lemmy.world

Yeah, I’ve had decent results running the 7B/8B models, particularly the ones fine-tuned for specific use cases. But as you mentioned, they’re only really good within their scope for a single prompt or maybe a few follow-ups. I’ve seen little improvement with the 13B/14B models and find them mostly not worth the performance hit.

Depends which 14B. Arcee’s 14B SuperNova Medius model (a Qwen 2.5 finetune with training distilled from larger models) is really incredible, but old Llama 2-based 13B models are awful.

@Hackworth@lemmy.world

I’ll try it out! It’s been a hot minute, and it seems like there are new options all the time.

@brucethemoose@lemmy.world

Try a new quantization as well! Like an IQ4-M depending on the size of your GPU, or even better, a 4.5bpw exl2 with Q6 cache if you can manage to set up TabbyAPI.
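
Once TabbyAPI is running it speaks an OpenAI-style API, so querying your exl2 quant looks roughly like this (the port and model name are assumptions from my own setup; adjust to whatever you configured):

```python
from openai import OpenAI

# Point the standard OpenAI client at the local TabbyAPI server.
client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed port
    api_key="YOUR_TABBY_API_KEY",         # whatever key your server expects
)

resp = client.chat.completions.create(
    model="supernova-medius-4.5bpw-exl2",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize why VRAM matters for local LLMs."}],
    max_tokens=200,
)
print(resp.choices[0].message.content)
```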

@uis@lemm.ee

If you “talk to” an LLM on your GPU, it is not making any calls to the internet,

No, I’m talking about https://en.m.wikipedia.org/wiki/External_memory_algorithm

Unrelated to RAG.

@brucethemoose@lemmy.world

https://en.m.wikipedia.org/wiki/External_memory_algorithm

Unfortunately that’s not really relevant to LLMs beyond inserting things into the text you feed them. For every single word they predict, they make a pass through the multi-gigabyte weights. It’s largely memory-bound, and not integrated with any kind of sane external-memory algorithm.

There are some techniques that muddy this a bit, like MoE and dynamic LoRA loading, but the principle is the same.
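
The memory-bound part is easy to see with napkin math; the bandwidth and model size below are rough example numbers for a 3090 and a ~32B model at a 4-bit quant:

```python
# Upper bound on tokens/sec when every generated token has to read all weights once.
bandwidth_gb_s = 936  # roughly an RTX 3090's memory bandwidth
weights_gb = 18       # roughly a 32B model at a 4-bit quant
print(f"~{bandwidth_gb_s / weights_gb:.0f} tokens/sec ceiling")  # ~52, ignoring compute and KV cache
```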

I don’t know what the pricing is like, but maybe it’s worth building a separate server with a second-hand TPU. Used server CPUs and RAM are apparently quite affordable in the US (assuming you live there), so maybe that’s the case for TPUs as well. And commercial GPUs/TPUs have more VRAM.

@brucethemoose@lemmy.world

second-hand TPU

From where? I keep a lookout for used Gaudi/TPU setups, but they’re basically impossible to find, and usually in huge full-server configs. I can’t find Xeon Max GPUs or CPUs either.

Also, Google’s software stack isn’t really accessible. TPUs are made for internal use at Google, not for resale.

You can find used AMD MI100s or MI210s, sometimes, but the go-to used server card is still the venerable Tesla P40.

Altima NEO

I’ve got a 3090, and I feel ya. Even 24 gigs hits the cap pretty often and slows to a crawl once system RAM starts being used.

@brucethemoose@lemmy.world

You can’t let it overflow if you’re using LLMs on Windows: there’s a toggle for it in the Nvidia settings (the CUDA sysmem fallback policy), and you can get llama.cpp to offload through its own settings (or better yet, use exllama instead).

But… yeah. Qwen 32B fits in 24GB perfectly, and it’s great, but 72B really feels like the intelligence tipping point where I could dump so many API models, and that won’t fit in 24GB.
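
The gap is just arithmetic; assuming a typical ~4.25bpw quant and a rough allowance for KV cache:

```python
# Why 32B squeezes into 24GB at ~4.25 bits per weight but 72B doesn't.
def weight_gb(params_b, bpw=4.25):
    return params_b * 1e9 * bpw / 8 / 1e9

kv_cache_gb = 3  # rough allowance for context
for params_b in (32, 72):
    total = weight_gb(params_b) + kv_cache_gb
    print(f"{params_b}B needs ~{total:.0f} GB")  # 32B ≈ 20 GB, 72B ≈ 41 GB
```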
