@axzxc1236@lemm.ee · 3 points · 4 months ago

i9-14900K… bad news for you: the 13th and 14th gen i9s are unstable and crash.

Suggestion: Wait for 15th gen or AMD 9000 series CPU to come out.

0^2 · 1 point · 4 months ago

This. So many issues.

Will I see any performance increase?

Like others have said, LLMs mostly use VRAM. They can use system RAM if you're running them on the CPU, but that's ridiculously slow.

It will, however, shorten your compile times, which is especially useful if you regularly compile something large like the Linux kernel.

I’m also worried about not having ECC RAM.

If you are using it purely for LLMs, any bit flips that matter are going to happen in VRAM anyway.

If you are compiling large things for customers, I'd recommend ECC just in case; e.g. you don't want firmware that bricks devices because of a bit flip. But according to EDAC and my TIG stack, my server's ECC RAM has never even detected an error in the past year (if I understand EDAC properly), so it's really not that important.
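For anyone curious, on Linux those EDAC counters live in sysfs; a minimal sketch for reading them (assuming ECC RAM and a loaded EDAC driver, otherwise the directory will have no entries):

```python
from pathlib import Path

# Read the corrected/uncorrected error counters the Linux EDAC subsystem exposes.
# Assumes ECC RAM and a loaded EDAC driver; otherwise there are no mc* entries.
for mc in sorted(Path("/sys/devices/system/edac/mc").glob("mc*")):
    ce = (mc / "ce_count").read_text().strip()  # corrected errors
    ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
    print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```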

furzegulo1312 · 9 points · 4 months ago

no.

@Churbleyimyam@lemm.ee · -3 points · 4 months ago

Have you looked into specialized AI chips/accelerators at all if you really want to mess with it?

Way lower end than what you’re working with, but they have AI accelerator kits for something as small as a Raspberry Pi.

@L_Acacia@lemmy.one · 2 points · 4 months ago

You are limited by memory bandwidth, not compute, with LLMs, so an accelerator won't change the inference tokens/s.

adr1an · 1 point · 4 months ago

Have you tried ollama? Some (if not all) models would do inference just fine with your current specs. Of course, it all depends on how many queries per unit of time you need, and on whether you want to load a huge codebase and pass it as input. Anyway, go try it out.
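For a quick test, here's a minimal sketch against ollama's local HTTP API (assuming the daemon is running on its default port 11434 and a model such as llama3 has already been pulled):

```python
import json
import urllib.request

# Ask a locally running ollama daemon for a single, non-streamed completion.
# Assumes `ollama serve` is up on the default port and `ollama pull llama3` was run.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "llama3",
        "prompt": "Explain in one sentence why LLM inference wants lots of VRAM.",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```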

@KillerTofu@lemmy.world · 7 points · 4 months ago

no.

april · 48 points · 4 months ago

Only the GPU, and primarily the VRAM, matters for LLMs. So this wouldn't help at all.

Time (OP) · 2 points · 4 months ago · edited

Don’t you need tons of RAM to run LLMs? I thought the newer models needed up to 64GB RAM? Also, what about Stable Diffusion?

Pumpkin Escobar · 6 points · 4 months ago

Taking ollama for instance: either the whole model runs in VRAM and compute is done on the GPU, or it runs in system RAM and compute is done on the CPU. Running models on the CPU is horribly slow; you won't want to do it for large models.

LM Studio and others let you run part of the model on the GPU and part on the CPU, splitting the memory requirements, but that's still pretty slow.

Even the smaller 7B-parameter models run pretty slowly on the CPU, and the huge models are orders of magnitude slower.

So technically more system RAM will let you run some larger models, but you will quickly figure out you just don't want to.
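LM Studio exposes that GPU/CPU split as a setting; for a scriptable equivalent (not LM Studio itself), here's a sketch using llama-cpp-python, where n_gpu_layers controls how many layers go to VRAM while the rest stay on the CPU. The model path is a hypothetical placeholder:

```python
from llama_cpp import Llama  # pip install llama-cpp-python (built with GPU support)

# Partial offload: n_gpu_layers layers go to VRAM, the remainder run on the CPU
# from system RAM. The GGUF file below is a placeholder for whatever model you use.
llm = Llama(
    model_path="./models/mistral-7b-instruct.Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=20,  # raise until your VRAM is full; -1 offloads everything
    n_ctx=4096,       # context window
)
out = llm("Q: Why is CPU-only LLM inference slow?\nA:", max_tokens=64)
print(out["choices"][0]["text"])
```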

april · 8 points · 4 months ago · edited

RAM is important, but it has to be VRAM, not system RAM.

Only MacBooks can use system RAM for this, because they have an integrated GPU with unified memory rather than a dedicated one.

Stable Diffusion is the same situation.

VRAM. Not system RAM. LLMs run best entirely on the GPU.
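To put rough numbers on the 64GB question above, here's a back-of-the-envelope estimate of the memory just the weights need at a given precision (KV cache and overhead come on top; the figures are illustrative, not measured):

```python
# Rough estimate of weight memory for an LLM; real usage is higher
# (KV cache, activations, framework overhead). Illustrative only.
def weight_memory_gib(params_billions: float, bits_per_weight: float) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1024**3

for params in (7, 13, 70):
    for bits, label in ((16, "fp16"), (4, "4-bit quant")):
        print(f"{params}B {label}: ~{weight_memory_gib(params, bits):.1f} GiB")
```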

@Findmysec@infosec.pub · 2 points · 4 months ago

They do, but it's VRAM. Unfortunately, the cards that have that much memory are aimed at OEMs/corporations and are insanely pricey.

enkers · 1 point · 4 months ago · edited

One minor caveat where CPU could matter is AVX support. I couldn’t get ollama to run well on my system, despite having a decent GPU, because I’m using an ancient processor.
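If you want to check before buying or building anything, those CPU flags are visible in /proc/cpuinfo on Linux; a minimal sketch:

```python
# Quick Linux-only check for the AVX/AVX2 flags mentioned above.
with open("/proc/cpuinfo") as f:
    flags = set()
    for line in f:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
            break
print("AVX: ", "avx" in flags)
print("AVX2:", "avx2" in flags)
```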

@cybersandwich@lemmy.world · 3 points · 4 months ago · edited

A GPU with a ton of VRAM is what you need, BUT

An alternative is something like a Mac mini with an M-series chip and 16GB of unified memory. The neural cores on Apple silicon are actually pretty impressive, and since it uses unified memory, the models have access to whatever RAM the system has.

I only mention it because a Mac mini might be cheaper than a GPU with tons of VRAM by a couple hundred bucks.

And it will sip power comparatively.

A 4090 with 24GB of VRAM is $1,900; an M2 Mac mini with 24GB is $1,000.

@L_Acacia@lemmy.one · 2 points · 4 months ago

Buying a second-hand 3090/7900 XTX will be cheaper and give better performance if you are not building the rest of the machine.

mozz · 10 points · 4 months ago · edited

You’re the only one talking sense and you are sitting here with your 2 upvotes

The AI company business model is 100% unsustainable. It's hard to say when they will get sick of hemorrhaging money by giving this stuff away more or less for free, but it might be soon. That's totally separate from any legal issues that might come up. If you care about this stuff, learning to do it locally and having a self-hosted solution in place might not be a bad idea.

But upgrading anything aside from your GPU+VRAM is a pure and unfettered waste of money in that endeavor.
