I’m currently shopping around for something a bit faster than Ollama, also because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Everything I’ve tried so far has been missing one or more critical features, like:

  • “Hot” model replacement, i.e. loading and unloading models on demand
  • Function calling
  • Support of most models
  • OpenAI API compatibility (to work well with Open WebUI)

I’d be happy about any recommendations!
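
To make the OpenAI-compatibility point concrete: I want to be able to point existing clients like Open WebUI (or a few lines of Python) at the local server without any changes. A minimal sketch of what that should look like, assuming a hypothetical local endpoint on port 8000 and a placeholder model name:

```python
# Minimal sketch of an OpenAI-compatible local endpoint in use.
# The base_url, port and model name are assumptions; whatever server
# I end up with just has to accept this kind of request.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="some-local-model",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(resp.choices[0].message.content)
```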

@Arehandoro@lemmy.ml

I don’t think it’s OpenAI compatible, but deepseek is faster.

hendrik

Btw, Ollama is software for running AI models. Deepseek is just a company, or a model file, or a service. But that’s not what OP is looking for. They want to run a model, and that needs software like Ollama.

@Arehandoro@lemmy.ml

Isn’t this a model? https://github.com/deepseek-ai/DeepSeek-V3

(Honest question, not an expert in AI)

hendrik

Yes, Deepseek V3 is a model. But what I was trying to say is: you download the file, but then what? Just having the file stored on your hard disk doesn’t do much. You need to run it; that’s called “inference” in machine learning/AI terms. The repository you linked contains some example code for how to do it with Hugging Face’s Transformers library. But there are quite a few frameworks out there for running AI models; Ollama is another one. And it’s not just some example code to start your own Python program from, but a ready-made project/framework with tools and frontends available, and an interface for other software to hook into.
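
Roughly, the example code in that repo boils down to something like this with the Transformers library (just a sketch; the model id is a placeholder for whatever repo you downloaded, and V3 itself is far too big to actually run like this on a home machine):

```python
# Sketch of what "inference" looks like with Hugging Face Transformers.
# Model id and generation settings are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-V3"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Hello, how are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Ollama hides all of that behind a single command and an HTTP API, which is why people reach for it in the first place.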

And generally, you need some software to actually do something. How fast it is depends on the software used and the hardware it’s executed on, and in this case also on the size of the AI model and its architecture. But yeah, Deepseek V3 has some tricks up its sleeve to make it very efficient. Still, it is really big for home use; I think we’re looking at a six-figure price for the hardware to run it. Usually people use the Deepseek R1 models, or other, smaller AI models, if they run them themselves.
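
To put a rough number on “really big” (back-of-the-envelope, using the ~671B total parameters that are usually quoted for V3):

```python
# Back-of-the-envelope memory estimate, just to show the scale.
total_params = 671e9     # roughly the quoted total parameter count of Deepseek V3 (it's a MoE model)
bytes_per_param = 1      # assuming 8-bit weights
weights_gb = total_params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB just for the weights")  # ~671 GB, before KV cache and other overhead
```

That’s why you’d need a rack of high-end accelerators just to hold the weights, and why smaller models are what most people actually run at home.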

@Arehandoro@lemmy.ml

I see, had no idea! Thanks for the detailed answer!
