I’m currently shopping around for something a bit faster than Ollama, also because I could not get it to use a different context and output length, which seems to be a known and long-ignored issue. Somehow, everything I’ve tried so far is missing one or more critical features, like:

  • “Hot” model replacement, i.e. loading and unloading models on demand
  • Function calling
  • Support for most models
  • OpenAI API compatibility (to work well with Open WebUI); see the sketch below

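For reference, this is roughly the interaction I need to work end to end; a minimal sketch using the openai Python client, where the base URL, model name, and tool definition are just placeholder examples:

```python
# Minimal sketch: checks OpenAI compatibility and function calling against a
# local server. Base URL, model name, and tool are placeholder examples.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # a local OpenAI-compatible endpoint
    api_key="unused",                      # local servers typically ignore the key
)

# Function calling: the server has to understand the `tools` parameter.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="llama3.1",  # “hot” replacement: naming a different model per request should just work
    messages=[{"role": "user", "content": "What's the weather in Berlin?"}],
    tools=tools,
)
print(response.choices[0].message)
```

Any server that handles this, and can swap the model named per request, would cover my list.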
I’d be happy about any recommendations!

Possibly linux

I don’t think you are going to find anything faster. Ollama is pretty much as fast as it gets.

@CaptnBook@feddit.org

It’s not, by far. But vLLM and SGLang don’t support switching models… such a shame.
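To illustrate the limitation: vLLM fixes the model when the engine is created, so “switching” means tearing the whole process down. A rough sketch of its offline Python API, where the model name is just an example:

```python
# Rough sketch: in vLLM the model is fixed at engine construction time.
from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")  # example model; fixed per process
outputs = llm.generate(["Say hello."], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)

# Serving a different model means constructing a new engine (or restarting the
# server), unlike Ollama, which loads and unloads models on demand per request.
```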

@RandomlyRight@sh.itjust.works (creator)

There are many projects out there that optimize speed significantly. Ollama is unbeaten in convenience, though.
