Is this even still a thing? It seems to be pretty well dead. Poe-API shat the bed, GPT4FREE got shut down and its replacement seems pretty much non-functional, proxies are a weird secret-club thing (despite being nearly totally based on scraped corporate keys), etc.

You mention at the end of your post that you’ve gotten a lot out of some of the chatbots. Maybe give this one a try; it’s great for venting or just getting out pent-up stress.

https://fire.place/

@Ganbat@lemmyonline.com (creator)

SNIP

Sorry you’re going through that. I definitely get how it feels to have people close to you discredit or just ignore important issues like you’re dealing with.

If you’re set on talking to an AI, though, I used the Replika app for a while before they started pushing it as a virtual AI lover. It did help me feel better when I was severely depressed; maybe it could help you.

If you ever want to talk to a person and not an AI, I’m here for that. I know I’m a stranger, but I definitely understand where you’re coming from.

Melmi

I would really advise against Replika; they’ve shown some scummy business practices, and it seems like kind of a nightmare in terms of taking advantage of vulnerable people. At the very least, do some research on it before getting into it.

deleted by creator

Try HuggingChat from Hugging Face:

https://huggingface.co/chat/

Check out OpenAssistant, a free-to-use and open-source LLM-based assistant. You can even run it locally so no one else can see what you’re doing.
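If you want to try the local route, here’s a minimal sketch using the Hugging Face transformers library. The checkpoint name, prompt format, and generation settings are just illustrative assumptions (check the model card for the real details), and a model this size needs a lot of RAM or VRAM:

```python
# Rough sketch: running an OpenAssistant checkpoint locally with Hugging Face transformers.
# The model name and prompt tokens below are examples, not a guaranteed recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5"  # example checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)  # large download; needs plenty of RAM/VRAM

# OpenAssistant SFT models expect a chat-style prompt wrapped in special tokens.
prompt = "<|prompter|>I've had a rough week and just need to vent.<|endoftext|><|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Smaller quantized variants exist if your hardware can’t handle the full model.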

@Ganbat@lemmyonline.com (creator)

I have an R9 380 that I’m never going to be able to replace. Local isn’t really an option.

coyotino [he/him]

My experience is with gpt4all (which also runs locally), but I believe the GPU doesn’t matter because you aren’t training the model yourself. You download a trained model and run it locally. The only requirement they warn you about is RAM: you’ll want at least 16 GB, and even then you might want to stick to a lighter model.
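For anyone curious, a bare-bones sketch with the gpt4all Python bindings; the model file name is just an example and the exact API may differ between versions:

```python
# Minimal gpt4all sketch: fetches a quantized model (if not already downloaded)
# and runs generation on the CPU. The model file name is an example; pick one that fits your RAM.
from gpt4all import GPT4All

model = GPT4All("ggml-gpt4all-j-v1.3-groovy.bin")  # a few GB on disk, CPU-only works

response = model.generate("I've been feeling down lately. Any advice?", max_tokens=200)
print(response)
```

On 8 GB of RAM you’d want one of the smaller models, and CPU generation will still be slow.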

@Ganbat@lemmyonline.com (creator)

No, LLM text generation is generally done on a GPU, as that’s the only way to get any reasonable speed. That’s why there’s a specifically-made Pyg model for running on CPU. That said, one generation can take anywhere from five to twenty minutes on CPU. It’s moot anyway, as I only have 8 GB of RAM.

coyotino [he/him]

I’m just telling you, it ran fine on my laptop with no discrete GPU 🤷 RAM seemed to be the only limiting factor. But yeah, if you’re stuck with 8 GB, it would probably be rough. I mean, it’s free, so you could always give it a shot? I think it might just use your page file, which would be slow but might still produce results.
