Is this even still a thing? It seems to be pretty well dead. Poe-API shat the bed, GPT4FREE got shut down and its replacement seems to be pretty much non-functional, proxies are a weird secret-club thing (despite being almost entirely based on scraped corporate keys), etc.
You mention at the end of your post that you’ve gotten a lot out of some of the chat bots. Maybe give this one a try; it’s great for venting or just getting out pent-up stress.
https://fire.place/
SNIP
You can try Character.ai
Sorry you’re going through that. I definitely get how it feels to have people close to you discredit or just ignore important issues like the ones you’re dealing with.
If you’re set on talking to an AI, though, I did use the Replika app for a while before they started making it seem like a virtual AI lover. It did help me feel better when I was severely depressed; maybe it could help you.
If you ever want to talk to a person and not an AI I’m here for that if you want, I know I’m a stranger but I definitely understand where you’re coming from.
I would really advise against Replika, they’ve shown some scummy business practices. It seems like kind of a nightmare in terms of taking advantage of vulnerable people. At the very least do some research on it before getting into it.
GPT & Dall-e for All - Hacked! No signups or logins.
Fixed link
Try huggingchat from huggingface
https://huggingface.co/chat/
Check out OpenAssistant, a free-to-use and open-source LLM-based assistant. You can even run it locally so no one else can see what you’re doing.
I have an R9 380 that I’m never going to be able to replace. Local isn’t really an option.
My experience is with gpt4all (which also runs locally), but I believe the GPU doesn’t matter because you aren’t training the model yourself. You download a trained model and run it locally. The only cap they warn you about is RAM - you’ll want at least 16 GB, and even then you might want to stick to a lighter model.
No, LLM text generation is generally done on the GPU, as that’s the only way to get any reasonable speed. That’s why there’s a specifically-made Pyg model for running on CPU. Even then, one generation can take anywhere from five to twenty minutes on CPU. It’s moot anyway, as I only have 8 GB of RAM.
I’m just telling you, it ran fine on my laptop with no discrete GPU 🤷 RAM seemed to be the only limiting factor. But yeah if you’re stuck with 8GB, it would probably be rough. I mean it’s free, so you could always give it a shot? I think it might just use your page file, which would be slow but might still produce results?
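The RAM argument above can be sketched with rough arithmetic: a quantized model needs roughly (parameter count × bits per weight ÷ 8) bytes, plus some runtime overhead. This is a back-of-the-envelope estimate, not gpt4all's documented behavior; the 20% overhead factor and the quantization levels are assumptions.

```python
def model_ram_gb(params_billion: float, bits_per_weight: int,
                 overhead: float = 1.2) -> float:
    """Approximate resident memory (GB) for a quantized LLM.

    overhead is an assumed ~20% allowance for the KV cache,
    activations, and runtime buffers.
    """
    weight_bytes = params_billion * 1e9 * (bits_per_weight / 8)
    return weight_bytes * overhead / 1e9

# A 7B-parameter model at 4-bit quantization: ~4.2 GB
print(round(model_ram_gb(7, 4), 1))
# A 13B model at 4-bit: ~7.8 GB - already filling an 8 GB machine
print(round(model_ram_gb(13, 4), 1))
```

By this estimate a 7B 4-bit model fits comfortably in 16 GB, while on an 8 GB machine a 13B model would spill into the page file once the OS and other apps are counted, which matches the "slow but might still produce results" experience described above.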