I have experience running servers, but I’d like to know if this is possible — I just need a private LLM comparable to GPT-3.5.

Tempo

They’re Ryzen processors with “AI” accelerators, so an LLM can definitely run on one of those. Other options are available, like lower-powered ARM chipsets (RK3588-based boards) with accelerators that might have half the performance but are far cheaper to run; that should be enough for a basic LLM.

exu

I don’t know of any project that already supports that AI processor. You’d still be using the CPU and GPU at the moment.

@TheBigBrother@lemmy.world (creator)

The K8 is Ryzen, the K9 is Intel. Money isn’t a problem, and it’s not an expense, it’s an investment — I need it for business. Which of these two models would you recommend for a reasonably good LLM and Stable Diffusion setup?

I’m looking for the most cost-effective solution.

It’s doable. Stick to the 7b models and it should work for the most part, but don’t expect anything remotely approaching what might be called reasonable performance. It’s going to be slow. But it can work.

To get a somewhat usable experience you kinda need an Nvidia graphics card or an AI accelerator.

@1rre@discuss.tchncs.de

Intel Arc also works surprisingly well and consistently for ML if you use llama.cpp for LLMs or AUTOMATIC1111 for Stable Diffusion; it’s definitely much closer to Nvidia in terms of usability than it is to AMD.

@TheBigBrother@lemmy.world (creator)

Would you suggest the K9 instead of the K8?

@TheBigBrother@lemmy.world (creator)

I need it to make academic works pass the anti-AI detection systems. What do you recommend for that job? It’s for business, so I need reasonably good performance, but nothing extravagant…

I believe commercial LLMs leave some kind of watermark when you apply AI for grammar and general fixes, so I just need a private LLM to make these works undetectable.

Something with a GPU that’s good for LLMs would be best.

I believe commercial LLMs leave some kind of watermark when you apply AI for grammar and general fixes, so I just need a private LLM to make these works undetectable.

That’s not how it works, sorry.

@TheBigBrother@lemmy.world (creator)

I was talking about that with a friend a few days ago, and they ran an experiment: they had the AI correct only the punctuation errors in a text document — no word changes at all, which you could easily do manually — and the anti-AI system flagged it as 99% AI-made. I don’t know how to explain that; maybe the text was AI-generated in the first place, IDK, or there’s a watermark somewhere, a pattern or something.

Edit: your point is that there’s no way to fool the anti-AI systems by running a private LLM?

@entropicdrift@lemmy.sdf.org

Just that they’re no easier to use to fool an anti-AI system than using ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works made by humans. They’re unreliable in the first place.

Basically, they’re “boring text detectors” more than anything else.

@TheBigBrother@lemmy.world (creator)

I have a friend who runs a business doing homework on demand, and he’s using AI to do the work. He had a job returned because AI-generated content was detected in it. He used to employ real people, but they sometimes used AI too. So, since he knows I’m a “hacker” LMAO, he asked me if I knew any way to fool the anti-AI systems. I thought about running a private LLM and training it on real human-generated content, like ebooks matching the subject of each assignment. Do you think it could be possible to fool these things with this method?

So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.

But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.

I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.

Besides, there are better reasons/ways to fight the system than helping other people avoid learning.

@TheBigBrother@lemmy.world (creator)

TBH I’m going down the rabbit hole hard; it’s the way I am. If I get an idea, I’m not happy until it starts making money. As I see it, it’s not completely bad: education is a fucking shitty mess, just a way to take money from people (making them pay off a loan for 30 years) and perpetuate the fake idea of social status. If we get some of those bucks along the way, I don’t see what’s wrong with it; anyway, these dumb people will do their thing one way or another.

@hperrin@lemmy.world

Your “friend’s” business is very unethical. Maybe your friend should think about what they’re doing with their life, and quit doing this.

@1rre@discuss.tchncs.de

LLMs have a very predictable and consistent approach to grammar, punctuation, style, and general cadence that is easily identifiable when compared to human-written content. It’s kind of a watermark, but one the creators are aware of and are seeking to remove. That means if you want to use LLMs as a writing aid of any sort and want the result to read somewhat naturally, you’ll have to either get it to generate bullet points and expand on them yourself, or get it to generate the content and then rewrite it word for word in a style you’d write in.

@al4s@feddit.de

LLMs work by always predicting the most likely next token, and LLM detection works by checking how often the most likely next token was actually chosen. You can tell the LLM to choose less likely tokens more often (turn up the temperature parameter), but you’ll only get gibberish out if you push it far enough. So no, there isn’t.
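As a toy illustration of both halves of that claim — a sketch with a made-up 5-token vocabulary, not any real model or detector:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_with_temperature(logits, temperature=1.0):
    """Sample a token index after temperature scaling.

    Higher temperature flattens the distribution, so less likely
    tokens get picked more often (at the cost of coherence).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()  # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

def top_token_rate(logits_per_step, chosen_tokens):
    """Crude 'detector': fraction of steps where the single most
    likely token was the one actually chosen."""
    hits = sum(int(np.argmax(l) == t)
               for l, t in zip(logits_per_step, chosen_tokens))
    return hits / len(chosen_tokens)

# Toy distribution with a clear favourite (token 0).
logits = [2.0, 1.0, 0.5, 0.1, -1.0]
steps = [logits] * 1000

greedy = [sample_with_temperature(logits, temperature=0.1) for _ in steps]
hot = [sample_with_temperature(logits, temperature=5.0) for _ in steps]

print(top_token_rate(steps, greedy))  # near 1.0: looks "AI-like"
print(top_token_rate(steps, hot))     # much lower, but real text would degrade
```

At low temperature the top token is picked almost every step, which is exactly the statistical regularity a detector keys on; at high temperature the rate drops, but in a real model the output quality drops with it.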

@TheBigBrother@lemmy.world (creator)

I think hosting my own LLM wouldn’t work. As someone said, the big models are already trained on everything on the internet, so there’s no point in feeding it more material like ebooks. I’d have to find a way to make the AI write “dumber”, or have it analyze the way an author writes and then emulate that author.

@hperrin@lemmy.world

Maybe just write the academic works yourself, then they should pass.

@TheBigBrother@lemmy.world (creator)

My friend used to employ several people for that, but they started using AI to work less, so he decided to start doing it on his own with AI instead of paying someone else to do the same.

@hperrin@lemmy.world

So your “friend’s” unethical business hired unethical workers and now you’ve come here to ask for advice on running your unethical business without paying anyone. Got it.

@TheBigBrother@lemmy.world (creator)

Not exactly. If you have relevant information, send me a PM and we can come to an agreement…

Edit: never mind, found the answer by myself. Anyway, good luck!!

@MasterNerd@lemm.ee

Look into Ollama. It shouldn’t be an issue if you stick to 7B-parameter models.
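For context, Ollama serves models over a local REST API (default port 11434). A minimal sketch of the request body for its `/api/generate` endpoint — the `mistral:7b` model name here is just an example, and actually sending the request needs a running server with that model already pulled:

```python
import json

# Ollama's default local endpoint for one-shot completions.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(prompt, model="mistral:7b", stream=False):
    """Return the JSON body for a single non-streaming completion."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": stream,
    })

body = build_generate_request("Why is the sky blue?")
print(body)
```

POSTing that body to `OLLAMA_URL` (with any HTTP client) returns the generated text in the `response` field.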

@TheBigBrother@lemmy.world (creator)

Yeah, I did see something related to what you mentioned and I was quite interested. What about quantized models?

Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14B-parameter model down to 4-bit integer quantization, it’ll still generally outperform a 16-bit floating-point 7B-parameter equivalent.
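A back-of-envelope sketch of why that trade-off works: weight memory scales with parameters × bits per weight, so a 4-bit 14B model needs roughly half the weight memory of an fp16 7B one (this ignores the KV cache, activations, and per-layer overhead):

```python
def model_weight_gib(n_params_billion, bits_per_weight):
    """Approximate memory for the weights alone."""
    bytes_total = n_params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # bytes -> GiB

# 7B at 16-bit float vs 14B at 4-bit int:
print(f"7B  fp16: {model_weight_gib(7, 16):.1f} GiB")   # ~13.0 GiB
print(f"14B int4: {model_weight_gib(14, 4):.1f} GiB")   # ~6.5 GiB
```

So the quantized 14B model both fits in less memory and keeps twice the parameter count.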

@TheBigBrother@lemmy.world (creator)

Interesting information, mate. I’m reading up on the subject, thx for the help 👍👍

@MasterNerd@lemm.ee

I don’t have any experience with them honestly so I can’t help you there

@TheBigBrother@lemmy.world (creator)

Appreciate you 👍👍
