Post funny things about programming here! (Or just rant about your favourite programming language.)
Rules:
Posts must be relevant to programming, programmers, or computer science.
No NSFW content.
Jokes must be in good taste. No hate speech, bigotry, etc.
Yeah that’s a no from me 😂 what causes this anyway? Badly thought out fine-tuning dataset?
Haven’t had a response sounding that out of touch from the few LLaMA variants I’ve messed around with in chat mode.
Probably just poor tuning, but in general it’s pretty hard to guarantee that the model won’t do something unexpected. Hence why it’s a terrible idea to use LLMs for something like this.