PrivateGPT + CS books = ask books questions while self-learning?
The issue with that is that LLMs tend to lie when they don't know something. The best tools for that are Stack Overflow, Lemmy, Matrix, etc.
Yeah, and they don't just lie. They lie extremely convincingly, with total confidence. If you ask them to write code, they'll happily make up non-existent libraries.
In theory, this could even be used as an attack vector. You could repeatedly ask an AI to generate code and, whenever it hallucinates a package name, register that name yourself and publish a malicious package under it. Then you just wait for some future victim to get the same hallucination and install your package.
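To make the trick concrete, here's a minimal sketch in Python, assuming PyPI as the target ecosystem. The package names are made up for illustration; the script only checks which hallucinated names are still unregistered (PyPI's JSON API returns 404 for unclaimed names), which is the reconnaissance step an attacker would do before publishing under them:

```python
import urllib.error
import urllib.request

# Hypothetical package names an LLM might hallucinate (made up for this example).
hallucinated = ["fastjsonx", "torch-utils-pro", "pyhttp-async2"]

def is_unclaimed(name: str) -> bool:
    """Return True if `name` is not registered on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url):
            return False  # 200 OK: the package already exists
    except urllib.error.HTTPError as err:
        return err.code == 404  # 404: the name is still unclaimed

for name in hallucinated:
    status = "unregistered (squattable)" if is_unclaimed(name) else "already on PyPI"
    print(f"{name}: {status}")
```

The same check works for other registries (npm, crates.io, etc.), which is why blindly installing whatever dependency an LLM suggests is a bad idea.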