My main concern with people making fun of such cases is that the deficiencies of “AI” become harder to find and detect, while obviously still being present.
Whenever someone publishes proof of a system’s limitations, the company behind it gets a test case to use to improve it. The next time we - the reasonable people arguing that cybernetic hallucinations aren’t AI yet and are dangerous - try to use that point, we just get the reply “oh yeah, but they’ve fixed it”. Even people in IT often don’t understand what they’re dealing with, so non-IT people may have even more difficulty…
As for me, I just boycott this rubbish. I’ve never tried any LLM and don’t plan to, unless it’s used to work with language, not knowledge.