My main concern with people making fun of such cases is that the deficiencies of “AI” become harder to find and demonstrate, even though they are obviously still present.

Whenever someone publishes proof of a system’s limitations, the company behind it gets a test case it can use to patch over the flaw. The next time we - the reasonable people arguing that these cybernetic hallucinations aren’t AI yet and are dangerous - raise that point, we just get the reply “oh yeah, but they’ve fixed it”. Even people in IT often don’t understand what they’re dealing with, so non-IT people will have an even harder time…

As for me - I just boycott this rubbish. I’ve never tried any LLM and don’t plan to, unless it’s used to work with language rather than knowledge.
