we love google (and LLMs)

Further fact-checking showed that THE IMAGE HAS BEEN EDITED.

I’d say it’s not the LLM at fault. The LLM is essentially innocent. It’s the same as a four-year-old being told that if they clap hard enough they’ll make thunder. It’s not the kid’s fault that they’re being fed bad information.

The parents (companies) should be more responsible about what they tell their kids (LLMs).

Edit: disregard this, though, if I’ve completely misunderstood your comment.

I mean - I don’t think anyone’s solution to this issue would be to put an AI on trial… but it’d be extremely reasonable to hold Google responsible for any potential damages from this and I think it’d also be reasonable to go after the organization that trained this model if they marketed it as an end-user ready LLM.

Yeah, that’s my point, too. AI-employing companies should be held responsible for the stuff their AIs say. See how much they like their AI hype when they’re on the hook for it!

Sorry, I didn’t know we might be hurting the LLM’s feelings.

Seriously, why be an apologist for the software? There’s no effective difference between blaming the technology and blaming the companies who are using it uncritically. I could just as easily be an apologist for the company: not their fault they’re using software they were told would produce accurate information out of nonsense on the Internet.

Neither the tech nor the corps deploying it are blameless here. I’m well aware that an algorithm only does exactly what it’s told to do, but the people who made it are also lying to us about it.

@barsoap@lemm.ee

> Sorry, I didn’t know we might be hurting the LLM’s feelings.

You’re not going to. CS folks like to anthropomorphise computers and programs; that doesn’t mean we think they have feelings.

And we’re not the only profession doing that, though it might be more obvious in our case. A civil engineer, when a bridge collapses, is also prone to ask “is the cable at fault, or the anchor?” without ascribing feelings to anything. What it is, though, is ascribing a sort of animist agency, which comes naturally to many people when wrapping their heads around complex systems full of different things, well, doing things.

The LLM is, indeed, not at fault. The LLM is a braindead cable anchor that some idiot put in a place where it’s bound to fail.

@thehatfox@lemmy.world

I’d say it’s more that parents (companies) should be more responsible about what they tell their kids (customers).

Because right now the companies have a new toy (AI) that they keep telling their customers can make thunder from clapping. But in reality the claps sometimes make thunder and are just as likely to make farts. Occasionally some incredibly noxious ones, too.

The toy might one day make earth-rumbling thunder reliably, but right now it can’t get close, and saying otherwise is what’s irresponsible.
