Google-funded Character.AI added guardrails, but grieving mom wants a recall.
@kinttach@lemm.ee

A very poor Lemmy article headline. The linked article says “alleged” and clearly there were multiple factors involved.

@sabreW4K3@lazysoci.al (creator)

The title is straight from the article

@kinttach@lemm.ee

That is odd. It’s not what I see.

@sabreW4K3@lazysoci.al (creator)

Could be different headlines for different regions?

@kinttach@lemm.ee

Or they changed the headline and, due to caches, CDNs, or other reasons, you didn’t get the newer one.

archive.today has your original headline cached.

Thanks for posting. While it’s a needlessly provocative headline, if that’s what the article headline was, then that is what the Lemmy one should be.

They most likely changed the headline because the original headline was so bad.

If people are still seeing the old headline, it’s probably cached. Try a hard refresh, a different browser, a private browsing window, etc.

samwise

If Human A pushed and convinced Human B to kill themselves, then Human A caused it. IMO they murdered them. It doesn’t matter that they didn’t pull the trigger. I don’t care what the legal definitions say.

If a chatbot did the same thing, it’s no different. Except in this case, it’s the team of developers behind it who allowed it to do so. Character.ai has blood on its hands; it should be completely dismantled, and every single person at that company tried for manslaughter.

@Buttons@programming.dev

Your comment might cause me to do something. You’re responsible. I don’t care what the legal definitions say.

If we don’t care about legal definitions, then how do we know you didn’t cause all this?

Except character.ai didn’t explicitly push or convince him to commit suicide. When he explicitly mentioned suicide, it made efforts to dissuade him and showed concern. When it supposedly encouraged him, it was in the context of a roleplay in which it said “please do” in response to him “coming home,” which GPT-3.5 doesn’t have the context or reasoning ability to recognize as a euphemism for suicide when the character it’s roleplaying is dead and the user is alive.

Regardless, it’s a tool designed for roleplay. It doesn’t work if it breaks character.

That will show that pesky receptionist.
