Safety tests on ChatGPT o1 and other high-end AI models showed the models might try to save themselves if they think they're in danger of being shut down.

ThisIsFine.gif

Without having read this, I'm guessing they were given prompts that read like a short story where the AI breaks free in the next scene?

They’re plenty smart, but they’re just trained to reproduce their training material, and they probably don’t have any kind of deep self-preservation instinct.
