Large language models mimic human chatter, but scientists disagree on their ability to reason.

Let two AI’s talk to each other and see if they find out that they both aren’t humans?

bedrooms
link
fedilink
6
edit-2
1Y

Bro, humans literally don’t have that capability (that’s the presumption here). Or are you saying that many of us don’t have better consciousness than AIs? I might agree with that!

Ferk
link
fedilink
2
edit-2
1Y

The AI can only judge by having a neural network trained on what’s a human and what’s an AI (and btw, for that training you need humans)… which means you can break that test by making an AI that also accesses that same neural network and uses it to self-test the responses before outputting them, providing only exactly the kind of output the other AI would give a “human” verdict on.

So I don’t think that would work very well, it’ll just be a cat & mouse race between the AIs.

Create a post

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

Subcommunities on Beehaw:


This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

  • 1 user online
  • 144 users / day
  • 275 users / week
  • 709 users / month
  • 2.87K users / 6 months
  • 1 subscriber
  • 3.12K Posts
  • 65.1K Comments
  • Modlog