The Turing test focuses on the ability to chat—can we test the ability to think?
Cass.Forest

There is, however, still the concept of the Chinese Room thought experiment, and I don’t think AI will topple that one for a while.

For those who don’t know and don’t wish to browse off the site: the thought experiment posits a situation in which a man who does not understand Chinese is seated in a room and told to respond to strings of Chinese characters passed into the room. He has a little booklet of responses, written entirely in Chinese, that he uses to send replies back out of the room. The thought experiment asks whether the Chinese Room as a system, or even the man himself, can be said to understand Chinese.
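To make the setup concrete, here is a minimal sketch (my own illustration, not part of Searle’s original formulation) of the room as a pure lookup table: the operator matches incoming symbols against the booklet and copies out the canned reply, with no representation of meaning anywhere in the process.

```python
# A naive "Chinese Room": the operator only matches symbols and copies replies.
# The query/response pairs are hypothetical placeholders for illustration.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def room_operator(incoming: str) -> str:
    """Look up the incoming characters and copy out the listed response.

    Nothing here models meaning; an unseen query simply has no entry.
    """
    return RULE_BOOK.get(incoming, "……")  # no matching rule: the operator is stuck

if __name__ == "__main__":
    print(room_operator("你好吗？"))        # answered by pure symbol matching
    print(room_operator("你会说中文吗？"))  # not in the booklet: no sensible reply
```

Anything not in the booklet simply gets no sensible answer, which is the gap the later comments about LLM generalization are pointing at.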

With the Turing Test getting all of the media spotlight in AI, machine learning, and cognitive science, I think the Chinese Room should enter the conversation as the field of AI looks towards AGI.

The Chinese Room has already been surpassed by LLMs, which have been shown to contain neurons that activate in such high correlation with abstract concepts like “formal text” or “positive sentiment” that tweaking them is one of the options some LLM-based chatbots present to the user.
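As a hedged illustration of what “tweaking” such a neuron or direction might look like (the vectors below are random placeholders; real interpretability work extracts these directions from a trained model’s activations), one can nudge a hidden activation along a concept direction before it is decoded:

```python
import numpy as np

# Toy illustration of activation steering. The "concept direction" for something
# like "formal text" would normally be extracted from a real model; here it is
# a random placeholder.
rng = np.random.default_rng(0)

hidden_state = rng.normal(size=16)       # a hypothetical hidden activation
formal_direction = rng.normal(size=16)   # hypothetical "formal text" direction
formal_direction /= np.linalg.norm(formal_direction)

def steer(activation: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Nudge an activation along a concept direction by `strength`."""
    return activation + strength * direction

more_formal = steer(hidden_state, formal_direction, strength=3.0)
less_formal = steer(hidden_state, formal_direction, strength=-3.0)

# The projection onto the concept direction moves with the knob:
for name, act in [("base", hidden_state), ("more formal", more_formal), ("less formal", less_formal)]:
    print(name, round(float(act @ formal_direction), 2))
```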

Analyzing the activation space, it’s also been shown that LLMs cluster sequences of text representing similar concepts close to each other, which lets them produce reasonably accurate zero-shot responses to prompts that were never in the training set (that “weren’t in the book” for the Chinese Room).
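A rough sketch of that clustering claim, using invented embedding vectors rather than real model activations: texts about the same concept end up near each other (high cosine similarity), while unrelated texts do not.

```python
import numpy as np

# Hypothetical sentence embeddings; a real model would produce these from text.
embeddings = {
    "the cat sat on the mat": np.array([0.9, 0.1, 0.0]),
    "a kitten curled up on the rug": np.array([0.8, 0.2, 0.1]),
    "quarterly revenue rose 4%": np.array([0.0, 0.1, 0.95]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Sentences about the same concept cluster (similarity near 1); unrelated ones do not.
print(cosine(embeddings["the cat sat on the mat"],
             embeddings["a kitten curled up on the rug"]))   # close to 1
print(cosine(embeddings["the cat sat on the mat"],
             embeddings["quarterly revenue rose 4%"]))       # close to 0
```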

I don’t understand what you mean by “The Chinese Room has already been surpassed by LLMs”. It’s not a test that can be surpassed. It’s just a thought experiment.

In any case, you do bring up a good point. Perhaps this understanding is in the organization of the information. If you have a Chinese Room where all the query-response pairs are in arbitrary order, then maybe you wouldn’t consider that to be understanding. But if the data is organized so that similar queries and responses sit next to each other, and the person in the room doing the answering can make mistakes, such as accidentally copying out the response next to the correct one, and still make sense, then maybe we can consider that system to have a better understanding.
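To make that distinction concrete, here is a toy sketch with invented entries (not a claim about how any real system is built): when the booklet is sorted so that semantically similar queries sit next to each other, an “off by one” mistake still yields a plausible answer, whereas in an arbitrarily ordered booklet it would yield nonsense.

```python
# A booklet sorted so that neighboring entries cover similar topics.
# The entries are invented placeholders for illustration.
SORTED_BOOK = [
    ("How do I boil an egg?",   "Simmer it for about seven minutes."),
    ("How do I poach an egg?",  "Slide it into barely simmering water for three minutes."),
    ("How do I fry an egg?",    "Cook it in a hot, oiled pan until the white sets."),
    ("How do I file my taxes?", "Gather your income statements and use the official forms."),
]

def answer(query: str, off_by_one: bool = False) -> str:
    """Find the matching entry; optionally simulate copying the neighboring response."""
    index = next(i for i, (q, _) in enumerate(SORTED_BOOK) if q == query)
    if off_by_one:
        index = min(index + 1, len(SORTED_BOOK) - 1)  # the "sloppy clerk" mistake
    return SORTED_BOOK[index][1]

print(answer("How do I boil an egg?"))                   # correct entry
print(answer("How do I boil an egg?", off_by_one=True))  # neighboring entry: still about cooking eggs
```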

The Chinese Room is really a thought experiment about the inner workings of a partner in a Turing test. Externally they have the same pitfalls, but the Chinese Room also reveals itself completely if one can observe in detail the inner workings of the room/partner.

LLMs are still mostly black boxes, but we can get enough of a glimpse inside to see that they aren’t “following some rails” like a simple algorithm.

“make mistakes such as accidentally copying out the response next to the correct response and still make sense”

Precisely. This is another part that we can see with LLMs: at runtime, a “temperature” parameter is applied to the model’s output distribution, which intentionally introduces a certain level of randomness. With temperature = 0, sampling is deterministic: the model always picks its single most likely token, a pure parrot, and the output tends to degenerate into repetitive nonsense. With a very high temperature, the randomness takes over and the output becomes a total mess. But setting it just right, to a sweet spot of “very little, but not zero”, turns out to produce the outputs that we see in ChatGPT and similar.
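A minimal sketch of what the temperature knob does mathematically, with toy logits rather than a real model: it rescales the logits before the softmax, so a temperature near zero collapses onto the single most likely token and a large temperature flattens the distribution towards uniform noise.

```python
import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> int:
    """Sample a token index from temperature-scaled logits.

    temperature -> 0 : effectively greedy (always the most likely token)
    temperature large: close to uniform randomness
    """
    if temperature <= 0:
        return int(np.argmax(logits))      # deterministic, greedy decoding
    scaled = logits / temperature
    scaled -= scaled.max()                 # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(logits), p=probs))

# Toy logits over a 5-token vocabulary.
logits = np.array([2.0, 1.5, 0.3, -1.0, -2.0])
rng = np.random.default_rng(0)

for t in (0.0, 0.7, 5.0):
    picks = [sample_next_token(logits, t, rng) for _ in range(10)]
    print(f"temperature={t}: {picks}")
```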

Knowing that the concept space of LLMs has similar concepts clustered together, it makes sense that these small errors would sometimes force the LLM to make associations on the fly between nearby concepts, associations it was never trained on, which “derail” it into a close, but not exactly the same, train of thought.

This behavior also seems to be what we call “intelligence” in humans: the ability to solve problems not seen before (zero-shot).

A further extension would be the ability to constantly learn from every interaction. Right now LLMs have a “context” of some length that changes dynamically but has no influence over the weights of the pre-trained network.
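A simplified sketch of that architectural point (plain Python, not any particular framework): the context buffer grows and shifts with every turn, but the pre-trained weights are never touched at inference time.

```python
from dataclasses import dataclass, field

@dataclass
class FrozenChatModel:
    """Caricature of an LLM at inference time: fixed weights, growing context."""
    weights: tuple = (0.1, 0.2, 0.3)             # stands in for billions of frozen parameters
    context: list = field(default_factory=list)  # the only thing that changes between turns
    max_context: int = 8                         # finite context window

    def chat(self, user_message: str) -> str:
        self.context.append(user_message)
        self.context = self.context[-self.max_context:]  # old turns fall out of the window
        # A real model would condition on self.context using self.weights;
        # here we just echo to show that no weight update ever happens.
        reply = f"(reply conditioned on {len(self.context)} turns)"
        self.context.append(reply)
        return reply

model = FrozenChatModel()
print(model.chat("Hello!"))
print(model.chat("Do you remember what I said?"))
print(model.weights)  # unchanged: nothing learned is written back into the network
```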

Interestingly, this has a parallel in “crystallized intelligence” vs. “fluid intelligence” in humans.

So… maybe LLMs are not full AGIs yet, but they are showing many of the behaviors that we would expect from an AGI, while at the same time giving or confirming insights into the workings of the human mind itself.
