Large language model AIs might seem smart on a surface level, but they struggle to actually understand the real world and model it accurately, a new study finds.

As such, it raises concerns that AI systems deployed in real-world settings, say in a driverless car, could malfunction when presented with dynamic environments or tasks.

This is already happening with driverless cars that use machine learning, so this goes beyond LLMs and is a general machine learning issue. Last time I checked, Waymo cars needed human intervention every six miles. These cars often block each other, get confused by the simplest of obstacles, can’t reliably detect pedestrians, etc.
