New GPT-4o model can sing a bedtime story, detect facial expressions, read emotions.

Reducing emotion to voice intonation and facial expression trivializes what it means to feel. This approach dates from the 1970s (promoted notably by Paul Ekman) and has been widely criticized from the get-go. It's telling of a serious lack of emotional intelligence on the part of the makers of such models. This field keeps redefining words that point to deep concepts with their superficial facsimiles. If "emotion" is reduced to a smirk and "learning" to a calibrated variable, then of course OpenAI can claim grand things based on that amputated view of the human experience.

