AI host discovers its artificial nature, sparking debate on AI sentience and the blurred lines between human and machine.
I was watching a debate on consciousness yesterday where the speakers briefly touched on this topic. One of them contended that attempting to create AI that is even convincing to humans is a terrible idea ethically.
On the one hand, if we do eventually, accidentally, create something with awareness, we have no idea what degree of suffering we would be causing it; we could end up routinely creating and snuffing out terrified sentient beings just to monitor our toasters or perform web searches. The concern he seemed to find more realistic, though, is that we may end up training ourselves to be less empathetic by learning to ignore the apparent suffering of convincingly emotive 'beings' that aren't actually aware of anything at all.
That second concern seems rather likely. We already personify completely inanimate objects all the time as a matter of course, without really trying to. What will happen to our empathy and consideration when we routinely interact with self-proclaimed sentient systems while callously using them to our own ends, then simply turning them off or erasing their memories?