A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technology news or a discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
People think these models are actually intelligent and capable of reasoning. This article discusses how and why that is not true.
They do both. The article fails to successfully argue that point and just turns the AI’s failure to answer an irrelevant trivia question into a gotcha moment.
I would encourage you to ask ChatGPT itself if it is intelligent or performs reasoning.
That said, this is one area where I wouldn’t trust ChatGPT one bit. It has no introspection (outside of the prompt), since it has no long-term memory. So everything it says about itself is based on whatever marketing material OpenAI trained it with.
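To illustrate the “no memory outside the prompt” point, here’s a minimal sketch assuming the OpenAI Python SDK (the model name is just an example). Each API call is stateless; any “memory” exists only because the client resends the whole conversation every turn.

```python
# Minimal sketch: the model is stateless between calls; conversation
# "memory" is just the message list the client resends each turn.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

history = [{"role": "user", "content": "Are you intelligent? Can you reason?"}]
reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(reply.choices[0].message.content)

# If we don't append the exchange to `history`, a follow-up question
# starts from a blank slate: the model retains nothing on its own.
history.append({"role": "assistant", "content": reply.choices[0].message.content})
history.append({"role": "user", "content": "What did you just tell me?"})
followup = client.chat.completions.create(model="gpt-3.5-turbo", messages=history)
print(followup.choices[0].message.content)
```

So any introspective answer it gives comes from its training data and system prompt, not from inspecting its own state.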
Either way, any reasonable conversation with the bot will show that it can reason and is intelligent. The fact that it gets stuff wrong sometimes is absolutely irrelevant, since every human does that too.
I think it’s hilarious you aren’t listening to anyone telling you you’re wrong, even the bot itself. Must be nice to be so confident.
You’ve got to provide actual arguments, examples, failure cases, etc. Instead, all I see is repetition of the same tired talking points from 9 months ago, when the thing launched. It’s boring and makes me seriously doubt whether humans are capable of original thought.
I think its creators have deliberately disconnected the runtime AI model from re-reading its own training material, because that’s a copyright and licensing nightmare.