Large language models mimic human chatter, but scientists disagree on their ability to reason.

No, and that definition has nothing in common with what the word means.

Autocorrect has plenty of information encoded as artifacts of how it works. ChatGPT isn’t like autocorrect. It is autocorrect, and doesn’t do anything more.

It’s fine if you think so, but then it’s a pointless argument over definitions.

You can’t have a conversation with autocomplete. It’s qualitatively different. There’s a reason we didn’t have this kind of code generation before LLMs.

Adversus solem ne loquitor. (“Don’t argue against the sun.”)

If you just keep taking the guessed next word from autocomplete, you also get a bunch of words shaped like a conversation.
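That “keep taking the top guess” process is easy to demonstrate. Below is a minimal sketch, assuming a toy bigram model built from a tiny made-up corpus (the corpus, function names, and greedy tie-breaking are illustrative assumptions, not any real phone keyboard’s implementation): always picking the single most frequent next word quickly falls into a repeating loop of conversation-shaped words.

```python
# Toy sketch of greedy "take the top autocomplete guess" decoding.
# The corpus and names here are illustrative, not a real autocomplete.
from collections import Counter, defaultdict

def build_bigram_model(text):
    """Count which word most often follows each word in the text."""
    words = text.split()
    follows = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        follows[cur][nxt] += 1
    return follows

def greedy_complete(model, start, n_words=12):
    """Repeatedly append the single most frequent next word."""
    out = [start]
    for _ in range(n_words):
        candidates = model.get(out[-1])
        if not candidates:
            break  # dead end: no word has ever followed this one
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran to the mat"
model = build_bigram_model(corpus)
print(greedy_complete(model, "the"))
```

With no randomness, the output cycles through the same short phrase, much like the repeated sentence fragment quoted in the next comment.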

I am not sure of the relevance of the oppressed classes and with the object of duping the latter is the cravings of the oppressed classes and with the object of duping the latter

Yeah, totally. Repeating the same nonsensical sentence over and over is also how I converse. 🙄
