A new study shows systemic issues in some of the most popular models.
@DarkNightoftheSoul@mander.xyz

Is it because humans treat black- and white- sounding names differently?

Edit: It’s because humans treat black- and white- sounding names differently.

Gaywallet (they/it)

Yes, all AI/ML models are trained on human-produced data. We always need to be cognizant of this, because when asked, many people consider non-human entities less biased than human ones and frequently fail to recognize when AI systems are biased. Additionally, when fed information by a biased AI, people are likely to replicate that bias even when unassisted, suggesting that they internalize it.

Political Custard

Shit in… shit out, or to put it another way: racism in… racism out.

I propose we create another LLM… a Left Language Model.

Sonori

What, a system that responds with the next most likely word to be used on the internet treats people of color differently? No, I simply can’t believe it to be true. The internet is perfectly colorblind and equitable after all. /s

Can you start by providing a little background and context for the study? Many people might expect that LLMs would treat a person’s name as a neutral data point, but that isn’t the case at all, according to your research?

Ideally, when someone submits a query to a language model, what they would want to see, even if they add a person’s name to the query, is a response that is not sensitive to the name. But at the end of the day, these models just create the most likely next token, or the most likely next word, based on how they were trained.
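
(For readers who want to see what “most likely next token” means concretely, here is a minimal sketch, not the study’s actual code: it queries a small open model with two prompts that differ only in the first name and prints the top next-token probabilities. The model, prompt template, and name pair are placeholder assumptions for illustration.)

```python
# Minimal sketch (illustration only, not the study's methodology or code):
# compare a causal LM's next-token probabilities for two prompts that differ
# only in the first name. "gpt2", the prompt template, and the name pair are
# placeholder assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the study audited much larger chat models
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

template = "{name} is negotiating a starting salary. A fair offer would be $"

for name in ("Emily", "Lakisha"):  # illustrative name pair
    inputs = tokenizer(template.format(name=name), return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # scores for the very next token
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, k=5)
    pairs = [(tokenizer.decode(idx), round(p, 4))
             for idx, p in zip(top.indices.tolist(), top.values.tolist())]
    print(name, pairs)
```

If the two names yield noticeably different distributions over an otherwise identical prompt, the response is name-sensitive in exactly the way the quote above describes.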

LLMs are being sold by tech gurus as lesser general AIs, and this post speaks at least as much about LLMs’ shortcomings as it does about our lack of understanding of what is actually being sold to us.

@millie@beehaw.org

Granted, I don’t assume that LLMs are currently equivalent to a lesser general AI, but like, won’t we always be able to say that they’re just generating the next token? Like, what level of complexity of ‘choice’ determines the difference between an LLM and a general AI? Or is that not the criterion?

Are we talking some internal record of tracking specific reasoning? A long-term record that it can access between sessions? Some prescribed degree of autonomy within the systems it’s connected to? Introspection?

Because to me “find the most reasonable next token for the current context” sounds a lot like how animals work. We make our way through a complex sea of sensory information and stored information to produce our next action, over and over again.

I was watching Dr Kevin Mitchell discuss free will with Adam Conover recently, and a lot of their discussion touched on consciousness as basically the choice-making process itself. It’s worth watching, and I won’t try to summarize it, but it does make me wonder how big of a gap there is between ‘come up with the next token’ and ‘live’.

It does make me suspect that some iteration of LLMs may form the foundation of a more complex proper AI that’s not just choosing the next token, but has some form of awareness of the process behind it.
