White House officials concerned about AI chatbots' potential for societal harm and the Silicon Valley powerhouses rushing them to market are heavily invested in a three-day competition ending Sunday at the DefCon hacker convention in Las Vegas.

Current AI models are simply too unwieldy, brittle and malleable, academic and corporate research shows. Security was an afterthought in their training as data scientists amassed breathtakingly complex collections of images and text. They are prone to racial and cultural biases, and easily manipulated.

@girlfreddy@lemmy.ca (creator)

I disagree. Even a basic list of word substitutions (i.e., the N-word to Black, or f*g to gay) would have helped.

Making these companies work harder to bring their product online isn’t a bad thing here.
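As a rough, hypothetical sketch of that substitution idea (Python; the REPLACEMENTS table is a placeholder standing in for an actual curated list, with dummy keys rather than real slurs):

```python
import re

# Placeholder mapping: dummy keys stand in for the actual terms to replace.
REPLACEMENTS = {
    "badword1": "Black",
    "badword2": "gay",
}

def substitute_terms(text: str) -> str:
    """Swap each listed term for its neutral counterpart, whole words only, case-insensitively."""
    for term, neutral in REPLACEMENTS.items():
        text = re.sub(rf"\b{re.escape(term)}\b", neutral, text, flags=re.IGNORECASE)
    return text
```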

Then you’d get things like “Black is a pejorative word used to refer to black people”

@girlfreddy@lemmy.ca (creator)

Then disallow the whole sentence containing the N-word.

There are ways to do security in AI training, easy or not. Companies just throwing their hands in the air and screaming that it can’t be done are lying through their teeth.
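A minimal sketch of that sentence-level filter, assuming a curated BLOCKLIST (placeholder entries here) and a plain Python pass over the training corpus:

```python
import re

# Placeholder blocklist: a real one would be curated, not hard-coded dummies.
BLOCKLIST = {"badword1", "badword2"}
PATTERN = re.compile(r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b", re.IGNORECASE)

def keep_example(text: str) -> bool:
    """Drop any training example that contains a blocklisted term as a whole word."""
    return PATTERN.search(text) is None

corpus = ["an innocuous sentence", "a sentence containing badword1"]
filtered = [s for s in corpus if keep_example(s)]  # keeps only the first example
```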

@lily33@lemm.ee

Why don’t you go to https://huggingface.co/chat/ and actually try to get the llama-2 model to generate a sentence with the n-word?

I tried to get it to tell me how long it would take to eat a helicopter, since that’s one of the model’s pre-built prompts and I thought it would be funny. I went through every coercive AI tactic that’s been thrown around, and it just repeatedly said no and told me I should be respectful and responsible about the thing. It was quite aggressive and annoying about it.

It sounds simple, but data conditioning like that is how you get “Scunthorpe” blacklisted, and the effects on the model, even if the filtering is executed perfectly, are unpredictable. It could run into issues of “race blindness”, where the model has no idea these words are bad and as a result is incapable of accommodating humans when the topic comes up. Suppose in five years there’s a therapist AI (not ideal, but mental health is horribly understaffed and most people can’t afford a PhD therapist) that gets a client who is upset because they were called a f**got at school; it would have none of the cultural context required to help.
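For illustration, a hypothetical sketch of that Scunthorpe-style failure, using a harmless placeholder term rather than a real slur:

```python
# Naive substring blacklisting flags innocent words that merely contain a listed
# term -- the same failure mode that famously catches the town name "Scunthorpe".
BLOCKLIST = ["ass"]  # placeholder entry

def naive_flag(text: str) -> bool:
    """Flag text if any blocklisted term appears anywhere, even inside another word."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

print(naive_flag("a classic passage about assassins"))  # True -- a false positive
```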

Techniques like “constitutional AI” and RLHF, applied after the foundation model is trained, really are the best approach here: they let you capture an unbiased view of a very biased culture, and then shape the model’s attitudes toward it afterwards.

@sciawp@lemm.ee

I agree with you but I’m just gonna say with basic regex (hell, even without regex) you can easily find bad words without the problem you mentioned above.
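A hypothetical sketch of that whole-word matching, reusing the placeholder term from the sketch above:

```python
import re

# \b word boundaries keep the match from firing inside longer, innocent words.
BLOCKLIST = ["ass"]  # placeholder entry
pattern = re.compile(r"\b(" + "|".join(map(re.escape, BLOCKLIST)) + r")\b", re.IGNORECASE)

print(bool(pattern.search("a classic passage about assassins")))  # False -- no false positive
print(bool(pattern.search("don't be an ass")))                    # True
```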

Word filters tend to suck in online games and the like because they have to navigate players actively trying to evade them, which I think could still be improved with a little effort.
