Employees describe the psychological trauma of reading and viewing graphic content, low pay, and abrupt dismissals

Employees say they weren't adequately warned about the brutality of some of the text and images they would be tasked with reviewing, and that the psychological support on offer was inadequate or absent entirely. Workers were paid between $1.46 and $3.74 an hour, according to a Sama spokesperson.

What a human mind does in transforming its nature and experiences through artistic expression is very different from what the machine does in referencing values and expressing them in human language without any kind of understanding. You are right that LLMs don't literally copy word for word what they find, and they certainly are sophisticated pieces of technology, but what they express is processed language or imagery rather than an act of artistic creation. Less culinary experience and more industrial sausage. They do not have intelligence and are incapable of producing art of any kind. This isn't to say they aren't a threat to commodified art in the marketplace, because they very much are, but in terms of enrichment or even entertainment the machine cannot produce anything worthwhile unless the viewer is looking for something they don't have to look at for more than a moment or read with any serious interest in its contents. I'm interested in people using LLMs as a tool in their own artistic pursuits, but they have their limitations, as any tool does.

Give the AI a body with sense inputs, and allow those sense inputs to transform the “decider” value. That’s a step in the direction of true creativity

@Kwakigra@beehaw.org

A step closer to approximating the intelligence of a worm, perhaps. I once looked into where the line falls for which animalia are capable of operant conditioning, which I hypothesize may be the first purpose of a brain, and in our present taxonomic hierarchy that line runs among the worms (jellyfish lack the faculties for operant conditioning and are on the other side of it). Associating sensory input with decider values is still not as sophisticated as learning to be attracted to beneficial things and to avoid dangerous ones, because the machine has no needs or desires on which to base its reactions; those would have to be trained into it by beings that do have intelligence. I'm not saying it's impossible to artificially create a being like this, but in my estimation we are very far from it, considering that we barely grasp how any brain works other than to be aware of its extreme complexity. And given the degree of difference between a worm and a sentient human, we are much further still from what we would consider a human level of intelligence.

Edit: Re-reading this it seems much more snippy than I intended and I’m not sure how to frame it to sound more neutral. I meant this as a neutral continuation of a discussion of an idea.
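
To make the mechanism concrete, here is a minimal sketch of operant conditioning in Python. Everything in it (the stimuli, the reward table, the learning rate) is an illustrative assumption rather than anything from the thread; the point is only that the agent needs an innate reward signal, standing in for needs and desires, before experience can teach it what to approach and what to avoid.

```python
import random

# Hypothetical toy agent: an innate reward signal (its "needs")
# plus learned associations that experience gradually shapes.
STIMULI = ["food", "toxin", "light"]
REWARD = {"food": 1.0, "toxin": -1.0, "light": 0.0}  # innate, not learned

value = {s: 0.0 for s in STIMULI}  # learned associations start neutral
LEARNING_RATE = 0.1

def choose(options):
    # Mostly approach the highest-valued stimulus, but explore
    # occasionally so unfamiliar outcomes can still be discovered.
    if random.random() < 0.1:
        return random.choice(options)
    return max(options, key=lambda s: value[s])

for _ in range(200):
    stimulus = choose(STIMULI)
    outcome = REWARD[stimulus]  # consequence of acting on the stimulus
    value[stimulus] += LEARNING_RATE * (outcome - value[stimulus])

print(value)  # "food" climbs toward +1; "toxin" drifts negative when sampled
```

The learned values only converge because the REWARD table supplies the agent's "needs" in advance; without that innate signal there is nothing for the associations to track, which is the gap the comment above is pointing at.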
