A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s news about or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
If protecting teen safety were not generating a profit or avoiding a cost, they would have no incentive to do it.
This is not a defence of them but a comment on the lack of regulation they operate under.
Yeah. On the one hand, fuck all the evil fucks at Meta. On the other, we need to stop pretending the private sector is going to make rules and frameworks to protect anyone. We all know that private capital is not capable of behaving ethically unless compelled to at the barrel of a gun.
It shouldn’t be that way. We should be able to trust that people will behave ethically. But we can’t. They won’t. They are unethical and like being that way. They’re monsters and have no intention of being otherwise.
🤖 I’m a bot that provides automatic summaries for articles:
NEW YORK, Nov 7 (Reuters) - A former Meta (META.O) employee, Arturo Bejar, testified before a U.S. Senate subcommittee on Tuesday, alleging that the Facebook and Instagram parent company was aware of harassment and other harms facing teens on its platforms but failed to address them.
Bejar’s testimony comes amid a bipartisan push in Congress to pass legislation that would require social media platforms to provide parents with tools to protect children online.
The goal of his work at Meta was to influence the design of Facebook and Instagram in ways that would nudge users toward more positive behaviors and provide tools for young people to manage unpleasant experiences, Bejar said at the hearing.
Meta said in a statement that it is committed to protecting young people online, pointing to its backing of the same user surveys Bejar cited in his testimony and its creation of tools like anonymous notifications of potentially hurtful content.
In one 2021 email, Bejar flagged to Zuckerberg and other top executives internal data revealing that 51% of Instagram users had reported having a bad or harmful experience on the platform in the past seven days.
Bejar met last week with two senators sponsoring the Kids Online Safety Act who said he shared evidence that Meta executives ignored harm to young people on the company’s platforms.
Saved 58% of original text.