Why not simply make social media sites liable for anything their algorithm recommends? That's how it has always worked for published media, and when you think about it, having content picked up by an algorithm is closely analogous to having something published in traditional media.

Liability in these cases would then get decided on a case-by-case basis, but overall, social media sites would be incentivised to keep their algorithms from promoting anything in the hate speech grey area.

Everyone could still post whatever they want, but you'd be unlikely to get picked up by an algorithm for engaging in stochastic terrorism, which removes the profit motive for doing it.
This would be worth exploring. But no doubt big tech will fight it like their lives (or profit margins) depend on it.