Experts warn chatbots writing police reports can make serious errors.

Here’s a great example of dystopian tech being rolled out without guardrails. Brought to you by Axon, which you may know as the company that rebranded after Taser became a liability as a name.

So, AI that is strictly incapable of generating new ideas is going to be fed decades of police reports as its database, and use that data to discern what makes a good police report?

Surely this won’t replicate decades-old systemic problems with racial profiling. I mean, all these police reports are certainly objective, with no hint of bias to be found in the officers writing them.
