and subsequent update to the headline, which reads like kind of a backpedal from the CEO:
Update: Tinybuild CEO Alex Nichiporchik says a recent talk that indicated the publisher uses AI to monitor staff was “hypothetical.”
Update (07/14/23): In a separate response sent directly to Why Now Gaming, Nichiporchik said the HR portion of his presentation was purely a “hypothetical” and that Tinybuild doesn’t use AI tools to monitor staff.
“The HR part of my presentation was a hypothetical, hence the Black Mirror reference. I could’ve made it more clear for when viewing out of context,” reads the statement. “We do not monitor employees or use AI to identify problematic ones. The presentation explored how AI tools can be used, and some get into creepy territory. I wanted to explore how they can be used for good.”
The company I work for uses ActivityWatch to monitor our productivity. The program isn’t very accurate, but they seem to take it as gospel, so I’ve had to set up ways to prevent it from marking me as away when I’m actually still at my PC working.
These kinds of micromanaging steps only widen the employer/employee divide. In my eyes, a good employer would talk with an employee who isn’t meeting their standards and work with them to improve things, or offer other potential solutions.
From my experience they want us to be robots, not humans.
Edit: I also forgot to mention that they monitor us with this tool without employee knowledge. They will only reveal this once they feel there’s a need to fix an “issue” (which might not even be an issue).
There’s also been a rumor going around that they are checking our webcams without our knowledge but I can’t confirm if this is true.
It’s not great overall
This is a Pandora’s box situation: when the potential for malicious use of AI, on the preponderance of the evidence, outweighs the good, one has to conclude that it’s necessary to ban it for the purpose of monitoring. This has an immense impact on disabled workers, for instance.