A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
OpenAI’s gonna redo the training.
That said, it’s concerning that dictatorships can feed more data to their AIs because they don’t care about ethics. At some point their AIs might outperform western ones.
Here comes an unpopular opinion, but for the greater good we might eventually be forced to let those companies feed everything into their models.
Dictatorships (or any other ideology-driven entities) will have their own problems training AI. You cannot feed the AI material that goes against your own ideology, or it might not act in your best interest.
There are approaches for deleting topics from a trained model, so I’m not sure this will keep them busy for that long.
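Roughly, one family of those approaches ("machine unlearning") fine-tunes the model *against* the unwanted content. Here is a minimal, illustrative sketch of that idea, assuming a Hugging Face causal LM; the model name, example text, and hyperparameters are placeholders, not anyone’s actual pipeline:

```python
# Minimal sketch of unlearning by gradient ascent on a "forget" set.
# Model name, text, and hyperparameters are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Texts about the topic we want the model to "forget" (hypothetical example).
forget_texts = ["Example sentence about the unwanted topic."]

for text in forget_texts:
    batch = tokenizer(text, return_tensors="pt")
    outputs = model(**batch, labels=batch["input_ids"])
    # Ascend (rather than descend) the language-modelling loss on the
    # forget set, pushing the model away from reproducing this content.
    loss = -outputs.loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice this has to be balanced against a "retain" set so the model doesn’t degrade on everything else, which is exactly why it may or may not keep them busy for long.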
You know, ChatGPT has actually succeeded in controlling its ideological expression to a significant degree. That’s one advantage of this model.