• 0 Posts
  • 12 Comments
Joined 1Y ago
Cake day: Jun 01, 2023


This is what I’m worried about. As the fediverse grows and gains popularity it will undoubtedly become worth targeting. It’s not hard to imagine it becoming a lucrative target for bots doing things like astroturfing and vote brigading. For centralized sites it’s not hard to come up with solutions that at least minimize the problem. But when everyone can just spin up a Lemmy, Kbin, etc. instance it becomes a much, much harder problem to tackle, because instances can also be run by the bot farms themselves, giving them complete control over both the backend and the frontend. That’s a pretty scary scenario which I’m not sure can be “fixed”. Maybe something can be done on the ActivityPub side, I don’t know.


This might work against very generic bots, but it won’t work against specialized bots. Those wouldn’t even need to parse the DOM; they’d just recreate the HTTP requests directly.
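To illustrate the point: a specialized bot never renders the page or runs any client-side challenge, it just reproduces the request the site’s own frontend would send. The endpoint path and payload below are made up for illustration (loosely modeled on a Lemmy-style JSON API), not a real attack script:

```python
import json
import urllib.request

def build_vote_request(base_url: str, post_id: int, score: int, token: str):
    """Build the same upvote request a browser frontend would send.

    No HTML is fetched and no DOM is parsed -- which is exactly why
    DOM- or JavaScript-based bot checks don't stop this class of bot.
    Endpoint and field names here are illustrative, not a real API.
    """
    payload = {"post_id": post_id, "score": score, "auth": token}
    req = urllib.request.Request(
        base_url + "/api/v3/post/like",  # hypothetical endpoint
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    return req, payload

# Construct (but don't send) the request:
req, payload = build_vote_request("https://example-instance.test", 12345, 1, "fake-token")
```

Defenses against this have to live server-side (rate limits, account reputation, request signing), since the client simply isn’t running your code.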


I can already hear the CPA/affiliate marketing bots spinning up lol.


I can’t think of a better way to put more gasoline on the fire. If it happens I hope the users revolt and completely shit up any sub where they pull this stunt. Let’s see how long those new mods last then, and how many advertisers they lose.


Man, that whole situation really sucks. Reddit was by far my most visited site before they decided to light the house on fire. On mobile I always used Boost, because the official app is terrible and (at least the last time I looked at it) would drain my battery like it was nothing, even when the app was closed. RIP. At least we’ve got Lemmy. I just wish these third-party apps would take their users to the fediverse instead of shutting down entirely. As a developer, it really sucks when you have to shut down a project you’ve put so much work into.


But then they’d have to break up with their AI girlfriends/boyfriends 🤔.


I wish I was joking.


Yeah, that’s a good point. I have no idea how you’d go about solving that problem. Right now you can still sometimes sort of tell when something was AI generated. But if we extrapolate the past few years of advances in LLMs, say, 10 years into the future, there will be no telling what’s AI and what’s not. Where does that leave sites like StackOverflow, or indeed many other types of sites?

This also makes me wonder how these models are going to be trained in the future. What happens when, for example, half of the training data is the output of previous models? How do you possibly steer/align future models and prevent compounding errors and bias? Strange times ahead.


It seems to me like StackOverflow is really shooting themselves in the foot by allowing AI-generated answers. Even if we assume that all AI-generated answers are “correct”, doesn’t that completely defeat the purpose of the site? Like, if I were seeking an answer to some Python-related problem, why wouldn’t I just go straight to ChatGPT or a similar language model instead? That way I also don’t have to deal with some of the other issues that plague StackOverflow, such as “this question is a duplicate of <insert unrelated question> - closed!”.


Wish I had the time and resources. I’m in the middle of developing a 🤪 site (not in Reddit/forum format) so I already have my plate full haha.



Been on Reddit for over 10 years, and this move finally made me go look for alternatives that don’t hate their own users. Reddit was already going downhill fast over the last couple of years, and this move was the last straw for me. If they want to Digg their own hole, so be it. The only thing I haven’t found so far is an alternative to the NSFW side of Reddit.