I mean, personally I did block all the AI scrapers I could find on my instance, about a month ago. There were a lot, mostly unscrupulous ones, some big names included. I should probably look at the logs to see what’s new.
The amount of traffic was quite significant too. I have a theory that they expect legislation soon, so instead of playing nice and slow like normal crawlers do, they’re vacuuming everything up as fast as they can.
But you’re right. Everyone would need to do it, to make a difference.
How does that help? My personal instance currently has a database of several million posts thanks to the various Mastodon relays. I don’t need to scrape your instance to sell your posts. I don’t, of course, but it’d be easy for some company to create friendlycutekittens.social and just start collecting posts. Do you really have time to audit every instance you federate with?
But they aren’t. They’re not after ActivityPub specifically; they’re scraping the whole internet, and most of them use clear bot User Agents. So I routinely block their bots, because the AI ones usually hit you multiple times a second, non-stop. If they started running fake ActivityPub nodes they wouldn’t be scraping as a bot, and they would specifically want fediverse data. It’s important to note, though, that an ActivityPub node doesn’t “collect” data: it subscribes (to Mastodon users/hashtags, or to communities) and then gets new data delivered to it. So they wouldn’t get the old stuff.
Having said that, I’ve seen some obvious bots using genuine browser user agents on IP addresses from certain very large Chinese companies. For those I just blocked their whole AS number.
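For anyone curious, the User-Agent blocking mentioned above is basically just a string match; here’s a minimal sketch (the bot names are just commonly reported examples, not a complete list, and in a real setup you’d normally do this in the reverse proxy rather than in application code):

```python
# Rough sketch of a User-Agent block for known AI crawlers.
# The tokens below are commonly reported examples, not an exhaustive list.

BLOCKED_UA_TOKENS = [
    "GPTBot",         # OpenAI's crawler
    "CCBot",          # Common Crawl
    "ClaudeBot",      # Anthropic
    "Bytespider",     # ByteDance
    "PerplexityBot",
]

def is_blocked_user_agent(user_agent: str) -> bool:
    """Return True if the request's User-Agent matches a known crawler token."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in BLOCKED_UA_TOKENS)

# Example: reject the request before doing any real work.
if is_blocked_user_agent("Mozilla/5.0 (compatible; GPTBot/1.1; +https://openai.com/gptbot)"):
    print("403 Forbidden")  # respond with 403 and stop processing
```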
So most modern ActivityPub servers backfill threads and profiles. My single-user instance processes 30,000 notes a day. If I was actually trying, I’m sure it’d be easy to grab much more while appearing well behaved.
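To make the backfilling point concrete, here’s a rough sketch of what it can look like at the protocol level for Mastodon-style servers, assuming the fetched objects expose a `replies` collection (the URL is hypothetical, and the HTTP-signature handling many servers require is left out):

```python
# Minimal sketch of "backfilling" a thread: fetch an object as ActivityPub
# JSON, then walk its `replies` collection where one is exposed.
import json
import urllib.request

ACCEPT = "application/activity+json"

def fetch_ap(url: str) -> dict:
    """Fetch a URL as an ActivityPub/ActivityStreams object."""
    req = urllib.request.Request(
        url, headers={"Accept": ACCEPT, "User-Agent": "backfill-sketch/0.1"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def walk_replies(note_url: str, depth: int = 0, max_depth: int = 3) -> None:
    """Print a note's id and recurse into its replies collection, if present."""
    note = fetch_ap(note_url)
    print("  " * depth + str(note.get("id")))
    replies = note.get("replies")
    if not replies or depth >= max_depth:
        return
    collection = replies if isinstance(replies, dict) else fetch_ap(replies)
    page = collection.get("first")
    if isinstance(page, str):
        page = fetch_ap(page)
    # real collections may paginate further via "next"; omitted in this sketch
    for item in (page or {}).get("items", []):
        child_url = item if isinstance(item, str) else item.get("id")
        if child_url:
            walk_replies(child_url, depth + 1, max_depth)

# walk_replies("https://example.social/users/alice/statuses/123")  # hypothetical object
```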
It’s not how ActivityPub (at least for Lemmy/*bin servers) works. There isn’t, so far as I’ve ever seen, an API within ActivityPub that allows for this. (Specific to the Lemmy/*bin implementations, there is the API the browser/apps use, which must provide this, but that isn’t ActivityPub.) It actually looks to be cleverly designed to prevent it. It might look like backfilling is happening because old stuff appears, but there are reasons for that.
How it works, from my experience (I did some work on the federation in kbin a year or so ago): your instance subscribes to communities, and activities (posts, comments, votes) get pushed to it as they happen. When an incoming activity references an object your instance doesn’t have yet, such as a comment or vote on a post you never received, your instance fetches that missing parent object.
And so old posts and comments will begin to appear as activities linked to them happen. But there isn’t a method to ask for “all the posts in community X” using ActivityPub; I remember because I was specifically looking for this a year or so ago. It lets you see the parent object, but not any children.
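As a purely hypothetical illustration of that asymmetry, assuming a comment object shaped the way described here (an `inReplyTo` link pointing up to the parent, but no children listing to walk down):

```python
# Hypothetical shape of a fetched comment object, per the commenter's
# experience with Lemmy/kbin: `inReplyTo` points upward to the parent, but
# there is no children collection to walk downward. Other software may differ.
comment = {
    "id": "https://example.instance/comment/456",        # hypothetical URLs
    "type": "Note",
    "inReplyTo": "https://example.instance/post/123",    # parent: reachable
    "attributedTo": "https://example.instance/u/someone",
    "content": "<p>an old comment</p>",
    # note: no "replies" / children collection here
}

parent_url = comment.get("inReplyTo")   # you can fetch the parent...
children = comment.get("replies")       # ...but this is simply absent (None)
print(parent_url, children)
```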
Maybe Mastodon etc. does it differently? No idea.
And all of this is moot, because if I block a User Agent, or I block an AS number/IP block, they’re not getting anything, either by ActivityPub or by scraping, unless they change User Agent, AS number, or both.
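The IP side of that is simple enough too; a minimal sketch with made-up prefixes is below (in practice you’d pull the AS’s announced prefixes from routing data and block them at the firewall or reverse proxy, not in application code):

```python
# Minimal sketch of blocking by network prefix, which is what an AS-level
# block effectively amounts to. The prefixes are documentation/example
# ranges, not a real AS.
import ipaddress

BLOCKED_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),   # example prefix 1
    ipaddress.ip_network("203.0.113.0/24"),    # example prefix 2
]

def is_blocked_ip(client_ip: str) -> bool:
    """Return True if the client address falls inside a blocked prefix."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_PREFIXES)

print(is_blocked_ip("203.0.113.57"))  # True
print(is_blocked_ip("192.0.2.10"))    # False
```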
Not necessarily expecting any legislation; it might be the simple inequality of you having an instance, while they have a bunch of datacenters.
What’s 1 TB/s more or less, a rounding error?
Big names scrape the whole web all the time; best case scenario, they’ll have an optimized scraper for federated networks; worst case, they’ll scrape as they would any other website and not even notice the difference.
I don’t think they’re optimising much at all. I think it’s likely just a modified web crawler, but without the kind of throttling normal search engine crawlers use. They’re following links recursively, then probably doing some basic parsing, or even parsing with AI, to prepare the data for training another AI model.
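For illustration, a bare-bones version of that kind of recursive crawler looks something like the sketch below; the `delay` parameter is what a polite search-engine crawler sets generously and what the scrapers described above apparently skip (the starting URL is hypothetical):

```python
# Bare-bones recursive link-follower, to illustrate the difference between a
# throttled crawler and an unthrottled one. Purely a sketch: no robots.txt
# handling, error handling, or politeness beyond the single `delay` knob.
import re
import time
import urllib.request
from urllib.parse import urljoin

LINK_RE = re.compile(r'href="([^"#]+)"')

def crawl(start_url: str, delay: float = 1.0, max_pages: int = 50) -> None:
    """Follow links breadth-first, sleeping `delay` seconds between requests."""
    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        for href in LINK_RE.findall(html):
            queue.append(urljoin(url, href))
        time.sleep(delay)  # a polite crawler waits here; these scrapers reportedly don't

# crawl("https://example.social/", delay=1.0)  # hypothetical starting point
```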