For instance, say I search for “The Dark Knight” on my Usenet indexer. It returns a list of uploads and where to get them via my Usenet provider. I can then download the pieces, stitch them together, and verify that the result is, indeed, The Dark Knight. All of this costs me only a few dollars a month.
My question is, why can’t copyright holders do this as well? They could follow the same process, then send takedown requests for each individual article that makes up the movie. We already know they try to catch people torrenting, so why don’t they do this too?
I can think of a few reasons, but they all seem pretty shaky.
Maybe the content is hosted somewhere takedown requests can’t reach? It seems unlikely to me that literally all of it is hosted in places like that. Plus, the providers wouldn’t be able to operate in countries like the US at all without facing legal repercussions.
Maybe it would cost them too much? This also seems fishy. It’s cheap enough for me as an individual to do this, and if Usenet weren’t an option, I’d have to pay for 3+ streaming services to watch everything I currently do. They’d practically break even on this scheme even if the only person it cut off were me.
Maybe they already do it, just rarely? The whole point of doing this would be to make Usenet a non-viable option for piracy. If it happens so rarely that I don’t care about it, then what’s the point of doing it at all?
As far as I know, they do get DMCA’d, but the provider deletes only a single file, leaving the upload incomplete. If you have two different newsgroup providers, though, they usually haven’t deleted the same file, so you can still download the whole thing.
But I could be totally wrong, because I haven’t really looked into this and it’s all from a very old memory.
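If that’s right, the per-segment fallback is easy to picture. A toy sketch of the idea (the message-IDs and segment data are made up, and the dicts stand in for real NNTP fetches from each provider):

```python
# Toy model of the two-provider trick described above: takedowns rarely
# remove the same articles from both providers, so a per-segment
# fallback can still complete the download. (Simulated data, not a
# real NNTP client: provider A lost segment 2, provider B lost segment 1.)

PROVIDER_A = {"seg1@example": b"part1", "seg3@example": b"part3"}
PROVIDER_B = {"seg2@example": b"part2", "seg3@example": b"part3"}

def fetch_with_fallback(message_id: str) -> bytes:
    """Try each provider in turn, returning the first surviving copy."""
    for store in (PROVIDER_A, PROVIDER_B):
        if message_id in store:
            return store[message_id]
    raise LookupError(f"{message_id} removed from every provider")

# Neither provider alone has all three segments, but together they do.
file_bytes = b"".join(
    fetch_with_fallback(mid)
    for mid in ("seg1@example", "seg2@example", "seg3@example")
)
assert file_bytes == b"part1part2part3"
```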
That makes some amount of sense. I’m not sure exactly how each article is stitched together to create the full file, though. Do you happen to know if they’re just put together sequentially, or if there’s XORing or some more complex algorithm going on? If it’s only the former, the providers would still be hosting copyrighted content, just a bit less of it.
EDIT:
https://sabnzbd.org/wiki/extra/nzb-spec
This implies that the articles are just individually decoded and stitched together in sequence.
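If I’m reading the spec right, an NZB is just XML listing each file’s segments by message-ID and segment number, so the “stitching” is plain concatenation after yEnc-decoding each article. A quick sketch (the inline NZB is a made-up example, not real data):

```python
# Minimal sketch of what the NZB spec above describes: per file, a list
# of Usenet message-IDs plus an ordering "number". A downloader fetches
# each segment, yEnc-decodes it, and appends the pieces in order --
# no XOR or mixing involved.

import xml.etree.ElementTree as ET

NZB = """<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="poster@example" date="0" subject="movie.part1.rar (1/3)">
    <groups><group>alt.binaries.example</group></groups>
    <segments>
      <segment bytes="500000" number="2">seg2@example</segment>
      <segment bytes="500000" number="1">seg1@example</segment>
      <segment bytes="250000" number="3">seg3@example</segment>
    </segments>
  </file>
</nzb>"""

NS = {"nzb": "http://www.newzbin.com/DTD/2003/nzb"}
root = ET.fromstring(NZB)

for f in root.findall("nzb:file", NS):
    segments = f.findall("nzb:segments/nzb:segment", NS)
    # Order purely by the "number" attribute -- sequential stitching.
    ordered = sorted(segments, key=lambda s: int(s.get("number")))
    print(f.get("subject"))
    for seg in ordered:
        print(f"  segment {seg.get('number')}: fetch <{seg.text}>, yEnc-decode, append")
```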
The file is compressed into a set of small archives, and then each one is posted.
Usually PAR files are included so you can regenerate a few missing archives. https://en.m.wikipedia.org/wiki/Parchive
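Real PAR2 uses Reed-Solomon coding, so N recovery blocks can repair any N missing blocks; a single XOR parity block is the simplest special case (one recovery block repairs any one missing block) and shows the idea. Toy example with made-up data, not actual PAR2:

```python
# Toy illustration of the recovery idea behind PAR files: the XOR
# parity of all parts can rebuild any single missing part. (Real PAR2
# generalizes this with Reed-Solomon over many recovery blocks.)

from functools import reduce

def xor_parity(blocks: list[bytes]) -> bytes:
    """XOR equal-length blocks together byte by byte."""
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

parts = [b"RAR1", b"RAR2", b"RAR3"]  # the split archives
parity = xor_parity(parts)           # posted alongside, like a recovery block

# Suppose a takedown removed part 2: XOR the survivors with the parity.
recovered = xor_parity([parts[0], parts[2], parity])
assert recovered == b"RAR2"
```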
Copyright is a legal construct, not a technological one. Shuffling the file contents around doesn’t make the slightest bit of legal difference, as long as the intent is to reconstruct it back into the copyrighted work.
(Conversely, if the intent was to, say, print out the file in hexadecimal and wallpaper your house with it, that wouldn’t be copyright infringement even if you didn’t rearrange it at all because the use was transformative. Unless the file in question was a JPEG of hex-digit wallpaper, of course.)