Generative artificial intelligence (GenAI) company Anthropic has claimed to a US court that using copyrighted content in large language model (LLM) training data counts as “fair use”.
Under US law, “fair use” permits the limited use of copyrighted material without permission, for purposes such as criticism, news reporting, teaching, and research.
In October 2023, a host of music publishers including Concord, Universal Music Group and ABKCO initiated legal action against the Amazon- and Google-backed generative AI firm Anthropic, demanding potentially millions in damages for the allegedly “systematic and widespread infringement of their copyrighted song lyrics”.
Then it shouldn’t exist.
This isn’t an issue of fair use. They’re stealing other people’s work and using it to create something new and then trying to profit from it, without any credit or recompense.
Now that it exists how do you propose we make it not exist?
Even if we outlaw it, Russia and China won’t, and without the tools to fight back against it, the web is basically done as anything but a propaganda platform.
Just like I do with literally all content I’ve ever consumed. Everything I’ve seen has been remashed in my brain into the competencies I charge money for.
It’s not until I profit off of someone else’s work — i.e., when the source of the profit is their work — that I’m breaking any rules.
This is a non-issue. We’ve let our (legitimate) fear of AI twist us into distorting truth and motivated reasoning. Instead of trying to paint AI as morally wrong, we should admit that we are afraid of it.
We’re trying to replace our fear with disgust and anger. It’s not healthy for us. AI is ultra fucking scary. And not because it’s going to take inspiration from a copyrighted song when it writes a different song. AI is ultra fucking scary because it will soon surpass any possibility of our understanding, and we will be at the whim of an alien intelligence.
But that’s too sci-fi-sounding to be something people have to look at. Because it sounds so out there, it’s easy to scoff at and dismiss. So instead of acknowledging our terror at the fact that this thing will likely end humanity as we know it, we’re sublimating that energy through righteous indignation. See, indignation is unpleasant, but it’s less threatening to the self than terror.
It’s understandable, like doing another line of coke is understandable. But it is not healthy, not productive, and will not play out the way we think. We need to stop letting our fear turn our minds to mush.
Reading someone else’s material before you write new material is not the same as copying someone else’s material and selling it as your own. The information on the internet has always been considered free for legal use. And the limit of legal use is based on the selling of others’ verbatim material.
This is a simple fact, easy to see. Except recognizing it nullifies the righteous indignation, opening the way for the terror and confusion to come in again.