A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
No, it doesn’t. The model doesn’t contain any copyright-significant amount of the original training data; it physically can’t, because the model isn’t large enough. The model only contains concepts that it learned from the training data - ideas, patterns - but not literal snippets of the data.
The only time you can dredge a significant snippet of training data back out is when a particular piece of it was present hundreds or thousands of times in the training set - a condition called “overfitting” that is considered a flaw and that AI trainers work hard to prevent by de-duplicating the data before training. Nobody wants overfitting; using generative AI as a hugely inefficient replacement for the “copy and paste” function defeats its whole point. It’s very hard to find any actual examples of overfitting in modern models.
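For the curious, here’s a minimal sketch of what that de-duplication pass can look like - exact-match hashing only, with made-up names for illustration; real pipelines typically layer fuzzy near-duplicate detection (MinHash/LSH and the like) on top of this:

```python
import hashlib

def deduplicate(documents):
    """Drop exact duplicates from a training corpus by content hash.

    Illustrative sketch only: real pipelines also catch near-duplicates,
    but even exact-match hashing removes the kind of massive repetition
    that drives overfitting.
    """
    seen = set()
    unique = []
    for doc in documents:
        # Normalize whitespace so trivially reformatted copies collide.
        key = hashlib.sha256(" ".join(doc.split()).encode("utf-8")).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

corpus = ["Same article.", "Same   article.", "Different article."]
print(deduplicate(corpus))  # ['Same article.', 'Different article.']
```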
And that’s all that you need to make this copyright-kosher.
Think of it this way. Draw a picture of an apple. When you’re done drawing it, think to yourself - which apple did I just draw? You’ve probably seen thousands of apples in your life, but you didn’t draw any specific one, or piece together the picture from various specific bits of apple images you memorized. Instead you learned what the concept of an apple is like from all those examples, and drew a new thing that represents that concept of “appleness.” It’s the same way with these AIs: they don’t have a repository of training data that they copy from whenever they’re generating new text.
I’m aware the model doesn’t literally contain the training data, but for many models and applications, the training data is by nature small enough, and the application restrictive enough, that it is trivial to get snippets of near-verbatim training data back out.
One of the primary models I work on involves code generation, and in those applications we’ve actually observed the model outputting verbatim code from the training data, even when it has been trained on a fair amount of data. This has spurred concerns about license violations for the open source code it was trained on.
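Detecting that kind of regurgitation is conceptually simple; here’s a rough sketch of one way to check output for verbatim overlap with a corpus (hypothetical names, not our actual tooling - a real system would index the corpus rather than brute-force scan it):

```python
def has_verbatim_overlap(output, training_docs, min_tokens=20):
    """Crude verbatim-copy detector: does `output` share a run of at
    least `min_tokens` consecutive whitespace-delimited tokens with
    any training document?

    O(n*m) scan for illustration; a real system would build a suffix
    array or n-gram index over the corpus instead.
    """
    tokens = output.split()
    if len(tokens) < min_tokens:
        return False
    # Every min_tokens-length window of the model's output.
    windows = {
        " ".join(tokens[i:i + min_tokens])
        for i in range(len(tokens) - min_tokens + 1)
    }
    # Normalize corpus whitespace the same way before substring search.
    normalized = (" ".join(doc.split()) for doc in training_docs)
    return any(w in doc for doc in normalized for w in windows)
```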
There’s also the concept of less verbatim, but more “copied,” style. Sure, making a movie in the style of Wes Anderson is legitimate artistic expression, but what about a graphic designer making a logo in the “style of McDonald’s”? The law is intentionally pretty murky in this department, with even some colors being trademarked for certain categories in the United States. There’s not a clear line here, and LLMs are well positioned to challenge what we already have on the books. IMO this is not an AI problem; it’s a legal one that AI just happens to exacerbate.
You’re conflating a bunch of different areas here. Trademark is an entirely different category of IP. As you say, “style” cannot be copyrighted. And the sorts of models being trained on chatter from social media are quite different from code-generation models.
Sure, there are going to be a bunch of lawsuits and new legislation coming down the pike to clarify this stuff. But it’s important to bear in mind that none of that has happened yet. Things are not illegal by default; you need a law or a precedent that makes them illegal. There’s none of that now, and no guarantee that things will pan out that way in the end.
People are acting incensed at AI trainers using public data to train AI, as if they’re doing something illegal. Maybe they want it to be illegal, but it isn’t yet and may never be. Until that happens, people should keep in mind that they have to debate, not dictate.
The law is, in an ideal world, the reflection of our collective morality. It is supposed to dictate what is “right” and “wrong.” That said, I see too many folks believing that it works the other way too: that what is illegal must be wrong, and what is legal must be OK. This is decisively not the case.
In AI terms, I do believe some of the things that LLMs and the companies behind them are doing now may turn out to be illegal under certain interpretations of the law. But further, I think a lot of the things companies are doing to train these models are seen as immoral (by many people, me included), and that the law should be changed to reflect that.
Sure, that may mean that “stuff these companies are doing now is legal,” but that doesn’t mean we don’t have the right to be upset about it. Tons of stuff large corporations have done was fully legal until public outcry forced the government to legislate against it. The first step in many laws being passed is the public demonstrating a vested interest in them. I believe the same is happening here.
The problem I have with this is that the argument seems to boil down to “I don’t like this, so it should be illegal.” It puts me in mind of the classic joke where a lawyer objects on the grounds that the evidence is devastating to his case. Laws should have a rationale beyond simply being what “collective morality” decides; otherwise all sorts of religious prohibitions and moral scares end up embedded in the legal system too.
Generally speaking, laws are based on the much simpler and more generic foundation of rights. Laws exist to protect rights, and get complicated because those rights can end up conflicting with each other. So what rights do the two “sides” of this conflict bring to the table? On the pro-AI side people are arguing that they have the right to learn concepts and styles from publicly available data, to analyze that data and record that analysis, and to make use of the products of that analysis. It all seems quite reasonable and foundational to me. On the anti-AI side - arguments based on complete misunderstandings of how the technology works aside - I generally see “because it’s devastating to my future career, your honor.”
Anti-AI artists are simply being selfish, IMO, demanding that society continue to provide them with their current niche of employment and “specialness” by restricting other people’s rights through new legal restrictions. Sure, if you can convince enough people to go along with that idea, those laws will be passed. That doesn’t make them right. There have been many laws over the years that were both popular and wrong on many levels.
Fortunately there are many different jurisdictions in the world; there isn’t just one “The Law.” So even if some places do end up banning AI, I don’t think that’s going to slow it down much on a global scale - it’ll just help determine which places get a lead and which fall behind in developing this new technology. There’s too much benefit for everyone to forgo it everywhere.
I’m out and about today, so apologies if my responses don’t contain the level of detail I’d like. As for the law being collective morality: all sorts of religious prohibitions and moral scares HAVE ended up in the law. The idea is that the “collective” is large enough to dispel any niche restrictive beliefs. Whether you agree with that strategy or not, that is how I believe the current system is meant to work in an ideal sense (even if it works differently in practice); that’s what it is designed to protect, from my perspective.
As for anti-AI artists, let me pose a situation to illustrate my perspective. As a prerequisite: a large part of a lawsuit, and of the ability to advocate for a law, is based on standing - the idea that you personally, or a group you represent, has been directly and tangibly harmed by the thing you are trying to restrict. Here is the situation:
I am a furry, and a LARGE part of the fandom is based on art and artists. A core furry experience is getting art of your character commissioned from other artists. It’s commonplace for these artists to have a very specific, identifiable signature style, so much so that it is trivial for me and other furs to identify artists by their work alone at just a glance. Many of these artists have shifted to making their living full time off of creating art. With the advent of some new generative models, it is now possible to train a model exclusively on one artist’s style and generate art indistinguishable from the real thing without ever contacting them. This puts their livelihood directly at risk, and also muddies the waters in terms of subject matter and what they support. Without laws regulating training, this could take away their livelihood, or even give a (very convincing, and hard to disprove) impression that they support things they don’t, like making art involving political parties or illegal activities - which I have seen happen already. This almost approaches defamation, in my opinion.
One argument you could make is that this is similar to the invention of photography, which may have directly threatened the work of painters. And while there are some comparisons you could draw from that situation, photography didn’t fundamentally replace their work verbatim; it merely provided an alternative that filled a similar role. This situation is distinct because in many cases it’s not possible, or at least not immediately apparent, to tell which pieces are authentic and which are not. That is a VERY large problem the law needs to solve as soon as possible.
Further, I believe the same or similar problems exist with LLMs as in the situation involving generative image models above. Sure, with enough training those issues are lessened in impact, but where is the line between what is OK and what isn’t? Ultimately the models themselves don’t contain any copyrighted content, but they (by design) combine related ideas and patterns found in the training data in a way that will always approximate it, depending on the depth of the training data. While overfitting might be considered a negative in the industry, it’s still a possibility, and until there is some sort of regulation establishing the fitness of commercially available LLMs, I can envision situations in which management would cut training short once it’s “good enough,” leaving overfitting issues in place.
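One way to probe for that kind of leftover memorization is a prefix-completion test: prompt the model with the first part of a known training sample and see whether it reproduces the rest verbatim. A minimal sketch, where `generate()` stands in for whatever inference API the model exposes (hypothetical):

```python
def memorization_rate(generate, training_samples, prefix_frac=0.5, check_chars=100):
    """Estimate how often a model regurgitates its training text.

    `generate(prompt)` is a stand-in for the model's inference call.
    For each sample, prompt with the first `prefix_frac` of the text
    and count a hit if the completion begins with the true continuation
    (compared on its first `check_chars` characters).
    """
    hits = 0
    for text in training_samples:
        split = int(len(text) * prefix_frac)
        prefix, continuation = text[:split], text[split:]
        completion = generate(prefix)
        if completion.strip().startswith(continuation.strip()[:check_chars]):
            hits += 1
    return hits / len(training_samples)
```

A low rate here doesn’t prove the model is clean, but a high one is exactly the “cut training short once it’s good enough” failure mode I’m worried about.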
Lastly, with respect, I’d like to push back on both the notion that I’d like to ban AI or LLMs, and the notion that I’m not educated enough on the subject to adequately debate regulations on it. Both are untrue. I’m very much in favor of developing the technology and exploring all its applications. It’s revolutionary, and worthy of the research attention it’s getting. I work on a variety of models across the AI and LLM space professionally, and I’ve seen how versatile the technology is.

That said, I have also seen how over-publicized it is. We’re clearly (from my perspective) in a bubble that will eventually pop. Products across nearly every industry claim to use AI to do this and that, and while LLMs in particular are amazing and can be used in a ton of applications, it’s certainly not all of them - and I’m particularly cautious of putting new models in charge of dangerous or risky processes before we develop adequate metrics, regulation, and guardrails. To summarize my position: I’m very excited to work towards developing these models further, but I want to publicly express that they’re not a silver bullet, and we need to develop legal frameworks for protecting people now, rather than later.
I’m rather confused by this. My point is that having the collective’s religious prohibitions and moral scares imposed upon the minority is a bad thing, and that it’s a flaw in “majority rule” that a rights-based legal system is supposed to attempt to counter. It doesn’t always work but that’s the idea. So simply having a large number of people pull out pitchforks and demand that the rights of AI trainers be restricted should not automatically result in that actually happening.
With regard to your scenario about furry art: You’re simply describing a specific example of the general scenario I already talked about. You’re saying that furry artists should have a right to copyright their “style”, which is emphatically not the case. Style cannot be copyrighted (and as a furry-adjacent who’s seen plenty of furry art over the years, I would also very much disagree that every furry artist has a unique style. They copy off each other all the time). You’re also saying that furry artists should have a right to their livelihood, which is also not the case. Civilization changes over time, new technologies and new social movements come along and result in jobs coming and going. Nobody has the right to make a living at some particular career.
You say “A core furry experience is getting art commissioned of your character from other artists.” Well, maybe that was a core furry experience. But the times they are a-changing. My avatar image here on the Fediverse was generated in large part with AI art generators, and I got a much better result and a much more accurate reflection of what I was going for than I would have gotten via a commission - and I got it for free. That sucks for the artists, but it’s great for everyone else.
Does AI art actually replace an artist’s work verbatim? When I made my avatar image I still did a lot of intermediate fiddling in GIMP. AI is just part of my workflow, and an artist could also make use of it. Or they could continue making art the old-fashioned way if they want; the mere existence of AI art generators doesn’t affect that ability one whit. All it does is change the market, possibly making it so that they can no longer make a living at their old job.
There are still plenty of painters. But when photography came along there were probably a lot of portrait painters who were put out of work. Over the years I’ve had several family photographs taken in photography studios, but I’ve never even considered commissioning a painter to paint a portrait of myself.
And that’s that for basically all the anti-AI legal arguments.
And there’s absolutely nothing wrong with this. People do it all the time; why is it suddenly a huge moral problem when a machine does it? Should it be illegal for someone to go to a furry artist and ask for something “in the style of Dark Natasha,” or for an artist to pick up some of his personal style from Jay Naylor’s work?
I actually agree, but the people I think are most in need of protection are the people who train and use AI models. There are tons of news stories and personal experiences being posted these days about these people being persecuted in various ways - deplatformed, lied about, and so forth. They’re the ones whose rights people are proposing should be restricted.