• 0 Posts
  • 18 Comments
Joined 1Y ago
Cake day: Jun 08, 2023


This toggle allows you to opt out of having profiling used for future decisions that produce legal or similarly significant effects about you.

The what now?

This sounds strangely ominous.


And it hasn’t been tested because researching the “best” method for executing humans is abhorrent and the scientific and medical communities have ethical standards.

But the State of Alabama doesn’t, a feature it shares with other regimes responsible for the worst atrocities in history.


Unless they’re really into arts and crafts, there’s no good reason for a home user to buy an inkjet anymore.

If every once in a while they want a nice photo print or to print up some flyers in color or something, it’s cheaper and less overall hassle to just pay per page at a drug store or office store on those occasions.


You might have a point there. Maybe… just maybe… this Musk fellow is completely full of shit about most things he says.


Yes, this is totally a symbolic move and nothing has meaningfully changed at Unity. Riccitiello is probably walking away with many millions of dollars and the rest of the leadership team who were fully onboard with the new licensing plan are still there. Once the negative press dies down, Unity will try something equally shitty again.

Developers would be foolish to trust this company ever again.


An increasing number of restaurants are pulling exactly this sort of bullshit: little 3.5% fees at the bottom of the check, disclosed only in fine print on the menu (if at all) and attributed to COVID, paying their staff, credit card processing, and so on. It needs to end. Pricing should be upfront so customers can compare what they're actually paying, not snuck in at the end.


I’m seeing it more and more. Little “processing fees” here and there, some tied to COVID, some tied to credit cards. There needs to be a clap-back against this behavior.


My bet: it's going to be decided on a case-by-case basis.

Almost certainly. Getty Images has several exhibits in its suit against Stability AI showing the Getty watermark popping up in Stable Diffusion's output, as well as several images that are substantially the same as their sources. Other generative models don't produce anything all that similar to the source material, so we're probably going to wind up with lots of completely different and likely contradictory rulings on the matter before this gets anywhere near being sorted out legally.

Copyright laws are not necessarily wrong; just remove the “until author’s death plus 70 years” coverage, go back to a more reasonable “4 years since publication”, and they make much more sense.

The trouble with that line of thinking is that the laws are under no obligation to make sense. And the people who write and litigate those laws benefit from making them as complicated and irrational as they can get away with.


Not a single original sentence of the original work is retained in the model.

Which is why I find it interesting that none of the court cases (as far as I’m aware) are challenging whether an LLM is copying anything in the first place. Granted, that’s the plaintiff’s job to prove, but there’s no need to raise a fair use defense at all if no copying occurred.


Clearly transformative only applies to the work a human has put into the process. It isn’t at all clear that an LLM would pass muster for a fair use defense, but there are court cases in progress that may try to answer that question. Ultimately, I think what it’s going to come down to is whether the training process itself and the human effort involved in training the model on copyrighted data is considered transformative enough to be fair use, or doesn’t constitute copying at all. As far as I know, none of the big cases are trying the “not a copy” defense, so we’ll have to see how this all plays out.

In any event, copyright laws are horrifically behind the times and it’s going to take new legislation sooner or later.


This thread is about ChatGPT, an LLM. It is not a general purpose AI.


So if someone builds an atom-perfect artificial brain from scratch, sticks it in a body, and shows it around the world, should we expect the creator to pay licensing fees to the owners of everything it looks at?

That’s unrelated to an LLM. An LLM is not a synthetic human brain; it’s a computer program that uses statistical patterns derived from large amounts of training data to generate outputs from prompts.

If we get real general-purpose AI some day in the future, then we’ll need to answer those sorts of questions. But that’s not what we have today.
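The "statistical patterns, not a stored brain" point can be made concrete with a toy sketch (a deliberately tiny bigram model, nothing like a real LLM's transformer architecture): the model keeps only word-pair counts from its training text and samples continuations from those counts, so no sentence is stored as a unit.

```python
import random
from collections import defaultdict

# Hypothetical toy training text; real models train on vastly more data.
training_text = "the cat sat on the mat and the dog sat on the rug"

# Count how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def generate(prompt_word, length=6, seed=0):
    """Sample a continuation word-by-word from the learned statistics."""
    rng = random.Random(seed)
    out = [prompt_word]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no observed continuation for this word
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Every adjacent word pair in the output appeared somewhere in the training text, but the generated sequence as a whole need not match any training sentence, which is the crux of the copying question the comment raises.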


And as a public company, Microsoft has a lot more options to leverage their equity than a private company or individual does.


We are tribal animals and will do this with just about everything. See also: politics, religion, ancestry, food, computer platforms, smartphone platforms, clothing brands, astrological signs, science fiction franchises, choice of pet, which way the toilet paper goes, etc.


It just goes to show that you can bullshit, bully & fire as many engineers as you like but you can’t bullshit physics. There’s no talking your way out of being crushed at depth.


I completely agree. I just don’t see how there can be any realistic expectation of privacy when publishing something publicly.

I appreciate the idea of laws establishing a right to be forgotten and I think there’s still some value in being able to take your data away from certain companies, but there’s no guarantee it wasn’t copied many times before the original location is taken down.

The Fediverse works like email. Once somebody hits send, there’s no real way to claw that back.


I tried nuking mine and they restored everything. At least I have the power not to give them anything more.


Nah. If Lemmy/Fediverse doesn’t work out, there will be others. This has all happened before…