Judge dismisses authors' copyright lawsuit against Meta over AI training
-
Agree 100%
Hope we can refactor this whole copyright/patent concept soon.
It's more of a pain for artists, creators, releasers, etc.
I see it in EDM: I run a label, and I sometimes produce a bit myself.
Most artists work with samples, presets, etc., and keeping track of who worked on what and who owns what percentage of what just takes the joy out of creating.
Same for game design: you have a vision for your game, make a PoC, and then have to change the whole game because of stupid patent stuff that doesn't allow you to, e.g., land on a horse and immediately ride it, or throw stuff at creatures to catch them…
I'm inclined to agree. I hate AI, and I especially hate artists and other creatives being shafted, but I'm increasingly doubtful that copyright is an effective way to ensure that they get their fair share (whether we're talking about AI or otherwise).
-
Bad judgement.
Any reason to say that other than that it didn't give the result you wanted?
-
I'm inclined to agree. I hate AI, and I especially hate artists and other creatives being shafted, but I'm increasingly doubtful that copyright is an effective way to ensure that they get their fair share (whether we're talking about AI or otherwise).
In an ideal world, there would be something like a universal basic income. That would reduce the pressure on artists to generate enough income with their art, letting them make art that is less mainstream and more unique, and thus, in my opinion, would allow us to weaken copyright laws.
Well, that's how I would try to start the change.
-
Ah the Schrödinger's LLM - always hallucinating and also always accurate
"hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information"
Is an output a hallucination when the training data involved in that output included factually incorrect data? Suppose my input is "is the world flat" and an LLM then, allegedly, accurately reproduces a flat-earther's writings saying it is.
-
Your last paragraph would be the ideal solution in an ideal world, but I don't think anything like this could ever happen within the current political and economic structures.
First, it's super easy to hide all of this, and enforcement would be very difficult even domestically. Second, because we're in an AI race, no one would ever put themselves at such a disadvantage unless there were real damage, not just economic copyright juggling.
People need to come to terms with these facts so we can address real problems rather than blow against the wind with all this whining we see on Lemmy. There are actual things we can do.
One way I could see this being enforced is by mandating that AI models not respond to questions that could result in speaking about a copyrighted work, similar to how mainstream models won't discuss vulgar or controversial topics.
But yeah, realistically, it's unlikely that any judge would rule in that favour.
-
Judge dismisses authors' copyright lawsuit against Meta over AI training
A federal judge on Wednesday sided with Facebook parent Meta Platforms in dismissing a copyright infringement lawsuit from a group of authors who accused the company of stealing their works to train its artificial intelligence technology.
AP News (apnews.com)
It sounds like the precedent has been set
-
Grab em by the intellectual property! When you're a multi-billion dollar corporation, they just let you do it!
-
I’ll leave this here from another post on this topic…
-
Ah the Schrödinger's LLM - always hallucinating and also always accurate
Accuracy and hallucination are two ends of a spectrum.
If you turn hallucinations to a minimum, the LLM will faithfully reproduce what's in the training set, but the result will not fit the query very well.
The other option is to turn the so-called temperature up, which will result in replies fitting better to the query but also the hallucinations go up.
In the end it's a balance between getting responses that are closer to the dataset (factual) or closer to the query (creative).
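The temperature knob described above can be sketched in a few lines. This is a minimal illustration, not how any production LLM is implemented; the function name and logits are made up for the example.

```python
import math
import random

def sample_next_token(logits, temperature=1.0):
    """Pick a token index from raw model scores (logits).

    Lower temperature sharpens the distribution toward the most likely
    (training-set-like) token; higher temperature flattens it, making
    unlikely tokens more probable.
    """
    if temperature <= 0:
        # Greedy decoding: always the single most likely token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]
```

With `temperature=0` the same input always yields the same token; as temperature grows, the choice approaches a uniform coin flip across the vocabulary.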
-
Ah the Schrödinger's LLM - always hallucinating and also always accurate
There is nothing intelligent about "AI" as we call it. It parrots based on probability. If you remove the randomness value from the model, it parrots the same thing every time based on its weights, and if the weights were trained on Harry Potter, it will consistently give you giant chunks of Harry Potter verbatim when prompted.
Most of the LLM services attempt to avoid this by adding arbitrary randomness values to churn the soup. But this is also inherently part of the cause of hallucinations, as the model cannot preserve a single correct response as always the right way to respond to a certain query.
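The "same thing every time" behavior can be shown with a toy next-token table. This is nothing like a real LLM; the weight table and names are invented purely to illustrate that, with randomness removed, the output is fully determined by the weights.

```python
# Hypothetical "weights": for each word, counts of observed next words.
WEIGHTS = {
    "the": {"boy": 5, "wizard": 9},
    "boy": {"who": 7},
    "wizard": {"school": 3, "who": 6},
    "who": {"lived": 8},
}

def generate_greedy(prompt, steps=3):
    """Greedy decoding: no sampling, so generation is deterministic."""
    out = [prompt]
    for _ in range(steps):
        candidates = WEIGHTS.get(out[-1])
        if not candidates:
            break
        # Always take the highest-weight continuation.
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)
```

Every call with the same prompt reproduces the same string; only injecting randomness into the choice (as sampling-based services do) varies the output.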
LLMs are insanely "dumb", they're just lightspeed parrots. The fact that Meta and these other giant tech companies claim it's not theft because they sprinkle in some randomness is just obscuring the reality and the fact that their models are derivative of the work of organizations like the BBC and Wikipedia, while also dependent on the works of tens of thousands of authors to develop their corpus of language.
In short, there was an ethical way to train these models. But that would have been slower. And the court just basically gave them a pass on theft. Facebook would have been entirely in the clear had it not stored the books in a dataset, which in itself is insane.
I wish I knew when I was younger that stealing is wrong, unless you steal at scale. Then it's just clever business.
-
There is nothing intelligent about "AI" as we call it. It parrots based on probability. If you remove the randomness value from the model, it parrots the same thing every time based on its weights, and if the weights were trained on Harry Potter, it will consistently give you giant chunks of Harry Potter verbatim when prompted.
Most of the LLM services attempt to avoid this by adding arbitrary randomness values to churn the soup. But this is also inherently part of the cause of hallucinations, as the model cannot preserve a single correct response as always the right way to respond to a certain query.
LLMs are insanely "dumb", they're just lightspeed parrots. The fact that Meta and these other giant tech companies claim it's not theft because they sprinkle in some randomness is just obscuring the reality and the fact that their models are derivative of the work of organizations like the BBC and Wikipedia, while also dependent on the works of tens of thousands of authors to develop their corpus of language.
In short, there was an ethical way to train these models. But that would have been slower. And the court just basically gave them a pass on theft. Facebook would have been entirely in the clear had it not stored the books in a dataset, which in itself is insane.
I wish I knew when I was younger that stealing is wrong, unless you steal at scale. Then it's just clever business.
Except that breaking copyright is not stealing and never was. Hard to believe that you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".
-
Except that breaking copyright is not stealing and never was. Hard to believe that you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".
What name do you have for the activity of making money from someone else's work or data, without their consent and without compensation? If the tech were just tech, it wouldn't need any non-consenting human input to work properly. These are just companies feeding on various types of data; if the justice system doesn't protect an author, what do you think would happen if these same models started feeding off user data instead? Tech is good; the ethics are not.
-
What name do you have for the activity of making money from someone else's work or data, without their consent and without compensation? If the tech were just tech, it wouldn't need any non-consenting human input to work properly. These are just companies feeding on various types of data; if the justice system doesn't protect an author, what do you think would happen if these same models started feeding off user data instead? Tech is good; the ethics are not.
How do you think you're making money with your work? Did your knowledge appear out of a vacuum? Ethically speaking, nothing is an "original creation of your own merit only": everything we make is transformative by nature.
Either way, the talk is moot, as we'll never agree on what is transformative enough to be harmful to our society unless it's a direct 1:1 copy with the direct goal of displacing the original. But that's clearly not the case with LLMs.
-
In an ideal world, there would be something like a universal basic income. That would reduce the pressure on artists to generate enough income with their art, letting them make art that is less mainstream and more unique, and thus, in my opinion, would allow us to weaken copyright laws.
Well, that's how I would try to start the change.
I would go a step further and have creative grants to people. It would work in a way similar to the BBC and similar broadcasters, where a body gets government money and then picks creative projects it thinks are worthwhile, with a remit that goes beyond the lowest common denominator. UBI ensures that this system doesn't have a monopoly on creative output.
-
I would go a step further and have creative grants to people. It would work in a way similar to the BBC and similar broadcasters, where a body gets government money and then picks creative projects it thinks are worthwhile, with a remit that goes beyond the lowest common denominator. UBI ensures that this system doesn't have a monopoly on creative output.
Agree 100%!
We need more Kulturförderung (cultural funding)!
-
Except that breaking copyright is not stealing and never was. Hard to believe that you'd ever see Copyright advocates on foss and decentralized networks like Lemmy - its like people had their minds hijacked because "big tech is bad".
Ingesting all the artwork you ever created by obtaining it illegally and feeding it into my plagiarism remix machine is theft of your work, because I did not pay for it.
Separately, keeping a copy of this work so I can do this repeatedly is also stealing your work.
The judge ruled that the first was okay but the second was not, because the first is "transformative". Sadly, that tells me the judge, despite best efforts, does not understand how a weighted matrix of tokens works. While models may have some prevention steps in place now, early models showed the tech for what it was as they regurgitated text with only minor differences in word choice here and there.
Current models have layers on top that try to block this kind of user input, but escaping those safeguards is common, and it only masks the fact that the entire model is built off the theft of others' work.
-