Judge backs AI firm over use of copyrighted books
-
This post did not contain any content.
-
This post did not contain any content.
80% of the book market is owned by 5 publishing houses.
They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.
-
This post did not contain any content.
Previous discussion from yesterday about the same topic: https://lemmy.world/post/31923154
-
80% of the book market is owned by 5 publishing houses.
They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.
What, how is this a win? Three authors lost a lawsuit to an AI firm using their works.
-
This post did not contain any content.
I'm not pirating. I'm building my model.
-
80% of the book market is owned by 5 publishing houses.
They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.
How exactly does this benefit "us"?
-
80% of the book market is owned by 5 publishing houses.
They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.
Keep in mind this isn't about open-weight vs other AI models at all. This is about how training data can be collected and used.
-
This post did not contain any content.
An 80-year-old judge on their best day couldn't be trusted to make an informed decision. This guy was either bought or confused into his decision. Old people gotta go.
-
Keep in mind this isn't about open-weight vs other AI models at all. This is about how training data can be collected and used.
If you aren't allowed to freely use data for training without a license, then the fear is that only large companies will own enough works or be able to afford licenses to train models.
-
80% of the book market is owned by 5 publishing houses.
They want to create a monopoly around AI and kill open source. The copyright industry is not our friend. This is a win, not a loss.
Cool then, try doing some torrenting out there and don't hide it. Tell us how it goes.
The rules don't change. This just means the AI overlords can do it, not that you can do it too.
-
If you aren't allowed to freely use data for training without a license, then the fear is that only large companies will own enough works or be able to afford licenses to train models.
If they can just steal a creator's work, how do they suppose creators will be able to afford continuing to be creators?
Right. They think we have enough original works that the machines can just make any new creations.
-
If they can just steal a creator's work, how do they suppose creators will be able to afford continuing to be creators?
Right. They think we have enough original works that the machines can just make any new creations.
Yeah, I guess the debate is which is the lesser evil. I didn't make the original comment but I think this is what they were getting at.
-
Yeah, I guess the debate is which is the lesser evil. I didn't make the original comment but I think this is what they were getting at.
Absolutely. The current copyright system is terrible but an AI replacement of creators is worse.
-
If they can just steal a creator's work, how do they suppose creators will be able to afford continuing to be creators?
Right. They think we have enough original works that the machines can just make any new creations.
It is entirely possible that the entire construct of copyright just isn't fit to regulate this, and that a "right to train", or a right to opt out of training, needs to be formulated separately.
The maximalist, knee-jerk assumption that all AI training is copying is, ironically, feeding into the interests of a bunch of AI companies. That doesn't mean actual authors and artists don't have an interest in regulating this space.
The big takeaway, in my book, is that copyright is finally broken beyond all usability. Let's scrap it and start over with the media landscape we actually have, not the eighteenth-century version of it.
-
This post did not contain any content.
IMO the focus should have always been on the potential for AI to produce copyright-violating output, not on the method of training.
-
IMO the focus should have always been on the potential for AI to produce copyright-violating output, not on the method of training.
If you try to sell "The New Adventures of Doctor Strange, Steven Strange and Magic Man", existing copyright laws are sufficient and will stop it. Really, training should be regulated by the same laws as reading. If they can get the material through legitimate means it should be fine, but pulling data that is not freely accessible should be treated as theft, as it already is.
-
If you try to sell "The New Adventures of Doctor Strange, Steven Strange and Magic Man", existing copyright laws are sufficient and will stop it. Really, training should be regulated by the same laws as reading. If they can get the material through legitimate means it should be fine, but pulling data that is not freely accessible should be treated as theft, as it already is.
That "freely" there really does a lot of hard work.
-
I'm not pirating. I'm building my model.
To anyone reading this comment without reading through the article: this ruling doesn't mean that it's okay to pirate for building a model. Anthropic will still need to go through trial for that:
But he rejected Anthropic's request to dismiss the case, ruling the firm would have to stand trial over its use of pirated copies to build its library of material.
-
How exactly does this benefit "us"?
Because books are used to train both commercial and open source language models?
-
An 80-year-old judge on their best day couldn't be trusted to make an informed decision. This guy was either bought or confused into his decision. Old people gotta go.
Did you read the actual order? The detailed conclusions begin on page 9. What specific bits did he get wrong?