AI industry horrified to face largest copyright class action ever certified

Technology
  • Unfortunately, this will probably lead to nothing: in our world, only the poor get punished for stealing. Corporations always get away with everything, so we sit on the couch and shout "YES!!!" at the fact that someone is at least trying to console us with this.

    This issue is not so cut and dried. The AI companies are stealing from other companies more than from individual people. Publishing companies are owned by some very rich people. And they want their cut.

    This case may have started out with authors, but the article notes it could turn into publishing companies vs. AI companies.

  • No. But going after LLMs won't make the situation for the Internet Archive any worse, not directly anyway.

    If the courts decide that scraping is illegal, the Internet Archive can close up shop.

  • OK, perfect. Since AGI is right around the corner and this is all irrelevant anyway, I'm sure the AI companies won't mind paying up.

    That's not the way it works. Do you think the Roman Empire just picked a particular Tuesday to collapse? It's a process and will take a while.

  • This post did not contain any content.

    People cheering for this have no idea of the consequences of their copyright-maximalist position.

    If using images, text, etc. to train a model is copyright infringement, then there will be NO open models, because open source model creators could not possibly obtain licensing for every piece of written or visual media in the Common Crawl dataset, which is what most of these things are trained on.

    As it stands now, corporations don't have a monopoly on AI specifically because copyright doesn't apply to AI training. Everyone has access to Common Crawl and the other large, public datasets made from crawling the public Internet, so anyone can train a model on their own without worrying about obtaining billions of different licenses from every single individual who has ever written a word or drawn a picture (a minimal sketch of that access follows this comment).

    If there is a ruling that training violates copyright, then the only entities that could possibly afford to train LLMs or diffusion models are companies that own a large amount of copyrighted material. Sure, one company will lose a lot of money and/or be destroyed, but the legal precedent would be set, making it impossible for anyone who doesn't have billions of dollars to train AI.

    People are shortsightedly seeing this as a victory for artists or some other nonsense. It's not. This is a fight where large copyright holders (Disney and other large publishing companies) want to completely own the ability to train AI because they own most of the large stores of copyrighted material.

    If the copyright holders win this, then open source training material like Common Crawl becomes completely unusable for training models in the US/the West, because any person who has ever posted anything to the Internet in the last 25 years could simply sue for copyright infringement.
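
    To make "everyone has access to Common Crawl" concrete, here is a minimal sketch that queries Common Crawl's public CDX index for captures of a URL. The index endpoint is the real public service; the crawl ID below is illustrative and changes with every crawl:

    ```python
    # Minimal sketch: look up captures of a URL in the public Common Crawl
    # CDX index. The API returns one JSON record per line.
    import json
    from urllib.parse import quote
    from urllib.request import urlopen

    CRAWL_ID = "CC-MAIN-2024-10"  # illustrative; substitute a current crawl
    API = f"https://index.commoncrawl.org/{CRAWL_ID}-index"

    def find_captures(url: str, limit: int = 5):
        """Return up to `limit` capture records for the given URL."""
        with urlopen(f"{API}?url={quote(url)}&output=json&limit={limit}") as resp:
            return [json.loads(line) for line in resp.read().splitlines()]

    for rec in find_captures("example.com"):
        print(rec["timestamp"], rec["url"], rec.get("status"))
    ```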

  • The law absolutely does not apply to everybody, and you are well aware of that.

    Shouldn't it?

  • Take scraping. Companies like Clearview will tell you that scraping is legal under copyright law. They’ll tell you that training a model with scraped data is also not a copyright infringement. They’re right.

    I love Cory's writing, but while he does a masterful job of defending scraping, and makes a good argument that in most cases it's laws other than copyright that should be the battleground, he does, kinda, trip over the main point.

    That point being: training models on creative works and then selling access to the derivative "creative" works those models output very much falls within the domain of copyright, on one side or the other of a grey line we usually call "fair use" that hasn't really been tested in the courts.

    Let's take two absurd extremes to make the point. Say I train an LLM directly on Marvel movies and then sell movies (or maybe movie scripts) that are almost identical to existing Marvel movies (maybe with a few key names and features altered). I don't think anyone would argue that is not a derivative work, or that it falls under "fair use." However, if I used literature to train my LLM to be able to read, and used that to read street signs for my self-driving car, well, yeah, that might be something you could argue is "fair use" to sell. It's not producing copy-cat literature.

    I agree with Cory that scraping per se is absolutely fine, and even that re-distributing the results can be fine in ways that serve the public interest or fall under "fair use", but it's hard to justify the slop machines as not being a copyright problem.

    In the end, yeah, fuck both sides anyway. Copyright was extended too far and used for far too much, and the AI companies are absolute thieves. I have no illusions that this type of court case will do anything more than shift wealth from one robber-baron to another, and it won't help artists and authors.

    Say I train an LLM directly on Marvel movies, and then sell movies (or maybe movie scripts) that are almost identical to existing Marvel movies (maybe with a few key names and features altered). I don’t think anyone would argue that is not a derivative work, or that falls under “fair use.”

    I think you're failing to differentiate between a work, which is protected by copyright, vs. a tool, which is not affected by copyright.

    Say I use Photoshop and Adobe Premiere to create a script and movie which are almost identical to existing Marvel movies. I don't think anyone would argue that is not a derivative work, or that falls under "fair use".

    The important part here is that the subject of this sentence is 'a work which has been created that is substantially similar to an existing copyrighted work'. This situation is already covered by copyright law. If a person draws a Mickey Mouse and tries to sell it, then Disney will sue them, not their pencil.

    Specific works are copyrighted and copyright laws create a civil liability for a person who creates works that are substantially similar to a copyrighted work.

    Copyright doesn't allow publishers to go after Adobe because a person used Photoshop to make a fake Disney poster. This is why things like BitTorrent can legally exist despite being used primarily for copyright violation. Copyright laws apply to people and the works that they create.

    A generated Marvel movie is substantially similar to a copyrighted Marvel movie, so copyright law applies to it. A diffusion model is not substantially similar to any copyrighted work by Disney, so copyright law doesn't apply to it.

  • The law absolutely does not apply to everybody, and you are well aware of that.

    The law applies to everybody, but the lawmakers change the laws to benefit certain people. And then Trump pardons the rest lol.

  • This post did not contain any content.

    Would really love to see IP law get taken down a notch out of this.

  • I think you're failing to differentiate between a work, which is protected by copyright, vs. a tool, which is not affected by copyright. […]

    @FauxLiving @Jason2357

    I take a bold stand on the whole topic:

    I think AI is a big scam (pattern matching has nothing to do with intelligence!).

    And this scam might end like the dot-com bubble of the late 90s (https://en.wikipedia.org/wiki/Dot-com_bubble), including the huge economic impact, because too many people have invested in an "idea" rather than a proven technology.

    And as with the dot-com bubble, once the AI bubble has been cleaned up, machine learning and vector databases will stay forever (and maybe some other parts of the tech).

    Neither needs copyright changes, because neither will ever try to be one solution for everything: a small model to transform text to speech, a small model to translate, a full-text search using a vector DB to index all local documents (sketched after this comment), or a small tool to summarize text.
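
    To make the "full-text search using a vector DB" idea concrete, here is a toy sketch with TF-IDF vectors standing in for a learned embedding model and a real vector database; the file names and texts are made up:

    ```python
    # Toy sketch: index local documents as vectors, then rank them against a
    # query by cosine similarity. A real setup would use a learned embedding
    # model and a vector database; TF-IDF keeps the example self-contained.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    docs = {  # hypothetical local files
        "notes.txt": "Meeting notes about the quarterly budget and hiring plan.",
        "recipe.txt": "A simple bread recipe: flour, water, salt, and yeast.",
        "paper.txt": "Abstract: we study copyright questions raised by model training.",
    }

    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(docs.values())  # one row vector per document

    def search(query: str):
        """Rank documents by cosine similarity to the query vector."""
        scores = cosine_similarity(vectorizer.transform([query]), matrix)[0]
        return sorted(zip(docs, scores), key=lambda pair: -pair[1])

    print(search("fair use and model training"))  # paper.txt should rank first
    ```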

  • People cheering for this have no idea of the consequences of their copyright-maximalist position. […]

    In theory, sure, but in practice, who has the resources to do large-scale model training on huge datasets other than large corporations?

  • But it would also mean that the Internet Archive is illegal. Even though they don't profit, if scraping the Internet is a copyright violation, then they are as guilty as Anthropic.

    I say move it out of the US.

  • In theory, sure, but in practice, who has the resources to do large-scale model training on huge datasets other than large corporations?

    Distributed computing projects, large non-profits, people in the near future with much more powerful and cheaper hardware, governments which are interested in providing public services to their citizens, etc.

    Look at other large technology projects. The Human Genome Project spent $3 billion to sequence the first genome, but now you can have it done for around $500. This cost reduction is due to the massive, combined effort of tens of thousands of independent scientists working on the same problem. It isn't something that would have happened if Purdue Pharma owned the sequencing process and required every scientist to purchase a license from them in order to do research.

    LLMs and diffusion models are trained on the works of everyone who's ever been online. This work, generated by billions of human-hours, is stored in the Common Crawl datasets and is freely available to anyone who wants it. This data is both priceless and owned by everyone. We should not be cheering for a world where it is illegal to use this dataset that we all created and where, instead, we are forced to license massive datasets from publishing companies.

    Progress on these types of models would immediately stop, and there would be 3-4 corporations who could afford the licenses. They would have a de facto monopoly on LLMs and could enshittify them without worrying about competition.

  • People cheering for this have no idea of the consequences of their copyright-maximalist position. […]

    Copyright is a leftover mechanism from slavery and it will be interesting to see how it gets challenged here, given that the wealthy view AI as an extension of themselves and not as a normal employee. Genuinely think the copyright cases from AI will be huge.

  • And you’re just crying that you can’t steal.

    Ah yes. "Public Domain" == "Theft"

  • Good, then I can just ignore Disney instead of EVERYTHING else.

    Until they charge people to use their AI.

    It'll be just like today, except that it will be illegal for any new companies to try to challenge the biggest players.

  • Copyright is a leftover mechanism from slavery and it will be interesting to see how it gets challenged here, given that the wealthy view AI as an extension of themselves and not as a normal employee. Genuinely think the copyright cases from AI will be huge.

    The case law is on the side of fair use.

    Google has already been sued for using copyrighted works to build their products, and they won on the grounds of fair use. The cases have some differences, but the arguments are likely to be roughly the same here. In Authors Guild, Inc. v. Google, Inc., 804 F.3d 202 (2d Cir. 2015), Google was sued over their Google Books product. With Google Books you can full-text search most books, and it returns a snippet of text containing your search terms plus an image of the book's cover (a toy sketch of the snippet idea follows this comment). Both the text and the book images are copyrighted, and Google did not obtain any license for their use.

    Google won the case after a trial. The ruling was upheld on appeal and the Supreme Court denied cert.

    In sum, we conclude that:

    1. Google's unauthorized digitizing of copyright-protected works, creation of a search functionality, and display of snippets from those works are non-infringing fair uses. The purpose of the copying is highly transformative, the public display of text is limited, and the revelations do not provide a significant market substitute for the protected aspects of the originals. Google's commercial nature and profit motivation do not justify denial of fair use.
    2. Google's provision of digitized copies to the libraries that supplied the books, on the understanding that the libraries will use the copies in a manner consistent with the copyright law, also does not constitute infringement.

    Nor, on this record, is Google a contributory infringer.

    This case will certainly be referenced in the case from the OP.
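
    To picture what the court was looking at, here is a toy sketch of that snippet idea: full-text search that returns only a short excerpt around the match rather than the whole work (titles and text made up, obviously not Google's actual system):

    ```python
    # Toy sketch of Google Books-style snippet search: find the query term
    # and return only a small window of surrounding context.
    def snippet_search(books: dict[str, str], term: str, context: int = 30):
        """Return {title: snippet} for every book whose text contains term."""
        hits = {}
        for title, text in books.items():
            i = text.lower().find(term.lower())
            if i != -1:
                start, end = max(0, i - context), i + len(term) + context
                hits[title] = "..." + text[start:end] + "..."
        return hits

    books = {"Moby-Dick": "Call me Ishmael. Some years ago, never mind how long precisely..."}
    print(snippet_search(books, "ishmael"))
    ```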

  • People cheering for this have no idea of the consequences of their copyright-maximalist position. […]

    Or it just happens overseas, where these laws don't apply (or can't be enforced).

    But I don't think it will happen. Too many countries are so desperate to be "the AI country" that they'll risk burning whole industries to the ground to get it.

  • Until they charge people to use their AI.

    It'll be just like today, except that it will be illegal for any new companies to try to challenge the biggest players.

    Why would I use their AI? On top of that, wouldn't it be in their best interests to allow people to use their AI with as few restrictions as possible in order to maximize market saturation?

  • This post did not contain any content.

    An important note here: the judge has already ruled in this case, in the summary judgment order, that using Plaintiffs' works "to train specific LLMs [was] justified as a fair use" because "[t]he technology at issue was among the most transformative many of us will see in our lifetimes."

    The plaintiffs are no longer suing Anthropic for infringing on their copyright through training; the court has already ruled that it was so obvious they could not succeed with that argument that it was dismissed. Their only remaining claim is that Anthropic downloaded the books from piracy sites using BitTorrent.

    This isn't about LLMs anymore; it's a standard "you downloaded something on BitTorrent and made a company mad" type of case, the kind that has been going on since Napster.

    Also, the headline is incredibly misleading. It's ascribing feelings to an entire industry based on a common legal filing that is not by itself noteworthy. Unless you really care about legal technicalities, you can stop here.


    The actual news, the new factual thing that happened, is that the Consumer Technology Association and the Computer and Communications Industry Association filed an amicus brief in Anthropic's appeal of an issue the court ruled against them on.

    This is a pretty normal legal filing about legal technicalities. It isn't really newsworthy outside of, maybe, some people in the legal profession who are bored.

    The issue was class certification.

    Three people sued Anthropic. Instead of just suing on behalf of themselves, they moved to be certified as a class. That is to say, they wanted to sue on behalf of a larger group of people, in this case a "Pirated Books Class" of authors whose books Anthropic downloaded from book piracy websites.

    The judge ruled they can represent the class, and Anthropic appealed the ruling. During this appeal, an industry group filed an amicus brief with arguments supporting Anthropic's position. This is not uncommon; The Onion famously filed an amicus brief with the Supreme Court when it was about to rule on issues of parody. Like everything The Onion writes, it's a good piece of satire: link

  • Copyright is a leftover mechanism from slavery and it will be interesting to see how it gets challenged here, given that the wealthy view AI as an extension of themselves and not as a normal employee. Genuinely think the copyright cases from AI will be huge.

    My last comment was wrong; I've read through the filings in the case.

    The judge has already ruled that training the LLMs using the books was so obviously fair use that the claim was dismissed in summary judgment (my bolds):

    To summarize the analysis that now follows, the use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use under Section 107 of the Copyright Act. The digitization of the books purchased in print form by Anthropic was also a fair use, but not for the same reason as applies to the training copies. Instead, it was a fair use because all Anthropic did was replace the print copies it had purchased for its central library with more convenient, space-saving, and searchable digital copies without adding new copies, creating new works, or redistributing existing copies. However, Anthropic had no entitlement to use pirated copies for its central library, and creating a permanent, general-purpose library was not itself a fair use excusing Anthropic's piracy.

    The only issue remaining in this case is that they downloaded copyrighted material with BitTorrent, the kind of lawsuit that has been going on since Napster. They'll probably be required to pay for all 196,640 books that they pirated, plus some other damages.
