AI industry horrified to face largest copyright class action ever certified

Technology
  • do you know how much content disney has? go scrolling: https://en.wikipedia.org/wiki/List_of_assets_owned_by_the_Walt_Disney_Company
    e: that's the tip of the iceberg, because if they band together with others from the MPAA & RIAA, they can suffocate the entire Movie, Book and Music world with it.

    good, then I can just ignore Disney instead of EVERYTHING else.

  • do you know how much content disney has? go scrolling: https://en.wikipedia.org/wiki/List_of_assets_owned_by_the_Walt_Disney_Company
    e: that's the tip of the iceberg, because if they band together with others from the MPAA & RIAA, they can suffocate the entire Movie, Book and Music world with it.

    They have 0.2T in assets; the world has around 660T in assets, of which, as I said before, that is a tiny fraction. Obviously both figures include a lot of assets that aren't worthwhile for AI training, such as theme parks, but consider that a single movie worth millions or billions has the same benefit for AI training as another movie worth thousands. The amount of assets Disney owns is not nearly as relevant as you are making it out to be.

    I just remembered the movie where a real genie was released from a bottle and threw the world into chaos by freeing his own kind; if it weren't for the power of the plot, I'm afraid the people there would have been enslaved or died out.

    Although in this case you'd already have to file a lawsuit for theft of the soul, in the literal sense of the word.

    I remember that X-Files episode!

  • This post did not contain any content.

    Too late. The systems we are building as a species will soon become sentient. We'll have aliens right here, no UFOs required. Where the music comes from will no longer be relevant.

  • I remember that X-Files episode!

    Damn, what did you watch those masterpieces on? What were you smoking back then? Though I don't know which X-Files you're talking about; maybe I watched the wrong thing... Which episode?

  • They don’t want copyright power to expand further. And I agree with them, despite hating AI vendors with a passion.

    For an understanding of the collateral damage, check out How To Think About Scraping by Cory Doctorow.

    Ahhh, it makes more sense now. Thank you!

    Do you think that would rescue the IA from the kind of people who already made the IA pull 300k books?

    No. But going after LLMs won't make the situation for the IA any worse, not directly anyway.

  • Too late. The systems we are building as a species will soon become sentient. We'll have aliens right here, no UFOs required. Where the music comes from will no longer be relevant.

    Ok perfect so since AGI is right around the corner and this is all irrelevant, then I'm sure the AI companies won't mind paying up.

    Unfortunately, this will probably lead to nothing: in our world, only the poor seem to be punished for stealing, while corporations always get away with everything. So we sit on the couch and shout "YES!!!" because they are trying to console us with this.

    This issue is not so cut and dried. The AI companies are stealing from other companies more than from individual people. Publishing companies are owned by some very rich people, and they want their cut.

    This case may have started out with authors, but, as the article mentions, it could turn into publishing companies vs. AI companies.

    No. But going after LLMs won't make the situation for the IA any worse, not directly anyway.

    If the courts decide that scraping is illegal, the IA can close up shop.

  • Ok perfect so since AGI is right around the corner and this is all irrelevant, then I'm sure the AI companies won't mind paying up.

    That's not the way it works. Do you think the Roman Empire just picked a particular Tuesday to collapse? It's a process and will take a while.

  • This post did not contain any content.

    People cheering for this have no idea of the consequence of their copyright-maximalist position.

    If using images, text, etc. to train a model is copyright infringement, then there will be NO open models, because open source model creators could not possibly obtain licensing for every piece of written or visual media in the Common Crawl dataset, which is what most of these things are trained on.

    As it stands now, corporations don't have a monopoly on AI specifically because copyright doesn't apply to AI training. Everyone has access to Common Crawl and the other large, public, datasets made from crawling the public Internet and so anyone can train a model on their own without worrying about obtaining billions of different licenses from every single individual who has ever written a word or drawn a picture.

    If there is a ruling that training violates copyright, then the only entities that could possibly afford to train LLMs or diffusion models are companies that own a large amount of copyrighted material. Sure, one company will lose a lot of money and/or be destroyed, but the legal precedent would be set so that it is impossible for anyone who doesn't have billions of dollars to train AI.

    People are shortsightedly seeing this as a victory for artists or some other nonsense. It's not. This is a fight where large copyright holders (Disney and other large publishing companies) want to completely own the ability to train AI because they own most of the large stores of copyrighted material.

    If the copyright holders win this then the open source training material, like Common Crawl, would be completely unusable to train models in the US/the West because any person who has ever posted anything to the Internet in the last 25 years could simply sue for copyright infringement.
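The licensing-infeasibility argument above can be made concrete with a back-of-envelope calculation. Both numbers here are illustrative assumptions, not figures from the thread: roughly 3 billion pages in a Common Crawl snapshot, and a hypothetical flat $1 licence per page.

```python
# Back-of-envelope: what individually licensing a Common Crawl-scale corpus
# would cost. Assumed numbers, for illustration only:
pages = 3_000_000_000   # approximate page count of one crawl snapshot
fee_per_work = 1.00     # hypothetical $1 licence per page

total = pages * fee_per_work
print(f"${total:,.0f}")  # a multi-billion-dollar bill before any legal overhead
```

Even at a fee far below what any rights holder would actually charge, the bill lands in the billions, which is the core of the claim that only the largest corporations could train under a licensing regime.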

  • The law absolutely does not apply to everybody, and you are well aware of that.

    Shouldn't it?

  • Take scraping. Companies like Clearview will tell you that scraping is legal under copyright law. They’ll tell you that training a model with scraped data is also not a copyright infringement. They’re right.

    I love Cory's writing, but while he does a masterful job of defending scraping, and makes a good argument that in most cases, it's laws other than Copyright that should be the battleground, he does, kinda, trip over the main point.

    That is, training models on creative works and then selling access to the derivative "creative" works that those models output very much falls within the domain of copyright - on either side of a grey line we usually call "fair use" that hasn't really been tested in the courts.

    Let's take two absurd extremes to make the point. Say I train an LLM directly on Marvel movies, and then sell movies (or maybe movie scripts) that are almost identical to existing Marvel movies (maybe with a few key names and features altered). I don't think anyone would argue that is not a derivative work, or that it falls under "fair use." However, if I used literature to train my LLM to be able to read, and used that to read street signs for my self-driving car, well, yeah, that might be something you could argue is "fair use" to sell. It's not producing copy-cat literature.

    I agree with Cory that scraping, per se, is absolutely fine, and even re-distributing the results in some ways that are in the public interest or fall under "fair use", but it's hard to justify the slop machines as not a copyright problem.

    In the end, yeah, fuck both sides anyway. Copyright has been extended too far and used for far too much, and the AI companies are absolute thieves. I have no illusions this type of court case will do anything more than shift wealth from one robber-baron to another, and it won't help artists and authors.

    Say I train an LLM directly on Marvel movies, and then sell movies (or maybe movie scripts) that are almost identical to existing Marvel movies (maybe with a few key names and features altered). I don’t think anyone would argue that is not a derivative work, or that it falls under “fair use.”

    I think you're failing to differentiate between a work, which is protected by copyright, and a tool, which is not affected by copyright.

    Say I use Photoshop and Adobe Premiere to create a script and movie which are almost identical to existing Marvel movies. I don't think anyone would argue that is not a derivative work, or that it falls under "fair use".

    The important part here is that the subject of this sentence is 'a work which has been created that is substantially similar to an existing copyrighted work'. This situation is already covered by copyright law. If a person draws a Mickey Mouse and tries to sell it, Disney will sue them, not their pencil.

    Specific works are copyrighted, and copyright law creates a civil liability for a person who creates works that are substantially similar to a copyrighted work.

    Copyright doesn't allow publishers to go after Adobe because a person used Photoshop to make a fake Disney poster. This is why things like BitTorrent can legally exist despite being used primarily for copyright violation. Copyright laws apply to people and the works that they create.

    A generated Marvel movie is substantially similar to a copyrighted Marvel movie, so copyright law applies to it. A diffusion model is not substantially similar to any copyrighted work by Disney, so copyright law doesn't apply to it.

  • The law absolutely does not apply to everybody, and you are well aware of that.

    The law applies to everybody, but the lawmakers change the laws to benefit certain people. And then Trump pardons the rest lol.

  • This post did not contain any content.

    Would really love to see IP law get taken down a notch out of this.

    Say I train an LLM directly on Marvel movies, and then sell movies (or maybe movie scripts) that are almost identical to existing Marvel movies (maybe with a few key names and features altered). I don’t think anyone would argue that is not a derivative work, or that it falls under “fair use.”

    I think you're failing to differentiate between a work, which is protected by copyright, and a tool, which is not affected by copyright.

    Say I use Photoshop and Adobe Premiere to create a script and movie which are almost identical to existing Marvel movies. I don't think anyone would argue that is not a derivative work, or that it falls under "fair use".

    The important part here is that the subject of this sentence is 'a work which has been created that is substantially similar to an existing copyrighted work'. This situation is already covered by copyright law. If a person draws a Mickey Mouse and tries to sell it, Disney will sue them, not their pencil.

    Specific works are copyrighted, and copyright law creates a civil liability for a person who creates works that are substantially similar to a copyrighted work.

    Copyright doesn't allow publishers to go after Adobe because a person used Photoshop to make a fake Disney poster. This is why things like BitTorrent can legally exist despite being used primarily for copyright violation. Copyright laws apply to people and the works that they create.

    A generated Marvel movie is substantially similar to a copyrighted Marvel movie, so copyright law applies to it. A diffusion model is not substantially similar to any copyrighted work by Disney, so copyright law doesn't apply to it.

    @FauxLiving @Jason2357

    I take a bold stand on the whole topic:

    I think AI is a big scam (pattern matching has nothing to do with !!! intelligence !!!).

    And this scam might end like the dot-com bubble of the late 90s ( https://en.wikipedia.org/wiki/Dot-com_bubble ), including the huge economic impact, because many people have invested in an "idea", not in a proven technology.

    And as with the dot-com bubble, once the AI bubble has been cleaned up, machine learning and vector databases will stay forever (and maybe some other parts of the tech).

    Neither needs copyright changes, because they will never try to be one solution for everything. Like a small model to transform text to speech ... like a small model to translate ... like a full-text search using a vector DB to index all local documents ...

    Like a small tool to summarize text.
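The vector-DB search idea above can be sketched in a few lines. This is a toy illustration, not a real system: the document names are made up, and the embeddings are random stand-ins for what a small local embedding model would produce.

```python
import numpy as np

# Toy vector search: index "documents" as unit-length embedding vectors and
# retrieve the nearest one by cosine similarity (dot product of unit vectors).
rng = np.random.default_rng(0)  # fixed seed so the demo is deterministic
docs = ["invoice 2023", "holiday photos", "meeting notes"]

# Random 8-dimensional stand-in embeddings, normalized to unit length.
index = rng.normal(size=(len(docs), 8))
index /= np.linalg.norm(index, axis=1, keepdims=True)

def search(query_vec):
    """Return the document whose embedding is most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = index @ q  # cosine similarities against every indexed document
    return docs[int(np.argmax(scores))]

# Querying with a document's own vector returns that document (similarity 1.0).
print(search(index[1]))  # -> "holiday photos"
```

A real local-search tool would replace the random vectors with embeddings of the actual file contents and store them in a proper vector database, but the retrieval step is essentially this dot-product-and-argmax.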

  • People cheering for this have no idea of the consequence of their copyright-maximalist position.

    If using images, text, etc. to train a model is copyright infringement, then there will be NO open models, because open source model creators could not possibly obtain licensing for every piece of written or visual media in the Common Crawl dataset, which is what most of these things are trained on.

    As it stands now, corporations don't have a monopoly on AI specifically because copyright doesn't apply to AI training. Everyone has access to Common Crawl and the other large, public, datasets made from crawling the public Internet and so anyone can train a model on their own without worrying about obtaining billions of different licenses from every single individual who has ever written a word or drawn a picture.

    If there is a ruling that training violates copyright, then the only entities that could possibly afford to train LLMs or diffusion models are companies that own a large amount of copyrighted material. Sure, one company will lose a lot of money and/or be destroyed, but the legal precedent would be set so that it is impossible for anyone who doesn't have billions of dollars to train AI.

    People are shortsightedly seeing this as a victory for artists or some other nonsense. It's not. This is a fight where large copyright holders (Disney and other large publishing companies) want to completely own the ability to train AI because they own most of the large stores of copyrighted material.

    If the copyright holders win this then the open source training material, like Common Crawl, would be completely unusable to train models in the US/the West because any person who has ever posted anything to the Internet in the last 25 years could simply sue for copyright infringement.

    In theory sure, but in practice who has the resources to do large scale model training on huge datasets other than large corporations?

    But it would also mean that the Internet Archive is illegal, even though they don't profit; if scraping the internet is a copyright violation, then they are as guilty as Anthropic.

    I say move it out of the US.

  • In theory sure, but in practice who has the resources to do large scale model training on huge datasets other than large corporations?

    Distributed computing projects, large non-profits, people in the near future with much more powerful and cheaper hardware, governments which are interested in providing public services to their citizens, etc.

    Look at other large technology projects. The Human Genome Project spent $3 billion to sequence the first genome but now you can have it done for around $500. This cost reduction is due to the massive, combined effort of tens of thousands of independent scientists working on the same problem. It isn't something that would have happened if Purdue Pharma owned the sequencing process and required every scientist to purchase a license from them in order to do research.

    LLM and diffusion models are trained on the works of everyone who's ever been online. This work, generated by billions of human-hours, is stored in the Common Crawl datasets and is freely available to anyone who wants it. This data is both priceless and owned by everyone. We should not be cheering for a world where it is illegal to use this dataset that we all created and, instead, we are forced to license massive datasets from publishing companies.

    The amount of progress on these types of models would immediately stop; there would be 3-4 corporations who could afford the licenses. They would have a de facto monopoly on LLMs and could enshittify them without worry of competition.
