“Piracy is Piracy” – Disney and Universal team up to sue Midjourney

Technology
  • Oh, so when big companies do it, it's OK. But it's stealing when an open-source AI gives that same power back to the people.

    There is no logic in man's lust for power. The most self-serving will do whatever it takes to achieve wealth, status, and control. The world made so much more sense once I realized that.

  • I'd say that scraping as a verb implies an element of intent. It's about compiling information about a body of work, not simply making a copy, and therefore if you can accurately call it "scraping" then it's always fair use. (Accuse me of "No True Scotsman" if you would like.)

    But since it involves making a copy (even if only a temporary one) of licensed material, there's the potential that you're doing one thing with that copy which is fair use, and another thing with the copy that isn't fair use.

    Take archive.org for example:

    It doesn't only contain information about the work, but also a copy (or copies, plural) of the work itself. You could argue (and many have) that archive.org only claims to be about preserving an accurate history of a piece of content, but functionally mostly serves as a way to distribute unlicensed copies of that content.

    I don't personally think that's a justified accusation, because I think they do everything in their power to be as fair as possible, and there's a massive public benefit to having a service like this. But it does illustrate how you could easily have a scenario where the stated purpose is fair use but the actual implementation is not, and where the infringing material was "scraped" in the first place.

    But in the case of gen AI, I think it's pretty clear that the residual data from the source content is much closer to a linguistic analysis than to an internet archive. So it's firmly in the fair use category, in my opinion.

    Edit: And to be clear, when I say it's fair use, I only mean in the strict sense of following copyright law. I don't mean that it is (or should be) clear of all other legal considerations.

    I think the distinction between data acquisition and data application is important. Consider the parallel of photography: you are legally and ethically entitled to take a photo of anything that you can see from a public place (i.e., you can "scrape" it). But that doesn't mean that you can do anything you want with those photos. Distinguishing them makes the scraping part a lot less muddy (see the sketch below).
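    To make the acquisition-vs-application point concrete, here is a minimal, hypothetical sketch (not from the article or any commenter) of the difference between compiling information about a page and keeping a verbatim copy of it. The URL and output filename are placeholders; only the Python standard library is assumed.

    ```python
    # Hypothetical illustration: "scraping" as deriving facts *about* a work
    # versus archiving a copy *of* the work itself.
    import re
    import urllib.request

    URL = "https://example.com/some-article"  # placeholder URL

    with urllib.request.urlopen(URL) as resp:
        page = resp.read().decode("utf-8", errors="replace")

    # Acquisition as analysis: keep only metadata, discard the copy afterwards.
    match = re.search(r"<title>(.*?)</title>", page, re.S | re.I)
    summary = {
        "url": URL,
        "title": match.group(1).strip() if match else None,
        "length_bytes": len(page),
    }
    print(summary)  # information about the work

    # Application as archiving: the full, verbatim copy is retained on disk.
    with open("mirror.html", "w", encoding="utf-8") as f:
        f.write(page)  # a copy of the work itself
    ```

    Both paths start from the same temporary in-memory copy; the distinction drawn above is in what is kept afterwards and what it is used for.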

  • The enemies of my enemies are my friends.

    But if both sides are your enemies, they're both your friends. But if they're your friends, they aren't the enemies of your enemies anymore, which would make them your enemies once again. But then they are your friends again. But then

    But if both sides are your enemies, they're both your friends.

    Yes. And both of my friends will weaken both of my enemies.

    • Disney and NBCUniversal have teamed up to sue Midjourney.
    • The companies allege that the platform used its copyright protected material to train its model and that users can generate content that infringes on Disney and Universal’s copyrighted material.
    • The scathing lawsuit requests that Midjourney be made to pay up for the damage it has caused the two companies.

    Note that Disney and Universal pirate other people's stuff whenever they want.

    Note also that all the generative AI services are very protective of their own big cistern of web-crawled data, for instance when China borrows it for DeepSeek.

    Content, content everywhere and not a drop of principle.

  • Yes, that’s a good addition.

    Overall, my point was not that scraping is a universal moral good, but that legislating tighter boundaries for scraping in an effort to curb AI abuses is a bad approach.

    We have better tools to combat this, and placing new limits on scraping will do collateral damage that we should not accept.

    And at the very least, the portfolio value of Disney’s IP holdings should not be the motivating force behind AI regulation.

    Tbh, this is not a question about scraping at all.

    Scraping is just a rather neutral tool that can be used for all sorts of purposes, legal and illegal.

    Neither does the technique justify the purpose nor does outlawing the technique fix the actual problem.

    Fair use only applies to a certain set of use cases and comes with a strict set of restrictions.

    The permitted use cases are: "criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research".

    And the two relevant factors are:

    • "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;"
    • "the effect of the use upon the potential market for or value of the copyrighted work."

    (Quoted from 17 U.S.C. § 107)

    And here the differences between archive.org and AI become obvious. While archive.org can be abused as some kind of file sharing system or to circumvent paywalls or ads, its intended purpose is for research, and it's firmly non-profit and doesn't compete with copyright holders.

    AI, on the other hand, is almost always commercial, and its main purpose is to replace human labour, specifically that of the copyright owners. It might not be an actual problem for Disney's bottom line, but it's a massive problem for smaller artists, stock photographers, translators, and many other professions.

    So it clearly falls outside the permitted use cases for fair use while also running afoul of those factors.

    And for that, it doesn't matter if the training data is acquired using scraping (without permission) or some other way (without permission to use it for AI training).

  • I say this as a massive AI critic: Disney does not have a legitimate grievance here.

    AI training data is scraping. Scraping is — and must continue to be — fair use. As Cory Doctorow (fellow AI critic) says: Scraping against the wishes of the scraped is good, actually.

    I want generative AI firms to get taken down. But I want them to be taken down for the right reasons.

    Their products are toxic to communication and collaboration.

    They are the embodiment of a pathology that sees humanity — what they might call inefficiency, disagreement, incoherence, emotionality, bias, chaos, disobedience — as a problem, and technology as the answer.

    Dismantle them on the basis of what their poison does to public discourse, shared knowledge, connection to each other, mental well-being, fair competition, privacy, labor dignity, and personal identity.

    Not because they didn’t pay the fucking Mickey Mouse toll.

    You did not read your source. Some quotes you apparently missed:

    Scraping to violate the public’s privacy is bad, actually.

    Scraping to alienate creative workers’ labor is bad, actually.

    Please read your source before posting it and claiming it says something it doesn't actually say.

    Now why does Doctorow distinguish between good scraping and bad scraping, and even between good LLM training and bad LLM training, in his post?

    Because the good applications are actually covered by fair use while the bad parts aren't.

    Because fair use isn't actually about what is done (scraping, LLM training, ...) but about who does it (researchers and non-profits vs. for-profit companies) and for what purpose (research, critique, teaching, news reporting vs. making a profit by putting the original copyright owners out of work).

    That's the whole point of fair use. It's even in the name. It's about the use, and the use needs to be fair. It's not called "Allowed techniques, don't care if it's fair".

    • Disney and NBCUniversal have teamed up to sue Midjourney.
    • The companies allege that the platform used its copyright protected material to train its model and that users can generate content that infringes on Disney and Universal’s copyrighted material.
    • The scathing lawsuit requests that Midjourney be made to pay up for the damage it has caused the two companies.

    Stupid lawsuit because anyone can do AI now.

  • How so? Isn't it the same for financial purposes?

    Many times these keys are obtained illegitimately and they end up being refunded. In other cases the key is bought from another region so the devs do get some money, but far less than they would from a regular purchase.

    I'm not sure exactly how the illegitimate keys are obtained, though. Maybe in trying to not pay the publisher you end up rewarding someone who steals people's credit cards or something.

  • Oh, so when big companies do it, it's OK. But it's stealing when an open-source AI gives that same power back to the people.

    Midjourney isn't open source; I can't run it on my PC, unlike Stable Diffusion (rough local-run sketch below).
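    For context on the open-weights point, here is a minimal, hypothetical sketch of what "running it on my PC" looks like with an open model, assuming the Hugging Face diffusers and torch packages and a CUDA-capable GPU; the checkpoint ID and prompt are illustrative, not anything from the thread.

    ```python
    # Minimal local text-to-image sketch (illustrative, not a recommendation).
    # Assumes: pip install diffusers transformers torch, and a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # open-weights checkpoint (example ID)
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # runs entirely on the local machine

    image = pipe("a watercolor landscape at dusk").images[0]
    image.save("output.png")  # no hosted service involved
    ```

    Midjourney, by contrast, is only available as a hosted, paid service, which is the distinction the comment is drawing.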

  • Yes, I understand that, but is Midjourney profiting off these characters? I.e., are people paying for these services just so they can create images of these specific characters? I think that's the question that needs to be answered here.

    I mean, you're not paying piecemeal as you would for an artist to create your commission of Shrek getting railed by Donkey; you pay for the service, which in turn creates anything you tell it to.

    It's like I'm still not convinced that training AI with copyrighted material is infringement, because in my mind it's not any different than me seeing Arthas when I was a kid, thinking he was cool as fuck, and then deciding to make my own OC inspired by him. Was I infringing on Blizzard's copyrighted character by taking inspiration from its design? Was Mike Pondsmith infringing on William Gibson's copyright when he invented Cyberpunk?

    Yes, your fan art infringed on Blizzard's copyright. Blizzard lets it slide because there's nothing to gain from it apart from a massive PR disaster.

    Now, if you sold your Arthas images on a large enough scale, Blizzard would clearly come after you. Copyright is not only about the damages caused by people not buying Blizzard's stuff, but also about the license fees they didn't get from you.

    That's the real big difference: if Midjourney were a little hobby project of some guy in his basement that never saw the light of day, there wouldn't be a problem. But Midjourney is a for-profit tool with the express purpose of letting people make images without paying an artist, and it does that by using copyrighted works.

  • You should totally play the game, but make sure that you pirate it so your money doesn't go to the thief who stole the rights from the creators.

    Oh, that's unfortunate. Well, I don't mind not supporting people like that, so I'll give it a go.

    • Disney and NBCUniversal have teamed up to sue Midjourney.
    • The companies allege that the platform used its copyright protected material to train its model and that users can generate content that infringes on Disney and Universal’s copyrighted material.
    • The scathing lawsuit requests that Midjourney be made to pay up for the damage it has caused the two companies.

    Bite each other's dicks off.

  • Stupid lawsuit because anyone can do AI now.

    that's a shit take.

    anyone can do AI now, but not everyone can profit from it like they can. that's why the lawsuit.

  • Drama. A business partner of the creators used an illegal loophole to obtain a majority stake in the company and then fired the actual creators because they were considered too volatile.

    The universe of Disco Elysium is Kurvitz's paracosm, which he has been creating since his teens. It's a part of their identity that they are now barred from expressing.

    It's a bit like telling Tolkien, halfway through writing LotR, that he is fired as the author and can never write anything about Middle-earth again.

    intellectual property is grotesque.

    under no circumstances should a creator be barred from his creation.

    if shit like that happens, I'd rather there not be any intellectual property at all
