
Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not

Technology
  • You can “use” them to learn from, just like “AI” can.

    What exactly do you think AI does when it “learns” from a book, for example? Do you think it will just spit out the entire book if you ask it to?

    It can't speak or use any words without them being someone else's words that it learned from. Unless it's citing sources, everything it produces comes from something it learned, because it couldn't speak or use words without that source in the first place.

  • If I understand correctly, they are ruling that you can buy a book once and redistribute the information to as many people as you want without consequences. I.e., one student should be able to buy a textbook and redistribute it to all other students for free. (Yet the rules only work for companies, apparently, as the students would still be committing a crime.)

    Well, it would be interesting to see whether this case gets used as precedent in a case involving a single student who does the same thing. But you are right.

    This was my understanding also, and why I think the judge is bad at their job.

  • AI can “learn” from and “read” a book in the same way a person can and does

    This statement is the basis for your argument and it is simply not correct.

    Training LLMs and similar AI models is much closer to a sophisticated lossy compression algorithm than it is to human learning. The processes are not at all similar given our current understanding of human learning. (For a toy illustration of the compression framing, see the sketch at the end of this thread.)

    AI doesn’t reproduce a work that it “learns” from, so why would it be illegal?

    The current Disney lawsuit against Midjourney is illustrative (literally: it includes numerous side-by-side comparisons) of how AI models are capable of recreating iconic copyrighted work that is indistinguishable from the original.

    If a machine can replicate your writing style because it identified certain patterns, words, sentence structure, etc., then as long as it's not pretending to create things attributed to you, there's no issue.

    An AI doesn't create works on its own. A human instructs AI to do so. Attribution is also irrelevant. If a human uses AI to recreate the exact tone, structure and other nuances of say, some best selling author, they harm the marketability of the original works which fails fair use tests (at least in the US).

    Even if we accept all your market-liberal premises without question... within your own rhetorical framework, the Disney lawsuit should be ruled against Disney.

    If a human uses AI to recreate the exact tone, structure and other nuances of say, some best selling author, they harm the marketability of the original works which fails fair use tests (at least in the US).

    Says who? In a free market, why is competition from similar products and brands such a threat that it has to be outlawed? Think reasonably about what you are advocating... you think authorship is so valuable or so special that one should be granted a legally enforceable monopoly over even the loosest notions of authorship. That is the definition of a slippery slope, and yet it is the status quo of the society we live in.

    On it "harming marketability of the original works," frankly, that's a fiction and anyone advocating such ideas should just fucking weep about it instead of enforce overreaching laws on the rest of us. If you can't sell your art because a machine made "too good a copy" of your art, it wasn't good art in the first place and that is not the fault of the machine. Even big pharma doesn't get to outright ban generic medications (even tho they certainly tried)... it is patently fucking absurd to decry artist's lack of a state-enforced monopoly on their work. Why do you think we should extend such a radical policy towards... checks notes... tumblr artists and other commission based creators? It's not good when big companies do it for themselves through lobbying, it wouldn't be good to do it for "the little guy," either. The real artists working in industry don't want to change the law this way because they know it doesn't work in their favor. Disney's lawsuit is in the interest of Disney and big capital, not artists themselves, despite what these large conglomerates that trade in IPs and dreams might try to convince the art world writ large of.

  • If I understand correctly, they are ruling that you can buy a book once and redistribute the information to as many people as you want without consequences. I.e., one student should be able to buy a textbook and redistribute it to all other students for free. (Yet the rules only work for companies, apparently, as the students would still be committing a crime.)

    They may be trying to put safeguards in place so it isn't directly happening, but here is an example where the text is there word for word:

    Not at all true. AI doesn't just reproduce content it was trained on, on demand.

  • My interpretation was that AI companies can train on material they are licensed to use, but the courts have deemed that Anthropic pirated this material as they were not licensed to use it.

    In other words, if Anthropic bought the physical or digital books, it would be fine so long as their AI couldn't spit it out verbatim, but they didn't even do that, i.e. the AI crawler pirated the book.

    Does buying the book give you license to digitise it?

    Does owning a digital copy of the book give you license to convert it into another format and copy it into a database?

    Definitions of "Ownership" can be very different.

  • This was my understanding also, and why I think the judge is bad at their job.

    I suppose someone could develop an LLM that digests textbooks, rewords the text, and spits it back out, then distribute it for free, page for page. I don't think you can copyright the math problems, so if the text's wording is what makes it copyrightable, that would have been changed.

  • I joined lemmy specifically to avoid this reddit mindset of jumping to conclusions after reading a headline

    Guess some things never change...

    Well, to be honest, lemmy is less prone to knee-jerk reactionary discussion, but on a handful of topics it is virtually guaranteed to happen no matter what, even here. For example, this entire site, besides a handful of communities, is vigorously anti-AI; in the words of u/jsomae@lemmy.ml elsewhere in this comment chain:

    "It seems the subject of AI causes lemmites to lose all their braincells."

    I think there is definitely an interesting take on the sociology of the digital age in here somewhere but it's too early in the morning to be tapping something like that out lol

  • You're getting douchevoted because on lemmy any AI-related comment that isn't negative enough about AI is the Devil's Work.

    Some communities on this site speak about machine learning exactly how I see grungy Europeans from pre-18th century manuscripts speaking about witches, Satan, and evil... as if it is some pervasive, black-magic miasma.

    As someone who is in the field of machine learning academically/professionally, it's honestly kind of shocking, and it has largely informed my opinion of society at large as an adult. No one puts any effort into learning if they see the letters "A" and "I" in all caps next to each other; they immediately turn their brain off and start regurgitating points, responding reflexively, on Lemmy or otherwise. People talk about it so confidently while being so frustratingly unaware of their own ignorance on the matter, which, for lack of a better comparison... reminds me a lot of how, historically and in fiction, human beings have treated literal magic.

    That's my main issue with the entire swath of "pro vs anti AI" discourse... all these people treating something that, to me, is simple & daily reality as something entirely different than my own personal notion of it.

  • You can “use” them to learn from, just like “AI” can.

    What exactly do you think AI does when it “learns” from a book, for example? Do you think it will just spit out the entire book if you ask it to?

    I am educated on this. When an AI learns, it takes an input through a series of functions that are joined at the output. The functions that produce the best output are developed further. Individuals do not process information like that. With poor exploration and biasing, the output of an AI model can look identical to its input. It did not "learn" any more than a downloaded video run through a compression algorithm did.
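
    A minimal sketch of the loop described above, with made-up numbers; a single function w*x + b stands in for the "series of functions", and the parameters that reduce the error are the ones "developed further":

    ```python
    # Illustrative training loop (made-up numbers, no particular model):
    # run inputs through a parameterized function, score the output, and
    # nudge the parameters that reduce the error.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 1))   # inputs
    y = 3.0 * x + 0.5               # target outputs

    w, b = 0.0, 0.0                 # parameters of the stand-in function w*x + b
    lr = 0.1
    for step in range(200):
        pred = w * x + b            # forward pass
        err = pred - y
        # Gradient step: adjust parameters in the direction that improves the output.
        w -= lr * (2 * err * x).mean()
        b -= lr * (2 * err).mean()

    print(w, b)  # approaches 3.0 and 0.5: statistics of the data, not a copy of it
    ```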

  • LLMs don’t learn, and they’re not people. Applying the same logic doesn’t make much sense.

    The judge isn't saying that they learn or that they're people. He's saying that training falls into the same legal classification as learning.

  • Your very first statement, calling the basis for my argument incorrect, is itself incorrect lol.

    LLMs “learn” things from the content they consume. They don’t just take the content in wholesale and keep it there to regurgitate on command.

    On your last part: unless someone uses AI to recreate the tone etc. of a best-selling author, and then markets their book/writing as being from said best-selling author, and doesn't use trademarked characters etc., there's no issue. You can't copyright a style of writing.

    If what you are saying is true, why were these "AIs" incapable of rendering a full wine glass? They "know" the concept of a full glass of water, but because of humanity's social pressures (a full wine glass being the epitome of gluttony), artwork did not depict a full wine glass. No matter how AI prompters demanded it, the model was unable to link the concepts until such an image was literally created for it to regurgitate. It seems "AI" doesn't really learn, but regurgitates art in collages of taken assets, smoothed over at the seams.

  • I suppose someone could develop an LLM that digests textbooks, rewords the text, and spits it back out, then distribute it for free, page for page. I don't think you can copyright the math problems, so if the text's wording is what makes it copyrightable, that would have been changed.

    If a human did that it’s still plagiarism.

  • What a bad judge.

    Why? Basically, he simply stated that you can use whatever material you want to train your model, as long as you ask permission from the author (or copyright holder) to use it (and presumably pay for it).

    "Fair use" is the exact opposite of what you're saying here. It says that you don't need to ask for any permission. The judge ruled that obtaining illegitimate copies was unlawful but use without the creators consent is perfectly fine.

  • Not at all true. AI doesn't just reproduce content it was trained on, on demand.

    It can; the only thing stopping it is that it is specifically told not to, and that this check succeeds. It is completely capable of plagiarizing otherwise.

  • Gist:

    What's new: The Northern District of California has granted a summary judgment for Anthropic, finding that the training use of the copyrighted books and the print-to-digital format change were both "fair use" (full order below). However, the court also found that the pirated library copies that Anthropic collected could not be deemed training copies, and therefore the use of this material was not "fair". The court also announced that it will hold a trial on the pirated copies and any resulting damages, adding:

    “That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages.”

    Formatting thing: if you start a new paragraph with four spaces, the site assumes that you want to display the text as code and won't line-wrap it.

    This means that the last part of your comment is one long line that people have to scroll to read. If you remove one of the spaces, or remove the empty line between it and the previous paragraph, it'll look like a normal comment.

    With an empty line of space:

    1 space - and a little bit of writing just to see how the text will wrap. I don't really have anything that I want to put here, but I need to put enough here to make it long enough to wrap around. This is likely enough.

    2 spaces - and a little bit of writing just to see how the text will wrap. I don't really have anything that I want to put here, but I need to put enough here to make it long enough to wrap around. This is likely enough.

    3 spaces - and a little bit of writing just to see how the text will wrap. I don't really have anything that I want to put here, but I need to put enough here to make it long enough to wrap around. This is likely enough.

    4 spaces -  and a little bit of writing just to see how the text will wrap. I don't really have anything that I want to put here, but I need to put enough here to make it long enough to wrap around. This is likely enough.
    
  • If a human did that it’s still plagiarism.

    Oh, I agree it should be, but following the judge's ruling, I don't see how it could be. You trained an LLM on textbooks that were purchased, not pirated, and the LLM distributed the responses.

    (Unless you mean the human reworded them; then yeah, we aren't special, apparently.)

  • So I can't use any of these works because it's plagiarism but AI can?

    Why would it be plagiarism if you use the knowledge you gain from a book?

  • Does buying the book give you license to digitise it?

    Does owning a digital copy of the book give you license to convert it into another format and copy it into a database?

    Definitions of "Ownership" can be very different.

    You can digitize the books you own; you do not need a license for that. And of course you can put that digital copy into a database, as databases are explicit exceptions in copyright law. If you want to go to the extreme: delete the first copy, and then you have it only in the database. However, AIs/LLMs are not based on databases but on neural networks. The original data gets lost when it is "learned".
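
    A toy contrast between the two, with invented data: a database hands back the exact record it stored, while a fitted model keeps only parameters distilled from the data (a real LLM checkpoint is vastly larger, but the point about parameters versus records is the same):

    ```python
    # Hypothetical contrast: exact retrieval vs. distilled parameters.
    import numpy as np

    # A database stores and returns the exact text it was given.
    db = {"page_1": "It was the best of times, it was the worst of times."}
    print(db["page_1"])  # the verbatim copy comes back out

    # A fitted model keeps only parameters; here, two floats summarize
    # four (invented) data points, and the originals are not recoverable.
    pages = np.array([1.0, 2.0, 3.0, 4.0])
    words = np.array([310.0, 295.0, 305.0, 300.0])  # words counted per page
    slope, intercept = np.polyfit(pages, words, 1)
    print(slope, intercept)
    ```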

  • Oh, I agree it should be, but following the judge's ruling, I don't see how it could be. You trained an LLM on textbooks that were purchased, not pirated, and the LLM distributed the responses.

    (Unless you mean the human reworded them; then yeah, we aren't special, apparently.)

    Yes, on the second part. Just rearranging or replacing words in a text is not transformative, which is a requirement. There is an argument that "AI" is capable of doing transformative work, but the tokenizing and weighting process is not magic, and in my use of multiple LLMs they do not understand the material any more than a dictionary understands the material printed on its pages.

    An example was the wine glass problem. Art "AIs" were unable to display a wine glass filled to the top. No matter how they were prompted, or what style they aped, they would fail to do so and report back that the glass was full. But they could render a full glass of water. They didn't understand what a full glass was, not even for the water. How was this possible? There was very little art of a full wine glass, because society has an unspoken rule that a full wine glass is the epitome of gluttony; it is to be savored, not drunk. Whereas references to full glasses of water were abundant. The model doesn't know what "full" means, just that pictures of a full glass of water are tied to the phrases "full", "glass", and "water".
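
    A minimal sketch of that frequency-bias argument, with made-up corpus counts (real image models are far more complex; this only shows how "the most frequent depiction wins" could play out):

    ```python
    # Made-up depiction counts standing in for a training corpus: if "full"
    # almost never co-occurs with "wine glass", a purely statistical
    # generator keeps producing the common depiction.
    from collections import Counter

    corpus = Counter({
        ("water glass", "full"): 9000,
        ("water glass", "half full"): 1000,
        ("wine glass", "half full"): 9900,
        ("wine glass", "full"): 100,   # the gluttony taboo: barely depicted
    })

    def generate(subject):
        """Return the most frequently seen depiction of a subject."""
        options = {fill: n for (s, fill), n in corpus.items() if s == subject}
        return max(options, key=options.get)

    print(generate("water glass"))  # "full": plenty of training examples
    print(generate("wine glass"))   # "half full", even if you asked for full
    ```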

  • If what you are saying is true, why were these "AIs" incapable of rendering a full wine glass? They "know" the concept of a full glass of water, but because of humanity's social pressures (a full wine glass being the epitome of gluttony), artwork did not depict a full wine glass. No matter how AI prompters demanded it, the model was unable to link the concepts until such an image was literally created for it to regurgitate. It seems "AI" doesn't really learn, but regurgitates art in collages of taken assets, smoothed over at the seams.

    Copilot did it just fine
