
Judge dismisses authors' copyright lawsuit against Meta over AI training

  • Hope we can refactor this whole copyright/patent concept soon.

    It's more of a pain for artists, creators, releasers, etc.

    I see it in EDM: I run a label and sometimes produce a bit myself.

    Most artists work with samples, presets, etc., and keeping track of who worked on what and who owns what percentage of what just takes the joy out of creating.

    Same for game design: you have a vision for your game, make a proof of concept, and then have to change the whole game because of stupid patent shit not allowing you to, e.g., land on a horse and immediately ride it, or throw stuff at things to catch them…

    I'm inclined to agree. I hate AI, and I especially hate seeing artists and other creatives get shafted, but I'm increasingly doubtful that copyright is an effective way to ensure they get their fair share (whether we're talking about AI or otherwise).

  • Bad judgement.

    Any reason to say that other than that it didn't give the result you wanted?

  • I'm inclined to agree. I hate AI, and I especially hate seeing artists and other creatives get shafted, but I'm increasingly doubtful that copyright is an effective way to ensure they get their fair share (whether we're talking about AI or otherwise).

    In an ideal world there would be something like a universal basic income, which would reduce the pressure on artists to generate enough income from their art. That would let artists make art that is less mainstream and more unique, and would, in my opinion, make it possible to weaken copyright laws.

    Well, that would be the way I would try to start change.

  • Ah, the Schrödinger's LLM - always hallucinating and also always accurate

    "hallucination refers to the generation of plausible-sounding but factually incorrect or nonsensical information"

    Is an output a hallucination when the training data involved in that output included factually incorrect data? Suppose my input is "is the world flat" and then an LLM, allegedly, accurately generates a flat-earther's writings saying it is.

    Your last paragraph would be an ideal solution in an ideal world, but I don't think anything like that could ever happen within the current political and economic structures.

    First, it's super easy to hide all of this, and enforcement would be very difficult even domestically. Second, because we're in an AI race, no one would ever put themselves at such a disadvantage unless there's real damage, not just economic copyright juggling.

    People need to come to terms with these facts so we can address the real problems, rather than blowing against the wind with all the whining we see on Lemmy. There are actual things we can do.

    One way I could see this being enforced is by mandating that AI models not respond to questions that could lead to speaking about a copyrighted work, similar to how mainstream models won't touch vulgar or controversial topics (a rough sketch of what such a filter might look like follows below).

    But yeah, realistically, it's unlikely that any judge would rule that way.
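    For illustration only, a naive version of such a filter could look like the sketch below. The blocklist, the refusal message, and the model_reply stub are all hypothetical; real guardrails are trained classifiers, not substring checks.

    ```python
    # Hypothetical sketch of a blocklist-style guardrail.
    BLOCKED_WORKS = ["harry potter", "a game of thrones"]  # invented examples

    def model_reply(query: str) -> str:
        # Stand-in for a call to an actual LLM.
        return f"(model answer to: {query})"

    def guarded_reply(query: str) -> str:
        # Refuse anything that names a blocklisted work; answer otherwise.
        if any(title in query.lower() for title in BLOCKED_WORKS):
            return "Sorry, I can't discuss that copyrighted work."
        return model_reply(query)

    print(guarded_reply("Summarize Harry Potter chapter one"))  # refused
    print(guarded_reply("What's the weather like?"))            # answered
    ```

    Anything subtler than an exact title match (paraphrases, translations) would slip straight through, which is part of why such a mandate would be hard to enforce.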

  • This post did not contain any content.

    It sounds like the precedent has been set

  • This post did not contain any content.

    Grab 'em by the intellectual property! When you're a multi-billion dollar corporation, they just let you do it!

  • This post did not contain any content.

    I’ll leave this here from another post on this topic…

  • Ah, the Schrödinger's LLM - always hallucinating and also always accurate

    Accuracy and hallucination are two ends of a spectrum.

    If you turn the hallucination down to a minimum, the LLM will faithfully reproduce what's in the training set, but the result won't fit the query very well.

    The other option is to turn the so-called temperature up, which makes replies fit the query better, but the hallucinations go up with it.

    In the end it's a balance between responses that are closer to the dataset (factual) and responses that are closer to the query (creative).
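    For the curious, here is a minimal sketch of what the temperature knob does to next-token sampling. The logits are toy values for a made-up three-token vocabulary, not any real model's output.

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Pick a next-token id from raw model scores (logits)."""
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=float)
        if temperature <= 0:
            # No randomness: always the single most likely token.
            return int(np.argmax(logits))
        scaled = logits / temperature
        scaled -= scaled.max()  # shift for numerical stability
        probs = np.exp(scaled) / np.exp(scaled).sum()
        return int(rng.choice(len(probs), p=probs))

    toy_logits = [2.0, 1.0, 0.2]  # invented scores for a 3-token vocabulary
    print(sample_next_token(toy_logits, temperature=0))    # always token 0
    print(sample_next_token(toy_logits, temperature=2.0))  # varies between runs
    ```

    At temperature 0 this collapses to a plain argmax, the "closer to the dataset" end of the spectrum; raising the temperature flattens the distribution toward the "creative" end.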

  • Ah, the Schrödinger's LLM - always hallucinating and also always accurate

    There is nothing intelligent about "AI" as we call it. It parrots based on probability. If you remove the randomness value from the model, it parrots the same thing every time based on its weights, and if the weights were trained on Harry Potter, it will consistently give you giant chunks of Harry Potter verbatim when prompted.

    Most of the LLM services attempt to avoid this by adding arbitrary randomness values to churn the soup. But that is also inherently part of the cause of hallucinations, as the model cannot preserve a single correct response as always the right way to answer a given query.

    LLMs are insanely "dumb"; they're just lightspeed parrots. The claim by Meta and these other giant tech companies that it's not theft because they sprinkle in some randomness just obscures the reality: their models are derivative of the work of organizations like the BBC and Wikipedia, while also depending on the works of tens of thousands of authors to develop their corpus of language.

    In short, there was an ethical way to train these models. But that would have been slower. And the court just basically gave them a pass on theft. Facebook would have been entirely in the clear had it not stored the books in a dataset, which in itself is insane.

    I wish I knew when I was younger that stealing is wrong, unless you steal at scale. Then it's just clever business.
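    To make the "parrots the same thing every time" point concrete, here is a toy sketch in which an invented score table stands in for trained weights; nothing here is a real model.

    ```python
    # Toy stand-in for trained weights: a fixed table of next-token scores.
    # The vocabulary and the numbers are invented purely for illustration.
    VOCAB = ["wizard", "boy", "who"]
    SCORES = {"the": [0.1, 2.5, 0.3], "boy": [1.8, 0.2, 0.9]}

    def greedy_next(word: str) -> str:
        scores = SCORES[word]
        return VOCAB[scores.index(max(scores))]  # argmax: no randomness anywhere

    # With the randomness removed, the continuation never varies between runs:
    print(greedy_next("the"))  # -> 'boy', every time
    print(greedy_next("boy"))  # -> 'wizard', every time
    ```

    Real services add a sampling step on top of exactly this kind of score table, which is the "arbitrary randomness" described above.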

  • There is nothing intelligent about "AI" as we call it. It parrots based on probability. If you remove the randomness value from the model, it parrots the same thing every time based on its weights, and if the weights were trained on Harry Potter, it will consistently give you giant chunks of Harry Potter verbatim when prompted.

    Most of the LLM services attempt to avoid this by adding arbitrary randomness values to churn the soup. But that is also inherently part of the cause of hallucinations, as the model cannot preserve a single correct response as always the right way to answer a given query.

    LLMs are insanely "dumb"; they're just lightspeed parrots. The claim by Meta and these other giant tech companies that it's not theft because they sprinkle in some randomness just obscures the reality: their models are derivative of the work of organizations like the BBC and Wikipedia, while also depending on the works of tens of thousands of authors to develop their corpus of language.

    In short, there was an ethical way to train these models. But that would have been slower. And the court just basically gave them a pass on theft. Facebook would have been entirely in the clear had it not stored the books in a dataset, which in itself is insane.

    I wish I knew when I was younger that stealing is wrong, unless you steal at scale. Then it's just clever business.

    Except that breaking copyright is not stealing and never was. It's hard to believe you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".

  • Except that breaking copyright is not stealing and never was. It's hard to believe you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".

    What name do you have for the activity of making money from someone else's work or data, without their consent or any compensation? If the tech were just tech, it wouldn't need any non-consenting human input to work properly. These are just companies feeding on various types of data; if the justice system doesn't protect an author, what do you think would happen if these same models started feeding off user data instead? The tech is good; the ethics are not.

  • What name do you have for the activity of making money from someone else's work or data, without their consent or any compensation? If the tech were just tech, it wouldn't need any non-consenting human input to work properly. These are just companies feeding on various types of data; if the justice system doesn't protect an author, what do you think would happen if these same models started feeding off user data instead? The tech is good; the ethics are not.

    How do you think you make money with your work? Did your knowledge appear out of a vacuum? Ethically speaking, nothing is an "original creation of your own merit only"; everything we make is transformative by nature.

    Either way, the talk is moot, as we'll never agree on what is transformative enough to be harmful to our society unless it's a direct 1:1 copy with the direct goal of displacing the original. But that's clearly not the case with LLMs.

  • In an ideal world there would be something like a universal basic income, which would reduce the pressure on artists to generate enough income from their art. That would let artists make art that is less mainstream and more unique, and would, in my opinion, make it possible to weaken copyright laws.

    Well, that would be the way I would try to start change.

    I would go a step further and add creative grants for people. It would work similarly to the BBC and comparable broadcasters: a body gets government money and then picks creative projects it thinks are worthwhile, with a remit that goes beyond the lowest common denominator. UBI ensures that this system doesn't have a monopoly on creative output.

  • I would go a step further and add creative grants for people. It would work similarly to the BBC and comparable broadcasters: a body gets government money and then picks creative projects it thinks are worthwhile, with a remit that goes beyond the lowest common denominator. UBI ensures that this system doesn't have a monopoly on creative output.

    Agree 100%!

    We need more Kulturförderung (public arts funding)!

  • Except that breaking copyright is not stealing and never was. It's hard to believe you'd ever see copyright advocates on FOSS and decentralized networks like Lemmy; it's like people had their minds hijacked because "big tech is bad".

    Ingesting all the artwork you ever created by obtaining it illegally and feeding it into my plagiarism remix machine is theft of your work, because I did not pay for it.

    Separately, keeping a copy of this work so I can do this repeatedly is also stealing your work.

    The judge ruled the first was okay but the second was not, because the first is "transformative". Sadly, that tells me the judge, despite best efforts, does not understand how a weighted matrix of tokens works: while these companies may have some prevention steps in place now, early models showed the tech for what it was when they regurgitated text with only minor differences in word choice here and there.

    Current models have layers on top that try to prevent this, but escaping those safeguards with user input is common, and it only masks the fact that the entire model is built on the theft of others' work.
