AI industry horrified to face largest copyright class action ever certified
-
I wouldn't even say AI is bad. I currently have Qwen 3 running on my own GPU, giving me a course in regex and how to use it. It sometimes makes mistakes in the examples (we all know chatbots are bad at counting the r's in "strawberry"), but I see that as "spot the error" training for me, and the instructions themselves have been error-free so far; since I do the lessons myself, I can easily spot when something goes wrong.
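To illustrate what such a "spot the error" moment can look like (a made-up Python example, not taken from the actual Qwen 3 lessons), a chatbot might offer a pattern that almost does what it claims:

```python
import re

# A chatbot might claim this pattern finds all three r's in "strawberry".
# Spot the error: "r{2}" only matches two *consecutive* r's, so it
# catches the "rr" pair but misses the earlier, lone "r".
pattern = re.compile(r"r{2}")
print(pattern.findall("strawberry"))  # ['rr']

# Counting every "r" just needs the plain pattern "r":
print(len(re.findall(r"r", "strawberry")))  # 3
```

Exactly the kind of subtle mistake that is easy to catch once you run the lesson yourself.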
AI crammed into everything because venture capitalists are trying to see what sticks is probably the main reason public opinion of chatbots is bad, and I don't condone that either, but the technology itself has uses and is an impressive accomplishment.
Same with image generation: I'm shit at drawing, and I don't have the money to commission art when I want something specific, but I can generate what I want for myself.
If the copyright side wins, we all might lose the option to run image generation and LLMs on our own hardware, there will never be an open-source LLM, and resources that are important to us all will come even more under fire than they already are. Copyright holders will be the new AI companies, and without competition the enshittification will start instantly.
What you see as "spot the error" type training, another person sees as absolute fact that they internalize and use to make decisions that impact the world. The internet gave rise to the golden age of conspiracy theories, which is having a major impact on the worsening political climate, and it's because the average user isn't able to differentiate information from disinformation. AI chatbots giving people the answer they're looking for rather than the truth is only going to compound the issue.
-
This post did not contain any content.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.
And yet, despite 20 years of experience, the only side Ashley presents is the technologists' side.
-
This post did not contain any content.
I hope LLMs and generative AI crash and burn.
-
I hope LLMs and generative AI crash and burn.
I'm thinking, honestly, what if that's the planned purpose of this bubble.
Let me explain: these "AI"s involve assembling large datasets and making them available, poisoning the Web, and creating demand for a specific kind of hardware.
When it bursts, not everything bursts.
Suddenly there will be plenty of no-longer-required hardware usable for normal ML applications: face recognition, voice recognition, text analysis to identify an author, combat drones with target selection, all kinds of stuff. It will be dirt cheap compared to its current price, as it was with Sun hardware after the dotcom crash.
There will still be those datasets, which can be analyzed for plenty of purposes. Legal or not, they are already processed into a usable and convenient state.
There will be the Web covered with a Great-Wall-of-China-tall layer of AI slop.
There will likely be a bankrupt nation which will have a lot of things failing due to that.
And there will still be all the centralized services. Suppose on that day you search for something on Google, and there's only the Google summary, no results list (or maybe a results list, but suddenly weighted differently), saying that you've been owned by domestic enemies yadda-yadda and the patriotic corporations are implementing a popular state of emergency or something like that. You go to Facebook, and when you write something there, your messages are premoderated by an AI so that you can't, god forbid, say something wrong. An LLM might not be able to sustain a decent conversation, but editing out things you say, or PGP keys you send, in real time without anything appearing strange? Easily. Or changing some real person's style of speech to yours.
Suppose all the non-degoogled Android installations start doing things like that. Amazon's logistics suddenly start working to support a putsch. Facebook and WhatsApp do what I described, or just fail. Apple presents a new, magnificent, ingenious, miraculous, patriotic change to a better system of government, maybe even with Jony Ive as the speaker, and possibly does the same unnoticeable censorship. Microsoft pushes one malicious update 3 months earlier with a backdoor doing the same in every Windows installation, and commits its datacenters to the common effort. And let's just say it's possible that a similar thing is done by some Linux developer who believes in the idea, and by some of the major distributions; it wouldn't need to do much, just provide a remotely usable backdoor.
I don't list Twitter because, honestly, it doesn't seem to work well enough or have good enough coverage.
So this seems a pretty plausible apocalypse scenario, one that leads to the sudden installation of a dictatorial regime with all the necessary surveillance, planning, censorship, and enforcement already functioning.
Of course, apocalypse scenarios have been a normal thing in movies for many years, but it's funny how the more plausible they become, the less often they are depicted in art.
-
This post did not contain any content.
Fucking good!! Let the AI industry BURN!
-
The IA (Internet Archive) doesn't make any money off the content. Not that LLM companies do either, but that's what they'd want.
And this is exactly why I think the IA will be forced to close down while the AI companies that trained their models on it will not only stay but, in an ironic twist, be praised for preserving information. One side participates in capitalism and the other doesn't. They will claim AI is transformative enough even when it isn't, because the overly rich invested too much money into the grift.
-
Ah yes. "Public Domain" == "Theft"
Not everything is public domain, thief scum.
-
I propose that anyone defending themselves in court over AI stealing data must be represented exclusively by AI.
That would be glorious. If the future of your company depends on the LLM keeping track of hundreds of details and drawing the right conclusions, it's game over on the first day.
-
This post did not contain any content.
Good!!! Let the AI industry fucking burn!!!
-
Not everything is public domain, thief scum.
Do they even teach the constitution anymore?
-
What you see as "spot the error" type training, another person sees as absolute fact that they internalize and use to make decisions that impact the world. The internet gave rise to the golden age of conspiracy theories, which is having a major impact on the worsening political climate, and it's because the average user isn't able to differentiate information from disinformation. AI chatbots giving people the answer they're looking for rather than the truth is only going to compound the issue.
I agree that this has to get better in the future, but the technology is pretty young, and I'm fairly sure fixing this has high priority at those companies, since it's bad PR for them. But people are already gorging themselves on faulty info via social media; I don't see chatbots making this much worse than it already is.
-
This post did not contain any content.
Good fuck those fuckers
-
This post did not contain any content.
Now they're in the "finding out" phase of "fucking around and finding out".
-
This post did not contain any content.
We just need to show that ChatGPT and the like can generate Nintendo-based content, and then let them fight it out between themselves.
-
Fucking good!! Let the AI industry BURN!
What, um, what court system do you think is going to make that happen? Because the current one is owned by an extremely pro-AI administration. If anything gets appealed to SCOTUS, they will rule for AI.
-
Do they even teach the constitution anymore?
Not every country lives under the Divided States constitution, you entitled twat. Other countries actually live free, with actual rights.
Not that having a constitution ever meant anything to you idiots as you just sit there watching taco tits wipe his asshole with it.
-
An important note here: the judge has already ruled in this case, in the summary judgment order, that using Plaintiffs' works "to train specific LLMs [was] justified as a fair use" because "[t]he technology at issue was among the most transformative many of us will see in our lifetimes."
The plaintiffs are not suing Anthropic for infringing their copyright through training; the court has already ruled that argument so obviously could not succeed that it was dismissed. Their only remaining claim is that Anthropic downloaded the books from piracy sites using BitTorrent.
This isn't about LLMs anymore; it's a standard "you downloaded something on BitTorrent and made a company mad" type of case, the kind that has been going on since Napster.
Also, the headline is incredibly misleading. It ascribes feelings to an entire industry based on a common legal filing that is not by itself noteworthy. Unless you really care about legal technicalities, you can stop here.
The actual news, the new factual thing that happened, is that the Consumer Technology Association and the Computer and Communications Industry Association filed an amicus brief in Anthropic's appeal of an issue the court ruled against it on.
This is a pretty normal filing about legal technicalities. It isn't really newsworthy outside of, maybe, some bored people in the legal profession.
The issue was class certification.
Three people sued Anthropic. Instead of suing only on behalf of themselves, they moved to be certified as a class; that is, they wanted to sue on behalf of a larger group of people, in this case a "Pirated Books Class" of authors whose books Anthropic downloaded from book piracy websites.
The judge ruled that they can represent the class, and Anthropic appealed the ruling. During this appeal, an industry group filed an amicus brief with arguments supporting Anthropic's position. This is not uncommon; The Onion famously filed an amicus brief with the Supreme Court when it was about to rule on issues of parody. Like everything The Onion writes, it's a good piece of satire: link
-
This post did not contain any content.
Well, maybe they shouldn't have committed one of the largest violations of copyright and intellectual property ever.
Probably the largest single instance ever.
-
The site formatting broke it. Maybe it'll work as a link
Yup, seems to work
-
This post did not contain any content.
I myself don't allow my data to be used for AI, so if anyone did, they owe me a boatload of gold coins. That's just my price. Great tech, though.