Why so much hate toward AI?
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
Since several people have already answered, I'll clarify some points.
Not all countries treat AI training on copyrighted material as theft. Japan, for example, has allowed AI to be trained on copyrighted material since 2019, which is strange given that country is known for its strict laws in that regard.
Also, saying that AI can't or won't harm society sells. I don't deny the consequences of this technology, but that pitch only works as long as AI doesn't get better; once it does, it becomes counterproductive.
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?
On top of those, there's also the concern of spam. AI makes it quick and cheap to flood the internet with low-effort garbage.
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I.: stuff like AlphaFold, or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.
But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top of results. They hallucinate, so I need to check the veracity anyway. In medicine, it can help double-check, but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.
So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.
-
What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.
Cool, everyone could already use the websites they scraped the data from.
Also, anyone can use open-weights models? Even people without beefy systems? Please...
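To put rough numbers on "beefy": the memory for the weights alone scales with parameter count times bytes per parameter, before you even count activations or the KV cache. A back-of-the-envelope sketch (my own illustrative arithmetic, not a benchmark):

```python
# Rough memory needed just to hold model weights in RAM/VRAM.
# Illustrative only: real usage adds activations, KV cache, and overhead.
def weight_memory_gib(params_billions: float, bytes_per_param: float) -> float:
    return params_billions * 1e9 * bytes_per_param / 2**30

for size_b in (7, 70):
    for fmt, bpp in (("fp16", 2.0), ("4-bit", 0.5)):
        print(f"{size_b}B @ {fmt}: ~{weight_memory_gib(size_b, bpp):.0f} GiB")
```

A 7B model at 4-bit fits on a modest GPU, but even quantized, a 70B model wants roughly 33 GiB just for its weights, which is why "anyone can use them" glosses over the hardware.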
-
Yes. https://infosec.pub/post/29620772
Seems someone deleted it, and now we have to discuss the same thing again.
According to the modlog it was against Rule #2.
-
Especially in coding?
Actually, that's where they are least suited. Companies will spend more money on cleaning up bad codebases (not least from a security point of view) than is gained from "vibe coding".
Audio, art: anything that doesn't need "bit perfect" output is another thing, though.
There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot.
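For anyone wondering how an "add to PATH" tutorial can do that much damage: the classic mistake is assigning PATH instead of appending to it, which hides every system binary from the shell. The tutorial was presumably editing a shell config, but here's the same failure mode sketched in Python (the /opt/mytool path is made up for illustration):

```python
import os
import shutil

# Broken tutorial pattern: *replaces* PATH wholesale, so ls, nano, sudo, etc.
# can no longer be found. In a login shell's config, every new session breaks.
broken_path = "/opt/mytool/bin"

# Correct pattern: prepend the new directory and keep everything else.
fixed_path = "/opt/mytool/bin" + os.pathsep + os.environ.get("PATH", "")

# shutil.which accepts an explicit search path, so the difference is visible:
print(shutil.which("ls", path=broken_path))  # None: system dirs are gone
print(shutil.which("ls", path=fixed_path))   # e.g. /usr/bin/ls on Unix
```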
With art, it can't really be controlled precisely enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
Not much to win with.
A fake bubble of broken technology that's not capable of doing what is advertised; it's environmentally destructive; it's used for identification and genocide; it threatens and actually takes jobs; and it concentrates money and power with the already wealthy.
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
Because of studies like https://arxiv.org/abs/2211.03622:
Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
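For a concrete picture of what "less secure" looks like, here's a minimal sketch (my own illustration in Python; it's not code from the paper): building a SQL query by string formatting is exactly the kind of shortcut an assistant happily produces, and it falls to textbook injection, while the parameterized version doesn't.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

def lookup_unsafe(name: str):
    # Vulnerable: user input is spliced straight into the SQL text, so an
    # input like "' OR 1=1 --" rewrites the query and dumps every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def lookup_safe(name: str):
    # Safe: the driver binds the value as data, never as SQL syntax.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(lookup_unsafe("' OR 1=1 --"))  # [('alice', 'hunter2')]: injected
print(lookup_safe("' OR 1=1 --"))    # []: input treated as a literal name
```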
-
My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I.: stuff like AlphaFold, or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.
But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top of results. They hallucinate, so I need to check the veracity anyway. In medicine, it can help double-check, but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.
So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.
Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone thousands of years ago made them.
So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything in any museum. The beauty of some things is not that they were generated. It’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.
-
Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to solve is "why should we continue to pay wages?".
That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/..., the data you input can and WILL eventually be used against you. If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.
Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"
-
Not much to win with.
A fake bubble of broken technology that's not capable of doing what is advertised; it's environmentally destructive; it's used for identification and genocide; it threatens and actually takes jobs; and it concentrates money and power with the already wealthy.
It's either broken and not capable, or it takes jobs.
You can't be both useless and destroying jobs at the same time.
-
My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?
On top of those, there's also the concern of spam. AI makes it quick and cheap to flood the internet with low-effort garbage.
The industrial revolution called; they want their argument against the use of automated looms back.
-
It's either broken and not capable, or it takes jobs.
You can't be both useless and destroying jobs at the same time.
Have you never had a corporate job? A technology can be very much useless, yet incompetent 'managers' who believe it can do better than humans WILL buy the former to get rid of the latter, even though that's a stupid thing to do, just to meet their yearly targets and other similarly idiotic measures of division/team 'productivity'.
-
Because of studies like https://arxiv.org/abs/2211.03622:
Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
Seems like this is a good argument for specialization. Have AI make bad but fast code, and pay specialists to improve it and make it secure when needed. My 2026 Furby with no connection to the outside world doesn't need secure code; it just needs to make kids smile.
-
It's either broken and not capable, or it takes jobs.
You can't be both useless and destroying jobs at the same time.
And yet AI pulls through and somehow manages to do both.
-
Seems like this is a good argument for specialization. Have AI make bad but fast code, and pay specialists to improve it and make it secure when needed. My 2026 Furby with no connection to the outside world doesn't need secure code; it just needs to make kids smile.
They're called programmers, and it's faster and less expensive all around to just have humans do it better the first time.
-
There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot.
With art, it can't really be controlled precisely enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.
And also chatbot-generated bug reports (as with curl) and entire open source projects (I guess for some stupid crypto scheme).
-
It's either broken and not capable, or it takes jobs.
You can't be both useless and destroying jobs at the same time.
It's not AI taking your job, it's your boss. All they need to believe is that a language-shaped noise generator can do the work; it doesn't matter whether it actually can (it can't). Then the business either suffers greatly or hires people back (like Klarna).
-
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
It's a massive new disruptive technology, and people are scared of what changes it will bring. AI companies are putting out tons of propaganda, both claiming AI can do anything and fear-mongering that it's going to surpass and subjugate us, which feeds that same narrative.
Also, there's so much focus on democratizing content creation, which is at best a very mixed bag, and little attention is given to collaborative uses (which I think is where AI shines), because those are much harder to demonstrate and they demand critical thinking skills and underlying knowledge.
In short, everything AI is hyped as is a lie, and that's all most people see. When you're poking around with it, you're most likely to just ask it to do something for you: write a paper, create a picture, whatever. The results won't impress anyone actually good at those things, but they'll impress the fuck out of people who don't know any better.
This simultaneously reinforces two things to two different groups: that AI is utter garbage, and that AI is smarter than half the people you know and is going to take all the jobs.
-
But but, now the idea man can vibe-code. This shit destroys the separation between management and the codebase, making it the perfect anti-productivity tool.