An earnest question about the AI/LLM hate
Hello, recent Reddit convert here, and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.
One thing I can't understand is the level of acrimony toward LLMs. I see things like "stochastic parrot", "glorified autocomplete", etc. If you need an example, the comments section for the post on Apple saying LLMs don't reason is a doozy of angry people: https://infosec.pub/post/29574988
While I didn't expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response from this crowd. It's a tool that has gone from interesting (GPT3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.
So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.
-
I feel like it's more the sudden overnight hype than the technology itself. CEOs all around the world suddenly went "you all must use AI and shoehorn it into our product!". People are fatigued from constantly hearing about it.
But I think people, especially devs, don't like big changes (me included), which causes anxiety and then backlash. LLMs have caused quite a big change in the way we go about our day jobs. It's been such a big change that people are likely worried about what their careers will look like in 5 or 10 years.
Personally I find it useful as a pairing buddy: it can generate some of the boilerplate bullshit and help you through problems that might otherwise have taken longer to understand by trawling through various sites.
It is really not a big change to the way we work unless you work in a language that has very low expressiveness, like Java or Go, and we have been able to generate the boilerplate in those languages automatically for decades.
The main problem is that it is not actually useful and does not produce genuinely beneficial results, yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of good AI-generated code that didn't need heavy manual post-processing. That framing also completely ignores that reading and fixing other people's (or worse, an AI's) code is orders of magnitude harder than writing the same code yourself.
-
I know there are people who could articulate it better than I can, but my logic goes like this:
- Loss of critical thinking skills: This doesn't just apply to someone working on a software project they don't really care about. Lots of coders start in their bedroom with Notepad and some curiosity. If Copilot interrupts you with mediocre but working code, you never get the chance to learn how to solve a problem for yourself.
- Style: Code spat out by AI has a very specific style, and no amount of prompt modifiers will produce the kind of code someone designing for speed or low memory usage would write: nearly impossible to read, but solving one very specific case.
- If everyone is a coder, no one is a coder: If everyone can claim to be a coder on paper, it will be harder to find good coders. Sure, you can make every applicant do FizzBuzz or a basic sort, but that doesn't give a good opportunity to show you can actually solve a problem. It will discourage people from becoming coders in the first place. A lot of companies can actually get by with vibe coders (at least for a while), and that dries up the market for the sort of junior positions people need in order to get better and be promoted into better roles.
- When the code breaks, it takes a lot longer to understand and fix when you don't know how any of it works; worse still when you never bothered designing or completing a test plan because Cursor developed one itself, everything came back green, it pushed during a convenient downtime, and it archived all the old versions in its own internal logical structure that can't easily be undone.
Edits: Minor clarification and grammar.
I'm an empirical researcher in software engineering, and all of the points you're making are supported by recent papers in SE and/or education. We are also seeing a strong shift in our students' behavior and a declining ability to explain or justify their "own" work.
-
I recently attended an online event about using "AI" in my industry, construction.
The presenter finished with "Now is not the time to wait, but to get going, lest you want to stay behind."
She gave examples of some companies she had found that promised to help with "AI" in the process of designing structures. When I asked her whether any of these companies were willing to take on the legal risk that the designs are up to code and actually sound from an engineering perspective, she had to say no.
This sums it up for me. You get sold hype by people who don't understand (or don't say) what it is and isn't, to managers who don't understand what it is and isn't, over the heads of the people who actually understand what it is, or at least what it would need to be to be relevant. And those last people then get laid off or f*ed over in other ways, because they now have twice the work they had before: first they have to show management why the "AI" result is criminal, and then do all the regular design work anyway.
It is the same toxic dynamic as with every tech-bro hype before it. Only now it looks good at first glance and is harder to show why it is not.
This is especially dangerous when it comes to engineering.
-
Not to be snarky, but why didn’t you ask an LLM this question?
-
It is really not a big change to the way we work unless you work in a language that has very low expressiveness like Java or Go
If we include languages like C#, JavaScript/TypeScript, Python, etc., then that's a huge portion of the landscape.
Personally I wouldn't use it to generate entire features, as it will generally produce working but garbage code, but it's useful for getting boilerplate done or asking why something isn't working as expected. For example, asking it to write tests for a React component: it'll get about 80-90% of it right, with all the imports, mocks, etc.; you just need to write the actual assertions yourself (which we should be doing anyway).
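To make that 80/20 split concrete, here's a minimal sketch of the kind of test scaffold an assistant tends to produce. Everything in it is hypothetical: the UserCard component, the ./api module, and the fetchUser helper are illustrative stand-ins, not code from any project mentioned in this thread.
```typescript
// Hypothetical test scaffold for a made-up <UserCard> component.
// UserCard, ./api, and fetchUser are illustrative stand-ins.
import React from 'react';
import { render, screen, waitFor } from '@testing-library/react';
import '@testing-library/jest-dom';
import { UserCard } from './UserCard';

// Mock the data-fetching module so the component renders in isolation;
// this setup is the part the assistant usually gets right.
jest.mock('./api', () => ({
  fetchUser: jest.fn().mockResolvedValue({ name: 'Ada', role: 'admin' }),
}));

describe('UserCard', () => {
  it('renders the fetched user', async () => {
    render(<UserCard userId="42" />);

    // The assertion is the 10-20% you still write and verify yourself.
    await waitFor(() => {
      expect(screen.getByText('Ada')).toBeInTheDocument();
    });
  });
});
```
The imports, module mock, and render call are the part the tool usually nails; the assertion at the bottom is the part you still have to think about.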
I gave Claude a try last week at building some AWS infrastructure in Terraform based on a prompt for a feature set, and it was pretty bang on. Obviously it required some tweaks, but it saved a tonne of time versus writing it all out manually.
-
I work in software as a software engineer, but being replaced by an LLM any time soon is the least of my concerns.
-
I don't hate LLMs. They are just a tool, and it makes no more sense to hate an LLM than it does to hate a rock.
-
I hate the marketing and the hype for several reasons:
- You use the term AI/LLM in the post's title: there is nothing intelligent about LLMs once you understand how they work.
- The craziness about LLMs in the media, press, and business brainwashes non-technical people into thinking that there is intelligence involved and that LLMs will get better and better and solve the world's problems (possible, but if you make an informed guess, the chances are quite low within the next decade).
- All the LLM shit happening: websites automatically translating content without even asking me whether it should be translated, job losses for translators, companies hoping to get rid of experienced technical people because of LLMs (and we will have to pick up the slack after the hype).
- The lack of education in the population (and even among tech people) about how LLMs work, their limits, and their uses.
LLMs are at the same time impressive (think of the jump to GPT-4), a showcase of capitalism at its ugliest (CEOs learning that every time they say "AI" the stock price goes up 5%), helpful (generating short pieces of code, translating other languages), annoying (generated content), and even dangerous (companies with money can now literally and automatically flood the internet/news/media with more bullshit, faster).
Everything you said is great except for the rock metaphor. It's more akin to a gun, in that it's a tool made by man that has the capacity to do incredible damage, and already has on a social level.
Guns ain't just laying around on the ground, nor are LLMs. Rocks, however, are; like, it's practically their job.
-
Ethics and morality do it for me. It is insane to steal the work of millions and resell it in a black box.
The quality is lacking: it literally hallucinates garbage information and lies, which scammers now weaponize (see slopsquatting, where attackers register the package names LLMs hallucinate and fill them with malware).
Extreme energy costs and environmental damage. We could supply millions of poor people with electricity, yet we decided a sloppy AI that can't even count the letters in a word was a better use case.
The AI developers themselves don't fully understand how it works or why it responds the way it does, which means there can't be any guarantees for the quality or safety of AI responses yet.
Laws, judicial systems, and regulations are way behind; we don't have laws that can properly handle the usage or integration of AI yet.
Do note: LLMs as a technology are fascinating. AI as a tool could become fantastic. But now is not the time.
-
LLMs and generative AI will do what social media did to us, but a thousand times worse. All that plus the nightmarish capacity for pattern matching at an industrial scale. Inequalities, repression, oppression, disinformation, propaganda, and corruption will skyrocket because of it. It's genuinely terrifying.
-
My biggest issue is with how AI is being marketed, particularly by Apple. Every single Apple Intelligence commercial is about a mediocre person who is not up to the task in front of them, but asks their iPhone for help and ends up skating by. Their families are happy, their co-workers are impressed, and they learn nothing about how to handle the task on their own the next time except that their phone bailed their lame ass out.
It seems to be a reflection of our current political climate, though, where expertise is ignored, competence is scorned, and everyone is out for themselves.
-
It’s a tool that has gone from interesting (GPT3) to terrifying (Veo 3)
It's ironic that you describe your impression of LLMs in emotional terms.
-
AI becoming much more widespread isn't because it's actually that interesting; it's all manufactured, forcibly shoved into our faces. And given the negative things AI is capable of, I have an uneasy feeling about all this.
-
In addition to what everyone else has said: they're doing all this not-very-useful work of replacing humans based on unrealistic hype, and to do it they're using up SO MANY natural resources it's astonishing.
They have caused chip shortages. They are extracting all the natural water from aquifers in an area for cooling and then dumping it because it’s no longer potable. Microsoft and Google are talking about building nuclear power plants dedicated JUST to the LLMs.
They are doing this all for snake oil as others have pointed out. It’s destroying the world socially, economically, and physically. And not in a “oh cars disrupted buggy whips” kind of way; in a “the atmosphere is no longer breathable” kind of way.
-
It might be interesting to cross-post this question to !fuck_ai@lemmy.world, but brace for impact.