An earnest question about the AI/LLM hate

Technology
  • Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

    One thing I can't understand is the level of acrimony toward LLMs. I see things like "stochastic parrot", "glorified autocomplete", etc. If you need an example, the comments section for the post on Apple saying LLMs don't reason is a doozy of angry people: https://infosec.pub/post/29574988

    While I didn't expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response with this crowd. It's a tool that has gone from interesting (GPT3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.

    So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

    To me, it's not the tech itself, it's the fact that it's being pushed as something it most definitely isn't. They're grifting hard to stuff an incomplete feature down everyone's throats, while using it to datamine the everloving spit out of us.

Truth be told, I'm genuinely excited about the concept of AGI and the potential of what we're seeing now. I also believe AGI will ultimately be our progeny and should be treated as such, as a being in itself; while we aren't yet capable of creating that, we should still keep it in mind and mould our R&D around that principle. So, in addition to being disgusted by the current-day grift, I'm also deeply disappointed to see these people behaving this way, like madmen and cultists. And as a further note, looking at our species' approach toward anything it sees as Other doesn't really make me think humanity, as we are now, would make adequate parents for any type of AGI either.

    The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That's my problem.

  • For me personally, the problem is not so much LLMs and/or ML solutions (both of which I actively use), but the fact that this industry is largely led by American tech oligarchs. Not only are they profoundly corrupt and almost comically dishonest, but they are also true degenerates.

  • I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is "the answer." After a while, it gets old, and that anxiety mixes with annoyance -- a perfect cocktail of animosity.

    And AI stole em dashes from me, but that's a me-problem.

    Yeah, fuck this thing with em dashes… I used them constantly, but now, it’s a sign something was written by an LLM!??!?

    Bunshit.

  • Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

I am truly impressed that you managed to replace a desktop operating system with a mobile OS that doesn't even come in an x86 variant (Lineage, that is; I'm aware Android has been ported).

I smell bovine faeces. Or are you, in fact, an LLM?

    Lineage sounds a lot like "Linux." Take it easy on the lad.

  • I personally just find it annoying how it's shoehorned into everything regardless of whether it makes sense to be there, without the option to turn it off.

    I also don't find it helpful for most things I do.

  • My hypothesis from the start is that people were on a roll with the crypto hate (which was a lot less ambiguous, since there were fewer legitimate applications there).

Then the AI gold rush hit, and both investors and haters smoothly rolled onto that and transferred over a lot of the same discourse. It helps that AI bros overhyped the crap out of the tech, but the carryover hate was also entirely unwilling to acknowledge any kind of nuance from the get-go.

So now you have a bunch of people with significant emotional capital baked into the idea that genAI is fundamentally a scam and/or a world-destroying misstep, who have a LOT of face to lose by conceding even a sliver of usefulness or legitimacy to the thing. They are not entirely right... but not entirely wrong, either, so there you go: the perfect recipe for an eternal culture war.

    Welcome to discourse and public opinion in the online age. It kinda sucks.

  • Agree, the last people in the world who should be making AGI are the ones making it. Rabid techbro nazi capitalist fucktards who feel slighted they missed out on (absolute, not wage) slaves and want to make some. Do you want Terminators? Because that's how you get Terminators. Something with so much positive potential that is also an existential threat needs to be treated with far more respect.

  • Emotional? No. Rational.

Use of AI is proving to be a bad idea for many reasons that have been raised by people who study this kind of thing. There's nothing I can tell you that has any more validity than the experts' opinions. Go see.

  • Calm down. They never said anything about the two things happening on the same device.

  • We're outsourcing thinking to a bullshit generator controlled by mostly American mega-corporations that have repeatedly demonstrated they want to do us harm, burning through scarce resources and leaving creative humans robbed and unemployed in the process.

    What's not to hate.

  • Lineage sounds a lot like "Linux." Take it easy on the lad.

    Could also be two separate things? I have a) dumped Windows and b) installed Lineage.

  • Okay, so imagine for a second that somebody just invented voice-to-text, and everyone trying to sell it to you lies about it, claiming it can read your thoughts and that nobody will ever type things manually again.

The people trying to sell us LLMs lie about how they work and what they actually do. They generate text that looks like a human wrote it. That's all they do. There are some interesting attributes of this behavior, namely that when prompted with text that's a question, the LLM will usually end up generating text that amounts to an answer. The LLM doesn't understand any part of this process any better than your phone's autocorrect; it's just really good at generating text that looks like the stuff it's seen in training.

    Depending on what exactly you want this thing to do, it can be extremely useful or a complete scam. Take code generation, for example. By and large they can generate code mostly okay; I'd say they tend to be slightly worse than a competent human. Genuinely really impressive for what it is, but it's not revolutionary. Basically the only actual use case for this tech so far has been glorified autocomplete. It's kind of like NFTs or crypto at large: there is actual utility there, but nobody who's trying to sell the idea to you is actually involved in or cares about that part; they just want to trick you into becoming their new money printer.
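    The "generating text that looks like stuff it's seen in training" point can be made concrete with a toy sketch. This is a deliberately tiny bigram model with a hand-made probability table, purely illustrative; real LLMs use neural networks over vast vocabularies, but the autoregressive loop (pick a likely next token, append, repeat) is the same shape:

```python
import random

# Hand-made bigram table: for each word, the possible next words and
# their probabilities. A real model learns these from training data.
BIGRAMS = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt: str, max_tokens: int = 4, seed: int = 0) -> str:
    """Autoregressive sampling: repeatedly append a likely next token."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(max_tokens):
        options = BIGRAMS.get(tokens[-1])
        if not options:  # no known continuation: stop generating
            break
        words = list(options)
        weights = [options[w] for w in words]
        tokens.append(rng.choices(words, weights=weights)[0])
    return " ".join(tokens)

print(generate("the"))
```

    Note there is no "understanding" anywhere in the loop: the model only ever answers "what word tends to come next?", which is exactly the autocomplete criticism.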

  • He dumped Windows (for Linux) and installed LineageOS (on his phone).

    OP likely has two devices.

  • Fraking toaster…

  • Could also be two separate things? I have a) dumped Windows and b) installed Lineage.

    Very well could be!

  • To me, it is the loss of meaningful work.

A lot of people have complained, "why take artists' and coders' jobs -- make AI take the drudgery-filled work first and leave us the art and writing!" The problem is: automation already came for those jobs. In 90% of jobs today, the job CAN be automated, no AI needed. It just costs more to automate it than to pay a minimum-wage worker. That means anyone who works those jobs isn't ACTUALLY doing those jobs. They are instead saving their employer the difference between their pay and the cost of automating it.

Before genAI came, there were a few jobs that couldn't be automated. Those people thought they not only had job security, but that they were the only people actually producing things of value. They were the ones who weren't just saving a boss a buck. Then genAI came. Why write a book, code a program, or paint a painting if some program can do the same? Oh, it is better? More authentic? It is surprising how much of the population doesn't care. And AI is getting better -- poisoned training data and the loss of its users' critical thinking skills notwithstanding.

Soon, the only thing a worker can be proud of in their work is how much money they saved their employer, and for most people that isn't meaning enough. Something's got to change.

  • Said it better than I did, this is exactly it!

Right now, it's like watching everyone cheer as the obvious villain develops nuclear weapons.

  • how to fully dump Windows and install LineageOS.

Are you fucking Moses? Then how the fuck did you manage to turn your Windows machine into an Android phone?

    is the level of acrimony toward LLMs.

Good, since you apparently aren't able to use your brain, I'm gonna speed-run it real quick:

    • It's frying the planet
    • It's generating way too much slop, making it impossible to find true, non-hallucinated information
    • It's a literal PsyOp
    • It's rotting people's critical thinking
    • It's being shoved down everyone's throat, though it can't even generate simple things reliably
    • People are using it to flood FOSS projects with slop

    such an emotional response with this crowd.

It's not emotional; it's just having the same negative experience over and over and over again.

    It's a tool that has gone from interesting (GPT3) to terrifying

The only thing that's terrifying about it is people's brains rotting away their critical thinking.

  • I'm not opposed to AI research in general, or to LLMs and whatever in principle. This stuff has plenty of legitimate use-cases.

    My criticism comes in three parts:

    1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out people's critical thinking skills went right out the window. We as a society started using generative AI for utter bullshit. It's making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this and asked what the hell we are even doing here, and whether some new laws would do any good.

    2. A lot of AI stuff purports to be openly accessible research software released as open source, and stuff is published in scientific journals. But they often have weird restrictions that fly in the face of the open source definition (like how some AI models are "open source" but have a cap on users, which makes them non-open by definition). Most importantly, this research is not easily replicable. It's done by companies with ridiculous amounts of hardware, and they shift petabytes of data which they refuse to reveal because it's a trade secret. If it's not replicable, its scientific value is a little bit in question.

    3. The AI business is rotten to the core. AI businesses like to pretend they're altruistic innovators who take us to the Future. They're a bunch of hypemen, slapping barely functioning components together to try to come up with Solutions to problems that aren't even problems. Usually to replace human workers, in a way that everyone hates. Nothing must stand in their way -- not copyright, not rules of user conduct, not the social or environmental impact they're creating. If you try to apply even a little bit of reasonable regulation to this -- "hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training" -- they immediately whinge about how you're impeding the great march of human progress or some shit.

    And I'm not worried about AI replacing software engineers. That is ultimately an ancient problem - software engineers come up with something that helps them, biz bros say "this is so easy to use that I can just make my programs myself, looks like I don't need you any more, you're fired, bye", and a year later, the biz bros come back and say "this software that I built is a pile of hellish garbage, please come back and fix this, I'll pay triple". This is just Visual Basic for Applications all over again.