An earnest question about the AI/LLM hate

Technology
  • Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

    One thing I can't understand is the level of acrimony toward LLMs. I see things like "stochastic parrot", "glorified autocomplete", etc. If you need an example, the comments section for the post on Apple saying LLMs don't reason is a doozy of angry people: https://infosec.pub/post/29574988

    While I didn't expect a community of vibecoders, I am genuinely curious about why LLMs provoke such an emotional response with this crowd. It's a tool that has gone from interesting (GPT-3) to terrifying (Veo 3) in a few years, and I am personally concerned about many of the safety/control issues in the future.

    So I ask: what is the real reason this is such an emotional topic for you in particular? My personal guess is that the claims about replacing software engineers are the biggest issue, but help me understand.

    For me personally, the problem is not so much LLMs and/or ML solutions (both of which I actively use), but the fact that this industry is largely led by American tech oligarchs. Not only are they profoundly corrupt and almost comically dishonest, but they are also true degenerates.

    I think a lot of it is anxiety: being replaced by AI, the continued enshittification of the services I loved, and the ever-present notion that AI is "the answer." After a while, it gets old, and that anxiety mixes in with annoyance -- a perfect cocktail of animosity.

    And AI stole em dashes from me, but that's a me-problem.

    Yeah, fuck this thing with em dashes… I used them constantly, but now, it’s a sign something was written by an LLM!??!?

    Bullshit.

  • Hello, recent Reddit convert here and I'm loving it. You even inspired me to figure out how to fully dump Windows and install LineageOS.

    I am truly impressed that you managed to replace a desktop operating system with a mobile OS that doesn't even come in an x86 variant (LineageOS, that is; I'm aware Android has been ported).

    I smell bovine faeces. Or are you, in fact, an LLM?

    Lineage sounds a lot like "Linux." Take it easy on the lad.

  • I personally just find it annoying how it's shoehorned into everything, regardless of whether it makes sense to be there, without the option to turn it off.

    I also don't find it helpful for most things I do.

  • My hypothesis from the start is that people were on a roll with the crypto hate (which was a lot less ambiguous, since there were fewer legitimate applications there).

    Then the AI gold rush hit, and both investors and haters smoothly rolled onto that, transferring over a lot of the same discourse. It helps that AI bros overhyped the crap out of the tech, but the carried-over hate was also entirely unwilling to acknowledge any kind of nuance from the get-go.

    So now you have a bunch of people with significant emotional capital baked into the idea that genAI is fundamentally a scam and/or a world-destroying misstep, who have a LOT of face to lose by conceding even a sliver of usefulness or legitimacy to the thing. They are not entirely right... but not entirely wrong, either. So there you go: the perfect recipe for an eternal culture war.

    Welcome to discourse and public opinion in the online age. It kinda sucks.

  • To me, it's not the tech itself, it's the fact that it's being pushed as something it most definitely isn't. They're grifting hard to stuff an incomplete feature down everyone's throats, while using it to datamine the everloving spit out of us.

    Truth be told, I'm genuinely excited about the concept of AGI and the potential of what we're seeing now. I'm also one who believes AGI will ultimately be our progeny and should be treated as such, as a being in itself. While we aren't capable of creating that yet, we should still keep it in mind and mould our R&D around that principle.

    So, in addition to being disgusted by the current-day grift, I'm also deeply disappointed to see these people behaving this way: like madmen and cultists. And as a further note, looking at our species' approach toward anything it sees as Other doesn't really make me think humanity, as we are now, would be adequate parents for any type of AGI either.

    The people who own/drive the development of AI/LLM/what-have-you (the main ones, at least) are the kind of people who would cause the AI apocalypse. That's my problem.

    Agreed: the last people in the world who should be making AGI are. Rabid techbro nazi capitalist fucktards who feel slighted that they missed out on (absolute, not wage) slaves and want to make some. Do you want Terminators? Because that's how you get Terminators. Something with so much positive potential that is also an existential threat needs to be treated with far more respect.

  • Emotional? No. Rational.

    Use of AI is proving to be a bad idea for so many reasons that have been raised by people who study this kind of thing. There's nothing I can tell you that has any more validity than the experts' opinions. Go see.

  • I smell bovine faeces. Or are you, in fact, an LLM?

    Calm down. They never said anything about the two things happening on the same device.

  • We're outsourcing thinking to a bullshit generator controlled by mostly American mega-corporations that have repeatedly demonstrated they want to do us harm, burning through scarce resources and leaving creative humans robbed and unemployed in the process.

    What's not to hate?

  • Lineage sounds a lot like "Linux." Take it easy on the lad.

    Could also be two separate things? I have a) dumped Windows and b) installed Lineage.

  • Okay, so imagine for a second that somebody just invented voice-to-text, and everyone trying to sell it to you lies about it and claims it can read your thoughts and that nobody will ever type things manually ever again.

    The people trying to sell us LLMs lie about how they work and what they actually do. They generate text that looks like a human wrote it. That's all they do. There are some interesting attributes of this behavior, namely that when prompted with text that's a question, the LLM will usually end up generating text that reads as an answer. The LLM doesn't understand any part of this process any better than your phone's autocorrect; it's just really good at generating text that looks like the stuff it's seen in training.

    Depending on what exactly you want this thing to do, it can be extremely useful or a complete scam. Take code generation: by and large they can generate code mostly okay; I'd say they tend to be slightly worse than a competent human. Genuinely impressive for what it is, but not revolutionary. Basically the only actual use case for this tech so far has been glorified autocomplete. It's kind of like NFTs or crypto at large: there is actual utility there, but nobody trying to sell the idea to you is actually involved in or cares about that part; they just want to trick you into becoming their new money printer.
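
    The "glorified autocomplete" point can be made concrete with a toy sketch: a bigram Markov chain that, like an LLM at vastly smaller scale, only ever picks a statistically plausible next word. (This is an illustration of the idea, not anyone's actual implementation; the corpus and the function names are made up.)

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """For each word, record which words followed it in the corpus."""
    model = defaultdict(list)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=5, seed=0):
    """Repeatedly pick a plausible next word -- no understanding involved."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(followers))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))
```

    Scale the lookup table up from bigram counts to billions of learned parameters and you have the same basic loop an LLM runs: given what came before, emit a likely continuation.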

  • He dumped Windows (for Linux) and installed LineageOS (on his phone).

    OP likely has two devices.

  • Frakking toaster…

  • Could also be two separate things? I have a) dumped Windows and b) installed Lineage.

    Very well could be!

  • To me, it is the loss of meaningful work.

    A lot of people have complained: "Why take artists' and coders' jobs? Make AI take the drudgery-filled work first and leave us the art and writing!" The problem is, automation already came for those jobs. In 90% of jobs today, the job CAN be automated with no AI needed; it just costs more to automate it than to pay a minimum-wage worker. That means anyone who works those jobs isn't ACTUALLY doing those jobs. They are instead saving their employer the difference between their pay and the cost of automating it.

    Before genAI came, there were a few jobs that couldn't be automated. Those people thought they not only had job security, but that they were the only people actually producing things of value. They were the ones who weren't just saving a boss a buck. Then genAI came. Why write a book, code a program, or paint a painting if some program can do the same? Oh, it is better? More authentic? It is surprising how much of the population doesn't care. And AI is getting better -- poisoned training data and the loss of its users' critical thinking skills notwithstanding.

    Soon, the only thing a worker can be proud of in their work is how much money they saved their employer, and for most people that isn't meaning enough. Something's got to change.

  • Agreed: the last people in the world who should be making AGI are. Rabid techbro nazi capitalist fucktards who feel slighted that they missed out on (absolute, not wage) slaves and want to make some. Do you want Terminators? Because that's how you get Terminators. Something with so much positive potential that is also an existential threat needs to be treated with far more respect.

    Said it better than I did, this is exactly it!

    Right now, it's like watching everyone cheer on as the obvious Villain is developing nuclear weapons.

  • how to fully dump Windows and install LineageOS.

    Are you fucking Moses? Then how the fuck did you manage to turn your Windows machine into an Android phone?

    is the level of acrimony toward LLMs.

    Good. Since you apparently aren't able to use your brain, I'm gonna speed-run it real quick:

    • It's frying the planet
    • It's generating way too much slop, making it hard for anyone to find true, non-hallucinated information
    • It's a literal PsyOp
    • It's rotting people's critical thinking
    • It's being shoved down everyone's throat, though it can't even generate simple things
    • People are using it to flood FOSS projects

    such an emotional response with this crowd.

    It's not emotional, it's just having the same negative experience over and over and over again.

    It's a tool that has gone from interesting (GPT3) to terrifying

    The only thing that's terrifying about it is people's brains rotting away their critical thinking.

  • I'm not opposed to AI research in general, or to LLMs and the like in principle. This stuff has plenty of legitimate use cases.

    My criticism comes in three parts:

    1. Society is not equipped to deal with this stuff. Generative AI was really nice when everyone could immediately tell what was generated and what was not. But when it got better, it turned out that people's critical thinking skills go right out the window. We as a society started using generative AI for utter bullshit. It's making normal life weirder in ways we could hardly imagine. It would do us all a great deal of good if we took a short break from this, asked what the hell we are even doing here, and considered whether some new laws would do any good.

    2. A lot of AI stuff purports to be openly accessible research software released as open source, and stuff is published in scientific journals. But they often have weird restrictions that fly in the face of open source definition (like how some AI models are "open source" but have a cap on users, which makes it non-open by definition). Most importantly, this research stuff is not easily replicable. It's done by companies with ridiculous amount of hardware and they shift petabytes of data which they refuse to reveal because it's a trade secret. If it's not replicable, its scientific value is a little bit in question.

    3. The AI business is rotten to the core. AI businesses like to pretend they're altruistic innovators who will take us to the Future. They're a bunch of hypemen, slapping barely functioning components together to come up with Solutions to problems that aren't even problems. Usually to replace human workers, in a way that everyone hates. Nothing must stand in their way: not copyright, not rules of user conduct, not the social or environmental impact they're creating. If you try to apply even a little bit of reasonable regulation to this ("hey, maybe you should stop downloading our entire site every 5 minutes, we only update it, like, monthly, and, by the way, we never gave you permission to use this for AI training"), they immediately whinge about how you're impeding the great march of human progress or some shit.

    And I'm not worried about AI replacing software engineers. That is ultimately an ancient problem - software engineers come up with something that helps them, biz bros say "this is so easy to use that I can just make my programs myself, looks like I don't need you any more, you're fired, bye", and a year later, the biz bros come back and say "this software that I built is a pile of hellish garbage, please come back and fix this, I'll pay triple". This is just Visual Basic for Applications all over again.

  • The main reason they invoke an emotional response: they stole everything from us (humans) illegally and then used it to build a technology that aims to replace us. I don't like that.

    The second part is that I think they are shit at what people are using them for. They seem like they provide great answers, but they are far too often completely wrong and the user doesn't know. It's also annoying that they are being shoved into everything.
