
Why so much hate toward AI?

Technology
  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

  • Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to answer is "why should we continue to pay wages?"
    That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/..., the data you input can and WILL eventually be used against you.

    If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.

  • There is no AI.

    What's sold as an expert is actually a delusional graduate.

  • Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

    Both.

  • I can only speak as an artist.

    Because its entire functionality is based on theft. Companies are stealing people's work and profiting off it with no payment to the artists whose works their platforms are built on.

    You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I'd rightly be sued. On top of that, AI copies so directly that it reproduces the watermarks of the original artists.

    Fuck AI

  • AI companies constantly need new training data, and they strain open infrastructure with high-volume requests. While they take everything they can out of others' work, they give nothing back. It's literally asocial behaviour.

  • It's easy to deny it's built on stolen content and difficult to prove. AI companies know this, and they've still been caught stealing shitty drawings from children and buying user data that should've been private.

  • AI companies constantly need new training data, and they strain open infrastructure with high-volume requests. While they take everything they can out of others' work, they give nothing back. It's literally asocial behaviour.

    What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

  • Karma farming, like everything on any social network, centralized or decentralized. I'm not exactly enthusiastic about AI, but I can tell it has its use cases (with caution). AI itself is not the problem. Most likely, the corporations behind it are (their practices are not always transparent).

  • On top of everything else people have mentioned, it's profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI on the receiving end to turn those emails back into a summary.

    At that point just let me ditch the formality and send over the summary in the first place.

    But more generally, I don't have an issue with "AI", just generative AI. And I have a huge issue with it being touted as an oracle of knowledge when it isn't. It's dangerous to view it that way. Right now some of us are "okay" at differentiating real information from hallucinations, but many people aren't, and it will only get worse as people get complacent and AI gets better at hiding its mistakes.

    Part of this is the natural evolution of technology, and I'm sure the situation will improve, but it's being pushed so hard in the meantime that it's making the problem worse.

    The first GPT models were initially kept private for being "too dangerous", and they weren't even as "good" as the modern ones. I wish we could go back to those days.

  • Wasn't there the same question here yesterday?

  • It's easy to deny it's built on stolen content and difficult to prove. AI companies know this, and they've still been caught stealing shitty drawings from children and buying user data that should've been private.

    It's honestly ridiculous too. Imagine saying that your whole business model is shooting people, and that if you're not allowed to shoot people it'll crash. So when accused of killing people, you go "nuh-uh" and hide the weapons you did it with, and the legal system is okay with that.

    It’s all so stupid.

  • Wasn't there the same question here yesterday?

    Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

  • Especially in coding?

    Actually, that's where they're least suited. Companies will spend more money cleaning up bad code bases (not least from a security point of view) than they gain from "vibe coding".

    Audio, art - anything that doesn't need "bit perfect" output is another thing though.

  • Since several people have already answered your questions, I'll just clarify a few points.

    Not all countries consider AI training on copyrighted material theft. For example, Japan has allowed AI to be trained on copyrighted material since 2019, which is strange because that country is known for its strict copyright laws.

    Also, saying that AI can't or won't harm society sells. Not that I deny the consequences of this technology, but that reassurance only holds if AI doesn't get better; if it does, it could prove counterproductive.

  • My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connection?

    On top of these, there's also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

  • My skepticism is because it's kind of trash for general use. I see great promise in specialized AI, stuff like DeepFold or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

    But I don't think it should be in everything. Google shouldn't be sticking LLM summaries at the top: they hallucinate, so I need to check the veracity anyway. In medicine it can help double-check, but it can't be the doctor. It's just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

  • What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

    Cool, everyone can already use the websites they scraped the data from.

    Also, anyone can use open-weights models? Even people without beefy systems? Please...
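
    (For context, a minimal sketch of what running an open-weights model locally involves, assuming the llama-cpp-python bindings and a quantized GGUF file downloaded beforehand; the model name and path here are made up for illustration. Even a 4-bit-quantized 7B model typically wants 4-6 GB of RAM, so "anyone can use it" depends heavily on hardware:)

        # Assumes: pip install llama-cpp-python, plus a quantized GGUF model on disk.
        from llama_cpp import Llama

        # Loading maps several GB into memory even for a "small" 7B model.
        llm = Llama(model_path="./models/example-7b-q4.gguf", n_ctx=2048)

        out = llm("Q: What is an open-weights model? A:", max_tokens=64, stop=["Q:"])
        print(out["choices"][0]["text"])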

  • Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

    According to the modlog, it was removed under Rule #2.

  • Especially in coding?

    Actually, that's where they're least suited. Companies will spend more money cleaning up bad code bases (not least from a security point of view) than they gain from "vibe coding".

    Audio, art - anything that doesn't need "bit perfect" output is another thing though.

    There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot (a sketch of that failure mode follows below).

    With art, it can't really be controlled enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.
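
    (A minimal sketch of that PATH failure mode, assuming the tutorial's mistake was overwriting PATH instead of appending to it; the exact botched command isn't in the comment, so this is an assumption:)

        import os
        import shutil

        # Appending preserves existing command lookup:
        os.environ["PATH"] += os.pathsep + "/opt/mytool/bin"
        print(shutil.which("ls"))  # still found

        # Overwriting PATH (the classic tutorial mistake) hides everything else:
        os.environ["PATH"] = "/opt/mytool/bin"
        print(shutil.which("ls"))  # None: basic commands "disappear"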
