
Why so much hate toward AI?

Technology
73 46 619
  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    On top of everything else people mentioned, it's so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

    At that point just let me ditch the formality and send over the summary in the first place.

    But more generally, I don't have an issue with "AI", just generative AI. And I have a huge issue with it being touted as this oracle of knowledge when it isn't. It's dangerous to view it that way. Right now we're "okay" at differentiating real information from hallucinations, but so many people aren't, and it will only get worse as people grow complacent and AI gets better at hiding them.

    Part of this is the natural evolution of technology, and I'm sure the situation will improve, but it's being pushed so hard in the meantime that it's making the problem worse.

    The first ChatGPT models were kept private for being too dangerous, and they weren't even as "good" as the modern ones. I wish we could go back to those days.

  • Wasn't there the same question here yesterday?

  • It's easy to deny it's built on stolen content and difficult to prove. And AI companies know this; they have gotten caught stealing shitty drawings from children and buying user data that should've been private.

    It’s honestly ridiculous too. Imagine saying that your whole business model is shooting people, and if you’re not allowed to shoot people then it’ll crash. So when accused of killing people, you go “nu uh” and hide the weapons you did it with, and the legal system is okay with that.

    It’s all so stupid.

  • Wasn't there the same question here yesterday?

    Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

  • Especially in coding?

    Actually, that's where they are the least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from "vibe coding".

    Audio, art - anything that doesn't need "bit perfect" output is another thing though.

  • Since several people have already explained their reasons, I'll just clarify a few points.

    Not all countries treat AI training on copyrighted material as theft. Japan, for example, has allowed AI to be trained on copyrighted material since 2019, which is surprising given that the country is known for its strict laws in that regard.

    Also, saying that AI can't or won't harm society sells. I don't deny the consequences of this technology, but that pitch only holds if AI doesn't get better, because then it could be counterproductive.

  • My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?

    On top of these, there's also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

  • My skepticism is because it's kind of trash for general use. I see great promise in specialized A.I., stuff like Deepfold or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

    But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top. It hallucinates so I need to check the veracity anyway. In medicine, it can help double-check but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

  • What do you mean? They give open-weights models back that anyone can use. Only the proprietary corporate AI is exploitative.

    Cool, everyone can already use the website they scraped the data from.

    Also, anyone can use open-weights models? Even people without beefy systems? Please...
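    For context, the "beefy systems" point comes down to simple arithmetic: just holding a model's weights in memory costs bytes-per-parameter times parameter count, before any activation or KV-cache overhead. A rough sketch (the parameter counts and precisions below are illustrative assumptions, not figures from this thread):

    ```python
    # Rough memory needed just to hold model weights in RAM/VRAM,
    # ignoring activations and the KV cache (which add more on top).
    def weight_memory_gb(params_billion: float, bytes_per_param: float) -> float:
        """Decimal gigabytes of memory for the weights alone."""
        return params_billion * 1e9 * bytes_per_param / 1e9

    print(weight_memory_gb(7, 2))    # 7B model at fp16 (2 bytes/param) -> 14.0 GB
    print(weight_memory_gb(70, 2))   # 70B model at fp16 -> 140.0 GB
    print(weight_memory_gb(7, 0.5))  # 7B model at 4-bit quantization -> 3.5 GB
    ```

    Even with aggressive 4-bit quantization, anything beyond small models quickly exceeds what a typical consumer machine can comfortably run.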

  • Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

    According to the modlog, it was removed for violating Rule #2.

  • Especially in coding?

    Actually, that's where they are the least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from "vibe coding".

    Audio, art - anything that doesn't need "bit perfect" output is another thing though.

    There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot.

    With art, it can't really be controlled precisely enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.
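    As an aside, the classic way a bad PATH tutorial breaks a system is by overwriting PATH instead of appending to it, so basic commands can no longer be found. A minimal sketch (the `/opt/mytool/bin` directory is a made-up example, not from the comment above):

    ```shell
    # WRONG (the kind of line a bad tutorial suggests): this REPLACES PATH,
    # so commands like ls, sudo, and nano stop resolving:
    #   export PATH="/opt/mytool/bin"

    # RIGHT: append the new directory, keeping the existing PATH intact.
    export PATH="$PATH:/opt/mytool/bin"

    # Sanity check: standard commands should still be reachable.
    command -v ls >/dev/null && echo "ls still found"
    ```

    Put a line like the wrong one in `~/.bashrc` or `/etc/profile` and every new shell inherits the broken PATH, which is how a one-line mistake becomes hard to recover from.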

  • Not much to win with.

    A fake bubble of broken technology that isn't capable of doing what's advertised; it's environmentally destructive, it's used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

  • Because of studies like https://arxiv.org/abs/2211.03622:

    Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

  • My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I. Stuff like Deepfold or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

    But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top. It hallucinates so I need to check the veracity anyway. In medicine, it can help double-check but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

    Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone made them thousands of years ago.

    So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything in any museum. The beauty of some things is not that they were generated; it’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.

  • Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to solve is "why should we continue to pay wages?".
    That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/..., the data you input can and WILL eventually be used against you.

    If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.

    Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

  • Not much to win with.

    A fake bubble of broken technology that isn't capable of doing what's advertised; it's environmentally destructive, it's used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

    It's either broken and not capable or takes jobs.

    You can't be both useless and destroying jobs at the same time

  • My main gripes are more philosophical in nature, but should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?

    On top of these, there's also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

    The industrial revolution called, they want their argument against the use of automated looms back.

  • It's either broken and not capable or takes jobs.

    You can't be both useless and destroying jobs at the same time

    Have you never had a corporate job? A technology can be thoroughly useless while incompetent 'managers' who believe it can do better than humans WILL still buy the former to get rid of the latter, stupid as that is, just to meet their yearly targets and other idiotic measures of division/team 'productivity'.

  • Because of studies like https://arxiv.org/abs/2211.03622:

    Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.

    Seems like this is a good argument for specialization. Have AI make bad but fast code, and pay specialists to improve it and make it secure when needed. My 2026 Furby with no connection to the outside world doesn't need secure code, it just needs to make kids smile.

  • It's either broken and not capable or takes jobs.

    You can't be both useless and destroying jobs at the same time

    And yet AI pulls through and somehow does manage to do both.
