
Why so much hate toward AI?

Technology
I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution?

    Both.

I'm curious about the strong negative feelings towards AI and LLMs. […]

    I can only speak as an artist.

Because its entire functionality is based on theft. Companies are stealing people's work and profiting off it, with no payment to the artists whose works the platform is built on.

You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I'd rightly be sued. On top of that, AI copies so blatantly that it even reproduces the watermarks of the original artists.

    Fuck AI

I'm curious about the strong negative feelings towards AI and LLMs. […]

AI companies constantly need new training data and strain open infrastructure with high-volume requests. While they take everything from others' work, they give nothing back. It's literally asocial behaviour.

I'm curious about the strong negative feelings towards AI and LLMs. […]

It's easy to deny it's built on stolen content and difficult to prove. AI companies know this, and they've been caught stealing shitty drawings from children and buying user data that should've been private.

AI companies constantly need new training data and strain open infrastructure with high-volume requests. While they take everything from others' work, they give nothing back. It's literally asocial behaviour.

What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

I'm curious about the strong negative feelings towards AI and LLMs. […]

Karma farming, like everything on any social network, centralized or decentralized. I'm not exactly enthusiastic about AI, but I can tell it has its use cases (with caution). AI itself is not the problem; most likely, the corporations behind it are (their practices are not always transparent).

I'm curious about the strong negative feelings towards AI and LLMs. […]

    On top of everything else people mentioned, it's so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

    At that point just let me ditch the formality and send over the summary in the first place.

But more generally, I don't have an issue with "AI", just generative AI. And I have a huge issue with it being touted as this oracle of knowledge when it isn't. It's dangerous to view it that way. Right now some of us are "okay" at differentiating real information from hallucinations, but so many people aren't, and it will only get worse as people get complacent and AI gets better at hiding them.

Part of this is the natural evolution of technology, and I'm sure the situation will improve, but it's being pushed so hard in the meantime that it's making the problem worse.

The early GPT models were kept private for being "too dangerous", and they weren't even as "good" as the modern ones. I wish we could go back to those days.

I'm curious about the strong negative feelings towards AI and LLMs. […]

    Wasn't there the same question here yesterday?

It's easy to deny it's built on stolen content and difficult to prove. AI companies know this, and they've been caught stealing shitty drawings from children and buying user data that should've been private.

It’s honestly ridiculous, too. Imagine saying that your whole business model is shooting people, and that if you’re not allowed to shoot people it’ll crash. So when you’re accused of killing people, you go “nu uh” and hide the weapons you did it with, and the legal system is okay with that.

    It’s all so stupid.

  • Wasn't there the same question here yesterday?

    Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

I'm curious about the strong negative feelings towards AI and LLMs. […]

    Especially in coding?

Actually, that's where they're least suited. Companies will spend more money cleaning up bad code bases (not least from a security point of view) than they gain from "vibe coding".

Audio, art - anything that doesn't need "bit-perfect" output is another matter, though.

I'm curious about the strong negative feelings towards AI and LLMs. […]

Since several people have already answered your questions, I'll just clarify a few points.

Not all countries consider AI training on copyrighted material theft. For example, Japan has allowed AI to be trained on copyrighted material since 2019, which is strange because that country is known for its strict copyright laws.

Also, saying that AI will harm society sells. I don't deny this technology has consequences, but that narrative only works as long as AI doesn't get better; otherwise it becomes counterproductive.

I'm curious about the strong negative feelings towards AI and LLMs. […]

My main gripes are more philosophical in nature: should we automate away certain parts of the human experience? Should we automate art? Should we automate human connections?

    On top of these, there's also the concern of spam. AI is quick enough to flood the internet with low-effort garbage.

I'm curious about the strong negative feelings towards AI and LLMs. […]

My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I.: stuff like Deepfold, or astronomy, where telescope data comes in hot and it would take years for humans to go through it all.

    But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top. It hallucinates so I need to check the veracity anyway. In medicine, it can help double-check but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

What do you mean? They give back open-weights models that anyone can use. Only the proprietary corporate AI is exploitative.

Cool, everyone can already use the websites they scraped the data from.

Also, anyone can use open-weights models? Even those without beefy systems? Please...
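For a rough sense of why "beefy" matters, here's a back-of-envelope sketch (the model sizes and quantization levels are illustrative assumptions, not any specific release):

```python
# Back-of-envelope memory needed just to hold a model's weights.
# Real usage is higher: KV cache, activations, and runtime overhead
# all add on top of the raw weights.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weight_memory_gb(params_billion: float, dtype: str) -> float:
    """Gigabytes required for the weights alone."""
    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

for size in (7, 70):                  # a small and a large hypothetical model
    for dtype in ("fp16", "int4"):
        print(f"{size}B @ {dtype}: ~{weight_memory_gb(size, dtype):.0f} GB")

# 7B  @ fp16: ~14 GB  -> already more VRAM than most consumer GPUs have
# 70B @ fp16: ~140 GB -> datacenter territory
# 70B @ int4: ~35 GB  -> still beyond a typical gaming PC
```

So "anyone can use them" in practice means anyone with serious hardware, or anyone willing to run small, heavily quantized models.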

  • Yes. https://infosec.pub/post/29620772

    Seems someone deleted it, and now we have to discuss the same thing again.

According to the modlog, it was removed for violating Rule #2.

Especially in coding?

Actually, that's where they're least suited. Companies will spend more money cleaning up bad code bases (not least from a security point of view) than they gain from "vibe coding".

Audio, art - anything that doesn't need "bit-perfect" output is another matter, though.

There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I botched the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot as such.

With art, it can't really be controlled precisely enough to be useful for much beyond a spam machine, and spammers only care about social media clout and/or ad revenue.

I'm curious about the strong negative feelings towards AI and LLMs. […]

Not much to win with.

It's a fake bubble of broken technology that isn't capable of doing what's advertised; it's environmentally destructive, it's used for identification and genocide, it threatens and actually takes jobs, and it concentrates money and power with the already wealthy.

I'm curious about the strong negative feelings towards AI and LLMs. […]

    Because of studies like https://arxiv.org/abs/2211.03622:

    Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant.
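To make the finding concrete, here's the textbook pattern that kind of study looks for; this is my own illustrative sketch, not code from the paper: a query built by string interpolation (the sort of thing assistants are often observed suggesting) next to the parameterized version.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_insecure(name: str):
    # User input interpolated straight into the SQL string:
    # a crafted name can rewrite the query (SQL injection).
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(name: str):
    # Parameterized query: the driver handles escaping, so injection fails.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(find_user_insecure(payload))  # [('alice', 0)] -- every row leaks
print(find_user_secure(payload))    # [] -- no match, as it should be
```

Both versions "work" on happy-path input, which is exactly why someone who trusts the assistant's output can believe they wrote secure code.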

My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I.: stuff like Deepfold, or astronomy, where telescope data comes in hot and it would take years for humans to go through it all.

    But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top. It hallucinates so I need to check the veracity anyway. In medicine, it can help double-check but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

    So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone made them thousands of years ago.

So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything in any museum. The beauty of some things is not that they were generated. It’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.
