
Why so much hate toward AI?

Technology
  • It's either broken and not capable or takes jobs.

    You can't be both useless and destroying jobs at the same time

    It can absolutely be both. Expensive competent people are replaced with inexpensive morons all the time.

    • Useless fake spam content.
    • Posting AI slop ruins the "social" part of social media. You're not reading real human thoughts anymore, just statistically plausible words.
    • Same with machine-generated "art". What's the point?
    • AI companies are leeches; they steal work for the purpose of undercutting the original creators with derivative content.
    • Vibe coders produce utter garbage that nobody, especially not they themselves, understands, and somehow they're smug about it.
    • A lot of AI stuff is a useless waste of resources.

    Most of the hate is justified IMO, but a couple of weeks ago I died on the hill of arguing that an LLM can be useful as a code-documentation search engine. Once the pile-on started, even a reply that assumed software libraries contain books got upvotes.

    Not to mention the environmental cost is astronomical. I would be very interested to know how many times out of 10 AI code is actually functional, because its success rate for every other type of generation is much lower.

  • The capitalists who own the AI thank you for fighting on their side.

    Lots of assumptions there. In case you actually care, I don't think any one company should be allowed to own the base system that allows AI to function, especially if it's trained on public content or content owned by other groups, but that's kind of immaterial here. It seems insane to villainize a technology because of who might make money off of it. These are two separate arguments (and frankly, history's beneficiaries have often been the opposite of what you would expect).

    Prior to the industrial revolution, weaving was done by hand, making all cloth either expensive or the product of sweatshops (and even then it was expensive compared to today). Case in point: many surviving pieces of historical workers' clothing were cut specifically to use every bit of a rectangular piece of fabric, because you did not want to waste any of it (today it's common to throw scraps away just because you don't like that section of the pattern).

    With the advent of automated looms several things happened:

    • the skilled workers who could operate hand looms were quickly put out of a job because the machine could do things much faster, although it required a few specialized operators to set up and repair the equipment.
    • the owners of the fabric mills that couldn't afford to upgrade either died out or specialized in fabrics that could not be made by the machines (which set up an arms race of sorts where the machine builders kept improving things)
    • the quality of fabric went down: where a hand weaver could produce a different fabric structure on a simple verbal order, it took a while for machines to manage anything beyond a plain weave (it took the Jacquard loom's punched cards, which later inspired Ada Lovelace, plus the above-mentioned arms race), and even today's looms require a different range of threads than what can be hand woven, but...
    • the cost went down so much that accessibility went through the roof. Suddenly the average pauper COULD afford to clothe their entire family with a week's worth of clothes. New industries cropped up. Health and economic mobility soared.

    This is a huge oversimplification, but history is well known to repeat itself due to human nature. Follow the bullets above with today's arguments against AI and you will see an often-ignored end result: humanity can gain more time and resources to improve the health and wellness of our population IF we use the tools. You can choose to complain that the contract worker isn't going to get paid his equivalent of $5/hr for spending 2 weeks arguing back and forth over a dog logo for a new pet store. I am going to celebrate the person who realizes they can automate a system that finds new business filings and approaches every new business in their area with a package of 20 AI-generated logos, each built from unique prompts drawn from their own experience in logo design, all while reducing their workload and making more money.

  • On top of everything else people mentioned, it's so profoundly stupid to me that AI is being pushed to take my summary of a message and turn it into an email, only for AI to then take those emails and spit out a summary again.

    At that point just let me ditch the formality and send over the summary in the first place.

    But more generally, I don't have an issue with "AI", just generative AI. And I have a huge issue with it being touted as this oracle of knowledge when it isn't. It's dangerous to view it that way. Right now some of us are "okay" at telling real information from hallucinations, but so many people aren't, and it will only get worse as people get complacent and AI gets better at hiding it.

    Part of this is the natural evolution of technology and I'm sure the situation will improve, but it's being pushed so hard in the meantime and making the problem worse.

    The early GPT models were initially kept private for being "too dangerous", and they weren't even as "good" as the modern ones. I wish we could go back to those days.

    At that point just let me ditch the formality and send over the summary in the first place.

    A bit of a tangent, but so much this. Why haven't we normalized using fewer words already?
    Why do we keep writing (some blogs, and all of content marketing) whole screens of text to convey a single sentence of real content?
    Why do we keep the useless "hello" and "regards" instead of getting straight to the point?

  • Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. Not only are they using it, they are paying to use it because it gives them a personal return on investment...but you know, you can keep using that push lawnmower, just don't complain when the kids next door run circles around you at a quarter the cost.

    Automating parts of something as a reference tool is a WILDLY different thing from deferring to AI to finalize your code, which will be shitcode.

    Anybody programming right now who is shipping AI code is bad at their job.

    Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. […]

    but you know, you can keep using that push lawnmower, just don’t complain when the kids next door run circles around you at a quarter the cost.

    That push lawnmower will still be mowing the lawn in decades to come, though, while the kids' fancy high-tech lawnmower will explode in a few months, and you're lucky if it doesn't burn the entire house down with it.

    Lots of assumptions there. In case you actually care, I don't think any one company should be allowed to own the base system that allows AI to function […] It seems insane to villainize a technology because of who might make money off of it. […]

    Prior to the industrial revolution, weaving was done by hand, making all cloth expensive […] the cost went down so much that the accessibility went through the roof. […] humanity can grow to have more time and resources to improve the health and wellness of our population IF we use the tools. […]

    GenAI is automating the more human fields, not some production-line work. This isn't gonna lead to an abundance of clothing that maybe isn't artisan-made, but to the flooding of the art fields with low-quality products. Hope you like Marvel slop, because you're gonna get even more Marvel slop, except even worse!

    Creativity isn't having an idea of a big booba anime girl, it's how you draw said big booba anime girl. Unless you're one of those "idea guys" who are still pissed off that a group of artists and programmers didn't steal the code of Call of Duty to put VR support into it so you could sell it to the publisher at a marked-up price, because VR used to be a big thing for a while.

    Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. […]

    Have you had to code review someone who is obviously just committing AI bullshit? It is an incredible waste of time. I know people who learned pre-LLM (i.e. have functioning brains) and are practically on the verge of complete apathy from having to babysit ai code/coders, especially as their management keeps pushing people to use it. As in, they must use LLM as a performance metric.

    GenAI is automating the more human fields, not some production line work. […] Hope you like Marvel slop, because you're gonna get even more Marvel slop, except even worse!

    Gotcha, so no actual discourse then.

    Incidentally, I do enjoy Marvel "slop" and quite honestly one of my favorite YouTube channels is Abandoned Films https://youtu.be/mPQgim0CuuI

    This is super creative and could never have been made without AI.

    I also enjoy reading books like Psalm for the Wild Built. It's almost like there's space for both things...

    Automating parts of something as a reference tool is a WILDLY different thing from deferring to AI to finalize your code, which will be shitcode.

    Anybody programming right now who is shipping AI code is bad at their job.

    No argument there.

    GenAI is automating the more human fields, not some production line work. This isn't gonna lead to an abundance of clothing that are maybe not artisan made, but the flooding of the art fields with low quality products. […]

    but the flooding of the art fields with low quality products

    It's even worse than that, because the #1 use case is spam, regardless of what others think they personally gain from it. It is exhausting filtering through the endless garbage spam results. And it isn't just text sites. Searching generic terms on sites like YouTube (e.g. "cats") will quickly lead you to a deluge of AI shit. Where did the real cats go?

    It's incredible that DrNik comes out with a bland, fake movie trailer as an example of how AI is good. It's "super creative" to repeatedly prompt Veo3 to give you synthetic Hobbit-style images that vaguely look like VistaVision. Actually, super creative is kinda already done; watch me go hyper creative:

    "Whoa, now you can make it look like an 80s rock music video. Whoa, now you can make it look like a 20s silent film. Whoa, now you can make look like a 90s sci-fi flick. Whoa, now you can make it look like a super hero film."

    It's even worse than that, because the #1 use case is spam, regardless of what others think they personally gain out of it. […]

    It even made "manual" programming worse.

    Wanted to google how to modify the PATH variable on Linux? Here's an AI-hallucinated example that will break your installation. Wanted to look up an algorithm? Here's an AI-hallucinated explanation that is wrong enough in places that you end up just wasting your own time.
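    For contrast, the safe idiom is easy to state: append to PATH, never overwrite it. A minimal sketch, assuming a made-up tool directory `/opt/mytool/bin` (the classic hallucinated/copy-pasted failure mode is `PATH=/some/dir`, which drops every existing entry):

```shell
# Overwriting (PATH=/opt/mytool/bin) breaks every subsequent command lookup
# in the session; appending keeps the existing entries intact.
export PATH="$PATH:/opt/mytool/bin"

# Confirm the new entry is present without having clobbered the rest:
echo "$PATH" | tr ':' '\n' | grep -x '/opt/mytool/bin'
```

    To make this persistent you would put the export line in `~/.profile` or `~/.bashrc`, rather than pasting whatever a search summary suggests.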

  • Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to solve is "why should we continue to pay wages?".
    That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/... the data you input can and WILL eventually be used against you.

    If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.

    the data you input can and WILL eventually be used against you.

    Can you expand further on this?

    Not to mention the environmental cost is astronomical. I would be very interested to know how many times out of 10 AI code is actually functional, because its success rate for every other type of generation is much lower.

    chatbot datacenters burn enough electricity to power a mid-sized european country, all for seven-fingered hands and glue-and-rock pizza

    Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. […]

    congratulations on offloading your critical thinking to a chatbot that you most likely don't own. what are you gonna do when the bubble is over, or when the datacenter hosting it burns down?

  • the data you input can and WILL eventually be used against you.

    Can you expand further on this?

    User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...), which can be used against them in soft ways (higher prices when looking for mental-health help) or to outright manipulate or blackmail them.

    Regardless, there is no scenario in which the end user wins.

  • Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

    Read 'The Communist Manifesto' if you'd like to understand in which ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are with AI.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    taking a couple of steps back and looking at the bigger picture (something you might never have done in your entire life, guessing by the tone of your post): people want to automate things that they don't want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it this way. this is why spam, ads, certain kinds of propaganda and deepfakes are among the big actual use cases of genai that likely won't go away (isn't the future bright?)

    this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful in black propaganda would require a co-conspirator with both the ability to do it and the correct political slant, who would shut up about it and have good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now thanks to genai

    then, again: people automate things they don't want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, while they have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don't like or understand and would otherwise have to pay people for, all while the chatbot will never say sacrilegious things like "no" or "your idea is physically impossible" or "there is no reason for any of this". people who don't like coding vibecode. people who don't like painting generate images. people who don't like understanding things cram text through chatbots to summarize it. maybe you don't see a problem with this, but that's entirely a you problem

    this leads to three further points. chatbots allow, for the low low price of selling your thoughts to saltman & co, offloading all your "thinking" to them. this makes cheating in some cases exceedingly easy, something schools have to adjust to, while destroying any ability to learn for the students who use them this way. another thing is that in production, chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won't repeat the mistake; a chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there's no junior to promote in their place, because the junior was replaced by a chatbot.

    this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or their mid-sized-euro-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of others that i'm surely forgetting right now

    and also

    Is the backlash due to media narratives about AI replacing software engineers?

    it's you getting swept up in the outsized ad campaign for the most bloated startup in history, not "backlash in the media". what you see as "backlash" is everyone else who's not parroting the openai marketing brochure

    While I don’t defend them,

    are you suure

    e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)

    e2: this is also partially a side effect of silicon valley running out of ideas. crypto crashed and burned, then the metaverse crashed and burned, and after all that, the same people (the same people who ran crypto before, including altman himself) and the same money went to pump the next bubble, because they can't imagine anything else that will bring them that promised infinite growth. and their having money is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because vcs somehow unlearned how to make money

    User data has been the internet's greatest treasure trove since the advent of Google. […] Regardless, there is no scenario in which the end user wins.

    For a slightly earlier instance of this, there's also real-time bidding.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. […]

    Don't forget problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

    And even more broadly: what happens to the people who do lose jobs to AI? Safety nets are being actively burned down. Just saying "people are scared of new tech" ignores that AI will lead to a shift we are not prepared for, and people will suffer from it. It's way bigger than a handful of new tech tools in a vacuum.
