
Why so much hate toward AI?

Technology
  • Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is in automating parts of their coding. Not only are they using it, they are paying to use it because it gives them a personal return on investment...but you know, you can keep using that push lawnmower, just don't complain when the kids next door run circles around you at a quarter the cost.

    congratulations on offloading your critical thinking skills to a chatbot that you most likely don't own. what are you gonna do when the bubble is over, or when the data center burns down with it?

  • the data you input can and WILL eventually be used against you.

    Can you expand further on this?

    User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...) which can be used against them in soft ways (higher prices when looking for mental health help) or to outright manipulate or blackmail them.

    Regardless, there is no scenario in which the end user wins.

  • Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

    Read 'The Communist Manifesto' if you'd like to understand the ways in which the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are doing with AI.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    taking a couple steps back and looking at the bigger picture, something that you might have never done in your entire life judging by the tone of your post, people want to automate things that they don't want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it this way. this is why spam, ads, certain kinds of propaganda and deepfakes are among the big actual use cases of genai that likely won't go away (isn't the future bright?)

    this is tied to another point. if a thing requires some level of skill to make, then naturally there are some constraints. in pre-slopnami times, making a deepfake useful in black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, and who will shut up about it, and who has good enough opsec to not leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now due to genai

    then, again, people automate things they don't want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, while they don't want to and have no inclination for or understanding of coding, what it takes, and what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don't like or understand and likely have to pay people money for, all while the chatbot will never say sacrilegious things like "no" or "your idea is physically impossible" or "there is no reason for any of this". people who don't like coding vibecode. people who don't like painting generate images. people who don't like understanding things cram text through chatbots to summarize it. maybe you don't see a problem with this, but that's entirely a you problem

    this leads to three further points. chatbots allow you, for the low low price of selling your thoughts to saltman & co, to offload all your "thinking" to them. this makes cheating in some cases exceedingly easy, something that schools have to adjust to, while destroying any ability to learn for students who use them this way. another thing is that in production chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won't repeat that mistake again; a chatbot will fall into the same trap right when you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there's no junior to promote in their place, because the junior was replaced by a chatbot.

    this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or their mid-sized-european-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of others that i'm surely forgetting right now

    and also

    Is the backlash due to media narratives about AI replacing software engineers?

    it's you getting swept up in an outsized ad campaign for the most bloated startup in history, not "backlash in the media". what you see as "backlash" is everyone else who isn't parroting the openai marketing brochure

    While I don’t defend them,

    are you suure

    e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)

    e2: also, this is partially a side effect of silicon valley running out of ideas after crypto crashed and burned, then the metaverse crashed and burned, and after all this, all of these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can't imagine anything else that will bring them that promised infinite growth. and their having money is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing because vcs somehow unlearned how to make money

  • User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...) which can be used against them in soft ways (higher prices when looking for mental health help) or to outright manipulate or blackmail them.

    Regardless, there is no scenario in which the end user wins.

    For a slightly earlier instance of this, there's also real-time bidding.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Don't forget the problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

    And even more broadly, what happens to the people who do lose jobs to AI? Safety nets are being actively burned down. Just saying "people are scared of new tech" ignores that AI will lead to a shift that we are not prepared for, and people will suffer from it. It's way bigger than a handful of new tech tools in a vacuum.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    "AI" is a pseudo-scientific grift.

    Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

    Sure, it's cool for a chatbot to summarize stackexchange but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

  • Gotcha, so no actual discourse then.

    Incidentally, I do enjoy Marvel "slop" and quite honestly one of my favorite YouTube channels is Abandoned Films https://youtu.be/mPQgim0CuuI

    This is super creative and could never have been made without AI.

    I also enjoy reading books like Psalm for the Wild Built. It's almost like there's space for both things...

    This is creepy.

  • Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone thousands of years ago made them.

    So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything at any museum. The beauty of some things is not that they were generated. It’s that someone cared enough to try. I’d rather have a misspelled crayon card from my niece than some shit ChatGPT generated.

    Yeah, "democratize art" means "I'm jealous of the cash sloshing around out there."

    People say things like "I'm not as good as this guy on TikTok." Why do you need to be? Literally, who asked?

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Dunning-Kruger effect.

    Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

    Would you trust a surgeon who relies on ChatGPT? So why would you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on programs, right?

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    AI is theft in the first place. None of the current engines have gotten their training data legally. They are based on pirated books and scraped content taken from websites that explicitly forbid the use of their data for training LLMs.

    And all that to create mediocre parrots with dictionaries that are wrong half the time, and often enough give dangerous, even lethal advice, all while wasting power and computational resources.

  • Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

    You should check this out:
    https://thenib.com/im-a-luddite/

  • I can only speak as an artist.

    Because its entire functionality is based on theft. Companies are stealing the works of people and profiting off of them, with no payment to the artists whose works the platform is based on.

    You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I'd rightly be sued. On top of that, AI copies so obviously that it recreates the watermarks from the original artists.

    Fuck AI

    You can't be sued over styles, and you can't copyright them. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    If you don’t hate AI, you’re not informed enough.

    It has the potential to disrupt pretty much everything in a negative way, especially when regulations always lag behind. AI will be abused by corporations in the worst way possible, while also being bad for the planet.

    And the people who are most excited about it tend to be the biggest shitheads. Basically, no informed person should want AI anywhere near them unless they directly control it.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Because so far we only see the negative impacts on human society, IMO. The latest news hasn't helped at all, not to mention how the USA is moving towards AI.
    Every positive use of AI ends up being applied in the workplace, which will most likely lead to layoffs.
    I'm starting to think that Finch in POI was right all along.

    edit: They sell us an unfinished product, which we then build on in the wrong way.

  • Read 'The Communist Manifesto' if you'd like to understand the ways in which the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are doing with AI.

    The industrial revolution is what made socialism possible, since now a smaller number of workers can support the elderly, children, etc.

    Just look at China before and after industrializing. Life expectancy is way up, and the government can provide services like public transit and medicine (for a nominal fee).

  • Have you never had a corporate job? A technology can be very much useless, yet incompetent 'managers' who believe it can do better than humans WILL buy the former to get rid of the latter, even though that's a stupid thing to do, in order to meet their yearly targets and other similarly idiotic measures of division/team 'productivity'.

    In the corporate world, managers get fired for not completing projects.

  • The industrial revolution is what made socialism possible, since now a smaller number of workers can support the elderly, children, etc.

    Just look at China before and after industrializing. Life expectancy is way up, and the government can provide services like public transit and medicine (for a nominal fee).

    We're discussing how industry and technology are used against the proletariat, not how state economies form. You can read the pamphlet referenced in the previous post if you'd like to understand the topic at hand.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Many people on Lemmy are extremely negative towards AI, which is unfortunate. There are MANY dangers, but there are also many obvious use cases where AI can be of help (summarizing a meeting, cleaning up a text, etc.).

    Yes, the way these models have been trained is shameful, but unfortunately that ship has sailed, let's be honest.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    AI has only one problem to solve: salaries
