Why so much hate toward AI?

Technology
  • Because the goal of "AI" is to make the vast majority of us obsolete. The billion-dollar question AI is trying to answer is "why should we continue to pay wages?"
    That is bad for everyone who isn't part of the owner class. Even if you personally benefit from using it to make yourself more productive/creative/..., the data you input can and WILL eventually be used against you.

    If you only self-host and know what you're doing, this might be somewhat different, but it still won't stop the big guys from trying to swallow all the others whole.

    the data you input can and WILL eventually be used against you.

    Can you expand further on this?

  • Not to mention the environmental cost is literally astronomical. I would be very interested to know how many times out of 10 AI-generated code is actually functional, because its success rate for every other type of generation is much lower.

    chatbot DCs burn enough electricity to power a mid-sized European country, all for seven-fingered hands and glue-and-rock pizza

  • Have you talked to any programmers about this? I know several who, in the past 6 months alone, have completely changed their view on exactly how effective AI is at automating parts of their coding. Not only are they using it, they are paying to use it because it gives them a personal return on investment... but you know, you can keep using that push lawnmower; just don't complain when the kids next door run circles around you at a quarter the cost.

    congratulations on offloading your critical thinking skills to a chatbot that you most likely don't own. what are you gonna do when the bubble is over, or when the DC hosting it burns down?

  • the data you input can and WILL eventually be used against you.

    Can you expand further on this?

    User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...), which can be used against them in a soft way (higher prices when looking for mental health help), or can be used to outright manipulate or blackmail them.

    Regardless, there is no scenario in which the end user wins.

  • Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

    Read 'The Communist Manifesto' if you'd like to understand in which ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are with AI.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    taking a couple of steps back and looking at the bigger picture, something that you might have never done in your entire life judging by the tone of your post: people want to automate things that they don't want to do. nobody wants to make elaborate spam that will evade detection, but if you can automate it, somebody will use it that way. this is why spam, ads, certain kinds of propaganda, and deepfakes are among the big actual use cases of genai that likely won't go away (isn't the future bright?)

    this is tied to another point. if a thing requires some level of skill to make, then naturally there are some restraints. in pre-slopnami times, making a deepfake useful in black propaganda would require a co-conspirator who has both the ability to do it and the correct political slant, and will shut up about it, and will have good enough opsec not to leak it unintentionally. maybe more than one. now, making sorta-convincing deepfakes requires involving fewer people. this also includes things like nonconsensual porn, for which there are fewer barriers now due to genai

    then, again, people automate things they don't want to do. there are people who do like coding. then there are also Idea Men butchering codebases trying to vibecode, when they don't want to code and have no inclination for or understanding of coding, what it takes, or what the result should look like. it might not be a coincidence that llms mostly charmed the managerial class, which resulted in them pushing chatbots to automate away things they don't like or understand and likely have to pay people money for, all while the chatbot will never say such sacrilegious things as "no" or "your idea is physically impossible" or "there is no reason for any of this". people who don't like coding vibecode. people who don't like painting generate images. people who don't like understanding things cram text through chatbots to summarize it. maybe you don't see a problem with this, but that's entirely a you problem

    this leads to three further points. chatbots allow you, for the low low price of selling your thoughts to saltman & co, to offload all your "thinking" to them. this makes cheating in some cases exceedingly easy, something that schools have to adjust to, while destroying any ability to learn for students who use them this way. another thing is that in production chatbots are virtual dumbasses that never learn, and seniors are forced to babysit them and fix their mistakes. an intern at least learns something and won't repeat the same mistake; a chatbot will fall into the same trap the moment you run out of context window. this hits all the major causes of burnout at once, and maybe the senior will leave. then what? there's no junior to promote in their place, because the junior was replaced by a chatbot.

    this all comes before noticing little things like the multibillion-dollar stock bubble tied to openai, or its mid-sized-euro-country-sized power demands, or whatever monstrosities palantir is cooking, and a couple of others that i'm surely forgetting right now

    and also

    Is the backlash due to media narratives about AI replacing software engineers?

    it's you getting swept up in the outsized ad campaign for the most bloated startup in history, not "backlash in media". what you see as "backlash" is everyone else who's not parroting the openai marketing brochure

    While I don’t defend them,

    are you suure

    e: and also, lots of these chatbots are used as accountability sinks. sorry, nothing good will ever happen to you, because Computer Says No (pay no attention to the oligarch behind the curtain)

    e2: also, this is partially a side effect of silicon valley running out of ideas. after crypto crashed and burned, then the metaverse crashed and burned, all of these people (the same people who ran crypto before, including altman himself) and their money went to pump the next bubble, because they can't imagine anything else that will bring them that promised infinite growth. their having money in the first place is a result of ZIRP, which might be coming to an end, and then there will be fear and loathing, because vcs have somehow unlearned how to make money

  • User data has been the internet's greatest treasure trove since the advent of Google. LLMs are perfectly set up to extract the most intimate data available from their users ("mental health" conversations, financial advice, ...), which can be used against them in a soft way (higher prices when looking for mental health help), or can be used to outright manipulate or blackmail them.

    Regardless, there is no scenario in which the end user wins.

    For a slightly earlier instance of this, there's also real-time bidding.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Don't forget problems with everything around AI too. Like in the US, the Big Beautiful Bill (🤮) attempts to ban states from enforcing AI laws for ten years.

    And even more broadly, what happens to the people who do lose jobs to AI? Safety nets are being actively burned down. Just saying "people are scared of new tech" ignores that AI will lead to a shift we are not prepared for, and people will suffer from it. It's way bigger than a handful of new tech tools in a vacuum.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    "AI" is a pseudo-scientific grift.

    Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

    Sure, it's cool for a chatbot to summarize stackexchange but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

  • Gotcha, so no actual discourse then.

    Incidentally, I do enjoy Marvel "slop" and quite honestly one of my favorite YouTube channels is Abandoned Films https://youtu.be/mPQgim0CuuI

    This is super creative and could never have been made without AI.

    I also enjoy reading books like Psalm for the Wild Built. It's almost like there's space for both things...

    This is creepy.

  • Also, it should never be used for art. I don’t care if you need to make a logo for a company and A.I. spits out whatever. But real art is about humans expressing something. We don’t value cave paintings because they’re perfect. We value them because someone thousands of years ago made it.

    So, that’s something I hate about it. People think it can “democratize” art. Art is already democratized. I have a child’s drawing on my fridge that means more to me than anything at any museum. The beauty of some things is not that it was generated. It’s that someone cared enough to try. I’d rather a misspelled crayon card from my niece than some shit ChatGPT generated.

    Yeah, "democratize art" means "I'm jealous of the cash sloshing around out there."

    People say things like "I'm not as good as this guy on TikTok." Why do you need to be? Literally, who asked?

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Dunning-Kruger effect.

    Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

    Would you trust a surgeon who relies on ChatGPT? So why should you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on programs, right?

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    AI is theft in the first place. None of the current engines obtained their training data legally. They are based on pirated books and content scraped from websites that explicitly forbid the use of their data for training LLMs.

    And all that to create mediocre parrots with dictionaries that are wrong half the time, and often enough give dangerous, even lethal advice, all while wasting power and computational resources.

  • Reads like a rant against the industrial revolution. "The industry is only concerned about replacing workers with steam engines!"

    You should check this out:
    https://thenib.com/im-a-luddite/

  • I can only speak as an artist.

    Because its entire functionality is based on theft. Companies are stealing the works of people and profiting off them, with no payment to the artists whose works the platform is built on.

    You often hear the argument that all artists borrow from others, but if I created an anime that blatantly copied the style of Studio Ghibli, I'd rightly be sued. On top of that, AI copies so obviously that it recreates the watermarks of the original artists.

    Fuck AI

    You can't be sued over or copyright styles. Studio Ponoc is made up of ex-Ghibli staff, and they have been releasing movies for a while. Stop spreading misinformation.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    If you don’t hate AI, you’re not informed enough.

    It has the potential to disrupt pretty much everything in a negative way. Especially when regulations always lag behind. AI will be abused by corporations in the worst way possible, while also being bad for the planet.

    And the people who are most excited about it, tend to be the biggest shitheads. Basically, no informed person should want AI anywhere near them unless they directly control it.

  • I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

    Because so far we have only seen the negative impacts on human society, IMO. The latest news hasn't helped at all, not to mention how the USA is moving on AI.
    Every positive use of AI ends up deployed in the workplace, which then most likely leads to layoffs.
    I'm starting to think that Finch in POI was right all along.

    edit: They sell us an unfinished product, which we then build on in the wrong way.

  • Read 'The Communist Manifesto' if you'd like to understand in which ways the bourgeoisie used the industrial revolution to hurt the proletariat, exactly as they are with AI.

    The industrial revolution is what made socialism possible, since now a smaller amount of workers can support the elderly, children, etc.

    Just look at China before and after industrializing. Life expectancy way up, the government can provide services like public transit and medicine (for a nominal fee)

  • Have you never had a corporate job? A technology can be utterly useless, yet incompetent 'managers' who believe it can do better than humans WILL buy the former to get rid of the latter, even though that's a stupid thing to do, just to meet their yearly targets and other similarly idiotic measures of division/team 'productivity'.

    In the corporate world, managers get fired for not completing projects.

  • The industrial revolution is what made socialism possible, since now a smaller amount of workers can support the elderly, children, etc.

    Just look at China before and after industrializing. Life expectancy way up, the government can provide services like public transit and medicine (for a nominal fee)

    We're discussing how industry and technology are used against the proletariat, not how state economies form. You can read the pamphlet referenced in the previous post if you'd like to understand the topic at hand.
