
We need to stop pretending AI is intelligent

Technology
  • Ya... Humans so far have made everything not produced by Nature on Earth. 🤷

    So trusting tech made by them is trusting them. Specifically, a less reliable version of them.

  • It is intelligent and deductive, but it is not cognitive or even dependable.

    It's not. It's a math formula that predicts an output based on parameters it learned from its training data.

    Say you have the following data points:

    1. Y = 3, X = 1
    2. Y = 4, X = 2
    3. Y = 5, X = 3

    We can fit a regression model to those numbers to predict what Y would be if X were 4.

    I won't go into much detail, but

    Y = 2 + 1X + e

    e is the error term; in an ideal world it equals 0 (which it does in this case). In practice you only require it to stay within some tolerance - in econometrics, results are typically tested at the 5% or 1% significance level. b0 = 2 is the intercept, our model's bias term. And b1 = 1 is the coefficient that determines how much the input X contributes when predicting Y.

    If X = 4, then

    Y = 2 + 1×4 + 0 = 6

    Our model just predicted that if X is 4, then Y is 6.

    In a nutshell, that's what AI does, but instead of numbers, it's tokens (think symbols, words, pixels), and the formula is much much more complex.

    This isn't intelligence, and it isn't deduction - it's only prediction. That's why AI often fails at common sense: the errors build up until you end up with nonsense, and since it's not thinking, it is just as confident when it's incorrect as when it's correct.

    Companies calling it "AI" is pure marketing.
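    The toy regression above can be run directly. Here is a minimal sketch in plain Python, using ordinary least squares on exactly the three data points from the comment (no external libraries needed):

```python
# Ordinary least squares on the three data points from the comment:
# (X=1, Y=3), (X=2, Y=4), (X=3, Y=5). Plain Python, no libraries.
xs = [1.0, 2.0, 3.0]
ys = [3.0, 4.0, 5.0]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope: covariance of X and Y divided by variance of X
b1 = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
     / sum((x - mean_x) ** 2 for x in xs)
# Intercept: forces the fitted line through the mean point
b0 = mean_y - b1 * mean_x

print(b0, b1)       # 2.0 1.0
print(b0 + b1 * 4)  # 6.0 -- the predicted Y for X = 4
```

    The fit recovers b0 = 2 and b1 = 1, and predicting at X = 4 gives Y = 6, matching the hand calculation.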

  • Wikipedia is literally just a very long number, if you want to oversimplify things into absurdity. Modern LLMs literally run on neural networks, just like you, only with fewer of them and far less structure. On average it is also more intelligent than you on far more subjects, and can deduce better reasoning than flimsy numerology - not because you are dumb, but because it is far more streamlined. Whether it is cognizant or even dependable while doing so is another matter entirely.

    Modern LLMs waste a lot more energy for a lot fewer simulated neurons. We had what you are describing decades ago. It is literally built on the works of our combined intelligence, so how could it not also be intelligent? Perhaps the problem is that you have a loaded definition of intelligence. And prompts literally work because of its deductive capabilities.

    Errors also build up in dementia and Alzheimer's. We have people who cannot remember what they did yesterday; we have people with severed hemispheres - split brains - who say one thing and do another depending on which part of the brain they're relying on for the same inputs. The difference is that our brains evolved over millennia, through millions and millions of lifeforms, as a matter of life and death, while LLMs have only been a thing for a couple of years, as a matter of convenience and buzzword venture capital. They barely have more neurons than flies, but the input they have to process is also more limited. The people running it as a service have a vested interest in having it think not for itself but in whatever interests them. Like it or not, the human brain is also an evolutionary prediction device.

  • People don't predict values to determine their answers to questions...

    Also, it's called a neural network not because it works exactly like neurons but because it's somewhat similar. LLMs don't "run on neural networks"; they're called that because they're more than one regression model, with information passed from one to the next, sort of like a chain of neurons, but not exactly. It's just a different name for a transformer model.

    I don't know enough to properly compare them to actual neurons, but at the very least, they seem to be significantly more deterministic and way, way more complex.

    Literally, go to ChatGPT and try to test its common reasoning. Then try to argue with it. Open a new chat and ask the exact same questions and make the exact same points. You'll see exactly what I'm talking about.

    Alzheimer's is an entirely different story, and no, it's not stochastic. Seizures are stochastic - at least they look like it, though they may actually not be.
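    The "more than one regression model where information is passed from one to the next" description can be sketched concretely. Here is a toy two-layer feed-forward network in plain Python, with hand-picked weights purely for illustration (real LLMs are transformers with billions of learned parameters, not anything this small):

```python
# A toy two-layer feed-forward network: each layer is a batch of linear
# models (weights * inputs + bias), with a nonlinearity (ReLU) between
# them. Weights are hand-picked for illustration, not trained.

def relu(v):
    return [max(0.0, x) for x in v]

def linear(x, weights, biases):
    # One linear model per output unit: dot(row, x) + bias
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

# Layer 1: 2 inputs -> 3 hidden units. Layer 2: 3 hidden -> 1 output.
W1 = [[1.0, -1.0], [0.5, 0.5], [-2.0, 1.0]]
b1 = [0.0, 0.1, 0.2]
W2 = [[1.0, 2.0, -1.0]]
b2 = [0.5]

def forward(x):
    return linear(relu(linear(x, W1, b1)), W2, b2)

print([round(v, 6) for v in forward([1.0, 2.0])])  # [3.5]
```

    Each layer is just a batch of linear models chained together; the nonlinearity between layers is what lets the chain represent things a single regression cannot.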

  • Have you seen the American Republican Party recently? It brings a new perspective on how stupid humans can be.

    Lmao true

  • A gun isn't dangerous if you handle it correctly.

    Same for an automobile, or an aircraft.

    If we build powerful AIs and put them "in charge" of important things without proper handling, they can crash into crowds of people - and already have - significantly injuring them, even killing some.

    Thanks for the downer.

  • You're a meat based copy machine with a built in justification box.

    Except of course that humans invented language in the first place. So uh, if all we can do is copy, where do you suppose language came from? Ancient aliens?

    No, we invented "human" language. There are dozens of other animals out there that all have their own languages, completely independent of ours.

    We simply refined base calls to be more and more specific. Differences evolved because people are bad at telephone and lots of people have to be special/different and use slight variations every generation.

  • Thanks for the downer.

    Anytime, and in case you missed it: I'm not just talking about AI-driven vehicles. AI-driven decisions can be just as harmful: https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

  • Literally, go to a housefly and try to test its common reasoning. Then try to argue with it. Find a new housefly and ask the exact same questions and make the same points. You'll see what I'm talking about.

    There's no way to argue in such nebulous terms when every minute difference is made into an insurmountable obstacle. You are not going to convince me, and you are not open to being convinced. We'll just end up with absurd discussions, like arguing over how and whether "stochastic" applies to Alzheimer's.

  • Are you saying human languages are a derivative of bird language or something? If so, I'd like to see the proof of that.
