
We need to stop pretending AI is intelligent

Technology
  • Well you are doing a poor job of it and are bringing an unnecessary amount of heat to an otherwise civil discussion

    That's right. If you cannot win the argument, the next best thing is to call for civility.

  • Much less? I'm pretty sure our brains need food, and food requires lots of other stuff that itself needs transportation or energy to produce.

    Your brain is running on sugar. Do you take into account the energy spent on coal mining, oil-field exploration, refining, transportation, and transmission losses when computing the amount of energy required to build and run AI? Do you take into account all the energy consumed to produce the knowledge used to train your model in the first place?
    Running the brain alone is much less energy intensive than running an AI model. And the brain can create genuinely new content and knowledge. There is nothing like the brain. AI excels at processing large amounts of data, which the brain is not made for.

  • At least in my car, the lane-following (not lane-keeping) system is handy because the steering wheel naturally tends to go where it should, and I'm less often "fighting" the tendency to center. The lane-keeping system is, at least for me, largely nothing. If I use the turn signal, it ignores me crossing a lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. At least mine has fared surprisingly well in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it is off, my arms just have to assert a bit more effort to end up in the same place I was going to be with the system. Generally no passenger notices when the system engages/disengages except for the chiming it does when it switches over to unaided operation.

    So at least my experience has been a positive one, and it strikes the balance just right between intervention and human attention, including monitoring my gaze to make sure I am looking where I should. However, there are people who test "how long can I keep my hands off the steering wheel," which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized 'overhead' view of your car.

    The rental cars I have driven with lane-keeping functions have all been too aggressive or too easily fooled by visual anomalies on the road for me to feel like I'm getting any help. My wife comments on how jerky the car's driving is when we have those systems. I don't feel like it's dangerous, and if I were falling asleep or something it could be helpful, but in 40+ years of driving I've had "falling asleep at the wheel" problems maybe 3 times - not something I need constant help for.

  • I don't have anything else going on, man

    There's that... though even when you're bored, you still sleep sometimes.

  • And they made the programs you seem to trust so much.

    Ya... Humans so far have made everything not produced by Nature on Earth. 🤷

  • Anyone pretending AI has intelligence is a fucking idiot.

    Caveat: Anyone who has been scrutinising 'AI'.

    Something I often forget is that the vast majority of the population doesn't care about technology, privacy, or the mechanics of LLMs as much as I do, or pay as much attention to them.
    So most people read, hear, and watch stories about how great it is and how cleverly AI can do simple things for them, so it's easy to see how they come to think it's doing a lot more 'thought' and logic work than it really is, when realistically it's a glorified word predictor.

  • No, that's the point of the article. You also haven't really said much at all.

    Do I have to be profound when I make a comment that is taking more of a dig at my fellow space rock companions than at AI itself?

    If I do, then I feel like the author of the article either has as much faith in humanity as I do, or is as simple as I was alluding to in my original comment. The fact that they need to dehumanise the AI's responses makes me think they’re forgetting it’s something we built. AI isn’t actually intelligent, and it worries me how many people treat it like it is—enough to write an article like this about it. It’s just a tool, maybe even a form of entertainment. Thinking of it as something with a mind or personality—even if the developers tried to make it seem that way—is kind of unsettling.

    Let me know if you would like me to make this more formal, casual, or persuasive. 😜

  • If you can formulate that sentence, you can handle "it's means it is". Come on. Or "common" if you prefer.

    Yeah, man, I get it. Language is complex. I'm not advocating for the reinvention of English; it was just a conversational observation about a silly quirk.

  • Do I have to be profound when I make a comment that is taking more of a dig at my fellow space rock companions than at AI itself?

    If I do, then I feel like the author of the article either has as much faith in humanity as I do, or is as simple as I was alluding to in my original comment. The fact that they need to dehumanise the AI's responses makes me think they’re forgetting it’s something we built. AI isn’t actually intelligent, and it worries me how many people treat it like it is—enough to write an article like this about it. It’s just a tool, maybe even a form of entertainment. Thinking of it as something with a mind or personality—even if the developers tried to make it seem that way—is kind of unsettling.

    Let me know if you would like me to make this more formal, casual, or persuasive. 😜

    I meant that you are arguing semantics rather than substance. But other than that I have no issue with what you wrote or how you wrote it; it's not an unbelievable opinion.

  • Ya... Humans so far have made everything not produced by Nature on Earth. 🤷

    So trusting tech made by them is trusting them. Specifically, a less reliable version of them.

  • It is intelligent and deductive, but it is not cognitive or even dependable.

    It's not. It's a math formula that predicts an output based on its parameters that it deduced from training data.

    Say you have the following sets of data.

    1. Y = 3, X = 1
    2. Y = 4, X = 2
    3. Y = 5, X = 3

    We can calculate a regression model using those numbers to predict what Y would equal if X were 4.

    I won't go into much detail, but

    Y = 2 + 1x + e

    In an ideal world e = 0 (which it is in this case); that's our model's error term, which is typically required to stay within 5% or 1% (at least in econometrics). b0 = 2 is our model's intercept (bias), and b1 = 1 is the parameter that determines how much the input X contributes when predicting Y.

    If x = 4, then

    Y = 2 + 1×4 + 0 = 6

    Our model just predicted that if X is 4, then Y is 6.
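
    For illustration, here's a rough sketch of that same toy fit in Python (assuming numpy is available; this is just my own example, not how any vendor's model is actually implemented):

    import numpy as np

    X = np.array([1.0, 2.0, 3.0])
    Y = np.array([3.0, 4.0, 5.0])

    # Least-squares fit of Y = b0 + b1*X, using a design matrix with an intercept column
    A = np.column_stack([np.ones_like(X), X])
    b0, b1 = np.linalg.lstsq(A, Y, rcond=None)[0]

    print(b0, b1)        # roughly 2.0 and 1.0, i.e. Y = 2 + 1x
    print(b0 + b1 * 4)   # roughly 6.0, the predicted Y when X = 4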

    In a nutshell, that's what AI does, but instead of numbers it's tokens (think symbols, words, pixels), and the formula is much, much more complex.

    This isn't intelligence, and it isn't deduction. It's only prediction. That's the reason AI often fails at common sense: the error builds up and you end up with nonsense, and since it's not thinking, it is just as confident when it's incorrect as when it's correct.

    Companies calling it "AI" is pure marketing.

  • It's not. It's a math formula that predicts an output based on its parameters that it deduced from training data.

    Say you have the following sets of data.

    1. Y = 3, X = 1
    2. Y = 4, X = 2
    3. Y = 5, X = 3

    We can calculate a regression model using those numbers to predict what Y would equal if X were 4.

    I won't go into much detail, but

    Y = 2 + 1x + e

    In an ideal world e = 0 (which it is in this case); that's our model's error term, which is typically required to stay within 5% or 1% (at least in econometrics). b0 = 2 is our model's intercept (bias), and b1 = 1 is the parameter that determines how much the input X contributes when predicting Y.

    If x = 4, then

    Y = 2 + 1×4 + 0 = 6

    Our model just predicted that if X is 4, then Y is 6.

    In a nutshell, that's what AI does, but instead of numbers it's tokens (think symbols, words, pixels), and the formula is much, much more complex.

    This isn't intelligence, and it isn't deduction. It's only prediction. That's the reason AI often fails at common sense: the error builds up and you end up with nonsense, and since it's not thinking, it is just as confident when it's incorrect as when it's correct.

    Companies calling it "AI" is pure marketing.

    Wikipedia is literally just a very long number, if you want to oversimplify things into absurdity. Modern LLMs literally run on neural networks, just like you, only with fewer of them and far less structure. It is also on average more intelligent than you on far more subjects, and can deduce better reasoning than flimsy numerology - not because you are dumb, but because it is far more streamlined. Whether it is cognizant or even dependable while doing so is another thing entirely.

    Modern LLMs waste a lot more energy on far fewer simulated neurons. We had what you are describing decades ago. It is literally built on the works of our combined intelligence, so how could it not also be intelligent? Perhaps the problem is that you have a loaded definition of intelligence. And prompts literally work because of its deductive capabilities.

    Errors also build up in dementia and Alzheimer's. We have people who cannot remember what they did yesterday; we have people with severed hemispheres, split brains, who say one thing and do something else depending on which part of the brain they're relying on for the same inputs. The difference is that our brains evolved over millennia, through millions and millions of lifeforms, as a matter of life and death, while LLMs have only been a thing for a couple of years, as a matter of convenience and buzzword venture capital. They barely have more neurons than flies, but they are also more limited in the inputs they have to process. The people running them as a service have a vested interest in not having them think for themselves, but in what interests them. Like it or not, the human brain is also an evolutionary prediction device.

  • Wikipedia is literally just a very long number, if you want to oversimplify things into absurdity. Modern LLMs literally run on neural networks, just like you, only with fewer of them and far less structure. It is also on average more intelligent than you on far more subjects, and can deduce better reasoning than flimsy numerology - not because you are dumb, but because it is far more streamlined. Whether it is cognizant or even dependable while doing so is another thing entirely.

    Modern LLMs waste a lot more energy on far fewer simulated neurons. We had what you are describing decades ago. It is literally built on the works of our combined intelligence, so how could it not also be intelligent? Perhaps the problem is that you have a loaded definition of intelligence. And prompts literally work because of its deductive capabilities.

    Errors also build up in dementia and Alzheimer's. We have people who cannot remember what they did yesterday; we have people with severed hemispheres, split brains, who say one thing and do something else depending on which part of the brain they're relying on for the same inputs. The difference is that our brains evolved over millennia, through millions and millions of lifeforms, as a matter of life and death, while LLMs have only been a thing for a couple of years, as a matter of convenience and buzzword venture capital. They barely have more neurons than flies, but they are also more limited in the inputs they have to process. The people running them as a service have a vested interest in not having them think for themselves, but in what interests them. Like it or not, the human brain is also an evolutionary prediction device.

    People don't predict values to determine their answers to questions...

    Also, it's called a neural network not because it works exactly like neurons but because it's somewhat similar. They don't "run on neural networks"; they're called that because they're more than one regression model, with information being passed from one to another, sort of like a chain of neurons, but not exactly. In an LLM it's basically just another name for the transformer model.

    I don't know enough to properly compare them to actual neurons, but at the very least they seem to be significantly more deterministic and way, way more complex.
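
    Roughly, that "more than one regression" picture looks like this (my own toy sketch in Python with numpy; real transformer layers add attention and much more on top):

    import numpy as np

    rng = np.random.default_rng(0)

    # Two layers, each just weights + bias (a bundle of regressions),
    # with a nonlinearity passing information from one layer to the next.
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # 3 inputs -> 4 hidden units
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # 4 hidden units -> 1 output

    def forward(x):
        h = np.maximum(0.0, W1 @ x + b1)   # first "regression" plus ReLU
        return W2 @ h + b2                 # second "regression" on the result

    print(forward(np.array([1.0, 2.0, 3.0])))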

    Literally, go to ChatGPT and try to test its common-sense reasoning. Then try to argue with it. Open a new chat and ask the exact same questions and make the same points. You'll see exactly what I'm talking about.

    Alzheimer's is an entirely different story, and no, it's not stochastic. Seizures are stochastic, or at least they look like it, though they may actually not be.

  • Have you seen the American Republican Party recently? It brings a new perspective on how stupid humans can be.

    Lmao true

  • A gun isn't dangerous if you handle it correctly.

    Same for an automobile, or aircraft.

    If we build powerful AIs and put them "in charge" of important things without proper handling, they can crash - and already have crashed - into crowds of people, significantly injuring them, even killing some.

    Thanks for the downer.

  • You're a meat-based copy machine with a built-in justification box.

    Except of course that humans invented language in the first place. So uh, if all we can do is copy, where do you suppose language came from? Ancient aliens?

    No, we invented "human" language. There are dozens of other animals out there that all have their own languages, completely independent of ours.

    We simply refined base calls to be more and more specific. Differences evolved because people are bad at telephone and lots of people have to be special/different and use slight variations every generation.

  • Thanks for the downer.

    Anytime, and in case you missed it: I'm not just talking about AI-driven vehicles. AI-driven decisions can be just as harmful: https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/
