
We need to stop pretending AI is intelligent

Technology
  • Haha coming in hot I see. Seems like I've touched a nerve. You don't know anything about me or whether I'm creative in any way.

    All ideas have a basis in something we have experienced or learned. There is no completely original idea. All music was influenced by something that came before it, all art by something the artist saw or experienced. This doesn't make it bad, and it doesn't mean an AI could have done it.

    What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

  • However, there is a huge energy cost for the speed of statistical processing needed to mimic intelligence. The human brain consumes much less energy.
    Also, AI will be fine with well-defined tasks where innovation isn't a requirement. As it stands today, AI is incapable of innovating.

    The human brain consumes much less energy

    Yes, but when you fully load the human brain's energy costs with 20 years of schooling, 20 years of "retirement" and old-age care, vacation, sleep, personal time, housing, transportation, etc. etc. - it adds up.
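    A rough back-of-envelope illustrates the "it adds up" point; the figures below are textbook ballpark values, and the overhead framing is the commenter's argument rather than measured data:

        # Back-of-envelope comparison, not a study. Only the human side is
        # computed; any GPU figure for the AI side would be a further assumption.
        BRAIN_POWER_W = 20          # brain alone, roughly
        BODY_POWER_W = 100          # whole-body metabolic rate (~2000 kcal/day)
        HOURS_PER_YEAR = 24 * 365

        def lifetime_energy_kwh(years, power_w):
            return power_w * HOURS_PER_YEAR * years / 1000

        print(lifetime_energy_kwh(80, BRAIN_POWER_W))   # ~14,000 kWh for the brain alone
        print(lifetime_energy_kwh(80, BODY_POWER_W))    # ~70,000 kWh for the whole body
        # Schooling, housing, transport, care, etc. would multiply the whole-body
        # figure several times over, which is the "it adds up" argument above.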

  • Customarily, when doing these kinds of calculations we ignore the stuff that keeps us alive, because those things are needed regardless of economic contributions, since, you know, people are people and not tools.

    people are people and not tools

    But this comparison is weighing people as tools vs alternative tools.

  • you can give me a sandwich and I'll do a better job than AI

    But, will you do it 24-7-365?

  • The thing about self-driving is that it has been like 90-95% of the way there for a long time now. It made dramatic progress and then plateaued, as approaches have failed to close the gap despite exponentially more input being thrown at it for less and less incremental subjective improvement.

    But your point is accurate: humans have lapses and AI has lapses. The nature of those lapses is largely disjoint, which creates an opportunity for AI systems to augment a human driver and get the best of both worlds. A consistently vigilant computer monitors and tends the steering, acceleration, and braking so they do the 'right' thing in neutral conditions, while the human watches for the more anomalous situations that tend to confound the AI and makes the calls on navigating the intersections that FSD still can't figure out. At least for me the worst part of driving is the long-haul monotony on the freeway where nothing happens, and AI excels at not caring how monotonous it is and just handling it, so I can pay a bit more attention to what other things on the freeway are doing that might cause me problems.

    I don't have a Tesla, but I have a competitor system and have found it useful, though not trustworthy. It's enough to greatly reduce the drain of driving, but I have to be always looking around, and I have to assert control if there's a traffic jam coming up (it might stop in time, but it certainly doesn't slow down soon enough) or if I have to do a lane change in traffic (if traffic conditions are light, it can change lanes nicely, but without a whole lot of breathing room it won't do it, which is fine when I can afford to be stupidly cautious).

    The one "driving aid" that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that "dimension" of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) "Dumb" cruise control works similarly when there's no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment's notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

    Things like lane keeping seem to be more trouble than they're worth, to me in the situations I drive in.

    Not "AI" but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that "AI" with access to those tools does better than normal drivers without.

  • No shit. Doesn’t mean it still isn’t extremely useful and revolutionary.

    “AI” is a tool to be used, nothing more.

    Still, people find it difficult to navigate this. Its use cases are limited, but it doesn't enforce that limit by itself. The user needs to be knowledgeable of the limitations and care enough not to go beyond them. That's also where the problem lies. Leaving stuff to AI, even if it compromises the results, can save SO much time that it encourages irresponsible use.

    So to help remind people of the limitations of generative AI, it makes sense to fight the tendency of companies to overstate the ability of their models.

  • The one "driving aid" that I find actually useful is the following distance maintenance cruise control. I set that to the maximum distance it can reliably handle and it removes that "dimension" of driving problem from needing my constant attention - giving me back that attention to focus on other things (also driving / safety related.) "Dumb" cruise control works similarly when there's no traffic around at all, but having the following distance control makes it useful in traffic. Both kinds of cruise control have certain situations that you need to be aware of and ready to take control back at a moment's notice - preferably anticipating the situation and disengaging cruise control before it has a problem - but those exceptions are pretty rare / easily handled in practice.

    Things like lane keeping seem to be more trouble than they're worth, to me in the situations I drive in.

    Not "AI" but a driving tech that does help a lot is parking cameras. Having those additional perspectives from the camera(s) at different points on the vehicle is a big benefit during close-space maneuvers. Not too surprising that "AI" with access to those tools does better than normal drivers without.

    At least in my car, the lane following (not lane keeping) system is handy because the steering wheel naturally tends to go where it should and I'm less often "fighting" the tendency to center. The lane-keeping system, at least for me, is largely a non-event. If I use my turn signal, it ignores me crossing a lane. If circumstances demand an evasive maneuver that crosses a line, its resistance isn't enough to cause an issue. At least mine has fared surprisingly well in areas where the lane markings are all kind of jacked up due to temporary changes for construction. If it is off, my arms just have to assert a bit more effort to end up in the same place I was going to be with the system. Generally no passenger notices when the system engages or disengages, except for the chiming it does when it switches over to unaided operation.

    So at least my experience has been a positive one; it strikes the balance between intervention and human attention just right, including monitoring my gaze to make sure I am looking where I should. However, there are people who test "how long can I keep my hands off the steering wheel", which is a more dangerous mode of thinking.

    And yes, having cameras everywhere makes fine maneuvering so much nicer, even with the limited visualization possible in the synthesized 'overhead' view of your car.

  • I am talking more about listening to and reading scientists in the media. The definition of consciousness is vague at best.

    I think we actually agree, then.

  • We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.

    But what we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was discussed here five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.

    This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.

    So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition — not a shred — there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.

    Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the “hard problem of consciousness”. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, mental states with sensory representations (such as changes in heart rate, sweating and much more).

    Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.

    It is intelligent and deductive, but it is not cognitive or even dependable.
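    The "guesses which letter and word will come next" mechanism that comment describes can be shown with a toy lookup table. Real LLMs compute the probability distribution with a neural network over subword tokens rather than a hand-written table, so this is only an illustration of the sampling step:

        import random

        # Toy "language model": probabilities of the next word given the last two.
        NEXT_WORD_PROBS = {
            ("the", "cat"): {"sat": 0.6, "ran": 0.3, "meowed": 0.1},
            ("cat", "sat"): {"on": 0.9, "quietly": 0.1},
            ("sat", "on"):  {"the": 0.8, "a": 0.2},
        }

        def generate(prompt, steps=3):
            words = prompt.split()
            for _ in range(steps):
                context = tuple(words[-2:])          # condition on the last two words
                dist = NEXT_WORD_PROBS.get(context)
                if dist is None:
                    break
                choices, weights = zip(*dist.items())
                words.append(random.choices(choices, weights=weights)[0])
            return " ".join(words)

        print(generate("the cat"))   # e.g. "the cat sat on the"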

  • What language was the first language based upon?

    What music influenced the first song performed?

    What art influenced the first cave painter?

    You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

  • Ya of course I do. Humans are the most unreliable, slick, disgusting, diseased, morally inept living organisms on the planet.

    And they made the programs you seem to trust so much.

  • I think your argument is a bit beside the point.

    The first issue we have is that intelligence isn't well-defined at all. Without a clear definition of intelligence, we can't say whether something is intelligent, and even though we as a species have tried to come up with a definition for centuries, there still isn't a settled one.

    But the actual question here isn't "Can AI serve information?" but whether AI is an intelligence. And LLMs are not. They are not beings; they don't evolve, they don't experience.

    For example, LLMs don't have a memory. If you use something like ChatGPT, its state doesn't change when you talk to it. It doesn't remember. The only way it can keep up a conversation is that for each request the whole chat history is fed back into the LLM as an input. It's like talking to a demented person, but you give that demented person a transcript of your conversation, so that they can look up everything you or they have said during the conversation.

    The LLM itself can't change due to the conversation you are having with them. They can't learn, they can't experience, they can't change.

    All that is done in a separate training step, where essentially a new LLM is generated.
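    A minimal sketch of the transcript-replay mechanism described above; complete() is a hypothetical stand-in for any text-completion call, the point being that all "memory" lives in the resent prompt, not in the model:

        def complete(prompt):
            # Placeholder for a real model call; here it just echoes the last line.
            return "(model reply to: " + prompt.splitlines()[-1] + ")"

        history = []  # kept by the client; the model's weights never change

        def ask(user_message):
            history.append("user: " + user_message)
            prompt = "\n".join(history)     # resend everything said so far
            reply = complete(prompt)        # stateless call
            history.append("assistant: " + reply)
            return reply

        print(ask("What is the capital of France?"))
        print(ask("And its population?"))   # only makes sense because of the replayed history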

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that somehow because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to costs and hardware constraints, not algorithmic limits. On the subject of change, it's already incredibly taxing to train a model, so of course continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but I'm putting that in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience, but to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is that intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.
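    To make that "continuous training" idea concrete, here is a deliberately simplified sketch; Model and fine_tune are hypothetical placeholders, and the fine_tune step is exactly the part that is currently too expensive to run continuously:

        class Model:
            def __init__(self):
                self.version = 0
            def reply(self, prompt):
                return f"(v{self.version} reply to: {prompt})"

        def fine_tune(model, examples):
            # Placeholder for a real gradient-update pass over the new data.
            model.version += 1
            return model

        model = Model()
        buffer = []

        def chat(user_message):
            global model
            buffer.append(user_message)
            answer = model.reply(user_message)
            if len(buffer) >= 100:                # arbitrary batch size for the sketch
                model = fine_tune(model, buffer)  # the weights actually change here
                buffer.clear()
            return answer

        print(chat("hello"))   # served by the frozen model until the next fine-tune pass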

  • AI models are trained on basically the entirety of the internet, and more. Humans learn to speak from much less information. So there's likely a huge difference in how human brains and LLMs work.

    It doesn’t take the entirety of the internet just for an LLM to respond in English. It could do so with far less. But it also has the entirety of the internet, which arguably makes it superior to a human in breadth of information.

  • I've been thinking this for a while. When people say "AI isn't really that smart, it's just doing pattern recognition," all I can think is "don't you realize that is one of the most commonly brought up traits of the human mind?" Pareidolia is literally the tendency to see faces in things because the human mind is constantly looking for the "face pattern". Humans are at least 90% regurgitating previous data. It's literally why you're supposed to read to and interact with babies so much. It's how you learn "red glowy thing is hot". It's why education and access to knowledge is so important. It's every annoying person with endless "did you know?" facts. Science is literally "look at previous data, iterate a little bit, look at new data".

    None of what AI is doing is truly novel or different. But we've placed the human mind on this pedestal despite all the evidence to the contrary. Eyewitness testimony, optical illusions, magic tricks, the hundreds of common fallacies we fall prey to... our minds are incredibly fallible and are really just a hodgepodge of processes masquerading as "intelligence". We're a bunch of instincts in a trenchcoat. To think AI isn't or can't reach our level is just hubris, a trait that is probably more unique to humans.

    Yep we are on the same page. At our best, we can reach higher than regurgitating patterns. I’m talking about things like the scientific method and everything we’ve learned by it. But still, that’s a 5% minority, at best, of what’s going on between human ears.

  • Self Driving is only safer than people in absolutely pristine road conditions with no inclement weather and no construction. As soon as anything disrupts "normal" road conditions, self driving becomes significantly more dangerous than a human driving.

    Yes of course edge and corner cases are going to take much longer to train on because they don’t occur as often. But as soon as one self-driving car learns how to handle one of them, they ALL know. Meanwhile humans continue to be born and must be trained up individually and they continue to make stupid mistakes like not using their signal and checking their mirrors.

    Humans CAN handle cases that AI doesn’t know how to, yet, but humans often fail in inclement weather, around construction, etc etc.
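    The "they ALL know" point comes down to learned behaviour living in a model file that can be retrained centrally and copied to every vehicle; a sketch with entirely hypothetical names:

        # Toy illustration of fleet learning: one tricky scenario gets folded into a
        # centrally trained model, and every car that pulls the update benefits.
        central_model = {"version": 1, "handles": {"clear_roads", "highway_merge"}}

        def train_on(model, new_scenario):
            # Placeholder for the real (central, expensive) training process.
            return {"version": model["version"] + 1,
                    "handles": model["handles"] | {new_scenario}}

        class Car:
            def __init__(self):
                self.model = central_model
            def update(self):
                self.model = central_model   # over-the-air model download

        fleet = [Car() for _ in range(3)]
        central_model = train_on(central_model, "construction_zone_detour")
        for car in fleet:
            car.update()                     # now every car "knows" the new case

        print(all("construction_zone_detour" in car.model["handles"] for car in fleet))  # True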

  • Human brains are much more complex than a mirroring script xD AI and supercomputers have only a fraction of the number of neurons in your brain. But you're right, for you it's probably not much different from AI.

    I’m pretty sure an AI could throw out a lazy straw man and ad hominem as quickly as you did.

  • No idea why you're getting downvoted. People here don't seem to understand even the simplest concepts of consciousness.

    I guess it wasn't super relevant to the prior comment, which was focused more on AI embodiment. Eh, it's just numbers anyway, no sweat off my back. Appreciate you, though!

    If we can't say whether something is intelligent or not, why are we so hell-bent on creating this separation from LLMs? I perfectly understand the legal undermining of copyright, the weaponization of AI by marketing people, the dystopian levels of dependence we're developing on a so far unreliable technology, and the plethora of moral, legal, and existential issues surrounding AI, but this specific subject feels like such a silly hill to die on. We don't know if we're a few steps away from massive AI breakthroughs, and we don't know if we already have pieces of algorithms that closely resemble our brains' own. Our experience of reality could very well be broken down into the simple inputs and outputs of an algorithmic infinite loop; it's our hubris that elevates it to some mystical, unreproducible thing that only the biomechanics of carbon-based life can achieve, and only at our level of sophistication. You may well recall we've been down this road with animals before, claiming they don't have souls or aren't conscious beings, that somehow because they don't clearly match our intelligence in all aspects (even though they clearly feel, bond, dream, remember, and learn), they're an inferior or less valid existence.

    You're describing very fixable limitations of ChatGPT and other LLMs, limitations that are in place mostly due to costs and hardware constraints, not algorithmic limits. On the subject of change, it's already incredibly taxing to train a model, so of course continuous, uninterrupted training to more closely mimic our brains is currently out of the question, but it sounds like a trivial mechanism to put into place once the hardware or the training processes improve. I say trivial, making it sound actually trivial, but I'm putting that in comparison to, you know, actually creating an LLM in the first place, which is already a gargantuan task to have accomplished. The fact that we can even compare a delusional model to a person with heavy mental illness is already such a big win for the technology, even though it's meant as an insult.

    I'm not saying LLMs are alive, and they clearly don't experience the reality we experience, but to say there's no intelligence there because the machine that speaks exactly like us, and often better than us, unlike any other being on this planet, has some other faults or limitations... is kind of stupid. My point is that intelligence might be hard to define, but it might not be as hard to crack algorithmically if it's an emergent property, and enforcing this "intelligence" separation only hinders our ability to recognize whether we're on the right path to a completely artificial being that can experience reality. We clearly are, LLMs and other models are clearly a step in the right direction, and we mustn't let our hubris cloud that judgment.

    What is kinda stupid is not understanding how LLMs work, not understanding what the inherent limitations of LLMs are, not understanding what intelligence is, not understanding the difference between an algorithm and intelligence, not understanding the difference between imitating something and being something, claiming to "perfectly" understand all sorts of issues surrounding LLMs and then choosing to just ignore them, and then still thinking you have enough of a point to call other people in the discussion "kind of stupid".

  • But, will you do it 24-7-365?

    i dont have anything else going on, man

  • You seem to think that one day somebody invented the first language, or made the first song?

    There was no "first language" and no "first song". These things would have evolved from something that was not quite a full language, or not quite a full song.

    Animals influenced the first cave painters, that seems pretty obvious.

    Yeah dude, at one point there were no languages and no songs. You can get into "what counts as a language", but at one point there were none. Same with songs.

    Language specifically was pretty unlikely to be an individual effort, but at one point people grunting at each other became something else entirely.

    Your whole "there is nothing new under the sun" way of thinking is just an artifact of the era you were born in.
