
Leading AI Models Are Completely Flunking the Three Laws of Robotics

Technology
  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Exactly. But what if there were more than just three (the infamous "guardrails")?

  • This post did not contain any content.

    Good God, what an absolutely ridiculous article. I would be ashamed to write that.

    Most fundamentally, of course, is the fact that the laws of robotics are not intended to work and are not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.

  • Asimov had quite a different idea.

    What if robots become like humans some day?

    That was his general topic throughout many of his stories. The three laws were quite similar to the former slavery laws of the USA. With this analogy he worked on the question: if robots are nearly like humans, or even indistinguishable from humans, would/should they still stay our servants?

    Yepyep, agreed! I was referring strictly to the Three Laws as a cautionary element.

    Otherwise, I, too, think the point was to show that the only viable way to approach an equivalent or superior consciousness is as at least an equal, not as an inferior.

    And it makes a lot of sense. There's not much stopping a person from doing heinous stuff if a body of laws were the only thing stopping them. I think socialisation plays a much more relevant role in the development of a conscience, of a moral compass, because empathy (edit: and by this I don't mean just the emotional bit of empathy, I mean everything which can be considered empathy, be it emotional, rational, or anything in between and around) is a significantly stronger motivator for avoiding doing harm than "because that's the law."

    It's basic child rearing as I see it: if children aren't socialised, there will be a much higher chance that they won't understand why doing something would harm another; they won't see the actual consequences of their actions upon the subject. And if they don't understand that the subject of their actions is a being just like them, with an internal life and feelings, then they won't have a strong enough reason not to treat the subject as a piece of furniture, or a tool, or any other object one could see around them.

    Edit: to clarify, the distinction I made between equivalent and superior consciousness wasn't in reference to how smart one or the other is; I was referring to the complexity of said consciousness. For instance, I'd perceive anything which reacts to the world around it in a deliberate manner to be somewhat equivalent to me (dogs, say), whereas something which takes in all of the factors mine does, plus some others, would be superior in terms of complexity. I genuinely don't even know what example to offer here, because I can't picture it. Which I think underlines why I'd say such a consciousness is superior.

    I will say, I would now rephrase it as "superior/different" in retrospect.

    Exactly. But what if there were more than just three (the infamous "guardrails")?

    I genuinely think it's impossible. I think this would land us in RoboCop 2, where they started overloading Murphy's system with thousands of directives (granted, not with the purpose of generating the perfect set of Laws for him) and he just ends up acting like a generic pull-string action figure, becoming "useless" as a conscious being.

    Most certainly impossible when attempted by humans, because we're barely even competent enough to guide ourselves, let alone something else.

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

    Maybe the author let AI write the article?

  • OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS

    That's the entire reason Asimov invented them: he knew, as a person who approached things scientifically (he was an actual scientist), that unless you specifically forced robots to follow guidelines of conduct, they'd do whatever is most convenient for themselves.

    Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.

    The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.
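
    A minimal, purely hypothetical sketch of that difference (none of these names come from the article or from Asimov, and the boolean flags are assumed to come from some separate classifier): with a modern model you can only bolt a rule check on from the outside, whereas the positronic laws were part of the brain itself.

```python
# Hypothetical illustration: an external "gate" that forces compliance,
# as opposed to laws woven into the hardware of a positronic brain.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    harms_human: bool      # would the action injure a human?
    disobeys_order: bool   # does it conflict with a human order?
    endangers_self: bool   # does it put the robot itself at risk?

def three_laws_gate(action: ProposedAction) -> bool:
    """Veto the action unless it passes a naive Three Laws check."""
    if action.harms_human:        # First Law
        return False
    if action.disobeys_order:     # Second Law
        return False
    if action.endangers_self:     # Third Law
        return False
    return True

print(three_laws_gate(ProposedAction("fetch coffee", False, False, False)))   # True
print(three_laws_gate(ProposedAction("shove a human", True, False, False)))   # False
```

    The point of the sketch is only that the model inside never "knows" the laws; the check is bolted on after the fact, which is exactly the gap described above.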

  • The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.

    Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

    He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it was better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping make political decisions, which would be very difficult for robots that had to follow the first law.

  • Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

    He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it was better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping make political decisions, which would be very difficult for robots that had to follow the first law.

    I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Most of the stories are about how the laws don't work and how to circumvent them, yes.

  • I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

    He wrote so many short stories about robots that it would be quite a feat if you had read all of them. When I was a child, I would always go to Half-Price Books and purchase whatever they had by Asimov that I hadn't already read, but I think he wrote something like 500 books.

  • Most of the stories are about how the laws don't work and how to circumvent them, yes.

    Most people never read Asimov, and it shows.

    Good God, what an absolutely ridiculous article. I would be ashamed to write that.

    Most fundamentally, of course, is the fact that the laws of robotics are not intended to work and are not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.

    People not getting the message is the default, I think, for everything. It's like the song "Mother Knows Best" from Disney's Tangled: how many mothers say, "See, mother knows best"?

    Most people never read Asimov, and it shows.

    I don't think reading Asimov would help most people. I think most people will just not get the point of anything unless you spell it out.

    I don't think reading Asimov would help most people. I think most people will just not get the point of anything unless you spell it out.

    Critical thinking is indeed dead for much of the population.

  • Most of the stories are about how the laws don't work and how to circumvent them, yes.

    Some of the stories do also include solutions to those same issues, though that also tends to lead to limiting the capabilities of the robots. The message could be interpreted as there being a trade-off between versatility and risk.
