
Leading AI Models Are Completely Flunking the Three Laws of Robotics

Technology
  • Yeah, that's where my mind is at too.

    AI in its present form does not act. It does not do things. All it does is generate text. If a human responds to this text in harmful ways, that is human action. I suppose you could make a robot whose input is somehow triggered by the text, but neither it nor the text generator know what's happening or why.

    I'm so fucking tired of the way uninformed people keep anthropomorphizing this shit and projecting their own motives upon things that have no will or experiential qualia.

    Agentic AI is a thing. AI can absolutely do things... it can send commands over an API that sends signals to electronics, like pulling triggers.
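
    To make that concrete, here is a minimal, purely hypothetical sketch of how "just text" becomes action in an agentic setup: the model only ever emits a string, but a dispatcher parses that string and calls a function wired to hardware. The tool name, the fake model output, and the simulated device call below are invented for illustration and don't correspond to any real product's interface.

        # Minimal agentic-dispatch sketch (Python). Everything named here is
        # hypothetical: the tool registry, the "actuator.set" tool, and the
        # fake model output are illustrative stand-ins.
        import json

        # Pretend this string came back from a language model that was told it
        # may emit JSON tool calls; in a real agent framework the model
        # produces text shaped like this.
        FAKE_MODEL_OUTPUT = '{"tool": "actuator.set", "args": {"channel": 3, "state": "on"}}'

        def set_actuator(channel: int, state: str) -> str:
            # In a real deployment this might POST to a device controller's
            # REST API; here we only print, so the sketch runs with no hardware.
            print(f"(simulated) POST /actuators/{channel} state={state}")
            return "ok"

        # Hypothetical registry mapping tool names to functions that act on the world.
        TOOLS = {"actuator.set": lambda args: set_actuator(**args)}

        def dispatch(model_output: str) -> str:
            # The model only produced text...
            call = json.loads(model_output)
            tool = TOOLS.get(call.get("tool"))
            if tool is None:
                return "no such tool"
            # ...but this line is where text turns into an action.
            return tool(call.get("args", {}))

        if __name__ == "__main__":
            print(dispatch(FAKE_MODEL_OUTPUT))

    The acting part is ordinary glue code humans wrap around the model; the model itself still only generates text.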

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

    Why not both?

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Exactly. But what if there were more than just three (the infamous "guardrails")?

  • This post did not contain any content.

    Good God what an absolutely ridiculous article, I would be ashamed to write that.

    Most fundamentally, of course, is the fact that the laws of robotics were never intended to work and were not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.

  • Asimov had quite a different idea.

    What if robots become like humans some day?

    That was his general topic throughout many of his stories. The three laws were quite similar to the former slavery laws of the USA. With this analogy he worked on the question: if robots are nearly like humans, or even indistinguishable from humans, would or should they still remain our servants?

    Yepyep, agreed! I was referring strictly to the Three Laws as a cautionary element.

    Otherwise, I, too, think the point was to show that the only viable way to approach an equivalent or superior consciousness is as at least an equal, not as an inferior.

    And it makes a lot of sense. There's not much stopping a person from doing heinous stuff if a body of laws were the only thing to stop them. I think socialisation plays a much more relevant role in the development of a conscience, of a moral compass, because empathy (edit: and by this I don't mean just the emotional bit of empathy, I mean everything which can be considered empathy, be it emotional, rational, or anything in between and around) is a significantly stronger motivator for avoiding doing harm than "because that's the law."

    It's basic child rearing as I see it: if children aren't socialised, there's a much higher chance that they won't understand why doing something would harm another; they won't see the actual consequences of their actions upon the subject. And if they don't understand that the subject of their actions is a being just like them, with an internal life and feelings, then they won't have a strong enough reason not to treat the subject as a piece of furniture, or a tool, or any other object one could see around them.

    Edit: to clarify, the distinction I made between equivalent and superior consciousness wasn't in reference to how smart one or the other is, I was referring to the complexity of said consciousness. For instance, I'd perceive anything which reacts to the world around them in a deliberate manner to be somewhat equivalent to me (see dogs, for instance), whereas something which takes in all of the factors mine does, plus some others, would be superior in terms of complexity. I genuinely don't even know what example to offer here, because I can't picture it. Which I think underlines why I'd say such a consciousness is superior.

    I will say, I would now rephrase it as "superior/different" in retrospect.

    Exactly. But what if there were more than just three (the infamous "guardrails")?

    I genuinely think it's impossible. I think it would land us in RoboCop 2, where they started overloading Murphy's system with thousands of directives (granted, not with the purpose of generating the perfect set of laws for him) and he ended up acting like a generic pull-string action figure, becoming "useless" as a conscious being.

    Most certainly impossible when attempted by humans, because we're barely even competent enough to guide ourselves, let alone something else.

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

    Maybe the author let AI write the article?

  • OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS

    That's the entire reason Asimov invented them: he knew, as a person who approached things scientifically (he was, after all, an actual scientist), that unless you specifically forced robots to follow guidelines of conduct, they'd do whatever is most convenient for themselves.

    Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.

    The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.

  • The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.

    Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

    He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it were better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping to make political decisions, which would be very difficult for robots that had to follow the first law.

  • Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

    He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it were better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping to make political decisions, which would be very difficult for robots that had to follow the first law.

    I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Most of the stories are about how the laws don't work and how to circumvent them, yes.

  • I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

    He wrote so many short stories about robots that it would be quite a feat if you had read all of them. When I was a child, I would always go to Half-Price Books and purchase whatever they had by Asimov that I hadn't already read, but I think he wrote something like 500 books.

  • Most of the stories are about how the laws don't work and how to circumvent them, yes.

    Most people never read Asimov, and it shows.

  • Good God what an absolutely ridiculous article, I would be ashamed to write that.

    Most fundamentally, of course, is the fact that the laws of robotics were never intended to work and were not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.

    People not getting the message is the default, I think, for everything. It's like the song "Mother Knows Best" from Disney's Tangled: how many mothers say, "See, mother knows best"?

    Most people never read Asimov, and it shows.

    I don't think reading Asimov would help most people. I think most people just won't get the point of anything unless you spell it out.

    I don't think reading Asimov would help most people. I think most people just won't get the point of anything unless you spell it out.

    Critical thinking is indeed dead for much of the population.

  • Most of the stories are about how the laws don't work and how to circumvent them, yes.

    Some of the stories do also include solutions to those same issues, though that tends to lead to limiting the robots' capabilities. The message could be interpreted as there being a trade-off between versatility and risk.
