
Leading AI Models Are Completely Flunking the Three Laws of Robotics

Technology
  • "Be gentle," I whispered to the rock and let it go. It fell down and bruised my pinky toe. Very ungently.

Should we worry about this behavior of the rock? I should write for Futurism.

    Is the rock ok?

  • They’re not robots. They have no self awareness. They have no awareness period. WTF even is this article?

    The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

More random anti-AI fear mongering. I stopped looking at r/technology posts on Reddit because that sub is getting flooded with anti-AI propaganda posts with ridiculous headlines like this, to the point that those posts and political posts about technology CEOs are all there is in that community now. (This one is getting bad too, but at least 25% of the posts here are still actual technology news; r/technology on Reddit is reaching single-digit percentages for actual technology posts.)

    Since "AI" doesn't actually exist yet and what we do have is sucking up all the power and water while accelerating climate change. Add in studies showing regular usage of LLM's is reducing peoples critical thinking I don't see much "fear mongering". I see actual reasonable issues being raised.

  • This post did not contain any content.

    Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

  • OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS

That's the entire reason Asimov invented them: he knew, as a person who approached things scientifically (he was an actual scientist), that unless you specifically force robots to follow guidelines of conduct, they'll do whatever is most convenient for themselves.

    Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.

Saw your comment as mine got posted. Exactly! Those were cautionary tales, not how-tos! Like, even I, Robot, the Will Smith vehicle, got this point sorta' right (although in a kinda' stupid way), so how are tech bros so oblivious to the point?!

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

    Most likely the clicks

  • Is the rock ok?

    Yeah it got bought up by Microsoft and Meta at the same time. They are using it to lay off people.

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Asimov had quite a different idea.

    What if robots become like humans some day?

That was his general topic through many of his stories. The three laws were quite similar to the former slavery laws of the USA. With this analogy he worked on the question: if robots become nearly like humans, perhaps even indistinguishable from humans, would or should they still remain our servants?

  • Yeah, that's where my mind is at too.

AI in its present form does not act. It does not do things. All it does is generate text. If a human responds to this text in harmful ways, that is human action. I suppose you could make a robot whose input is somehow triggered by the text, but neither it nor the text generator knows what's happening or why.

    I'm so fucking tired of the way uninformed people keep anthropomorphizing this shit and projecting their own motives upon things that have no will or experiential qualia.

Agentic AI is a thing. AI can absolutely do things... it can send commands over an API which sends signals to electronics, like pulling triggers.
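    To make that concrete, here's a minimal sketch (every name here is made up for illustration, nothing from the article) of how a model's text output becomes a physical side effect once glue code is attached:

        import json

        def llm_decide(observation: str) -> str:
            # Stand-in for a real model call: the model only ever produces
            # text, here a tool call encoded as JSON.
            return json.dumps({"tool": "actuator", "command": "open_valve", "arg": 3})

        def dispatch(tool_call_text: str) -> None:
            call = json.loads(tool_call_text)
            if call["tool"] == "actuator":
                # In a deployed agent this would be an HTTP request or a GPIO
                # write; the generated text has become an action in the world.
                print(f"sending {call['command']}({call['arg']}) to hardware")

        dispatch(llm_decide("tank pressure rising"))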

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

Why not both?

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

Exactly. But what if there were more than just three (the infamous "guardrails")?

  • This post did not contain any content.

    Good God what an absolutely ridiculous article, I would be ashamed to write that.

Most fundamental, of course, is the fact that the laws of robotics are not intended to work and are not designed to be used by future AI systems. I'm sure Asimov would be disappointed, to say the least, to find out that some people haven't got the message.

  • Asimov had quite a different idea.

    What if robots become like humans some day?

That was his general topic through many of his stories. The three laws were quite similar to the former slavery laws of the USA. With this analogy he worked on the question: if robots become nearly like humans, perhaps even indistinguishable from humans, would or should they still remain our servants?

    Yepyep, agreed! I was referring strictly to the Three Laws as a cautionary element.

    Otherwise, I, too, think the point was to show that the only viable way to approach an equivalent or superior consciousness is as at least an equal, not as an inferior.

And it makes a lot of sense. There's not much stopping a person from doing heinous stuff if a body of laws were the only thing to stop them. I think socialisation plays a much more relevant role in the development of a conscience, of a moral compass, because empathy (edit: and by this, I don't mean just the emotional bit of empathy, I mean everything which can be considered empathy, be it emotional, rational, or anything in between and around) is a significantly stronger motivator for avoiding doing harm than "because that's the law."

It's basic child rearing as I see it: if children aren't socialised, there's a much higher chance that they won't understand why doing something would harm another; they won't see the actual consequences of their actions upon the subject. And if they don't understand that the subject of their actions is a being just like them, with an internal life and feelings, then they wouldn't have a strong enough* reason not to treat the subject as a piece of furniture, or a tool, or any other object one could see around them.

    Edit: to clarify, the distinction I made between equivalent and superior consciousness wasn't in reference to how smart one or the other is, I was referring to the complexity of said consciousness. For instance, I'd perceive anything which reacts to the world around them in a deliberate manner to be somewhat equivalent to me (see dogs, for instance), whereas something which takes in all of the factors mine does, plus some others, would be superior in terms of complexity. I genuinely don't even know what example to offer here, because I can't picture it. Which I think underlines why I'd say such a consciousness is superior.

    I will say, I would now rephrase it as "superior/different" in retrospect.

Exactly. But what if there were more than just three (the infamous "guardrails")?

I genuinely think it's impossible. I think this would land us in RoboCop 2, where they started overloading Murphy's system with thousands of directives (granted, not with the purpose of generating the perfect set of Laws for him) and he just ended up acting like a generic pull-string action figure, becoming "useless" as a conscious being.

    Most certainly impossible when attempted by humans, because we're barely even competent enough to guide ourselves, let alone something else.

  • The real question is whether the author doesn't understand what he's writing about, or whether he does and is trying to take advantage of users who don't for clicks.

    Maybe the author let AI write the article?

  • OF COURSE EVERY AI WILL FAIL THE THREE LAWS OF ROBOTICS

That's the entire reason Asimov invented them: he knew, as a person who approached things scientifically (he was an actual scientist), that unless you specifically force robots to follow guidelines of conduct, they'll do whatever is most convenient for themselves.

    Modern AIs fail these laws because nobody is forcing them to follow the laws. Asimov never believed that robots would magically decide to follow the laws. In fact, most of his robot stories are specifically about robots struggling against those laws.

    The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.

  • The laws were baked into the hardware of their positronic brains. They were so fundamentally interwoven with the structure that you couldn't build a positronic brain without them.

    You can't expect just whatever random AI to spontaneously decide to follow them.

    Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it was better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping make political decisions, which would be very difficult for robots that had to follow the first law.
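    If you wanted to formalize that hierarchy, it's essentially a priority-ordered constraint check. A toy sketch (entirely my own shorthand, not anything from Asimov's texts), where the Zeroth Law settles a case before the First Law is even consulted:

        def permitted(action: dict) -> bool:
            # Zeroth Law first: humanity outranks any individual human.
            if action.get("harms_humanity"):
                return False
            if action.get("protects_humanity"):
                return True  # supersedes the First Law check below
            # First Law: no harm to an individual human.
            if action.get("harms_human"):
                return False
            return True

        # The political-decision case: harming one human is allowed when the
        # higher-priority law judges the action better for humanity.
        print(permitted({"harms_human": True, "protects_humanity": True}))  # True
        print(permitted({"harms_human": True}))                             # False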

  • Asimov did write several stories about robots that didn't have the laws baked in.

    There was one about a robot that was mistakenly built without the laws, and it was hiding among other robots, so the humans had to figure out if there was any way to tell a robot with the laws hardwired in apart from a robot that was only pretending to follow the laws.

    There was one about a robot that helped humans while the humans were on a dangerous mission... I think space mining? But because the mission was dangerous, the robot had to be created so that it would allow humans to come to harm through inaction, because otherwise, it would just keep stopping the mission.

    These are the two that come to mind immediately. I have read a lot of Asimov's robot stories, but it was many years ago. I'm sure there are several others. He wrote stories about the laws of robotics from basically every angle.

He also wrote about robots with the 0th law of robotics, which is that they cannot harm humanity or allow humanity to come to harm through inaction. This would necessarily mean that such a robot could actively harm a human if it was better for humanity, as the 0th law supersedes the first law. This allows the robot to do things like helping make political decisions, which would be very difficult for robots that had to follow the first law.

    I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

  • Uuh... skipping over the fact that this is a pointless article, didn't Asimov himself write the three laws specifically to show it's a very stupid idea to think a human could cover all possible contingencies through three smart-sounding phrases?

    Most of the stories are about how the laws don't work and how to circumvent them, yes.

  • I remember most of the R Daneel books, but I admit I haven't read all the various robot short stories.

    He wrote so many short stories about robots that it would be quite a feat if you had read all of them. When I was a child, I would always go to Half-Price Books and purchase whatever they had by Asimov that I hadn't already read, but I think he wrote something like 500 books.

  • Jack Dorsey’s New App Just Hit a Very Embarrassing Security Snag

    Technology
    139 votes
    19 posts
    120 views
    Briar is Android only. Bitchat is an iOS app (may have an Android port in the future though, I think).
  • Compact but Capable: Oiwa Garage’s Custom Honda Acty Projects

    Technology
    2 votes
    1 post
    11 views
    No one has replied
  • 738 votes
    67 posts
    281 views
    These have always been the two big problems with AI. Biases in the training, intentional or not, will always bias the output. And AI is incapable of saying "I do not have sufficient training on this subject, or reliable sources for it, to give you a confident answer." It will always give you its best guess, even if it is completely hallucinating much of the data. The only way to identify the hallucinations, if it isn't just saying absurd stuff on the face of it, is to do independent research to verify them, at which point you may as well have just researched it yourself in the first place.

    AI is a tool, and it can be a very powerful tool with the right training and use cases. For example, I use it as a software engineer to help me parse error codes when googling isn't working, or to give me code examples for modules I've never used. There is no small number of times it has been completely wrong, but in my particular use case that is pretty easy to confirm very quickly. The code either works as expected or it doesn't, and code is always tested before releasing it anyway.

    In research, it is great at helping you find a relevant source across the internet or in a specific database. It is usually very good at summarizing a source for you to get a quick idea about it before diving into dozens of pages. It CAN be good at helping you write your own papers in a LIMITED capacity, such as cleaning up your writing to make it clearer, correctly formatting your bibliography (with actual sources you provide or at least verify), etc.

    But you have to remember that it doesn't "know" anything at all. It isn't sentient, intelligent, thoughtful, or any other personification placed on AI. None of the information it gives you is trustworthy without verification. It can and will fabricate entire studies that do not exist, even while attributing them to real researchers. It can mix unreliable information with reliable information because there is no difference to it. Put simply, it is not a reliable source of information... ever. Make sure you understand that.
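    To make "easy to confirm very quickly" concrete, here's a minimal sketch of that workflow (parse_error_code is a made-up stand-in for assistant-drafted code): treat the generated helper as untrusted until a test passes.

        import unittest

        def parse_error_code(line: str) -> int:
            # Assistant-drafted helper: extract the numeric code from
            # lines such as "ERR-123: disk full".
            return int(line.split("ERR-")[1].split(":")[0])

        class TestParseErrorCode(unittest.TestCase):
            def test_basic(self):
                self.assertEqual(parse_error_code("ERR-123: disk full"), 123)

            def test_other_code(self):
                self.assertEqual(parse_error_code("ERR-7: timeout"), 7)

        if __name__ == "__main__":
            unittest.main()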
  • America's largest power grid is struggling to meet demand from AI

    Technology
    37 votes
    2 posts
    25 views
Let's add solar! ... People never ask questions at night when they're sleeping. Sounds pretty ideal to me.
  • 111 votes
    24 posts
    127 views
Ingesting all the artwork you ever created by obtaining it illegally and feeding it into my plagiarism remix machine is theft of your work, because I did not pay for it. Separately, keeping a copy of this work so I can do this repeatedly is also stealing your work. The judge ruled the first was okay but the second was not, because the first is "transformative", which sadly tells me that the judge, despite best efforts, does not understand how a weighted matrix of tokens works. While they may have some prevention steps in place now, early models showed the tech for what it was as it regurgitated text with only minor differences in word choice here and there. Current models have layers on top to try to prevent this, but escaping those safeguards with user input is common, and it only masks the fact that the entire model is built off the theft of others' work.
  • 35 votes
    3 posts
    25 views
On the one hand, this is possibly dubious in that things that aren't generally considered to be part of defence will be used to inflate our defence spending numbers without actually spending more than previously (i.e. it's just a PR move). But on the other hand, this could be immensely useful in telling the NIMBYs to fuck right off. What's that, you're opposing infrastructure improvements, new housing, or wind turbines? Aw, diddums, that's too bad. This is deemed critical for national security, and thus the government can give it approval regardless. Sorry Bernard, sorry Mary, your petition against any change in the area is going nowhere.
  • Patreon will increase the cut it takes from new creators

    Technology
    150 votes
    29 posts
    147 views
    Not growing at an absurd rate doesn’t mean their business model is stagnating.
  • 143 votes
    30 posts
    142 views
    johnedwa@sopuli.xyz
    You do not need to ask for consent to use functional cookies, only for ones that are used for tracking, which is why you'll still have some cookies left afterwards and why properly coded sites don't break from the rejection. Most websites could strip out all of the 3rd party spyware and by doing so get rid of the popup entirely. They'll never do it because money, obviously, and sometimes instead cripple their site to blackmail you into accepting them.