
Are Asimov's three laws of robotics safe?

  1. Jul 17, 2004 #1
    A common suggestion on the topic of AI ethics is to implement the Three Laws of Robotics that Isaac Asimov explored in his science fiction stories (e.g. in "I, Robot", which has just come out as a movie).

    But are these laws really safe? As Asimov's stories themselves make clear, they can be misinterpreted in a lot of ways.

    On the 3 Laws Unsafe website, recently launched by the Singularity Institute, you'll find several articles further explaining possible flaws in the three laws.

    I think that as AI progresses, it will become extremely important to deal with safety issues in less simplistic ways. Any opinions/etc on this?
  3. Jul 17, 2004 #2
    In allowing AI to progress, it is essential to keep that progress under control indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.
  4. Jul 18, 2004 #3
    First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

    If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

    Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly that of survival. I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".
  5. Jul 18, 2004 #4
    I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of themselves. Not just nice in a way we can exactly specify in advance; nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem.

    Maybe; however, I think we can't afford not to think extensively about this subject in advance.

    Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.

    Not necessarily; surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.

    The biological beings to whom it wasn't important died without leaving children.
  6. Jul 26, 2004 #5
    Please bear with me, all, for this is my first post here.

    It would be nice in theory, but is it really possible? To make it nice we would, in effect, have to teach it, sure. But to keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist. I know where you're coming from, Ontoplankton, and I have read the links (very good, I have to add, thank you). However, and forgive me if this well-read book has been brought up here before, I believe it's essential reading: Society of the Mind, for those who haven't read it, is a look into how AI can ultimately start off with all the best intentions and inadvertently go awry.
  7. Jul 26, 2004 #6
    What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

    I can make the "rational" choice of not eating, and after five to seven days I'll be dead. Note, however, that this is not considered "normal" behaviour. For the majority of us (and all living beings, for that matter), survival is pure instinct, i.e. programmed, inherent.

    I don't think there were such beings. It would be interesting if they existed, though.

    The concept of good and evil is a notion developed by humans. If there were no humans, then there would be no evil.
  8. Jul 26, 2004 #7
    Ethics alone can't resolve the issues with the Three Laws of Robotics. Epistemology is necessary, and neither Asimov nor others have pursued these foundations extensively (except maybe in the story "Reason" of the book "I, Robot"). In short:
    "what does a robot think is true, what can a robot know, how good is a robot's knowledge?"


    In the story "Evidence" of the book "I, Robot", the Three Laws are compared loosely to the ethics of a good citizen (without the rigorous determinism).
  9. Jul 26, 2004 #8



    The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.
  10. Jul 26, 2004 #9
    Ditto. Hence my first post in this thread.
  11. Jul 27, 2004 #10
    I'm not sure. I don't agree with any of the reasons I've seen why it's supposed to be impossible a priori, though.

    If you're really interested in the subject, I recommend diving deeper into what the SingInst has to say on "Friendly AI", especially here. There's an enormous amount of insight there.

    We don't necessarily need true, absolute perfection, just a sufficiently close approximation. I don't see why an almost-perfect world couldn't exist (keeping in mind that we don't know exactly what we mean by "perfect").

    If there exist humans who are nice enough that we'd entrust the future to them, or at least that we're comfortable having them around in modern society, why shouldn't we also be able to make such an AI? We could even "clean it up" beyond what any human could achieve, if we knew what we were doing.
    Last edited: Jul 27, 2004
  12. Jul 27, 2004 #11
    By the "nastier aspects of evolution", I mean all the red-in-tooth-and-claw stuff. "Survival of the fittest" may be how nature works, but I don't think it's something to strive for just because it's how nature works.

    The purpose of cooperating with AI is basically the same as the purpose of cooperating with anyone: to not be harmed, and to not harm others, and to help us achieve (and maybe rethink) our goals.

    I might as well ask, what's the purpose of being killed by AI?

    Mostly true, but I was talking about AIs there, not humans.
  13. Jul 27, 2004 #12
    Right; while it looks to me like the Three Laws are often taken seriously in popular discussions, I don't think there are many real AI projects that are proposing to implement them. (I think Asimov took them fairly seriously, though.)

    Still, the arguments against Asimov's Laws apply to more approaches than just the Three Laws themselves. I think any approach based on a few (or many) unchangeable moral rules is dangerous. Very few people seem to be taking the problems of ethical AI seriously enough to think hard about them.
  14. Aug 7, 2004 #13
    It makes for great storytelling. But Asimov meant it as logical and idealistic, as shown by how many times the laws are not merely bent but outright bypassed by way of constructs in the stories.

    Oh yeah, don't forget the Zeroth Law. Or the Sub-First Law. Kidding about that last one ;)
  15. Aug 8, 2004 #14
    That's the one that says "A robot may not injure humanity, or, through inaction, allow humanity to come to harm", which is even more ambiguous than the others, IMHO.

    If any of you believe words like "harm" can be easily and clearly defined, I recommend googling "Ronald Opus" for a highly amusing story. :biggrin:
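    Just to make that concrete: here's a toy sketch, in Python, of what a painfully literal reading of the First Law might look like. Every name in it is made up purely for illustration; nobody is proposing this as an actual design. Notice that all the difficulty gets pushed into the causes_harm predicate, which is exactly the part no one knows how to write down.

    # A deliberately naive, literal reading of Asimov's First Law.
    # All of the difficulty hides inside causes_harm(), which is
    # precisely the part that has no agreed-upon definition.

    def causes_harm(action, human):
        """Does this action harm this human?
        Physical injury only? Financial loss? Hurt feelings?
        Harm now, or harm ten years from now? Any answer we
        hard-code here is arbitrary."""
        raise NotImplementedError("'harm' has no clear definition")

    def first_law_permits(action, humans):
        # "A robot may not injure a human being..."
        if any(causes_harm(action, h) for h in humans):
            return False
        # "...or, through inaction, allow a human being to come to harm."
        # Checking the inaction clause would mean predicting every
        # consequence of doing nothing, for every human -- intractable
        # as stated, even before we settle what "harm" means.
        return True

    Any rule-based approach inherits this problem: the rules can only be as good as the definitions they're built on.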
  16. Aug 9, 2004 #15



    Instead of trying to figure out better logic for the robots, why not just give the robots a critical flaw? Make the robots run on Mac OS 9, so when things get too complicated, they'll just crash :biggrin: .