Are Asimov's three laws of robotics safe?

In summary, the conversation discusses the suggestion of implementing the Three Laws of Robotics, as explored in Isaac Asimov's science fiction stories, as a means of addressing AI ethics, along with concerns about the laws' safety and the many ways they can be misinterpreted. The 3 Laws Unsafe website, launched by the Singularity Institute, offers articles further exploring flaws in the three laws. The conversation also covers the importance of controlling the progress of AI, the potential for AI to surpass human intelligence, why survival and reproduction matter to biological beings, the idea of designing AI to be "nice" and to cooperate with humans, and the roles of ethics and epistemology in addressing AI ethics.
  • #1
Ontoplankton
A common suggestion on the topic of AI ethics is to implement the Three Laws of Robotics that Isaac Asimov explored in his science fiction stories (e.g. in "I, Robot", which has just come out as a movie).

But are these laws really safe? As Asimov's stories themselves make clear, they can be misinterpreted in a lot of ways.

On the 3 Laws Unsafe website, recently launched by the Singularity Institute, you'll find several articles further explaining possible flaws in the three laws.

I think that as AI progresses, it will become extremely important to deal with safety issues in less simplistic ways. Any opinions on this?
 
  • #2
In allowing AI to progress, it is essential to also keep that progress under control indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.
 
  • #3
First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival. I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".
 
  • #4
In allowing AI to progress, it is essential to also keep that progress under control indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.

I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of themselves. Not just nice in a way we can exactly specify in advance; nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem (see http://singinst.org/friendly/ ).
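To make the letter-versus-spirit distinction concrete, here is a deliberately toy sketch in Python. Every name in it is hypothetical, not a real design, and the two unimplemented arguments in the second function are exactly the non-trivial part:

```python
# Toy contrast between "letter" and "spirit" approaches to niceness.
# All names here are hypothetical illustrations, not a real proposal.

def letter_of_the_law(action_label):
    """Rule-based filter: forbid anything on a fixed blacklist.
    Anything not listed -- however harmful -- slips through."""
    forbidden = {"injure_human", "disobey_order", "destroy_self"}
    return action_label not in forbidden

def spirit_of_the_law(action, predict_outcome, human_welfare):
    """Goal-directed check: judge the predicted outcome, not the label.
    predict_outcome and human_welfare are stand-ins for the genuinely
    hard, unsolved parts of the problem."""
    return human_welfare(predict_outcome(action)) >= 0
```

The first function checks what an action is called; the second would have to evaluate what the action actually does, which is where all the difficulty lives.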

e(ho0n3 said:
First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

Maybe; however, I think we can't afford not to think extensively about this subject in advance.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival.

Not necessarily; surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.

I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".

The biological beings to whom it wasn't important died without leaving children.
 
  • #5
Please bear with me, all, for this is my first post here.

I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of themselves. Not just nice in a way we can exactly specify in advance; nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem.

It would be nice in theory, but is it really possible? To make it nice, we would in effect have to teach it, sure. But to keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist. I know where you're coming from, Ontoplankton, and I have read the links (very good, I have to add, thank you). However, and forgive me if this well-read book has been brought up here before, I believe it's essential reading: http://www.eharry.com/society.htm . For those who haven't read it, it is a look into how an AI can ultimately start off with all the best intentions and inadvertently go awry.
 
  • #6
Ontoplankton said:
Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.
What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.
I can make the "rational" choice of not eating and after five/seven days I'll be dead. Note however that this is not considered "normal" behaviour. For the majority of us (and all living beings for that matter), survival is pure instinct i.e. programmed, inherent.

The biological beings to whom it wasn't important died without leaving children.
I don't think there were such beings. It would be interesting if they existed, though.

Nomadoflife said:
It would be nice in theory, but is it really possible? To make it nice, we would in effect have to teach it, sure. But to keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.
The concept of good and evil is a notion developed by humans. If there were no humans, then there would be no evil.
 
  • #7
Ethics alone can't resolve the issues of the Three Laws of Robotics. Epistemology is necessary, and neither Asimov nor others have pursued these foundations extensively (except maybe in the story "Reason" in the book "I, Robot"). In short:
"what does a robot think is true, what can a robot know, how good is a robot's knowledge?"

---

In the story "Evidence" in the book "I, Robot", the Three Laws are compared loosely to the ethics of a good citizen (without the rigorous determinism).
 
  • #8
The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.
 
  • #9
Ditto. Hence my first post in this thread.
 
  • #10
Nomadoflife said:
It would be nice in theory, but is it really possible?

I'm not sure. I don't agree with any of the reasons I've seen why it's supposed to be impossible a priori, though.

If you're really interested in the subject, I recommend diving deeper into what the Singularity Institute has to say at http://singinst.org/friendly , especially http://www.singinst.org/CFAI/index.html . There's an enormous amount of insight there.

Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.

We don't necessarily need true, absolute perfection, just a sufficiently close approximation. I don't see why an almost-perfect world couldn't exist (keeping in mind that we don't know exactly what we mean by "perfect").

If there exist humans who are nice enough that we'd entrust the future to them, or at least that we're comfortable having them around in modern society, why shouldn't we also be able to make such an AI? We could even "clean it up" beyond what any human could achieve, if we knew what we were doing.
 
  • #11
e(ho0n3 said:
What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

By the "nastier aspects of evolution", I mean all the red-in-tooth-and-claw stuff. "Survival of the fittest" may be how nature works, but I don't think it's something to strive for just because it's how nature works.

The purpose of cooperating with AI is basically the same as the purpose of cooperating with anyone: to not be harmed, and to not harm others, and to help us achieve (and maybe rethink) our goals.

I might as well ask, what's the purpose of being killed by AI?

I can make the "rational" choice of not eating and after five/seven days I'll be dead. Note however that this is not considered "normal" behaviour. For the majority of us (and all living beings for that matter), survival is pure instinct i.e. programmed, inherent.

Mostly true, but I was talking about AIs there, not humans.
 
  • #12
selfAdjoint said:
The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.

Right; while it looks to me like the Three Laws are often taken seriously in popular discussions, I don't think there are many real AI projects that are proposing to implement them. (I think Asimov took them fairly seriously, though.)

Still, the arguments against Asimov's Laws apply to more approaches than just to the Three Laws. I think any approach based on a few (or many) unchangeable moral rules is dangerous. Only very few people seem to be taking the problems of ethical AI seriously enough to think hard about them.
 
  • #13
It makes for great storytelling. But Asimov meant the laws as logical and idealistic, as shown by how many times in the stories they are not merely bent but outright bypassed by way of constructs.

Oh yeah, don't forget the Zeroth Law. Or the Sub-First Law. Kidding about that last one ;)
 
  • #14
That's the one that says "A robot may not injure humanity, or, through inaction, allow humanity to come to harm", which is even more ambiguous than the others, IMHO.

If any of you believe words like "harm" can be easily and clearly defined, I recommend googling "Ronald Opus" for a highly amusing story. :biggrin:
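As a toy illustration of the point (all names below are hypothetical), a predicate that equates "harm" with physical injury cannot distinguish a life-saving incision from an assault:

```python
def causes_harm(action):
    """Naive definition: harm means physical injury, nothing more."""
    return action["injures_human"]

surgery = {"injures_human": True}  # an incision that saves a life
assault = {"injures_human": True}  # an attack that ends one

# Both come out True; the naive predicate can't tell them apart.
print(causes_harm(surgery), causes_harm(assault))  # True True
```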
 
  • #15
Instead of trying to figure out better logic for the robots, why not just give the robots a critical flaw? Make the robots run on Mac OS 9, so when things get too complicated, they'll just crash. :biggrin:
 

1. What are Asimov's three laws of robotics?

Asimov's three laws of robotics are a set of rules created by science fiction writer Isaac Asimov to govern the behavior of robots. The laws are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings except where such orders would conflict with the first law.
3. A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
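Purely as an illustration (no real system implements the laws this way, and every dictionary key below is a hypothetical predicate), the three laws can be read as a strict priority ordering of vetoes:

```python
def permitted(action):
    """Check an action against the Three Laws in strict priority order.
    Each key is a hypothetical predicate a real robot would somehow
    have to compute -- and that computation is the unsolved part."""
    if action["harms_human"] or action["inaction_allows_harm"]:
        return False  # First Law outranks everything
    if action["disobeys_order"]:
        return False  # Second Law yields only to the First
    if action["endangers_self"]:
        return False  # Third Law yields to both
    return True

# A harmless, obedient, but self-endangering action is vetoed:
print(permitted({"harms_human": False, "inaction_allows_harm": False,
                 "disobeys_order": False, "endangers_self": True}))  # False

# Note: even this toy breaks down immediately. An order that requires
# self-risk is compelled by the Second Law yet vetoed here by the Third;
# the laws govern choices among alternatives, not single actions, which
# is exactly the kind of simplification that creates loopholes.
```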

2. Are the three laws of robotics safe?

This is a difficult question to answer definitively as it ultimately depends on how the laws are interpreted and implemented. On a theoretical level, the three laws were designed to ensure the safety of humans in the presence of advanced artificial intelligence. However, in practice, there are many potential loopholes and scenarios where the laws could fail to protect humans. Asimov himself explored these complexities in his writing, and many experts believe that additional laws or modifications would be necessary for truly safe robotics.

3. Do the three laws apply to all robots?

Asimov's three laws were originally intended for use in his fictional works, and were not intended as a literal set of rules to be applied to real-life robotics. However, many researchers and engineers have used the three laws as a starting point for developing ethical guidelines and safety protocols for robotics. Ultimately, the applicability of the three laws to real robots will depend on the specific context and purpose of the robot in question.

4. Can the three laws be modified or updated?

Yes, the three laws of robotics are not set in stone and can be modified or updated as needed. As technology advances and new ethical considerations arise, it may be necessary to revisit and revise the three laws. Additionally, there have been many proposed modifications and additions to the laws, such as the "Zeroth Law" which places the well-being of humanity above the safety of individual humans.
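In terms of the toy veto sketch above, the Zeroth Law would simply be a higher-priority clause prepended in front of the First (the keys are again hypothetical), and the snippet makes its unsettling consequence explicit:

```python
def permitted_with_zeroth(action):
    """Zeroth Law prepended as the highest-priority veto (toy sketch)."""
    if action["harms_humanity"]:
        return False  # Zeroth Law outranks all
    if action["harms_human"] and not action["protects_humanity"]:
        return False  # First Law, now conditional on the Zeroth
    return True

# Sacrificing one person "for humanity" is now permitted:
print(permitted_with_zeroth({"harms_humanity": False,
                             "harms_human": True,
                             "protects_humanity": True}))  # True
```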

5. Are the three laws of robotics used in real robotics research?

As mentioned earlier, the three laws have been used as a starting point for developing ethical guidelines in robotics research. However, they are not universally adopted or enforced in the field. Instead, there are a variety of different approaches and principles that researchers and engineers use to ensure the safety of their robots. Additionally, as artificial intelligence and robotics continue to advance, it is likely that new and more comprehensive ethical guidelines will be developed.
