Are Asimov's three laws of robotics safe?

  • #1
A common suggestion on the topic of AI ethics is to implement the Three Laws of Robotics that Isaac Asimov explored in his science fiction stories (e.g. in "I, Robot", which has just come out as a movie).

But are these laws really safe? As Asimov's stories themselves make clear, they can be misinterpreted in a lot of ways.

On the 3 Laws Unsafe website, recently launched by the Singularity Institute, you'll find several articles further explaining possible flaws in the three laws.

I think that as AI progresses, it will become extremely important to deal with safety issues in less simplistic ways. Any opinions/etc on this?
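To make the question concrete, here is a minimal sketch, in Python, of what a literal, priority-ordered implementation of the three laws might look like. Everything in it (the predicates, the world model, the action names) is hypothetical, invented for illustration; the point is that all of the real difficulty hides inside the predicates:

# A naive encoding of the three laws as a lexicographic preference
# ordering over candidate actions. The three predicates are placeholders;
# every hard question about "harm" and "orders" is buried inside them.

def harms_human(action, world):
    # First Law: does this action injure a human being?
    # (Physical harm only? Psychological? Harm by inaction?)
    return world.get(("harm", action), False)

def violates_order(action, orders):
    # Second Law: does this action conflict with a human's order?
    return action not in orders

def endangers_self(action, world):
    # Third Law: does this action threaten the robot's own existence?
    return world.get(("danger", action), False)

def choose_action(actions, world, orders):
    # Lexicographic minimization: the robot accepts any Third Law
    # violation before a Second Law one, and any Second Law violation
    # before a First. Note what is missing: the "through inaction"
    # clause would require comparing counterfactual outcomes, not
    # just filtering the actions on offer.
    def violations(a):
        return (harms_human(a, world),
                violates_order(a, orders),
                endangers_self(a, world))
    return min(actions, key=violations)

# Toy usage: the robot disobeys an order (Second Law violation) rather
# than take an action its world model labels harmful (First Law).
world = {("harm", "push_button"): True}
print(choose_action(["push_button", "stand_by"], world, orders=["push_button"]))
# -> stand_by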
 

Answers and Replies

  • #2
In allowing AI to progress, it is essential to also keep that progression under control indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.
 
  • #3
First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival. I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".
 
  • #4
In allowing AI to progress, it is essential to also keep that progression under control indefinitely. If you allow something to get "smarter" than you, then you are no longer in control.
I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of itself: not just nice in a way we can exactly specify in advance, but nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem (see http://singinst.org/friendly/).

e(ho0n3 said:
First of all, Asimov's laws are based on pure speculation and imagination. Until humans manage to artificially produce a non-biological intelligent being, everything we can say about this will remain pure speculation.
Maybe; however, I think we can't afford not to think extensively about this subject in advance.

If humans are "dumb" enough to create something that will destroy them, so be it. Life is a fight for survival so whoever wins, wins.
Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.

Of course, this is all assuming that our intelligent robots have somehow managed to gain instincts, particularly those of survival.
Not necessarily; surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.

I think this all leads to the better question of "Why is survival/reproduction so important to biological beings?".
The biological beings to whom it wasn't important died without leaving children.
 
  • #5
Please bear with me, all, as this is my first post here.

I'd agree that trying to control something smarter than you is futile. This is why we should design AI to want to be nice, in and of itself: not just nice in a way we can exactly specify in advance, but nice in such a way that it could determine the spirit, rather than the letter, of things like the three laws. This turns out to be a highly non-trivial problem.
It would be nice in theory, but is it really possible? We could, in effect, teach it to be nice, sure. But keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist. I know where you're coming from, Ontoplankton, and I have read the links (very good, I have to add, thank you). However, and forgive me if this well-read book has been brought up here before, I believe it's essential reading: http://www.eharry.com/society.htm. For those who haven't read it, it is a look into how an AI can start off with all the best intentions and inadvertently go awry.
 
  • #6
Ontoplankton said:
Life may be a fight for survival, but should we be content with this? I think we should be able to outgrow the nastier aspects of evolution, and cooperate with AI rather than fight it.
What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?

surviving is helpful in achieving just about any goal. Survival can be a conscious, rational choice, rather than an instinct.
I can make the "rational" choice of not eating, and after five to seven days I'll be dead. Note, however, that this is not considered "normal" behaviour. For the majority of us (and all living beings, for that matter), survival is pure instinct, i.e. programmed, inherent.

The biological beings to whom it wasn't important died without leaving children.
I don't think there were such beings. It would be interesting if they existed, though.

Nomadoflife said:
It would be nice in theory, but is it really possible? We could, in effect, teach it to be nice, sure. But keep it nice? Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.
Good and evil are notions developed by humans. If there were no humans, then there would be no evil.
 
  • #7
Ethics alone can't resolve the issues of the three laws of robotics. Epistemology is necessary as well, and neither Asimov nor others have pursued these foundations extensively (except maybe in the story "Reason" in "I, Robot"). In short:
"what does a robot think is true, what can a robot know, how good is a robot's knowledge?"

---

In the story "Evidence" in "I, Robot", the three laws are compared loosely to the ethics of a good citizen (without the rigorous determinism).
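
To illustrate that epistemological point with a hypothetical sketch (all names invented): any encoding of the First Law acts on the robot's beliefs about the world, not on the world itself, so its behaviour can be no better than its knowledge:

from dataclasses import dataclass

@dataclass
class Belief:
    proposition: str
    confidence: float  # the robot's subjective probability, 0.0 to 1.0

def believes_harmful(action, beliefs, threshold=0.5):
    # The robot can only consult its own model of the world. The
    # threshold itself is an unexamined ethical decision: how sure
    # must a robot be before it refuses to act?
    b = beliefs.get(("harms_human", action))
    return b is not None and b.confidence >= threshold

# With a mistaken or underconfident world model, a perfectly obeyed
# First Law happily permits a harmful act.
beliefs = {
    ("harms_human", "administer_drug"): Belief("the drug harms the patient", 0.3),
}
print(believes_harmful("administer_drug", beliefs))  # -> False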
 
  • #8
selfAdjoint
The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.
 
  • #9
Ditto. Hence my first post in this thread.
 
  • #10
Nomadoflife said:
It would be nice in theory, however is it really possible?
I'm not sure. I don't agree with any of the reasons I've seen why it's supposed to be impossible a priori, though.

If you're really interested in the subject, I recommend diving deeper into what the SingInst has to say on http://singinst.org/friendly, especially http://www.singinst.org/CFAI/index.html. There's an enormous amount of insight there.

Don't get me wrong here, for I think it would be wonderful; however, I also believe it to be theoretically impossible. It would be like saying that a perfect world without evil can also exist.
We don't necessarily need true, absolute perfection, just a sufficiently close approximation. I don't see why an almost-perfect world couldn't exist (keeping in mind that we don't know exactly what we mean by "perfect").

If there exist humans who are nice enough that we'd entrust the future to them, or at least that we're comfortable having them around in modern society, why shouldn't we also be able to make such an AI? We could even "clean it up" beyond what any human could achieve, if we knew what we were doing.
 
  • #11
e(ho0n3 said:
What exactly do you mean by "nastier aspects of evolution"? What would be the purpose of cooperating with AI?
By the "nastier aspects of evolution", I mean all the red-in-tooth-and-claw stuff. "Survival of the fittest" may be how nature works, but I don't think it's something to strive for just because it's how nature works.

The purpose of cooperating with AI is basically the same as the purpose of cooperating with anyone: to not be harmed, to not harm others, and to help us achieve (and maybe rethink) our goals.

I might as well ask, what's the purpose of being killed by AI?

I can make the "rational" choice of not eating, and after five to seven days I'll be dead. Note, however, that this is not considered "normal" behaviour. For the majority of us (and all living beings, for that matter), survival is pure instinct, i.e. programmed, inherent.
Mostly true, but I was talking about AIs there, not humans.
 
  • #12
selfAdjoint said:
The three laws are just fictional elements to set up the problem stories. As such they had to be immediately plausible, but no more than that. I think it's an error to take them too seriously.
Right; while it looks to me like the Three Laws are often taken seriously in popular discussions, I don't think there are many real AI projects that are proposing to implement them. (I think Asimov took them fairly seriously, though.)

Still, the arguments against Asimov's Laws apply to more approaches than just the Three Laws themselves. I think any approach based on a few (or many) unchangeable moral rules is dangerous. Very few people seem to take the problems of ethical AI seriously enough to think hard about them.
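
As a toy illustration of that brittleness (hypothetical rules, minimal sketch): a frozen rule set has no mechanism for adjudicating an unforeseen dilemma in which every available action violates some rule, so it simply deadlocks:

# A frozen rule set with no amendment or arbitration mechanism. In a
# dilemma its authors didn't anticipate, every option is forbidden and
# the rules give no guidance on which violation is least bad.

RULES = (  # a tuple: immutable by construction, like hard-coded morals
    lambda action: action != "swerve_left",   # "never endanger the passenger"
    lambda action: action != "swerve_right",  # "never endanger bystanders"
    lambda action: action != "brake_only",    # "never fail to avoid a collision"
)

def allowed(action):
    return all(rule(action) for rule in RULES)

options = ["swerve_left", "swerve_right", "brake_only"]
print([a for a in options if allowed(a)])  # -> [] : the rules deadlock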
 
  • #13
It makes for great storytelling. But Asimov meant the laws as logical and idealistic, as shown by how many times they are not so much bent as bypassed by way of constructs in the stories.

Oh yeah, don't forget the Zeroth Law. Or the Sub-First Law. Kidding about that last one ;)
 
  • #14
That's the one that says "A robot may not injure humanity, or, through inaction, allow humanity to come to harm", which is even more ambiguous than the others, IMHO.

If any of you believe words like "harm" can be easily and clearly defined, I recommend googling "Ronald Opus" for a highly amusing story. :biggrin:
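
As a hypothetical toy example of how slippery "harm" is once you try to formalize it: two equally plausible ways of aggregating individual harm into "harm to humanity" reach opposite verdicts on the same action:

# An action that slightly inconveniences a million people while greatly
# benefiting three hundred. Did it "harm humanity"? The answer flips
# depending on how you aggregate -- a choice the Zeroth Law never makes.

harm_per_person = [0.01] * 1_000_000 + [-50.0] * 300  # negative = benefit

print(sum(harm_per_person) > 0)  # -> False : on net, humanity benefits
print(max(harm_per_person) > 0)  # -> True  : yet some individuals are harmed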
 
  • #15
ShawnD
Instead of trying to figure out better logic for the robots, why not just give the robots a critical flaw? Make them run on Mac OS 9, so when things get too complicated, they'll just crash :biggrin:.
 
