
Is Destroying An Advanced Robot Murder?

  1. May 17, 2007 #1


    Gold Member

    The premise of the last book in a trilogy by Asimov deals with the ethical and moral question: "Is it murder when a highly advanced robot is completely destroyed by a human or another robot?"

    from: http://www.answers.com/robots%20of%20dawn [Broken]

    The "Robots of Dawn" is the last in a trilogy by Isaac Asimov that started off with the "Caves of Steel" and "The Naked Sun".

    What do you think? Imagine that you had developed a relationship with a highly evolved, human-like robot that really was as spontaneous, entertaining and intellectually stimulating as any human you'd ever met.

    Now imagine that someone destroyed that robot. Would you consider it as heinous a crime as murder? Would the courts agree? The medical community... etc.?
    Last edited by a moderator: May 2, 2017
  3. May 17, 2007 #2
    If the robot was not conscious, then I wouldn't consider it murder.
    If it were, I would.
  4. May 17, 2007 #3
    How do you measure level of consciousness? And what is the threshold to start caring?
  5. May 17, 2007 #4
    I can't, and the threshold would be any conscious being.
  6. May 17, 2007 #5
    To check whether the robot is conscious, there is a simple test. Tell him to look at something like a chair and describe what he thinks it is.

    If he says it's a chair, then he is conscious.

    If he says that it is a visual representation of a chair generated by one of his subprograms responding to sensory input, he is not conscious.

    As long as he sees just a chair, he has the same feeling of personal identity (the illusion of separation between ourselves and other things) as we do.
  7. May 17, 2007 #6
    I don't agree that experiencing the illusion of subject/object is a sign of consciousness. According to your test, you cannot test the consciousness of animals or plants. (Well, here we go with the eternal problem of consciousness.) I propose another solution. If one is attached to the robot, killing or destroying the robot causes the person harm and should be taken to court. Just like killing an animal or any other "thing" we may grow attached to. It's not the best solution, but it's fairly straightforward.

    For PIT2: you produced a contradictory statement. To continue: is the solar system conscious? Why yes/no, and how do you measure it? Do you care if I kill it?
  8. May 17, 2007 #7
    I was talking about the principle, not the practical situation. If the robot is conscious, and if I knew that, then I would consider it murder.

    But since I can't determine whether the robot is conscious, and the robot in every way appears to be human, I would also consider it murder, because I believe human behaviour can only exist with consciousness present.

    Also, my mirror neurons and subconscious instincts would probably work regardless of what my rational mind decides to believe, and I would feel empathy for the robot and consider it murder.
  9. May 17, 2007 #8
    Ok PIT2, that makes more sense.

    I found a potential contradiction in my earlier statement. What if someone can show that he/she feels attached to all living things, including the animals we eat? Should we be brought to court? (I guess yes, but what should the ruling be?)
  10. May 17, 2007 #9


    Science Advisor
    Gold Member

    I have two answers which I think are the only workable ones:

    In terms of individual morality it is a question of whether the individual thinks it is murder. "I am killing someone" vs. "I am destroying something."

    I am supposing that one distinguishes murder from say involuntary "man"slaughter, i.e. to kill in ignorance is not murder.

    The second answer is that apart from each individual's morality, all we have left is social convention. It's murder if the law or the social conventions say it is murder.

    I see in most such discussions the implicit assumption that to kill a sentient entity is murder. This is the social convention and personal ethic most tend to adopt. Which of course must, and here did, bring us to the question "Is the hypothesized robot a sentient entity?"

    I do recall some SF author's point that it is a sentient entity's responsibility to demonstrate its sentience. I wouldn't quite go that far. I would, however, propose a modification to Turing's test:

    Construct an artificial language rich enough to express abstract concepts such as "don't kill me" or, even better, "You shouldn't kill me", but simple enough that the entity can at least learn to mimic its syntax and grammar. Then attempt to teach the entity this language.

    I'm using an artificial language so that one can circumvent the lack of inherent skills such as speech or the ability to consciously control skin color, and also so that the language itself is created after the entity, so as to avoid the various "card file" loopholes of the usual Turing test.

    One then applies Turing's test using this artificial language. If the entity learns the language well enough to communicate through a blind channel to some questioner, and if it can argue the case "You shouldn't kill me!", it is then sentient. (That is, even if the argument isn't a very good one... the ability to argue is the ability to comprehend the issues involved.)

    By this criterion certain human beings are in fact not sentient. By this criterion some of the smarter non-human animals on this planet are very close to being sentient.

    James Baugh
  11. May 17, 2007 #10
    If you believe in the materialistic-scientific worldview, human beings already are "advanced robots". If we see ourselves as machines, why shouldn't we extend our rights to our fellow mechanisms?

    If on the other hand you don't believe in materialism, then the question is moot as we will never be able to build a thing that seems to have consciousness.
    Last edited by a moderator: May 17, 2007
  12. May 17, 2007 #11


    Gold Member

    Does this test apply to infants? Probably not.
  13. May 17, 2007 #12



    Staff: Mentor

    Don't you have it backwards? Saying "It's a chair" only requires pattern recognition - matching a visual pattern to a catalog/description. Computers have been doing that for decades.

    If, on the other hand, he understands how his view of the chair is generated, that's self-awareness: sentience.

    Of course, no human would describe a chair as an ocular image received by the sensory receptors of our imaging device, then transmitted to a processing center for pattern recognition either...
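    The point above — that labeling a chair requires only pattern recognition against a catalog — can be made concrete with a toy sketch. The catalog, the feature vectors, and the `classify` function are all hypothetical illustrations (nothing from the thread): classification is a bare nearest-neighbor lookup, with no model of how the percept was generated.

    ```python
    # Toy illustration: "saying 'it's a chair'" as pure pattern matching.
    # The feature vectors are made-up (legs, seats, backrests); the point
    # is that a lookup against a fixed catalog involves no self-awareness.

    CATALOG = {
        (4, 1, 1): "chair",
        (4, 1, 0): "stool",
        (4, 0, 0): "table",
    }

    def classify(features):
        """Return the label of the catalog entry closest to `features`."""
        def sqdist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        best = min(CATALOG, key=lambda known: sqdist(known, features))
        return CATALOG[best]

    print(classify((4, 1, 1)))  # -> chair
    print(classify((3, 1, 0)))  # -> stool
    ```

    Nothing in this lookup "understands how its view of the chair is generated"; by the criterion in the post above, it recognizes but is not sentient.
    
    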
  14. May 17, 2007 #13


    Gold Member

    I suppose I'd better define "highly advanced robot". In the books Asimov describes his robots as having "positronic" neuronetworks. I can only imagine that these neuronets are the closest approximation to an organic neuronetwork one can construct without actually growing neurons to match a machine's physiology.

    The approximate neuronetwork in Asimov's robots, or "positronic" network, acts like an organic entity in that it pulses when it is stimulated, simulating the electrochemical pulse of the sodium/potassium pump mechanism found in the axons, dendrites and synapses of a nerve cell. Bear in mind that Asimov wrote the first two books in the 50s and the last one, "The Robots of Dawn", in the 80s. So his initial understanding of neurophysiology was limited by the scientific understanding of that field in the 1950s.

    I'd say that "highly advanced" robots would mean the type of mechanism that, although constructed by humans, has taken an evolutionary path of its own and developed, according to robot laws and the laws of nature, a sense of self-awareness that could well be described as parallel to a human's. This is in light of the fact that the robots have become part of human society over a period of about 1500 years (into our future).

    (edit) One complication Asimov didn't go into much detail about was emotion in these robots. I'm guessing that leaving the hormonal system out of the mix in a robot's physiology would be one way to ensure the Three Laws of Robotics hold. In other words, no "crimes of passion" or "emotional outbursts" could occur without the hormonal system, or an approximation thereof, showing up in the robot's anatomy. Thank you!
    Last edited: May 17, 2007
  15. May 17, 2007 #14


    Science Advisor
    Gold Member

    I erred in stating "By this criterion some humans are not sentient".

    The test (like the original Turing test) is sufficient but not necessary to define sentience, i.e. it will prove sentience, but failure doesn't disprove it. Thus indeed infants could not pass the test, at least not quickly, and may yet be sentient.

    You may take advantage of my lack of a specific time frame, in which case the infant can, over say 5 years, learn the language sufficiently to communicate and demonstrate its awareness and ability to think abstractly.

    Another issue is when sentience (by any of various definitions) emerges in a human. Certainly it is not likely present at conception. Certainly it is, by most criteria, present in a five-year-old person (absent severe mental disability). Can an objective criterion be defined? If so, when along the path from zygote to fetus to infant to adult does it emerge?

    Et cetera
    and regards,
  16. May 17, 2007 #15


    Staff Emeritus
    Science Advisor
    Gold Member

    Any conscious being? You mean like a monkey? A dog? A snail? A plant?
  17. May 17, 2007 #16


    Staff Emeritus
    Science Advisor
    Gold Member

    So.... when the biologist, or the physicist, or the neurologist, or the philosopher admits that he has merely acquired mental imagery, and his brain has processed that into an abstraction we call a chair, then 'e ceases to be conscious?

    And, by your criterion, a robot that was simply programmed to state what kind of furniture it perceives (in particular, without any self-awareness) would be conscious?
    Last edited: May 17, 2007
  18. May 18, 2007 #17
    I think discussing this is irrelevant. We don't yet know how we could go about programming self-awareness, or even the simple ability to make a decision. The definition of artificial life will be decided, if ever, after we have already created it.
  19. May 18, 2007 #18
    If they are conscious, then yes, I would call it a crime to do them harm.
    Last edited: May 18, 2007
  20. May 18, 2007 #19
    There is a good chance that human morals have been selected over time through evolutionary processes. We treat people who kill other humans differently from people who kill ants.

    Consciousness is not at all clear. Is consciousness the ability to be aware of one's surroundings? A lot of organisms on Earth are aware of their surroundings. What degree of ability classifies as conscious, and what degree does not?

    Humans, as a species, have benefited from not killing fellow humans on a whim. Therefore, such behavior has probably been selected for over time. We don't really care if someone kills an ant. We do care if someone kills a human.

    If advanced robots come to have the same delicate relationship with humans as humans have with each other, the destruction of such an advanced robot may come to be considered murder over time. However, by that time, will there even be a difference between humans and advanced robots?
  21. May 18, 2007 #20
    Should we be able to destroy things we have created?
  22. May 18, 2007 #21


    Gold Member

    When does self awareness emerge during the course of human development? When does it emerge in a machine? Good questions.

    I think we can start by categorizing whether the robot is the property of a human or a product of nature. In doing so we begin to define which laws apply in the case of destroying an advanced robot. Is it property damage only? Or is it murder?
  23. May 18, 2007 #22
    I very much like the opening of the American Declaration of Independence, the "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights..." bit. The question of equating a robot's destruction with murder is, I believe, only an instance of the wider question "Who is equal to Man?", that is, to which entities can we apply the same "self-evident truths"? For example, only that to which we concede the "unalienable Right of Life" can be murdered in the criminal-law sense.

    So, what does it take for us to consider something, say a robot, equal to Man? I'm amusing myself with a thought about that.

    Frequently I come to think about lion hunts in the savannah. There is something I find very gross in these documentaries, something that chills me each time. And I certainly don't hold anything against the lions; they are acting the same as Man does (I've had my steak for dinner today too). It's the lions' prey that disturbs me. There is this big herd of stocky-looking animals, outnumbering the lions ten to one, most of them heavier and taller than a lion. A herd capable of killing anything smaller that stands in its way in a stampede. And what happens? The lionesses pick out a fragile-looking member of the herd (frequently a non-adult) and go after it: perfectly human. The herd, instead of responding with a concentrated stampede, just goes each gnu for itself: already on the verge of human behavior. And the final horror: after the lions get their prey and start shredding it apart on the spot, the herd just stops and continues about whatever they were doing before. Now, that is outright inhuman.

    The savannah herd lacks empathy. Its members are not capable of putting themselves in the position of their killed fellow member and acting upon that sensation. In effect, they do not grant themselves the unalienable rights that the Declaration speaks about. Hence, they are not equal to Man. There is my current working answer: that which itself grants to its kin unalienable rights compatible with Man's is equal to Man.

    Back to the intelligent robots; here's my practical recipe to decide whether they can be murdered. Start destroying them one by one (say those damaged, obsolete, etc.) in "public", among many fellow working robots. And wait for the robo-militia to come at you, waving their Declaration of Independence.

    (But don't start a war then :)

    Chusslove Illich (Часлав Илић)
    Last edited: May 18, 2007
  24. May 19, 2007 #23


    Gold Member

    It is still humans who are writing the laws, even 1500 years from now. Robots may have integrated well into society, but they still have not been granted status equal to humans.

    Look at chimps and apes today. They are sentient. They can read, communicate, and they are empathic, to name a few sentient traits. Yet they are not treated as equals. When they're caged, tortured and murdered in the name of research, humans applaud the quest for answers to their medical problems. There are no laws against this other than the animal-rights activists' wish list of compassionate laws.

    Are the robots property?

    Are they individuals with the same rights as humans? (based on their ability to reason, study, communicate, respond logically etc...)
  25. May 19, 2007 #24
    Murder is a word that applies only to a specific interaction between two humans that results in death. Humans do not murder a chimpanzee (a very advanced primate) if they throw a stone at it and kill it without cause; in the same way, humans cannot murder robots. Murder is an unjustified action of one human against another that results in death.
  26. May 19, 2007 #25


    Gold Member


    Thanks for pointing that out, Rade. Where were you earlier!?

    How about: is it "disambiguation" to destroy a robot? :rolleyes: