.Scott said:
Just to dispose of that "bodies" part, an appropriate interface can be provided for a piezo sensor to allow it to generate brain-compatible signals. And the result would be "really real" pain. Similarly, the signals from human pain sensors can be directed to a silicon device and the result is not "really real".
If you want a computer to produce "really real" pain, I believe you need these features:
1) It needs the basic qualia. Moving bits around in Boolean gates doesn't do this. It is a basic technology problem. From my point of view, it is a variation of Grover's Algorithm.
2) As with humans, it needs a 1st-person model that includes "self-awareness" and an assessment of "well-being" and "control". But this is just a data problem.
3) As with humans, it needs to have a 2nd and 3rd person model - at least a minimum set of built-in social skills.
4) It needs to treat a pain signal as distracting and alarming - with the potential of "taking control" - and thus subverting all other "well-being" objectives.
5) Then it needs to support the escalating pain response: ignore it, seek a remedy, grimace/cry, explicitly request help.
6) For completeness, it would be nice for it to recognize grimaces and calls for help from others.
What you say is all good; I agree that reproducing pain signals is not the unsolvable problem here. In fact, as far as I know, we can already insert electrodes into the brain and, by applying small potentials, cause certain sensations to be felt even though no real stimulus is present.
The problem is how you respond to pain and what you make of it.
I will go into more detail about this further down in my post.
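And just to agree on how mechanical your points 4 and 5 are: they reduce to plain control flow. Here is a minimal sketch, with made-up thresholds and behaviour names (nothing here is anyone's actual design):

```python
# Hypothetical sketch of points 4 and 5: pain as a high-priority
# interrupt with an escalating response. All names and thresholds
# are invented for illustration only.

STAGES = ["seek_remedy", "grimace_and_cry", "request_help"]

def respond(pain_level, current_task,
            ignore_threshold=0.2, escalation_step=0.25):
    """Map a pain signal in [0, 1] to a behaviour.

    Point 4: past the ignore threshold, pain preempts the current
    task, subverting every other "well-being" objective.
    Point 5: the response escalates with intensity.
    """
    if pain_level < ignore_threshold:
        return current_task  # ignore it; carry on with normal objectives
    # Pain "takes control": choose a stage proportional to intensity.
    stage = min(int((pain_level - ignore_threshold) / escalation_step),
                len(STAGES) - 1)
    return STAGES[stage]

print(respond(0.1, "charge_battery"))  # -> "charge_battery" (ignored)
print(respond(0.3, "charge_battery"))  # -> "seek_remedy"
print(respond(0.9, "charge_battery"))  # -> "request_help"
```

Every line of that is deterministic, which is exactly my point: the hard part is not producing the escalation, it is what the entity makes of the pain.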
.Scott said:
we can strongly suspect that this "qualia" device provides certain information services more economically than Boolean logic.
Not just more economically. I'd say Boolean logic can only really handle "logical" information, that is, information you can quantify and assign a definite value to, like the pixels within a picture.
How do you ascribe a value to pain felt by a self-aware entity?
Think about it: the physical signal is easy to reproduce, but how do you reproduce the response so that it is fully compatible with free will and is also conscious?
We do know that humans have widely varying pain thresholds and, most importantly, attitudes toward pain. Some deep believers actually use their pain and suffering as a pathway for spiritual growth. Even if you don't believe in God, you can still observe the physical results: one person gets depressed and decays while suffering pain, while another grows and becomes more mentally capable.
There are religious practices where people abstain from food and even drink, or inflict other pain on themselves, and report that they feel better afterwards.
How do you program this into silicon logic built according to the main thesis of evolutionary biology: avoiding actions detrimental to survival?
Because if you make a robot preprogrammed with the logic of evolutionary biology, you can only create a deterministic machine, since in that logic pain equals damage, and damage is bad for survival.
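To put that concretely: a machine built on nothing but the "avoid damage" rule reduces to something like the following (a purely hypothetical sketch), and by construction it can never pick the self-harming branch:

```python
# Hypothetical sketch of a robot whose only rule is the evolutionary
# one: minimize expected damage to itself.

def choose_action(actions):
    """actions: list of (name, expected_self_damage) pairs.

    Purely deterministic: always pick the least-damaging option.
    """
    return min(actions, key=lambda a: a[1])[0]

# Such a machine can never prefer the self-harming option, no matter
# what is at stake on the other side:
print(choose_action([
    ("avoid_the_danger", 0.1),
    ("endure_harm_for_a_greater_goal", 0.9),
]))  # -> "avoid_the_danger", every single time
```

The output is fixed entirely by the damage estimates; the "greater goal" never enters the calculation.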
And yet humans, the really advanced ones I would argue, learn from the very damage they have inflicted on themselves and sometimes even put themselves in harm's way for a benefit that often only they themselves can understand.
Recall the "Pavlovsk experimental seed station" and the scientists who, during the Nazi siege of Leningrad, stayed there and died of hunger just to protect the seed collection.
That is an extraordinary level of self-harm, inflicted consciously, for nothing more than the belief in a possibly better future in case of success.
How do you calculate the necessity of suicide in a given situation using simple logic?
I think that example, and others like it, are on the level of what is commonly referred to as faith: the ultimate state of self-awareness, and the part of human consciousness that really doesn't seem computational to me. You are making a conscious decision based on unknown variables; one of those variables, for example, is the idea that other people can be capable of good, and therefore that dying for the sake of humanity's future is worthwhile.
Mind you, the idea that humans are capable of good is not at all self-evident, given all the wars and atrocities we have committed throughout history, and in the middle of the largest war ever in history at that. I'm sure it wasn't self-evident to those scientists back then, who consciously starved themselves to death instead of eating the seeds they had.
So they went against every evolutionary instinct for self-survival, and all of that for an unknown outcome; I'd say they had tremendous faith. How do you preprogram that into an AI in such a way that it isn't deterministic?
The way I see it, you will either produce a robot that is suicidal even when it doesn't have to be, or one that isn't even when it should be, because I don't see how one can calculate the necessity of suicide on logical grounds alone.
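In programming terms (again, a purely hypothetical sketch, not anyone's actual design), the moment you encode the sacrifice decision as a rule, you have just picked a number:

```python
# Hypothetical sketch: any "logical" rule for self-sacrifice collapses
# into a fixed threshold chosen in advance by the programmer.

def should_sacrifice(value_of_cause, value_of_own_life, margin=1.0):
    """Sacrifice iff the cause outweighs one's own life by a set margin.

    The catch: value_of_cause for something like a seed collection rests
    on an unknowable, e.g. whether future people will be capable of good.
    Whatever number gets plugged in is the programmer's faith, not the
    machine's decision.
    """
    return value_of_cause > margin * value_of_own_life
```

Set the margin low and you get the robot that is suicidal when it needn't be; set it high and you get the one that never is when it should be. Either way, the answer was fixed the moment the margin was chosen, which is exactly what I mean by deterministic.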