Hurkyl said:
An experiment to determine how well a specific television and RGB combination reproduces a specific color would be to grab 100 people, show them both the original color and the color on the television, and ask them if they are the same.
Voila -- you have an objective measurement that X of your 100 test subjects reported having the same color viewing experience.
Depending on your view of mental causation, SW VandeCarr has provided one reasonable perspective.
SW VandeCarr said:
This doesn't tell you that they see the same thing you do. A person will learn to associate a sensation caused by light from the lower end of the visible light spectrum with a name and behave in a consistent way regarding that particular sensation.
DaveC426913 said:
That depends on the actual mechanism of the wiring. Are you tapping in before the processing, or after? If you are simply wired to the sensory inputs (i.e. before processing) then of course you would see 'red'. But if you are wired in after the processing ... the question is raised: what does it mean to be wired in "after processing"?
Hurkyl said:
The experiment is an objective measurement of something. And "qualia are what people measure" is, IMO, a rather reasonable working definition.
What Hurkyl and DaveC are trying to suggest has, of course, been considered before. I’m going to point out what I believe both of you hold but have left unsaid, because those unstated assumptions are what conflict with your statements. I believe both arguments fall into the knowledge paradox I mentioned earlier.
What was believed and left unsaid:
- The causal closure of the physical domain: I’m assuming you both accept the causal closure of the physical and reject any kind of nonphysical cause.
- Computationalism: I’m assuming you both accept computationalism.
If that’s correct, the knowledge paradox applies, and any claim (or behavior suggesting) that people are somehow “measuring” their own qualia is incorrect. The problem is further exacerbated by computationalism, which to my mind makes it impossible for anyone to claim they are measuring their own qualia or reporting them in any way.
The short explanation of the knowledge paradox is that there is always a physical cause for any physical behavior. This is essentially the “behaviorism” MIH (correct me if I’m wrong) is referring to:
Math Is Hard said:
I think that's quite in line with what the behaviorists (http://en.wikipedia.org/wiki/Behaviorism) were getting at in their heyday in the 1950s and 1960s. What could be observed and measured was important, but unobservable mental events and representations were trivial, and for all scientific purposes, meaningless.
A behavior is physically observable, so we can treat it scientifically; we don’t even need to talk about mental states when referring to behavior. Physical states are assumed to influence other physical states, so mental states are basically ‘along for the ride’: they are epiphenomenal on the physical states. So if someone suggests that a mental state is a cause of a behavior, all one has to do to deny this is point to the causal closure of the physical domain. Jaegwon Kim, among others, has made his living making this point.
What makes this argument much more powerful is computationalism, which assumes that classical mechanical interactions between neurons are the causal actions giving rise to the emergent phenomena of consciousness, including qualia. And here is where any argument that a person is ‘measuring’ their own qualia in some way becomes untenable. I’ll try to explain.
We often talk of the brain as being a computer of sorts, so I’m going to assume strong AI (i.e., that a suitable classical computer can experience the same things as a person) is true, for the moment, only to help explain the problem. The example can then be extended to neurons.
A computer can be fully described physically through 1) its architecture, 2) its physical state at a given time, and 3) its input and output over a time period [dt]. Knowing how the computer’s billions of microscopic transistors are wired fully describes its architecture. With this in hand, we know the basic layout of the machine, but not what physical state it is in at any given time. If we know the state (on or off) of each transistor at a given time, we know the physical state of the machine at that time. The third thing we need is the machine’s physical input and output, that is, its boundary conditions over time. With these three things, we can describe in physical terms (by describing physical properties) everything there is to know about the machine’s function. We can know how it will “behave” at any time by knowing these three things.
We might extend this physical description down to a molecular description of the switches, but that isn’t necessary to describe what the machine is doing. The architecture, physical state, and boundary conditions give us enough information to determine the time evolution of the machine over any given time interval.
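To make this concrete, here is a minimal sketch of the point (a hypothetical toy machine written in Python, entirely my own illustration, not any real architecture): once the transition table (the architecture), the initial state, and the input stream (the boundary conditions) are fixed, the whole behavioral trace follows with nothing left over.

```python
# Toy model: "architecture" = a fixed transition table, "physical state" =
# the current machine state, "boundary conditions" = the input stream.
# Fix all three and the machine's entire time evolution is determined.

def run_machine(transition_table, initial_state, inputs):
    """Return the full trace of (state, output) pairs over the input stream."""
    state = initial_state
    trace = []
    for symbol in inputs:
        # Architecture + current state + input fully determine the next step.
        state, output = transition_table[(state, symbol)]
        trace.append((state, output))
    return trace

# Toy "architecture": two states, two input symbols.
TABLE = {
    ("A", 0): ("A", "quiet"),
    ("A", 1): ("B", "scream"),  # behaves "as if in pain" on input 1
    ("B", 0): ("A", "quiet"),
    ("B", 1): ("B", "scream"),
}

# Identical description in, identical behavior out -- every run.
assert run_machine(TABLE, "A", [0, 1, 1, 0]) == run_machine(TABLE, "A", [0, 1, 1, 0])
```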
Next, we introduce qualia to the description of the machine. How should we do this? We can know everything about what a computer does over some time interval. We might also hold that whatever the machine indicates in the way of behavior or verbal report is also a description of the qualia the machine experiences. In other words, if a computer flinches as if in pain and screams as if in pain, that behavior is equal to, and an indication of, the experience of pain; the behavior and the experience are one and the same. The experience of pain may be epiphenomenal, but we might assume that the experience of pain is THE SAME AS the physical behavior. This is the most common conclusion, and it is why consciousness and mental states are often thought to be epiphenomenal. It holds that there is a one-to-one correlation between the behavior, or time evolution of the physical states, and the experience of the qualia. The problem with this logic, however, is the knowledge paradox.
The knowledge paradox points out that it doesn’t matter what experience the machine is thought to be having when it exhibits a behavior or gives a verbal description of some phenomenal experience: that behavior and that verbal description are utterly and completely controlled by the architecture, physical states, and boundary conditions of the machine over that time interval. The phenomenal experience cannot influence the architecture, the physical state, or the boundary conditions of the computer. These phenomena we know as qualia can have no influence over any physical aspect of the machine. So:
- we not only don’t know what the machine is experiencing;
- we can’t know whether the machine is experiencing anything at all!

All we can do is note that it behaves in a way we might describe as being in pain, but we can’t know whether there is any experience going on inside, nor what it might be. The machine’s behavior is fully understood by understanding the architecture, physical state, and I/O. We could not, for example, know whether the machine was experiencing the color red, or the smell of coffee, or an orgasm, when it behaved as if it were in pain.
So there is a logical split between what physics tells us about the time evolution of a computer and what we can know about the experience the computer is having. Qualia are clearly not describable by describing the architecture, physical states, and boundary conditions of a computer. And for a computer, and by extension any computational system, the properties of qualia are not capable of influencing the physical evolution of that system in any way. Qualia cannot be measured by the computer because no such measurement is taking place; nothing in the computer responds to any specific type of phenomenon except the change in the electrical state of its transistors.
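To see why no “measurement” of qualia can show up in the machine’s behavior, here is the same kind of toy sketch (the `qualia` parameter is purely my illustrative label, not anything the machine’s dynamics know about): whatever experience we ascribe to the machine, the trace is identical.

```python
# A hedged illustration of the knowledge paradox (again a hypothetical toy
# machine of my own construction): we bolt a "qualia" label onto the machine
# and confirm it plays no role in the physical trace, because the dynamics
# never consult it.

TABLE = {
    ("A", 0): ("A", "quiet"),
    ("A", 1): ("B", "scream"),  # behaves "as if in pain" on input 1
    ("B", 0): ("A", "quiet"),
    ("B", 1): ("B", "scream"),
}

def run_labeled_machine(transition_table, initial_state, inputs, qualia):
    """Same dynamics as before; 'qualia' is carried along but never read."""
    state = initial_state
    trace = []
    for symbol in inputs:
        # 'qualia' appears nowhere in the dynamics below.
        state, output = transition_table[(state, symbol)]
        trace.append((state, output))
    return trace

stream = [0, 1, 1, 0]
pain   = run_labeled_machine(TABLE, "A", stream, qualia="pain")
red    = run_labeled_machine(TABLE, "A", stream, qualia="seeing red")
coffee = run_labeled_machine(TABLE, "A", stream, qualia="smelling coffee")

# The trace is identical whatever experience we ascribe to the machine,
# so no observation of its behavior can tell the ascriptions apart.
assert pain == red == coffee
```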
When we come to a logical dead end and find there is no way out, the problem most likely lies with one of our unwritten assumptions.
Note that I haven't gotten into emergence or downward causation, and I don't think that's necessary here. Weak emergence as defined by Bedau, for example, is all we need to understand what kind of emergence applies to a computational system, and I've maintained that version of emergence in the explanation above.