Chalmers on functionalism (organizational invariance)

  • Thread starter hypnagogue
In summary, Chalmers' thought experiment supports the idea that the subjective experiences of a physical system are determined by its organizational properties rather than its physical makeup. The experiment involves gradually replacing the neurons of one of two functionally isomorphic systems with silicon chips that perform the same local functions, which results in no change in the system's state of consciousness. This idea is controversial, but Chalmers' well-reasoned analysis makes it a more convincing argument than the traditional functionalist thought experiment.
  • #1
hypnagogue
The following is a thought experiment devised by David Chalmers which suggests that the physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness. Rather, Chalmers argues that the only relevant properties of a system in determining its state of consciousness are organizational (functional / information processing) ones. More simply put, the idea is that the subjective experiences of a physical system don't depend on the stuff the system is made of, but rather what the system does. On this hypothesis, any two systems that process information in the same way should experience the same state of consciousness, regardless of their physical makeup.

The argument that follows has the flavor of the traditional functionalist thought experiment that goes something like the following: "Does a system S made of silicon have the same conscious experiences as a system N made of biological neurons, provided that S and N process information in identical ways? It does. Imagine that we replace a single neuron in N with a silicon chip that performs the same local information processing. There is no attendant difference in N's quality of consciousness. (Why should there be? Intuitively, there should be no difference.) Now, if we continue replacing neurons in N with silicon chips that perform the same local functions one by one, there will be no change in N's state of consciousness at each step, and eventually we will arrive at a system identical to S whose conscious experiences are still identical to N. Therefore, N and S have identical conscious experiences."

I have always regarded (and still do regard) this traditional thought experiment for functionalism as terribly inadequate. It has the flavor of an inductive proof, but it begs the question on the base case (the claim that replacing a single neuron makes no attendant difference in N's quality of consciousness): how can we just state outright that replacing a neuron in N with a silicon chip will not change N's state of consciousness? That is the very issue up for debate, so we cannot assume it is true in order to prove it. Even if our intuition suggests that the base case is true, our intuition could be misguided.

Chalmers' argument uses the same basic thought experiment but employs a much more sophisticated and convincing analysis of the consequences of replacing a neuron in N with a silicon chip that performs the same local function. Rather than beg the question at the crucial point, Chalmers gives a well-reasoned argument for why the replacement of a neuron in N by a silicon chip should not make a difference in N's state of consciousness.

Chalmers' thought experiment, as reproduced below, is excerpted from his paper http://www.u.arizona.edu/~chalmers/papers/facing.html.

-----------------------------------------

2. The principle of organizational invariance. This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise. According to this principle, what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components. This principle is controversial, of course. Some (e.g. Searle 1980) have thought that consciousness is tied to a specific biology, so that a silicon isomorph of a human need not be conscious. I believe that the principle can be given significant support by the analysis of thought-experiments, however.

Very briefly: suppose (for the purposes of a reductio ad absurdum) that the principle is false, and that there could be two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences. For the purposes of illustration, let us say that one system is made of neurons and the other of silicon, and that one experiences red where the other experiences blue. The two systems have the same organization, so we can imagine gradually transforming one into the other, perhaps replacing neurons one at a time by silicon chips with the same local function. We thus gain a spectrum of intermediate cases, each with the same organization, but with slightly different physical makeup and slightly different experiences. Along this spectrum, there must be two systems A and B between which we replace less than one tenth of the system, but whose experiences differ. These two systems are physically identical, except that a small neural circuit in A has been replaced by a silicon circuit in B.

The key step in the thought-experiment is to take the relevant neural circuit in A, and install alongside it a causally isomorphic silicon circuit, with a switch between the two. What happens when we flip the switch? By hypothesis, the system's conscious experiences will change; from red to blue, say, for the purposes of illustration. This follows from the fact that the system after the change is essentially a version of B, whereas before the change it is just A.

But given the assumptions, there is no way for the system to notice the changes! Its causal organization stays constant, so that all of its functional states and behavioral dispositions stay fixed. As far as the system is concerned, nothing unusual has happened. There is no room for the thought, "Hmm! Something strange just happened!". In general, the structure of any such thought must be reflected in processing, but the structure of processing remains constant here. If there were to be such a thought it must float entirely free of the system and would be utterly impotent to affect later processing. (If it affected later processing, the systems would be functionally distinct, contrary to hypothesis). We might even flip the switch a number of times, so that experiences of red and blue dance back and forth before the system's "inner eye". According to hypothesis, the system can never notice these "dancing qualia".

This I take to be a reductio of the original assumption. It is a central fact about experience, very familiar from our own case, that whenever experiences change significantly and we are paying attention, we can notice the change; if this were not to be the case, we would be led to the skeptical possibility that our experiences are dancing before our eyes all the time. This hypothesis has the same status as the possibility that the world was created five minutes ago: perhaps it is logically coherent, but it is not plausible. Given the extremely plausible assumption that changes in experience correspond to changes in processing, we are led to the conclusion that the original hypothesis is impossible, and that any two functionally isomorphic systems must have the same sort of experiences. To put it in technical terms, the philosophical hypotheses of "absent qualia" and "inverted qualia", while logically possible, are empirically and nomologically impossible.

(Some may worry that a silicon isomorph of a neural system might be impossible for technical reasons. That question is open. The invariance principle says only that if an isomorph is possible, then it will have the same sort of conscious experience.)

There is more to be said here, but this gives the basic flavor. Once again, this thought experiment draws on familiar facts about the coherence between consciousness and cognitive processing to yield a strong conclusion about the relation between physical structure and experience. If the argument goes through, we know that the only physical properties directly relevant to the emergence of experience are organizational properties. This acts as a further strong constraint on a theory of consciousness.
 
  • #2
My criticism of Chalmers' gedanken experiment lies in this quote:

Given the extremely plausible assumption that changes in experience correspond to changes in processing, we are led to the conclusion that the original hypothesis is impossible, and that any two functionally isomorphic systems must have the same sort of experiences

Chalmers reduces the material explanation of consciousness to stuff, and assumes that conscious experiences are directly dependent on the stuff-level in our brains, so that if that varies our experiences must vary. But there are also the organization-levels and the process-levels to consider. They are part of the materialist hypothesis too.

Indeed, of the seven layers identified with internet processing (the OSI model), only the lowest one (the physical connection) can be identified with stuff. The top three are about different inflections of process. If that amount of subtlety is available to cold silicon, then philosophers cannot deny it to materialistic consciousness.

Now it is a commonplace of experience that the stuff level does not always constrain the process level in IT. You can surf the same sites with your laptop in a Wi-Fi cafe as you can with your desktop machine and cable modem, and you will see the same color patterns on the pages, for example. So it is not impossible that consciousness, residing at the process level, is independent of the underlying stuff level, so that Chalmers' assertion is falsified. No?
 
  • #3
I find your phrasing confusing, but you seem to be saying the following: Chalmers argues that a physical system's consciousness is dependent on the kind of 'stuff' that comprises it. I hope that is not what you are claiming, because Chalmers' argument is designed to prove the exact opposite.
 
  • #4
Chalmers is doing a reductio ad absurdum. He asserts that the mechanists believe that consciousness is just stuff and draws his contradiction from that assertion. And I showed that his assertion need not be forced on the mechanists, which destroys his reductio.
 
  • #5
Where in the world do you get that from? He doesn't assert that mechanists (or any other group in particular, for that matter) believe that consciousness is 'just' stuff. He proposes only an argument for why that particular position, taken on its own, is not a good one to hold. The argument has nothing to do with mechanists in particular or any other philosophical position in general, except precisely the one that holds that consciousness depends in some way on physical constitution. (A quick word search shows that the words "mechanist" and "mechanistic" do not even appear in either of the articles I referenced in the original post.)
 
  • #6
Originally posted by hypnagogue
Imagine that we replace a single neuron in N with a silicon chip that performs the same local information processing. There is no attendant difference in N's quality of consciousness. (Why should there be? Intuitively, there should be no difference.)
The intuition is wrong here. You don't know what exactly is the "functional property" and what is the "irrelevant side-effect" of a particular implementation. For example, the detailed chemistry and electric fields may be the functional properties (affecting mental states). To replicate all physical and chemical effects you would have to have an exactly identical physical system -- with identical constituents and identical boundary and initial conditions (otherwise the components would be physically distinguishable and have distinct effects on the rest of the system and on themselves).

It is a different matter if you know exactly the functionality and how it is accomplished by the components. Your wrist-watch gears or chips can be replaced by work-alikes and it will work just the same. But with the human brain and consciousness, we don't know in nearly sufficient detail what it does, how it does it, or even what it is that is being done.

It would be like giving a 2-year-old kid your computer or a clock and asking him to perform a functionally identical replacement -- the kid may replace a grey rectangular computer chip with a grey rectangular cap from some toy. It's the same as far as he knows.

Note that the accuracy of the experimenter's model of the target system affects how closely to the original, and for how long, the modified system will operate. Replacing a grey chip on a clock with a grey plastic cap will not impair the clock's operation until it has to move its hands. As time goes on, the clock's function will be impaired more and more (its time accuracy will drop), until exactly 12 hours later when the cycle repeats.
 
  • #7


nightlight -- in your response, you quoted what I suppose we could call the "weak" functionalist argument, but I assume you mean your reply to apply also to the "strong" functionalist argument (as detailed by Chalmers). Your critique is one I hold as well against the weak argument, but (although it goes against my initial intuition) it appears as if the strong argument is resistant to this line of critique. I am not sure exactly where I stand on this issue, but I lean towards Chalmers' position since it appears to be airtight.

Originally posted by nightlight
The intuition is wrong here. You don't know here what exactly is the "functional property" and what is the "irrelevant side-effect" of a particular implementation. For example, the detailed chemistry and electric fields may be the functional properties (affecting mental states). To replicate all physical and chemical effects you have to have exactly identical physical system -- with identical constituents and identical boundary and initial conditions (otherwise the component would be physically distinguishable and have distinct effects on the rest of the system and on themselves).

Here is the flavor of the argument: imagine that we create a silicon circuit that exactly duplicates the functional properties of, say, the visual cortex of an individual named Bob. Here we mean functional properties just to be the manner in which information, as encoded in neuronal firing patterns, is manipulated by the brain. So suppose the computational output of Bob's visual cortex is defined by the function F(I), where I is any pattern of input fed into the visual cortex. F(I) is a quite subtle, complex, and dynamic function, but suppose we can exactly duplicate F(I) in a silicon substrate called S. Suppose further that S can be integrated into a biological brain, so that neurons can hook up to S and share information with it seamlessly. (The choice of silicon here is arbitrary, for the purpose of illustration; in principle any substrate that can exactly compute F(I) will do.)

Now imagine that we install S alongside Bob's normal biological visual cortex. The setup includes a switch. When the switch is off, S is inactive and Bob's brain functions as it always has. When the switch is on, input is redirected from Bob's visual cortex to S, such that Bob's visual cortex is inactive but S seamlessly computes F(I) and feeds the output into the rest of Bob's brain just as his normal visual cortex would. So Bob's neural firing patterns are identical whether the switch is on or off; the only difference at any given time is the nature of the substrate upon which F(I) is computed.
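To make the setup concrete, here is a toy Python sketch of my own (Bob's pathway, the stand-in mapping, and the stimulus are all made up purely for illustration; any mapping would do):

```python
# Two substrates realized differently but computing the same input->output
# mapping F(I), so the rest of the "brain" receives identical signals
# whichever one the switch routes to.

class BiologicalCortex:
    def process(self, pattern):
        # "neurons": one arbitrary toy realization of F
        return tuple(x % 2 for x in pattern)

class SiliconCortex:
    def process(self, pattern):
        # "silicon": a physically different realization of the same F
        return tuple(x & 1 for x in pattern)

class VisualPathway:
    def __init__(self):
        self.bio, self.si = BiologicalCortex(), SiliconCortex()
        self.switch_on = False          # off = biological cortex, on = silicon S
    def route(self, pattern):
        substrate = self.si if self.switch_on else self.bio
        return substrate.process(pattern)

bob = VisualPathway()
stimulus = (3, 1, 4, 1, 5)
out_off = bob.route(stimulus)           # switch off
bob.switch_on = True
out_on = bob.route(stimulus)            # switch on
assert out_off == out_on                # downstream firing patterns cannot differ
```

Nothing downstream of the switch can distinguish the two substrates, which is all the argument below needs.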

Now, suppose we take the stance that the consciousness of a physical system depends in some way upon the nature of the 'stuff' that comprises that system. Along these lines, suppose that for a certain visual stimulus, Bob will see 'red' if he observes this stimulus when his normal visual cortex is intact (switch off), and he will see 'blue' when using S as a replacement for his visual cortex (switch on).

From this we should expect that if Bob is experiencing redness while his switch is off, then if we suddenly switch Bob's switch on, he should say something to the effect of "Hey, that red painting just turned blue!" But this is impossible, because by definition Bob's neural firing patterns will be identical whether his switch is on or off. Because his neural firing patterns will not change, his behavior, his beliefs, and so on, will also not change. Bob will swear up and down that he sees red whether his switch is on or off. As Chalmers says, there is just no room within this formulation for Bob to even notice that something about his conscious experience has changed.

To deny this conclusion, one must take one of the following positions: a) one must hold that beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) one must hold that conscious experiences do not serve any role whatsoever in determining beliefs, behavioral dispositions, and so on. Both of these are highly undesirable positions to hold. Mounds of neuroscientific data are available to refute a). b) strongly contradicts basic intuition about consciousness; in order for one to believe b), one must hold (for instance) that one's experience of redness vis-à-vis one's experience of blueness plays absolutely no role in determining whether one calls this thing 'red' and that thing 'blue' or vice versa. One cannot simultaneously believe b) and believe that consciousness serves any useful purpose.
 
  • #8


hypnagogue: it appears as if the strong argument is resistant to this line of critique.
In the absence of a scientific theory of consciousness, the strong argument is essentially a tautology (i.e. arguing "let's say we can change all that matters" then asking "can the change matter?"). It may be airtight, but there is nothing left inside.

The options (a) and (b) do not cover all variations of panpsychism. Since panpsychic reasoning isn't natural for everyone, the missing option (c) is best seen via an analogy closer to everyone. Say Chalmers asks you to suppose he can produce a silicon & plastic version of your wife, W2, whose response W2(i) to any input (i) from the rest of the family is the same as that of the original wife, W1(i). Would the change make any difference to the family? Well, it would make a difference to W1. Then there is still a difference between W1(i) and W2(i), since 1 != 2, i.e. the rest of the family knows the original W1 is somewhere else and W2 is just a look-alike robot.

Now Chalmers could argue, let's assume the rest of the family doesn't know the switch occurred. So there is still W1, and it is not all the same to her. And to fool the rest, I doubt even an identical twin sister who had had an in-depth debriefing from her sister could maintain the illusion that everything is the same for very long. Then Chalmers can say, let's create another planet... then a parallel universe...

In my variant of panpsychism, the most elemental 'qualia-bearing elements' (briefly, Q's; assumed to be physical objects) have only two mental states: Off (asleep) and On (I-am-aware). The i-am state of Q1 is a distinct quale from the i-am state of Q2... Brain signaling can turn on/off any Q as needed. When Q7 is in the i-am state, that may be "redness" for the person, because that is how Q7 happened to be wired in the color-processing circuitry when the person learned/developed enough to see colors. Replacing Q7 with Q7' replaces redness with something else. At least until Q7' and the rest of the brain re-learn & adapt to the new version of 'redness', after which it probably would appear as the same 'redness' as before (this would be similar to a person learning to live with upside-down glasses; after a few days it would all look normal). The original Q7 would still be 'redness' if turned on away from the brain.

I don't see how this case would be captured by (a) or (b). It obviously is not (a), since brain signalling does turn Q7 on/off, i.e. mental states do depend on signalling. The option (b) is excluded by assuming that the states and the interaction rules of Q's are the fundamental physical laws, which give rise to the conventional physical laws at the macroscopic level of the Q substratum (this is a hypothesis of a sub-quantum level similar to Wolfram's or Fredkin's cellular automata ideas, an updated version of Leibniz's monads and Democritus' "atoms").
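For what it's worth, here is a toy Python rendering of how I read this model (the class names and wiring are my own invention, purely illustrative):

```python
# Each Q is a unique two-state object; the "brain" wiring decides which Q
# gets switched on for a given stimulus, so swapping Q7 for another Q
# swaps which elemental 'i-am' state plays the role of "redness".

class Q:
    _next_id = 0
    def __init__(self):
        self.id = Q._next_id; Q._next_id += 1
        self.on = False                       # False = asleep, True = i-am-aware
    def set(self, state):
        self.on = state

class ToyBrain:
    def __init__(self):
        self.wiring = {"red": Q(), "blue": Q()}   # stimulus -> dedicated Q
    def perceive(self, stimulus):
        q = self.wiring[stimulus]
        q.set(True)
        return q.id                               # which unique 'i-am' lit up
    def replace(self, stimulus, new_q):
        self.wiring[stimulus] = new_q             # Q7 -> Q7': a different Q

brain = ToyBrain()
before = brain.perceive("red")
brain.replace("red", Q())                         # same functional role, new Q
after = brain.perceive("red")
print(before != after)   # True: "sees red" is preserved, the elemental state is not
```

The functional role ('sees red') is preserved, but which elemental 'i-am' fills it is not.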
 
  • #9


Originally posted by nightlight
In the absence of a scientific theory of consciousness, the strong argument is essentially a tautology (i.e. arguing "let's say we can change all that matters" then asking "can the change matter?"). It may be airtight, but there is nothing left inside.

I would like you to elaborate on this, since it seems like it might be a promising objection-- just as long as you elaborate much more carefully than you did below.

The options (a) and (b) do not cover all variations of panpsychism. Since panpsychic reasoning isn't natural for everyone, the missing option (c) is best seen via an analogy closer to everyone. Say Chalmers asks you to suppose he can produce a silicon & plastic version of your wife, W2, whose response W2(i) to any input (i) from the rest of the family is the same as that of the original wife, W1(i). Would the change make any difference to the family? Well, it would make a difference to W1. Then there is still a difference between W1(i) and W2(i), since 1 != 2, i.e. the rest of the family knows the original W1 is somewhere else and W2 is just a look-alike robot.

Now Chalmers could argue, let's assume the rest of the family doesn't know the switch occurred. So there is still W1, and it is not all the same to her. And to fool the rest, I doubt even an identical twin sister who had had an in-depth debriefing from her sister could maintain the illusion that everything is the same for very long. Then Chalmers can say, let's create another planet... then a parallel universe...

I'm puzzled at what you could possibly be trying to show here, since it is not really analogous in any important sense to Chalmers' thought experiment. We are interested in discerning whether or not changing the physical constitution of a brain changes that brain's subjective experiences. To compare this to how a family might react if a family member were replaced by a plastic robot is a poor strawman argument on several levels, bearing no meaningful resemblance to the original issue.

Replacing Q7 with Q7' replaces redness with something else.

And what exactly is Q7'? Is it a functional isomorph with different physical constitution, or is it something of similar physical constitution as Q7 but with a different function? Or is Q7' different in both function and constitution?

I don't see how this case would be captured by (a) or (b).

It's impossible for me to comment until you specify what Q7' might be.
 
  • #10
From this we should expect that if Bob is experiencing redness while his switch is off, then if we suddenly switch Bob's switch on, he should say something to the effect of "Hey, that red painting just turned blue!" But this is impossible, because by definition Bob's neural firing patterns will be identical whether his switch is on or off. Because his neural firing patterns will not change, his behavior, his beliefs, and so on, will also not change. Bob will swear up and down that he sees red whether his switch is on or off. As Chalmers says, there is just no room within this formulation for Bob to even notice that something about his conscious experience has changed.
I am kinda confused here.

Why do we have to deny Chalmers' claims? We have changed Bob's makeup in a way that only an external observer can see, and thus now he is applying a false experience of Red to a false sense of Blue. What's the problem?

Before and after, Bob can still be considered to be conscious. Thus, duplicating his makeup has duplicated his consciousness, and there is no place for his consciousness to hide during the transition but as a manifestation of the form of the matter.
 
  • #11
Originally posted by FZ+
I am kinda confused here.

Ditto for your objections. :wink:

Maybe I did a bad job of explaining.

Why do we have to deny Chalmers' claims? We have changed Bob's makeup in a way that only an external observer can see, and thus now he is applying a false experience of Red to a false sense of Blue. What's the problem?

What claims of Chalmers' are you talking about here? What is a 'false experience' of red or blue?

Before and after, Bob can still be considered to be conscious. Thus, duplicating his makeup has duplicated his consciousness, and there is no place for his consciousness to hide during the transition but as a manifestation of the form of the matter.

Not sure what you mean here either. It was never in question whether Bob was conscious or not in the first place. What do you mean by 'consciousness hiding' in this context?
 
  • #12


I'm puzzled at what you could possibly be trying to show here, since it is not really analogous in any important sense to Chalmers' thought experiment.
It demonstrates that Chalmers' objective, to show that "physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness", doesn't fit the forms of panpsychism in which the "mind-stuff" of the whole is a composition of the mind-stuff of the components. In order to make the point clearer for those who have trouble conceiving the mind-stuff of neurons or atoms, for the analogy I shifted the observation point up to a level where the constituents are easily understood as capable of being conscious. The analogy demonstrates that for this type of panpsychism, replacing the constituents also replaces the mind-stuff of the other components and, consequently, the mind-stuff of the whole (of the larger social network containing the individuals as its components).

And what exactly is Q7'? Is it a functional isomorph with different physical constitution, or is it something of similar physical constitution as Q7 but with a different function? Or is Q7' different in both function and constitution?
Q7' is merely a notation for the "substitute Q" of Q7, i.e. some other Q being put in place of Q7. Each Q is a unique object and the qualia it has, the Q's 'i-am-aware' state, is unique (and elemental), in the same way that the coordinate of each molecule in the air at any given moment is unique. Thus, in this model, it doesn't make sense to ask whether Q7' is some kind of replica of Q7 -- each Q is unique and each 'i-am' is a unique and elemental quale. It just happens that some Q, say Q7, is the Q that is wired to be turned on when some part of the person's sensory network detects red. The person is simply used to experiencing 'red' as the 'i-am' of that particular Q.

In this model, the qualia are not epiphenomena; they are the inside view of the state of each Q. At the same time, from outside, the Q's i-am is merely state 1 of the Q object. The dynamics governing the interaction of Q's depends on their states and connections; its macroscopic manifestations (for a composite system), viewed from outside, are the regular physical laws, and viewed from inside, the state of consciousness of the composite system.

That this type of model (a set of simple few-state automata) can give rise to conventional physical dynamics (e.g. the Schrödinger, Dirac and Maxwell equations) has been demonstrated in the last few decades (since the early 1980s, initiated by MIT's Fredkin and Toffoli). Check, for example, a recent variation on that theme by G. N. Ord (which contains references and links to related models and precursors; LANL has several of his papers).
 
  • #13


Originally posted by nightlight
It demonstrates that Chalmers' objective, to show that "physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness", doesn't fit the forms of panpsychism in which the "mind-stuff" of the whole is a composition of the mind-stuff of the components.

Perhaps, but in the absence of a solid refutation of the argument, all this indicates to me is that that form of panpsychism is not acceptable. Working clearly within the parameters of the original argument, I would like you to show exactly where the argument breaks down. If you assert that the argument is wrong without showing exactly how it is wrong, you are just begging the question.
 
  • #14
hypnagogue: all this indicates to me is that that form of panpsychism is not acceptable.
Chalmers asserts that any theory of consciousness has to satisfy his "independence property". In order to prove it, he (or you, as his proxy here) rejects as "not acceptable" the potential theories for which the argument doesn't or cannot go through. That amounts to proving his "independence property" for all potential theories which don't contradict his property. This makes the "proof" redundant and opens the possibility that his subset of "acceptable" theories has an empty intersection with the set of empirically valid theories.

As to the particulars of Chalmers' argument at the top of this thread, here are some of the most obvious holes:

1. "suppose [...] there could be two functionally isomorphic systems...

The "functionally isomorphic" is a very fuzzy concept in the absence of definition of "function" (this was my original objection; here I will point out one more problem with it). In the absence of any specificity of "function", I could label your states (say, from now, in steps of 100ms) S1, S2, S3,... and label the corresponding (in time) states of your coffe cup as C1, C2, C3,... and since your state transition diagram is S1 -> S2 -> S3... and for the coffe cup C1 -> C2 -> C3, the two systems are "functionally isomorphic" (regarding the "function" of changing physical states in 100ms snapshots). So, by Chalmer's "independence principle" you and your coffe cup must have the same experience. What does your coffe cup say?

What exactly are the criteria by which you are allowed to label your state S1 as corresponding to state C1 of some other system in order to establish the "functional isomorphism"? The criteria cannot utilize the "qualia" of either system (otherwise it becomes a circular argument).

Can "providing the same verbal/motoric response to some subset of the external stimuli" work as the criteria, i.e. some kind of Turing test? (Obviously, one cannot claim "all external stimuli" since the two systems cannot be at the same place at the same time; also the finite test time precludes the match on infinite number of stimuli.)

Would then a robot which uses a large lookup table, where for each stimulus S(i) it merely retrieves from memory the response R(i) (the stimulus space may have some metric defined so that the "nearest" S'(i) can be matched in the absence of an exact match) and executes it, have to have the same qualia according to Chalmers' "independence principle"? After all, the functioning of a neural network could be described as pattern recognition, i.e. a form of approximate retrieval from memory.
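Here is a minimal Python sketch of such a lookup-table responder (the stored stimuli, responses, and metric are made up for illustration):

```python
# Responses are not computed but retrieved; a simple metric picks the
# nearest stored stimulus when there is no exact match.

lookup = {
    (255, 0, 0): "red",       # stored stimulus -> canned response
    (0, 0, 255): "blue",
    (0, 255, 0): "green",
}

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))   # squared Euclidean metric

def respond(stimulus):
    nearest = min(lookup, key=lambda s: distance(s, stimulus))
    return lookup[nearest]                            # pure retrieval, no "processing"

print(respond((250, 10, 5)))   # -> "red": behaves like a perceiver of red
```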

2. "...two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences.

Here Chalmers assumes that "experience" is something he can take out and measure somehow in order to be able to assign any meaning to the terms "different experience" and "same experience". Since the two "functionally isomorphic" systems have to pass at least some level of Turing test (otherwise you have the "same experience" as your coffee cup), the least one can require from "functional isomorphism" is that they both say "red" when a red color is shown and "blue" when a blue color is shown. (Otherwise, the two systems are speaking different languages.)

So, now you have two systems, A and B, both saying "red" when red is shown. How can Chalmers know anything about "what redness really looks like" to A and to B in order to start comparing them or making any assertion about it? He can be at most one of the two systems, say A. In that case he cannot say anything about what "redness is like to B." Essentially, by definition, what it is really like to be "system X" can be known only to system X -- it is a single and unique vantage point that only system X can occupy. Any presumed comparison (in order to give meaning to his terms "different experience" or "same experience") is at best unfalsifiable (it has no contact with the empirical method) and at worst self-contradictory (like saying "let's consider a triangle A which has four, or perhaps five, corners").

Therefore his principle can at best be a definition of what he is going to call "the same experience for two different systems." Insisting subsequently on proving that A and B have the same experience amounts to the equivalent of "proving" that a triangle has three corners after defining the term "triangle" as a polygon with three corners.
 
  • #15
Originally posted by nightlight
Chalmers asserts that any theory of consciousness has to satisfy his "independence property".

I searched several papers by Chalmers on consciousness, including the ones cited in this thread, and found no matches for "independence property," so you'll have to fill me in on what you mean by that. Although it would probably be better to use terms Chalmers himself uses when talking about his work.

As to the particulars of Chalmers' argument at the top of this thread, here are some of the most obvious holes:

1. "suppose [...] there could be two functionally isomorphic systems...

The "functionally isomorphic" is a very fuzzy concept in the absence of definition of "function" (this was my original objection; here I will point out one more problem with it). In the absence of any specificity of "function", I could label your states (say, from now, in steps of 100ms) S1, S2, S3,... and label the corresponding (in time) states of your coffe cup as C1, C2, C3,... and since your state transition diagram is S1 -> S2 -> S3... and for the coffe cup C1 -> C2 -> C3, the two systems are "functionally isomorphic" (regarding the "function" of changing physical states in 100ms snapshots). So, by Chalmer's "independence principle" you and your coffe cup must have the same experience. What does your coffe cup say?

Chalmers specifies what he means by "functional organization" and "functional isomorphism" in his paper http://www.u.arizona.edu/~chalmers/papers/qualia.html:

To put the issue differently, even once it is accepted that experience arises from physical systems, the question remains open: in virtue of what sort of physical properties does conscious experience arise? Some property that brains can possesses will presumably be among them, but it is far from clear just what the relevant properties are. Some have suggested biochemical properties; some have suggested quantum-mechanical properties; many have professed uncertainty. A natural suggestion is that when experience arises from a physical system, it does so in virtue of the system's functional organization. On this view, the chemical and indeed the quantum substrates of the brain are not directly relevant to the existence of consciousness, although they may be indirectly relevant. What is central is rather the brain's abstract causal organization, an organization that might be realized in many different physical substrates.

In this paper I defend this view. Specifically, I defend a principle of organizational invariance, holding that experience is invariant across systems with the same fine-grained functional organization. More precisely, the principle states that given any system that has conscious experiences, then any system that has the same functional organization at a fine enough grain will have qualitatively identical conscious experiences. A full specification of a system's fine-grained functional organization will fully determine any conscious experiences that arise.

To clarify this, we must first clarify the notion of functional organization. This is best understood as the abstract pattern of causal interaction between the components of a system, and perhaps between these components and external inputs and outputs. A functional organization is determined by specifying (1) a number of abstract components, (2) for each component, a number of different possible states, and (3) a system of dependency relations, specifying how the states of each component depends on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. Beyond specifying their number and their dependency relations, the nature of the components and the states is left unspecified.

A physical system realizes a given functional organization when the system can be divided into an appropriate number of physical components each with the appropriate number of possible states, such that the causal dependency relations between the components of the system, inputs, and outputs precisely reflect the dependency relations given in the specification of the functional organization. A given functional organization can be realized by diverse physical systems. For example, the organization realized by the brain at the neural level might in principle be realized by a silicon system.

A physical system has functional organization at many different levels, depending on how finely we individuate its parts and on how finely we divide the states of those parts. At a coarse level, for instance, it is likely that the two hemispheres of the brain can be seen as realizing a simple two-component organization, if we choose appropriate interdependent states of the hemispheres. It is generally more useful to view cognitive systems at a finer level, however. For our purposes I will always focus on a level of organization fine enough to determine the behavioral capacities and dispositions of a cognitive system. This is the role of the "fine enough grain" clause in the statement of the organizational invariance principle; the level of organization relevant to the application of the principle is one fine enough to determine a system's behavioral dispositions. In the brain, it is likely that the neural level suffices, although a coarser level might also work. For the purposes of illustration I will generally focus on the neural level of organization of the brain, but the arguments generalize.

Clearly, your coffee cup analogy fails to fit the notion of functional isomorphism, as defined by Chalmers, on several levels.
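To make the definition concrete, here is a rough Python rendering of my own (the two-component network, its states, and the transition table are made up for illustration; nothing here comes from Chalmers' paper):

```python
# Chalmers' three-part definition as a data structure: components, their
# possible states, and dependency relations saying how each component's next
# state follows from all previous states and the input. Two physical systems
# "realize" the same organization when both instantiate this table exactly.

from itertools import product

components = ["n1", "n2"]                 # (1) abstract components
states     = {"n1": (0, 1), "n2": (0, 1)} # (2) possible states per component

def dependency(prev, inp):                # (3) dependency relations
    """Next state of each component as a function of all previous states + input."""
    return {
        "n1": inp,                        # n1 copies the external input
        "n2": prev["n1"] ^ prev["n2"],    # n2 XORs the previous component states
    }

def realizes(system_step):
    """Check that a candidate system reproduces every dependency relation."""
    for s1, s2, inp in product((0, 1), repeat=3):
        prev = {"n1": s1, "n2": s2}
        if system_step(prev, inp) != dependency(prev, inp):
            return False
    return True

# A silicon look-alike that implements the same table realizes the organization:
print(realizes(lambda prev, inp: {"n1": inp, "n2": prev["n1"] ^ prev["n2"]}))  # True
```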

2. "...two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences.

Here Chalmers assumes that "experience" is something he can take out and measure somehow in order to be able to assign any meaning to the terms "different experience" and "same experience". Since the two "functionally isomorphic" systems have to pass at least some level of Turing test (otherwise you have the "same experience" as your coffee cup), the least one can require from "functional isomorphism" is that they both say "red" when a red color is shown and "blue" when a blue color is shown. (Otherwise, the two systems are speaking different languages.)

If I looked at a Van Gogh painting yesterday and I look at it again today, I can reasonably assert that the painting aroused the same visual experience in me on both occasions. So clearly there is some sense in which experiences can be compared for similarity and difference.

As for your objection that we cannot be sure if two separate physical systems (say, me and you) have the same experiences, it seems to be irrelevant to the argument as formulated by Chalmers. His thought experiment involves a single organism switching between biological and non-biological substrates for some subset of the computations performed by its brain. If the switch between these functionally isomorphic substrates causes a different experience in that single organism (say, switching between red and blue), then the organism should be able to compare the two experiences and discern a difference just as readily as you or I can differentiate between the experiences of looking at a red wall and a blue one.
 
  • #16


hypnagogue: ...found no matches for "independence property," so you'll have to fill me in on what you mean by that.
You have brought up the 'independency' wording in your intro:

More simply put, the idea is that the subjective experiences of a physical system don't depend on the stuff the system is made of, but rather what the system does.

I used your 'independency' wording since it is shorter, more straightforward and sounds less pompous than Chalmers' "principle of organizational invariance". I didn't imagine it would confuse anyone who has read the thread from the start.

Now that the term is cleared up, could you address the substance of the original objection, i.e. how does your procedure of labeling as "not acceptable" the potential theories which can't satisfy "invariance" (such as variants of panpsychism) avoid turning the principle into a tautology (as described originally)?

(1) a number of abstract components, (2) for each component, a number of different possible states, and (3) a system of dependency relations, specifying how the states of each component depends on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. ...

Clearly, your coffee cup analogy fails to fit the notion of functional isomorphism, as defined by Chalmers, on several levels.


Not at all. Dividing the system into smaller volumes leaves the coffee-cup argument as is. Namely, each of your sub-volumes advances through unique and non-repeating microscopic states, S1, S2,... just as each of the coffee cup's sub-volumes does. Fields (electric and quantum wave functions) from each particle in each sub-volume spread over the whole system in either case. In a physical system, all pieces go through unique non-repeated states and all pieces depend on all others (via electric and quantum matter fields). Now, he can say plainly that what he really means is the same neuron-work-alike objects with all their connections and electric signaling being the same (within some error margin, say 1% or some such).

In any case, it seems his "invariance" and "functional isomorphism" definitions are much narrower than the general-sounding terminology he uses would suggest. It seems arbitrary in any case, but he can define whatever he wants; there is nothing to argue about that.

If I looked at a Van Gogh painting yesterday and I look at it again today, I can reasonably assert that the painting aroused the same visual experience in me on both occasions. So clearly there is some sense in which experiences can be compared for similarity and difference.

Only within the same system. What it is exactly like to be you, only you can know.

As for your objection that we cannot be sure if two separate physical systems (say, me and you) have the same experiences, it seems to be irrelevant to the argument as formulated by Chalmers.

He uses the assumption that the experience is different (e.g. one experiences red and another blue) during the exchange procedure to determine the location and boundaries of the subsystem which makes the difference. So, now that you say he doesn't need it, let's agree and modify his reasoning so the second system or its "experience" doesn't appear at all (why then did he go to the trouble with it anyway?). Now, you will need some other way to specify which subsystem to replace with the silicon work-alike. You can specify it as "some" or "any"... as discussed below.

His thought experiment involves a single organism switching between biological and non-biological substrates for some subset of the computations performed by its brain. If the switch between these functionally isomorphic substrates causes a different experience in that single organism (say, switching between red and blue), then the organism should be able to compare the two experiences and discern a difference just as readily as you or I can differentiate between the experiences of looking at a red wall and a blue one.

If he is saying that there exists some subset of neurons he can replace with "silicon work-alikes" without causing any change in some particular "redness" experience, then yes, of course: there are probably many you can change, or even remove altogether, with no effect on "redness" (neurons die by the thousands or millions every day, so we can consider the existence of replaceable subsets an experimental fact).

If he is asserting that he can replace any subset of neurons with silicon work-alikes and there won't be any change in perception of redness, then that is equivalent of putting in by hand the conclusion he is trying to prove.

If he is merely trying to say in a roundabout way that 'zombies' can't exist (that's one nonfalsifiable consequence of his nonfalsifiable "invariance" principle), then fine, let's see the theory that complies with that postulate and connects it in a falsifiable manner with the empirical world.

As I see it, his "principle of organizational invariance" is at best a convoluted definition for the term "same qualia in different systems" as "qualia reported by the functionally isomorphic systems in response to the same stimulus" where the "functionally isomorphic" is something similar to cloning down to the level of neuronal electric signalling, not above and not below, but right there somewhere. Well. Ok. He can define whatever he wishes. (Whether it will turn out to be a useful definition is another matter.)
 
  • #17


Originally posted by nightlight
Now that the term is cleared up, could you address the substance of the original objection, i.e. how does your procedure of labeling as "not acceptable" the potential theories which can't satisfy "invariance" (such as variants of panpsychism) avoid turning the principle into a tautology (as described originally)?

Because there are principled reasons for believing the organizational invariance argument. There are no principled reasons for believing your alternatives, or at least you have not presented any yet. Since we have good reason to believe Chalmers' argument, we should reject any hypotheses that contradict it, unless we can discover a flaw in the argument. Reason takes precedence over pure postulation.

Not at all. Dividing the system into smaller volumes leaves the coffee-cup argument as is. Namely, each of your sub-volumes advances through unique and non-repeating microscopic states, S1, S2,... just as each of the coffee cup's sub-volumes does. Fields (electric and quantum wave functions) from each particle in each sub-volume spread over the whole system in either case. In a physical system, all pieces go through unique non-repeated states and all pieces depend on all others (via electric and quantum matter fields). Now, he can say plainly that what he really means is the same neuron-work-alike objects with all their connections and electric signaling being the same (within some error margin, say 1% or some such).

The coffee cup analogy does not work. Let me be more explicit.

To clarify this, we must first clarify the notion of functional organization. This is best understood as the abstract pattern of causal interaction between the components of a system, and perhaps between these components and external inputs and outputs. A functional organization is determined by specifying (1) a number of abstract components, (2) for each component, a number of different possible states, and (3) a system of dependency relations, specifying how the states of each component depends on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states.

Let's systematically run through the criteria listed by Chalmers.

1) It should be possible in principle to divide the coffee cup into as many abstract components as there are neurons in the brain, so suppose that we do just this.

2) Now we need a mapping such that each abstract component in the coffee cup has as many possible states as there are possible states for a neuron. Here, the relevant states of a neuron would seem to be 'on' and 'off,' so we need each abstract component in the coffee cup to have 2 possible states and no more. Without thinking about that much more deeply, I will concede that this too seems possible in principle.

3) Now we need a system of dependency relations specifying how the states of each abstract component in our coffee cup depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states, such that this set of dependency relations for the coffee cup precisely mirrors the set of dependency relations existing for neurons in the brain. (A simpler, though more vague, way of saying this is that information flows through both systems in precisely the same way.) This is where the coffee cup analogy fails spectacularly. Unless you propose that there exists some way that we can break a coffee cup into abstract parts such that these parts process abstract patterns of information in precisely the same way that neurons in the brain process abstract patterns of information, the analogy is non-existent.

As for your objection that we cannot be sure if two separate physical systems (say, me and you) have the same experiences, it seems to be irrelevant to the argument as formulated by Chalmers.

He uses the assumption that the experience is different (e.g. one experiences red and another blue) during the exchange procedure to determine the location and boundaries of the subsystem that makes the difference.

No, the location and boundary of the subsystem that makes the difference is determined at the start, by hypothesis. It is built into the structure of his reductio ad absurdum.

So, now that you say he doesn't need it

No, I said he doesn't need to compare experiences across two organisms. Of course he needs to tentatively establish (by hypothesis) that switching between subsystems will cause the organism to see different colors, for the purpose of his reductio.

If he is asserting that he can replace any subset of neurons with silicon work-alikes and there won't be any change in perception of redness, then that is equivalent of putting in by hand the conclusion he is trying to prove.

No, it is not. He does not just assume that his principle of organizational invariance is true. Rather, he shows that if this principle is not true, then it must imply that a) beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions, and so on.

So, once again, Chalmers shows that we are forced to choose between the principle of organizational invariance and one of a) or b). Neurobiological research strongly indicates that a) is not a viable option, and b) strongly violates our intuition about the function of consciousness and also makes it unintelligible how consciousness could have any evolutionarily advantageous function. So, given how undesirable a) and b) are, one is naturally inclined to choose the principle of organizational invariance. This does not amount to a proof per se, but it does clarify the consequences of whichever position we choose to believe. And all of this is achieved by means of careful reasoning; there is no part of Chalmers' argument where he begs the question.
 
  • #18
Is "redness" same for everyone?

hypnagogue: Because there are principled reasons for believing the organizational invariance argument. There are no principled reasons for believing your alternatives, or at least you have not presented any yet. Since we have good reason to believe Chalmers' argument, we should reject any hypotheses that contradict it,...
In other words, Chalmers could proclaim an "all polygons have three corners" principle, and if anyone suggests a rectangle as a counterexample, you can simply brush it off as "not acceptable" and still insist it is a valid principle, and your response is a perfectly logical and valid arguing technique. I'd say it is a fine technique if you were an editor of a journal and I had submitted a paper you disagree with.

Otherwise, you need to properly qualify Chalmers' "invariance principle" as a principle which holds for all "acceptable" theories, where the "acceptable" theories are defined as those theories which don't contradict Chalmers' invariance principle.

This is not a matter of which theory of consciousness is valid overall, but simply a question of a direct counterexample to the alleged proof -- for theories in which the qualia are associated with some specific and unique microscopic components of the individual's brain, Chalmers' principle is outright false. The alleged "proof" doesn't demonstrate that anything else is reduced 'ad absurdum' for such theories. So, your reply is that Chalmers has some "principled reasons" to believe his principle, and since he is apparently an important person, we will label all counter-examples as "not acceptable" and maintain that he has proven that his principle must hold for all theories of consciousness. Yes Sir!

... unless we can discover a flaw in the argument. Reason takes precedence over pure postulation.

1) A counterexample to the stated principle trumps any need to look further and find exactly where the errors are in the alleged proof that follows. (If you state "all polygons have three corners," I can merely point to a rectangle and there is no need to find an error in your proof that it must be so.)

2) There is no proof until there are coherent and precise premises and a non-tautological (contentful) statement of the conclusion to be proved (see the circularity objection at the end of this note).

The coffee cup analogy does not work. Let me be more explicit...

3) Now we need a system of dependency relations specifying how the states of each abstract component in our coffee cup depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states, such that this set of dependency relations for the coffee cup precisely mirrors the set of dependency relations existing for neurons in the brain.

Again you don't seem to realize that in a physical system, such as a brain or a coffee cup, "dependency relations" do mirror exactly between the brain and the coffee cup, since every component of the brain interacts with all other components of the brain, i.e. the detailed physical state of each component depends on the detailed physical state of all other components and of its own previous state (for N components, there are N^2 "dependency" relations). The same holds for the coffee cup components -- for N components there are N^2 "dependency relations" forming precisely the same (however trivial) dependency graph.
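A toy Python illustration of my own (made-up labels): with no filter on which interactions count, any two N-component systems yield the same complete dependency graph, related by a simple relabeling.

```python
# With every component depending on every component (including itself), the
# dependency graph of any N-component system is the same complete graph.

def dependency_graph(components):
    return {(a, b) for a in components for b in components}   # N^2 edges

brain = [f"neuron_{i}" for i in range(4)]
cup   = [f"cup_region_{i}" for i in range(4)]

g_brain = dependency_graph(brain)
g_cup   = dependency_graph(cup)

# Relabelling components maps one graph exactly onto the other.
relabel = dict(zip(brain, cup))
print({(relabel[a], relabel[b]) for a, b in g_brain} == g_cup)   # True
```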

What you're missing is a criterion to filter the kinds of interactions that you will count. Only then can you have some specially defined dependency type, say "Chalmers dependency relations," which could differentiate the dependency graphs of a coffee cup and a brain.

My point here is that the general terms "states" and "dependency relations" without further qualification can't differentiate a coffee cup from a brain. If Chalmers has in mind some specific C-component and C-state and C-dependency, then fine, that is then the kind of system for which he is claiming validity of his invariance principle -- i.e. if two systems operate the same way at, apparently, the neuron-granularity level (with the same currents, the same connections), then Chalmers is asserting that these two systems must have the "same qualia" for the same input.

The problem here is that without an independent definition of the term "same qualia for two different systems", the entire content of his invariance principle amounts to Chalmers' definition of the term "same qualia for two different systems." Therefore his subsequent attempt to prove his definition is at best a circular and pointless word-shuffling game.

Note that I am talking about Chalmers' "invariance principle" above, not the alleged proof of it, i.e. the comparison of qualia between different systems is precisely the essence of that principle. Therefore the above objection (as well as my earlier objections to this type of comparison, as applied to his principle) is relevant.
 
  • #19


Originally posted by nightlight
In other words, Chalmers could proclaim an "all polygons have three corners" principle, and if anyone suggests a rectangle as a counterexample, you can simply brush it off as "not acceptable" and still insist it is a valid principle, and your response is a perfectly logical and valid arguing technique.

Again, here's the structure of the argument.

1. Either the consciousness of a system depends in some way on the nature of the 'stuff' that comprises the system, or it does not. There are only two possibilities here, and one of them must be true. Chalmers' principle of organizational invariance (POI) holds that consciousness doesn't depend on the nature of 'stuff', so by definition it follows that any and all theories that do not agree with the POI must take the position that in at least one instance, consciousness does depend on the nature of 'stuff' and not just what the 'stuff' does.

2. Chalmers' reductio argument shows that if we assume that the POI is false, it logically follows from this assumption that either a) beliefs, behavioral dispositions, and so on are not dependent in any way upon neural firing patterns, or b) conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions, and so on. Both a) and b) are extremely undesirable positions to hold, for reasons I have already explained.

3. From 1 & 2 it follows that any theory of consciousness T that holds that the POI is false must also hold that either a) or b) is true. Thus, to whatever extent we characterize positions a) and b) as undesirable/unacceptable/untenable, we must also characterize T as an equally undesirable/unacceptable/untenable theory. And to whatever extent we characterize T as undesirable/unacceptable/untenable, we must hold the POI to be proportionately desirable/acceptable/tenable.

(For instance, say we have 1% confidence that a) or b) could be true. Then we also have 1% confidence that any hypothesis that contradicts the POI could be true. Since it is a logical certainty that either POI is true or POI is not true, we can also say that we have 99% confidence that POI is true in this example.)
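(Spelling out the arithmetic of this example, on the assumption that the reductio itself is sound, i.e. that the falsity of the POI really does entail a) or b):

\[
\neg \mathrm{POI} \Rightarrow (a \lor b)
\quad\Longrightarrow\quad
P(\neg \mathrm{POI}) \le P(a \lor b) = 0.01
\quad\Longrightarrow\quad
P(\mathrm{POI}) \ge 0.99 .
\])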

Step 3 explains why I have described any hypotheses that do not agree with POI as "unacceptable." The alternatives to POI are unacceptable only because they all have the unacceptable consequence a) or b). Contrast this with your analogy, where there are no unacceptable consequences that follow from the counterexample to your 'polygon principle.' Thus, your analogy is actually disanalogous.

This is not a matter of which theory of consciousness is valid overall, but simply a question of a direct counterexample to the alleged proof -- for theories in which qualia are associated with some specific and unique microscopic components of the individual's brain, Chalmers' principle is outright false.

Then these theories must hold that either a) or b) is true. Both a) and b) are undesirable positions to hold, so these theories must also be equally undesirable to hold.

Again you don't seem to realize that in a physical system, such as a brain or a coffee cup, the "dependency relations" do mirror each other exactly between the brain and the coffee cup, since every component of the brain interacts with all other components of the brain, i.e. the detailed physical state of each component depends on the detailed physical states of all other components and on its own previous state (for N components, there are N^2 "dependency" relations). The same holds for the coffee cup components -- for N components there are N^2 "dependency relations" forming precisely the same (however trivial) dependency graph.

Again, what we need is a system of dependency relations specifying how the states of each abstract component in the system depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. We need to know how the states depend on each other, not just which states depend on which other states. If the states do not depend on each other in the same manner, then they will not compute the same function, and therefore they will not be functionally isomorphic. A coffee cup is not functionally isomorphic to a brain, so your analogy has no substance.
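To illustrate the "how" versus "what" distinction (a toy Python sketch with made-up update rules, not anything from Chalmers): two systems can share the same bare dependency graph -- every component depends on every component and on the input -- while depending on them in different manners, and they then diverge on the same input stream.

```python
# Toy sketch (hypothetical update rules): identical dependency graphs,
# different manners of dependence, hence no functional isomorphism.

def step_sum(state, inp):
    # each component's next value = sum of all current values + input
    return [sum(state) + inp for _ in state]

def step_max(state, inp):
    # same dependency graph ("depends on all components and the input"),
    # but a different rule: next value = max of all current values + input
    return [max(state) + inp for _ in state]

state_a = [1, 2, 3]
state_b = [1, 2, 3]
for inp in [0, 1, 0]:
    state_a = step_sum(state_a, inp)
    state_b = step_max(state_b, inp)

print(state_a)  # [57, 57, 57]
print(state_b)  # [4, 4, 4] -- same inputs, different trajectory
```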

The problem here is that without an independent definition of the term "same qualia for two different systems", the entire content of his invariance principle amounts to Chalmers' definition of the term "same qualia for two different systems." Therefore his subsequent attempt to prove his definition is at best a circular and pointless word-shuffling game.

If we assume that the contents of consciousness are fully determined by some set of criteria C, then for any two systems in which the circumstances of C are identical, the contents of consciousness will be identical as well. Most thinkers have no problem readily accepting that the contents of consciousness are fully determined by some set of criteria C (otherwise one concedes that the contents of consciousness are generated randomly). Therefore, most thinkers will readily accept that if two systems are identical across all criteria in C, then they will have identical qualia. This is not a Chalmers definition of "same qualia"; it's just a logical one.

Chalmers' argument is an attempt to clarify which criteria are included in C. He shows that if one of these criteria is the nature of the 'stuff' making up the conscious system, then it follows that either a) or b). Since it seems highly unlikely that a) or b) could be true, it is equally unlikely that the nature of the 'stuff' making up a system is a criterion included in C.
 
  • #20


I will get to the rest of your argument in a separate message later. Here I will address only the coffee cup sub-argument:
hypnagogue Again, what we need is a system of dependency relations specifying how the states of each abstract component in the system depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. We need to know how the states depend on each other, not just which states depend on which other states.
With both the brain and the coffee cup, a (full, most detailed) state change in any component affects all other components by changing their (full) states. If component A (of the brain or the coffee cup) changes state from SA to SA1, components B, C, ... change their states from SB to SB1, SC to SC1, ... If A changes to a different state SA2, then B, C, ... change to different states SB2, SC2, ... If the states SA, SA1, SA2, ... are different from each other, then the states SB, SB1, SB2, ... are different from each other. That tells you the "what" and the "how", which happen to be the same thing unless you coarse-grain the detailed physical state (distinct SA's always cause distinct SB's, SC's, etc.).

You have to specify some special kind of component and state, a coarse-grained form of the detailed physical state, in order to get a different causal dependency graph between the brain and the coffee cup. At the most detailed state level, the N components of either perform the same type of transition (different initial point to different final point). The coarse-grained form of state, say a C-state, would have to contain an entire class of detailed physical states.
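As a toy illustration of this coarse-graining point (Python, with entirely made-up numbers and thresholds; no claim about real neurons or cups): at the detailed level, both systems map distinct inputs to distinct detailed states, and only after lumping detailed states into coarse C-states can their transition behaviour come apart.

```python
# All values here are hypothetical stand-ins for "detailed physical state".

def detailed_neuron(light):
    # distinct input -> distinct detailed state (e.g. membrane potential)
    return {"blue": 0.73, "red": 0.12}[light]

def detailed_cup(light):
    # distinct input -> distinct detailed state (e.g. a tiny thermal difference)
    return {"blue": 20.0001, "red": 20.0002}[light]

def coarse_neuron(v):
    return "on" if v > 0.5 else "off"    # hypothetical firing threshold

def coarse_cup(t):
    return "on" if t > 50 else "off"     # hypothetical threshold; never reached

for light in ["blue", "red"]:
    print(light,
          detailed_neuron(light), coarse_neuron(detailed_neuron(light)),
          detailed_cup(light), coarse_cup(detailed_cup(light)))

# Fine-grained: both systems take distinct inputs to distinct detailed states.
# Coarse-grained: the neuron's C-state tracks the input, the cup's does not,
# so only at the coarse-grained level do the transition diagrams differ.
```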

If the states do not depend on each other in the same manner, then they will not compute the same function, and therefore they will not be functionally isomorphic. A coffee cup is not functionally isomorphic to a brain, so your analogy has no substance.

They do "compute" the same "function", they merely express the result in a different format -- if you expose the coffe cup in state SC to blue light its state becomes SC(b), and if you expose it to red it becomes SC(r) (where SC(b), SC(r) and SC are all distinct). The same form of transition occurs with brain (or entire human): the initial SB goes into SB(b) or SB(r) and all states SB, SB(b), SB(r) are different. The "result" of the computation is different for different inputs and same for the same inputs. Obviously, you will need different "reader" devices if you wish to translate the results of computations into a form readable by humans. With brain, an interface to human motoric system may result in spoken words 'blue' or 'red' while with the coffe cup the "reader" device may be some physical measuring apparatus (which measures, say, absorbed and scattered energy/momentum of photons and cup, atom excitations) to read-off the kind of photons which had struck the cup from the "result" computed by the cup (its final state SC(b) or SC(r)).

The general physical definitions of component, state, computation, result of computation, etc. can't differentiate the two. You have to narrow down substantially what you mean by "component", "state" and "compute"; otherwise the coffee cup and the brain would have to have the same mental state when exposed to the same input (according to the POI).

It seems that Chalmers really has in mind, roughly, the replication of functionality at the level of neural currents (pulse trains), since that is what his thought experiment explicitly uses (physically interchangeable sub-systems with compatible electro-neural connectors). Whatever it is, though, it needs to be stated up front (as an assumption of the POI), since the most general 'states', 'components' and 'computation' cannot differentiate a brain from a coffee cup.
 
  • #21
To compare the states of two systems, we need to construct a mapping such that the component states of one system map in a self-consistent manner onto the component states of the other. For instance, take two systems A and B with the same number of abstract components, whose components can only be in one of two abstract states: s1 or s2. Suppose we denote any arbitrary component of A as cA, and we say that cA is in state s1 by writing s1(cA) and state s2 by writing s2(cA), with analogous notation for B. Then the only self-consistent mappings we can construct between the two are M1: {s1(cA)--> s1(cB), s2(cA)--> s2(cB)} or M2: {s1(cA)--> s2(cB), s2(cA)--> s1(cB)}, such that each cA is always mapped onto the same cB. We then say that any cA is isomorphic to its assigned cB if they are in the same mapped states. For instance, under mapping M1, a certain cA in state s1 is isomorphic to its assigned cB only if that cB is in state s1; likewise, under mapping M2, a certain cA in state s1 is isomorphic to its assigned cB only if that cB is in state s2. We say that A is isomorphic to B only if each cA in A is isomorphic to its assigned cB in B.

Suppose that A and B are isomorphic at some initial time t0 and receive arbitrary, isomorphic inputs over a certain period of time T. If A and B preserve their isomorphism for the duration of T, then we say that they are functionally isomorphic; if at any point during T the isomorphism is broken, then A and B are not functionally isomorphic.
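For concreteness, here is one way that definition can be turned into a check (a minimal Python sketch; the component names, the toy update rule, and the simplification of feeding both systems the same input stream are mine, not Chalmers'):

```python
# A sketch of the definition above: a state assigns 's1' or 's2' to each
# component; two systems are functionally isomorphic over a run of inputs
# iff the mapped component-wise correspondence holds at every step.

def isomorphic(state_a, state_b, comp_map, state_map):
    """True if, component by component, the mapped B-state matches A's state."""
    return all(state_map[state_a[c]] == state_b[comp_map[c]] for c in state_a)

def functionally_isomorphic(step_a, step_b, state_a, state_b,
                            comp_map, state_map, inputs):
    """True if the isomorphism holds initially and after every input."""
    if not isomorphic(state_a, state_b, comp_map, state_map):
        return False
    for inp in inputs:
        state_a = step_a(state_a, inp)
        state_b = step_b(state_b, inp)
        if not isomorphic(state_a, state_b, comp_map, state_map):
            return False
    return True

# Toy example: two 2-component systems that flip every component on input 1.
def toggle(state, inp):
    flip = {"s1": "s2", "s2": "s1"}
    return {c: (flip[v] if inp == 1 else v) for c, v in state.items()}

comp_map = {"a1": "b1", "a2": "b2"}   # the component assignment
state_map = {"s1": "s1", "s2": "s2"}  # the state mapping M1
print(functionally_isomorphic(toggle, toggle,
                              {"a1": "s1", "a2": "s2"},
                              {"b1": "s1", "b2": "s2"},
                              comp_map, state_map, [1, 0, 1]))  # True
```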

Now, for a brain to be functionally isomorphic to a coffee cup, first we have to divide the two into an equal number of abstract components, each with the same number of possible abstract states. Let us take it for granted that we are considering the abstract components of the brain to be neurons, whose relevant states are "on" or "off." So we need to divide a coffee cup into billions of abstract components, each with two possible states s1 or s2. Next we have to construct a mapping such that each neuron is assigned one unique coffee cup component, such that a neuron being "on" is mapped onto either s1 or s2, and a neuron being "off" is mapped onto whichever of s1 or s2 remains. Say we map "on" onto s1 and "off" onto s2. Now suppose that at some time t0 the brain is isomorphic to the coffee cup, and that for some period of time T both systems receive arbitrary, isomorphic inputs. Then they are functionally isomorphic only if the isomorphism between brain and coffee cup is preserved for the duration of T.

This is the relevant sense of the term "functionally isomorphic." Note that the brain and the coffee cup will be functionally isomorphic only if their components have isomorphic dependency relations. By a dependency relation we do not mean just that x depends on y, but rather that x depends in some specified manner on y. To use a mathematical analogy, if we say y=f(x), we have not fully specified the dependency relation between x and y; to do that we would have to say y=x+2, or some such. In particular, if y=f(x) and g=f(h), then we don't know enough to say that the same dependency relation holds between y and x as between g and h (as you have been implying). To know that the same dependency relation holds between the two pairs, we would have to specify something like y=x+2 and g=h+2.
 
  • #22


hypnagogue 1. Either the consciousness of a system depends in some way on the nature of the 'stuff' that comprises the system, or it does not...
... it logically follows from this assumption that either a) beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions, and so on. Both a) and b) are extremely undesirable positions to hold, for reasons I have already explained...
If we assume that the contents of consciousness are fully determined by some set of criteria C, then for any two systems in which the circumstances of C are identical, the contents of consciousness will be identical as well.
The basic flaw of the argument is the misuse (or confused use) of the concept of causality (in the variations emphasized above). Chalmers' explanations and yours rely on the soft, informal meaning of the concept of "causality" in order to weave the argument. I'll need a more discriminating concept of "causality" to demonstrate the confusion.

Saying that A causes B, or A determines B, or B depends on A, etc., are statements within some model (or theory) of a phenomenon. For example, what causes the apple to fall to the ground? In Newton's gravity it is the gravitational force F = G*M*m/r^2. In Einstein's gravity the falling is caused by the curvature of the space around the apple. In Stochastic Electrodynamics (SED) it is caused by residual/shielding effects of the electromagnetic fields (the zero-point field, ZPF). The same resulting phenomenon, the apple falling, has different causal chains explaining it. The 'causes' are always constructs within models, or causal systems. Note also that even though a particular "cause" within some causal system may not be directly observable (i.e. it may not have an explicit operational definition, such as the ZPF, the quantum vacuum, or space curvature), such causes serve as ingredients of the deductions within the system, which in the end do result in empirical consequences.

In the example of the falling apple, we have a resulting phenomenon R (the falling apple) and we have three models M1, M2, and M3. Each model contains a mapping of the result R as R1, R2 and R3. Each causal system has its own cause C1, C2 and C3 and its own causal explanation C1->R1, C2->R2, C3->R3. All three contain a mapping of the empirical fact R into their causal system, and they can agree that they are explaining the same empirical fact (the "falling apple") from different perspectives, within different causal systems.

The confusion that thoroughly permeates your and Chalmers' argument, and the alleged "only alternatives (a) and (b)", is the mix-up -- down to a complete lack of awareness of the distinctions (or even of their existence) -- among the separate causal systems involved in the argument. In the falling-apple example, a similar kind of confusion would be, say, the insistence that space curvature must exist in the SED and Newton causal systems (which it can't, since they are based on a Euclidean space metric), and that therefore such systems lack a cause for the result R, or that R is random, making them "unacceptable" (due to an alleged lack of causality). They don't lack it, obviously; they merely have different causal chains which happen not to have a counterpart for C2 (the space curvature). The only legitimate constraint is that different causal systems have to coincide in the final empirical consequences (e.g. predict the same time for the apple to hit the ground), not that they have to have precise counterparts of all (theoretical) elements of the alternative models/systems. Or: your watch need not match some "standard" watch gear for gear, spring for spring, chip for chip, to be a perfectly fine watch -- it only needs to show the same time.

Your "undesirable consequence" (a) insists that physical conditions C1 (e.g. neural currents etc), which cause physical behavior R1 (e.g. a person's statement, within causal system of physical laws, of seeing red) must also be a cause, in the causal system of experiences, of R2 (experience of redness by the person), otherwise we have, allegedly, detached qualia or random qualia. These are two separate causal systems, you can't mix causes from one to another. You can insist on mapping between the models (in order to use informal abbreviation such as "C1 causes R2" meaning actually "C1 causes R1" and R1 of Model1 corresponds to R2 of Model2) only for the empirical consequences (if R2 and R1 are empirical facts). But you can't require that each model must have all of its theoretical components (those without direct operational definition) mapped to another's. That would be as if you and I are asked to solve, each in our own notebook, the apple fall problem and you pick labels h for height and t1 and t2 for initial and final time variables, while I pick z for vertical coordinate and 0 and t for initial and final time. Then you/Chalmers claim that my result t is 'detached' or 'random' since it is not related to 'h' (which doesn't exist in my notebook but only in yours). Such critique is obviously invalid -- the only thing our notebooks have to agree is that your duration of fall t2-t1 (sec) must be same as my duration t (sec).

In the model of panpsychism I sketched earlier, the qualia of each Q are not empirical facts accessible to anyone; they are accessible only to the system which controls the states of the Qs. Insisting on what it 'has to be like' for me to see 'red', or that it has to be the same as for you or for a 'functionally isomorphic computer', is meaningless in that causal system. In principle you could have some other theory in which 'redness' is associated with some particular molecule, and a simple injection of that molecule into the right place would cause anyone to see redness. For that kind of theory Chalmers' invariance could make sense. But that is a matter of experiments, not of a logical argument (he can't reduce the alternatives ad absurdum, other than through conceptual mix-ups and soft/fuzzy definitions).

Your "undesirable consequence" (b) ("conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions") is a similar mixup of causes/consequences between distinct causal systems -- qualia 'my redness' has causal role in my system of qualia, qualia 'your redness' has role in your system of qualia. But 'my redness' has no causal effect on your system of qualia or on anyone elses, including that of the psychologist/philosopher performing the test or any instruments and data processing equipment he may be using (these represent the physical causal system). The causes in the physical causal system (the red frequency of light) have consequences in that system (e.g. a particular neural activity - 'red activity'). As before, in informal speech one may use such mixups as abbreviations, but that carries no logical force (such as needed for 'reductio ad absurdum') if applied to theoretical (non-operational or non-empirical) elements of different causal systems.
 
  • #23
Originally posted by hypnagogue Let us take it for granted that we are considering the abstract components of the brain to be neurons, whose relevant states are "on" or "off."

Well, that is what I am saying, too. You need some coarse-grained states, call them C-states, not the general "states" (the full physical states). Each of your C-states (on/off) contains an infinite number of physical states. The transition diagrams in the general state space are isomorphic for a component of a cup and of a brain -- an infinite number of distinct source points leading to an infinite number of distinct destination points, with a 1-1 mapping. Only after sufficient coarse-graining can you establish a non-isomorphism between the state transition diagrams of the coffee cup and the brain.

The question then is: what is the coarse-graining that Chalmers assumes, and why that one (what is its empirical basis)? Without it he can't even express the principle. Why at the neuron level (if that is what it is)? Why not require replication of all neurotransmitters? Why not require replication of the detailed electric field or of the quantum matter fields (there are theories of consciousness which propose the configuration of electric fields or wave functions as the physical counterpart of consciousness)? It seems arbitrarily restrictive to pick neurons and their currents only.

If Chalmers is saying he won't commit to any particular granularity, but merely states that 'some such granularity exists' (whatever it may be), then there is a problem of replication, i.e. his thought experiment may be impossible even in principle, since some physical properties are not clonable even in principle (cf. the "no-cloning theorem" of QM). So, in order to use his argument he would have to postulate that the desired granularity not only exists but that the states are clonable at that level of granularity. These additional postulates would have to be part of his principle.
 
  • #24
I cannot follow all the above but a few thoughts...

Chalmers' argument has no implications for whether an entity is conscious or not. He argues that if the neurons (or let's say all particles and fields) that make up my brain were replaced by functionally equivalent silicon implants, then this would entail that I would continue to be conscious of the same qualia. This is clearly only the case if I don't lose consciousness somewhere in the process. As someone pointed out above, he just assumes that I won't, which is cheating.

That is, it is not known whether consciousness arises from the brain or not and therefore it is not known whether an 'isomorphic' silicon copy of my brain states would be conscious.

It does seem reasonable to say that if this new brain really is functionally isomorphic with my current one then it is capable of processing the same information. But this is merely a tautology. It says nothing about consciousness.

I feel Chalmers confuses mind and consciousness. If one says that qualia are caused by the brain, then there seems no in-principle reason why qualia should not be caused by some other structure isomorphic with my brain. But qualia are not consciousness; they are contents of consciousness. Thus unless this new brain is conscious there will be no qualia, however (observably) functionally isomorphic it is. It would just be a computer capable of doing what the brain does.

A lot depends on what 'functionally isomorphic' means. If it means 'conscious and experiencing the same qualia' then Chalmers' argument is circular (as I think someone pointed out already). So what does it mean? I don't know -- in this context there does not seem to be any way of defining 'functionally isomorphic' that is not entirely circular.

Still, I may be misunderstanding the argument. I usually agree with Chalmers. It seems to simply say that if an artificial brain is functionally isomorphic with my brain then it will function the same as mine. That doesn't seem contentious since it's true by definition. However, I can't see why it follows that this isomorphic brain will create qualia, since qualia require consciousness and consciousness may not arise from the brain.
 
  • #25
Canute I feel Chalmers confuses mind and consciousness.
He knows the difference quite well. I think he is merely overreaching with this principle. He is trying to make his quest, the recognition of consciousness as an object of science, look more "scientific", i.e. more similar to the hard sciences, by objectifying the subjective. He is saying that the redness as it appears to you is the same as the redness as it appears to someone else (including his "functionally isomorphic" robot).

There are two types of problems with this attempt: a) with the statement and meaning of his principle, and b) with the proof. The proof suffers from the mix-ups in the reasoning about causality, as well as from wishful assumptions, as you pointed out.

The statement of the principle itself could hardly be seen as anything but a tautology, i.e. Chalmers' definition of the concept "the same qualia for different observers", since he doesn't give any independent operational definition of that term (his "proof" is a sideways attempt at leaving a picturesque impression in the reader's mind of an independent definition via his imagined experiment, i.e. it is a rhetorical gimmick).

Qualia are by definition subjective, i.e. for the redness as you see it there is a unique vantage point which by definition only you can occupy. One can't assume that the statement that your redness is the same as my redness can be understood in the conventional sense, i.e. by imagining some "objective observer" stepping onto my vantage point, establishing what it is like to see my redness, then stepping onto your vantage point, establishing what it is like for you to see your redness, and then comparing the two rednesses in order to proclaim whether they are the same redness or not. Therefore, if one wishes to compare qualia across subjects and make statements about the results of such comparisons, one needs some definition of what that means. So, Chalmers' principle is at best his definition of what the term "same qualia for different subjects" means.

In other writings (and in personal communications) Chalmers shows inclinations toward some kind of panpsychism, the subjectifying of the objective, which I think is a more coherent way to make the study of consciousness fit with the rest of natural science.
 
  • #26
I'm fine with that. I didn't know Chalmers was toying with panpsychism. I always thought he wouldn't have the bottle to go further than politely suggesting that science needs a bit of redefinition. But I haven't read him for quite a while.
 
  • #27
Originally posted by Canute
I'm fine with that. I didn't know Chalmers was toying with panpsychism. I always thought he wouldn't have the bottle to go further than politely suggesting that science needs a bit of redefinition. But I haven't read him for quite a while.

Here are some quotes from his recent papers:

http://www.u.arizona.edu/~chalmers/papers/moving.html#4.4 Some of the most intriguing pieces, to me, are those that speculate about the shape of a fundamental theory of consciousness. Many of these proposals invoke some form of panpsychism...

http://www.u.arizona.edu/~chalmers/papers/nature.html Type-F monism is the view that consciousness is constituted by the intrinsic properties of fundamental physical entities: that is, by the categorical bases of fundamental physical dispositions.[*] On this view, phenomenal or protophenomenal properties are located at the fundamental level of physical reality, and in a certain sense, underlie physical reality itself... Overall, type-F monism promises a deeply integrated and elegant view of nature. No-one has yet developed any sort of detailed theory in this class, and it is not yet clear whether such a theory can be developed. But at the same time, there appear to be no strong reasons to reject the view. As such, type-F monism is likely to provide fertile grounds for further investigation, and it may ultimately provide the best integration of the physical and the phenomenal within the natural world.
 
  • #28
Interesting, thanks. I'd better go and trawl through his site again. I suppose one day it'll occur to academic philosophers to start investigating Advaita and Buddhism properly. 'Type-F monism' indeed. Sounds like Chalmers is nearly up to where Parmenides was.
 

1. What is functionalism in the context of Chalmers' work?

Functionalism is a philosophical theory that suggests that mental states can be defined by their functional role in the larger system of the mind. In the context of Chalmers' work, functionalism refers to the idea that mental states can be described in terms of their functional relationships to other mental states and to external stimuli.

2. What is organizational invariance and how does it relate to Chalmers' functionalism?

Organizational invariance is the idea that the organization of a system is more important than the specific physical components of that system. In the context of Chalmers' functionalism, organizational invariance means that the specific physical properties of a mental state are not as important as its functional role in the overall system of the mind.

3. How does Chalmers use functionalism and organizational invariance to explain consciousness?

Chalmers argues that a system's state of consciousness is determined by the functional roles of its mental states within the larger system of the mind. On his principle of organizational invariance, two systems with the same functional organization will have the same conscious experiences, regardless of the physical substrate that realizes that organization.

4. What are some criticisms of Chalmers' functionalism and organizational invariance?

Some critics argue that functionalism and organizational invariance cannot fully explain the subjective experience of consciousness. They also point out that it is difficult to define and measure mental states solely based on their functional relationships. Others argue that these theories do not adequately address the mind-body problem.

5. How does Chalmers' work on functionalism and organizational invariance contribute to the field of cognitive science?

Chalmers' work has been influential in shaping the debate and research surrounding consciousness and the mind-body problem in cognitive science. His ideas have also been used to inform the development of artificial intelligence and artificial consciousness. Additionally, his theories have sparked further research and discussion on the nature of mental states and their relationship to the brain and the physical world.
