
Chalmers on functionalism (organizational invariance)

  1. Feb 4, 2004 #1

    hypnagogue


    The following is a thought experiment devised by David Chalmers which suggests that the physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness. Rather, Chalmers argues that the only relevant properties of a system in determining its state of consciousness are organizational (functional / information processing) ones. More simply put, the idea is that the subjective experiences of a physical system don't depend on the stuff the system is made of, but rather on what the system does. On this hypothesis, any two systems that process information in the same way should experience the same state of consciousness, regardless of their physical makeup.

    The argument that follows has the flavor of the traditional functionalist thought experiment that goes something like the following: "Does a system S made of silicon have the same conscious experiences as a system N made of biological neurons, provided that S and N process information in identical ways? It does. Imagine that we replace a single neuron in N with a silicon chip that performs the same local information processing. There is no attendant difference in N's quality of consciousness. (Why should there be? Intuitively, there should be no difference.) Now, if we continue replacing neurons in N with silicon chips that perform the same local functions one by one, there will be no change in N's state of consciousness at each step, and eventually we will arrive at a system identical to S whose conscious experiences are still identical to N. Therefore, N and S have identical conscious experiences."
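
    (To make the inductive structure of this argument concrete, here is a toy sketch of my own, not Chalmers' -- all names and the stand-in local function are hypothetical. It shows only the behavioral invariant the argument relies on.)

    ```python
    # A toy sketch (mine, not Chalmers') of the replacement scenario's
    # inductive structure: a "brain" is a chain of components, each
    # computing the same local function; replacing them one at a time
    # never changes the global input/output behavior.

    def neuron(signal):
        return signal + 1          # stand-in for the local processing

    def silicon_chip(signal):
        return signal + 1          # identical local behavior, new substrate

    brain = [neuron] * 5

    def run(brain, signal=0):
        for component in brain:
            signal = component(signal)
        return signal

    for i in range(len(brain)):
        brain[i] = silicon_chip    # one inductive step of the replacement
        assert run(brain) == 5     # behavior is invariant at every step

    # The contested question is whether *experience*, like behavior,
    # is invariant at every step -- the base case the argument assumes.
    ```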

    I have always regarded (and still do) this traditional thought experiment for functionalism as terribly inadequate. It has the flavor of an inductive proof, but it begs the question on the base case (the claim that replacing a single neuron makes no attendant difference in N's quality of consciousness): how can we simply state outright that replacing a neuron in N with a silicon chip will not change N's state of consciousness? That is precisely the issue under debate, so we cannot assume it is true and use it in our argument in order to prove our argument. Even if our intuition suggests that the base case is true, it could be the case that our intuition is misguided.

    Chalmers' argument uses the same basic thought experiment but employs a much more sophisticated and convincing analysis of the consequences of replacing a neuron in N with a silicon chip that performs the same local function. Rather than beg the question at the crucial point, Chalmers gives a well-reasoned argument for why the replacement of a neuron in N by a silicon chip should not make a difference in N's state of consciousness.

    Chalmers' thought experiment as displayed below is excerpted from his paper Facing Up to the Problem of Consciousness, and can be found in much greater detail in his paper Absent Qualia, Fading Qualia, Dancing Qualia.

    -----------------------------------------

    2. The principle of organizational invariance. This principle states that any two systems with the same fine-grained functional organization will have qualitatively identical experiences. If the causal patterns of neural organization were duplicated in silicon, for example, with a silicon chip for every neuron and the same patterns of interaction, then the same experiences would arise. According to this principle, what matters for the emergence of experience is not the specific physical makeup of a system, but the abstract pattern of causal interaction between its components. This principle is controversial, of course. Some (e.g. Searle 1980) have thought that consciousness is tied to a specific biology, so that a silicon isomorph of a human need not be conscious. I believe that the principle can be given significant support by the analysis of thought-experiments, however.

    Very briefly: suppose (for the purposes of a reductio ad absurdum) that the principle is false, and that there could be two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences. For the purposes of illustration, let us say that one system is made of neurons and the other of silicon, and that one experiences red where the other experiences blue. The two systems have the same organization, so we can imagine gradually transforming one into the other, perhaps replacing neurons one at a time by silicon chips with the same local function. We thus gain a spectrum of intermediate cases, each with the same organization, but with slightly different physical makeup and slightly different experiences. Along this spectrum, there must be two systems A and B between which we replace less than one tenth of the system, but whose experiences differ. These two systems are physically identical, except that a small neural circuit in A has been replaced by a silicon circuit in B.

    The key step in the thought-experiment is to take the relevant neural circuit in A, and install alongside it a causally isomorphic silicon circuit, with a switch between the two. What happens when we flip the switch? By hypothesis, the system's conscious experiences will change; from red to blue, say, for the purposes of illustration. This follows from the fact that the system after the change is essentially a version of B, whereas before the change it is just A.

    But given the assumptions, there is no way for the system to notice the changes! Its causal organization stays constant, so that all of its functional states and behavioral dispositions stay fixed. As far as the system is concerned, nothing unusual has happened. There is no room for the thought, "Hmm! Something strange just happened!". In general, the structure of any such thought must be reflected in processing, but the structure of processing remains constant here. If there were to be such a thought it must float entirely free of the system and would be utterly impotent to affect later processing. (If it affected later processing, the systems would be functionally distinct, contrary to hypothesis). We might even flip the switch a number of times, so that experiences of red and blue dance back and forth before the system's "inner eye". According to hypothesis, the system can never notice these "dancing qualia".

    This I take to be a reductio of the original assumption. It is a central fact about experience, very familiar from our own case, that whenever experiences change significantly and we are paying attention, we can notice the change; if this were not to be the case, we would be led to the skeptical possibility that our experiences are dancing before our eyes all the time. This hypothesis has the same status as the possibility that the world was created five minutes ago: perhaps it is logically coherent, but it is not plausible. Given the extremely plausible assumption that changes in experience correspond to changes in processing, we are led to the conclusion that the original hypothesis is impossible, and that any two functionally isomorphic systems must have the same sort of experiences. To put it in technical terms, the philosophical hypotheses of "absent qualia" and "inverted qualia", while logically possible, are empirically and nomologically impossible.

    (Some may worry that a silicon isomorph of a neural system might be impossible for technical reasons. That question is open. The invariance principle says only that if an isomorph is possible, then it will have the same sort of conscious experience.)

    There is more to be said here, but this gives the basic flavor. Once again, this thought experiment draws on familiar facts about the coherence between consciousness and cognitive processing to yield a strong conclusion about the relation between physical structure and experience. If the argument goes through, we know that the only physical properties directly relevant to the emergence of experience are organizational properties. This acts as a further strong constraint on a theory of consciousness.
     
  2. Feb 4, 2004 #2

    selfAdjoint


    My criticism of Chalmers' gedanken experiment lies in this quote:

    Given the extremely plausible assumption that changes in experience correspond to changes in processing, we are led to the conclusion that the original hypothesis is impossible, and that any two functionally isomorphic systems must have the same sort of experiences

    Chalmers reduces the material explanation of consciousness to stuff, and assumes that conscious experiences are directly dependent on the stuff-level in our brains, so that if it varies our experiences must vary. But there are also the organization-levels and the process-levels to consider. They are part of the materialist hypothesis too.

    Indeed, of the seven layers identified in internet processing (e.g. the OSI model), only the lowest one (the physical connection) can be identified with stuff. The top three are about different inflections of process. If that amount of subtlety is available to cold silicon, then philosophers cannot deny it to materialistic consciousness.

    Now it is a commonplace of experience that the stuff level does not always constrain the process level in IT. You can surf the same sites with your laptop in a wi-fi café that you can with your desktop machine and cable modem. And you will see the same color patterns on the pages, for example. So it is not impossible that consciousness, residing at the process level, is independent of the underlying stuff level, so that Chalmers' assertion is falsified. No?
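
    (A minimal sketch of this point -- my own toy illustration, with hypothetical names: the same process-level function runs unchanged over two different stuff-level transports.)

    ```python
    # Toy illustration (mine, hypothetical names): the process level is
    # insensitive to which stuff-level "transport" carries the bits.

    def wifi_transport(data):          # laptop on wi-fi
        return data

    def cable_transport(data):         # desktop on a cable modem
        return data

    def browse(transport, url):
        # The process level: fetch and "render" a page, whatever the substrate.
        return transport(f"<html>{url}: same color patterns</html>")

    assert browse(wifi_transport, "site") == browse(cable_transport, "site")
    ```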
     
  3. Feb 4, 2004 #3

    hypnagogue


    I find your phrasing confusing, but you seem to be saying the following: Chalmers argues that a physical system's consciousness is dependent on the kind of 'stuff' that comprises it. I hope that is not what you are claiming, because Chalmers' argument is designed to prove the exact opposite.
     
  4. Feb 4, 2004 #4

    selfAdjoint


    Chalmers is doing a reductio ad absurdum. He asserts that the mechanists believe consciousness is just stuff and draws his contradiction from that assertion. And I showed that his assertion need not be forced on the mechanists, which destroys his reductio.
     
  5. Feb 4, 2004 #5

    hypnagogue


    Where in the world do you get that from? He doesn't assert that mechanists (or any other group in particular, for that matter) believe that consciousness is 'just' stuff. He proposes only an argument for why that particular position, taken on its own, is not a good one to hold. The argument has nothing to do with mechanists in particular or any other philosophical position in general, except precisely the one that holds that consciousness depends in some way on physical constitution. (A quick word search shows that the words "mechanist" and "mechanistic" do not even appear in either of the articles I referenced in the original post.)
     
  6. Feb 9, 2004 #6
    The intuition is wrong here. We don't know what exactly is the "functional property" and what is the "irrelevant side-effect" of a particular implementation. For example, the detailed chemistry and electric fields may be the functional properties (affecting mental states). To replicate all physical and chemical effects you would have to have an exactly identical physical system -- with identical constituents and identical boundary and initial conditions (otherwise the components would be physically distinguishable and have distinct effects on the rest of the system and on themselves).

    It is a different matter if you know exactly the functionality and how it is accomplished by the components. Your wrist-watch gears or chips can be replaced by work-alikes and it will work just the same. But with the human brain and consciousness, we don't know in nearly sufficient detail what it does, how it does it, or even what it is that is being done.

    It would be like giving a 2-year-old kid your computer or a clock and asking him to perform a functionally identical replacement -- the kid may replace a grey rectangular computer chip with a grey rectangular cap from some toy. It's the same as far as he knows.

    Note that the accuracy of the experimenter's model of the target system affects how closely to the original, and for how long, the modified system will operate. Replacing a grey chip in a clock with a grey plastic cap will not impair the clock's operation until the chip is needed to move a hand. As time goes on, the clock's function will be impaired more and more (its time accuracy will drop), until exactly 12 hours later, when the cycle repeats.
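
    (A toy sketch of the clock example -- my own illustration: the replaced component does nothing, so the error grows with time and vanishes exactly when the 12-hour cycle wraps around.)

    ```python
    # Toy model (mine): a 12-hour clock whose hour-advance chip was swapped
    # for a do-nothing lookalike. The discrepancy grows with time and
    # vanishes exactly 12 hours later, when the cycle repeats.

    def real_clock(hours_elapsed):
        return hours_elapsed % 12      # hand position with the real chip

    def modified_clock(hours_elapsed):
        return 0                       # the plastic cap never moves the hand

    for t in range(13):
        error = (real_clock(t) - modified_clock(t)) % 12
        print(f"t={t:2d}h  error={error}h")   # 0, 1, 2, ..., 11, 0
    ```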
     
  7. Feb 9, 2004 #7

    hypnagogue


    Re: Re: Chalmers on functionalism (organizational invariance)

    nightlight -- in your response, you quoted what I suppose we could call the "weak" functionalist argument, but I assume you mean your reply to apply to the "strong" functionalist argument (as detailed by Chalmers) as well. Your critique is one I hold as well against the weak argument, but (although it goes against my initial intuition) it appears as if the strong argument is resistant to this line of critique. I am not sure exactly where I stand on this issue, but I lean towards Chalmers' position since it appears to be airtight.

    Here is the flavor of the argument: imagine that we create a silicon circuit that exactly duplicates the functional properties of, say, the visual cortex of an individual named Bob. Here we mean functional properties just to be the manner in which information, as encoded in neuronal firing patterns, is manipulated by the brain. So suppose the computational output of Bob's visual cortex is defined by the function F(I), where I is any pattern of input fed into the visual cortex. F(I) is a quite subtle, complex, and dynamic function, but suppose we can exactly duplicate F(I) in a silicon substrate called S. Suppose further that S can be integrated into a biological brain, so that neurons can hook up to S and share information with it seamlessly. (The choice of silicon here is arbitrary, for the purpose of illustration; in principle any substrate that can exactly compute F(I) will do.)

    Now imagine that we install S alongside Bob's normal biological visual cortex. The setup includes a switch. When the switch is off, S is inactive and Bob's brain functions as it always has. When the switch is on, input is redirected from Bob's visual cortex to S, such that Bob's visual cortex is inactive but S seamlessly computes F(I) and feeds the output into the rest of Bob's brain just as his normal visual cortex would. So Bob's neural firing patterns are identical whether the switch is on or off; the only difference at any given time is the nature of the substrate upon which F(I) is computed.
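
    (Here is a minimal sketch of the setup just described -- my own illustration, with hypothetical names and a deliberately trivial stand-in for F(I): two substrates compute the same function, and everything downstream is identical.)

    ```python
    # A minimal sketch (hypothetical, with a trivial stand-in for F(I)):
    # two substrates compute the same function, a switch selects between
    # them, and the rest of the brain sees identical output either way.

    def f_biological(i):               # Bob's visual cortex computing F(I)
        return i.upper()

    def f_silicon(i):                  # the silicon substrate S, same F(I)
        return i.upper()

    def rest_of_brain(output):
        return f"Bob reports: I see {output}"

    def bob(stimulus, switch_on):
        F = f_silicon if switch_on else f_biological
        return rest_of_brain(F(stimulus))

    # Downstream firing patterns -- and hence Bob's report and behavior --
    # are identical whether the switch is on or off.
    assert bob("red", switch_on=True) == bob("red", switch_on=False)
    ```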

    Now, suppose we take the stance that the consciousness of a physical system depends in some way upon the nature of the 'stuff' that comprises that system. Along these lines, suppose that for a certain visual stimulus, Bob will see 'red' if he observes this stimulus when his normal visual cortex is intact (switch off), and he will see 'blue' when using S as a replacement for his visual cortex (switch on).

    From this we should expect that if Bob is experiencing redness while his switch is off, then if we suddenly switch Bob's switch on, he should say something to the effect of "Hey, that red painting just turned blue!" But this is impossible, because by definition Bob's neural firing patterns will be identical whether his switch is on or off. Because his neural firing patterns will not change, his behavior, his beliefs, and so on, will also not change. Bob will swear up and down that he sees red whether his switch is on or off. As Chalmers says, there is just no room within this formulation for Bob to even notice that something about his conscious experience has changed.

    To deny this conclusion, one must take one of the following positions: a) one must hold that beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) one must hold that conscious experiences do not serve any role whatsoever in determining beliefs, behavioral dispositions, and so on. Both of these are highly undesirable positions to hold. Mounds of neuroscientific data are available to refute a). b) strongly contradicts basic intuition about consciousness; in order for one to believe b), one must hold (for instance) that one's experience of redness vis-à-vis one's experience of blueness plays absolutely no role in determining whether one calls this thing 'red' and that thing 'blue' or vice versa. One cannot simultaneously believe b) and believe that consciousness serves any useful purpose.
     
  8. Feb 9, 2004 #8
    Re: Re: Re: Chalmers on functionalism (organizational invariance)

    In the absence of a scientific theory of consciousness, the strong argument is essentially a tautology (i.e. arguing "let's say we can change the system without changing anything that matters," then asking "can the change matter?"). It may be airtight, but there is nothing left inside.

    The options (a) and (b) do not cover all variations of panpsychism. Since panpsychic reasoning isn't natural for everyone, the missing option (c) is best seen via an analogy closer to everyone. Say Chalmers asks you to suppose he can produce a silicon-and-plastic version of your wife, W2, whose response W2(i) on any input (i) from the rest of the family is the same as that of the original wife, W1(i). Would the change make any difference to the family? Well, it would make a difference to W1. And there is still a difference between W1(i) and W2(i), since 1 != 2, i.e. the rest of the family knows the original W1 is somewhere else and W2 is just a look-alike robot.

    Now Chalmers could argue: let's assume the rest of the family doesn't know the switch occurred. So there is still W1, and it is not all the same to her. And as for fooling the rest, I doubt that even an identical twin sister who had had an in-depth debriefing from her sister could maintain the illusion that everything is the same for very long. Then Chalmers can say, let's create another planet... then a parallel universe...

    In my variant of panpsychism, the most elemental 'qualia bearing elements' (briefly, Q's; assumed to be physical objects) have only two mental states: Off (asleep) and On (I-am-aware). The i-am state of Q1 is a distinct quale from the i-am state of Q2, and so on. Brain signaling can turn any Q on or off as needed. When Q7 is in the i-am state, that may be "redness" for the person, because that is how Q7 happened to be wired into the color processing circuitry when the person learned/developed enough to see colors. Replacing Q7 with Q7' replaces redness with something else. At least until Q7' and the rest of the brain re-learn and adapt to the new version of 'redness', after which it probably would appear as the same 'redness' as before (this would be similar to a person learning to live with upside-down glasses; after a few days it would all look normal). The original Q7 would still be 'redness' if turned on away from the brain.

    I don't see how this case would be captured by (a) or (b). It obviously is not (a), since brain signaling does turn Q7 on/off, i.e. mental states do depend on signaling. The option (b) is excluded by assuming that the states and the interaction rules of Q's are the fundamental physical laws, which give rise to the conventional physical laws at the macroscopic level of the Q substratum (this is a hypothesis of a sub-quantum level similar to Wolfram's or Fredkin's cellular automata ideas, an updated version of Leibniz's monads and Democritus' "atoms").
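
    (My own toy rendering of the Q-model sketched above -- all names and numbers hypothetical: unique two-state Q's, with a wiring map deciding which Q's 'i-am' counts as redness.)

    ```python
    # A toy rendering (mine, hypothetical) of the Q-model described above:
    # unique two-state "qualia bearing elements" toggled by brain signaling.

    class Q:
        def __init__(self, qid):
            self.qid = qid             # every Q is a unique object
            self.on = False            # Off = asleep, On = "I-am-aware"

    qs = {n: Q(n) for n in range(10)}  # the brain's stock of Q's
    wiring = {"red": 7, "blue": 3}     # fixed when color vision developed

    def perceive(color):
        q = qs[wiring[color]]
        q.on = True                    # brain signaling turns the Q on
        return q.qid                   # *which* Q fires is the quale

    before = perceive("red")           # Q7's i-am: this person's "redness"
    wiring["red"] = 8                  # replace Q7 with Q7' (here, Q8)
    after = perceive("red")
    assert before != after             # same wiring role, different quale
    ```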
     
  9. Feb 9, 2004 #9

    hypnagogue


    Re: Re: Re: Re: Chalmers on functionalism (organizational invariance)

    I would like you to elaborate on this, since it seems like it might be a promising objection-- just as long as you elaborate much more carefully than you did below.

    I'm puzzled at what you could possibly be trying to show here, since it is not really analogous in any important sense with Chalmers' thought experiment. We are interested in discerning whether or not changing the physical constitution of a brain changes that brain's subjective experiences. To compare this to how a family might react if a family member were replaced by a plastic robot is a poor strawman argument on several levels, bearing no meaningful resemblance to the original issue.

    And what exactly is Q7'? Is it a functional isomorph with different physical constitution, or is it something of similar physical constitution as Q7 but with a different function? Or is Q7' different in both function and constitution?

    It's impossible for me to comment until you specify what Q7' might be.
     
  10. Feb 9, 2004 #10

    FZ+


    I am kinda confused here.

    Why do we have to deny Chalmers' claims? We have changed Bob's makeup in a way that only an external observer can see, and thus now he is applying a false experience of red to a false sense of blue. What's the problem?

    Before and after, Bob can still be considered to be conscious. Thus, duplicating his makeup has duplicated his consciousness, and there is no place for his consciousness to hide during the transition but as a manifestation of the form of the matter.
     
  11. Feb 9, 2004 #11

    hypnagogue


    Ditto for your objections. :wink:

    Maybe I did a bad job of explaining.

    What claims of Chalmers' are you talking about here? What is a 'false experience' of red or blue?

    Not sure what you mean here either. It was never in question whether Bob was conscious or not in the first place. What do you mean by 'consciousness hiding' in this context?
     
  12. Feb 9, 2004 #12
    Re: Re: Re: Re: Re: Chalmers on functionalism (organizational invariance)

    It demonstrates that Chalmers' objective, to show that "physical constitution of a conscious system doesn't have any bearing on that system's state of consciousness," doesn't fit the forms of panpsychism in which the "mind-stuff" of the whole is a composition of the mind-stuff of the components. In order to make the point clearer for those who have trouble conceiving the mind-stuff of neurons or atoms, for the analogy I shifted the observation point up to the level where the constituents are easily understood as capable of being conscious. The analogy demonstrates that for this type of panpsychism, replacing the constituents also replaces the mind-stuff of other components and, consequently, the mind-stuff of the whole (of the larger social network containing the individuals as its components).

    Q7' is merely a notation for the "substitute Q" for Q7, i.e. some other Q being put in place of Q7. Each Q is a unique object, and the qualia it has, the Q's 'i-am-aware' state, is unique (and elemental), in the same way that the coordinate of each molecule in the air at any given moment is unique. Thus, in this model, it doesn't make sense to ask whether Q7' is some kind of replica of Q7 -- each Q is unique and each 'i-am' is a unique and elemental quale. It just happens that some Q, say Q7, is the Q that is wired to be turned on when some part of the person's sensory network detects red. The person is simply used to experiencing 'red' as the 'i-am' of that particular Q.

    In this model, the qualia are not epiphenomena: they are the inside view of the state of each Q. At the same time, from outside, the Q's i-am is merely state 1 of the Q object. The dynamics governing the interaction of Q's depends on their states and connections; its macroscopic manifestations (for a composite system), viewed from outside, are the regular physical laws, and viewed from inside, the state of consciousness of the composite system.

    That this type of model (a set of simple few-state automatons) can give rise to conventional physical dynamics (e.g. the Schrodinger, Dirac and Maxwell equations) has been demonstrated in the last few decades (since the early 1980s, initiated at MIT by Fredkin and Toffoli). Check for example a recent variation on that theme by G. N. Ord (which contains references and links to related models and precursors; LANL has several of his papers).
     
  13. Feb 10, 2004 #13

    hypnagogue


    Re: Re: Re: Re: Re: Re: Chalmers on functionalism (organizational invariance)

    Perhaps, but in the absence of a solid refutation of the argument, all this indicates to me is that that form of panpsychism is not acceptable. Working clearly within the parameters of the original argument, I would like you to show exactly where the argument breaks down. If you assert that the argument is wrong without showing exactly how it is wrong, you are just begging the question.
     
  14. Feb 10, 2004 #14
    Chalmers asserts that any theory of consciousness has to satisfy his "independence property". In order to prove it, he (or you, as his proxy here) rejects as "not acceptable" any potential theory for which the argument doesn't or cannot go through. That amounts to proving his "independence property" only for the potential theories which don't contradict his property. This makes the "proof" redundant and opens the possibility that his subset of "acceptable" theories has an empty intersection with the set of empirically valid theories.

    As to the particulars of Chalmers' argument at the top of this thread, here are some of the most obvious holes:

    1. "suppose [...] there could be two functionally isomorphic systems...

    The "functionally isomorphic" is a very fuzzy concept in the absence of definition of "function" (this was my original objection; here I will point out one more problem with it). In the absence of any specificity of "function", I could label your states (say, from now, in steps of 100ms) S1, S2, S3,... and label the corresponding (in time) states of your coffe cup as C1, C2, C3,... and since your state transition diagram is S1 -> S2 -> S3... and for the coffe cup C1 -> C2 -> C3, the two systems are "functionally isomorphic" (regarding the "function" of changing physical states in 100ms snapshots). So, by Chalmer's "independence principle" you and your coffe cup must have the same experience. What does your coffe cup say?

    What exactly are the criteria by which you are allowed to label your state S1 as corresponding to state C1 of some other system in order to establish the "functional isomorphism"? The criteria cannot utilize the "qualia" of either system (otherwise it becomes a circular argument).

    Can "providing the same verbal/motoric response to some subset of the external stimuli" work as the criteria, i.e. some kind of Turing test? (Obviously, one cannot claim "all external stimuli" since the two systems cannot be at the same place at the same time; also the finite test time precludes the match on infinite number of stimuli.)

    Would then a robot which uses a large lookup table, where for each stimulus S(i) it merely retrieves from memory the response R(i) (the stimulus space may have some metric defined so that the "nearest" S'(i) can be matched in the absence of an exact stimulus match) and executes it, have to have the same qualia according to Chalmers' "independence principle"? After all, the functioning of a neural network could be described as pattern recognition, i.e. a form of approximate retrieval from memory.
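
    (A minimal sketch of the lookup-table robot -- my own illustration, with a toy stimulus encoding and metric.)

    ```python
    # Minimal sketch (mine): a responder that retrieves R(i) for the
    # stored stimulus S'(i) nearest to the incoming stimulus. No brain-like
    # processing anywhere, yet it passes a crude verbal-response test.

    table = {
        (255, 0, 0): "red",            # stimulus encoded as an RGB triple
        (0, 0, 255): "blue",
    }

    def dist(a, b):                    # the metric on the stimulus space
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def respond(stimulus):
        nearest = min(table, key=lambda s: dist(s, stimulus))
        return table[nearest]          # approximate retrieval from memory

    print(respond((250, 10, 5)))       # -> "red", no exact match needed
    ```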

    2. "...two functionally isomorphic systems with different experiences. Perhaps only one of the systems is conscious, or perhaps both are conscious but they have different experiences.

    Here Chalmers assumes that "experience" is something he can take out and measure somehow, in order to be able to assign any meaning to the terms "different experience" and "same experience". Since the two "functionally isomorphic" systems have to pass at least some level of Turing test (otherwise you have the "same experience" as your coffee cup), the least one can require from "functional isomorphism" is that they both say "red" when a red color is shown and "blue" if a blue color is shown. (Otherwise, the two systems are speaking different languages.)

    So, now you have two systems, A and B, both saying "red" when red is shown. How can Chalmers know anything about what redness really looks like to A and to B, in order to start comparing them or making any assertion about it? He can be at most one of the two systems, say A. In that case he cannot say anything about what redness is like to B. Essentially, by definition, what it is really like to be "system X" can be known only to system X -- it is a single and unique vantage point that only system X can occupy. Any presumed comparison (in order to give meaning to his terms "different experience" or "same experience") is at best unfalsifiable (it has no contact with the empirical method) and at worst self-contradictory (like saying "let's consider a triangle A which has four, or perhaps five, corners").

    Therefore his principle can at best be a definition of what he is going to call "the same experience for two different systems." Insisting subsequently on proving that A and B have the same experience amounts to "proving" that a triangle has three corners after defining the term "triangle" as a polygon with three corners.
     
  15. Feb 10, 2004 #15

    hypnagogue


    I searched several papers by Chalmers on consciousness, including the ones cited in this thread, and found no matches for "independence property," so you'll have to fill me in on what you mean by that, although it would probably be better to use terms Chalmers himself uses when talking about his work.

    Chalmers specifies what he means by "functional organization" and "functional isomorphism" in his paper Absent Qualia, Fading Qualia, Dancing Qualia:

    (1) a number of abstract components, (2) for each component, a number of different possible states, and (3) a system of dependency relations, specifying how the states of each component depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. ...

    Clearly, your coffee cup analogy fails to fit the notion of functional isomorphism, as defined by Chalmers, on several levels.

    If I looked at a Van Gogh painting yesterday and I look at it again today, I can reasonably assert that the painting aroused the same visual experience in me on both occasions. So clearly there is some sense in which experiences can be compared for similarity and difference.

    As for your objection that we cannot be sure if two separate physical systems (say, me and you) have the same experiences, it seems to be irrelevant to the argument as formulated by Chalmers. His thought experiment involves a single organism switching between biological and non-biological substrates for some subset of the computations performed by its brain. If the switch between these functionally isomorphic substrates causes a different experience in that single organism (say, switching between red and blue), then the organism should be able to compare the two experiences and discern a difference just as readily as you or I can differentiate between the experiences of looking at a red wall and a blue one.
     
  16. Feb 10, 2004 #16
    Re: Re: Chalmers on functionalism (organizational invariance)

    You have brought up the 'independency' wording in your intro:

    More simply put, the idea is that the subjective experiences of a physical system don't depend on the stuff the system is made of, but rather what the system does.

    I used your 'independency' wording since it is shorter, more straightforward, and sounds less pompous than Chalmers' "principle of organizational invariance". I didn't imagine it would confuse anyone who has read the thread from the start.

    Now that the term is cleared up, could you address the substance of the original objection, i.e. how does your procedure of labeling as "not acceptable" the potential theories which can't satisfy "invariance" (such as variants of panpsychism) avoid turning the principle into a tautology (as described originally)?

    (1) a number of abstract components, (2) for each component, a number of different possible states, and (3) a system of dependency relations, specifying how the states of each component depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. ...

    Clearly, your coffee cup analogy fails to fit the notion of functional isomorphism, as defined by Chalmers, on several levels.


    Not at all. Dividing the system into smaller volumes leaves the coffee-cup argument as is. Namely, each of your sub-volumes advances through unique and non-repeating microscopic states S1, S2,... just as each of the coffee cup's sub-volumes does. Fields (electric and quantum wave functions) from each particle in each sub-volume spread over the whole system in either case. In a physical system, all pieces go through unique non-repeated states and all pieces depend on all others (via electric and quantum matter fields). Now, he can say plainly that what he really means is the same neuron-work-alike objects with all their connections and electric signaling being the same (within some error margins, say 1% or some such).

    In any case, it seems his "invariance" and "functional isomorphism" definitions are much narrower than the general-sounding terminology he uses would suggest. It seems arbitrary in any case, but he can define whatever he wants; there is nothing to argue about there.

    If I looked at a Van Gogh painting yesterday and I look at it again today, I can reasonably assert that the painting aroused the same visual experience in me on both occasions. So clearly there is some sense in which experiences can be compared for similarity and difference.

    Only within the same system. What it is exactly like to be you, only you can know.

    As for your objection that we cannot be sure if two separate physical systems (say, me and you) have the same experiences, it seems to be irrelevant to the argument as formulated by Chalmers.

    He uses the assumption that the experience is different (e.g. one experiences red and another blue) during the exchange procedure to determine the location and boundaries of the subsystem which makes the difference. So, now that you say he doesn't need it, let's agree and modify his reasoning so that the second system and its "experience" don't appear at all (why then did he go to the trouble of introducing it anyway?). Now, you will need some other way to specify which subsystem to replace with the silicon work-alike. You can specify it as "some" or "any"... as discussed below.

    His thought experiment involves a single organism switching between biological and non-biological substrates for some subset of the computations performed by its brain. If the switch between these functionally isomorphic substrates causes a different experience in that single organism (say, switching between red and blue), then the organism should be able to compare the two experiences and discern a difference just as readily as you or I can differentiate between the experiences of looking at a red wall and a blue one.

    If he is saying that there exists some subset of neurons he can replace with "silicon work-alikes" without causing any change in some particular "redness" experience, then yes, of course; there are probably many you can change, or even remove altogether, with no effect on "redness" (neurons die by the thousands or millions every day, so we can consider the existence of the replaceable subsets an experimental fact).

    If he is asserting that he can replace any subset of neurons with silicon work-alikes and there won't be any change in the perception of redness, then that is equivalent to putting in by hand the conclusion he is trying to prove.

    If he is merely trying to say in a roundabout way that 'zombies' can't exist (that's one nonfalsifiable consequence of his nonfalsifiable "invariance" principle), then fine; let's see the theory that complies with that postulate and connects it in a falsifiable manner with the empirical world.

    As I see it, his "principle of organizational invariance" is at best a convoluted definition of the term "same qualia in different systems" as "qualia reported by functionally isomorphic systems in response to the same stimulus," where "functionally isomorphic" is something similar to cloning down to the level of neuronal electric signaling -- not above and not below, but right there somewhere. Well, OK. He can define whatever he wishes. (Whether it will turn out to be a useful definition is another matter.)
     
  17. Feb 10, 2004 #17

    hypnagogue


    Re: Re: Re: Chalmers on functionalism (organizational invariance)

    Because there are principled reasons for believing the organizational invariance argument. There are no principled reasons for believing your alternatives, or at least you have not presented any yet. Since we have good reason to believe Chalmers' argument, we should reject any hypotheses that contradict it, unless we can discover a flaw in the argument. Reason takes precedence over pure postulation.

    The coffee cup analogy does not work. Let me be more explicit.

    Let's systematically run through the criteria listed by Chalmers.

    1) It should be possible in principle to divide the coffee cup into as many abstract components as there are neurons in the brain, so suppose that we do just this.

    2) Now we need a mapping such that each abstract component in the coffee cup has as many possible states as there are possible states for a neuron. Here, the relevant states of a neuron would seem to be 'on' and 'off,' so we need each abstract component in the coffee cup to have 2 possible states and no more. Without thinking about that much more deeply, I will concede that this too seems possible in principle.

    3) Now we need a system of dependency relations specifying how the states of each abstract component in our coffee cup depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states, such that this set of dependency relations for the coffee cup precisely mirrors the set of dependency relations existing for neurons in the brain. (A simpler, though more vague, way of saying this is that information flows through both systems in precisely the same way.) This is where the coffee cup analogy fails spectacularly. Unless you propose that there exists some way that we can break a coffee cup into abstract parts such that these parts process abstract patterns of information in precisely the same way that neurons in the brain process abstract patterns of information, the analogy is non-existent.
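
    (To make the contrast concrete, a minimal sketch -- my own simplification of the three criteria: matching component and state counts is cheap; matching the dependency relations is where the cup fails.)

    ```python
    # Minimal sketch (my simplification): a functional organization as a
    # tuple of next-state rules, one per abstract component. Criteria (1)
    # and (2) -- counts of components and states -- are easy to match;
    # criterion (3), the dependency relations, is what the cup cannot do.

    def step(rules, state, inp):
        # Each component's next state depends on all previous states + input.
        return tuple(rule(state, inp) for rule in rules)

    brain_rules = (lambda s, x: x,            # component 0 copies the input
                   lambda s, x: s[0] ^ s[1])  # component 1 XORs prior states

    cup_rules = (lambda s, x: s[0],           # cup "components" just persist:
                 lambda s, x: s[1])           # same counts, different table

    print(step(brain_rules, (0, 1), 1))       # (1, 1)
    print(step(cup_rules, (0, 1), 1))         # (0, 1) -- not isomorphic
    ```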

    No, the location and boundary of the subsystem that makes the difference is determined at the start, by hypothesis. It is built into the structure of his reductio ad absurdum.

    No, I said he doesn't need to compare experiences across two organisms. Of course he needs to tentatively establish (by hypothesis) that switching between subsystems will cause the organism to see different colors, for the purpose of his reductio.

    No, it is not. He does not just assume that his principle of organizational invariance is true. Rather, he shows that if this principle is not true, then it must imply that either a) beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions, and so on.

    So, once again, Chalmers shows that we are forced to choose between the principle of organizational invariance and one of a) or b). Neurobiological research strongly indicates that a) is not a viable option, and b) strongly violates our intuition about the function of consciousness and also makes it unintelligible how consciousness could have any evolutionarily advantageous function. So, given how undesirable a) and b) are, one is naturally inclined to choose the principle of organizational invariance. This does not amount to a proof per se, but it does clarify the consequences of whichever position we choose to believe. And all of this is achieved by means of careful reasoning; there is no part of Chalmers' argument where he begs the question.
     
  18. Feb 11, 2004 #18
    Is "redness" same for everyone?

    In other words, Chalmers could proclaim an "all polygons have three corners" principle, and if anyone suggests a rectangle as a counterexample, you can simply brush it off as "not acceptable" and still insist it is a valid principle and that your response is a perfectly logical and valid arguing technique. I'd say it would be a fine technique if you were the editor of a journal and I had submitted a paper you disagree with.

    Otherwise, you need to qualify Chalmers' "invariance principle" properly as a principle which holds for all "acceptable" theories, where the "acceptable" theories are defined as those theories which don't contradict Chalmers' invariance principle.

    This is not a matter of which theory of consciousness is valid overall, but simply a question of a direct counterexample to the alleged proof -- for theories in which the qualia are associated with some specific and unique microscopic components of the individual's brain, Chalmers' principle is outright false. The alleged "proof" doesn't demonstrate that anything else is reduced 'ad absurdum' for such theories. So, your reply is that Chalmers has some "principled reasons" to believe his principle, and since he is apparently an important person, we will label all counter-examples as "not acceptable" and maintain that he has proven that his principle must hold for all theories of consciousness. Yes Sir!

    ... unless we can discover a flaw in the argument. Reason takes precedence over pure postulation.

    1) A counter-example to the stated principle trumps any need to look further and find exactly where the errors are in the alleged proof. (If you state "all polygons have three corners," I can merely point to a rectangle, and there is no need to find an error in your proof that it must be so.)

    2) There is no proof until there are coherent and precise premises and a non-tautological (contentful) statement of the conclusion to be proved (see the circularity objection at the end of this note).

    The coffee cup analogy does not work. Let me be more explicit....

    3) Now we need a system of dependency relations specifying how the states of each abstract component in our coffee cup depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states, such that this set of dependency relations for the coffee cup precisely mirrors the set of dependency relations existing for neurons in the brain.

    Again, you don't seem to realize that in a physical system, such as a brain or a coffee cup, the "dependency relations" do mirror exactly between the brain and the coffee cup, since every component of the brain interacts with all other components of the brain, i.e. the detailed physical state of each component depends on the detailed physical states of all other components and on its own previous state (for N components, there are N^2 "dependency" relations). The same holds for the coffee cup components -- for N components there are N^2 "dependency relations" forming precisely the same (however trivial) dependency graph.

    What you're missing is a criterion to filter the kinds of interactions that you will count. Only then can you have some specially defined dependency type -- say, "Chalmers dependency relations" -- which could differentiate the dependency graphs of a coffee cup and a brain.

    My point here is that the general terms "states" and "dependency relations", without further qualification, can't differentiate a coffee cup from a brain. If Chalmers has in mind some specific C-component and C-state and C-dependency, then fine; that is then the kind of system for which he is claiming validity of his invariance principle -- i.e. if two systems operate the same way at, apparently, the neuron-granularity level (with the same currents, the same connections), then Chalmers is asserting that these two systems must have the "same qualia" for the same input.

    The problem here is that without an independent definition of the term "same qualia for two different systems", the entire content of his invariance principle amounts to Chalmers' definition of the term "same qualia for two different systems." Therefore his subsequent attempt to prove his definition is at best a circular and pointless word-shuffling game.

    Note that I am talking about Chalmers' "invariance principle" above, not the alleged proof of it i.e. the comparison of qualia between different systems is precisely the essence of that principle. Therefore the above objection (as well as my earlier objections to this type of comparison, applied to his principle) is relevant.
     
  19. Feb 12, 2004 #19

    hypnagogue


    Re: Is "redness" same for everyone?

    Again, here's the structure of the argument.

    1. Either the consciousness of a system depends in some way on the nature of the 'stuff' that comprises the system, or it does not. There are only two possibilities here, and one of them must be true. Chalmers' principle of organizational invariance (POI) holds that consciousness doesn't depend on the nature of 'stuff', so by definition it follows that any and all theories that do not agree with the POI must take the position that in at least one instance, consciousness does depend on the nature of 'stuff' and not just what the 'stuff' does.

    2. Chalmers' reductio argument shows that if we assume that POI is false, it logically follows from this assumption that either a) beliefs, behavioral dispositions, and so on, are not dependent in any way upon neural firing patterns, or b) conscious experiences do not serve any causal role whatsoever in determining beliefs, behavioral dispositions, and so on. Both a) and b) are extremely undesirable positions to hold, for reasons I have already explained.

    3. From 1 & 2 it follows that any theory of consciousness T that holds that the POI is false must also hold that either a) or b) is true. Thus, to whatever extent we characterize positions a) and b) as undesirable/unacceptable/untenable, we must also characterize T as an equally undesirable/unacceptable/untenable theory. And to whatever extent we characterize T as undesirable/unacceptable/untenable, we must hold the POI to be proportionately desirable/acceptable/tenable.

    (For instance, say we have 1% confidence that a) or b) could be true. Then we also have 1% confidence that any hypothesis that contradicts the POI could be true. Since it is a logical certainty that either POI is true or POI is not true, we can also say that we have 99% confidence that POI is true in this example.)
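
    (The arithmetic in the parenthetical, spelled out -- just the complement rule.)

    ```python
    # The complement-rule arithmetic from the parenthetical above.
    p_a_or_b = 0.01            # credence that a) or b) could be true
    p_not_poi = p_a_or_b       # any theory denying POI entails a) or b)
    p_poi = 1 - p_not_poi      # POI true / not true exhaust the options
    print(p_poi)               # 0.99
    ```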

    Step 3 explains why I have described any hypotheses that do not agree with POI as "unacceptable." The alternatives to POI are unacceptable only because they all have the unacceptable consequence a) or b). Contrast this with your analogy, where there are no unacceptable consequences that follow from the counterexample to your 'polygon principle.' Thus, your analogy is actually disanalogous.

    Then these theories must hold that either a) or b) is true. Both a) and b) are undesirable positions to hold, so these theories must also be equally undesirable to hold.

    Again, what we need is a system of dependency relations specifying how the states of each abstract component in the system depend on the previous states of all components and on inputs to the system, and how outputs from the system depend on previous component states. We need to know how the states depend on each other, not just which states depend on which other states. If the states do not depend on each other in the same manner, then they will not compute the same function, and therefore they will not be functionally isomorphic. A coffee cup is not functionally isomorphic to a brain, so your analogy has no substance.

    If we assume that the contents of consciousness are fully determined by some set of criteria C, then for any two systems in which the circumstances of C are identical, the contents of consciousness will be identical as well. Most thinkers have no problem readily accepting that the contents of consciousness are fully determined by some set of criteria C (otherwise one concedes that the contents of consciousness are generated randomly). Therefore, most thinkers will readily accept that if two systems are identical across all criteria C, then they will have identical qualia. This is not a Chalmers definition of "same qualia," it's just a logical one.

    Chalmers' argument is an attempt to clarify which criteria are included in C. He shows that if one of these criteria is the nature of the 'stuff' making up the conscious system, then it follows that either a) or b). Since it seems highly unlikely that a) or b) could be true, it is equally unlikely that the nature of the 'stuff' making up a system is a criterion included in C.
     
  20. Feb 12, 2004 #20
    Re: Re: Is "redness" same for everyone?

    I will get to the rest of your argument in a separate message later. Here I will address only the coffee cup sub-argument:
    With both the brain and the coffee cup, a (full, most detailed) state change in any component affects all other components by changing their (full) states. If component A (of the brain or the coffee cup) changes state from SA to SA1, components B, C,... change their states from SB to SB1, SC to SC1,... If A changes to a different state SA2, then B, C,... change to different states SB2, SC2,... If the states SA, SA1, SA2,... are different from each other, then the states SB, SB1, SB2,... are different from each other. That tells you "what" and "how", which happen to be the same thing unless you coarse-grain the detailed physical state (distinct SA's always cause distinct SB's, SC's, etc.).

    You have to specify some special kind of component and state, a coarse-grained form of the detailed physical state, in order to have a different causal dependency graph between the brain and the coffee cup. At the most detailed state level, the N components of either perform the same type of transition (a different initial point to a different final point). The coarse-grained form of state, say a C-state, would have to contain an entire class of detailed physical states.

    If the states do not depend on each other in the same manner, then they will not compute the same function, and therefore they will not be functionally isomorphic. A coffee cup is not functionally isomorphic to a brain, so your analogy has no substance.

    They do "compute" the same "function", they merely express the result in a different format -- if you expose the coffe cup in state SC to blue light its state becomes SC(b), and if you expose it to red it becomes SC(r) (where SC(b), SC(r) and SC are all distinct). The same form of transition occurs with brain (or entire human): the initial SB goes into SB(b) or SB(r) and all states SB, SB(b), SB(r) are different. The "result" of the computation is different for different inputs and same for the same inputs. Obviously, you will need different "reader" devices if you wish to translate the results of computations into a form readable by humans. With brain, an interface to human motoric system may result in spoken words 'blue' or 'red' while with the coffe cup the "reader" device may be some physical measuring aparatus (which measures, say, absorbed and scattered energy/momentum of photons and cup, atom excitations) to read-off the kind of photons which had struck the cup from the "result" computed by the cup (its final state SC(b) or SC(r)).

    The general physical definitions of components, states, computation, results of computation, etc. can't differentiate the two. You have to narrow down substantially what you mean by "component" and "state" and "compute"; otherwise the coffee cup and the brain would have to have the same mental state when exposed to the same input (according to the POI).

    It seems that Chalmers really has in mind, roughly, the replication of functionality at the level of neural currents (pulse trains), since that is what his thought-experiment explicitly uses (the physically interchangeable sub-systems with compatible electro-neural connectors). Whatever it is, though, it needs to be stated upfront (as an assumption of the POI), since the most general 'states', 'components', and 'computation' cannot differentiate a brain from a coffee cup.
     