What is the special signal problem and how does it challenge computationalism?

In summary, the conversation discusses the "special signal problem" in computationalism, where a brain in a vat receives identical signals to those it would receive in a person's head. The brain is then separated into smaller sections, and the problem arises as to whether or not the brain can still experience consciousness without a specific "special signal". The conversation also introduces the idea of counterfactual sensitivity and the need to understand why this particular signal is crucial to consciousness. The conversation references a thought experiment by Arnold Zuboff and suggests that any theory of mind must address the importance of counterfactual alternatives.
  • #1
Q_Goest
Computationalism has a number of inconsistencies that haven’t logically been refuted. The purpose of this thread is to discuss one of them and see if there’s a way out of the problem and/or to help get perspectives on the problem. That problem I’ll call the “special signal problem”.

In the paper by Arnold Zuboff* entitled “The Story of a Brain” (http://themindi.blogspot.com/2007/02/147.html), Arnold provides a thought experiment by creating a story around a brain in a vat. The brain is provided with all the same inputs and outputs it would have had in the person’s head, but like The Matrix, he has this brain in a jar and simply provides the same signals to simulate the brain being in a head. By providing these signals, we can safely assume the brain undergoes the same changes of state as it would otherwise go through in the head, so it is assumed the experiences are also identical. I’d recommend reading the story; it’s very entertaining.

The first twist to the story is to have the brain cut in half. He then suggests that the neurons at the cut are provided the same signals they would receive in the intact brain, except that the opposite brain half doesn’t produce them. Instead, the signals are provided by an “impulse cartridge”. What exactly that is isn’t important; the fact that it can simulate the connection at each broken synapse or other fractured surface is all that is necessary. It provides a “signal” that allows each half of the brain to continue firing as if it were still together and connected as a single brain.

Arnold then separates the brain into smaller sections, again using the impulse cartridge to simulate the interactions at each of the breaks in the brain until finally, the brain is separated into individual neurons. We can then ask the question of whether or not the brain still experiences everything as it did when it was together as a complete brain in a vat. The implied assumption is that the brain can’t remain consciously aware of anything any more, regardless of whether or not the individual neurons are still operating as if they were together. Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.

To explain why the brain might no longer be able to support phenomenal consciousness, a character in the story, Cassander, suggests there are four guiding principles, any one of which, when violated, might prevent phenomenal experience from occurring. They are:
- The condition of proximity: The cells are no longer close together as they were in the brain.
- The condition of no actual causal connection: Also called “counterfactual information” the problem with no causal connection between the parts of the brain is the problem most philosophers focus on. That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the “special signal”.
- The condition of synchronization: The cells may no longer be in sync.
- The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.

The focus of the special signal problem will be the counterfactual sensitivity concern since that is the focus in the literature.

The problem that Arnold writes about has also been written about by others in different ways including Maudlin, Putnam and Bishop. But regardless of how the problem is approached, the defenders of computationalism focus on why the system in question must be able to support counterfactual information and thus be capable of performing other computations. That is, regardless of whether or not a portion of a system is needed to perform a computation, if the system can not support the right counterfactual information, it can't instantiate a program and it can't support phenomenal consciousness.

Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected. In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided. This seems to suggest that the duplicate signal provided by the impulse cartridge is not sufficient; only the original signal is sufficient to create the phenomenon of consciousness. The duplicate signal may have all the same properties, may be indistinguishable from the original, and may maintain all the same objectively measurable properties throughout the disconnected brain. But the duplicate signal is not sufficient, per computationalism, to support consciousness. We need a special signal. We need the original one.

From this analysis, it is clear that any theory of mind must address how and why counterfactual alternatives are crucial to consciousness. We need to understand what is so special about that particular signal.

*Story available on the web at: http://themindi.blogspot.com/2007/02/147.html
and is part of the book by Hofstadter and Dennett (editors) entitled “The Mind’s I: Fantasies and Reflections on Self and Soul”.
 
  • #2
Q_Goest said:
The first twist to the story is to have the brain cut in half. He then suggests that the neurons at the cut are provided the same signals they would receive in the intact brain, except that the opposite brain half doesn’t produce them. Instead, the signals are provided by an “impulse cartridge”. What exactly that is isn’t important; the fact that it can simulate the connection at each broken synapse or other fractured surface is all that is necessary. It provides a “signal” that allows each half of the brain to continue firing as if it were still together and connected as a single brain.

This seems like a logical fallacy to me. If there's no feedback connection between regions of the brain, then you've physically simplified the system, haven't you? It's no longer a single nonlinear system; it's two systems that obey the principle of superposition (if you're assuming that you can generate the input to one independently of the other, then you're assuming they're separable).

http://pre.aps.org/abstract/PRE/v64/i6/e061907
http://www.nature.com/nrn/journal/v2/n4/abs/nrn0401_229a.html

Q_Goest said:
Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.

Of course, which is a good indication that you're not really confronting computationalism, but building a straw man. This seems to contradict what you say here:


Q_Goest said:
Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected. In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided.

This is false! Nonlinear coupled systems can't be uncoupled without physically changing the objectively measurable phenomena. Superposition does not hold for such systems. Once you've destroyed the feedback loops, you've destroyed a significant aspect of neural processing.
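To make the superposition point concrete, here's a minimal numerical sketch (the cubic nonlinearity, the inputs and the step size are arbitrary choices of mine, purely for illustration): drive a simple nonlinear system with two inputs separately and then together, and the responses don't add.

Code:
import numpy as np

def simulate(drive, x0=0.0, dt=0.001):
    """Forward-Euler integration of the toy nonlinear system dx/dt = -x**3 + drive(t)."""
    x = x0
    xs = []
    for d in drive:
        x += dt * (-x**3 + d)
        xs.append(x)
    return np.array(xs)

t = np.arange(0.0, 10.0, 0.001)
a = np.sin(t)            # input A
b = 0.5 * np.cos(3 * t)  # input B

resp_to_sum = simulate(a + b)               # response to the combined input
sum_of_resps = simulate(a) + simulate(b)    # sum of the individual responses

print(np.max(np.abs(resp_to_sum - sum_of_resps)))  # clearly nonzero: superposition fails

Swap the -x**3 term for -x and that difference collapses to essentially zero; that is the linear, separable special case.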
 
  • #3
Pythagorean said:
there's no feedback connection between regions of the brain
How does any portion of the brain "know" that it has no "feedback" (whatever "feedback" is)? It has every bit as much feedback in both cases. It has exactly the same feedback. It has the same exact signals. You're asking for a special signal.
Pythagorean said:
but building a straw man. This seems to contradict what you say here:
I don't see the contradiction. Also, you need to refrain from using the term "strawman". Instead, just provide your argument. The term is generally understood to be derisive. If you really must use the term, show how it is a strawman; don't just throw out the term without understanding what it means.
Pythagorean said:
This is false! Nonlinear coupled systems can't be uncoupled without physically changing the objectively measurable phenomena.
This issue is fairly straightforward. Take a control volume as given in fluid mechanics, such as is used to describe any nonlinear fluid system. The assumption in engineering and the sciences is that a control volume exhibits the same objectively measurable phenomena as a similar volume that hasn't been sectioned off. All the equations are the same. In fact, if they were not separable, you'd need to violate conservation of mass, momentum or energy. There are views on both sides of this issue (Scott on one side, many others oppose). In general I'd say Scott is outvoted.
 
  • #4
Q_Goest said:
How does any portion of the brain "know" that it has no "feedback" (whatever "feedback" is)? It has every bit as much feedback in both cases. It has exactly the same feedback. It has the same exact signals. You're asking for a special signal.

I'm not claiming anything doesn't "know". I'm saying you're physically changing the system arrangement, so a qualitative change isn't contradictory to the basis of computationalism (or more importantly physicalism in general).

Feedback is exactly how feedback is defined in signal processing. You have a guitar, an amplifier, and headphones. If you disconnect the output from the input (by using headphones) you won't get feedback. If you connect the output to the input, you do: for example, by using an amplifier and standing close enough that the output of the amplifier affects the string vibrations, which affect the induction coils, which go through to the output and back in again. Or if you stand in front of a TV with a video camera that is attached to the TV.

But the important thing is that we have several nonlinear objects coupled together (with at least three different kinds of couplings: inhibitory, excitatory, and diffusive). What signal would you introduce to each one if they weren't causally connected anymore? Their signal at any moment depends on the state of the rest of the ensemble, and especially their neighbors. You also lose effects of volume transmission (spatial coupling: neurotransmitters, possibly the electric field, glia dynamics) which are important to global regulation ... as you've already alluded to, I thought, when you stated that computationalism would already agree that the system has been broken.

Q_Goest said:
The term is generally understood to be derisive. If you really must use the term, show how it is a strawman; don't just throw out the term without understanding what it means.

It is a strawman because it misrepresents the computationalist position in a fundamental way. It makes a physical change that would affect processing and then claims that computationalism wouldn't predict this change. That is the issue we are currently debating.
Q_Goest said:
This issue is fairly straightforward. Take a control volume as given in fluid mechanics, such as is used to describe any nonlinear fluid system. The assumption in engineering and the sciences is that a control volume exhibits the same objectively measurable phenomena as a similar volume that hasn't been sectioned off. All the equations are the same. In fact, if they were not separable, you'd need to violate conservation of mass, momentum or energy. There are views on both sides of this issue (Scott on one side, many others oppose). In general I'd say Scott is outvoted.

Control volume deals with three linearly independent dimensions. I'm talking about the most common biophysical neuron models (Hodgkin-Huxley, Morris-Lecar). In these models, the dimensions of the phase space represent the most likely candidate for the information transfer of the neuron: membrane potential and ion channel permeability.

Voltage-gated channels are dependent on voltage, and voltage is dependent on the voltage-gated channels. So there is already a feedback mechanism within one cell (one that would still exist in the isolated cell).

There are also ligand-gated channels and the Hodgkin-Huxley model has modifications for synaptic transmission:
http://en.wikipedia.org/wiki/Biological_neuron_model#Synaptic_transmission

But this is just one "cell" of the model. It already has complex molecular processing going on inside of it, but now we're going to couple these cells together. The couplings themselves are not linear. Synaptic transmissions can be inhibitory or excitatory (generally modeled with a hyperbolic tangent) or diffusive (modeled as a second derivative), but they all depend on the state of their neighbors. Control volume seems to refer to a more Euclidean set of dimensions. We don't couple little cells of volumes in spatial dimensions. We connect neurons by information flow. Axons can be all kinds of different lengths, so the story would be very confusing if we looked at spatial dimension alone.
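For concreteness, here is a stripped-down sketch of the kind of coupled model I mean. I'm using FitzHugh-Nagumo-style cells as a stand-in for Hodgkin-Huxley or Morris-Lecar, and the parameters and coupling strengths are illustrative choices of mine, not fitted to anything:

Code:
import numpy as np

# FitzHugh-Nagumo-style cells (a common simplification of Hodgkin-Huxley).
# v: membrane potential, w: recovery variable. All parameters illustrative.
def fhn_rhs(v, w, I):
    dv = v - v**3 / 3.0 - w + I
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    return dv, dw

def run(coupled=True, T=200.0, dt=0.01, g_syn=0.3, g_gap=0.1):
    n = int(T / dt)
    v = np.array([-1.0, -1.2])   # two cells, slightly different initial states
    w = np.array([0.0, 0.1])
    trace = np.empty((n, 2))
    for i in range(n):
        if coupled:
            syn = g_syn * np.tanh(v[::-1])   # sigmoidal ("synaptic") drive from the other cell
            gap = g_gap * (v[::-1] - v)      # diffusive ("gap junction") coupling
        else:
            syn = np.zeros(2)                # decoupled: interaction terms removed
            gap = np.zeros(2)
        dv, dw = fhn_rhs(v, w, 0.5 + syn + gap)  # constant drive plus coupling, element-wise
        v = v + dt * dv
        w = w + dt * dw
        trace[i] = v
    return trace

# Maximum difference between the coupled and decoupled membrane-potential traces:
print(np.max(np.abs(run(coupled=True) - run(coupled=False))))

The last line is the point: remove the coupling terms and you get a genuinely different pair of trajectories. Each isolated cell still has its internal voltage/recovery feedback, but the ensemble behavior is not the sum of its parts.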

Dynamical behavior of the firings in a coupled neuronal system
Phys. Rev. E 47, 2893–2898 (1993)
http://pre.aps.org/abstract/PRE/v47/i4/p2893_1

Global and local synchrony of coupled neurons in small-world networks
BIOLOGICAL CYBERNETICS
Volume 90, Number 4, 302-309, DOI: 10.1007/s00422-004-0471-9
http://www.springerlink.com/content/6etxgkjk49668ffn/

Small Clusters of Electrically Coupled Neurons Generate Synchronous Rhythms in the Thalamic Reticular Nucleus
The Journal of Neuroscience, January 14, 2004, 24(2):341-349; doi:10.1523/JNEUROSCI.3358-03.2004
http://neuro.cjb.net/cgi/content/abstract/24/2/341
 
  • #5
Pythagorean said:
It is a strawman because it misrepresents the computationalist position in a fundamental way. It makes a physical change that would affect processing and then claims that computationalism wouldn't predict this change. That is the issue we are currently debating.
Clearly, the story by Zuboff isn’t a strawman. I think the first step has to be in understanding what the computationalist view is. That Zuboff’s argument undermines that view is well understood by the editors of the book as they discuss their views of the paper. Anyone reviewing this paper would do well to read that review which is at the end of the story. Further, there are arguments presented by others that are very similar to Zuboff’s reductio which present the same issues in different ways. Clearly, any computationalist view needs to take into account how and why these special signals are needed to maintain phenomenal consciousness.

The method of attack used by some is to reformulate computationalism and claim that certain physical systems are fundamentally holistic. What one needs to realize is that by reformulating computationalism like this, the standard notions everyone is using about the brain must be rewritten. If we argue that nonlinear systems are those systems necessary to support phenomenal consciousness, any claim that a computer (FSA, CSA, anything that uses microchips, relays, gears, mechanical switches, etc…) is capable of supporting phenomenal consciousness will have to be abandoned. I think it is self evident, and hopefully no one here is ignorant of physics enough to argue, that one can take a computer, simulate a given input, and get a consistent output, and that we can do this to any portion of a computer. It can be performed on the machine as a whole, such as a “brain in vat” thought experiment uses, or it can be done on any small portion of the machine, right down to the single transistor, mechanical switch or other computational element. Computers are separable, and that’s well understood by philosophers such as Hofstadter and Dennett as well as a large number of philosophers that have written papers claiming a special signal is somehow necessary.

If one wishes to claim that only nonlinear systems are capable of phenomenal awareness, they have already abandoned computationalism as it stands today. However, if that’s the case, it would probably be best to separate out those discussions in a second thread. This thread is about Zuboff’s article, and the philosophy surrounding these special signals. Another thread on separability of classical systems might be a good place to discuss those concerns. A good place to start might be the Stanford Encyclopedia of Philosophy entry on holism and nonseparability in physics (http://www.science.uva.nl/~seop/entries/physics-holism/#Holism), which addresses the issue as follows:
Classical physics presents no definitive examples of either physical property holism or nonseparability.

Side note: when posting references, please provide a description of why they are relevant to the discussion. It seems as if you are asking that I read through your references and figure out for myself why they are relevant and how they support your views. It would be much more effective to do this yourself since you already have something in mind.
 
  • #6
Q Goest said:
We can then ask the question of whether or not the brain still experiences everything as it did when it was together as a complete brain in a vat.
Assuming that everything is 'connected', as you seem to be saying it is, then it would seem to follow that "the brain still experiences everything as it did when it was together as a complete brain in a vat".

Q Goest said:
The implied assumption is that the brain can’t remain consciously aware of anything any more, regardless of whether or not the individual neurons are still operating as if they were together.
Implied wrt what?

Q Goest said:
Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.
Well, not just wrt computationalism, but yes, if the brain is "dissociated" then it isn't a "brain". Is it? Obviously, if the brain's parts become "dissociated", then it won't function as a "brain". Duh. But you indicated that steps were taken to keep the brain's parts connected. So, at what point did it become "dissociated"?

Q Goest said:
The condition of proximity: The cells are no longer close together as they were in the brain.
But you said the cells were still 'connected', presumably in a way that preserved normal brain function. If not, then isn't that one explanation regarding where/how the brain became "dissociated" and stopped being a "brain"? So, proximity, per se, shouldn't have anything to do with it. Hence, this is an invalid "condition". Unless we simply don't have the technology to keep a dissected brain functioning as a whole, intact brain, and then it is a valid condition. And then, ok, so what?? We wouldn't expect, say, four, or 1000, or 10^8 separated pieces of a previously whole, normally functioning, brain to function as a brain. Would we? So, it isn't really proximity, it's connectedness that matters.

On the other hand, if it really is proximity that determines connectedness, that is, if the brain cells really do have to be more or less contiguous for normal brain functioning, then that's it. Right?

Q Goest said:
The condition of no actual causal connection: Also called “counterfactual information” the problem with no causal connection between the parts of the brain is the problem most philosophers focus on.
Huh? Why is the "duplicate signal" called counterfactual? This makes no sense to me. If there's a "connection" at all, then there's a "causal connection". Isn't there?

Q Goest said:
That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the “special signal”.
Well, all you seem to have said so far is that we don't know how to duplicate neuronal connections. Is that a fact?

Or, are you saying that there are reasons to believe that neuronal connections can't be duplicated, or are you saying that duplication of neuronal connections is a function of proximity? Or are you saying that nobody really knows?

Q Goest said:
The condition of synchronization: The cells may no longer be in sync.
Well, this would seem to follow if nobody is sure that the "actual" neuronal connections have been maintained. One sure way to ascertain this is by observing whether or not the separated brain parts function, collectively, as they did before they were separated.

Q Goest said:
The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.
Uh, yeah. Well that too. But of course if you've already sliced and diced the brain into a zillion pieces, and haven't the faintest idea how to "connect" them, then it wouldn't seem to matter much how each of those zillion pieces is oriented. Would it?

Q Goest said:
The focus of the special signal problem will be the counterfactual sensitivity concern since that is the focus in the literature.
Huh??

Q Goest said:
The problem that Arnold writes about has also been written about by others in different ways including Maudlin, Putnam and Bishop. But regardless of how the problem is approached, the defenders of computationalism focus on why the system in question must be able to support counterfactual information and thus be capable of performing other computations. That is, regardless of whether or not a portion of a system is needed to perform a computation, if the system can not support the right counterfactual information, it can't instantiate a program and it can't support phenomenal consciousness.
But you've, I think, previously described a "system" which, apparently, consists of a zillion disconnected brain cells. It can't process "factual" stuff, much less "counterfactual" stuff, whatever that means.

Q Goest said:
Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected.
How is this known?

Q Goest said:
In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided.
If "all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided", then the separated brain parts can't be said to be causally disconnected -- unless there's some other measure, which would bring us back to my immediately previous question.

Q Goest said:
This seems to suggest that the duplicate signal provided by the impulse cartridge is not sufficient; only the original signal is sufficient to create the phenomenon of consciousness.
What is it that seems to suggest this? Subjective experience? How do you objectively measure that?

Q Goest said:
The duplicate signal may have all the same properties, may be indistinguishable from the original, and may maintain all the same objectively measurable properties throughout the disconnected brain. But the duplicate signal is not sufficient, per computationalism, to support consciousness. We need a special signal. We need the original one.
This "computationalism" is indeed some special stuff. Or maybe Zuboff, et al., pulled this out of their separated, collective, butts.
Is that even possible?

Q Goest said:
From this analysis, it is clear that any theory of mind must address how and why counterfactual alternatives are crucial to consciousness. We need to understand what is so special about that particular signal.
Imho, it's clear that this "analysis" is fubar. No offense to you personally of course. What seems to be most important for normal brain functioning is that the brain is in one piece. Thus, it wouldn't seem that "counterfactual alternatives", whatever those might refer to, have anything to do with it.

Anyway, again imho, the computationalist approach is, fundamentally, way off -- even though it might explain certain phenomena.
 
  • #7
Hi ThomasT. It doesn’t sound as if you’ve read the story. Just a few notes; when I mention the brain being “dissociated” I’m referring to the state of the brain as given in the story that has these hypothetical “impulse cartridges” (ICs) used to fire neurons instead of the actual neurons they would normally be attached to. In a sense, the brain is still ‘connected’ by these ICs but the brain is not physically connected.

The reason brought out by most philosophers regarding why the hypothetically dissociated ‘brain’ in the story is no longer capable of supporting p-consciousness is that it can no longer support “counterfactuals” which is to say that the various bits of the brain can only perform that one single act. It is therefore not instantiating a program nor performing a computation, similar to what you’ve said here:
Obviously, if the brain's parts become "dissociated", then it won't function as a "brain".
This is your common sense reaction that says that if a conscious system (brain, computer, whatever) is not in its connected state, it can’t support consciousness, even if everything is still going through the same changes in state, such as described by Zuboff.

Those who have argued that only a device which supports counterfactuals can support p-consciousness include Chalmers (1996), Chrisley (1994), Endicott (1996) and others. Endicott’s position is typical of this perceived need for a causally connected system as he discusses the definition of a computation: “We can then say that a system is a genuine computational device when there is a correspondence between its physical states and its formal states such that the causal structure of the physical system is isomorphic to the formal structure of the computational operations.” Chrisley talks of “our common sense understanding of causation” being required for a computational device to be defined as such and therefore being able to support consciousness. And Chalmers suggests that “a physical system implements a computation if the causal structure of the system mirrors the formal structure of the computation.”
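To make these definitions concrete, here is a toy sketch (entirely my own construction, not anything from Chalmers or Endicott) of what “the causal structure of the physical system mirrors the formal structure of the computation” could amount to: a labelling of physical states by formal states under which the physical transitions reproduce the formal transition table for every input, including inputs that never actually occur.

Code:
# Formal computation: a two-state automaton that toggles on input 1 and holds on input 0.
formal_step = {('A', 0): 'A', ('A', 1): 'B',
               ('B', 0): 'B', ('B', 1): 'A'}

# Hypothetical physical system: "low"/"high" voltage states with their own dynamics.
def physical_step(voltage, pulse):
    if pulse == 0:
        return voltage
    return 'high' if voltage == 'low' else 'low'

# Proposed mapping from physical states to formal states.
label = {'low': 'A', 'high': 'B'}

# The physical system implements the computation (on this definition) if stepping
# physically and then labelling always agrees with labelling and then stepping formally.
implements = all(
    label[physical_step(p, i)] == formal_step[(label[p], i)]
    for p in label
    for i in (0, 1)
)
print(implements)  # True: the causal structure mirrors the formal structure

Note that the check runs over every input, including inputs the system may never actually receive; that is exactly where the counterfactuals come in.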

The point all these authors attempt to make regards the common sense definition of a computation. Without the ability to perform this computation, one could argue that the phenomenon of consciousness would also be compromised. So without the ability to support counterfactual information, these philosophers would say the system is no longer performing valid computations, it can’t instantiate a program, so it can’t be conscious.

A point in opposition is presented by Bishop (2001). About a computational machine which is unable to support counterfactuals, Bishop says, “The first point to note in this response is that it seems to require a non-physical causal link between non-entered machine states and the resulting phenomenal experience of the system over a given time interval.” It’s precisely this “non-physical causal link” that I’m calling a “special signal”. That is, the signal produced by these hypothetical ICs is insufficient. The signal has to come from a causally connected system.

I’d agree with Bishop and would add that the defense is shortsighted. Tim Maudlin (1989) proposes a similar thought experiment using buckets of water and an armature similar to a Turing machine, which I’ll use as a guide for an extension on Zuboff’s story as follows. One could, for example, rewrite Zuboff’s story slightly to suggest that these impulse cartridges are in fact connected to each other (wirelessly if you’d like) so that all the neurons are actually interacting through these IC’s. The IC’s could also be synchronized to allow for the problem of time delay as described by Zuboff. With all the neurons in close proximity, the time delay wouldn’t even be noticeable to the brain. Now we’d have a causally connected brain, able to support counterfactuals, and the signals provided by the IC’s should therefore support p-consciousness per Chalmers, Chrisley, Endicott, etc… We could also devise the IC’s such that they have a recording of every neuron firing that they are attached to so that they don’t even need to rely on a wireless signal from their corresponding IC to fire. We could suggest that if the recorded IC data matches the firing that comes from the neuron, no signal is sent. Only if the signal coming from the neuron is different does the IC send a signal to the mating IC attached to another neuron. The mating IC on the opposite neuron would also have this recording and they would fire the neuron as long as they don’t get a change signal from the corresponding IC they are mated to. Now you have a system that is causally connected only when there is counterfactual information needing to be computed. But when there is no counterfactual information, there is no connection between the IC’s, and the entire system, per the counterfactual argument, is no longer conscious.
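Here is a small sketch of that “recording plus change signal” arrangement, just to make the bookkeeping explicit. The class and method names are mine, and this is only a toy illustration of the scheme described above, not anything from Zuboff or Maudlin:

Code:
class ImpulseCartridge:
    """Toy model of the arrangement above: each cartridge sits at one side of a
    severed connection, drives its neuron with a recording of what the absent
    neighbour would have sent, and transmits to its mate only when its own
    neuron departs from the recorded script."""

    def __init__(self, expected_out, expected_in):
        self.expected_out = expected_out  # recorded firings of the neuron on this side
        self.expected_in = expected_in    # recorded signal the missing neighbour would send
        self.mate = None
        self.override = {}                # change signals received from the mate

    def pair_with(self, other):
        self.mate, other.mate = other, self

    def observe(self, t, actual_firing):
        """Monitor the live neuron on this side at step t."""
        if actual_firing != self.expected_out[t] and self.mate is not None:
            # Counterfactual case: only now does a real signal cross the gap.
            self.mate.override[t] = actual_firing

    def drive(self, t):
        """Signal delivered to the neuron on this side at step t."""
        # When nothing deviates, this is pure playback: physically identical to
        # the live signal, but with no causal link to the other half at this step.
        return self.override.get(t, self.expected_in[t])

On the counterfactual reading, the two halves are causally connected only at the steps where observe() actually transmits; everywhere else the neurons are driven by playback alone.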

Bottom line, the signal going to each neuron in Zuboff’s story that is provided by these IC’s has certain properties such as electrical potential and ion flow. Any isolated neuron that is receiving these impulses will fire regardless of whether the signal comes from the IC or the neighboring neuron. But computationalism holds that only the neighboring neuron can allow for consciousness; the identical signal coming from an IC (if it is not causally connected) is insufficient. Physical properties are the same for both signals, so why is only one capable of supporting consciousness?



Bishop, J. M. Nov 2001 Public Lecture, ‘Dancing with Pixies: strong Artificial Intelligence & panpsychism’, British Computer Society, (Berkshire Branch)
Chalmers, D. J. 1996, ‘Does a Rock Implement Every Finite-State Automaton’, Synthese, 108, 309-333
Chrisley, R. L. 1994, ‘Why Everything Doesn’t Realize Every Computation’, Minds and Machines, 4(4), 403-420
Endicott, R. P 1996, ‘Searle, Syntax, and Observer Relativity’, Canadian J of Philosophy 26(1):101-122
Maudlin, T. 1989, ‘Computation and Consciousness’, Journal of Philosophy 86:407-432
 
  • #8
Q_Goest said:
Clearly, the story by Zuboff isn’t a strawman. I think the first step has to be in understanding what the computationalist view is.

Well, you've accused me of being a computationalist several times, but I used the standard computationalist description: that neurons perform information processing and that this is relevant to our mental states.

Q_Goest said:
If we argue that nonlinear systems are those systems necessary to support phenomenal consciousness, any claim that a computer (FSA, CSA, anything that uses microchips, relays, gears, mechanical switches, etc…) is capable of supporting phenomenal consciousness will have to be abandoned.

Nobody's arguing nonlinear systems are necessary to support consciousness. The point is that you have a well-known non-linear coupled system and you're pretending like it doesn't change anything to decouple them.

You must realize that my argument doesn't state that consciousness is a result of nonlinearity. It states that your argument doesn't consider the facts and you may as well be talking about a different system. And here's the main problem:

You have nowhere to generate an IC from. This is a lot like taking the Maxwell's demon scenario... and then claiming that it will still work without the demon. But the demon is the only thing making the problem physically sensible (as was shown later by follow up calculations that found the demon's measurement and action made up for the entropy "loss").

Q_Goest said:
I think it is self evident, and hopefully no one here is ignorant of physics enough to argue, that one can take a computer, simulate a given input, and get a consistent output, and that we can do this to any portion of a computer. It can be performed on the machine as a whole, such as a “brain in vat” thought experiment uses, or it can be done on any small portion of the machine, right down to the single transistor, mechanical switch or other computational element. Computers are separable, and that’s well understood by philosophers such as Hofstadter and Dennett as well as a large number of philosophers that have written papers claiming a special signal is somehow necessary.

So what? It's really another strawman to say that computationalism is the same as the computer metaphor. They're two totally different systems.

Q_Goest said:
If one wishes to claim that only nonlinear systems are capable of phenomenal awareness, they have already abandoned computationalism as it stands today. However, if that’s the case, it would probably be best to separate out those discussions in a second thread.

I'm not sure it's abandoned computationalism yet. Computationalism is about information processing, isn't it? Information processing is linear in a computer (why the hell would we make it nonlinear?). It developed in a nonlinear way in life, just like nearly every complex natural process is nonlinear. This is a well-accepted fact: the real world is nonlinear.
But if you really think this is not computationalism then I will desist and you can stop calling me a computationalist.

Q_Goest said:
Side note: when posting references, please provide a description of why they are relevant to the discussion. It seems as if you are asking that I read through your references and figure out for myself why they are relevant and how they support your views. It would be much more effective to do this yourself since you already have something in mind.

All you have to do is read the abstracts. They demonstrate the main point that you have coupled systems and their behavior is different when they're uncoupled.

Classical physics presents no definitive examples of either physical property holism or nonseparability.

Complex systems theory is in a strange place. It does allow for "holism" (that is, the assumption of superposition does not hold).

But complex systems don't require any fundamental change in classical physics. The systems fall right out of classical Newtonian treatment. The systems are deterministic and exist on a smooth manifold, just like every other classical problem. In the case of neurons, they're created by observing behavior of the system experimentally and modeling them with smooth mathematical equations (as Hodgkin and Huxley did about 50 years ago).

So some people (like my advisor) still consider nonlinear systems as classical systems. I personally can't find anything that makes them fundamentally different from Newtonian physics (a system of differential equations on a smooth manifold). I was only taught to use superposition as a neat trick to make things easier, I wasn't taught that it was the way the world was.

Even Newton, in his Copernican Scholium (which actually didn't hit print until 200 years or so after he wrote the original Principia), admitted that the trajectory of planets was too irregular and complex to be exactly predicted with the separable mathematical approach.

http://plato.stanford.edu/entries/Newton-principia/

But once again, throughout all my classical training, linearity is an assumption, a special case; not the norm.
 
  • #9
Wait a minute. From your discussion with ThomasT, the regions ARE still coupled, OR is the IC supposed to generate the signal AS IF they were?

I've been assuming the latter, which is why I referenced Maxwell's Demon
 
  • #10
From http://www.macrovu.com/CCTGeneralInfo.html :

Three premisses of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactuals (http://en.wikipedia.org/wiki/Counterfactual_conditionals) - states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce identical mentality (if they produce any at all).

Imagine 2 pinball machines, the second of which has had exactly those pins removed that are never touched by the ball. We now have 2 different machines but it doesn't make a difference to the paths traced by the pinballs. The counterfactuals are different in each machine (the pinballs would behave differently if the first were to hit a "counterfactual" pin), but the physical activity of the 2 systems is, as it happens, identical. So, counterfactual differences in 2 systems are irrelevant to differences in the physical activity of those systems.
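A toy version of the pinball point in code (my own construction, not from Maudlin): two transition tables, the second with exactly the never-used entries deleted, produce identical runs on the actual input even though their counterfactuals differ.

Code:
# A "machine" is just a transition table: (state, input) -> next state.
full = {('s0', 'a'): 's1', ('s1', 'a'): 's0',
        ('s0', 'b'): 's2', ('s1', 'b'): 's2'}   # the 'b' transitions play the role of the untouched pins

actual_input = ['a', 'a', 'a', 'a']             # 'b' never occurs in the actual run

# Second machine: remove exactly the entries the actual run never touches.
trimmed = {k: v for k, v in full.items() if k[1] in actual_input}

def run(table, inputs, state='s0'):
    trace = [state]
    for symbol in inputs:
        state = table[(state, symbol)]
        trace.append(state)
    return trace

print(run(full, actual_input) == run(trimmed, actual_input))  # True: identical physical activity
print(full == trimmed)                                        # False: different counterfactuals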

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

Suggestions for avoiding this paradox:
1. Rejecting supervenience.
2. Assigning no consciousness to either machine (Eric Barnes, 1991):
Neither of the imagined machines is conscious, because neither of them can causally interact with the environment. According to the computationalists, causal interaction is a necessary component for any model of consciousness. The object of any cognitive act must play a causally active role for it to be true that one cognizes that object.

I think by rejecting supervenience, you actually reject computationalism. By accepting causal interaction you are forced to accept downward causation, which doesn't sound good either. So what's left is rejecting the non-triviality condition.
 
  • #11
Ferris_bg said:
I think by rejecting supervenience, you actually reject computationalism. By accepting causal interaction you are forced to accept downward causation, which doesn't sound good either. So what's left is rejecting the non-triviality condition.
Very nice Ferris, thanks for that. I've seen that reference before and thought it was decent but had forgotten about it. So do you think we should reject the non-triviality condition? Why/why not?
 
  • #12
Pythagorean said:
Well, you've accused me of being a computationalist several times, but I used the standard computationalist description: that neurons perform information processing and that this is relevant to our mental states.
This description of computationalism is ok for now, though it isn't very rigorous. I think I know what you mean though. However, what you said earlier was:
Pythagorean said:
[Zuboff's reductio] is a strawman because it misrepresents the computationalist position in a fundamental way.
Zuboff's reductio isn't a strawman and it doesn't misrepresent the computationalist position. The article was written by a philosopher, put in a book by 2 editors who are staunch computationalists, and Zuboff's article as well as many others like it are placed in peer reviewed philosophical journals. No one else is calling this a strawman. If that many professional philosophers acknowledge there's validity in Zuboff's article, I think it's time to take it seriously and stop calling it a strawman.
Pythagorean said:
So what? It's really another strawman to say that computationalism is the same as the computer metaphor. They're two totally different systems.
Strawman arguments... There are other phrases. Other ways of making a point. If you don't understand, please discuss instead of accusing everyone of strawman arguments. It's beginning to sound silly.
Pythagorean said:
From your discussion with ThomasT, the regions ARE still coupled, OR is the IC supposed to generate the signal AS IF they were?
May I suggest reviewing the paper again? Zuboff is suggesting the IC's act in a way that does not support counterfactual information. My point to ThomasT is that we can rewrite his thought experiment such that the line between supporting counterfactual information and not is so blurred that it can't be defined.
 
  • #13
Q_Goest,

Ferris_bg cleared things up.

I was defending computationalism only on the basis of 1. Without first defining 'program', the definition does indeed lack rigor, which would make it easy to evade arguments that constrict it beyond that. Namely, because we can come up with "physical systems" from 1. that are capable of lots of interesting phenomena, and decoupling a system into its components would no longer be a single physical system, but a set of individual physical systems (since they are not causally connected... agree?).

but Ferris_bg's definition seems to be more relevant to your post.

3 (supervenience) I have considered before as a consequence of physicalism. I don't really see a problem with it, though I recognize the contradiction now, between 2 and 3 as ferris_bg presented it.

2 (non-triviality) I do not understand why this would be a condition for consciousness. From an evolutionary perspective, it makes sense that biological systems that were able to survive and propagate would have counterfactual states as a consequence of being well-prepared for their environment. Does this mean that it's required for consciousness? I don't know. It seems difficult to imagine life lasting very long without having the built-in redundancies that a) allow survival and b) supports counterfactual states.
 
  • #14
Q_Goest said:
To explain why the brain might no longer be able to support phenomenal consciousness, a character in the story, Cassander, suggests there are four guiding principles, any one of which, when violated, might prevent phenomenal experience from occurring. They are:
- The condition of proximity: The cells are no longer close together as they were in the brain.
- The condition of no actual causal connection: Also called “counterfactual information” the problem with no causal connection between the parts of the brain is the problem most philosophers focus on. That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the “special signal”.
- The condition of synchronization: The cells may no longer be in sync.
- The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.
It is somewhat like a printed circuit being replaced by long wires. The system would work, but its efficacy would be affected. The factors pointed out by Cassander would affect the efficacy, and hence, the consciousness. However, being a thought experiment, we can assume that efficacy is not affected. In that case, the segmented brain will not lose its consciousness. So there is no need of any special signal; only the efficacy of the signals matters. Consciousness, I think, depends on the efficacy of the brain; there should be a minimum threshold efficacy for the consciousness to manifest explicitly as in the case of humans. The high efficacy of the brain is an evolutionary edge that we have against other animals. Maybe, self-awareness represents the evolutionary peak of the entity called life!
 
  • #15
finiter:

So do you mean efficacy in the electrical engineering sense, or the neuroscience sense?

Q_Goest:

Here is Varela's (et al.) point (one of the papers I referenced earlier):

These neural assemblies have a transient, dynamical existence that spans the time required to accomplish an elementary cognitive act (a fraction of a second). But, at the same time, their existence is long enough for neural activity to propagate through the assembly, a propagation that necessarily involves cycles of reciprocal spike exchanges with transmission delays that last tens of milliseconds. So, in both the brain and the Web analogy, the relevant variable required to describe these assemblies is not so much the individual activity of the components of the system but the dynamic nature of the links between them.
(emphasis mine)
The brainweb: Phase synchronization and large-scale integration
Francisco Varela, Jean-Philippe Lachaux, Eugenio Rodriguez & Jacques Martinerie
Nature Reviews Neuroscience 2, 229-239 (April 2001) | doi:10.1038/35067550
http://www.nature.com/nrn/journal/v2/n4/full/nrn0401_229a.html

Which is why the construction that requires the 'special signal' seems awkward to me. You are actually physically changing the system in the dynamical systems perspective: you have changed the physical landscape and the information structure. And the dynamical systems perspective has been successful in explaining phenomena of the brain in terms of their neural correlates:
http://scholar.google.com/scholar?h...1DVCG_enUS354US354&um=1&ie=UTF-8&sa=N&tab=ws"
http://scholar.google.com/scholar?h...stem&btnG=Search&as_sdt=400&as_ylo=&as_vis=0"
http://scholar.google.com/scholar?h...ders&btnG=Search&as_sdt=400&as_ylo=&as_vis=0"

Does any of this betray computationalism? If it does, haven't the computationalists been ignoring the last 20 or so years of empirical data then (along with Zuboff)?

To me, it seems this eliminates the need for a "special signal". It also means that despite 3 (supervenience) being true in principle, it's impossible to do in practice (entropy + chaotic sensitivity means you can never have exactly two of anything).

I don't see any obvious consequence for 2 (non-triviality) but 2 seems to be a bit like claiming Maxwell's demon violates thermodynamics, in which case your problem is that you haven't included the demon's actions in your calculations: you have to make the measurement on the first system to produce the second system. That costs energy to replicate between the measurements and the construction, and that cost comes out of the very environment that will shape the rest of the system's history, so there's even further potential consequences.

addendum

A real life experimental analog of the thought experiment:

Variability is included in the models as subject-to-subject differences in the strengths of anatomical connections, scan-to-scan changes in the level of attention, and trial-to-trial interactions with non-specific neurons processing noise stimuli. We find that time-series correlations between integrated synaptic activities between the anterior temporal and the prefrontal cortex were larger during the DMS task than during a control task. These results were less clear when the integrated synaptic activity was haemodynamically convolved to generate simulated fMRI activity. As the strength of the model anatomical connectivity between temporal and frontal cortex was weakened, so too was the strength of the corresponding functional connectivity. These results provide a partial validation for using fMRI functional connectivity to assess brain interregional relations.
(emphasis mine, demonstrating the experimental analog of the IC)
Philos Trans R Soc Lond B Biol Sci. 2005 May 29;360(1457):1093-108.
Investigating the neural basis for functional and effective connectivity. Application to fMRI.
Horwitz B, Warner B, Fitzer J, Tagamets MA, Husain FT, Long TW.
http://www.ncbi.nlm.nih.gov/pubmed/16087450

Note that Varela's references, 92, 103, and 104 are similar sources of empirical evidence indicating the importance of the reciprocal dynamical connection. 92 is none other than Karl Friston's "Functional and effective connectivity in neuroimaging":

On page 59, Friston demonstrates (both hypothetically and experimentally) how coarse fractioning (decoupling) of the brain is a neural symptom of schizophrenia. Coincidentally, this matches the qualitative description of schizophrenia (and its etymology: "splitting of the mind").

One plausible example of this, suggested by a psychology professor of mine, is that in the case of, say, auditory hallucinations, Broca's area malfunctions in a way that dissociates (uncouples) it from the feedback network. So it appears to the schizophrenic as if the speech generation (from Broca's area) is not coming from them, but from an external source.
 
  • #16
Pythagorean said:
I was defending computationalism only on the basis of 1. Without first defining 'program', the definition does indeed lack rigor, which would make it easy to evade arguments that constrict it beyond that. Namely, because we can come up with "physical systems" from 1. that are capable of lots of interesting phenomena, and decoupling a system into its components would no longer be a single physical system, but a set of individual physical systems (since they are not causally connected... agree?).
What I think you're saying is that by decoupling a system into its components, such as is done by Zuboff, the physical system is no longer causally connected and therefore it is not a single, physical system. That is essentially what most philosophers are saying by saying it doesn't support counterfactual information. Saying it isn't "causally connected" is the same thing as saying it doesn't "support counterfactual information", which is the same thing as the "non-triviality condition" described by Maudlin and quoted by Ferris. It's all the same argument. The argument is intended to enforce a specific type of system configuration on a computational system. It is intended to say that certain, subjective phenomena will not occur in a system that isn't causally connected, doesn't support counterfactuals or violates the non-triviality condition.

The problem then is what Maudlin has stated, and to quote Ferris on Maudlin:
Imagine 2 pinball machines, the second of which has had exactly those pins removed that are never touched by the ball. We now have 2 different machines but it doesn't make a difference to the paths traced by the pinballs. The counterfactuals are different in each machine (the pinballs would behave differently if the first were to hit a "counterfactual" pin), but the physical activity of the 2 systems is, as it happens, identical. So, counterfactual differences in 2 systems are irrelevant to differences in the physical activity of those systems.

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

I'm looking at this same issue from a different perspective and saying, why should any subjective experience change if we maintain the same causal actions on every part of the brain? There is no way for any part of the brain to "know" the difference between the connected state and the unconnected one. The causal actions acting on every part of the brain are identical, so we have identical feedback on every part; it just isn't coming from the adjacent brain cell, it is coming from an IC instead. The difference then is a "special signal" which can't be differentiated by anything physical. You are asking for a NON-PHYSICAL signal!
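To put that numerically, here is a sketch under my own assumptions, using a toy FitzHugh-Nagumo-style cell as a stand-in for a neuron and a fixed-step integrator: record the coupling signal a cell receives during a fully coupled run, then re-run that cell in isolation driven by the recording. The isolated cell's trajectory comes out identical, step for step, even though it is no longer causally connected to its neighbor.

Code:
import numpy as np

def fhn_rhs(v, w, I):
    # FitzHugh-Nagumo-style cell: a toy stand-in for a neuron.
    return v - v**3 / 3.0 - w + I, 0.08 * (v + 0.7 - 0.8 * w)

dt, n, g = 0.01, 20000, 0.3

# Coupled run: cell 0 is driven by cell 1; record the signal cell 0 receives.
v = [-1.0, -1.2]
w = [0.0, 0.1]
recorded = np.empty(n)
trace_coupled = np.empty(n)
for i in range(n):
    recorded[i] = g * np.tanh(v[1])                       # the actual incoming signal at this step
    dv0, dw0 = fhn_rhs(v[0], w[0], 0.5 + recorded[i])
    dv1, dw1 = fhn_rhs(v[1], w[1], 0.5 + g * np.tanh(v[0]))
    v = [v[0] + dt * dv0, v[1] + dt * dv1]
    w = [w[0] + dt * dw0, w[1] + dt * dw1]
    trace_coupled[i] = v[0]

# Replay run: cell 0 alone, driven by the recording (the "impulse cartridge").
v0, w0 = -1.0, 0.0
trace_replay = np.empty(n)
for i in range(n):
    dv0, dw0 = fhn_rhs(v0, w0, 0.5 + recorded[i])
    v0, w0 = v0 + dt * dv0, w0 + dt * dw0
    trace_replay[i] = v0

# Zero (or numerically negligible): every locally measurable quantity in cell 0 is
# the same whether its input comes from the live neighbor or from the recording.
print(np.max(np.abs(trace_coupled - trace_replay)))

The only thing the replay run lacks is counterfactual sensitivity: had cell 0 done something different, nothing would have answered back.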

Hopefully that clears up the OP. Maybe we need to move on from here. I'll address your second post later.
Pythagorean said:
3 (supervenience) I have considered before as a consequence of physicalism. I don't really see a problem with it, though I recognize the contradiction now, between 2 and 3 as ferris_bg presented it.

2 (non-triviality) I do not understand why this would be a condition for consciousness. From an evolutionary perspective, it makes sense that biological systems that were able to survive and propagate would have counterfactual states as a consequence of being well-prepared for their environment. Does this mean that it's required for consciousness? I don't know. It seems difficult to imagine life lasting very long without having the built-in redundancies that a) allow survival and b) supports counterfactual states.
I'm totally unclear on what you're trying to say here.
 
  • #17
A system not supporting certain counterfactual states won't be conscious if such states should have been triggered, but this doesn't mean this system would have zero value of consciousness all the time. A good example of this is anosognosia (http://en.wikipedia.org/wiki/Anosognosia):
'The Neurology of Consciousness' said:
A well-known example of anosognosia is often found in hemispatial neglect patients. This condition is usually caused by a stroke to the right parietal lobe that causes disruption of attention and spatial awareness of the left side of space. They often behave as if the left side of the world does not exist. For example, they will only dress the right side of their body or eat all the food on the right side of a plate but not the left. Yet, despite the obviousness of the deficit to people observing the patients, the patients themselves are not aware of their deficit. They do not sense that anything is wrong with them!

There is no system to sense that something is wrong, so the patient assumes that everything is normal. For example, when these patients are confronted with a bimanual task in which they cannot complete because they are unable to move their left hand, they may reply with a statement such as "I didn't want to do that task". When the paralyzed hand is presented to them, they often respond with the rationalization "That's not my hand".


So now the example with the two systems, one supporting counterfactual states and the other not, looks like "normal condition" software and "anosognosia condition" software applied to the system's attention modules. When identical physical activity is running through some of the other modules, both systems are conscious; when spatial awareness of the left side of space is triggered, only the "normal condition" machine is conscious; when attention at the right side of space is triggered, again both systems are conscious.
 
  • #18
Q_Goest said:
Computationalism has a number of inconsistencies that haven’t logically been refuted. The purpose of this thread is to discuss one of them and see if there’s a way out of the problem and/or to help get perspectives on the problem. That problem I’ll call the “special signal problem”. (...) Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.
I'm quite comfortable with having 'computationalist' on a T-shirt. However, I certainly won't predict that. What I would say is that the “impulse cartridge” is itself a (partial or total) simulation of the consciousness, so the consciousness is (partially or totally) in the cartridge.

What was the problem?
 
  • #19
Let's simplify this a bit:

Cartridge A - capable of providing input to the LHS of the brain, which is equivalent to what the RHS would normally provide.
Cartridge B - capable of providing input to the RHS of the brain, which is equivalent to what the LHS would normally provide.

So when A is connected to the LHS of a brain, the brain doesn't know the difference. And when B is connected to the RHS of a brain, the brain again doesn't know the difference. (As per the OP).

Now, what if you connect those two cartridges together? Is that system considered conscious?

If the cartridges are reacting to external input and working together (providing each other inputs / accepting outputs), reacting appropriately, I'd say the system is conscious. After all, that's all our brain does.

If we then 'widen the gap' between A and B and use wires to bridge the gap, does that change anything?

To me, simply having a neuron / wire / any connector doesn't alter anything. It is what the system does 'as one' that matters.

A bunch of neurons reacting to artificial signals isn't a brain.

To me, this comes down more to what you consider consciousness to be. A brain that doesn't react to any outside stimulus / input is nothing more than a computer running on a loop.
 
  • #20
jarednjames said:
Let's simplify this a bit: (...) To me, simply having a neuron / wire / any connector doesn't alter anything. It is what the system does 'as one' that matters. A bunch of neurons reacting to artificial signals isn't a brain. (...) A brain that doesn't react to any outside stimulus / input is nothing more than a computer running on a loop.
Good post jarednjames, thanks for that. So you're saying that if the LHS and RHS aren't connected, that doesn't meet the definition of a brain, so it isn't conscious? Or maybe there is just a small change in the experience. Perhaps we could then tell* someone, "Hey, my experience just changed when you stopped allowing the LHS to talk to the RHS!"

If we can acknowledge the fact that our experience changed, how did we know? What physical change to either side of the brain occurred that allowed us to distinguish this change? Didn't we just say the signals going into either side are physically identical regardless of whether or not the 2 halves are connected? Or perhaps our experience just slowly fades away as more and more cuts are made, but we are unable to report any change because we don't notice?

If we contend that the signals are physically identical but the phenomenal experience changes (and perhaps we can even report it), then we need a special signal, one that is somehow causally linked to the other side of the brain. We need something more than a signal that is a physical duplicate!

*Notice in Zuboff's story that he repeatedly goes back to checking the ability of the brain in the vat to report its condition.
 
  • #21
Well, if you cut my brain in half and then attach a computer (let's ignore memories and the like) that accepts input from my internal areas, eyes, nose, etc., processes it in the same way as the RHS of my brain, and even provides input to the LHS where appropriate, I'm not going to notice a difference.

Now, if you had a cartridge attached acting as the RHS of my brain, if it isn't accepting external input from the above mentioned areas, eventually I will notice as it won't be reacting to changes.

The problem I see here is that we are assuming the LHS does exactly the same as the RHS, that they report everything they do to each other, and that they can therefore work independently of each other as long as the input from each side is maintained / emulated. The moment you remove the RHS, you need to compensate for it by simulating it identically (whether by providing the required input to the LHS or to anywhere else in the body), otherwise changes will be noticed. The system won't be functioning as it was before and so isn't symmetrical.

Brain in a jar or not, if it isn't attempting to monitor / process / compensate for other areas of the body as it normally would, it is no longer functioning as a brain would.

If you had a cartridge which 'knows' exactly what it needs to provide signal-wise (so it knows every future input the LHS would receive if the RHS were intact), you wouldn't notice a difference insofar as the LHS is concerned, but you would notice that certain features only present in the RHS aren't there.

An example of this: if your sense of smell were handled in the RHS of the brain and no signal relating to it were sent to the LHS, then removing the RHS would mean losing your sense of smell. You would no longer be functioning in the same way you were before.
 
  • #22
Q_Goest said:
What I think you're saying is that by decoupling a system into its components, as done by Zuboff, the physical system is no longer causally connected and therefore is not a single physical system. That is essentially what most philosophers mean when they say it doesn't support counterfactual information. Saying it isn't "causally connected" is the same thing as saying it doesn't "support counterfactual information", which is the same thing as the "non-triviality condition" described by Maudlin and quoted by Ferris. It's all the same argument. The argument is intended to enforce a specific type of system configuration on a computational system. It is intended to say that certain subjective phenomena will not occur in a system that isn't causally connected, doesn't support counterfactuals, or violates the non-triviality condition.

The problem then is what Maudlin has stated, and to quote Ferris on Maudlin:


I'm looking at this same issue from a different perspective and asking: why should any subjective experience change if we maintain the same causal actions on every part of the brain? There is no way for any part of the brain to "know" the difference between the connected state and the unconnected one. The causal actions acting on every part of the brain are identical, so we have identical feedback on every part; it just isn't coming from the adjacent brain cell, it is coming from an IC instead. The difference then is a "special signal" which can't be differentiated by anything physical. You are asking for a NON-PHYSICAL signal!

Hopefully that clears up the OP. Maybe we need to move on from here. I'll address your second post later.

Well no, you don't have feedback on every part... it defies the definition of feedback to disconnect the two components and simulate them both at the same time.

To make this work, you need a Maxwellian demon to invest energy into measuring one system, processing how it would affect the other system, then doing the same with the other system. You're effectively reconnecting them causally in order to be able to make an IC in the first place. Anyway, post #15 has the empirical evidence and touches on the Maxwell demon problem.
 
  • #23
Q_Goest said:
*Notice in Zuboff's story that he repeatedly goes back to checking the ability of the brain in vat to report it's condition.
Not exactly. He repeatedly goes back to checking the ability of a brain equivalent to report its condition. As the equivalent is supposed to be... well, equivalent, a computationalist view needs to say that consciousness remains the same.

The problem is not with computationalism; the problem is with your assertion of what computationalists will predict. Remove this incorrect assumption and the paradox fades away.
 
  • #24
Great! We’re getting somewhere.

Now let’s go back to the story and reread what Zuboff says. If we argue off the top of our heads, we’ll end up misunderstanding and come up with reasons why the story doesn’t make any sense. Gotta read the story a few times perhaps before you grasp the implications.

jarednjames said:
Now, if you had a cartridge attached acting as the RHS of my brain, if it isn't accepting external input from the above mentioned areas, eventually I will notice as it won't be reacting to changes.
You’re absolutely right. But that’s not what Zuboff is saying.
Pythagorean said:
To make this work, you need a Maxwellian demon to invest energy into measuring one system, processing how it would effect the other system, then doing the same with the other system. You're effectively reconnecting them causally to be able to make an IC in the first place.
You’re in the same boat as jarednjames. That’s not quite what Zuboff has in mind.

What Zuboff is suggesting is that he wants to provide “experiences” to the brain. By that, he's saying that, hypothetically, the scientists in charge know how the brain is arranged and what it will do; to provide these recorded experiences, they must know what every neuron is going to do before it does it. He suggests that these experiences are derived empirically from paid subjects. He says:
Zuboff said:
His scientist friends kept busy researching, by means of paid subjects, which patterns of neuron firings were like the natural neural responses to very pleasant situations; and, through the use of a complex electrode machine, they kept inducing only these neural activities in their dear friend's brain.

Like you, I'm sure that trying to figure out exactly what a brain is going to do, right down to the individual neuron, is a feat of science we may never be able to achieve. But in principle, the brain is a physical thing, so it must conform to the laws of nature, which we must assume are knowable in principle. If it makes it any easier, we might use something easier than a brain to predict. Something like a computer in which every transistor is completely predictable. We could duplicate a given computer many times over, determine what experiences it might have, and duplicate those experiences on another, identical computer. Regardless, I think that in principle, Zuboff's argument holds. All he wants to do is suggest that one can, in principle, know how a given physical substrate is going to behave in physical terms, so that the time evolution of that physical thing is predictable at least in principle.

Once we understand what every neuron does in a given situation or experience, we can simply ‘plug’ that input into the various senses so those receptors experience a given situation exactly as it would occur if the brain were in a human body. Now, we could watch as every neuron did exactly as we expected it to do, firing in a synchronous behavior and developing experiences in the brain.
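
As a toy illustration of that predictability claim (a hedged sketch of my own, not Zuboff's machinery): in a deterministic system we can record the inputs every component receives during a connected run and then replay exactly those recorded inputs to each component in isolation, and each component goes through the same state changes either way.

Code:
# Toy deterministic "brain": each cell's next state depends on its own state
# and on the signals it receives from its two neighbours (arbitrary made-up rule).
def next_state(own, left_in, right_in):
    return (own + 2 * left_in + 3 * right_in) % 7

# Connected run: record the signals every cell actually receives at each step.
states = [1, 2, 3, 4]
recording = []                     # recording[t][i] = (left_in, right_in) for cell i
for _ in range(5):
    inputs = [(states[(i - 1) % 4], states[(i + 1) % 4]) for i in range(4)]
    recording.append(inputs)
    states = [next_state(states[i], *inputs[i]) for i in range(4)]
connected_final = states

# Decoupled run: each cell evolves alone, fed the recorded signals
# (the analogue of an "impulse cartridge" at every cut).
cells = [1, 2, 3, 4]
for t in range(5):
    cells = [next_state(cells[i], *recording[t][i]) for i in range(4)]

print(connected_final == cells)    # True: identical state evolution in both runs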

And once we know what every neuron is doing, Zuboff is suggesting that we cut this brain in half. Since we know what it will do, we can duplicate exactly all the inputs to the two halves. Zuboff then continues cutting the brain into halves until finally he gets down to individual neurons.
Zuboff said:
First it was agreed that if a whole-brain experience could come about with the brain split and yet the two halves programmed as I have described, the same experience could come about if each hemisphere too were carefully divided and each piece treated just as each of the two hemispheres had been. Thus each of four pieces of brain could now be given not only its own bath but a whole lab-allowing many more people to participate. There naturally seemed nothing to stop further and further divisions of the thing, until finally, ten centuries later, there was this situation-a man on each neuron, each man responsible for an impulse cartridge that was fixed to both ends of that neuron -- transmitting and receiving an impulse whenever it was programmed to do so.
Does this help explain the story? What else needs to be explained? Perhaps I’ll stop here and look to ensure people have a grasp of the story before we try and discuss it.
 
  • #25
Q Goest, thanks for your lengthy and informative reply to my somewhat flippant post. I was expecting a warning. Anyway, I'm following the discussion in conjunction with reading the material you referenced, including, and of course especially, Zuboff's story.
 
  • #26
Q_Goest said:
May I suggest reviewing the paper again? Zuboff is suggesting the IC's act in a way that does not support counterfactual information.
I had missed that. Well, that's not what I understood from the paper. If the IC does not support 'counterfactual information', then it cannot replace a hemisphere, which possesses its own source of inputs. Then there is no computational equivalence with the biological brain, and all interest in this thought experiment fades away.

In the end, it seems that this thought experiment is just built on a description vague enough that one will either think the biological brain is being simulated or not. The trick is to assume the first and to conclude from the second.

So Pythagorean was right from the beginning: that's straw man building. I'm not saying it is dishonest*, but the fact that it has been discussed in books... how weak is that argument. The same is true for Penrose and Searle: bright guys, honest thinkers, and well... it's not hard to see the inconsistencies in their philosophy of mind.


*Maybe 'strawman' has this flavour in English: I'm just saying that the argument is built on misguided assumptions about what computationalists' predictions should be.
 
  • #27
Pythagorean said:
So do you mean efficacy in the electrical engineering sense, or the neuroscience sense?
It is just the connections, the signals and the processing, whether it is neurons or other things. If any man-made processor can be as efficient as the brain, it will mimic consciousness. But, of course, it will need the required peripherals. Once we provide the peripherals, it will need the required environment. So the basic difference, I think, is that we are born, not created. But a computer is created, not born, so it will always require a creator to provide the required environment. (The present environment itself, I think, is a deterministic outcome and not a random choice.) If we made a computer with peripherals suited to the existing environment and with a performance matching ours, it would resemble us in all respects, and that would be tantamount to cloning ourselves. Then the only question would be one of patent rights.
 
  • #28
Let's start from the very beginning.

Three premises of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactual states - states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce identical mentality (if they produce any at all).

Definitions as presented by Daniel M. Hausman (1998, http://books.google.com/books?id=_8HjZaRwU-UC&source=gbs_navlinks_s) and by David Lewis (1973):

Causal connection:
CC: For all events A and B, A and B are causally connected if and only if they are distinct and either A causes B, B causes A, or A and B are effects of a common cause.

Counterfactual dependence implies causal connection:
CDCC: If A and B are distinct events and B counterfactually depends on A, then A and B are causally connected.

Counterfactual dependence: Effects counterfactually depend on their causes, while causes do not counterfactually depend on their effects and effects of a common cause do not counterfactually depend on one another.

Distinctness means that the events are not identical, neither overlaps the other, and neither implies the other.


Now let's look at the following scenario, where a machine is waiting for some input:

Waiting for input: Input a number N (we press 4):

Event A:
option 1) if N == 4: cause B
option 2) else if N == 7: cause C
option 3) else cause D

We have the following physical activity when we input 4:
A (B | C | D) --[ 4 ]-> B, which can be reduced to: A -> B

Let's remove option 2 now and input 7:
A (B | D) --[ 7 ]-> D, which can again be reduced to: A -> D

Now let's remove option 3, input 4 and see what the possible results are:
A (B) --[ 4 ]-> B? Can we reduce this to A -> B?
case 1) Yes, we can: the computation "if N == 4: cause B" still exists.
case 2) No, we can't: "N == 4" is NOT defined, because the "else condition" is missing - the condition under which everything except the number 4 is handled, so that the concept of numbers is still defined. You can't define a single number without defining the whole concept, so the "else condition" is required.
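
To make the two readings concrete, here is a rough Python sketch (the function names are mine): the full machine and the machine with options 2 and 3 stripped out produce the same observable activity on the input that actually arrives, and only come apart on inputs that never occur.

Code:
# Full machine: event A with all options present (supports counterfactual states).
def machine_full(n):
    if n == 4:       # option 1
        return "B"
    elif n == 7:     # option 2
        return "C"
    else:            # option 3
        return "D"

# Reduced machine: options 2 and 3 removed; only the branch the actual input
# will exercise is left, and n != 4 is simply not handled.
def machine_reduced(n):
    if n == 4:       # option 1 is all that remains
        return "B"

print(machine_full(4), machine_reduced(4))   # B B    -> same activity, A -> B
print(machine_full(7), machine_reduced(7))   # C None -> they differ only counterfactually

Whether the reduced run still counts as the computation "if N == 4: cause B" is exactly the disagreement between case 1 and case 2 above.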


That's what Cassander is asking:
Zuboff (http://themindi.blogspot.com/2007/02/147.html) said:
Now we are about to abandon yet another condition of usual experience - that of actual causal connection. Granted you can be clever enough to get around what is usually quite necessary to an experience coming about. So now, with your programming, it will no longer be necessary for impulses in one half of the brain actually to be a cause of the completion of the whole-brain pattern in the other hemisphere in order for the whole-brain pattern to come about. But is the result still the bare fact of the whole-brain experience or have you, in removing this condition, removed an absolute principle of, an essential condition for, a whole-brain experience really being had?


The scientists accept case 1) and it turns out that the consciousness disappears. Now let's go back to the other example.

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

Here Cassander's friends are making the same mistake again, because the term "identical physical activity" implies that the informational concept of both machines is identical. By removing the counterfactual states you change the whole concept, so the physical activities may seem identical to the observer, but they are in fact not. The machine may select the only option it has left, and the physical activity will look like "A -> B", but it can't be reduced to A -> B, because the option is taken "subconsciously" (the concept is not defined, thus no computation is being made). We have a philosophical zombie (http://www.iep.utm.edu/functism/#H6) in the house.
 
  • #29
Ferris_bg said:
Let's start from the very beginning.

Three premises of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactual states - states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce identical mentality (if they produce any at all).
The very beginning, to me, is Turing. I don't think the three premises you cite are helpful. In fact, there is a misleading use of the word 'program' with different meanings.

Try replacing 'program' with 'algorithm' and have a look at premises 1 and 3. An algorithm can run a division on some inputs, and nothing is defined if the input is not a number. In the same way, you can have an algorithm that leads to consciousness on some inputs and not on others. So to make sense of these premises, you need to take the input to be part of the physical system. But then premise 2 cannot hold - you can't change the input without changing the physical system.

Try replacing 'program' with 'Turing machine'. Now premises 1 and 3 are perfectly sound, but not premise 2: the input is part of what defines the Turing machine, so you don't have any Turing machine that supports the non-triviality condition. Algorithms do, because each time the input is changed the same algorithm will correspond to a different Turing machine.

So for premises 1 and 3 'program' means 'Turing machine', whereas for premise 2 'program' means 'algorithm'. This is what makes these 'premises' misleading.
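
In rough code terms (a sketch of my own, assuming the distinction above): the algorithm is the bare function, which handles counterfactual inputs by construction; a 'Turing machine' in the sense needed for premises 1 and 3 is the function together with its fixed input, i.e. one particular computation.

Code:
from functools import partial

# The algorithm: defined for a whole range of inputs, so counterfactual
# alternatives are built into it.
def divide(a, b):
    return a / b

# A machine with its input fixed: one particular computation.  Changing the
# input doesn't put the same machine into a different state - it gives you a
# different fixed computation altogether.
run_on_10_2 = partial(divide, 10, 2)
run_on_10_5 = partial(divide, 10, 5)

print(run_on_10_2())   # 5.0
print(run_on_10_5())   # 2.0 - same algorithm, different "machine"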

Ferris_bg said:
It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't.
So no, it's not easy at all. You're saying that two machines can be both identical and have different inputs. If they're identical in the sense of a Turing machine, the input is the same. If they're identical in terms of the algorithm, then both will support 'counterfactual states'.
 
  • #30
Lievo said:
So Pythagorean was right from the beginning: that's straw man building.

Lievo said:
I don't think the three premisses you cite are helpfull. In fact, there is a misleading use of the word 'program' with different meanings.
There's a reason every other post in this forum is locked and this is why. People want to disregard the academic value of philosophy and talk about whatever pops into their head. Zuboff's story isn't a strawman as any philosopher will attest. Further, Ferris_bg's post is on the mark in quoting Maudlin's 1989 paper which is relevant to this thread. If you don't understand the literature, you need to ask, otherwise your ignorance of the literature here only confuses the discussion and takes us off course.
 
  • #31
Ferris_bg said:
Causal connection:
CC: For all events A and B, A and B are causally connected if and only if they are distinct and either A causes B, B causes A, or A and B are effects of a common cause. (...)

Now let's remove option 3, input 4 and see what the possible results are:
A (B) --[ 4 ]-> B? Can we reduce this to A -> B?
case 1) Yes, we can: the computation "if N == 4: cause B" still exists.
case 2) No, we can't: "N == 4" is NOT defined, because the "else condition" is missing - the condition under which everything except the number 4 is handled, so that the concept of numbers is still defined. You can't define a single number without defining the whole concept, so the "else condition" is required.
Hi Ferris. Another good write-up!

I'd like to touch on the "else condition" you mention. In Maudlin's paper, he shows how his beloved Olympia can be provided with all the right causal structure, except that he adds "blocks" which don't touch the machine and don't interact unless counterfactual information is required, in which case those blocks prevent the machine from operating successfully. He calls these the "argument by addition" and the "argument by subtraction": the added blocks are the addition and the rusty chains are the subtraction.
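
A very loose rendering of the "argument by addition" in code (my own sketch, not Maudlin's actual machinery): the added block does nothing at all on the run that actually occurs, yet it wrecks every counterfactual run.

Code:
# A machine with the "right" causal structure: it would handle any input correctly.
def machine(n):
    return "B" if n == 4 else "D"

ACTUAL_INPUT = 4

# "Argument by addition": bolt on a block that never touches the actual run
# but jams the machine whenever a counterfactual branch would be needed.
def machine_with_block(n):
    if n != ACTUAL_INPUT:
        return "jammed"            # the block only ever engages counterfactually
    return machine(n)

print(machine(4), machine_with_block(4))   # B B      - identical activity on the actual run
print(machine(9), machine_with_block(9))   # D jammed - only the counterfactuals differ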

In the end, Maudlin concludes:
Maudlin said:
Olympia has shown us at least that some other level besides the computational must be sought. But, until we have found that level and until we have explicated the relationship between it and computational structure, the belief that pursuit of the pure computationalist program will ever lead to the creation of artificial minds, or to understand the natural ones, remains only a pious hope.
I would say that Maudlin has concluded that the "else condition" is not a requirement for the creation of mind. He seems to have quite a few supporters in that regard, including Zuboff, Hilary Putnam and Mark Bishop (http://www.gold.ac.uk/computing/staff/m-bishop/). Mark is an interesting character. Being a professor of cognitive computing, you'd think such a person would naturally be a computationalist, but he's defended the idea that counterfactuals are not a necessary condition for over a decade now, writing perhaps a dozen or more papers on it.

I think the most fundamental problem with the idea of counterfactuals is that people expect computers to be what they define them as. However, computers are symbol manipulation systems, and as such, they are observer relative. There is nothing intrinsic to nature about them.

I’d be interested in your feedback.
 
  • #32
Lievo,

I suppose what you have in mind about the "Turing vs algorithm" confusion is well illustrated in this reference:
'Philosophy of Mind' said:
Turing's thesis: If two systems are input-output equivalent, they have the same psychological status; in particular, one is mental just in case the other is.

For machine functionalism is consistent with the denial of Turing's thesis: It will say that input-output equivalence, or behavioral equivalence, isn't sufficient to guarantee the same degree of mentality. What arguably follows from machine functionalism is only that systems that realize the same Turing machine - that is, systems for which an identical Turing machine is a correct machine description - enjoy the same degree of mentality.

It appears, then, that Turing's thesis is mistaken: Internal processing ought to make a difference to mentality. Imagine two machines, each of which does basic arithmetic operations for integers up to 100: Both give correct answers for any input of the form n + m, n x m, n - m, and n / m for whole numbers n and m less than or equal to 100. But one of the machines calculates ("figures out") the answer by applying the usual algorithms we use for these operations, whereas the other has a file in which answers are stored for all possible problems of addition, multiplication, subtraction, and division for integers up to 100, and its computation consists in "looking up" the answer for any problem given to it. The second machine is really more like a filing cabinet than a computing machine; it does nothing that we would normally describe as "calculation" or "computation". Neither machine is complex enough to be considered for possible mentality; however, the example should convince us that we should consider the structure of internal processing, as well as input-output correlations, in deciding whether a given system has mentality. If this is correct, it shows the inadequacy of a purely behavioral test, such as the Turing test, as a criterion of mentality.
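
A compact sketch of the two machines described in that quote (hedged; the names and the range limit are mine): both are input-output equivalent on the stipulated range, but one calculates while the other only retrieves.

Code:
# Two input-output equivalent "adders" for whole numbers up to 100:
# one calculates, the other merely looks the answer up in a precomputed file.
LIMIT = 100

def adder_calculating(n, m):
    return n + m                              # applies the usual algorithm

LOOKUP = {(n, m): n + m                       # the "filing cabinet": every answer pre-stored
          for n in range(LIMIT + 1)
          for m in range(LIMIT + 1)}

def adder_lookup(n, m):
    return LOOKUP[(n, m)]                     # no calculation, just retrieval

print(adder_calculating(37, 55), adder_lookup(37, 55))   # same outputs, different internal processing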


Q_Goest,

By rejecting the non-triviality condition, Bishop and the others are only welcoming panpsychism, which does not contradict functionalism. Whether or not counterfactuals are crucial for consciousness, you are still nowhere near disproving token identity theories. Personally I favor the system approach as the best materialistic option. I am an idealist, on a side note.
 
  • #33
Ferris_bg said:
By rejecting the non-triviality condition, Bishop and the others are only welcoming panpsychism, which does not contradict with functionalism.
You got me thinking on this one... Actually, Bishop is not welcoming panpsychism; he's suggesting that counterfactuals are not physical, similar to what I've stated in the OP, so we can't invoke them as a requirement for computationalism. Once we dispense with counterfactuals (because they're simply wrong), computationalism predicts panpsychism, and panpsychism is unacceptable. So in order to avoid panpsychism, we have to avoid computationalism. One might think the answer is to insist on the non-triviality condition, but Bishop is suggesting we reject that, just as Maudlin does, and once we reject non-triviality (once we reject the requirement for counterfactuals), computationalism predicts panpsychism. Perhaps we should go through that argument as well...

Bishop's argument follows work by Putnam which is published in his book, "Representation and Reality". In the appendix, Putnam has an argument that attempts to discredit functionalism to the extent that he feels functionalism only implies behaviorism. Putnam is retired now, so Bishop has taken up his flag so to speak, and continues to work on advancing Putnam's argument.
 
  • #34
Q_Goest said:
Zuboff's story isn't a strawman as any philosopher will attest.(...) your ignorance
Any philosopher, really? :redface: (...) Well, you're downgrading from appeal to authority to simple insults. For the sake of the discussion you started, I hope you'll come to your senses and do something better - addressing my points, for example, or the points of anyone not already agreeing with you.

Ferris_bg said:
Lievo,

I suppose what you have in mind about the "Turing vs algorithm" confusion is well illustrated in this reference:
one of the machines calculates ("figures out") the answer by applying the usual algorithms we use for these operations, whereas the other has a file in which answers are stored (...) and its computation consists in "looking up" the answer for any problem given to it.
Very pertinent. Here the question would be: how was the look-up table filled in the first place?

It can be done by picking numbers at random, at least theoretically, so at least theoretically there is the possibility of a philosophical zombie. But in practice you won't be able to do that, any more than you'll be able to see a ball tunneling through a wall - even if it's theoretically allowed by quantum mechanics. What you'll need to do to fill the table is to compute it, and that's where consciousness should be said to be, if the algorithm belongs to this sub-class.

In a sense, the bits that constitute this message are no more conscious than the look-up table, and in both cases it's just the shadow of some mind.
 
  • #35
Q_Goest, you still haven't responded to post #15, which has sources and quotes from experts and confronts the physical premise of the thought experiment - a premise I feel I've shown to be flawed, which makes the question not very productive. The strawman is being built for physicalism, I guess, not computationalism.

The thought experiment explicitly only applies to one kind of physical system: linear systems (which don't truly exist, but are convenient approximations). I would think this is an important epistemological consideration.
 
