What is the special signal problem and how does it challenge computationalism?

  • Thread starter: Q_Goest
  • Tags: Signal
AI Thread Summary
The "special signal problem" challenges computationalism by questioning whether a brain can maintain consciousness when its parts are disconnected yet still receive simulated signals. Arnold Zuboff's thought experiment illustrates this by depicting a brain in a vat that is cut into halves and then smaller sections, each receiving signals from an "impulse cartridge." Despite the signals being identical to those in a connected brain, the argument posits that without actual causal connections, phenomenal consciousness cannot be sustained. This leads to the conclusion that a unique "special signal" is necessary for consciousness, as mere duplication of signals does not suffice. The discussion emphasizes the need for a deeper understanding of how counterfactual alternatives are crucial to consciousness within any theory of mind.
Q_Goest
Science Advisor
Computationalism has a number of inconsistencies that have not been logically resolved. The purpose of this thread is to discuss one of them, to see if there's a way out of the problem, and to gather perspectives on it. I'll call that problem the "special signal problem".

In the paper by Arnold Zuboff* entitled "The Story of a Brain" (http://themindi.blogspot.com/2007/02/147.html), Arnold provides a thought experiment by creating a story around a brain in a vat. The brain is provided with all the same inputs and outputs it would have had in the person's head, but like The Matrix, he has this brain in a jar and simply provides the same signals to simulate the brain being in a head. By providing these signals, we can safely assume the brain undergoes the same changes of state as it would otherwise go through in the head, so it is assumed the experiences are also identical. I'd recommend reading the story; it's very entertaining.

The first twist to the story is to have the brain cut in half. He then suggests that the neurons at the cut are provided the same signals they would have received in the intact brain, except that the opposite brain half doesn't produce them. Instead, the signals are provided by an "impulse cartridge". What exactly that is isn't important; all that matters is that it can simulate the connection at each broken synapse or other fractured surface. It provides a "signal" that allows each half of the brain to continue firing as if the two halves were still together and connected as a single brain.

Arnold then separates the brain into smaller sections, again using the impulse cartridge to simulate the interactions at each of the breaks in the brain until finally, the brain is separated into individual neurons. We can then ask the question of whether or not the brain still experiences everything as it did when it was together as a complete brain in a vat. The implied assumption is that the brain can’t remain consciously aware of anything any more, regardless of whether or not the individual neurons are still operating as if they were together. Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.

To explain why the brain might no longer be able to support phenomenal consciousness, a character in the story, Cassander, suggests there are four guiding principles, any one of which, when violated, might prevent phenomenal experience from occurring. They are:
- The condition of proximity: The cells are no longer close together as they were in the brain.
- The condition of no actual causal connection: Also called "counterfactual information", the problem with no causal connection between the parts of the brain is the problem most philosophers focus on. That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the "special signal".
- The condition of synchronization: The cells may no longer be in sync.
- The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.

The special signal problem here will focus on the counterfactual sensitivity concern, since that is the concern the literature concentrates on.

The problem that Arnold writes about has also been written about by others in different ways including Maudlin, Putnam and Bishop. But regardless of how the problem is approached, the defenders of computationalism focus on why the system in question must be able to support counterfactual information and thus be capable of performing other computations. That is, regardless of whether or not a portion of a system is needed to perform a computation, if the system can not support the right counterfactual information, it can't instantiate a program and it can't support phenomenal consciousness.

Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected. In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one, since the signals are still provided. This seems to suggest that the duplicate signal provided by the impulse cartridge is not sufficient; only the original signal is sufficient to create the phenomenon of consciousness. The duplicate signal may have all the same properties, may be indistinguishable from the original, and may maintain all the same objectively measurable properties throughout the disconnected brain. But the duplicate signal is not sufficient, per computationalism, to support consciousness. We need a special signal. We need the original one.
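To make that premise concrete, here is a minimal toy sketch (my own illustration, nothing from Zuboff's paper) of the assumption being relied on: a deterministic unit fed a recorded copy of its original input passes through exactly the same sequence of states as it did when driven live. That is all the impulse cartridge is asked to do.

Code:
# Toy illustration (not Zuboff's): a deterministic "neuron" driven by a recorded
# copy of its input traverses exactly the same states as when it was driven live.
import numpy as np

def update(state, inp):
    # any fixed deterministic update rule will do for the point being made
    return np.tanh(0.9 * state + inp)

rng = np.random.default_rng(0)
live_input = rng.normal(size=100)      # what the neighboring neurons "actually" sent

s, live_states = 0.0, []
for x in live_input:                   # live run
    s = update(s, x)
    live_states.append(s)

s, replay_states = 0.0, []
for x in live_input:                   # replay run: the impulse cartridge plays back the same signal
    s = update(s, x)
    replay_states.append(s)

assert np.allclose(live_states, replay_states)   # same signals in, same state trajectory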

From this analysis, it is clear that any theory of mind must address how and why counterfactual alternatives are crucial to consciousness. We need to understand what is so special about that particular signal.

*Story available on the web at: http://themindi.blogspot.com/2007/02/147.html
and is part of the book by Hofstadter and Dennett (editors) entitled “The Mind’s I: Fantasies and Reflections on Self and Soul”.
 
Q_Goest said:
The first twist to the story is to have the brain cut in half. He then suggests that the neurons at the cut are provided the same signals they would have received in the intact brain, except that the opposite brain half doesn't produce them. Instead, the signals are provided by an "impulse cartridge". What exactly that is isn't important; all that matters is that it can simulate the connection at each broken synapse or other fractured surface. It provides a "signal" that allows each half of the brain to continue firing as if the two halves were still together and connected as a single brain.

This seems like a logical fallacy to me. If there's no feedback connection between regions of the brain, then you've physically simplified the system, haven't you? It's no longer a single nonlinear system; it's two systems that obey the principle of superposition (if you're assuming that you can generate input for one independently of the other, then you're assuming they're separable).

http://pre.aps.org/abstract/PRE/v64/i6/e061907
http://www.nature.com/nrn/journal/v2/n4/abs/nrn0401_229a.html

Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.

Of course, which is a good indication that you're not really confronting computationalism, but building a straw man. This seems to contradict what you say here:


Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected. In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided.

This is false! Nonlinear coupled systems can't be uncoupled without physically changing the objectively measurable phenomena. Superposition does not hold for such systems. Once you've destroyed the feedback loops, you've destroyed a significant aspect of neural processing.
 
Pythagorean said:
there's no feedback connection between regions of the brain
How does any portion of the brain "know" that it has no "feedback" (whatever "feedback" is)? It has every bit as much feedback in both cases. It has exactly the same feedback. It has the same exact signals. You're asking for a special signal.
Pythagorean said:
but building a straw man. This seems to contradict what you say here:
I don't see the contradiction. Also, you need to refrain from using the term "strawman". Instead, just provide your argument. The term is generally understood to be derisive. If you really must use the term, show how it is a strawman, don't just throw out the term without understanding what it means.
This is false! Nonlinear coupled systems can't be uncoupled without physically changing the objectively measureable phenomena.
This issue is fairly straightforward. Take a control volume as used in fluid mechanics to describe any nonlinear fluid system. The standing assumption in engineering and the sciences is that a control volume exhibits the same objectively measurable phenomena as a similar volume that hasn't been isolated, provided the same conditions hold at its boundaries. All the equations are the same. In fact, if they were not separable, you'd need to violate conservation of mass, momentum or energy. There are views on both sides of this issue (Scott on one side, many others opposed). In general I'd say Scott is outvoted.
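For anyone unfamiliar with the formalism, the standard control-volume statement of a conservation law (restated here only as background, not something from Zuboff) is

$$\frac{d}{dt}\int_{CV} \rho\,\phi\; dV \;=\; -\oint_{CS} \rho\,\phi\,(\mathbf{u}\cdot\hat{\mathbf{n}})\; dA \;+\; \int_{CV} s_\phi\; dV$$

where \(\phi\) is any conserved quantity per unit mass and \(s_\phi\) its volumetric source. What happens inside the volume is fixed by what crosses its surface plus what is generated inside, which is the sense in which an isolated volume, given the same boundary conditions, behaves the same as one that was never cut out of the flow.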
 
Q_Goest said:
How does any portion of the brain "know" that it has no "feedback" (whatever "feedback" is)? It has every bit as much feedback in both cases. It has exactly the same feedback. It has the same exact signals. You're asking for a special signal.

I'm not claiming anything doesn't "know". I'm saying you're physically changing the system arrangement, so a qualitative change isn't contradictory to the basis of computationalism (or more importantly physicalism in general).

Feedback is exactly what it is defined to be in signal processing. You have a guitar, an amplifier, and headphones. If you disconnect the output from the input (by using headphones) you won't get feedback. If you connect the output to the input, you do: use an amplifier and stand close enough that the amplifier's output affects the string vibrations, which affect the induction coils, which feed through to the output and back in again. Or stand in front of a TV with a video camera that is attached to the TV.
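As a rough sketch of what I mean (my own toy example, not from any of the references), think of it in discrete time: with the loop open the output is just the input, and with the loop closed, each output sample re-enters as part of a later input sample.

Code:
# Toy feedback loop: open loop (headphones) vs closed loop (amp in the room).
import numpy as np

def run(x, g, delay=5):
    y = np.zeros_like(x)
    for n in range(len(x)):
        fb = y[n - delay] if n >= delay else 0.0   # delayed output re-entering the input
        y[n] = x[n] + g * fb
    return y

x = np.zeros(50)
x[0] = 1.0                      # a single "pluck"
open_loop = run(x, g=0.0)       # no feedback: the pluck appears once and dies
closed_loop = run(x, g=0.9)     # feedback: the pluck keeps re-exciting the output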

But the important thing is that we have several nonlinear objects coupled together (with at least three different kinds of couplings: inhibitory, excitatory, and diffusive). What signal would you introduce to each one if they weren't causally connected anymore? Their signal at any moment depends on the state of the rest of the ensemble, and especially their neighbors. You also lose the effects of volume transmission (spatial coupling: neurotransmitters, possibly the electric field, glia dynamics), which are important to global regulation... as you've already alluded to, I thought, when you stated that computationalism would already agree that the system has been broken.

The term is generally understood to be derisive. If you really must use the term, show how it is a strawman, don't just throw out the term without understanding what it means.

It is a strawman because it misrepresents the computationalist position in a fundamental way. It makes a physical change that would affect processing and then claims that computationalism wouldn't predict this change. That is the issue we are currently debating.
This issue is fairly straightforward. Take a control volume as used in fluid mechanics to describe any nonlinear fluid system. The standing assumption in engineering and the sciences is that a control volume exhibits the same objectively measurable phenomena as a similar volume that hasn't been isolated, provided the same conditions hold at its boundaries. All the equations are the same. In fact, if they were not separable, you'd need to violate conservation of mass, momentum or energy. There are views on both sides of this issue (Scott on one side, many others opposed). In general I'd say Scott is outvoted.

A control volume deals with three linearly independent dimensions. I'm talking about the most common biophysical neuron models (Hodgkin-Huxley, Morris-Lecar). In these models, the dimensions of the phase space represent the most likely candidates for the information transfer of the neuron: membrane potential and ion channel permeability.

Voltage-gated channels are dependent on voltage, and voltage is dependent on the voltage-gated channels. So there is already a feedback mechanism within one cell (one that would still exist in the isolated cell).
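Concretely, the standard Hodgkin-Huxley membrane equations have the form (writing the textbook form here as a sketch, parameters omitted):

$$C_m \frac{dV}{dt} = -\bar{g}_{Na}\, m^3 h\,(V - E_{Na}) - \bar{g}_K\, n^4\,(V - E_K) - \bar{g}_L\,(V - E_L) + I_{ext}$$
$$\frac{dn}{dt} = \alpha_n(V)\,(1 - n) - \beta_n(V)\,n \qquad \text{(and similarly for } m \text{ and } h\text{)}$$

The gating variables m, h, n evolve at rates that depend on V, while V evolves according to conductances set by the gating variables: that is the intracellular feedback loop.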

There are also ligand-gated channels and the Hodgkin-Huxley model has modifications for synaptic transmission:
http://en.wikipedia.org/wiki/Biological_neuron_model#Synaptic_transmission

But this is just one "cell" of the model. It already has complex molecular processing going on inside of it, but now we're going to couple these cells together. The couplings themselves are not linear. Synaptic transmissions can be inhibitory or excitatory (generally modeled with a hyperbolic tangent) or diffusive (modeled as a second derivative), but they all depend on the state of their neighbors. Control volume seems to refer to a more Euclidean set of dimensions. We don't couple little cells of volume in spatial dimensions; we connect neurons by information flow. Axons can be all kinds of different lengths, so the story would be very confusing if we looked at spatial dimension alone.
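To make the coupling claim concrete, here's a minimal sketch (my own toy code, using FitzHugh-Nagumo-style cells as a stand-in for the Hodgkin-Huxley / Morris-Lecar models above, with made-up parameter values) of two cells with a tanh chemical coupling and a diffusive electrical coupling. The only point is that each cell's input at every instant depends on the current state of its neighbor.

Code:
# Toy sketch: two FitzHugh-Nagumo-style cells with tanh (chemical) and
# diffusive (electrical) coupling. Parameters are illustrative only.
import numpy as np

def derivs(v, w, v_other, I=0.5, g_chem=0.3, g_diff=0.2):
    dv = v - v**3 / 3.0 - w + I            # intrinsic nonlinear dynamics
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    dv += g_chem * np.tanh(v_other)        # chemical coupling: sigmoid of the neighbor's voltage
    dv += g_diff * (v_other - v)           # diffusive coupling: discrete-Laplacian term for two cells
    return dv, dw

dt = 0.05
v1, w1, v2, w2 = -1.0, 0.0, -1.2, 0.1
for _ in range(2000):                      # simple Euler integration
    dv1, dw1 = derivs(v1, w1, v2)
    dv2, dw2 = derivs(v2, w2, v1)
    v1, w1 = v1 + dt * dv1, w1 + dt * dw1
    v2, w2 = v2 + dt * dv2, w2 + dt * dw2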

Dynamical behavior of the firings in a coupled neuronal system
Phys. Rev. E 47, 2893–2898 (1993)
http://pre.aps.org/abstract/PRE/v47/i4/p2893_1

Global and local synchrony of coupled neurons in small-world networks
BIOLOGICAL CYBERNETICS
Volume 90, Number 4, 302-309, DOI: 10.1007/s00422-004-0471-9
http://www.springerlink.com/content/6etxgkjk49668ffn/

Small Clusters of Electrically Coupled Neurons Generate Synchronous Rhythms in the Thalamic Reticular Nucleus
The Journal of Neuroscience, January 14, 2004, 24(2):341-349; doi:10.1523/JNEUROSCI.3358-03.2004
http://neuro.cjb.net/cgi/content/abstract/24/2/341
 
Pythagorean said:
It is a strawman because it misrepresents the computationalist position in a fundamental way. It makes a physical change that would affect processing and then claims that computationalism wouldn't predict this change. That is the issue we are currently debating.
Clearly, the story by Zuboff isn’t a strawman. I think the first step has to be in understanding what the computationalist view is. That Zuboff’s argument undermines that view is well understood by the editors of the book as they discuss their views of the paper. Anyone reviewing this paper would do well to read that review which is at the end of the story. Further, there are arguments presented by others that are very similar to Zuboff’s reductio which present the same issues in different ways. Clearly, any computationalist view needs to take into account how and why these special signals are needed to maintain phenomenal consciousness.

The method of attack used by some is to reformulate computationalism and claim that certain physical systems are fundamentally holistic. What one needs to realize is that by reformulating computationalism like this, the standard notions everyone is using about the brain must be rewritten. If we argue that nonlinear systems are the systems necessary to support phenomenal consciousness, any claim that a computer (FSA, CSA, anything that uses microchips, relays, gears, mechanical switches, etc.) is capable of supporting phenomenal consciousness will have to be abandoned. I think it is self-evident, and hopefully no one here is ignorant enough of physics to argue otherwise, that one can take a computer, simulate a given input, and get a consistent output, and that we can do this to any portion of a computer. It can be performed on the machine as a whole, as a "brain in a vat" thought experiment does, or it can be done on any small portion of the machine, right down to the single transistor, mechanical switch or other computational element. Computers are separable, and that's well understood by philosophers such as Hofstadter and Dennett, as well as by the large number of philosophers who have written papers claiming a special signal is somehow necessary.

If one wishes to claim that only nonlinear systems are capable of phenomenal awareness, they have already abandoned computationalism as it stands today. However, if that's the case, it would probably be best to separate out those discussions into a second thread. This thread is about Zuboff's article and the philosophy surrounding these special signals. Another thread on separability of classical systems might be a good place to discuss those concerns. A good place to start might be the Stanford Encyclopedia of Philosophy entry on holism and nonseparability in physics (http://www.science.uva.nl/~seop/entries/physics-holism/#Holism) and address the issue as they do here:
Classical physics presents no definitive examples of either physical property holism or nonseparability.

Side note: when posting references, please provide a description of why they are relevant to the discussion. It seems as if you are asking that I read through your references and figure out for myself why they are relevant and how they support your views. It would be much more effective to do this yourself since you already have something in mind.
 
Q Goest said:
We can then ask the question of whether or not the brain still experiences everything as it did when it was together as a complete brain in a vat.
Assuming that everything is 'connected', as you seem to be saying it is, then it would seem to follow that "the brain still experiences everything as it did when it was together as a complete brain in a vat".

Q Goest said:
The implied assumption is that the brain can’t remain consciously aware of anything any more, regardless of whether or not the individual neurons are still operating as if they were together.
Implied wrt what?

Q Goest said:
Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.
Well, not just wrt computationalism, but yes, if the brain is "dissociated" then it isn't a "brain". Is it? Obviously, if the brain's parts become "dissociated", then it won't function as a "brain". Duh. But you indicated that steps were taken to keep the brain's parts connected. So, at what point did it become "dissociated"?

Q Goest said:
The condition of proximity: The cells are no longer close together as they were in the brain.
But you said the cells were still 'connected', presumably in a way that preserved normal brain function. If not, then isn't that one explanation regarding where/how the brain became "dissociated" and stopped being a "brain"? So, proximity, per se, shouldn't have anything to do with it. Hence, this is an invalid "condition". Unless we simply don't have the technology to keep a dissected brain functioning as a whole, intact brain, and then it is a valid condition. And then, ok, so what?? We wouldn't expect, say, four, or 1000, or 10^8 separated pieces of a previously whole, normally functioning, brain to function as a brain. Would we? So, it isn't really proximity, it's connectedness that matters.

On the other hand, if it really is proximity that determines connectedness, that is, if the brain cells really do have to be more or less contiguous for normal brain functioning, then that's it. Right?

Q Goest said:
The condition of no actual causal connection: Also called “counterfactual information” the problem with no causal connection between the parts of the brain is the problem most philosophers focus on.
Huh? Why is the "duplicate signal" called counterfactual? This makes no sense to me. If there's a "connection" at all, then there's a "causal connection". Isn't there?

Q Goest said:
That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the “special signal”.
Well, all you seem to have said so far is that we don't know how to duplicate neuronal connections. Is that a fact?

Or, are you saying that there are reasons to believe that neuronal connections can't be duplicated, or are you saying that duplication of neuronal connections is a function of proximity? Or are you saying that nobody really knows?

Q Goest said:
The condition of synchronization: The cells may no longer be in sync.
Well, this would seem to follow if nobody is sure that the "actual" neuronal connections have been maintained. One sure way to ascertain this is by observing whether or not the separated brain parts function, collectively, as they did before they were separated.

Q Goest said:
The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.
Uh, yeah. Well that too. But of course if you've already sliced and diced the brain into a zillion pieces, and haven't the faintest idea how to "connect" them, then it wouldn't seem to matter much how each of those zillion pieces is oriented. Would it?

Q Goest said:
The special signal problem here will focus on the counterfactual sensitivity concern, since that is the concern the literature concentrates on.
Huh??

Q Goest said:
The problem that Arnold writes about has also been written about by others in different ways including Maudlin, Putnam and Bishop. But regardless of how the problem is approached, the defenders of computationalism focus on why the system in question must be able to support counterfactual information and thus be capable of performing other computations. That is, regardless of whether or not a portion of a system is needed to perform a computation, if the system can not support the right counterfactual information, it can't instantiate a program and it can't support phenomenal consciousness.
But you've, I think, previously described a "system" which, apparently, consists of a zillion disconnected brain cells. It can't process "factual" stuff, much less "counterfactual" stuff, whatever that means.

Q Goest said:
Note that in Zuboff’s story, only the subjective experience is compromised by moving from a causally connected brain to a brain that is NOT causally connected.
How is this known?

Q Goest said:
In the causally disconnected brain, all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided.
If "all objectively measurable phenomena still occur within every part of the brain just as they had in the causally connected one since the signals are still provided", then the separated brain parts can't be said to be causally disconnected -- unless there's some other measure, which would bring us back to my immediately previous question.

Q Goest said:
This seems to suggest that the duplicate signal provided by the impulse cartridge is not sufficient, only the original signal is sufficient to create the phenomenon of consciousness.
What is it that seems to suggest this? Subjective experience? How do you objectively measure that?

Q Goest said:
The duplicate signal may have all the same properties, may be indistinguishable from the original, and may maintain all the same objectively measurable properties throughout the disconnected brain. But the duplicate signal is not sufficient per computationalism, to support consciousness. We need a special signal. We need the original one.
This "computationalism" is indeed some special stuff. Or maybe Zuboff, et al., pulled this out of their separated, collective, butts.
Is that even possible?

Q Goest said:
From this analysis, it is clear that any theory of mind must address how and why counterfactual alternatives are crucial to consciousness. We need to understand what is so special about that particular signal.
Imho, it's clear that this "analysis" is fubar. No offense to you personally of course. What seems to be most important for normal brain functioning is that the brain is in one piece. Thus, it wouldn't seem that "counterfactual alternatives", whatever those might refer to, have anything to do with it.

Anyway, again imho, the computationalist approach is, fundamentally, way off -- even though it might explain certain phenomena.
 
Hi ThomasT. It doesn’t sound as if you’ve read the story. Just a few notes; when I mention the brain being “dissociated” I’m referring to the state of the brain as given in the story that has these hypothetical “impulse cartridges” (ICs) used to fire neurons instead of the actual neurons they would normally be attached to. In a sense, the brain is still ‘connected’ by these ICs but the brain is not physically connected.

The reason brought out by most philosophers regarding why the hypothetically dissociated ‘brain’ in the story is no longer capable of supporting p-consciousness is that it can no longer support “counterfactuals” which is to say that the various bits of the brain can only perform that one single act. It is therefore not instantiating a program nor performing a computation, similar to what you’ve said here:
Obviously, if the brain's parts become "dissociated", then it won't function as a "brain".
This is your common sense reaction that says that if a conscious system (brain, computer, whatever) is not in its connected state, it can't support consciousness, even if everything is still going through the same changes in state, such as described by Zuboff.

Those who have argued that only a device which supports counterfactuals can support p-consciousness include Chalmers (1996), Chrisley (1994), Endicott (1996) and others. Endicott’s position is typical of this perceived need for a causally connected system as he discusses the definition of a computation, “We can then say that a system is a genuine computational device when there is a correspondence between its physical states and its formal states such that the causal structure of the physical system is isomorphic to the formal structure of the computational operations.” Chrisley talks of “our common sense understanding of causation” being required for a computational device to be defined as such and therefore being able to support consciousness. And Chalmers suggests that “a physical system implements a computation if the causal structure of the system mirrors the formal structure of the computation.”

The point all these authors attempt to make regards the common sense definition of a computation. Without the ability to perform this computation, one could argue that the phenomenon of consciousness would also be compromised. So without the ability to support counterfactual information, these philosophers would say the system is no longer performing valid computations, it can’t instantiate a program, so it can’t be conscious.

A point in opposition is presented by Bishop (2001). About a computational machine which is unable to support counterfactuals, Bishop says, “The first point to note in this response is that it seems to require a non-physical causal link between non-entered machine states and the resulting phenomenal experience of the system over a given time interval.” It’s precisely this “non-physical causal link” that I’m calling a “special signal”. That is, the signal produced by these hypothetical ICs is insufficient. The signal has to come from a causally connected system.

I'd agree with Bishop and would add that the defense is short-sighted. Tim Maudlin (1989) proposes a similar thought experiment using buckets of water and an armature similar to a Turing machine, which I'll use as a guide for an extension of Zuboff's story as follows. One could, for example, rewrite Zuboff's story slightly to suggest that these impulse cartridges are in fact connected to each other (wirelessly if you'd like) so that all the neurons are actually interacting through these ICs. The ICs could also be synchronized to allow for the problem of time delay as described by Zuboff. With all the neurons in close proximity, the time delay wouldn't even be noticeable to the brain. Now we'd have a causally connected brain, able to support counterfactuals, and the signals provided by the ICs should therefore support p-consciousness per Chalmers, Chrisley, Endicott, etc.

We could also devise the ICs such that each has a recording of every firing of the neuron it is attached to, so that it doesn't even need to rely on a wireless signal from its corresponding IC to fire. We could suggest that if the recorded IC data matches the firing that comes from the neuron, no signal is sent. Only if the signal coming from the neuron differs from the recording does the IC send a signal to the mating IC attached to another neuron. The mating IC on the opposite neuron would also have this recording and would fire its neuron as long as it doesn't receive a change signal from the IC it is mated to.

Now you have a system that is causally connected only when there is counterfactual information needing to be computed. But when there is no counterfactual information, there is no connection between ICs and the entire system, per the counterfactual argument, is no longer conscious.
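To pin down that last scheme, here is a minimal sketch of the IC logic (my own framing of the extension above, not anything from Zuboff or Maudlin): each IC replays its recording and only talks to its mate when the live firing departs from that recording.

Code:
# Toy sketch of the modified impulse-cartridge (IC) scheme described above:
# replay the recording, and only transmit to the mated IC when the live
# firing deviates from it (i.e., only when there is counterfactual activity).
from dataclasses import dataclass, field

@dataclass
class ImpulseCartridge:
    recording: list                                  # expected firings of the attached neuron
    mate: "ImpulseCartridge" = None
    overrides: dict = field(default_factory=dict)    # change signals received from the mate

    def drive(self, t):
        # signal delivered to the neuron this IC stimulates: the recording,
        # unless the mate reported a deviation at this time step
        return self.overrides.get(t, self.recording[t])

    def observe(self, t, actual_firing):
        # watch the neuron this IC listens to; only signal the mate on a deviation
        if actual_firing != self.recording[t] and self.mate is not None:
            self.mate.overrides[t] = actual_firing

rec = [1, 0, 1, 1, 0]
a, b = ImpulseCartridge(rec), ImpulseCartridge(rec)
a.mate, b.mate = b, a
for t, firing in enumerate([1, 0, 1, 0, 0]):   # the live neuron deviates from the recording at t = 3
    a.observe(t, firing)                       # only t = 3 generates any traffic between the ICs
    _ = b.drive(t)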

Bottom line: the signal going to each neuron in Zuboff's story, provided by these ICs, has certain properties such as electrical potential and ion flow. Any isolated neuron receiving these impulses will fire regardless of whether the signal comes from the IC or from the neighboring neuron. But computationalism holds that only the neighboring neuron can allow for consciousness; the identical signal coming from an IC (if it is not causally connected) is insufficient. The physical properties are the same for both signals, so why is only one capable of supporting consciousness?



Bishop, J. M. 2001, 'Dancing with Pixies: Strong Artificial Intelligence and Panpsychism', public lecture, British Computer Society (Berkshire Branch), November 2001
Chalmers, D. J. 1996, 'Does a Rock Implement Every Finite-State Automaton?', Synthese, 108, 309-333
Chrisley, R. L. 1994, 'Why Everything Doesn't Realize Every Computation', Minds and Machines, 4(4), 403-420
Endicott, R. P. 1996, 'Searle, Syntax, and Observer Relativity', Canadian Journal of Philosophy, 26(1), 101-122
Maudlin, T. 1989, 'Computation and Consciousness', Journal of Philosophy, 86, 407-432
 
Q_Goest said:
Clearly, the story by Zuboff isn’t a strawman. I think the first step has to be in understanding what the computationalist view is.

Well, you've accused me of being a computationalist several times, but I used the standard computationalist description: that neurons perform information processing and that this is relevant to our mental states.

If we argue that nonlinear systems are those systems necessary to support phenomenal consciousness, any claim that a computer (FSA, CSA, anything that uses microchips, relays, gears, mechanical switches, etc…) is capable of supporting phenomenal consciousness will have to be abandoned.

Nobody's arguing nonlinear systems are necessary to support consciousness. The point is that you have a well-known nonlinear coupled system and you're pretending like it doesn't change anything to decouple them.

You must realize that my argument doesn't state that consciousness is a result of nonlinearity. It states that your argument doesn't consider the facts and you may as well be talking about a different system. And here's the main problem:

You have nowhere to generate an IC from. This is a lot like taking the Maxwell's demon scenario... and then claiming that it will still work without the demon. But the demon is the only thing making the problem physically sensible (as was shown later by follow up calculations that found the demon's measurement and action made up for the entropy "loss").

I think it is self evident, and hopefully no one here is ignorant of physics enough to argue, that one can take a computer, simulate a given input, and get a consistent output, and that we can do this to any portion of a computer. It can be performed on the machine as a whole, such as a “brain in vat” thought experiment uses, or it can be done on any small portion of the machine, right down to the single transistor, mechanical switch or other computational element. Computers are seperable, and that’s well understood by philosophers such as Hofstadter and Dennett as well as a large number of philosophers that have written papers claiming a special signal is somehow necessary.

So what? It's really another strawman to say that computationalism is the same as the computer metaphor. They're two totally different systems.

If one wishs to claim that only nonlinear systems are capable of phenomenal awareness, they have already abandoned computationalism as it stands today. However, if that’s the case, it would probably be best to separate out those discussions in a second thread.

I'm not sure it's abandoned computationalism yet. Computationalism is about information processing, isn't it? Information processing is linear in a computer (why the hell would we make it nonlinear?). It developed in a nonlinear way in life, just like nearly every complex natural process is nonlinear. (This is a well-accepted fact: the real world is nonlinear.)
But if you really think this is not computationalism then I will desist and you can stop calling me a computationalist.

Side note: when posting references, please provide a description of why they are relevant to the discussion. It seems as if you are asking that I read through your references and figure out for myself why they are relevant and how they support your views. It would be much more effective to do this yourself since you already have something in mind.

All you have to do is read the abstracts. They demonstrate the main point that you have coupled systems and their behavior is different when they're uncoupled.

Classical physics presents no definitive examples of either physical property holism or nonseparability.

Complex systems theory is in a strange place. It does allow for "holism" (that is, the assumption of superposition does not hold).

But complex systems don't require any fundamental change in classical physics. The systems fall right out of classical Newtonian treatment. The systems are deterministic and exist on a smooth manifold, just like every other classical problem. In the case of neurons, the models are created by observing the system's behavior experimentally and describing it with smooth mathematical equations (as Hodgkin and Huxley did about 50 years ago).

So some people (like my advisor) still consider nonlinear systems as classical systems. I personally can't find anything that makes them fundamentally different from Newtonian physics (a system of differential equations on a smooth manifold). I was only taught to use superposition as a neat trick to make things easier, I wasn't taught that it was the way the world was.

Even Newton, in his Copernican Scholium (which didn't actually hit print until 200 years or so after he wrote the original Principia), admitted that the trajectories of the planets were too irregular and complex to be exactly predicted with the separable mathematical approach.

http://plato.stanford.edu/entries/Newton-principia/

But once again, throughout all my classical training, linearity is an assumption, a special case; not the norm.
 
Wait a minute. From your discussion with ThomasT, the regions ARE still coupled, OR is the IC supposed to generate the signal AS IF they were?

I've been assuming the latter, which is why I referenced Maxwell's Demon
 
  • #10
From http://www.macrovu.com/CCTGeneralInfo.html:

Three premisses of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactuals (http://en.wikipedia.org/wiki/Counterfactual_conditionals) - states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce identical mentality (if they produce any at all).

Imagine 2 pinball machines, the second of which has had exactly those pins removed that are never touched by the ball. We now have 2 different machines but it doesn't make a difference to the paths traced by the pinballs. The counterfactuals are different in each machine (the pinballs would behave differently if the first were to hit a "counterfactual" pin), but the physical activity of the 2 systems is, as it happens, identical. So, counterfactual differences in 2 systems are irrelevant to differences in the physical activity of those systems.
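Here is a tiny toy version of the pinball point (my own illustration, not from Maudlin's paper): removing the pins the ball never touches leaves the actual trajectory unchanged, while the counterfactual behaviour (launching from a different column) differs between the two machines.

Code:
# Toy pinball: the ball falls one row per step; landing on a pin shifts it one
# column to the right. Removing untouched pins preserves the actual path but
# changes the counterfactual ones.
def drop(pins, col, rows=8):
    path, touched = [(0, col)], set()
    for row in range(1, rows + 1):
        if (row, col) in pins:
            touched.add((row, col))
            col += 1
        path.append((row, col))
    return path, touched

full_machine = {(2, 0), (4, 1), (5, 3), (7, 2)}
path1, touched = drop(full_machine, col=0)
trimmed_machine = full_machine & touched             # keep only the pins the ball actually hit
path2, _ = drop(trimmed_machine, col=0)

assert path1 == path2                                                     # identical physical activity
assert drop(full_machine, col=3)[0] != drop(trimmed_machine, col=3)[0]    # different counterfactuals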

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

Suggestions for avoiding this paradox:
1. Rejecting supervenience.
2. Assigning no consciousness to either machine (Eric Barnes, 1991):
Neither of the imagined machines is conscious, because neither of them can causally interact with the environment. According to the computationalists, causal interaction is a necessary component for any model of consciousness. The object of any cognitive act must play a causally active role for it to be true that one cognizes that object.

I think by rejecting supervenience, you actually reject computationalism. By accepting causal interaction you are forced to accept downward causation, which doesn't sound good either. So what's left is rejecting the non-triviality condition.
 
  • #11
Ferris_bg said:
[...]

I think by rejecting supervenience, you actually reject computationalism. By accepting causal interaction you are forced to accept downward causation, which doesn't sound good either. So what's left is rejecting the non-triviality condition.
Very nice Ferris, thanks for that. I've seen that reference before and thought it was decent but had forgotten about it. So do you think we should reject the non-triviality condition? Why/why not?
 
  • #12
Pythagorean said:
Well, you've accused me of being a computationalist several times, but I used the standard computationalist description: that neurons perform information processing and that this is relevant to our mental states.
This description of computationalism is ok for now, though it isn't very rigorous. I think I know what you mean though. However, what you said earlier was:
[Zuboff's reductio] is a strawman because it misrepresents the computationalist position in a fundamental way.
Zuboff's reductio isn't a strawman and it doesn't misrepresent the computationalist position. The article was written by a philosopher, put in a book by 2 editors who are staunch computationalists, and Zuboff's article as well as many others like it are placed in peer-reviewed philosophical journals. No one else is calling this a strawman. If that many professional philosophers acknowledge there's validity in Zuboff's article, I think it's time to take it seriously and stop calling it a strawman.
So what? It's really another strawman to say that computationalism is the same as the computer metaphor. They're two totally different systems.
Strawman arguments... There are other phrases. Other ways of making a point. If you don't understand, please discuss instead of accusing everyone of strawman arguments. It's beginning to sound silly.
From your discussion with ThomasT, the regions ARE still coupled, OR is the IC supposed to generate the signal AS IF.
May I suggest reviewing the paper again? Zuboff is suggesting the IC's act in a way that does not support counterfactual information. My point to ThomasT is that we can rewrite his thought experiment such that the line between supporting counterfactual information and not is so blurred that it can't be defined.
 
  • #13
Q_Goest,

Ferris_bg cleared things up.

I was defending computationalism only on the basis of 1. Without first defining 'program', the definition does indeed lack rigor, which would make it easy to evade arguments that constrict it beyond that. Namely, because we can come up with "physical systems" from 1. that are capable of lots of interesting phenomena and decoupling a system into its components would no longer be a single physical system, but a set of individual physical systems (since they are not causally connected... agree?).

but Ferris_bg's definition seems to be more relevant to your post.

3 (supervenience) I have considered before as a consequence of physicalism. I don't really see a problem with it, though I recognize the contradiction now, between 2 and 3 as ferris_bg presented it.

2 (non-triviality) I do not understand why this would be a condition for consciousness. From an evolutionary perspective, it makes sense that biological systems that were able to survive and propagate would have counterfactual states as a consequence of being well-prepared for their environment. Does this mean that it's required for consciousness? I don't know. It seems difficult to imagine life lasting very long without having the built-in redundancies that a) allow survival and b) support counterfactual states.
 
  • #14
Q_Goest said:
To explain why the brain might no longer be able to support phenomenal consciousness, a character in the story, Cassander, suggests there are four guiding principles, any one of which, when violated, might prevent phenomenal experience from occurring. They are:
- The condition of proximity: The cells are no longer close together as they were in the brain.
- The condition of no actual causal connection: Also called “counterfactual information” the problem with no causal connection between the parts of the brain is the problem most philosophers focus on. That is, regardless of the fact there is a duplicate signal, the actual signal is what is needed to maintain phenomenal consciousness. This actual signal is the “special signal”.
- The condition of synchronization: The cells may no longer be in sync.
- The condition of topology: Whether or not the cells are in the same basic spatial relationship, or pointing in the right direction.
It is somewhat like a printed circuit being replaced by long wires. The system would work, but its efficacy would be affected. The factors pointed out by Cassander would affect the efficacy, and hence, the consciousness. However, being a thought experiment, we can assume that efficacy is not affected. In that case, the segmented brain will not lose its consciousness. So there is no need of any special signal; only the efficacy of the signals matters. Consciousness, I think, depends on the efficacy of the brain; there should be a minimum threshold efficacy for the consciousness to manifest explicitly as in the case of humans. The high efficacy of the brain is an evolutionary edge that we have against other animals. Maybe, self-awareness represents the evolutionary peak of the entity called life!
 
  • #15
finiter:

So do you mean efficacy in the electrical engineering sense, or the neuroscience sense?

Q_Goest:

Here is Varela et al.'s point (from one of the papers I referenced earlier):

These neural assemblies have a transient, dynamical existence that spans the time required to accomplish an elementary cognitive act (a fraction of a second). But, at the same time, their existence is long enough for neural activity to propagate through the assembly, a propagation that necessarily involves cycles of reciprocal spike exchanges with transmission delays that last tens of milliseconds. So, in both the brain and the Web analogy, the relevant variable required to describe these assemblies is not so much the individual activity of the components of the system but the dynamic nature of the links between them.
(emphasis mine)
The brainweb: Phase synchronization and large-scale integration
Francisco Varela, Jean-Philippe Lachaux, Eugenio Rodriguez & Jacques Martinerie
Nature Reviews Neuroscience 2, 229-239 (April 2001) | doi:10.1038/35067550
http://www.nature.com/nrn/journal/v2/n4/full/nrn0401_229a.html

Which is why the construction that requires the 'special signal' seems awkward to me. You are actually physically changing the system in the dynamical systems perspective: you have changed the physical landscape and the information structure. And the dynamical systems perspective has been successful in explaining phenomena of the brain in terms of their neural correlates:
http://scholar.google.com/scholar?h...1DVCG_enUS354US354&um=1&ie=UTF-8&sa=N&tab=ws"
http://scholar.google.com/scholar?h...stem&btnG=Search&as_sdt=400&as_ylo=&as_vis=0"
http://scholar.google.com/scholar?h...ders&btnG=Search&as_sdt=400&as_ylo=&as_vis=0"

Does any of this betray computationalism? If it does, haven't the computationalists been ignoring the last 20 or so years of empirical data then (along with Zuboff)?

To me, it seems this eliminates the need for a "special signal". It also means that despite 3 (supervenience) being true in principle, it's impossible to do in practice (entropy + chaotic sensitivity means you can never have exactly two of anything).

I don't see any obvious consequence for 2 (non-triviality) but 2 seems to be a bit like claiming Maxwell's demon violates thermodynamics, in which case your problem is that you haven't included the demon's actions in your calculations: you have to make the measurement on the first system to produce the second system. That costs energy to replicate between the measurements and the construction, and that cost comes out of the very environment that will shape the rest of the system's history, so there's even further potential consequences.

addendum

A real life experimental analog of the thought experiment:

Variability is included in the models as subject-to-subject differences in the strengths of anatomical connections, scan-to-scan changes in the level of attention, and trial-to-trial interactions with non-specific neurons processing noise stimuli. We find that time-series correlations between integrated synaptic activities between the anterior temporal and the prefrontal cortex were larger during the DMS task than during a control task. These results were less clear when the integrated synaptic activity was haemodynamically convolved to generate simulated fMRI activity. As the strength of the model anatomical connectivity between temporal and frontal cortex was weakened, so too was the strength of the corresponding functional connectivity. These results provide a partial validation for using fMRI functional connectivity to assess brain interregional relations.
(emphasis mine, demonstrating the experimental analog of the IC)
Philos Trans R Soc Lond B Biol Sci. 2005 May 29;360(1457):1093-108.
Investigating the neural basis for functional and effective connectivity. Application to fMRI.
Horwitz B, Warner B, Fitzer J, Tagamets MA, Husain FT, Long TW.
http://www.ncbi.nlm.nih.gov/pubmed/16087450

Note that Varela's references, 92, 103, and 104 are similar sources of empirical evidence indicating the importance of the reciprocal dynamical connection. 92 is none other than Karl Friston's "Functional and effective connectivity in neuroimaging":

On page 59, Friston demonstrates (both hypothetically and experimentally) how coarse fractioning (decoupling) of the brain is a neural symptom of schizophrenia. Coincidentally, this matches the qualitative description of schizophrenia (and its etymology: "splitting of the mind").

One plausible example of this, suggested by a psychology professor of mine, is that in the case of, say, auditory hallucinations, Broca's area malfunctions in a way that dissociates (uncouples) it from the feedback network. So it appears to the schizophrenic as if the speech generation (from Broca's area) is not coming from them, but from an external source.
 
  • #16
Pythagorean said:
I was defending computationalism only on the basis of 1. Without first defining 'program', the definition does indeed lack rigor, which would make it easy to evade arguments that constrict it beyond that. Namely, because we can come up with "physical systems" from 1. that are capable of lots of interesting phenomena and decoupling a system into its components would no longer be a single physical system, but a set of individual physical systems (since they are not causally connected... agree?).
What I think you're saying is that by decoupling a system into its components, such as done by Zuboff, the physical system is no longer causally connected and therefore it is not a single physical system. That is essentially what most philosophers are saying by saying it doesn't support counterfactual information. Saying it isn't "causally connected" is the same thing as saying it doesn't "support counterfactual information", which is the same thing as the "non-triviality condition" described by Maudlin and quoted by Ferris. It's all the same argument. The argument is intended to enforce a specific type of system configuration on a computational system. It is intended to say that certain subjective phenomena will not occur in a system that isn't causally connected, doesn't support counterfactuals or violates the non-triviality condition.

The problem then is what Maudlin has stated, and to quote Ferris on Maudlin:
Imagine 2 pinball machines, the second of which has had exactly those pins removed that are never touched by the ball. We now have 2 different machines but it doesn't make a difference to the paths traced by the pinballs. The counterfactuals are different in each machine (the pinballs would behave differently if the first were to hit a "counterfactual" pin), but the physical activity of the 2 systems is, as it happens, identical. So, counterfactual differences in 2 systems are irrelevant to differences in the physical activity of those systems.

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

I'm looking at this same issue from a different perspective and saying, why should any subjective experience change if we maintain the same causal actions on every part of the brain? There is no way for any part of the brain to "know" the difference between the connected state and the unconnected one. The causal actions acting on every part of the brain are identical, so we have identical feedback on every part; it just isn't coming from the adjacent brain cell, it is coming from an IC instead. The difference then is a "special signal" which can't be differentiated by anything physical. You are asking for a NON-PHYSICAL signal!

Hopefully that clears up the OP. Maybe we need to move on from here. I'll address your second post later.
Pythagorean said:
3 (supervenience) I have considered before as a consequence of physicalism. I don't really see a problem with it, though I recognize the contradiction now, between 2 and 3 as ferris_bg presented it.

2 (non-triviality) I do not understand why this would be a condition for consciousness. From an evolutionary perspective, it makes sense that biological systems that were able to survive and propagate would have counterfactual states as a consequence of being well-prepared for their environment. Does this mean that it's required for consciousness? I don't know. It seems difficult to imagine life lasting very long without having the built-in redundancies that a) allow survival and b) support counterfactual states.
I'm totally unclear on what you're trying to say here.
 
  • #17
A system not supporting certain counterfactual states won't be conscious when such states should have been triggered, but this doesn't mean the system would have zero value of consciousness all the time. A good example of this is anosognosia (http://en.wikipedia.org/wiki/Anosognosia):
'The Neurology of Consciousness' said:
A well-known example of anosognosia is often found in hemispatial neglect patients. This condition is usually caused by a stroke to the right parietal lobe that causes disruption of attention and spatial awareness of the left side of space. They often behave as if the left side of the world does not exist. For example, they will only dress the right side of their body or eat all the food on the right side of a plate but not the left. Yet, despite the obviousness of the deficit to people observing the patients, the patients themselves are not aware of their deficit. They do not sense that anything is wrong with them!

There is no system to sense that something is wrong, so the patient assumes that everything is normal. For example, when these patients are confronted with a bimanual task in which they cannot complete because they are unable to move their left hand, they may reply with a statement such as "I didn't want to do that task". When the paralyzed hand is presented to them, they often respond with the rationalization "That's not my hand".


So now the example with the two systems, one supporting counterfactual states and the other not, looks like "normal condition" software and "anosognosia condition" software applied to the system's attention modules. When identical physical activity is running through some of the other modules, both systems are conscious; when spatial awareness of the left side of space is triggered, only the "normal condition" machine is conscious; when attention to the right side of space is triggered, again both systems are conscious.
 
  • #18
Q_Goest said:
Computationalism has a number of inconsistencies that haven’t logically been refuted. The purpose of this thread is to discuss one of them and see if there’s a way out of the problem and/or to help get perspectives on the problem. That problem I’ll call the “special signal problem”. (...) Certainly computationalism would predict that the dissociated brain was no longer experiencing anything.
I'm quite comfortable with having 'computationalist' on a T-shirt. However, I certainly won't predict that. What I would say is that the "impulse cartridge" is itself a (partial or total) simulation of the consciousness, so the consciousness is (partially or totally) in the cartridge.

What was the problem?
 
  • #19
Let's simplify this a bit:

Cartridge A - capable of providing input to the LHS of the brain, which is equivalent to what the RHS would normally provide.
Cartridge B - capable of providing input to the RHS of the brain, which is equivalent to what the LHS would normally provide.

So when A is connected to the LHS of a brain, the brain doesn't know the difference. And when B is connected to the RHS of a brain, the brain again doesn't know the difference. (As per the OP).

Now, what if you connect those two cartridges together? Is that system considered conscious?

If the cartridges are reacting to external input and working together (providing each other inputs / accepting outputs), reacting appropriately, I'd say the system is conscious. After all, that's all our brain does.

If we then 'widen the gap' between A and B and use wires to bridge the gap, does that change anything?

To me, simply having a neuron / wire / any connector doesn't alter anything. It is what the system does 'as one' that matters.

A bunch of neurons reacting to artificial signals isn't a brain.

To me, this comes down more to what you consider conscious to be. A brain that doesn't react to any outside stimulus / input is nothing more than a computer running on a loop.
 
Last edited:
  • #20
jarednjames said:
Let's simplify this a bit:

Cartridge A - capable of providing input to the LHS of the brain, which is equivalent to what the RHS would normally provide.
Cartridge B - capable of providing input to the RHS of the brain, which is equivalent to what the LHS would normally provide.

So when A is connected to the LHS of a brain, the brain doesn't know the difference. And when B is connected to the RHS of a brain, the brain again doesn't know the difference. (As per the OP).

Now, what if you connect those two cartridges together? Is that system considered conscious?

If the cartridges are reacting to external input and working together (providing each other inputs / accepting outputs), reacting appropriately, I'd say the system is conscious. After all, that's all our brain does.

If we then 'widen the gap' between A and B and use wires to bridge the gap, does that change anything?

To me, simply having a neuron / wire / any connector doesn't alter anything. It is what the system does 'as one' that matters.

A bunch of neurons reacting to artificial signals isn't a brain.

To me, this comes down more to what you consider conscious to be. A brain that doesn't react to any outside stimulus / input is nothing more than a computer running on a loop.
Good post jarednjames, thanks for that. So you're saying that if the LHS and RHS aren't connected, that doesn't meet the definition of a brain, so it isn't conscious? Or maybe there is just a small change in the experience. Perhaps we could then tell* someone, "Hey, my experience just changed when you stopped allowing the LHS to talk to the RHS!"

If we could acknowledge the fact our experience changed, how did we know? What physical change to either side of the brain occurred that allowed us to distinguish this change? Didn't we just say the signals going into either side are physically identical regardless of whether or not the 2 halves are connected? Or perhaps our experience just slowly fades away as more and more cuts are made, but we are unable to report any change because we don't notice?

If we contend that the signals are physically identical but the phenomenal experience changes (and perhaps we can even report it), then we need a special signal, one that is somehow causally linked to the other side of the brain. We need something more than a signal that is a physical duplicate!

*Notice in Zuboff's story that he repeatedly goes back to checking the ability of the brain in the vat to report its condition.
 
  • #21
Well, if you cut my brain in half and then attach a computer (let's ignore memories and the like) that accepts input from my internal areas, eyes, nose etc., processes it in the same way as the RHS of my brain, and even provides input to the LHS where appropriate, I'm not going to notice a difference.

Now, if you had a cartridge attached acting as the RHS of my brain, if it isn't accepting external input from the above mentioned areas, eventually I will notice as it won't be reacting to changes.

The problem I see here is that we are assuming the LHS does exactly the same as the RHS, that they report everything they do to each other, and that they can therefore work independently of each other as long as input from each side is maintained / emulated. The moment you remove the RHS, you need to compensate for it by simulating it identically (whether by providing the required input to the LHS or to anywhere else in the body), otherwise changes will be noticed. The system won't be functioning as it was before and so isn't symmetrical.

Brain in a jar or not, if it isn't attempting to monitor / process / compensate other areas of the body as it normally would, it is no longer functioning as a brain would.

If you had a cartridge which 'knows' exactly what it needs to provide signal-wise (so it knows every future input the LHS would receive if the RHS were intact), you wouldn't notice a difference insofar as the LHS is concerned, but you would notice that certain features only present in the RHS aren't there.

An example of this would be: if your sense of smell was in the RHS of the brain, and no signal relating to it is sent to the LHS, if you remove the RHS you lose your sense of smell. You are no longer functioning in the same way you were before.
 
Last edited:
  • #22
Q_Goest said:
What I think you're saying is that by decoupling a system into its components, as is done by Zuboff, the physical system is no longer causally connected and therefore it is not a single, physical system. That is essentially what most philosophers are saying when they say it doesn't support counterfactual information. Saying it isn't "causally connected" is the same thing as saying it doesn't "support counterfactual information", which is the same thing as the "non-triviality condition" described by Maudlin and quoted by Ferris. It's all the same argument. The argument is intended to enforce a specific type of system configuration on a computational system. It is intended to say that certain subjective phenomena will not occur in a system that isn't causally connected, doesn't support counterfactuals or violates the non-triviality condition.

The problem then is what Maudlin has stated, and to quote Ferris on Maudlin:


I'm looking at this same issue from a different perspective and saying, why should any subjective experience change if we maintain the same causal actions on every part of the brain? There is no way for any part of the brain to "know" the difference between the connected state and the unconnected one. The causal actions acting on every part of the brain are identical, so we have identical feedback on every part, it just isn't coming from the adjacent brain cell, it is coming from an IC instead. The difference then is a "special signal" which can't be differentiated by anything physical. You are asking for a NON-PHYSICAL signal!

Hopefully that clears up the OP. Maybe we need to move on from here. I'll address your second post later.

Well no, you don't have feedback on every part... it defies the definition of feedback to disconnect the two components and simulate them both at the same time.

To make this work, you need a Maxwellian demon to invest energy into measuring one system, processing how it would affect the other system, then doing the same with the other system. You're effectively reconnecting them causally to be able to make an IC in the first place. Anyway, post #15 has the empirical evidence and touches on the Maxwell demon problem.
 
  • #23
Q_Goest said:
*Notice in Zuboff's story that he repeatedly goes back to checking the ability of the brain in the vat to report its condition.
Not exactly. He repeatedly goes back to checking the ability of a brain equivalent to report its condition. As the equivalent is supposed to be... well, equivalent, a computationalist view needs to say that consciousness remains the same.

The problem is not with computationalism, the problem is with your assertion of what computationalists will predict. Remove this incorrect assumption, and the paradox fades away.
 
  • #24
Great! We’re getting somewhere.

Now let’s go back to the story and reread what Zuboff says. If we argue off the top of our heads, we’ll end up misunderstanding and come up with reasons why the story doesn’t make any sense. Gotta read the story a few times perhaps before you grasp the implications.

jarednjames said:
Now, if you had a cartridge attached acting as the RHS of my brain, if it isn't accepting external input from the above mentioned areas, eventually I will notice as it won't be reacting to changes.
You’re absolutely right. But that’s not what Zuboff is saying.
Pythagorean said:
To make this work, you need a Maxwellian demon to invest energy into measuring one system, processing how it would affect the other system, then doing the same with the other system. You're effectively reconnecting them causally to be able to make an IC in the first place.
You’re in the same boat as jarednjames. That’s not quite what Zuboff has in mind.

What Zuboff is suggesting is that he wants to provide “experiences” to the brain. By that, he’s saying that hypothetically, the scientists in charge know how the brain is arranged and what it will do, so by providing these recorded experiences, they must know what every neuron is going to do before it does it. He suggests that these experiences are derived empirically from paid subjects. He says:
Zuboff said:
His scientist friends kept busy researching, by means of paid subjects, which patterns of neuron firings were like the natural neural responses to very pleasant situations; and, through the use of a complex electrode machine, they kept inducing only these neural activities in their dear friend's brain.

Like you, I'm sure that figuring out exactly what a brain is going to do, right down to the individual neuron, is a feat of science we may never achieve. But the brain is a physical thing, so it must conform to the laws of nature, which we must assume are knowable in principle. If it makes it any easier, we might use something easier than a brain to predict, something like a computer in which every transistor is completely predictable. We could duplicate a given computer many times over, determine what experiences it might have and duplicate those experiences on another, identical computer. Regardless, I think that in principle, Zuboff's argument holds. All he wants to do is suggest that one can, in principle, know how a given physical substrate is going to behave in physical terms, so that the time evolution of that physical thing is predictable at least in principle.

Once we understand what every neuron does in a given situation or experience, we can simply ‘plug’ that input into the various senses so those receptors experience a given situation exactly as it would occur if the brain were in a human body. Now, we could watch as every neuron did exactly as we expected it to do, firing in a synchronous behavior and developing experiences in the brain.

And once we know what every neuron is doing, Zuboff is suggesting that we cut this brain in half. Since we know what it will do, we can duplicate exactly all the inputs to the two halves. Zuboff then continues cutting the brain into halves until finally he gets down to individual neurons.
Zuboff said:
First it was agreed that if a whole-brain experience could come about with the brain split and yet the two halves programmed as I have described, the same experience could come about if each hemisphere too were carefully divided and each piece treated just as each of the two hemispheres had been. Thus each of four pieces of brain could now be given not only its own bath but a whole lab-allowing many more people to participate. There naturally seemed nothing to stop further and further divisions of the thing, until finally, ten centuries later, there was this situation-a man on each neuron, each man responsible for an impulse cartridge that was fixed to both ends of that neuron -- transmitting and receiving an impulse whenever it was programmed to do so.
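To make the replay idea concrete, here is a minimal toy sketch (my own illustration, not Zuboff's; the network, update rule and variable names are all invented for the example). A small deterministic network is run once while every signal each node receives is recorded; each node is then run in isolation with those recorded signals played back by its own "impulse cartridge", and the isolated nodes step through exactly the same states:
Code:
import random

random.seed(0)
N, STEPS = 6, 10
# random weights standing in for synapses (a toy model, not a real neuron model)
W = [[random.choice([-1, 0, 1]) for _ in range(N)] for _ in range(N)]

def update(inputs):
    """Deterministic 'neuron': fire iff the summed input is positive."""
    return 1 if sum(inputs) > 0 else 0

# 1) Connected run: record every input each node receives at each step.
state = [1, 0, 1, 0, 1, 0]
recorded = []                       # recorded[t][i] = inputs node i saw at step t
history = [state[:]]
for _ in range(STEPS):
    inputs = [[W[j][i] * state[j] for j in range(N)] for i in range(N)]
    recorded.append(inputs)
    state = [update(inputs[i]) for i in range(N)]
    history.append(state[:])

# 2) Disconnected run: each node alone in its own 'vat', fed only the recording.
for i in range(N):
    for t in range(STEPS):
        s = update(recorded[t][i])  # the signal comes from the cartridge, not a neighbour
        assert s == history[t + 1][i]

print("every isolated node traced the same state trajectory as in the connected run")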
Does this help explain the story? What else needs to be explained? Perhaps I’ll stop here and look to ensure people have a grasp of the story before we try and discuss it.
 
  • #25
Q Goest, thanks for your lengthy and informative reply to my somewhat flippant post. I was expecting a warning. Anyway, I'm following the discussion in conjunction with reading the material you referenced, including, and of course especially, Zuboff's story.
 
  • #26
Q_Goest said:
May I suggest reviewing the paper again? Zuboff is suggesting the ICs act in a way that does not support counterfactual information.
I had missed that. Well, that's not what I understood from the paper. If the IC does not support 'counterfactual information', then it cannot replace a hemisphere, which possesses its own source of inputs. Then there is no computational equivalence with the biological brain, and all interest in this thought experiment fades away.

In the end, it seems that this thought experiment is just built on a description vague enough that one can either think the biological brain is simulated or think it is not. The trick is to pretend the first and to conclude from the second.

So Pythagorean was right from the beginning: that's straw man building. I'm not saying it is dishonest*, but the fact that it has been discussed in books... how weak is that argument. The same is true for Penrose and Searle: bright guys, honest thinkers, and well... it's not hard to see the inconsistencies in their philosophy of mind.


*Maybe 'strawman' has this connotation in English: I'm just saying that the argument is built on misguided assumptions about what computationalists' predictions should be.
 
  • #27
Pythagorean said:
So do you mean efficacy in the electrical engineering sense, or the neuroscience sense?
It is just the connections, the signals and the processing, whether it is neurons or other things. If any man-made processor can be as efficient as the brain, it will mimic consciousness. But, of course, it will need the required peripherals. Once we provide the peripherals, it will need the required environment. So, the basic difference, I think, is that we are born, not created. But a computer is created, not born; so it will always require a creator to provide the required environment. (The present environment itself, I think, is a deterministic outcome and not a random choice.) Suppose we made a computer with peripherals suited for the existing environment and with a performance matching us; then it would resemble us in all respects, and that would be tantamount to cloning ourselves. Then the only question would be of patent rights.
 
  • #28
Let's start from the very beginning.

Three premisses of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for
supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactual states -
states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce
identical mentality (if they produce any at all).

Definitions as presented by Daniel M. Hausman, 1998 (http://books.google.com/books?id=_8HjZaRwU-UC&source=gbs_navlinks_s) and by David Lewis, 1973:

Causal connection:
CC: For all events A and B, A and B are causally connected if and only if they are distinct and either A causes B, B causes A, or A and B are effects of a common cause.

Counterfactual dependence implies causal connection:
CDCC: If A and B are distinct events and B counterfactually depends on A, then A and B are causally connected.

Counterfactual dependence: Effects counterfactually depend on their causes, while causes do not counterfactually depend on their effects and effects of a common cause do not counterfactually depend on one another.

Distinctness means that the events are not identical, neither overlaps the other, and neither implies the other.


Now let's look at the following scenario, where a machine is waiting for some input:

Waiting for input: Input a number N (we press 4):

Event A:
option 1) if N == 4: cause B
option 2) else if N == 7: cause C
option 3) else cause D

We have the following physical activity when we input 4:
A (B | C | D) --[ 4 ]-> B, which can be reduced to: A -> B

Let's remove option 2 now and input 7:
A (B | D) --[ 7 ]-> D, which can again be reduced to: A -> D

Now let's remove option 3, input 4, and see what the possible results are:
A (B) --[ 4 ]-> B? Can we reduce this to A -> B?
case 1) Yes, we can: the computation "if N == 4: cause B" still exists.
case 2) No, we can't: "N == 4" is NOT defined, because the "else" condition is missing. That is the condition covering everything other than the number 4, and it is what keeps the concept of number defined. You can't define a single number without defining the whole concept, so the "else" condition is required.
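Put in throwaway code (my own illustration; the function names are invented, this is not from Maudlin's or Zuboff's papers), the two cases look like this:
Code:
def full_machine(n):
    """Event A with all options intact: the counterfactual branches exist."""
    if n == 4:
        return "B"
    elif n == 7:
        return "C"
    else:
        return "D"

def stripped_machine(n):
    """Options 2 and 3 removed: only the branch actually taken on input 4 remains."""
    if n == 4:
        return "B"
    return None   # anything other than 4 is simply undefined here

# On the one input we actually press, the observable activity is identical: A -> B.
print(full_machine(4), stripped_machine(4))   # B B

# The difference only shows up counterfactually, on inputs that never occur.
print(full_machine(7), stripped_machine(7))   # C None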


That's what Cassander is asking:
Zuboff (http://themindi.blogspot.com/2007/02/147.html) said:
Now we are about to abandon yet another condition of usual experience - that of actual causal connection. Granted you can be clever enough to get around what is usually quite necessary to an experience coming about. So now, with your programming, it will no longer be necessary for impulses in one half of the brain actually to be a cause of the completion of the whole-brain pattern in the other hemisphere in order for the whole-brain pattern to come about. But is the result still the bare fact of the whole-brain experience or have you, in removing this condition, removed an absolute principle of, an essential condition for, a whole-brain experience really being had?


The scientists accept case 1) and it turns out that the consciousness disappears. Now let's go back to the other example.

It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't. The computationalist must claim, based on the non-triviality condition, that one is conscious and the other isn't. But this contradicts the supervenience thesis.

Here again the mistake of Cassander's friends is being made, because the term "identical physical activity" implies that the informational concept of both machines is identical; removing the counterfactual states changes the whole concept, so the physical activities may seem identical to the observer, but they are in fact not. The machine may select its only remaining option and the physical activity will look like "A -> B", but it can't be reduced to A -> B, because the option is taken "subconsciously" (the concept is not defined, thus no computation is being made). We have a http://www.iep.utm.edu/functism/#H6 in the house.
 
Last edited by a moderator:
  • #29
Ferris_bg said:
Lets start from the very beginning,

Three premisses of computationalism (Maudlin, 1989):
1. Computational condition: Any physical system running an appropriate program is sufficient for supporting consciousness.
2. Non-triviality condition: It is necessary that the system support counterfactual states -
states the system would have gone into had the input been different.
3. Supervenience condition: Two systems engaged in the same physical activity will produce identical mentality (if they produce any at all).
The very beginning, to me, is Turing. I don't think the three premisses you cite are helpful. In fact, there is a misleading use of the word 'program' with different meanings.

Try replacing 'program' by 'algorithm' and have a look at premisses 1 & 3. One algorithm can run a division on some input, and nothing is defined if the input is not a number. In the same way, you can have an algorithm that leads to consciousness on some input but not on different input. So to make sense of these premisses, you need to think of the input as part of the physical system. But then premisse 2 cannot hold: you can't change the input without changing the physical system.

Try replacing 'program' by 'Turing machine'. Now premisses 1 & 3 are perfectly sound, but not premisse 2: the input is part of what defines the Turing machine, so you don't have any Turing machine that supports the non-triviality condition. Algorithms do, because each time the input is changed the same algorithm will correspond to a different Turing machine.

So for premisses 1 and 3, 'program' equates to 'Turing machine', whereas for premisse 2, 'program' equates to 'algorithm'. This is what makes these 'premisses' misleading.
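To put that distinction in rough code (my own sketch, with a Python function loosely standing in for an algorithm): the algorithm is the rule alone, while a concrete machine run pairs the rule with a fixed input, so changing the input gives a different machine even though the algorithm is unchanged.
Code:
def divide(a, b):
    # one algorithm; its behaviour on non-numeric input is simply not defined
    return a / b

# Two different "machine runs" in this loose sense: same rule, different fixed input.
run_1 = (divide, (10, 2))
run_2 = (divide, (9, 3))

for rule, args in (run_1, run_2):
    print(rule.__name__, args, "->", rule(*args))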

Ferris_bg said:
It's easy to imagine two machines with identical physical activity running a "consciousness program", one of which supports counterfactual states, while the other doesn't.
So no, it's not easy at all. You're saying that two machines can be both identical and have different input. If they are identical in the sense of a Turing machine, the input is the same. If they are identical in terms of algorithm, then both will support 'counterfactual states'.
 
  • #30
Lievo said:
So Pythagorean was right from the beginning: that's straw man building.

Lievo said:
I don't think the three premisses you cite are helpfull. In fact, there is a misleading use of the word 'program' with different meanings.
There's a reason every other thread in this forum ends up locked, and this is why: people want to disregard the academic value of philosophy and talk about whatever pops into their heads. Zuboff's story isn't a strawman, as any philosopher will attest. Further, Ferris_bg's post is on the mark in quoting Maudlin's 1989 paper, which is relevant to this thread. If you don't understand the literature, you need to ask; otherwise your ignorance of the literature here only confuses the discussion and takes us off course.
 
  • #31
Ferris_bg said:
Causal connection:
CC: For all events A and B, A and B are causally connected if and only if they are distinct and either A causes B, B causes A, or A and B are effects of a common cause.

Counterfactual dependence implies causal connection:
CDCC: If A and B are distinct events and B counterfactually depends on A, then A and B are causally connected.

Counterfactual dependence: Effects counterfactually depend on their causes, while causes do not counterfactually depend on their effects and effects of a common cause do not counterfactually depend on one another.

Distinctness means that the events are not identical, neither overlaps the other, and neither implies the other.

Now let's look at the following scenario, where a machine is waiting for some input:

Waiting for input: Input a number N (we press 4):

Event A:
option 1) if N == 4: cause B
option 2) else if N == 7: cause C
option 3) else cause D

We have the following physical activity when we input 4:
A (B | C | D) --[ 4 ]-> B, which can be reduced to: A -> B

Let's remove option 2 now and input 7:
A (B | D) --[ 7 ]-> D, which can again be reduced to: A -> D

Now let's remove option 3, input 4, and see what the possible results are:
A (B) --[ 4 ]-> B? Can we reduce this to A -> B?
case 1) Yes, we can: the computation "if N == 4: cause B" still exists.
case 2) No, we can't: "N == 4" is NOT defined, because the "else" condition is missing. That is the condition covering everything other than the number 4, and it is what keeps the concept of number defined. You can't define a single number without defining the whole concept, so the "else" condition is required.
Hi Ferris. Another good write-up!

I'd like to touch on the "else condition" you mention. In Maudlin’s paper, he shows how his beloved Olympia can be provided with all the right causal structure except that he adds “blocks” which don’t touch the machine and don’t interact unless counterfactual information is required, in which case those blocks prevent the machine from operating successfully. He calls this “argument by addition” and “argument by subtraction”. The blocks added are the addition and rusty chains are the subtraction.

In the end, Maudlin concludes:
Maudlin said:
Olympia has shown us at least that some other level besides the computational must be sought. But, until we have found that level and until we have explicated the relationship between it and computational structure, the belief that pursuit of the pure computationalist program will ever lead to the creation of artificial minds, or to understand the natural ones, remains only a pious hope.
I would say that Maudlin has concluded that the "else condition" is not a requirement for the creation of mind. He seems to have quite a few supporters in that regard, including Zuboff, Hilary Putnam and Mark Bishop (http://www.gold.ac.uk/computing/staff/m-bishop/). Mark is an interesting character. Being a professor of cognitive computing, you'd think such a person would naturally be a computationalist, but he's defended the idea that counterfactuals are not a necessary condition for over a decade now, writing perhaps a dozen or more papers on it.

I think the most fundamental problem with the idea of counterfactuals is that people expect computers to be what they define them as. However, computers are symbol manipulation systems, and as such, they are observer relative. There is nothing intrinsic to nature about them.

I’d be interested in your feedback.
 
Last edited by a moderator:
  • #32
Lievo,

I suppose what you have in mind about the "Turing vs algorithm" confusion is well illustrated in this reference:
'Philosophy of Mind' said:
Turing's thesis: If two systems are input-output equivalent, they have the same psychological status; in particular, one is mental just in case the other is.

For machine functionalism is consistent with the denial of Turing's thesis: It will say that input-output equivalence, or behavioral equivalence, isn't sufficient to guarantee the same degree of mentality. What arguably follows from machine functionalism is only that systems that realize the same Turing machine - that is, systems for which an identical Turing machine is a correct machine description - enjoy the same degree of mentality.

It appears, then, that Turing's thesis is mistaken: Internal processing ought to make a difference to mentality. Imagine two machines, each of which does basic arithmetic operations for integers up to 100: Both give correct answers for any input of the form n + m, n x m, n - m, and n / m for whole numbers n and m less than or equal to 100. But one of the machines calculates ("figures out") the answer by applying the usual algorithms we use for these operations, whereas the other has a file in which answers are stored for all possible problems of addition, multiplication, subtraction, and division for integers up to 100, and its computation consists in "looking up" the answer for any problem given to it. The second machine is really more like a filing cabinet than a computing machine; it does nothing that we would normally describe as "calculation" or "computation". Neither machine is complex enough to be considered for possible mentality; however, the example should convince us that we should consider the structure of internal processing, as well as input-output correlations, in deciding whether a given system has mentality. If this is correct, it shows the inadequacy of a purely behavioral test, such as the Turing test, as a criterion of mentality.
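A tiny sketch of those two machines (my own toy code, not from the book, and limited to addition to keep it short): the two are input-output equivalent for every allowed problem, yet one computes and the other merely looks the answer up.
Code:
# Machine 1: works the answer out with the usual algorithm.
def adder_compute(n, m):
    return n + m

# Machine 2: a pre-filled file of every answer; its "computation" is a look-up.
TABLE = {(n, m): n + m for n in range(101) for m in range(101)}

def adder_lookup(n, m):
    return TABLE[(n, m)]

# Behaviourally indistinguishable on every allowed input...
assert all(adder_compute(n, m) == adder_lookup(n, m)
           for n in range(101) for m in range(101))
# ...yet the internal processing is entirely different.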


Q_Goest,

By rejecting the non-triviality condition, Bishop and the others are only welcoming panpsychism, which does not contradict functionalism. Whether or not counterfactuals are crucial for consciousness, you are still nowhere near disproving token identity theories. Personally I favor the system approach as the best materialistic option. I am an idealist, on a side note.
 
  • #33
Ferris_bg said:
By rejecting the non-triviality condition, Bishop and the others are only welcoming panpsychism, which does not contradict with functionalism.
You got me thinking on this one... Actually, Bishop is not welcoming panpsychism; he's suggesting that counterfactuals are not physical, similar to what I've stated in the OP, so we can't invoke them as a requirement for computationalism. Once we dispense with counterfactuals (because they're simply wrong), computationalism predicts panpsychism, and panpsychism is unacceptable. So in order to avoid panpsychism, we have to avoid computationalism. One might think the answer is to insist on the non-triviality condition, but Bishop is suggesting we reject that, just as Maudlin is. Once we reject non-triviality (once we reject the requirement for counterfactuals), computationalism predicts panpsychism. Perhaps we should go through that argument as well...

Bishop's argument follows work by Putnam which is published in his book, "Representation and Reality". In the appendix, Putnam has an argument that attempts to discredit functionalism to the extent that he feels functionalism only implies behaviorism. Putnam is retired now, so Bishop has taken up his flag so to speak, and continues to work on advancing Putnam's argument.
 
Last edited:
  • #34
Q_Goest said:
Zuboff's story isn't a strawman as any philosopher will attest.(...) your ignorance
Any philosopher, really? :redface: (...) Well, you're downgrading from appeal to authority to simple insults. For the sake of the discussion you started, I hope you'll come to your senses and do something better: addressing my points, for example, or the points of anyone not already agreeing with you.

Ferris_bg said:
Lievo,

I suppose what you have in mind about the "Turing vs algorithm" confusion is well illustrated in this reference:
one of the machines calculates ("figures out") the answer by applying the usual algorithms we use for these operations, whereas the other has a file in which answers are stored (...) and its computation consists in "looking up" the answer for any problem given to it.
Very pertinent. Here the question would be: how was the look-up table filled in the first place?

It can be done by picking numbers at random, at least theoretically, so at least theoretically there is the possibility of a philosophical zombie. But in practice, you won't be able to do that, any more than you'll be able to see a ball tunnelling through a wall, even if it's theoretically allowed by quantum mechanics. What you'll need to do to fill the table is to compute it, and that's where consciousness should be said to be, if the algorithm belongs to this sub-class.

In a sense, the bits that constitute this message are no more conscious than the look-up table, and in both cases it's just the shadow of some mind.
 
Last edited:
  • #35
Q_Goest, you still haven't responded to post #15, which has sources and quotes from experts and confronts the physical premise of the thought experiment, a premise I feel I've demonstrated to be flawed, which makes the question not so productive. The strawman is being built for physicalism, I guess, not computationalism.

The thought experiment explicitly only applies to one kind of physical system: linear systems (which don't truly exist, but are convenient approximations). I would think this is an important epistemological consideration.
 
  • #36
Q_Goest said:
(...) Once we dispense with counterfactuals (because they're simply wrong), computationalism predicts panpsychism, and panpsychism is unacceptable. (...) Putnam is retired now, so Bishop has taken up his flag so to speak, and continues to work on advancing Putnam's argument.


Here is the response from David Chalmers to Putnam: http://consc.net/papers/rock.html
Does a Rock Implement Every Finite-State Automaton? said:
If Putnam's result is correct, then, we must either embrace an extreme form of panpsychism or reject the principle on which the hopes of artificial intelligence rest.


Here is a response from Mark Bishop to Chalmers: http://docs.google.com/viewer?a=v&q...OWBL2&sig=AHIEtbTl0sSFE9SFNzqkg0u6CSWaJ3523Q

Bishop asks what will happen if a robot R1, claimed to be conscious, is transformed step by step into a robot Rn by deleting the counterfactual states at each step (R1, R2, R3 ... Rn-1, Rn): how will the phenomenal perception change through the stages? His only argument, before concluding that the counterfactual hypothesis is wrong, is that "it is clear that this scenario is implausible".

Basically from such an example, illustrating the fading qualia argument by Chalmers (http://consc.net/papers/qualia.html), which has weak points too, but that is another discussion, the following results are possible:
1) Every robot has the same degree of mentality (M):
-1.1) M == 0 -> Functionalism is wrong, it is reduced to behaviorism.
-1.2) M != 0 -> Functionalism could only exist as a panpsychic theory.
2) Every robot has a different degree of mentality -> Counterfactual states don't play a causal role by themselves, but somehow removing them changes the degree of consciousness -> See 1.2.
3) R1 is conscious, while the others are not -> The non-triviality condition holds.

I think both Chalmers and Bishop are looking at one hypothetical thing from different angles, giving arguments in the context of their own views. Only time will show who was on the right side.
 
Last edited by a moderator:
  • #37
Pythagorean said:
Q_Goest, you still haven't responded to post #15, which has sources and quotes from experts and confronts the physical premise of the thought experiment, a premise I feel I've demonstrated to be flawed, which makes the question not so productive. The strawman is being built for physicalism…
Strawman discussions are Red Herrings. They draw away from the discussion at hand, forcing explanations to be written needlessly. Tin Man arguments are then required to return the Red Herrings to the pond. :frown: Let's refrain from letting strawmen fish for red herrings, otherwise the tin man has extra work to do.

Pythagorean said:
The thought experiment explicitly only applies to one kind of physical system: linear systems (which don't truly exist, but are convenient approximations). I would think this is an important epistemological consideration.
Regarding your references in #15 and nonlinear systems. I remember various discussions around separability of classical systems taking place on many threads. I remember one comment in particular, not from yourself, that went something like, "Why is separability so important to consciousness anyway?" Well, that's what we're talking about. Are classical systems separable or not? And the reason separability is important should now be well understood, now that we've read Zuboff's paper. There are those that wish to claim that nonlinear systems are more than the sum of their parts. There is something extra, though exactly what that extra is, is never detailed. Perhaps there are new laws of physics that are created by these systems that guide their evolution over time, like an unseen orchestra conductor imposing his will on the various musicians and guiding the band to play in a phase synchronized, large scale integrated fashion. That conductor, operating at an emergent level above the functional organization of the neurons, forms dynamic links mediating the synchrony of the orchestra over multiple frequency bands. The difference between separability and non-separability regards just such an orchestra conductor, an emergent set of laws that not only guides the individual pieces, but subsumes their behavior. Can any such laws be reasonably predicted? Or is there nothing more than the push and pull of molecules, acting on one another locally?

From Varela's reference:
But, at the same time, their existence is long enough for neural activity to propagate through the assembly, a propagation that necessarily involves cycles of reciprocal spike exchanges with transmission delays that last tens of milliseconds.
I haven’t read his paper and don’t have time to, but it seems this indicates a propagation of local, efficient causes, not an orchestra conductor. I see no reason to invoke any kind of new physical laws that subsume the laws acting at the local level.

Regardless of what explanation one gives for how neurons interact, we must select one of two possible alternatives for that interaction. Either:
1) The interaction is separable.
2) The interaction is not separable.

Computationalism assumes that neuron interaction can be described using classical mechanics, not a quantum mechanical one. I don’t think anyone really argues that point. So the question can also be reduced to:
1) Classical mechanical descriptions of nature are separable.
2) Classical mechanical descriptions of nature are not separable.

I'd say that if you don't have any higher level physical laws coming into being when neurons, or any nonlinear physical system, interact, then there are only local interactions, and thus those interactions can be expected to follow separability:
- to a very high degree of accuracy mathematically (analytically)
- to an exact degree physically (i.e., even if we can't figure out the math, it seems mother nature can; no one is claiming these systems aren't deterministic)

That is in fact what is normally accepted. If you’d like some references, I’d be glad to share what I feel is pertinent, but I think that would make a wonderful new thread. Seriously, it would be worth forming your thoughts around that and posting a new thread.

I’d also like to point out one more issue in support of separability of neurons. Those other references you provide, such as Hodgkin and Huxley’s famous papers and compartment models of neurons in general, along with analytical models such as the Blue Brain project which use those compartment models, are all taking these nonlinear systems and reducing them to linear ones. The result is a model of how the brain functions to a very high degree of accuracy, at least I’m sure the scientists in charge think so. Similarly, other highly nonlinear systems are studied using the same methods, finite elements and computational fluid dynamics are used to study these exact kinds of problems and they do so to a high degree of accuracy. The basic premise that a brain is separable seems to be pervasive throughout the scientific field. If the brain is separable, it is by definition a sum of parts that can be duplicated without forcing them to interact within the actual brain.

Ok, just one more point... we often allow for brain-in-a-vat thought experiments. We might even create thought experiments around large numbers of brains in vats, such as the Matrix. Now if brains as a whole can be put into vats and experiences duplicated, why can't we do the same thing to parts of brains? What's so special about the parts of the brain that doesn't hold for the brain as a whole?
 
  • #38
Ferris_bg said:
Here is the response from David Chalmers to Putnam ...
Clappin' for Ferris. Thanks for another excellent post. I'm very familiar with all of those papers and I think you've referenced them well. They sit on my desk with many others, highlighted and inked all over.

I don't think rejecting the counterfactual argument leads us into the stone wall that many think it does. In fact, I think it points to a better theory of mind. It's just that no one seems to see the door yet, and our bias for computationalism will be hard to change.
 
  • #40
Lievo said:
I'd love to say this sits on my desk with many others. Unfortunately, the link doesn't work. You're talking about http://www.ncbi.nlm.nih.gov/pubmed/12470628 ?
Works for me, but yes, it's the same one.
 
Last edited by a moderator:
  • #41
Q_Goest said:
I haven’t read his paper and don’t have time to
Well then, just one minor point until you find the time to read it.

Q_Goest said:
Computationalism assumes that neuron interaction can be described using classical mechanics, not a quantum mechanical one. I don’t think anyone really argues that point.
You'll find many who argue that point, simply because it's plain false. Quantum mechanics is as computable as classical mechanics. As the name indicates, computationalism assumes computability, not computability by classical mechanics only. Penrose explains this very well, and that's why he postulates that the mind must rely on yet-undiscovered quantum laws: he knows present quantum mechanics does not allow one to go outside computationalism.

Gokul43201 said:
Works for me, but yes, it's the same one.
Thanks. The bug seems specific to chrome.
 
Last edited by a moderator:
  • #42
Q_Goest,

You agree, I hope, that there's no requirement or law in classical physics that philosophical reductionism or separability apply to all systems. Scientists practice scientific reductionism; there's a lot of intuition and creativity that goes into tying it all together.

I have never heard that the whole must be greater than the sum of the parts for dynamical systems. That's strange. No... the statement (that is true, not a claim) is that the sum of the parts is not equal to the whole.

And here's how it's quantified (forgive my handwriting, didn't feel like tex'n it):

F = force of gravity
G = gravitational constant
M = reference mass
m = test mass
r = distance between masses

Here's what the superposition principle says when it holds:
[attached image superpos1.jpg: superposition holding in the masses, F(m1) + F(m2) = F(m1 + m2), i.e. GMm1/r^2 + GMm2/r^2 = GM(m1 + m2)/r^2]



In the two examples below, I will show superposition holding for masses when considering gravitational force. But I will also show you that, in the same classical system, the dynamics are not separable: superposition does not hold for the distance between masses when considering gravitational force.

[attached image superpos2.jpg: superposition failing in the distances, F(r1) + F(r2) = GMm/r1^2 + GMm/r2^2, which is not equal to F(r1 + r2) = GMm/(r1 + r2)^2]


So you see that you cannot separate the dynamics out like you can the mass (assuming, of course, that r is changing... which it is... which is why Newton couldn't solve the 3-body problem).

There isn't anything fundamentally new about this. There's no new physics necessary. Unless, of course, you mean the kind of new physics that is being discovered every day already that you're not paying attention to. Still, nothing fundamental is changing; it's just an extension of the old stuff. This is the high-hanging fruit of classical physics that hadn't started getting picked until Poincaré's geometrical analysis and the invention of computers that made numerical solutions tangible.
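A quick numerical check of the two attached examples (my own sketch; the constants, masses and distances are arbitrary values chosen only for illustration):
Code:
G, M = 6.674e-11, 5.0e10          # gravitational constant and an arbitrary reference mass

def F(m, r):
    """Magnitude of the Newtonian gravitational force on mass m at distance r from M."""
    return G * M * m / r**2

m1, m2 = 2.0, 3.0
r1, r2 = 10.0, 20.0

# Superposition in the masses (same r): F(m1 + m2, r) equals F(m1, r) + F(m2, r).
print(F(m1 + m2, r1), F(m1, r1) + F(m2, r1))    # the same value

# No superposition in the distances: F(m, r1 + r2) differs from F(m, r1) + F(m, r2).
print(F(m1, r1 + r2), F(m1, r1) + F(m1, r2))    # very different values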
 

  • #43
Q_Goest said:
Well, that’s what we’re talking about. Are classical systems separable or not?

This is the big presumption. And people who model complexity - Robert Rosen in Essays on Life Itself, Steven Strogatz in Sync, Scott Kelso in Dynamic Patterns - would say that systems are not in fact separable into their local atoms.

There is both top-down causality and bottom-up. So a complex system cannot be fully accounted for as the sum of its efficient causes.

To avoid confusion here, note that there are two ways of viewing emergence.

1) A collection of atoms produces collective constraints which form an emergent level of downwards causation. So the atoms are what exist, the constraints simply arise.

2) Then there is the strong systems view (following Peirce) where both the local atoms and the global constraints are jointly, synergistically, emergent. So you don't have atoms that exist. They too are part of what emerges as a system develops.

This second view fits an understanding of brain function rather well. As for example with the receptive fields of neurons. A neuron does not fire atomistically. Its locally specific action is shaped up by a prevailing context of brain activity. Its identity emerges as a result of a focusing context. The orchestra analogy is indeed apt.

To anyone who has studied neuroscience, any discussion that assumes neural separability just sounds immediately like hokum.

A neuron on its own doesn't even know how to be an efficient cause. It can only fire in a rather unfocused way. You can talk about recreating the global context that tells the neuron how to behave, separating this information off in some impulse cartridge, but you have to realize that this is global information and not itself a collection of atomistic efficient causality.

An analogy: imagine trying to scoop a whorl of turbulence out of a stream with a jam jar. The whorl is indeed non-separable.
 
  • #44
Please explain what you're doing at the bottom of the second photo.

I'm assuming k = G * M * m

Then you have:

k/r^2 + k/r^2 (not equal) k/(r1+r2)^2

What is r1, r2 ? Explain how you end up with that last equation.

I don't know that it matters though. I doubt our definitions of separability are going to match.
 
  • #45
Q_Goest said:
Please explain what you're doing at the bottom of the second photo.

I'm assuming k = G * M * m

Then you have:

k/r^2 + k/r^2 (not equal) k/(r1+r2)^2

What is r1, r2 ? Explain how you end up with that last equation.

I don't know that it matters though. I doubt our definitions of separability are going to match.

yes on k

I just plugged F(r) into the superposition principle.

F(r1) + F(r2) = k/r1^2 + k/r2^2

F(r1 + r2) = k/(r1+r2)^2

they're not equal, superposition doesn't hold.

Q_Goest said:
I don't know that it matters though. I doubt our definitions of separability are going to match.

I'd think they should still be consistent with the thought experiment. That's what my focus is: showing how the actions taken in the thought experiment would affect a real physical system. Proponents of the thought experiment seem to be claiming that dynamically decoupling the neurons wouldn't affect it.

Theoretically and experimentally, we know there are physical systems we can't treat independently. We know that group properties aren't always properties of individual particles (Temperature, Pressure, feedback, force). We happen to model neural systems as such a class of systems. the 3+ body problem can't be reduced to the 2-body problem and the 2-body problem can't be reduced to a 1-body problem (because a 1-body problem is meaningless, there's no force associated with a single body. You have to bring a test mass along to measure the force of interaction between the two masses.)
 
  • #46
Ok, thanks for the correction. Now what are r1 and r2? Since r is the distance between masses, I still don't see what these are.
 
  • #47
Q_Goest said:
Ok, thanks for the correction. Now what are r1 and r2? Since r is the distance between masses, I still don't see what these are.

They're two different masses (so two different distances). You can designate them m1 and m2 if you want, I took them to be equal so that the proof would be a little simpler. Even if you distinguish the masses, you still have the same problem:

F(m1+m2,r1+r2) != F(m1,r1) + F(m2,r2)

You can actually see this in the reduced mass equation in which the masses themselves actually get coupled, and then you get:

m1m2/(m1+m2)

So the resulting acceleration doesn't come from summing the masses; the sum of the masses does not give you the appropriate value for the whole system. The appropriate value (the reduced mass) is actually less than the sum.
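For instance, with m1 = m2 = 1 kg the reduced mass is (1 × 1)/(1 + 1) = 0.5 kg: half of either individual mass, and only a quarter of the sum m1 + m2 = 2 kg.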
 
  • #48
Pythagorean said:
They're two different masses (so two different distances). You can designate them m1 and m2 if you want, I took them to be equal so that the proof would be a little simpler.
I'm still not clear on what r1 and r2 are, but I'm going to step out on a limb and say that you're thinking of putting a third mass in with these two, and now you're suggesting that r1 is the distance between m1 and the new mass and r2 is the distance between m2 and the new mass. Is that what you mean? (please confirm)

In this case, the gravitational vectors are additive.
Gravitational Field for Two Masses
The next simplest case is two equal masses. Let us place them symmetrically above and below the x-axis: (see link for picture)

Recall Newton’s Universal Law of Gravitation states that any two masses have a mutual gravitational attraction. A point mass m = 1 at P will therefore feel gravitational attraction towards both masses M, and a total gravitational field equal to the vector sum of these two forces, illustrated by the red arrow in the figure.

The Principle of Superposition
The fact that the total gravitational field is just given by adding the two vectors together is called the Principle of Superposition.
Ref: http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/GravField.htm
In fact, given a completely static set of n gravitational bodies, the gravitational field at any point is easily calculable by adding the gravitational contribution of each mass to every point in the field. The problem arises when we allow the masses to move around and the differential equations become unsolvable. But that doesn't make such a system non-separable. I'm not an expert on the n-body problem by any means, so I can't authoritatively discuss the issues. However, the philosophy of separability in classical mechanics is easy enough to grasp for engineers and scientists not in that particular field. More later.
 
  • #49
Q_Goest said:
I'm still not clear on what r1 and r2 are, but I'm going to step out on a limb and say that you're thinking of putting a third mass in with these two, and now you're suggesting that r1 is the distance between m1 and the new mass and r2 is the distance between m2 and the new mass. Is that what you mean? (please confirm)

Remember that k = GMm. M is the reference mass (it's at the origin of my coordinate system). It's not a real mass, but it's a way to measure the field as if there were a mass there. But I'm making it the center of the reference frame: it's at the origin, and you can divide by it to see the field independent of it, since it's a constant (our dependent variable is the r's in the nonlinear case).
http://en.wikipedia.org/wiki/Test_particle

m becomes m1 and m2.
so:
GM(m1/r1^2 + m2/r2^2) if the masses are different; m comes out of the parentheses when m1 = m2.

The equation for Newton's gravity is a two-body problem already, but with one of the masses as the center of reference:
F = GMm/r^2

but it can be more complicated than that with a more general reference frame that leads to the reduced mass:
http://en.wikipedia.org/wiki/Two-body_problem

Q_Goest said:
In this case, the gravitational vectors are additive.

Ref: http://galileo.phys.virginia.edu/classes/152.mf1i.spring02/GravField.htm
In fact, given a completely static set of n gravitational bodies, the gravitational field at any point is easily calculable by adding the gravitational contribution of each mass to every point in the field.

I completely agree, and this is actually an impressive and fascinating result (have you read Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html ? We should be amazed by this!).

But regardless of whether you can appreciate the elegance of this, it doesn't make the whole system reducible to one component of the system. The value of the gravitational field isn't the whole story. We also have motion to consider, and the consequences of motion: collisions.

Q_Goest said:
The problem arises when we allow the masses to move around and the differential equations become unsolvable. But that doesn't make such a system non-separable. I'm not an expert on the n-body problem by any means, so I can't authoritatively discuss the issues. However, the philosophy of separability in classical mechanics is easy enough to grasp for engineers and scientists not in that particular field. More later.

Reduction and separation in physics and engineering (especially 100 and 200-level courses) is ontological. Approximations and hand-waving are rampant in both disciplines and they're perfectly acceptable as long as they give us access to the switches and levers of nature. We will model things as random or do something to ensure the signal to noise ratio is high to do our best to ignore the nonlinearities.

We often take approximations only to first order, rarely to second order, and the whole point is so that they remain linear, so that F(x1) + F(x2) = F(x1 + x2). Then we can just add all the x's together and do the operation (F) once, and it's the same result as if we did each one individually and then added them together. In a complex system, the term on the left is the only meaningful term. You can't add all the objects together and perform the same operation and get the same result. x1 and x2 are now coupled: they interact with each other.

Example: if different frequencies of electromagnetic waves interacted with each other, we could never separate them into bands for information transfer as we do. However, since electromagnetic waves ARE separable, because they OBEY superposition, we can stand in the middle of a million intertwining signals and, as long as each one has a unique frequency band, we can separate them. This is directly because superposition holds! And there are many physical systems for which superposition does not hold!
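A small sketch of that point (my own example, using numpy; the frequencies and amplitudes are arbitrary): because the two signals add without interacting, a Fourier transform of the mixture recovers each band on its own.
Code:
import numpy as np

fs = 1000                                      # sample rate, Hz
t = np.arange(0, 1, 1 / fs)
signal_a = 1.0 * np.sin(2 * np.pi * 50 * t)    # 50 Hz "station"
signal_b = 0.5 * np.sin(2 * np.pi * 120 * t)   # 120 Hz "station"

mixture = signal_a + signal_b                  # superposition: the channels don't interact

spectrum = np.abs(np.fft.rfft(mixture)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1 / fs)

# Two clean peaks, one per station, each with its original amplitude.
for f0 in (50, 120):
    idx = int(np.argmin(np.abs(freqs - f0)))
    print(f0, "Hz peak amplitude ~", round(float(spectrum[idx]), 2))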

This is pretty much the meat of most physics and engineering courses (I've been through a whole undergraduate physics degree and I take many electrical engineering courses for my interdisciplinary graduate degree). Each department only has one graduate (600-level) class that offers nonlinear techniques... and they only offer them once every other year. I hope this gives an idea of how little prevalence it has in standard undergraduate physics and engineering courses. When we see nonlinear equations, we approximate them (for example, the simple pendulum) and remove the nonlinearity. That's a great deal of the training of undergraduate physics and engineering courses: making your system simpler so you can solve it faster and be more productive and efficient with government dollars.

It's not a very epistemological approach...

to ground this in the literature, here's an abstract:

The reduction of dynamical systems has a rich history, with many important applications related to stability, control and verification. Reduction is typically performed in an "exact" manner—as is the case with mechanical systems with symmetry—which, unfortunately, limits the type of systems to which it can be applied. The goal of this paper is to consider a more general form of reduction, termed approximate reduction, in order to extend the class of systems that can be reduced.

Using notions related to incremental stability, we give conditions on when a dynamical system can be projected to a lower dimensional space while providing hard bounds on the induced errors, i.e., when it is behaviorally similar to a dynamical system on a lower dimensional space. These concepts are illustrated on a series of examples.
(emphasis mine)

Approximate Reduction of Dynamical Systems
Paulo Tabuada, Aaron D. Ames, Agung Julius and George Pappas

Proceedings of the 45th IEEE Conference on Decision & Control
Manchester Grand Hyatt Hotel
San Diego, CA, USA, December 13-15, 2006
 
Last edited by a moderator:
  • #50
Lievo said:
As the name indicates, computationalism assumes computability, not computability by classical mechanics only. Penrose explains this very well, and that's why he postulates that the mind must rely on yet-undiscovered quantum laws: he knows present quantum mechanics does not allow one to go outside computationalism.
Thanks Lievo, I actually learned something here. After reviewing a few definitions of computationalism I've come to the conclusion that computationalism doesn't necessarily rule out quantum theories of mind. So where I've used the term "computationalism" in the OP, I should explain that what I mean by it is the classical theories of mind that make up the current paradigm.

The Stanford Encyclopedia of Philosophy provides a very condensed definition of the computational theory of mind (http://plato.stanford.edu/entries/computational-mind/):
Over the past thirty years, it is been common to hear the mind likened to a digital computer. This essay is concerned with a particular philosophical view that holds that the mind literally is a digital computer (in a specific sense of “computer” to be developed), and that thought literally is a kind of computation. This view—which will be called the “Computational Theory of Mind” (CTM)—is thus to be distinguished from other and broader attempts to connect the mind with computation, including (a) various enterprises at modeling features of the mind using computational modeling techniques, and (b) employing some feature or features of production-model computers (such as the stored program concept, or the distinction between hardware and software) merely as a guiding metaphor for understanding some feature of the mind.

However, after reading through a paper at http://onlinelibrary.wiley.com/doi/10.1111/j.1747-9991.2009.00215.x/abstract ...

I've always taken computationalism to be the thesis that the interaction of neurons is a classical interaction and is what gives rise to the various phenomena, such as qualia and self awareness, that can be grouped under the heading of "consciousness". Further, that any quantum mechanical description is not only unnecessary, but would not be considered a computational theory of mind. That neuroscience takes for granted that neuron interactions are 'classical', in the sense that they can be described by classical mechanics, seems to be an axiom that I've always subscribed to, and for the sake of this thread, I'd ask that we go along with this view, only because it is by far the most prevalent one to date. In fact, theories that suggest that quantum mechanical interactions between neurons give rise to consciousness are widely considered "crackpottery".

For the sake of this thread, I'd like to look only at computationalist theories that presume that consciousness emerges from the classical interaction of neurons, not any presumed quantum mechanical ones. Further, any theory (such as Hameroff's) that suggests consciousness emerges from quantum mechanical interactions within a neuron can be distinguished from the classical mechanical versions of computationalism. In the future, I'll make sure to distinguish between classical computationalist theories of mind and quantum mechanical ones. Thanks again for pointing that out.
 
Last edited by a moderator: