What is the special signal problem and how does it challenge computationalism?

  • Thread starter: Q_Goest
  • Tags: Signal

Summary
The "special signal problem" challenges computationalism by questioning whether a brain can maintain consciousness when its parts are disconnected yet still receive simulated signals. Arnold Zuboff's thought experiment illustrates this by depicting a brain in a vat that is cut into halves and then smaller sections, each receiving signals from an "impulse cartridge." Despite the signals being identical to those in a connected brain, the argument posits that without actual causal connections, phenomenal consciousness cannot be sustained. This leads to the conclusion that a unique "special signal" is necessary for consciousness, as mere duplication of signals does not suffice. The discussion emphasizes the need for a deeper understanding of how counterfactual alternatives are crucial to consciousness within any theory of mind.
  • #61
apeiron said:
This seems a good example to focus on, but I am confused as to what you are actually arguing.

A computationalist building an IC would probably say that the differing lengths/transmission times of the input axons would be one of those features he would be able to replicate (given unlimited resources).

Are you agreeing this is in principle possible, or arguing against it?

Perhaps you are saying yes it is possible for a single neuron, but it would not be possible for a functioning network of many neurons (for some non-linear reason?).

I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant.

Theoretically, I'm referring strictly to biophysical models of the Hodgkin-Huxley type, since they've been accepted by the neurobiology community for nearly 60 years now.

The motion of each particle isn't traced through Euclidean space like a planetary model. The whole neuron is described by four differential equations (none of which depend on space) that model the interaction between ion channel activation and membrane potential.
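For concreteness, here is a minimal sketch of what such a model looks like (my illustration, using the standard textbook squid-axon parameters; a real research model would be more elaborate). The four state variables are the membrane potential V and the gating fractions m, h, n; notice that no spatial coordinate appears anywhere:

```python
import numpy as np

# Minimal Hodgkin-Huxley point-neuron sketch (classic squid-axon parameters).
# Four state variables: membrane potential V and gating variables m, h, n.
C_m = 1.0                            # membrane capacitance (uF/cm^2)
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances (mS/cm^2)
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials (mV)

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """Advance the four coupled ODEs by one Euler step (dt in ms)."""
    I_Na = g_Na * m**3 * h * (V - E_Na)   # sodium current
    I_K  = g_K * n**4 * (V - E_K)         # potassium current
    I_L  = g_L * (V - E_L)                # leak current
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    dm = alpha_m(V) * (1 - m) - beta_m(V) * m
    dh = alpha_h(V) * (1 - h) - beta_h(V) * h
    dn = alpha_n(V) * (1 - n) - beta_n(V) * n
    return V + dt * dV, m + dt * dm, h + dt * dh, n + dt * dn

# Drive the neuron with a constant current and it spikes repeatedly.
V, m, h, n = -65.0, 0.05, 0.6, 0.32
for _ in range(5000):                 # 50 ms of simulated time
    V, m, h, n = hh_step(V, m, h, n, I_ext=10.0)
print(f"final membrane potential: {V:.2f} mV")
```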

So what's being modeled is information transfer. There must be some Euclidean aspect, since the electrochemical theory behind the model employs the concept of diffusion, but I'm still having trouble seeing how the separability of volumes of Euclidean space implies that you can separate a system into its components, make all the components behave as they did in the system, and still consider it a system.

As an ultimate proof, though, if we keep separating a classical system, we eventually get a quantum system. So there's at least some contradiction with the hard statement "classical systems are not separable".

What's especially interesting about this line of reasoning is that it brings us back to the "explanatory gap" between quantum physics and classical physics. And, interestingly enough, dynamical systems theory has a foothold in the subject (i.e. quantum chaos).
 
  • #62
Pythagorean said:
I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant...So what's being modeled is information transfer.

But this does not really get to the nub of the argument then.

The hard problem depends on naive "separability". And if consciousness is just about the existence of information (in a static pattern), then Putnam's argument that any rock implements every finite state automaton may in principle go through. You are hinting that there is something more to be considered in talking about "information transfer" - states/patterns must change in some systematic fashion. But what is the nature of that essential change? And is it separable or non-separable?

Separable is a possibly confusing term here as it is properly a quantum states distinction. But we can keep using it as it is synonymous with digital (vs analog), discrete (vs continuous), reductionist (vs holistic), atomistic (vs, well, again holistic) in this discussion.

Basically the hard problem arises when science can appear to separate the brain into its computational atoms, its digital components, and not lose anything essential in terms of causality. At some point a conscious human becomes a non-conscious heap of chemistry or a silicon zombie, or whatever. We are left with just the material, just the physical substance, and no causal account of the higher level "property".

Now it could be argued that there is indeed a hard boundary on separability in QM. But unless you are arguing consciousness is essentially a QM phenomenon - for which there is no good scientific backing - then this boundary seems irrelevant to philosophic thought experiments.

It could also be argued that non-linearity is another kind of hard boundary on separability - which is where I thought you were going with the cite of the three-body problem. Again, this is probably a definite boundary on separability (if non-linearity is being equated with an essential continuity of nature). Chaos theory would seem to say we cannot in practice pick out discrete instances in spacetime (measure exact initial conditions).
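As an aside, that last point is easy to demonstrate numerically. A minimal sketch (mine, not from the thread), using the logistic map as a stand-in for any chaotic dynamics: two trajectories that agree to twelve decimal places become macroscopically different within a few dozen steps, so no finite measurement of initial conditions pins down the future.

```python
# Sensitive dependence on initial conditions in the logistic map x -> r*x*(1-x).
r = 4.0                       # fully chaotic regime
x, y = 0.4, 0.4 + 1e-12       # "identical" initial conditions, to 12 decimals
for step in range(60):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if abs(x - y) > 0.5:      # error has grown to macroscopic size
        print(f"trajectories diverged by step {step}")
        break
```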

However I personally don't think non-linearity is a killer argument here. First, because chaos theory really models reality as it exists between the bounding extremes of the continuous and the discrete (if you have whorls of turbulence erupting, they are in some sense discrete structures in a continuous flow). And second because brain components like synapses, neurons and cortical circuits appear to be in some definite way structures with computational aspects.

For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, can do what they can do, because of some non-linear magic.

(Then I mentioned a third possible hard boundary on speculation - the relativistic issue of information density. Which like the QM separability bound, is a physically certain boundary, but again arguably irrelevant because such constraints only kick in at physically extreme scales).

Yet a further constraint on naive separability could be the argument that evolution is efficient and so the human brain (the most complex arrangement of matter in the known universe) is most likely to be close to the actual physical limits of complexity. Whatever it is that brains do to be conscious, we can probably expect that the way brains do it can't be beat. This makes it far less plausible that a technologist can come in and freely start stretching out the wiring connections, simplifying the paths, speeding up the transmission rates.

It might be argued by the likes of Zuboff that the brain as a natural machine is constrained by energetics - it is optimal, but optimised along a trade-off between consciousness production and metabolic cost. So a technologist with unlimited energy to make it all happen, could unpack a brain into a very different set of components. But the argument that evolution is efficient at optimisation, and so brains would resist the kind of naive separation proposed just on the grounds that there must be something significant about its physical parameters (its particular transmission times, its particular connection patterns, its particular molecular turnover, etc), must be at least dealt with in a thought experiment.

So we have two hard (but weak because they are distant in scale) constraints on speculation - QM and relativistic bounds.

We have a possible but unlikely constraint - non-linearity.

And we have the evolution optimisation constraint - which is probably weak here because it is easy enough I guess to imagine separating the energetic cost of replicating brain function.

Which all comes back to my original line of attack on the notion of separability. The systems view - which postulates a complex Aristotelean causality based on the interaction of bottom-up construction and top-down constraints - says reality is always dichotomised (divided into a local~global causality as just described) but never actually separated (reducible to either/or local causes, or global causes).

So what is going on here is that brains have both form and substance. They have their global organisation and their local components. There is indeed a kind of duality, but it is not the broken Platonic or Cartesian duality which leads to hard problems or waffling about emergence and supervenience. Instead, there is a duality of limits. You have a separation towards two different kinds of thing (bottom-up and top-down causation), but not an actual separation that divides reality. Just an asymptotic approach that produces two different "kinds" and in turn allows for the emergence of complexity in the synergistic interaction that results.

Translated into neuroscience, we should expect that the brain looks digital, computational, componential at the local level. This is what it is trying to be as this is what bottom-up, constructive, additive, causality looks like. But then we should also equally expect to be able to find the complementary global aspect to the brain which shows that it is an unbroken system. It must also be a system that can organise its own boundary constraints, its own global states of downward acting causality.

Which, as any neuroscientist knows, is what we see. Attention, anticipation, etc. Components have a definite local identity only because they are embedded in enactive contexts. Experiments at many levels of brain function have shown this. It is now a basic presumption of neuroscientific modelling (as shown, to pick a cutting-edge example, by Friston's Bayesian brain).

So if the hard problem arises because of a belief in physical separability, there are a number of arguments to be considered. But the best argument IMHO is the systems one. And note this explains both why consciousness and brain function are not ontically separated AND why they also appear as separated as possible.

Running the argument once more, local and global causality are in principle not separate (otherwise how do they interact?). Yet they are separated (towards physically-optimal limits - otherwise how would they be distinctive as directions of causality?).

Of course, this means that rather than selling an argument just about brains, you are trying to sell an argument about reality in toto.

But then if your thinking about the nature of things keeps leading you to the impasse of the hard problem, and its unpalatable ontic escape clauses like panpsychism, well you know that, Houston, you have a problem.

The hard problem should tell people that reductionism really is broke. If you accept the premise of separable causality, you end up looking over the cliff. So turn round and look at what got left behind. You have to rediscover the larger model of causality that was there when philosophy first got started.
 
  • #63
apeiron,

separability

That is originally where I was going with the n-body problem: I was previously using a casual definition of separability (similar to how you defined it just now), but Q_Goest introduced a very rigorous definition of separability, and my point was that it seems too rigorous to be relevant to what we're talking about. His definition, as I was saying, seems to pertain to Euclidean space, and that's not explicitly realized in the neural models I work with.

But yes, that aside, the definition of separable I was using before Q_Goest introduced this more rigorous definition was more to the point: "can you separate neurons and call it the same physical system?"

nonlinearity

I don't mean to imply at all that nonlinearity is a sufficient condition for consciousness. It's possibly necessary, but doubtfully sufficient.

The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describe our physical reality.

apeiron said:
For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, can do what they can do, because of some non-linear magic.

There's no magic here, though. Nonlinearity is just unintuitive. It may appear as magic to someone who fails to understand the underlying mathematical principles, but it's really not. Everything "adds up" once you do the rigorous mathematical work.

And... the two views you presented here are not mutually exclusive. In fact (as I've already mentioned) a computer is realistically nonlinear itself. We simply accept slop, use filters, and set signal-to-noise ratios high enough that we can ignore the nonlinearities as "noise". So 1 is represented by ~5 V and 0 is represented by ~0 V, but 4.9 V and 0.005 V will work for a "1" and a "0" as well. We designed the system to be easy to interpret, regardless of the irregularity in the signal (as long as the signal is sufficiently larger than those irregularities).
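A toy sketch of that design choice (the 5 V / 0 V levels and the 2.5 V threshold are illustrative, not from any particular logic family): the thresholding step is exactly what lets us discard the analog slop as noise.

```python
import random

def read_bit(voltage, threshold=2.5):
    """Interpret a noisy analog voltage as a clean logical bit."""
    return 1 if voltage > threshold else 0

bits = [1, 0, 1, 1, 0]
# Transmit each bit as ~5 V or ~0 V plus analog noise, then re-threshold.
received = [read_bit(5.0 * b + random.gauss(0, 0.3)) for b in bits]
print(received == bits)   # True with overwhelming probability at this SNR
```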

So we actually ignore a lot of the "interesting" physics going on in a computer because we don't want "interesting"; we want predictable, because computers are extensions of our consciousness (we are their Maxwellian Demons).
 
  • #64
Pythagorean said:
The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describe our physical reality.

OK, agreed, and likewise this is why I point to the modelling of complexity by theoretical biologists such as Howard Pattee. Non-linearity is simple complexity; then there is the analysis of systems complexity - the control of rate-dependent processes (i.e. self-organising, dynamical) by rate-independent information (such as genes, words, neural connection patterns).
 
  • #65
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain, is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice, the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts" and thus aren’t separable has been appealed to by a few scientists and philosophers, but that argument hasn’t been widely accepted. Further, it changes the present theory of mind. It says that digital computers can’t be conscious, for starters, since it is obviously very easy to duplicate the physical changes in state any portion of a computer undergoes. So now we need to say that some computational systems can be conscious and other computational systems can’t be, regardless of whether or not they are functionally the same.

If we’d like to use the argument that nonlinear systems are not separable, but we find just one nonlinear system that is in fact separable, then we have an even more difficult job of finding a reason why one nonlinear system can be conscious but another can not. So let’s look at the n-body problem for a moment and contemplate whether or not it might be separable. To show separability, we only need to show that within a given volume of space, and over some time interval dt (i.e. a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R within an n-body system that is identical with some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems.

Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical, which is just one way to understand what the story is about and what the problem is that we need to resolve.
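A sketch of how one might operationalize that test numerically (my illustration, not Zuboff's formalization; the masses, positions, and tolerance are hypothetical). It checks the criterion at one time slice; checking over the interval dt would mean repeating the comparison at each step. The example arrangement produces only an approximate far-field match, not Zuboff's exact identity:

```python
import numpy as np

G = 6.674e-11  # gravitational constant (SI units)

def field_at(point, masses, positions):
    """Net Newtonian gravitational field at `point` due to an n-body system."""
    g = np.zeros(3)
    for m, p in zip(masses, positions):
        r = p - point
        g += G * m * r / np.linalg.norm(r) ** 3   # attraction toward mass m
    return g

def regions_match(sample_points, system1, system2, tol=1e-12):
    """Separability test, sketched: is the field at every sampled point of
    region R the same (to tolerance) whether produced by system 1 or by
    system 2, even though the two systems as wholes are different?"""
    return all(
        np.allclose(field_at(p, *system1), field_at(p, *system2), atol=tol)
        for p in sample_points
    )

# Hypothetical example: one mass versus the same total mass split in two,
# sampled far from both arrangements, where the fields nearly coincide.
far_points = [np.array([1e9, 0.0, 0.0]), np.array([0.0, 1e9, 0.0])]
sys_a = ([2e24], [np.array([0.0, 0.0, 0.0])])
sys_b = ([1e24, 1e24], [np.array([1e6, 0.0, 0.0]), np.array([-1e6, 0.0, 0.0])])
print(regions_match(far_points, sys_a, sys_b, tol=1e-8))   # True (far field)
```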
 
  • #66
apeiron said:
The standard philosophical paradoxes arise because it is presumed that complex systems must be reducible to their "atoms", their component efficient causes. But a systems approach says that causality is holistic. It is not separable in this fashion. You cannot separate the bottom-up from the top-down as they arise in interaction.

Systems do of course appear to be composed of local component causes. But this local separability is in fact being caused by the top-down aspects of the system.


All this bottom-up and top-down talk, in as much as it's true, bears a striking similarity to Life (aliveness, being alive) if applied to the universe. It may even shed light on the "special signal problem".
 
  • #67
Q_Goest said:
I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain, is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice, the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

Why is it the presumption here that neurons are separable rather than the converse?

But anyway, I have already given two "in principle" limits in QM and relativistic event horizons. Neurons would not be separable beyond these limits (or do you disagree?).

Then there is the "middle ground" attack (as QM and black holes clearly kick in only at the opposing extremes of physical scale).

And here I would suggest that networks of neurons, if ruled by global dynamics such as oscillatory coherence (an experimentally demonstrated correlate of consciousness), can be presumed to be NP-complete.

This is the kind of argument indeed used within theoretical biology to show biology is non-computable - for example, the protein folding problem. You can know the exact sequence of bases, yet not compute the final global relaxation minima.

Here is an excerpt from Pattee's paper, CAUSATION, CONTROL, AND THE EVOLUTION OF COMPLEXITY, which explains how this is relevant (and how complexity is not just non-linearity).

The issue then is how useful is the concept of downward causation in the formation and evolution of complex systems. My conclusion would be that downward causation is useful insofar as it identifies the controllable observables of a system or suggests a new model of the system that is predictive. In what types of models are these conditions met?

One extreme model is natural selection. It might be considered the most complex case of downward causation since it is unlimited in its potential temporal span and affects every structural level of the organism as well as social populations. Similarly, the concept of fitness is a holistic concept that is not generally decomposable into simpler components. Because of the open-ended complexity of natural selection we know very little about how to control evolution, and consequently in this case the concept of downward causation does not add much to the explanatory power of evolution theory.

At the other extreme are simple statistical physics models. The n-body problem and certainly collective phenomena, such as phase transitions, are cases where the behavior of individual parts can be seen as resulting from the statistical behavior of the whole, but here again the concept of downward causation does not add to the model's ability to control or explain.

A better case might be made for downward causation at the level of organism development. Here, the semiotic genetic control can be viewed as upward causation, while the dynamics of organism growth controlling the expression of the genes can be viewed as downward causation. Present models of developmental control involve many variables, and there is clearly a disagreement among experts over how much control is semiotic or genetic and how much is intrinsic dynamics.

The best understood case of an essential relation of upward and downward causation is what I have called semantic closure (e.g., Pattee, 1995). It is an extension of von Neumann's logic of description and construction for open-ended evolution. Semantic closure is both physical and logical, and it is an apparently irreducible closure, which is why the origin of life is such a difficult problem. It is exhibited by the well-known genotype-phenotype mapping of description to construction that we know empirically is the way evolution works. It requires the gene to describe the sequence of parts forming enzymes, and that description, in turn, requires the enzymes to read the description.

This is understood at the logical and functional level, but looked at in detail this is not a simple process. Both the folding dynamics of the polypeptide string and specific catalytic dynamics of the enzyme are computationally intractable at the microscopic level. The folding process is crucial. It transforms a semiotic string into a highly parallel dynamic control. In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).

Again, one still finds controversies over whether upward semiotic or downward dynamic control is more important, and which came first at the origin of life. There are extreme positions. One extreme sees the universe as a dynamics and the other extreme sees the universe as a computer. This is not only a useless argument, but it obscures the essential message.

The message is that life and the evolution of complex systems is based on the semantic closure of semiotic and dynamic controls. Semiotic controls are most often perceived as discrete, local, and rate-independent. Dynamic controls are most often perceived as continuous, distributed and rate-dependent. But because there exists a necessary mapping between these complementary models it is all too easy to focus on one side or the other of the map and miss the irreducible complementarity.
 
  • #68
Q_Goest said:
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

The Maxwellian argument and the nonlinear argument are two different lines of reasoning.

I'm not saying that it's only in practice that complex systems are inseparable; I'm proposing that it's in principle. That's why I'm using mathematics to illustrate the point.

That they're nonlinear and complex is sufficient for inseparability. Also, remember, I'm not claiming nonlinear systems are required for consciousness (as I've already said), just that the thought experiment narrows its scope to linear systems, and that neurons exhibit nonlinear behavior (just like the rest of the world does).

Q_Goest said:
The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts" and thus aren’t separable has been appealed to by a few scientists and philosophers, but that argument hasn’t been widely accepted.

And as I've already demonstrated, nonlinearity formally says only that "the whole doesn't need to be equal to the sum", and I've shown what exactly that means mathematically. It's quite obvious (from a function-variable standpoint) how the variables, being acted on by the function, are not separable because of the nonlinearity.

Linearity literally allows us to reduce problems to their components, and this falls exactly out of superposition (in the linear case, the function applied to each component, summed, is the same as the function applied to all of them together).
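A quick numerical illustration of that superposition point (my sketch; the matrix and the cubic term are arbitrary choices): the linear map decomposes over components, while a single nonlinear term breaks the decomposition.

```python
import numpy as np

# Superposition check: a linear map satisfies f(x + y) == f(x) + f(y),
# so the system's response reduces to the sum of per-component responses.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])   # arbitrary linear dynamics

def f_linear(x):    return A @ x
def f_nonlinear(x): return A @ x + np.array([0.0, x[0] ** 3])  # cubic coupling

x, y = np.array([1.0, 0.0]), np.array([2.0, 1.0])
print(np.allclose(f_linear(x + y), f_linear(x) + f_linear(y)))           # True
print(np.allclose(f_nonlinear(x + y), f_nonlinear(x) + f_nonlinear(y)))  # False
```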

Q_Goest said:
To show separability, we only need to show that within a given volume of space, and over some time interval dt (i.e. a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R within an n-body system that is identical with some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems. Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical, which is just one way to understand what the story is about and what the problem is that we need to resolve.

But this is not what's being argued. That one neuron in one system can be made to behave the same way in another system is not what's being contested. From the discussion between apeiron and me above, my claim is that a system of N coupled bodies (or neurons) is not the same as a system of N independent neurons all exhibiting the same behavior independently of each other (i.e., with no causal connection).

This seems to be reminding me of counterfactual states now. If an experimenter were to come in and probe one neuron to see how another acted, he wouldn't be able to find any causal relationship. The input and the output would appear to be completely random to him (and it may as well be, since the ICs can't predict the experimenter's motives - or else it would be a causal connection).

This would be a different result from if the experimenter ran the tests on the causally connected neurons. He would be able to find a consistent relationship between the input of neuron 1 and the output of neuron N.
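A toy sketch of that probe experiment (entirely my construction: a hypothetical linear chain of units standing in for neurons, with a recorded trace standing in for the impulse cartridges). Perturbing unit 1 propagates down the intact chain but changes nothing downstream in the replayed one, which is exactly the counterfactual difference described above:

```python
import numpy as np

def run_chain(n_units, n_steps, replay=None, poke_at=None):
    """Toy chain: unit i is driven by unit i-1's previous state (coupled
    case) or by a pre-recorded trace of it (the 'impulse cartridge' case).
    Unit 0 is driven by an external sine input."""
    state = np.zeros((n_steps, n_units))
    for t in range(1, n_steps):
        for i in range(n_units):
            if i == 0:
                drive = np.sin(0.3 * t)
            elif replay is not None:
                drive = replay[t - 1, i - 1]   # cartridge: replayed signal
            else:
                drive = state[t - 1, i - 1]    # live causal connection
            state[t, i] = 0.9 * state[t - 1, i] + 0.5 * drive
        if poke_at is not None and t == poke_at:
            state[t, 0] += 5.0                 # the experimenter's probe
    return state

baseline = run_chain(5, 200)                        # intact chain, no probe
coupled  = run_chain(5, 200, poke_at=100)           # intact chain, probed
replayed = run_chain(5, 200, replay=baseline, poke_at=100)  # cartridge-fed, probed

# Probing the intact chain visibly changes the last unit's history...
print(np.abs(coupled[:, -1] - baseline[:, -1]).max() > 1e-3)   # True
# ...probing the cartridge-fed chain changes nothing downstream of unit 0.
print(np.allclose(replayed[:, 1:], baseline[:, 1:]))           # True
```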
 
