What is the special signal problem and how does it challenge computationalism?

  • Thread starter: Q_Goest
  • Tags: Signal
AI Thread Summary
The "special signal problem" challenges computationalism by questioning whether a brain can maintain consciousness when its parts are disconnected yet still receive simulated signals. Arnold Zuboff's thought experiment illustrates this by depicting a brain in a vat that is cut into halves and then smaller sections, each receiving signals from an "impulse cartridge." Despite the signals being identical to those in a connected brain, the argument posits that without actual causal connections, phenomenal consciousness cannot be sustained. This leads to the conclusion that a unique "special signal" is necessary for consciousness, as mere duplication of signals does not suffice. The discussion emphasizes the need for a deeper understanding of how counterfactual alternatives are crucial to consciousness within any theory of mind.
  • #51
I can't solve the three-body problem, but I can take three bodies, let them move, and attach an accelerometer to one of them. I can then take one of the three bodies and accelerate it in exactly the same manner by pushing it with my hand (for precision's sake, probably with a robotic arm of course), and it would move in the same way. That's what's going on in the story.

To go back to Maxwell's demon, it would be like having a box, and a demon opening and closing the door to separate hot and cold molecules. Then I take a box with the exact same molecular positions/velocities, and open and close the door in the exact same manner as the demon was doing. I will get the same results as the demon does.

The most immediate argument against this process being possible is that it's impossible to exactly replicate what the demon does and to have the exact same box again, and that being slightly wrong will cause the chaotic system to fall apart, rendering my opening and closing of the door meaningless. However, in the case of the three-body problem, if I accelerate the body slightly differently, or put it in a slightly different starting position, the movement I make the body perform will still be very close to what it originally did (in fact, it may even be a better approximation than if I tried to reposition the three bodies and let them move again).
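To make the contrast concrete, here is a minimal numerical sketch (my own illustration, using the chaotic Lorenz system as a stand-in for the three bodies; the parameters and step size are arbitrary choices). Re-running the dynamics from a slightly wrong starting point diverges exponentially, while an open-loop replay of the recorded accelerations from the same wrong starting point never does worse than the initial offset:

```python
import numpy as np

def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # standard chaotic Lorenz vector field, standing in for 3-body gravity
    return np.array([sigma * (x[1] - x[0]),
                     x[0] * (rho - x[2]) - x[1],
                     x[0] * x[1] - beta * x[2]])

dt, steps = 0.001, 20000
ref = np.empty((steps, 3))
ref[0] = [1.0, 1.0, 1.0]
for i in range(steps - 1):                  # reference run: the "real" system
    ref[i + 1] = ref[i] + dt * lorenz(ref[i])

eps = 1e-6
free = ref[0] + eps                         # rerun the dynamics, slightly off
replay = ref[0] + eps                       # replay the *recorded* pushes instead
for i in range(steps - 1):
    free = free + dt * lorenz(free)         # closed loop: errors feed back
    replay = replay + dt * lorenz(ref[i])   # open loop: robotic-arm replay

print("free rerun error:   ", np.linalg.norm(free - ref[-1]))    # order 10+
print("forced replay error:", np.linalg.norm(replay - ref[-1]))  # stays ~eps
```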

So the question of how chaotic the system is has to be applied specifically to consciousness/the brain. Specifically, how the neurons fire in my brain is slightly different from how they fire in everyone else's, even given the same stimuli. So you can argue that, when we apply this to the brain in the story, the input to the neurons is slightly off from what it should be. But this alone shouldn't be enough to kill consciousness: if you take a magnet and wave it around your skull, in theory it should induce some impulses among your neurons that are different from what would occur just by experiencing the things around you, but I doubt anyone would argue this means you lack consciousness.

On the other hand, in this case every neuron is receiving an input which is slightly off, which means it fires slightly differently from expected, which in turn produces an input even further from what the brain would actually have created. After a couple of rounds of this, each neuron is firing essentially at random compared to what it would be doing if you were actually observing the stimuli that the scientists are attempting to re-create.

I see no reason why scientists should be able to perfectly re-create the input necessary for each neuron, just as it would be impossible to perfectly re-create the box that Maxwell's demon has taught me how to divide into hot and cold.
 
  • #52
Q_Goest said:
For the sake of this thread, I’d like to look only at computationalist theories that presume that consciousness emerges from the classical interaction of neurons...

Again, how does your bottom-up stance deal with top-down causality? Are you arguing for weak or strong emergence?

This is a typical neuroscience paper on how top-down attention modulates neural receptive fields.

http://www.bccn-goettingen.de/Members/tobias/press-releases/nn1748.pdf

The standard philosophical paradoxes arise because it is presumed that complex systems must be reducible to their "atoms", their component efficient causes. But a systems approach says that causality is holistic. It is not separable in this fashion. You cannot separate the bottom-up from the top-down as they arise in interaction.

Systems do of course appear to be composed of local component causes. But this local separability is in fact being caused by the top-down aspects of the system. Exactly as experiments show. A neuron is not a fixed autonomous switch (the computational analogy is wrong). It is responding dynamically to the urgings of the orchestra conductor.

This kind of systems logic can be implemented in software of course. For instance, the neural nets of Stephen Grossberg.

But complexity is different from either computationalism or non-linearity/chaos. And it cannot be reduced to either of them in an ontological sense (though you can do so as pragmatic approximations).
 
  • #53
Truth be told, biological systems are actually semiclassical. The chemistry inherent to them is (more or less) a summary of quantum mechanics: the way molecules form is based on their bond shape and energy, which comes down to the shape of electron valence shells, which in turn is a direct result of quantum numbers like spin and angular momentum.

The making and breaking of hydrogen bonds is also important in biological systems.

So there is a small issue with taking the physical system to be purely classical. Ontologically, it doesn't matter to a network approach. We just care how the neurons talk to each other, which is assumed to be classical electrodynamics. Of course, this ignores the ligand gating that leads to depolarization in the first place, but that's OK with me, since it can be modeled probabilistically; I can ignore the QM.

But... isn't this a problem for somebody asking epistemological questions? Isn't this just a convenient approximation for a faster solution?
 
  • #54
Pythagorean, apeiron: that classical mechanical systems differ in a fundamental way from quantum mechanical ones has long been argued in science and philosophy (both pro and con), just as it has for nonlinear systems. Further, I have no illusions that you and I can resolve the dispute. I’ll just say that from everything I’ve seen on the subject from the people who understand it best, classical mechanics is separable and quantum mechanics is not. From http://plato.stanford.edu/entries/physics-holism/ for example:
Classical physics presents no definitive examples of either physical property holism or nonseparability.

Physical Property Holism: There is some set of physical objects from a domain D subject only to type P processes, not all of whose qualitative intrinsic physical properties and relations supervene on qualitative intrinsic physical properties and relations in the supervenience basis of their basic physical parts (relative to D and P).

Nonseparability: Some physical process occupying a region R of spacetime is not supervenient upon an assignment of qualitative intrinsic physical properties at spacetime points in R.

The boiling of a kettle of water is an example of a more complex physical process. It consists in the increased kinetic energy of its constituent molecules permitting each to overcome the short range attractive forces which otherwise hold it in the liquid. It thus supervenes on the assignment, at each spacetime point on the trajectory of each molecule, of physical magnitudes to that molecule (such as its kinetic energy), as well as to the fields that give rise to the attractive force acting on the molecule at that point.

As an example of a process in Minkowski spacetime (the spacetime framework for Einstein's special theory of relativity), consider the propagation of an electromagnetic wave through empty space. This is supervenient upon an ascription of the electromagnetic field tensor at each point in the spacetime.

But it does not follow that classical processes like these are separable. For one may question whether an assignment of basic magnitudes at spacetime points amounts to or results from an assignment of qualitative intrinsic properties at those points. Take instantaneous velocity, for example: this is usually defined as the limit of average velocities over successively smaller temporal neighborhoods of that point. This provides a reason to deny that the instantaneous velocity of a particle at a point supervenes on qualitative intrinsic properties assigned at that point. Similar skeptical doubts can be raised about the intrinsic character of other “local” magnitudes such as the density of a fluid, the value of an electromagnetic field, or the metric and curvature of spacetime (see Butterfield (2006)).

One response to such doubts is to admit to a minor consequent violation of separability while introducing a weaker notion, namely

Weak Separability: Any physical process occupying spacetime region R supervenes upon an assignment of qualitative intrinsic physical properties at points of R and/or in arbitrarily small neighborhoods of those points.

Along with a correspondingly strengthened notion of

Strong Nonseparability: Some physical process occupying a region R of spacetime is not supervenient upon an assignment of qualitative intrinsic physical properties at points of R and/or in arbitrarily small neighborhoods of those points.

No holism need be involved in a process that is nonseparable, but not strongly so, as long as the basic parts of the objects involved in it are themselves taken to be associated with arbitrarily small neighborhoods rather than points.

Any physical process fully described by a local spacetime theory will be at least weakly separable. For such a theory proceeds by assigning geometric objects (such as vectors or tensors) at each point in spacetime to represent physical fields, and then requiring that these satisfy certain field equations. But processes fully described by theories of other forms will also be separable. These include many theories which assign magnitudes to particles at each point on their trajectories. Of familiar classical theories, it is only theories involving direct action between spatially separated particles which involve nonseparability in their description of the dynamical histories of individual particles. But such processes are weakly separable within spacetime regions that are large enough to include all sources of forces acting on these particles, so that the appearance of strong nonseparability may be attributed to a mistakenly narrow understanding of the spacetime region these processes actually occupy.
So to put it into simpler words, I take separability to mean that a physical process, such as these nonlinear processes, occupying a volume of space (what the SEP calls a spacetime region R) supervenes on, or is influenced by, the measurable (qualitative) physical properties within this volume of space (at points of R) and/or really, really close by.

I take Zuboff’s discussion about neurons and IC’s to be an example of separability. The neurons are subject to local, causal influences just like the computational elements within a desktop computer (ie: transistors), and not nonlocal ones. This goes along with weak emergence, not strong emergence. If this is true, then we have to question whether or not there is any difference between the connected neurons and the disconnected ones if we provide all the same measurable physical properties to the neuron. Clearly, this question has raised a lot of discussion because it’s important to understanding how consciousness can arise from the interaction of neurons.

Regarding the n-body problem, Kronz (2002) has claimed that classical systems can exhibit chaotic behavior only if their Hamiltonian is inseparable, an example of which would be an n-body system. But he states, “Because the direct sum is used in classical mechanics to define the states of a composite system in terms of its components, rather than the tensor product operation as in quantum mechanics, there are no nonseparable states in classical mechanics.” Where the borderline between a classical system and a quantum mechanical system lies is unclear, but that neurons operate at a "classical" level is largely agreed upon.
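As a small sketch of what Kronz's direct-sum vs tensor-product point amounts to (my own illustration, not his): a classical composite state always factors back into states of its parts, while a quantum composite generically does not:

```python
import numpy as np

# Classical: the composite state is just the part states listed side by side
# (a direct sum); it trivially decomposes back into its components.
part_a = np.array([1.0, 0.5])                  # e.g. position, momentum of A
part_b = np.array([2.0, -1.0])
composite = np.concatenate([part_a, part_b])   # dimensions add: 2 + 2 = 4

# Quantum: the composite lives in the tensor product (dimensions multiply).
up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
product = np.kron(up, down)                            # factorable by construction
bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

def factors(state):
    # A two-qubit pure state factors into part states iff its 2x2
    # coefficient matrix has a single nonzero singular value (Schmidt rank 1).
    s = np.linalg.svd(state.reshape(2, 2), compute_uv=False)
    return np.sum(s > 1e-12) == 1

print(factors(product))  # True:  behaves like a classical composite
print(factors(bell))     # False: no assignment of part states exists
```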

I understand that we can’t agree on this, and I’ll leave it at that. I’d be glad to listen to any arguments you may have, but I’d also suggest that they be backed up by papers that address the philosophical implications of any claims to nonseparability, strong emergence, etc. I’ve seen way too much hand waving with these claims, and references to papers that don’t address the basic issues. Yes, the science is very important, and the papers you've provided are valid for discussing the issues those papers are written around. What bothers me is the way a few scientific papers are being referenced when they don't explicitly support the views being posted here. Not all references; most are okay. But there are a few, such as this one:
This is a typical neuroscience paper on how top-down attention modulates neural receptive fields.

http://www.bccn-goettingen.de/Members/tobias/press-releases/nn1748.pdf
If you feel a paper supports your views, please post specific passages and state what they mean and how they support your view. Try and be as thorough as possible. Thanks. :smile:

Kronz, F. M. and J. T. Tiehen (2002), ‘Emergence and Quantum Mechanics’, Philosophy of Science 69(2): 324–347.
 
  • #55
Q_Goest said:
If you feel a paper supports your views, please post specific passages and state what they mean and how they support your view. Try and be as thorough as possible. Thanks. :smile:

In what way exactly are you suggesting the paper cited does not support my view? It shows empirically that global state constrains local actions.

Now you may choose to make your ontological argument down at the level of classical vs QM micro-processes. The separable vs non-separable distinction has some pragmatic meaning there. But where is the argument that says complexity is not something more than these varieties of simplicity? Why should we believe that a workable micro-scale distinction also holds at the level of complex systems?

It is an article of faith perhaps among reductionists that all macro-scale complexity is composed of micro-scale simplicity. But I was challenging you for a justification of that faith - when neuroscience so clearly tells us something else looks to be the case.
 
  • #56
I do not advocate holism, downward causation, or a "conductor". As I said earlier, the whole is not necessarily greater than the parts, it's just not necessarily equal either... and you have to be careful to define what you're talking about (what are you summing?)

I have to do more thinking and researching before replying in full, but I will say that, based on the definition above and a thread I found by you:
https://www.physicsforums.com/showthread.php?t=304933

It seems like your discussion is confined to Euclidean space, which is of course separable. That doesn't seem immediately relevant to me.
 
  • #57
Q_Goest said:
I’ve concluded that computationalism doesn’t necessarily rule out quantum mechanical theories of mind. (...) For the sake of this thread, I’d like to look only at computationalist theories that presume that consciousness emerges from the classical interaction of neurons, not any presumed quantum mechanical ones.
Glad to read that (...) I don't think it's the best move, though: if Zuboff’s argument were valid, it would be valid whichever version of computationalism best explains the data. This opens a way to assess its validity.

Suppose a brain, say an artificial or extra-terrestrial one, which is not separable because it is based on some macroscopic quantum rule. You'd agree that Zuboff's argument would not work and that at least this brain would be computable, wouldn't you?

Now, as this quantum brain would be computable, it turns out that you could construct a classical brain where each 'neuron' would simulate the quantum brain as a whole and act according to what the simulation says. This just follows from the definition.

What I'd like you to consider is that Zuboff's argument would be supposed to apply in the second case, but not in the first. As the two cases are in fact strictly equivalent, the conclusion should be the same in both; but it can't be.

Please consider that I'm not saying either possibility is likely. I'm just saying that if Zuboff's argument fails on one of two equivalent cases, then it lacks logical consistency.
 
  • #58
I forgot to say that one way to at least put pragmatic boundaries on Zuboff-style speculation is the Margolus–Levitin theorem - http://en.wikipedia.org/wiki/Margolus–Levitin_theorem

His ICs would need to pack a lot of information to do the job he requires - so much information that the packing density would be constrained by a holographic event horizon. His IC would turn into a black hole before it could do its job.

Event horizons are of course a modern physical example of precisely the global constraints that underlie top-down causality, which I am always mentioning.

But anyway, it is clearly unphysical to base any argument on infinite information density. An exact answer can be given on where this ultimate constraint kicks in. The argument then becomes about whether it is plausible that an IC of the kind required could come in under this holographic budget. Which cannot be answered via a thought experiment but must now be informed by some proper evaluation of just how much information would in fact be needed.
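For a sense of scale, here is a back-of-envelope sketch of the bound (the 1 kg cartridge mass is an arbitrary choice of mine, for illustration only):

```python
import math

hbar = 1.054571817e-34         # reduced Planck constant, J*s
c = 2.99792458e8               # speed of light, m/s

def max_ops_per_second(energy_joules):
    # Margolus-Levitin: a system of average energy E needs at least
    # pi*hbar/(2E) seconds to reach an orthogonal (distinguishable) state.
    return 2.0 * energy_joules / (math.pi * hbar)

E = 1.0 * c**2                 # 1 kg of cartridge, fully converted to energy
print(f"{max_ops_per_second(E):.2e} ops/s")   # ~5.4e50: a hard ceiling
```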

The non-linear story becomes particularly relevant now, as a continuous process would clearly need infinite information. One does not expect neural activity to be completely non-linear (it is sort of digital to some degree, that seems a fair assumption based on neuroscience). But still, neural activity is likely to be tilted far enough towards a non-linear basis for the information constraint to become an issue rather quickly in the argument.

I note Bishop, in the cited paper, is alert to arguments about the non-computability issue as he references the shadowing theorem in chaos theory. A digital computation has to round off a non-linear calculation at every step, so in fact changing the initial conditions with each iteration. Simulations can get away with this because they are illustrative approximations and not actual non-linear calculations (the chaotic paths look right enough, even if they are not formally right).
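A quick sketch of that round-off point (my own illustration): running the same logistic-map iteration in single and double precision effectively resets the initial conditions at every step, and the two "identical" computations soon disagree completely:

```python
import numpy as np

r = 4.0                      # fully chaotic regime of the logistic map
x32 = np.float32(0.4)
x64 = np.float64(0.4)
for i in range(1, 61):
    x32 = np.float32(r) * x32 * (np.float32(1) - x32)   # rounded each step
    x64 = r * x64 * (1.0 - x64)
    if i % 15 == 0:
        print(f"step {i:2d}:  float32={float(x32):.6f}  float64={x64:.6f}")
# The shadowing theorem guarantees some exact orbit stays near the rounded
# one, but it is the orbit of *different* initial conditions than we specified.
```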

I note Bishop's general communication/interaction approach to "brain processing" also is based on an enactive, semiotic approach such as I have advocated here. Indeed, Bishop even cites my own writings with some approval, which is nice :smile:

Anyway, the message I take from Zuboff's parable is the usual. Reductionist views of complexity run into paradox because they have no way of defining their boundary constraints. The best they can say is that "constraints appear to emerge". But that is not the same as modelling them. And because there is a fundamental lack of principles here, philosophical thought experiments have a habit of presuming chains of effective causality can proceed freely from the local to the infinite. With no way of drawing the line, no line gets drawn.

The Margolus–Levitin theorem is at least one hard constraint on unbounded computationalism that has been agreed.

It is not actually very useful for doing neuroscience of course. But it shows hard boundaries do exist and we need to get used to modelling them to avoid the unbound speculation that brings philosophy into disrepute.

Again I recommend reading Robert Rosen's Essays on Life Itself for a source of multiple arguments against philosophic and scientific reductionism.
 
  • #59
Separability

I'll expand a bit on what I said here:

It seems like your discussion is confined to Euclidean space, which is of course separable. That doesn't seem immediately relevant to me.

When using the differential-equation approach (i.e. Hodgkin–Huxley) we don't model volumes of space. Each four-dimensional system represents a neuron, but we add a term to each of the neurons so that it depends on its neighbor (based on the way it does in nature, either synaptically or diffusively via gap junctions). So you can imagine a ring of neurons (a common basic network topology for looking at simple characteristics of a system), and perhaps label them: n1, n2, n3, n4...
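For concreteness, here is a minimal sketch of such a ring (my own illustration; for brevity it uses the two-variable FitzHugh–Nagumo reduction rather than the full four-variable Hodgkin–Huxley system, and the coupling strength and drive current are arbitrary). Note that no spatial coordinate appears anywhere; only the coupling graph does:

```python
import numpy as np

N = 6                            # neurons n1..n6 arranged in a ring
g = 0.5                          # diffusive (gap-junction style) coupling
dt, steps = 0.05, 4000
rng = np.random.default_rng(0)

v = rng.uniform(-1.0, 1.0, N)    # membrane-potential-like variable
w = np.zeros(N)                  # recovery variable
I_ext = 0.5                      # constant external drive

for _ in range(steps):
    neighbors = np.roll(v, 1) + np.roll(v, -1)
    coupling = g * (neighbors - 2.0 * v)    # depends on neighbors, not distance
    dv = v - v**3 / 3.0 - w + I_ext + coupling
    dw = 0.08 * (v + 0.7 - 0.8 * w)
    v, w = v + dt * dv, w + dt * dw

print(np.round(v, 3))            # the ring settles into a collective pattern
```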

But both in nature and in this model, space is irrelevant. n1 and n3 could be closer together than n1 and n2; it's irrelevant. Consider nature. A common example of neural processing is five neurons whose axons synapse on the dendrite of a single neuron (the incident neuron).

The lengths of the five different axons are all different, but the neural system (not just the neurons now; the glia are involved in this, as are other cell interchanges that only a molecular biologist could explain in satisfactory detail) doesn't care. All it cares about is that, when necessary, the five axon signals all arrive at the incident neuron in such a way that they can spatially sum to produce a significant result.

For instance (a simplified example from the Handbook of Brain Theory and Neural Networks): five photoreceptor neurons incident on a neuron. When an object passes left to right in front of you, the different lengths of the axons allow the signals to spatially sum on the incident neuron, firing it (and suddenly you consciously detect "something's moving to the right!"). If you think this is strange, you may want to read about blindsight, where people can't actually see a picture out of their eyes, but the visual processing that senses motion is still working.

The system "self organizes" such that the length's of the axons are irrelevant. Consequently, an object moving in the opposite direction will now never sum on the incident neuron. If all the lengths were always equal, then there would be a problem of ambiguity (the incident neuron would fire whether the object was moving left to right or right to left).

Anyway, I agree that we are done with the separability discussion. I've presented my arguments in full, though from an effective standpoint. You (Q_Goest) are being more textual about it, and I don't know how much that impacts the actual efficacy, because space volumes are not explicit in the neural systems I'm familiar with. For example:

officeshredder said:
I can't solve the three-body problem, but I can take three bodies, let them move, and attach an accelerometer to one of them. I can then take one of the three bodies and accelerate it in exactly the same manner by pushing it with my hand (for precision's sake, probably with a robotic arm of course), and it would move in the same way. That's what's going on in the story.

I agree that you can do this. Make some volume of space behave as it did in the system. But that's not to say you're studying the system anymore. For instance, in officeshredder's example above, he's no longer studying gravity; gravity isn't physically part of the system anymore. He's just studying the effects of gravity in terms of the behavior produced. It's a postdiction, not a prediction. This is the same way I feel about digital perspectives of the brain. We can say "yes, a neuron fired" or "no, a neuron didn't fire" after looking at an experiment and assign 1's and 0's to it, sure.

But the dynamical biophysical models tell you (using continuity) when a particular neuron in a system will fire based on the mechanisms underlying the firing.

Maxwellian Demon

officeshredder, apeiron, and I have all mentioned this now, and I didn't want it to distract from the separability discussion, but now that we're essentially through with that, I think it's time to start looking at it.

You would need a Maxwellian demon to perform the thought experiment (actually, you have one: it's the scientist), which means that you're investing energy into the system (the Maxwellian demon had to learn the information, then convey and apply it to a physical system; thus the replicated system is a different system from the original).

From an information-theory perspective, for instance, what the ICs are doing is actually quite important. Infinite information is equivalent to infinite energy in this view, and that's especially important for such highly sensitive systems (small errors can lead to qualitatively different behavior). And, as you may know, in classical systems there is an infinite number of points between any two finite points. If you can't describe what happens at every one of those points (which you can't... unless you're some kind of Grand Master Maxwellian Demon: a supernatural creature), then you don't really have control of the system.
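The information-energy link gestured at here can be made concrete with Landauer's principle: learning and then resetting each bit of record costs the demon at least kT ln 2 of dissipated energy. A sketch (the record size below is a hypothetical placeholder, not an estimate from this thread):

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 310.0              # roughly body temperature, K

def landauer_cost_joules(bits, temperature=T):
    # minimum dissipation to erase `bits` bits at temperature T
    return bits * k_B * temperature * math.log(2)

bits_recorded = 1e15   # hypothetical length of one neuron's replay record
print(f"{landauer_cost_joules(bits_recorded):.3e} J minimum")
# The cost grows without bound with the record length; and an exactly
# described continuous trajectory has no finite record length at all.
```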
 
  • #60
Pythagorean said:
But both in nature and in this model, space is irrelevant. n1 and n3 could be closer together than n1 and n2; it's irrelevant. Consider nature. A common example of neural processing is five neurons whose axons synapse on the dendrite of a single neuron (the incident neuron).

The lengths of the five different axons are all different, but the neural system (not just the neurons now; the glia are involved in this, as are other cell interchanges that only a molecular biologist could explain in satisfactory detail) doesn't care. All it cares about is that, when necessary, the five axon signals all arrive at the incident neuron in such a way that they can spatially sum to produce a significant result.

This seems a good example to focus on, but I am confused as to what you are actually arguing.

A computationalist building an IC would probably say that the differing lengths/transmission times of the input axons would be one of those features he would be able to replicate (given unlimited resources).

Are you agreeing this in principle possible, or arguing against it?

Perhaps you are saying yes it is possible for a single neuron, but it would not be possible for a functioning network of many neurons (for some non-linear reason?).
 
  • #61
apeiron said:
This seems a good example to focus on, but I am confused as to what you are actually arguing.

A computationalist building an IC would probably say that the differing lengths/transmission times of the input axons would be one of those features he would be able to replicate (given unlimited resources).

Are you agreeing this in principle possible, or arguing against it?

Perhaps you are saying yes it is possible for a single neuron, but it would not be possible for a functioning network of many neurons (for some non-linear reason?).

I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant.

Theoretically, I'm referring strictly to biophysical models of the Hodgkin–Huxley type, since they've been accepted by the neurobiology community for nearly 60 years now.

The motion of each particle isn't traced through Euclidean space as in a planetary model. The whole neuron is described by four differential equations (none of which depend on space) that model the interaction between ion-channel activation and membrane potential.
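For reference, the point-neuron (space-clamped) form of those equations is the standard set

$$C_m \frac{dV}{dt} = I_{ext} - \bar{g}_{Na}\, m^3 h\, (V - E_{Na}) - \bar{g}_K\, n^4 (V - E_K) - \bar{g}_L (V - E_L)$$

$$\frac{dx}{dt} = \alpha_x(V)(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}$$

four coupled ODEs in V and the gating variables m, h, n, with no spatial derivative anywhere.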

So what's being modeled is information transfer. There must be some Euclidean aspect, though, since the electrochemical theory behind the model employs the concept of diffusion, but I'm still having trouble seeing how the separability of volumes of Euclidean space implies that you can separate a system into its components, make all the components behave as they did in the system, and then still consider it a system.

As an ultimate proof, though, if we keep separating a classical system, we eventually get a quantum system. So there's at least some contradiction with the hard statement "classical systems are separable".

What's especially interesting about this line of reasoning is that it brings us back to the "explanatory gap" between quantum physics and classical physics. And, interestingly enough, dynamical systems theory has a foothold in the subject (i.e. quantum chaos).
 
  • #62
Pythagorean said:
I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant...So what's being modeled is information transfer.

But this does not really get to the nub of the argument then.

The hard problem depends on naive "separability". And if consciousness is just about the existence of information (in a static pattern), then Putnam's argument that any rock implements every finite state automaton may in principle go through. You are hinting that there is something more to be considered in talking about "information transfer" - states/patterns must change in some systematic fashion. But what is the nature of that essential change? And is it separable or non-separable?

Separable is a possibly confusing term here, as it is properly a quantum-states distinction. But we can keep using it, as it is synonymous with digital (vs analog), discrete (vs continuous), reductionist (vs holistic), atomistic (vs, well, again holistic) in this discussion.

Basically the hard problem arises when science can appear to separate the brain into its computational atoms, its digital components, and not lose anything essential in terms of causality. At some point a conscious human becomes a non-conscious heap of chemistry or a silicon zombie, or whatever. We are left with just the material, just the physical substance, and no causal account of the higher level "property".

Now it could be argued that there is indeed a hard boundary on separability in QM. But unless you are arguing consciousness is essentially a QM phenomenon - for which there is no good scientific backing - then this boundary seems irrelevant to philosophic thought experiments.

It could also be argued that non-linearity is another kind of hard boundary on separability - which is where I thought you were going with the cite of the three-body problem. Again, this is probably a definite boundary on separability (if non-linearity is being equated with an essential continuity of nature). Chaos theory would seem to say we cannot in practice pick out discrete instances in spacetime (measure exact initial conditions).

However I personally don't think non-linearity is a killer argument here. First, because chaos theory really models reality as it exists between the bounding extremes of the continuous and the discrete (if you have whorls of turbulence erupting, they are in some sense discrete structures in a continuous flow). And second because brain components like synapses, neurons and cortical circuits appear to be in some definite way structures with computational aspects.

For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, can do what they can do, because of some non-linear magic.

(Then I mentioned a third possible hard boundary on speculation - the relativistic issue of information density. Which like the QM separability bound, is a physically certain boundary, but again arguably irrelevant because such constraints only kick in at physically extreme scales).

Yet a further constraint on naive separability could be the argument that evolution is efficient and so the human brain (the most complex arrangement of matter in the known universe) is most likely to be close to the actual physical limits of complexity. Whatever it is that brains do to be conscious, we can probably expect that the way brains do it can't be beat. This makes it far less plausible that a technologist can come in and freely start stretching out the wiring connections, simplifying the paths, speeding up the transmission rates.

It might be argued by the likes of Zuboff that the brain as a natural machine is constrained by energetics - it is optimal, but optimised along a trade-off between consciousness production and metabolic cost. So a technologist with unlimited energy to make it all happen, could unpack a brain into a very different set of components. But the argument that evolution is efficient at optimisation, and so brains would resist the kind of naive separation proposed just on the grounds that there must be something significant about its physical parameters (its particular transmission times, its particular connection patterns, its particular molecular turnover, etc), must be at least dealt with in a thought experiment.

So we have two hard (but weak because they are distant in scale) constraints on speculation - QM and relativistic bounds.

We have a possible but unlikely constraint - non-linearity.

And we have the evolution optimisation constraint - which is probably weak here because it is easy enough I guess to imagine separating the energetic cost of replicating brain function.

Which all comes back to my original line of attack on the notion of separability. The systems view - which postulates a complex Aristotelean causality based on the interaction of bottom-up construction and top-down constraints - says reality is always dichotomised (divided into a local~global causality as just described) but never actually separated (reducible to either/or local causes, or global causes).

So what is going on here is that brains have both form and substance. They have their global organisation and their local components. There is indeed a kind of duality, but it is not the broken Platonic or Cartesian duality which leads to hard problems or waffling about emergence and supervenience. Instead, there is a duality of limits. You have a separation towards two different kinds of thing (bottom-up and top-down causation), but not an actual separation that divides reality. Just an asymptotic approach that produces two different "kinds" and in turn allows for the emergence of complexity in the synergistic interaction that results.

Translated into neuroscience, we should expect that the brain looks digital, computational, componential at the local level. This is what it is trying to be as this is what bottom-up, constructive, additive, causality looks like. But then we should also equally expect to be able to find the complementary global aspect to the brain which shows that it is an unbroken system. It must also be a system that can organise its own boundary constraints, its own global states of downward acting causality.

Which, as any neuroscientist knows, is what we see. Attention, anticipation, etc. Components have a definite local identity only because they are embedded in enactive contexts. Experiments at many levels of brain function have shown this. It is now a basic presumption of neuroscientific modelling (as shown for example, picking the cutting edge, Friston's Bayesian brain).

So if the hard problem arises because of a belief in physical separability, there are a number of arguments to be considered. But the best argument IMHO is the systems one. And note this explains both why consciousness and brain function are not ontically separated AND why they also appear as separated as possible.

Running the argument once more, local and global causality are in principle not separate (otherwise how do they interact?). Yet they are separated (towards physically-optimal limits - otherwise how would they be distinctive as directions of causality?).

Of course, this means that rather than selling an argument just about brains, you are trying to sell an argument about reality in toto.

But then if your thinking about the nature of things keeps leading you to the impasse of the hard problem, and its unpalatable ontic escape clauses like panpsychism, well you know that, Houston, you have a problem.

The hard problem should tell people that reductionism really is broke. If you accept the premise of separable causality, you end up looking over the cliff. So turn round and look at what got left behind. You have to rediscover the larger model of causality that was there when philosophy first got started.
 
  • #63
apeiron,

separability

That is originally where I was going with the n-body problem: I was previously using a casual definition of separability (similar to how you defined it just now), but Q_Goest introduced a very rigorous definition of separability, and my point was that it seems too rigorous to be relevant to what we're talking about. His definition, as I was saying, seems to pertain to Euclidean space, and that's not explicitly realized in the neural models I work with.

But yes, that aside, the definition of separable I was using before Q_Goest introduced this more rigorous definition was more to the point: "can you separate neurons and call it the same physical system?"

nonlinearity

I don't mean to imply at all that nonlinearity is a sufficient condition for consciousness. It's possibly necessary, but doubtfully sufficient.

The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describe our physical reality.

For the purpose of philosophical thought arguments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, can do what they can do, because of some non-linear magic.

There's no magic here, though. Nonlinearity is just unintuitive. It may appear as magic to someone who fails to understand the underlying mathematical principles, but it's really not. Everything "adds up" once you do the rigorous mathematical work.

And... the two views you presented here are not mutually exclusive. In fact (as I've already mentioned) a computer is realistically nonlinear itself. We simply accept slop, use filters, and set signal/noise ratios high enough that we can ignore the nonlinearities as "noise". So 1 is represented by ~5v and 0 is represented by ~0v, but 4.9 and .005 volts will work for a "1" and a "0" as well. We designed the system to be easy to interpret, regardless of the irregularity in the signal (as long as the signal is sufficiently larger than those irregularities).
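A sketch of that noise-margin convention (the voltages and noise level are arbitrary choices of mine): the underlying values are messy analog quantities, but the read-out collapses them to clean bits so long as the signal dwarfs the slop:

```python
import numpy as np

rng = np.random.default_rng(0)
intended = np.array([1, 0, 1, 1, 0, 0, 1, 0])
nominal = np.where(intended == 1, 5.0, 0.0)               # ~5 V is 1, ~0 V is 0
measured = nominal + rng.normal(0.0, 0.15, nominal.size)  # analog irregularity

read_back = (measured > 2.5).astype(int)   # one threshold hides the physics
print(np.round(measured, 2))               # e.g. [5.02 0.1 4.9 ...]
print(read_back)                           # exactly the intended bits
print("error-free:", np.array_equal(read_back, intended))
```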

So we actually ignore a lot of the "interesting" physics going on in a computer because we don't want "interesting"; we want predictable, because computers are extensions of our consciousness (we are their Maxwellian Demons).
 
  • #64
Pythagorean said:
The reason I bring up nonlinearity is that it appears to me that people attacking physicalism do so on the basis of linear physical systems, excluding the larger, more general class of physical systems that better describe our physical reality.

OK, agreed, and likewise this is why I point to the modelling of complexity by theoretical biologists such as Howard Pattee. Non-linearity is simple complexity; then there is the analysis of systems complexity: the control of rate-dependent processes (ie: self-organising, dynamical) by rate-independent information (such as genes, words, neural connection patterns).
 
  • #65
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron, so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain, is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts", and thus aren't separable, has been appealed to by a few scientists and philosophers, but that argument hasn’t been widely accepted. Further, it changes the present theory of mind. It says that digital computers can’t be conscious, for starters, since it is obviously very easy to duplicate the physical changes in state that any portion of a computer undergoes. So now we need to say that some computational systems can be conscious and other computational systems can’t be, regardless of whether or not they are functionally the same.

If we’d like to use the argument that nonlinear systems are not separable, but we find just one nonlinear system that is in fact separable, then we have an even more difficult job of finding a reason why one nonlinear system can be conscious but another cannot. So let’s look at the n-body problem for a moment and contemplate whether or not it might be separable.

To show separability, we only need to show that within a given volume of space, and over some time interval dt (ie: a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R1 within an n-body system that is identical to some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems.

Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical, which is just one way to understand what the story is about and what the problem is that we need to resolve.
 
  • #66
apeiron said:
The standard philosophical paradoxes arise because it is presumed that complex systems must be reducible to their "atoms", their component efficient causes. But a systems approach says that causality is holistic. It is not separable in this fashion. You cannot separate the bottom-up from the top-down as they arise in interaction.

Systems do of course appear to be composed of local component causes. But this local separability is in fact being caused by the top-down aspects of the system.


All this bottom-up and top-down talk, in as much as it's true, bears a striking similarity to Life (aliveness, being alive) if applied to the universe. It may even shed light on the "special signal problem".
 
  • #67
Q_Goest said:
I don’t think anyone is disagreeing that in practice, trying to duplicate the causal influences acting on a single neuron, so that the neuron undergoes the same physical changes in state while removed from the brain that it undergoes while in the brain, is going to be virtually impossible. Certainly we can claim it is impossible with our present technology. But that’s not the point of separability. Anyone arguing that in practice the chaotic nature of neurons prevents the separability of the brain has already failed to put forth a legitimate argument. One needs to put forth an argument that shows that in principle, neurons are not separable from the system they are in. What principle is it that can be used to show neurons are not separable? Appealing to the practical difficulty of creating these duplicate physical states isn’t a valid argument.

Why is it the presumption here that neurons are separable rather than the converse?

But anyway, I have already given two "in principle" limits in QM and relativistic event horizons. Neurons would not be separable beyond these limits (or do you disagree?).

Then there is the "middle ground" attack (as QM and black holes clearly kick in only at the opposing extremes of physical scale).

And here I would suggest that networks of neurons, if ruled by global dynamics such as oscillatory coherence (an experimentally demonstrated correlate of consciousness), can be presumed to be NP-complete.

This is the kind of argument indeed used within theoretical biology to show biology is non-computable - for example, the protein folding problem. You can know the exact sequence of bases, yet not compute the final global relaxation minima.

Here is an excerpt from Pattee's paper, "Causation, Control, and the Evolution of Complexity", which explains how this is relevant (and how complexity is not just non-linearity).

The issue then is how useful is the concept of downward causation in the formation and evolution of complex systems. My conclusion would be that downward causation is useful insofar as it identifies the controllable observables of a system or suggests a new model of the system that is predictive. In what types of models are these conditions met?

One extreme model is natural selection. It might be considered the most complex case of downward causation since it is unlimited in its potential temporal span and affects every structural level of the organism as well as social populations. Similarly, the concept of fitness is a holistic concept that is not generally decomposable into simpler components. Because of the open-ended complexity of natural selection we know very little about how to control evolution, and consequently in this case the concept of downward causation does not add much to the explanatory power of evolution theory.

At the other extreme are simple statistical physics models. The n-body problem and certainly collective phenomena, such as phase transitions, are cases where the behavior of individual parts can be seen as resulting from the statistical behavior of the whole, but here again the concept of downward causation does not add to the model's ability to control or explain.

A better case might be made for downward causation at the level of organism development. Here, the semiotic genetic control can be viewed as upward causation, while the dynamics of organism growth controlling the expression of the genes can be viewed as downward causation. Present models of developmental control involve many variables, and there is clearly a disagreement among experts over how much control is semiotic or genetic and how much is intrinsic dynamics.

The best understood case of an essential relation of upward and downward causation is what I have called semantic closure (e.g., Pattee, 1995). It is an extension of von Neumann's logic of description and construction for open-ended evolution. Semantic closure is both physical and logical, and it is an apparently irreducible closure, which is why the origin of life is such a difficult problem. It is exhibited by the well-known genotype-phenotype mapping of description to construction that we know empirically is the way evolution works. It requires the gene to describe the sequence of parts forming enzymes, and that description, in turn, requires the enzymes to read the description.

This is understood at the logical and functional level, but looked at in detail this is not a simple process. Both the folding dynamics of the polypeptide string and specific catalytic dynamics of the enzyme are computationally intractable at the microscopic level. The folding process is crucial. It transforms a semiotic string into a highly parallel dynamic control. In its simplest logical form, the parts represented by symbols (codons) are, in part, controlling the construction of the whole (enzymes), but the whole is, in part, controlling the identification of the parts (translation) and the construction itself (protein synthesis).

Again, one still finds controversies over whether upward semiotic or downward dynamic control is more important, and which came first at the origin of life. There are extreme positions. One extreme sees the universe as a dynamics and the other extreme sees the universe as a computer. This is not only a useless argument, but it obscures the essential message.

The message is that life and the evolution of complex systems is based on the semantic closure of semiotic and dynamic controls. Semiotic controls are most often perceived as discrete, local, and rate-independent. Dynamic controls are most often perceived as continuous, distributed and rate-dependent. But because there exists a necessary mapping between these complementary models it is all too easy to focus on one side or the other of the map and miss the irreducible complementarity.
 
  • #68
Q_Goest said:
I think the Maxwellian demon analogy is being misused here. The practical difficulty of duplicating a chaotic system can’t really be compared to Maxwell’s demon.

The Maxwellian argument and the nonlinear argument are two different lines of reasoning.

I'm not saying that it's only in practice that complex systems are inseparable, I'm proposing that it's in principle. That's why I'm using mathematics to illustrate the point.

That they're nonlinear and complex is sufficient. And also, remember, I'm not claiming nonlinear systems are required for consciousness (as I've already said), just that the thought experiment narrows its scope to linear systems, and that neurons exhibit nonlinear behavior (just like the rest of the world does).

The concept that nonlinear phenomena are "those for which the whole is greater than the sum of its parts", and thus aren't separable, has been appealed to by a few scientists and philosophers, but that argument hasn't been widely accepted.

And as I've already demonstrated, nonlinearity formally says only that "the whole doesn't need to be equal to the sum", and I've shown what exactly that means mathematically. It's quite obvious (from a function/variable standpoint) how the variables, being acted on by the function, are not separable because of the nonlinearity.

Linearity literally allows us to reduce problems to their components; this falls exactly out of superposition (the function performed on each part is the same as the function performed on the whole in the linear case).
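Stated as executable math (my own sketch): a linear map satisfies f(x + y) = f(x) + f(y), so a composite input can be analyzed one part at a time; add a single nonlinear term and that decomposition fails:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [0.5, 3.0]])
linear = lambda x: A @ x
nonlinear = lambda x: A @ x + 0.3 * x**2        # one quadratic term suffices

x = np.array([1.0, 2.0])
y = np.array([-0.5, 1.5])

print(np.allclose(linear(x + y), linear(x) + linear(y)))          # True
print(np.allclose(nonlinear(x + y), nonlinear(x) + nonlinear(y))) # False
```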

To show separability, we only need to show that within a given volume of space, and over some time interval dt (ie: a spacetime region R), the gravitational field within that volume of space is identical to the gravitational field in another volume of space within a different n-body system. That is, if we find some spacetime region R1 within an n-body system that is identical to some other spacetime region R(identical), then we’ve shown separability in the sense that Zuboff is suggesting. These two regions of space undergo identical physical state changes over that time interval dt, despite the two being in different systems. Note here that Zuboff’s notion of separability is not just that physical processes supervene only on the measurable physical properties within some spacetime region R, but also that those physical processes can be duplicated within an identical spacetime region R(identical) without having R(identical) be part of the same overall physical process. In other words, the neuron in one system can be duplicated in another system. We might imagine those two neurons going through identical changes in state because the causal influences on them are identical, which is just one way to understand what the story is about and what the problem is that we need to resolve.

But this is not what's being argued. That one neuron in one system can be made to behave the same way in another system is not what's being contested. From the discussion between apeiron and me above, my claim is that a system of N coupled bodies (or neurons) is not the same as a system of N independent neurons, all exhibiting the same behavior independently of each other (i.e., with no causal connection).

This reminds me of counterfactual states now. If an experimenter were to come in and probe one neuron to see how another acted, he wouldn't be able to find any causal relationship. The input and the output would appear to be completely random to him (and they may as well be, since the ICs can't predict the experimenter's motives, or else there would be a causal connection).

This would be a different result from running the tests on the causally connected neurons, where he would be able to find a consistent relationship between the input of neuron 1 and the output of neuron N.
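Here is a toy sketch of that counterfactual test (my own construction, using caricature threshold "neurons"; nothing here is from Zuboff). In the coupled chain, the experimenter's probe at neuron 1 shows up at neuron N; in the cartridge-replayed chain it does not, so no causal relationship can be found:

```python
import numpy as np

N, steps = 5, 40
fire = lambda drive: (drive > 0.5).astype(float)   # toy threshold neuron

def run_coupled(probe=0.0):
    state = np.zeros(N)
    for _ in range(steps):
        drive = np.roll(state, 1)      # each neuron hears its upstream neighbor
        drive[0] = 1.0 + probe         # external stimulus plus the probe
        state = fire(drive)
    return state[-1]                   # what neuron N reports

def run_replayed(probe=0.0):
    # record every neuron's input during an unprobed coupled run...
    state, recorded = np.zeros(N), []
    for _ in range(steps):
        drive = np.roll(state, 1)
        drive[0] = 1.0
        recorded.append(drive.copy())
        state = fire(drive)
    # ...then replay those "impulse cartridge" inputs, probe and all
    for drive in recorded:
        drive = drive.copy()
        drive[0] += probe              # the probe only reaches neuron 1
        state = fire(drive)
    return state[-1]

print(run_coupled(0.0), run_coupled(-1.0))    # 1.0 0.0 -> output tracks the probe
print(run_replayed(0.0), run_replayed(-1.0))  # 1.0 1.0 -> probe changes nothing
```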
 