Pythagorean said:
I don't mean to say it's impossible at all. What I mean to say is that the space wouldn't matter when you modeled it (it would be an unnecessary variable); only the transmission times are relevant...So what's being modeled is information transfer.
But this does not really get to the nub of the argument then.
The hard problem depends on naive "separability". And if consciousness is just about the existence of information (in a static pattern), then Putnam's argument that any rock implements every finite state automaton may in principle go through. You are hinting that there is something more to be considered in talking about "information transfer" - states/patterns must change in some systematic fashion. But what is the nature of that essential change? And is it separable or non-separable?
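For what it's worth, the trick behind Putnam's claim can be sketched in a few lines. This is my own toy illustration, not Putnam's formal construction; it only assumes the rock passes through distinct physical states over time, so a state-to-state mapping can always be gerrymandered after the fact:

```python
# Toy illustration of the mapping trick behind "any rock implements
# every finite state automaton". Assume the rock occupies a distinct
# physical (micro)state at each tick; then for ANY automaton run we
# can define a physical-state -> automaton-state map retrospectively.

def fsa_run(transitions, start, inputs):
    """Run a finite state automaton and return its state trace."""
    state, trace = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]
        trace.append(state)
    return trace

# A 2-state automaton that toggles on input 1.
transitions = {("A", 0): "A", ("A", 1): "B",
               ("B", 0): "B", ("B", 1): "A"}
automaton_trace = fsa_run(transitions, "A", [1, 1, 0, 1])

# The "rock": a sequence of distinct physical states over the same ticks.
rock_states = [f"rock_microstate_{t}" for t in range(len(automaton_trace))]

# Putnam-style mapping: pair each physical state with the automaton
# state it supposedly "realises". Because the physical states are all
# distinct, such a map always exists, whatever the automaton does.
mapping = dict(zip(rock_states, automaton_trace))
assert [mapping[s] for s in rock_states] == automaton_trace
```

The point is that nothing about the rock constrains the automaton: the same construction works for any automaton trace of the same length, which is why mere static pattern-correspondence looks too cheap a criterion for implementation.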
Separable is a possibly confusing term here, as it is properly a quantum-states distinction. But we can keep using it, as in this discussion it is synonymous with digital (vs analog), discrete (vs continuous), reductionist (vs holistic), atomistic (vs, well, again holistic).
Basically the hard problem arises when science can appear to separate the brain into its computational atoms, its digital components, and not lose anything essential in terms of causality. At some point a conscious human becomes a non-conscious heap of chemistry or a silicon zombie, or whatever. We are left with just the material, just the physical substance, and no causal account of the higher level "property".
Now it could be argued that there is indeed a hard boundary on separability in QM. But unless you are arguing consciousness is essentially a QM phenomenon - for which there is no good scientific backing - then this boundary seems irrelevant to philosophic thought experiments.
It could also be argued that non-linearity is another kind of hard boundary on separability - which is where I thought you were going with your cite of the three-body problem. Again, this is probably a definite boundary on separability (if non-linearity is being equated with an essential continuity of nature). Chaos theory would seem to say we cannot in practice pick out discrete instances in spacetime (measure exact initial conditions).
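That in-practice limit is easy to exhibit. Here is a minimal sketch (my own illustration, using the logistic map as a stand-in for the three-body problem, since it shows the same sensitive dependence in a few lines):

```python
# Sensitive dependence on initial conditions, via the logistic map in
# its chaotic regime (r = 4). Two trajectories starting 1e-10 apart
# diverge by many orders of magnitude within a few dozen iterations.

def logistic_trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.3, 50)
b = logistic_trajectory(0.3 + 1e-10, 50)

gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap: {gap[0]:.1e}, largest gap over 50 steps: {max(gap):.3f}")
assert max(gap) > 0.01  # a 1e-10 difference amplified past measurability
```

So "measuring exact initial conditions" would demand unbounded precision - which is exactly the sense in which the continuous and the discrete pictures come apart in practice.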
However I personally don't think non-linearity is a killer argument here. First, because chaos theory really models reality as it exists between the bounding extremes of the continuous and the discrete (if you have whorls of turbulence erupting, they are in some sense discrete structures in a continuous flow). And second because brain components like synapses, neurons and cortical circuits appear to be in some definite way structures with computational aspects.
For the purposes of philosophical thought experiments - based on what is at least broadly agreed and widely known about brains - it remains more plausible that the components of the brain are structures "striving to be digital even if made up of sloppy thermo stuff", rather than structures that are what they are, and can do what they can do, because of some non-linear magic.
(Then I mentioned a third possible hard boundary on speculation - the relativistic issue of information density. This, like the QM separability bound, is a physically certain boundary, but again arguably irrelevant because such constraints only kick in at physically extreme scales.)
Yet a further constraint on naive separability could be the argument that evolution is efficient and so the human brain (the most complex arrangement of matter in the known universe) is most likely to be close to the actual physical limits of complexity. Whatever it is that brains do to be conscious, we can probably expect that the way brains do it can't be beat. This makes it far less plausible that a technologist can come in and freely start stretching out the wiring connections, simplifying the paths, speeding up the transmission rates.
It might be argued, by the likes of Zuboff, that the brain as a natural machine is constrained by energetics - it is optimal, but optimised along a trade-off between consciousness production and metabolic cost. So a technologist with unlimited energy to make it all happen could unpack a brain into a very different set of components. But the argument that evolution is efficient at optimisation - and so brains would resist the kind of naive separation proposed, simply on the grounds that there must be something significant about their physical parameters (their particular transmission times, connection patterns, molecular turnover, etc) - must at least be dealt with in a thought experiment.
So we have two hard (but weak because they are distant in scale) constraints on speculation - QM and relativistic bounds.
We have a possible but unlikely constraint - non-linearity.
And we have the evolution optimisation constraint - which is probably weak here because it is easy enough I guess to imagine separating the energetic cost of replicating brain function.
Which all comes back to my original line of attack on the notion of separability. The systems view - which postulates a complex Aristotelian causality based on the interaction of bottom-up construction and top-down constraints - says reality is always dichotomised (divided into a local~global causality, as just described) but never actually separated (reducible to either local causes alone or global causes alone).
So what is going on here is that brains have both form and substance. They have their global organisation and their local components. There is indeed a kind of duality, but it is not the broken Platonic or Cartesian duality which leads to hard problems or waffling about emergence and supervenience. Instead, there is a duality of limits. You have a separation towards two different kinds of thing (bottom-up and top-down causation), but not an actual separation that divides reality. Just an asymptotic approach that produces two different "kinds" and in turn allows for the emergence of complexity in the synergistic interaction that results.
Translated into neuroscience, we should expect that the brain looks digital, computational, componential at the local level. This is what it is trying to be, as this is what bottom-up, constructive, additive causality looks like. But then we should also equally expect to be able to find the complementary global aspect to the brain which shows that it is an unbroken system. It must also be a system that can organise its own boundary constraints, its own global states of downward-acting causality.
Which, as any neuroscientist knows, is what we see: attention, anticipation, and so on. Components have a definite local identity only because they are embedded in enactive contexts. Experiments at many levels of brain function have shown this, and it is now a basic presumption of neuroscientific modelling (as shown, to pick a cutting-edge example, by Friston's Bayesian brain).
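To make the Bayesian point concrete, here is the textbook Gaussian cue-combination update that "Bayesian brain" models build on (the standard conjugate-Gaussian result, not Friston's full free-energy formalism): a top-down prior expectation and a bottom-up sensory observation are fused, each weighted by its precision.

```python
# Textbook Gaussian Bayesian update: combine a top-down prior with one
# bottom-up observation, weighting each by its precision (1/variance).

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Posterior mean/variance after one Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs)
    return post_mean, post_var

# Top-down expectation: stimulus around 10.0; noisy bottom-up reading: 12.0.
mean, var = gaussian_update(prior_mean=10.0, prior_var=1.0, obs=12.0, obs_var=1.0)
# Equal precisions -> the posterior splits the difference and sharpens.
assert mean == 11.0 and var == 0.5
```

The structural point carries over: the top-down prior plays the role of globally organised expectation, the bottom-up observation the local component signal, and neither direction alone fixes the outcome.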
So if the hard problem arises because of a belief in physical separability, there are a number of arguments to be considered. But the best argument IMHO is the systems one. And note this explains both why consciousness and brain function are not ontically separated AND why they also appear as separated as possible.
Running the argument once more, local and global causality are in principle not separate (otherwise how do they interact?). Yet they are separated (towards physically-optimal limits - otherwise how would they be distinctive as directions of causality?).
Of course, this means that rather than selling an argument just about brains, you are trying to sell an argument about reality in toto.
But then if your thinking about the nature of things keeps leading you to the impasse of the hard problem, and its unpalatable ontic escape clauses like panpsychism, well you know that, Houston, you have a problem.
The hard problem should tell people that reductionism really is broke. If you accept the premise of separable causality, you end up looking over the cliff. So turn round and look at what got left behind. You have to rediscover the larger model of causality that was there when philosophy first got started.