ryan_m_b said:
I agree that a big problem with "free will" is the definition of it. Broadly, the idea of free will (as espoused by many ideologies from religion to law) is that one could have made a different choice about an action. Determinism undermines this ideology by pointing out that the decision was inevitable.
The idea of freewill is a religious/social construct and so not scientific. Every time the subject gets raised, it is because we fall back into a sterile division between "what Newtonian mechanics says" and "what religious and romantic belief says".
The scientific approach would be to realize that Newtonian mechanics is a limited model of the universe and really an incredibly bad basis for talking about neurobiological complexity. To even try to argue from a Newtonian starting point is a category error.
An infodynamics approach, for instance, would say it is all about information and constraints.
So a simple Newtonian system has simple constraints. All the information is present in a rather direct fashion regarding both initial conditions and boundary conditions. The situation reduces to a constructive tale of local efficient causation. Everything is determined by discrete, atomistic pushes and pulls in good mechanical fashion. The boundary conditions are not changing. The initial conditions likewise are set once. So there is no need in this stylised description of a system to pay attention to material, formal or final cause. These are all frozen still and the system becomes just the deterministic play of its parts - the degrees of freedom represented by the local atomistic pushes and pulls, or the system's efficient causes.
But complex systems are capable of actual development and change. The other three causes are not frozen but come into play and so must be tracked in the modelling.
We see this with QM. Final and formal cause are an issue because the future constrains the past (the various ways the observer issue makes itself felt). Deterministic chaos is another example of how a larger model is required because initial conditions actually need to be pinned down with arbitrary precision. Again there is an observer issue that has to be part of the model. The global constraints have to be precisely specified - they have to be known information - and this is a source of dynamism in the modelling, as no two global states of constraint need be the same. Newtonian mechanics of course presumes they are the same, so they can be left out of the modelling.
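As a rough illustration of that point about precision (a toy sketch of my own, not a model of any real physical system), the logistic map at r = 4 shows how a fully deterministic rule still demands that initial conditions be specified with arbitrary precision:

```python
# Iterate the logistic map x -> r*x*(1 - x) from two starting points that
# differ by one part in a trillion, and watch the trajectories diverge.

r = 4.0                                   # fully chaotic regime of the map
x_a, x_b = 0.400000000000, 0.400000000001

for step in range(1, 51):
    x_a = r * x_a * (1.0 - x_a)
    x_b = r * x_b * (1.0 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x_a - x_b):.3e}")

# By around step 40-50 the two trajectories are as far apart as the state
# space allows, even though the update rule itself is fully deterministic.
```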
So when it comes to modelling a complex system like a brain (a biologically evolved system embedded in turn in a memetically evolving culture), if you are going to insist on thinking in terms of efficient causality, you need to get a proper sense of the actual weight of atomistic actions involved.
A simple Newtonian system like an ideal gas has a rigid set of initial conditions and boundary constraints (the full story of efficient, material, formal and final cause). The only information that needs to be measured is the position and momentum of a collection of particles; all the rest then follows deterministically from efficient cause.
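To make that concrete (a minimal sketch of my own, using non-interacting particles bouncing in a 1D box as the simplest stand-in for an ideal gas), the whole model is just a list of positions and velocities plus one fixed boundary:

```python
# The entire state of the "Laplacean gas": positions and velocities, plus a
# fixed boundary condition (the box). Nothing else ever enters the model.

import random

L = 1.0          # box length - the boundary condition, set once
DT = 0.01        # time step
random.seed(0)

# Initial conditions - the only information that has to be measured.
positions = [random.uniform(0.0, L) for _ in range(5)]
velocities = [random.uniform(-1.0, 1.0) for _ in range(5)]

def step(xs, vs):
    """One deterministic update: drift, then bounce elastically off the walls."""
    out_x, out_v = [], []
    for x, v in zip(xs, vs):
        x += v * DT
        if x < 0.0 or x > L:
            v = -v
            x = min(max(x, 0.0), L)
        out_x.append(x)
        out_v.append(v)
    return out_x, out_v

for _ in range(1000):
    positions, velocities = step(positions, velocities)

print(positions)   # every later state follows mechanically from the start
```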
But with a brain, in a world which is a mix of the predictable and the unpredictable, which has been shaped by a history of millions of years of biological information, thousands of years of cultural information, tens of years of developmental information, and tens of minutes or seconds of fairly immediate context, task and goal information - well, that is a heck of a lot more information specifying the system.
So even just boiling a brain down to a collection of efficient causes (which is NOT an adequate description), you can see it already looks nothing like the kind of Laplacean ideal-gas version of a deterministic system. Even a chaotic system is incredibly simple by comparison.
The information - the collection of deterministic acts - involved in any brain decision, any individual act of freewill, stretches back millions of years. An ideal gas has virtually no history. Once it has gone to equilibrium, it really has no history. But a brain is quite incredibly poised at some moment in a particular history.
Diehard Newtonians, missing the point, will say: yes, but every step along the way to a brain's current state is deterministic, so its next instant is also determined. The only problem for science is to go and measure those prior events as the initial conditions of a Newtonian model.
Well, you could do that (except QM and deterministic chaos suggest perhaps you can't). But it would be an incredibly inefficient approach to modelling. In practice, we can already see it would be foolish to treat every single little atom of efficient cause in the history of an organism as being of equal scale, of equal import (which is what the information-theoretic approach would demand). Instead, we would want to coarse-grain. Some efficient causes are clearly going to be more proximate than others.
If I am sat at the lights about to turn green, what determines my next action (while also allowing me to be a contrary devil and decide to sit there blocking the traffic, listening to the angry honking of those behind)?
Determinism of the kind that wants to insist the decision was inevitable and predictable since the big bang would have to give equal weight to every discernible event in my past light cone. Determinism of some more moderate kind would have to include the brain-evolving experiences of my H. erectus forebears - so already the coarse-graining of the measurement of the initial conditions has begun. Determinism of a fairly practical kind we could actually recognise as science would just try to find something about me at that moment which explains why, instead of doing the habitual thing (driving away), I did something out of the ordinary.
Perhaps I was having some petit mal fit, or winning a stupid bet, or could see an ambulance coming down the other way. This could be taken as an efficient cause (because, in good Newtonian fashion, all the other causes seem unchanged - all my knowledge and training of how to react to a green light stayed exactly the same). But it would be an incredibly coarse-grained notion of the reason that determined an action. And so totally specific to the context that it could not be generalised as an explanation of freewill, or choice-making, in general.
It could have been a fit, a bet, a "give way to emergency vehicles" rule, or any number of other possible efficient causes. Therefore we end up with a fairly useless model of the style: well, he would have driven off at the green light in usual circumstances, but various unmodellable micro-circumstances can drive different decisions.
A better model of a choice-making system would take into account that there is dynamism in material, formal and final cause as well. These would not be frozen out of the description of a system but would be knobs and settings that could be twiddled. Then we would instead have a model of "usual behaviour" in interaction with "specific circumstances".
ie: the kind of infodynamic models currently being pursued in neuroscience, such as the Bayesian Brain.
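To give the flavour (a toy sketch only - the numbers are invented and this is nowhere near the actual models in the literature), a Bayesian update lets the habitual prior ("drive off at green") interact with one coarse-grained piece of circumstance information ("I can hear a siren"):

```python
# Toy Bayesian update in the spirit of "usual behaviour" meeting "specific
# circumstances". All probabilities are invented purely for illustration.

# Prior over actions: habit strongly favours driving off at a green light.
prior = {"drive_off": 0.95, "stay_put": 0.05}

# Likelihood of hearing a siren given each intended action: a siren is far
# more probable in the situations where staying put is the right call.
likelihood_siren = {"drive_off": 0.02, "stay_put": 0.60}

# Bayes' rule: posterior proportional to prior times likelihood, then normalise.
unnormalised = {a: prior[a] * likelihood_siren[a] for a in prior}
total = sum(unnormalised.values())
posterior = {a: p / total for a, p in unnormalised.items()}

print(posterior)
# -> roughly {'drive_off': 0.39, 'stay_put': 0.61}: one coarse-grained piece
# of context information is enough to overturn the habitual response.
```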
Nothing can stop these medieval-sounding debates about Newtonian determinism vs conscious freewill. The meme is now entrenched because it embodies the popular understanding of a fundamental conflict between scientism and religious doctrine. That is why there is still "the battle that must be won".
But Newtonian mechanics was a moment in time. Science has advanced hugely since then. The debate no longer reflects the state of human knowledge.
And science has to give up the notion that all is determined, all is local, atomistic, efficient cause, just as much as the unscientific have to give up cherished notions such as "freewill" as a substantial (physically causal) property of an immaterial mind.