Gold Barz said:
This might be going a little off-topic, but does systems thinking, in contrast to reductionism, solve the hard problem or does it just change the hard problem?
It depends so much on what you believe the hard problem is about. But reductionism and the systems view are both modelling. Both also run out of steam where they cannot posit meaningful counterfactuals - something we can go and measure as a different result of a different set of causes. So I don't think one will work where the other fails.
On the other hand, systems thinking, being a more complete account of causality, could well be expected to do a better ultimate job, in so far as the job can be done.
If we are judging the success of a theory in the usual way - by the control it gives us over reality - then a concrete test would be to ask which gets us closer to artificial mind or artificial life: reductionism or systems?
But the hard problem gets its bite because it wants a theory to answer the question of "what it is like to be". That is not something we expect from a theory about quarks or rocks or ecosystems, but somehow it is a legitimate demand of a theory of mind.
If you want to be able to map a set of physical facts onto a set of mental facts, we can do a tremendous amount of this already. As I type on the keyboard, I can say all sorts of things about what is going on in my brain and how that relates to the feeling of how automatically my fingers find the keys, why it takes a particular lag to catch typing mistakes, why a jolt of physiological reaction accompanies that, and so on.
So there seems nothing hard about this level of mapping physical facts to mental facts. I'm doing it all the time.
If I did what a lot of people do and went, whoo, matter; whoo, experience; I know I'm my brain, but I'm also a view of the world; nothing figures - then yeah, it would seem a completely hard problem.
But if you then ask whether everything can be handled by mapping physical facts to mental facts, then, as I say, there does seem to be an irreducible residue for any kind of theory: eventually you run into a lack of available counterfactuals.
Take the zombie argument. I can't actually imagine it being true that a brain could do everything a brain does and yet lack awareness. So far as I can see, I have no grounds to doubt that it would be conscious. There are just too many physical facts that map to the mental facts for such a doubt to be reasonable.
A zombie is of course easier for a reductionist to believe in. But the systems view is that top-down causality is essential to things happening, so a zombie without top-downness couldn't mirror the function of a normal brain. A systems zombie would have to have attentional processes, for instance, and anticipatory states. Once you start giving a zombie absolutely everything, what is this extra thing that is still missing - the feeling of doing these things?
But on the other hand, I couldn't be so sure about a zombie's experience of red, or yours either. Would it be the same as mine, or could it be utterly different? Could the same neural processes be occurring, yet with a different phenomenal result? It seems unlikely but how can I check? How would I measure?
You can't even check your own story of whether your experience of red today is the same as yesterday.
Logic demands that if we have A, then not-A is conceivable. The one justifies the other and so sets up a counterfactual and the possibility of a definite measurement.
At the level of a zombie, we have so much going on that A (consciousness is a result of many physical facts) can be contrasted with not-A (a lack of even some of these facts results in a lack of conscious-like behaviour - a zombie that won't fool anyone).
But at the level of a quale like red, what is not-red (given the same physical facts)? A zombie's lack of convincingness is open to measurement. But comparing actual experiences of red in terms of some "otherness" is not possible.