Can Mind Arise from Plain Matter?

Thread starter: Q_Goest
Tags: Cause, Events

Thread summary: The discussion centers on the challenges of mental causation, particularly the exclusion argument for epiphenomenalism, which holds that mental events cannot influence physical outcomes because preexisting physical conditions already suffice. Yablo's dualism is highlighted: mental and physical phenomena are distinct, yet both exist. The debate includes examples such as computers processing information, raising the question of whether they experience consciousness or simply follow programmed responses. Participants argue about the implications of quantum phenomena and the potential for unknown causal powers, suggesting that materialism and determinism may not fully account for mental experience. The conversation ultimately seeks to reconcile mental and physical events, emphasizing the need for a deeper understanding of consciousness and causality.
Q_Goest
The primary problems with mental causation are nicely summed up by Yablo (“Mental Causation”, The Philosophical Review, Vol. 101, No. 2, April 1992).
http://www.jstor.org/pss/2185535
"How can mental phenomena affect what happens physically? Every physical outcome is causally assured already by preexisting physical circumstances; its mental antecedents are therefore left with nothing further to contribute." This is the exclusion argument for epiphenomenalism.
...
(1) If an event x is causally sufficient for an event y, then no event x* distinct from x is causally relevant to y (exclusion).
(2) For every physical event y, some physical event x is causally sufficient for y (physical determinism).
(3) For every physical event x and mental event x*, x is distinct from x* (dualism).
(4) So: for every physical event y, no mental event x* is causally relevant to y (epiphenomenalism).
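
Spelled out in first-order form (a sketch of my own; the predicate symbols are not Yablo's), the inference is valid:

```latex
% S(x,y): x is causally sufficient for y    R(x,y): x is causally relevant to y
% P(x):   x is physical                     M(x):   x is mental
\begin{align*}
&(1)\;\; \forall x\,\forall y\,\forall x^{*}\,\big[S(x,y)\wedge x^{*}\neq x \rightarrow \neg R(x^{*},y)\big] && \text{exclusion}\\
&(2)\;\; \forall y\,\big[P(y)\rightarrow \exists x\,\big(P(x)\wedge S(x,y)\big)\big] && \text{physical determinism}\\
&(3)\;\; \forall x\,\forall x^{*}\,\big[P(x)\wedge M(x^{*})\rightarrow x\neq x^{*}\big] && \text{dualism}\\
&(4)\;\; \forall y\,\forall x^{*}\,\big[P(y)\wedge M(x^{*})\rightarrow \neg R(x^{*},y)\big] && \text{epiphenomenalism}
\end{align*}
% Proof sketch: given physical y, (2) supplies a physical x with S(x,y);
% (3) makes any mental x* distinct from x; (1) then denies x* causal relevance to y.
```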

Yablo defines dualism as follows:
… all I mean by the term [dualist] is that mental and physical phenomena are, contrary to the identity theory, distinct, and contrary to eliminativism, existents.
In other words, the physical description of the color red as it appears in the mind will be different from the mental description of the color red (distinct), but both should be taken as phenomena that actually occur (existents). The physical description would discuss which neurons and sections of the brain are active when the phenomenon of ‘red’ occurs, while the mental description would focus on explaining the qualia.

Take, for example, an allegedly conscious computer. For the sake of clarity, let’s model a computer as a large collection of switches, which is basically all a computer is. At the heart of every modern computer is the transistor, which is nothing more than a switch.

Consider a computer that reports that it sees the color red when looking at, say, a fire truck. This computer has a camera for eyes and a speaker for a mouth, so when the camera is pointed at a fire truck, the speaker reports ‘red’. But did it report red because it is actually experiencing red, or because its circuit is designed so that red is reported? None of the transistors in the computer are influenced by any ‘experience’ of redness. Each transistor only changes state because an electrical current is either applied or removed. And per computationalism, the experience of the color red is a phenomenon produced by the interactions of the transistors, not a phenomenon produced by any given transistor.
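
To make the switch-level picture concrete, here is a minimal sketch in Python (the pixel threshold and the wiring are invented for illustration, not a model of any real machine). Every state change is fixed by the circuit; no term anywhere refers to an experience of redness.

```python
# Toy "red-reporting" machine built from two switches.
def camera_pixel_is_red(rgb):
    """Switch 1: closes when the red channel dominates (threshold is arbitrary)."""
    r, g, b = rgb
    return r > 200 and g < 100 and b < 100

def speaker(report_line_is_high):
    """Switch 2: driven entirely by the state of the line feeding it."""
    return "red" if report_line_is_high else ""

fire_truck_pixel = (230, 40, 35)                       # camera input
print(speaker(camera_pixel_is_red(fire_truck_pixel)))  # -> red
```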

For the computer, we have physical events (changes in transistor states) which have physical causes (application or removal of electric current). Mental events, therefore, are not causally relevant and are epiphenomenal.

Appeals to mental causation via quantum phenomena may also be problematic. Very briefly, if a quantum physical event were somehow influenced by a mental event (during protein folding, for example), then the probability of the physical event would have been altered by the mental event. Suppose a quantum physical event has a 50/50 chance of occurring, and some mental event influences it so that it no longer occurs with a 50/50 chance. This might violate entropy, since a system could now become more ordered because of mental events.
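
The worry can be made concrete with Shannon entropy as a rough stand-in for the thermodynamic kind (a sketch; the 70/30 bias is an illustrative number). An unbiased two-outcome event carries one bit; a mental "nudge" that biases it lowers that figure:

```python
import math

def shannon_entropy(p):
    """Entropy in bits of a two-outcome event with probability p."""
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(shannon_entropy(0.5))  # 1.0 bit: the unbiased 50/50 event
print(shannon_entropy(0.7))  # ~0.881 bits: the same event after a mental nudge
```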

What’s your view? How can mental events be reconciled with physical events? Please provide references to the literature if at all possible. We don’t want personal theories.
 
My objections to his argument:

(2) I don't think this is an established fact. If we knew every cause and effect, then everything would be known and science would have nothing left to investigate. But we don't know everything, so this is not the case. We don't know if and how strings work, whether there are other universes, what happens in black holes, what started the universe, how gravity works, how QM and GR fit together, where the laws of physics came from, how atoms work, etc. (for an example of an unexpected atomic interaction, see http://www.sciencedaily.com/releases/2008/07/080702132209.htm, or consider the reasons they built the Large Hadron Collider). So there is room for unknown causal powers (either physical or mental) in our universe.

(3) is avoided by adopting a monism such as materialism, panpsychism, idealism, or something else. With materialism: if mind = matter, then mind has the same causal powers as matter. With panpsychism: we can talk about "the physical" as if it were unconscious, but we don't really know. A physical body might operate according to a known mechanism yet be conscious. There is no logic that says mechanistically/deterministically behaving objects cannot be conscious, or that consciousness cannot cause mechanistic/deterministic behaviour.

The issue with entropy seems to have more to do with free will than with mind in general. And if a mind with free will made a system more ordered in one location, yet this caused a decrease in order in another location, would it violate entropy? If not, then a mind with free will has much room to operate in concordance with entropy.
 
Hi pftest,
pftest said:
My objections to his argument:
I’d rather not go through everyone’s own arguments and objections. Please review some of the literature on the topic.

Regarding atomic interactions, that’s a non-starter. Those are physical processes that can be objectively measured, and if they are random in nature then, as I’d pointed out before, the statistical chances of those processes occurring can be quantified. Radioactive decay, for example, has a well-defined statistical rate of occurrence. No one has ever suggested that the likelihood of a physical process occurring is dependent on someone’s mood, for example, which is what we’d need to find if mental causation is true.

pftest said:
With materialism: if mind = matter, then mind has the same causal powers as matter.
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms? This argument can be extended to the mind given the present computational paradigm. What we say and how we act can be explained by referencing the governing physical interactions between neurons and other physical interactions within the body. It also leaves open the issue of reliable reporting that we discussed in your other thread.

I’d suggest doing a search of the web for:
http://www.google.com/search?hl=en&q=mental+causation&aq=f&oq=&aqi=g2
http://www.google.com/search?hl=en&source=hp&q=explanatory+gap+&aq=f&oq=&aqi=g1
 
I'm sorry I don't have any references, but I shall look for some later. I will have a go at it now anyway; I believe it is not against the forum rules to do so.

Q_Goest said:
Regarding atomic interactions, that’s a non-starter. Those are physical processes that can be objectively measured, and if they are random in nature then, as I’d pointed out before, the statistical chances of those processes occurring can be quantified. Radioactive decay, for example, has a well-defined statistical rate of occurrence. No one has ever suggested that the likelihood of a physical process occurring is dependent on someone’s mood, for example, which is what we’d need to find if mental causation is true.
I brought the atomic interactions up because it shows that even in atoms there is room for unknown causal powers. That room itself is enough to dismiss the idea that there is no room for mental causation.

Q_Goest said:
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms? This argument can be extended to the mind given the present computational paradigm. What we say and how we act can be explained by referencing the governing physical interactions between neurons and other physical interactions within the body. It also leaves open the issue of reliable reporting that we discussed in your other thread.
You are right, I don't think materialism solves this and I should not have mentioned it.

What I was trying to say is that "the physical" need not be in conflict with "the mental". When we have an equation that predicts that an object will move in a straight line, it doesn't follow that the object has no mind.

The statements "the object will move in a straight line" and "the object has a mind" are not in conflict with each other. Similarly, I am saying that physical interactions need not be in conflict with mental ones. The end result is then panpsychism or neutral monism (but not materialism as i mistakenly said).
 
Q_Goest said:
Consider a computer that reports that it sees the color red when looking at, say, a fire truck. This computer has a camera for eyes and a speaker for a mouth, so when the camera is pointed at a fire truck, the speaker reports ‘red’. But did it report red because it is actually experiencing red, or because its circuit is designed so that red is reported?

It may be important to distinguish between a computer (a Turing machine) and machines more generally here. What you are describing is a rather hybrid system that, I believe, confuses the essential issues.

So a Turing machine is the familiar tape and gate system. If it is really "computing" that your machine is doing, then all those functions and activities can be reduced to the making and erasing of marks on an infinite tape. You can then ask the relevant questions of this most minimal model and see how they stack up.

Do this and you can see, for example, that you now have no clear place to insert your camera input and your speaker output. You can write these actions into the program as data on the tape, but the point is that YOU have to. The computer is not in a dynamic relationship with the world as a basic fact of its nature.
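
For reference, a minimal tape-and-table machine might look like the sketch below (the overwrite-with-zeros program is my own toy example). Note that there is no hook for a camera or a speaker; any "input" exists only because we wrote it onto the tape beforehand.

```python
def run_turing_machine(table, tape, state="start", head=0, max_steps=1000):
    """Minimal Turing machine: read a mark, write a mark, move the head,
    change state. That is all it ever does."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        write, move, state = table[(state, cells.get(head, "_"))]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# Toy program: overwrite every mark with 0 until a blank is reached.
table = {
    ("start", "1"): ("0", "R", "start"),
    ("start", "0"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(table, "1101"))  # -> 0000_
```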

Now you can begin to think about how you would build up some actual modelling relation with a world - how you would build something more like a neural network that could learn from experience. What is it exactly that you are adding that was missing?

To cut a long story short, the whole epiphenomenal/dualistic debate arises because we insist on taking a strictly bottom-up, built-from-smallest-components approach to thinking about complex adaptive systems. Complexity involves both form - global organisation - and substance, the local material out of which things get made (which includes the notion of information, or transistor bits, or marks on an infinite tape).

With neural networks, we are beginning to see signs of a global ongoing state that acts as a living context for the system's moment-to-moment reactions - the ideas or long-term memories that frame the impressions or short-term memories (see Grossberg, Rao, Hinton, MacKay, Attneave, or anyone working on generative neural nets, forward models, Helmholtz machines, deictic coding, etc.).
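
As a cartoon of such a global ongoing state, consider a single recurrent unit (a sketch; the weights are illustrative, not taken from any of the authors named above). Its response to the present input is framed by a running context that summarises everything it has seen:

```python
import math

def step(context, inp, w_ctx=0.9, w_in=0.4):
    """One update: the unit's output blends the running context (its decayed
    history) with the instantaneous input. Weights are arbitrary."""
    return math.tanh(w_ctx * context + w_in * inp)

context = 0.0
for x in [1.0, 1.0, 1.0, 0.0]:
    context = step(context, x)
    print(round(context, 3))
# The final input is 0.0, yet the response stays well above zero:
# the accumulated context, not the momentary input, dominates.
```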

A Turing machine has no global organisation, no hierarchy of operational and temporal scale. So there is nothing like a top-down causation guiding and constraining its actions. There is no internal meaning or semiosis. It is only we programmers who found the marks on the tape meaningful when we first wrote them and when we looked again at how they were rearranged.

All this is perfectly obvious from a systems perspective and so these kinds of philosophical traumas have no real content. There is a problem of coming up with an adequate model of top-down causality as it applies to conscious human brains - it is a hard ask - but not an issue of actual causal principle.

To add a little reality to your hybrid machine, what if you allowed that it was sufficiently complex to be an anticipatory device?

This would mean that before a fire truck hove into sight, it would be in a state of prevailing expectation of not seeing red in that particular part of the visual field. It would be expecting to see the colour of whatever was already in that place. The red fire truck would then be a surprise, although hearing its sirens would prime it for the sight of the truck coming around the corner (and if the truck were painted green, that would be an even bigger surprise).

And so on. The point being that the "mind" is always there as a global state of anticipation and prepared habits. New information can be taken in, but there is always a prevailing context framing it. This is what a computer simulation would have to replicate in all its gloriously complex detail. And such a simulation would have even less to do with the canonical Turing machine than a neural net does. It would in fact have to have the real-life dynamism of a human brain embedded in a human body.
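
A crude sketch of that anticipatory scheme (the colour labels and the 0/1 mismatch score are my own simplification): surprise is the gap between the prevailing expectation and the input, and priming works by shifting the expectation before the input arrives.

```python
def surprise(expected_colour, seen_colour):
    """Mismatch between prevailing expectation and input: a stand-in
    for a prediction-error signal."""
    return 0.0 if expected_colour == seen_colour else 1.0

expectation = "grey"                 # prevailing context: grey street scene
print(surprise(expectation, "grey")) # 0.0 - nothing to update
print(surprise(expectation, "red"))  # 1.0 - fire truck hove into view: surprise

expectation = "red"                  # sirens heard: expectation shifts first
print(surprise(expectation, "red"))  # 0.0 - primed, so no surprise
```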

So the standard trick of philosophical dualists is to say we can't imagine a Turing machine being conscious. Well, neither can a systems theorist. And what is completely lacking in the one, and completely necessary in the other, is this hierarchy of scale, this interaction between bottom-up constructing causality and top-down contextualising, or constraining, causality.
 
Hi apeiron,
Thanks for the response. I realize some folks feel a systems approach and some form of downward causation are instructive. The paper "Physicalism, Emergence and Downward Causation" by Campbell and Bickhard, for example, is right up your alley. They discuss mental causation and reference Kim. To me, it's all mere handwaving.

I'm on the other side of the fence. Craver and Bechtel [2], I think, do a nice job of getting in between the two camps and provide an argument that you might find interesting. They suggest a way of thinking about "top-down causation" without resorting to downward causation. They suggest that interlevel relationships are only constitutive. To a systems-level approach they say, "...those who invoke the notion of top-down causation ... owe us an account of just what is involved." I see very few individuals attempt to provide that account, and those that do have not been able to prove any kind of downward causation. Bedau [1] discusses weak and strong emergence as well as downward causation. Bedau suggests that "weak emergence is all we are entitled to" and does a very good job pointing out that "emergent macro-causal powers would compete with micro-causal powers for causal influence over micro events, and that the more fundamental micro-causal powers would always win this competition." I see no evidence to challenge that.

Regardless of which camp you’re in, the systems approach doesn't do anything to change the conclusion. Every single transistor, switch, or other classical element of any neural net only ever changes state because of local causal actions. In the case of a transistor, it's the current applied to the transistor. It really is that simple!

1. Bedau: http://people.reed.edu/~mab/publications/papers/principia.pdf
2. Craver and Bechtel: http://philosophyfaculty.ucsd.edu/faculty/pschurchland/classes/cs200/topdown.pdf
 
Q_Goest said:
Thanks for the response. I realize some folks feel a systems approach and some form of downward causation are instructive. The paper "Physicalism, Emergence and Downward Causation" by Campbell and Bickhard, for example, is right up your alley. They discuss mental causation and reference Kim. To me, it's all mere handwaving.

Most philosophers don't take it seriously, and yet most mathematical biologists do. Interesting, that. o:)

Here is my own set of refs from an earlier thread on this...
(https://www.physicsforums.com/showthread.php?p=2469005&highlight=emmeche#post2469005)

http://www.ctnsstars.org/conferences...0causation.pdf

http://www.buildfreedom.com/tl/tl20d.shtml

http://people.reed.edu/~mab/papers/principia.pdf

http://www.nbi.dk/~emmeche/coPubl/2000d.le3DC.v4b.html

http://www.nbi.dk/~emmeche/coPubl/97e.EKS/emerg.html

http://pespmc1.vub.ac.be/CSTHINK.html

http://www.calresco.org/

http://books.google.co.nz/books?id=N...ollege&f=false

http://www.nbi.dk/~emmeche/pr/DC.html

http://www.isss.org/hierarchy.htm

https://webspace.utexas.edu/deverj/p...bingmatter.pdf

Q_Goest said:
Regardless of which camp you’re in, the systems approach doesn't do anything to change the conclusion. Every single transistor, switch, or other classical element of any neural net only ever changes state because of local causal actions. In the case of a transistor, it's the current applied to the transistor. It really is that simple!

Not so fast, cowboy. If you are dealing with hierarchically organised systems, you can't just blithely label all the causality as "local". The point is that there are in fact prevailing long-term states of activity across the network that act as contextual constraints.

The on-ness or off-ness of a particular transistor gate is the result of events that happened in the past and of events predicted (with a certain weight) to happen in the future.

The on-ness or off-ness of a particular transistor gate has both some level of "now" meaning relating to some current spatiotemporal pattern of activation, and also some level of more general meaning as part of long term memory patterns.

If you look at the transistor gate from an outside point of view - and make that choice to measure its isolated state at some isolated instant in its history - then it may indeed seem you are only seeing bottom-up local causality. But precisely then you are missing the real deal: the internal systems perspective by which every local action has meaning because it occurs within a running global context.

If philosophers studied biology and systems theory, this would not be such a mystery.

There are honorable exceptions of course like Evan Thompson.

http://individual.utoronto.ca/evant/MBBProblem.pdf
 
Hi apeiron,
apeiron said:
The on-ness or off-ness of a particular transistor gate is the result of events that happened in the past and of events predicted (with a certain weight) to happen in the future.
This is okay. Why "certain weight" though? Are you suggesting computers are not deterministic?
apeiron said:
The on-ness or off-ness of a particular transistor gate has both some level of "now" meaning relating to some current spatiotemporal pattern of activation, and also some level of more general meaning as part of long term memory patterns.

If you look at the transistor gate from an outside point of view - and make that choice to measure its isolated state at some isolated instant in its history - then it may indeed seem you are only seeing bottom-up local causality. But precisely then you are missing the real deal: the internal systems perspective by which every local action has meaning because it occurs within a running global context.
Would you agree that none of this changes the fact that transistors only ever change state because a current is applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons. Macro-states at the classical scale simply don't influence micro-states, except that macro-states provide boundary conditions that put limits on potential micro-states. I only see bottom-up causality (in classical mechanics) because that's all there is. That's why engineers, scientists, meteorologists, etc. use finite element analysis for all kinds of structural, fluid, heat transfer, and electromagnetic problems - all classical phenomena. They can all be dealt with using local, bottom-up causation. That's the "real deal".
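
The locality point can be illustrated with a finite-difference cousin of such methods (a toy sketch, not real FEA code): each node's next state is computed from itself and its immediate neighbours alone, with the boundary conditions held fixed.

```python
def diffuse_step(u, alpha=0.1):
    """One explicit step of 1D heat flow: every interior node is updated
    from purely local information (itself and its two neighbours);
    the fixed end nodes act as boundary conditions."""
    return [u[0]] + [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
                     for i in range(1, len(u) - 1)] + [u[-1]]

bar = [0.0, 0.0, 100.0, 0.0, 0.0]   # a hot spot in a cold bar (toy numbers)
for _ in range(3):
    bar = diffuse_step(bar)
print([round(t, 1) for t in bar])    # [0.0, 19.4, 56.0, 19.4, 0.0]: heat spreads locally
```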

Regarding chemistry, biology, and condensed matter physics, there are many instances of new and unexpected things happening. There are some articles in the literature that make valid cases for there being non-separable physical states at or below the level where classical mechanics gives way to quantum mechanics. We might find some common ground there, but I doubt we'll ever see eye to eye on everything.
 
Q_Goest said:
This is okay. Why "certain weight" though? Are you suggesting computers are not deterministic?

Computers are certainly designed to be as deterministic as possible - that is part of their engineering spec. And of course we know how difficult this is becoming as chip gates get down to the nano-scale.

But no. The nodes of neural nets are weighted in the sense that they do not switch indiscriminately but on the basis of their learning history, just like the real neurons they are meant to vaguely simulate.
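
A minimal sketch of such a weighted node (a textbook perceptron update; the OR task, learning rate, and epoch count are illustrative): after training, the node's switching is fixed by weights that encode its history, not by the instantaneous input alone.

```python
def train_node(history, lr=0.1, epochs=20):
    """Perceptron-style node: each error nudges the weights, so the final
    switching behaviour is a record of the node's learning history."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in history:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

# Teach the node OR, then probe it: it now switches "because of" its past.
history = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, b = train_node(history)
print(1 if w1 * 1 + w2 * 0 + b > 0 else 0)  # -> 1
```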

Q_Goest said:
Would you agree that none of this changes the fact that transistors only ever change state because a current is applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons. Macro-states at the classical scale simply don't influence micro-states, except that macro-states provide boundary conditions that put limits on potential micro-states. I only see bottom-up causality (in classical mechanics) because that's all there is.

I was saying that if you insist on only measuring systems in simple ways, you will of course only extract simple measures of what is going on.

Your question is "why did this transistor switch?" You say it is only because of some set of inputs arriving at that moment. I say it is because of some history of past learning, some set of expectations about what was likely to happen, some current context in which its switching state makes cooperative and cohesive sense.

You then say, well, I'm looking at the transistor and I can't see these things. I reply that that is because all the fancy stuff that is actually making things happen has been hidden away from your gaze at the level of the software.

In a realistic neural simulation, for example, there would have to be some equivalent of neural priming, neural binding, population voting, evolving responses. Some thousands of transistors would be needed (a computational sub-system) to even begin getting this necessary global complexity represented as hardware.

So again, you are pursuing an illegitimate route to an argument.

The honest approach is to strip your thought experiment down to the bare essentials of a Turing machine and see if your idea still holds. And the standard outcome of such approaches is agreement that you have now clearly put all meaning outside the physical implementation. The writing and the interpreting of the programs are external to its running. And all you have done is break apart the bottom-up crunching from the top-down contextualisation, not proved that the top-down part is actually unnecessary to the deal.

This is not a classical vs QM issue either. It applies to all systems (and thus all reality - reality being best understood as a system, except when you find it more useful to model it as a machine).
 
  • #10
Q_Goest said:
Would you agree that none of this changes the fact that transistors only ever change state because a current is applied? And if that's true, then mental states per the standard computational paradigm still don't influence individual transistors any more than a brain state influences individual neurons.

But the applied current itself is based on inputs to the system; ultimately, from a user, who may very well be responding to an output from the system. I don't know how easily we can separate a computer from the user, or from the engineers that designed it.

Q_Goest said:
This leaves open the explanatory gap. Why should there be any feeling or phenomenal experience at all? How can we know what a computer experiences since everything a computer does can be FULLY explained in physical terms?

Why shouldn't there be a feeling or phenomenal experience? I'm not sure we can know what a computer experiences until we close the explanatory gap. I think we'd have to find the physical basis for our experience and start mapping it to get an idea of what physical process is associated with what kinds and parts of consciousness.

Speaking in magnitudes of centuries, I don't think we're that far off from being able to bring the mind into the physical arena.
 
  • #11
Hi Q_Goest, we discussed some time ago how experience is the action of several neurons and chemical processes recording an incident or event. If your computer records the "experience" of red, then uses that earlier response in a more recent one, then it is experiencing red. Similarly with the brain doing relatively the same action or event.

What's different here is that your computer is not set up to experience red with genetic responses or "innate" response. Every cell in our entire bodies responds to the colour red as do plants and other animals. This is either genetic or a chemo-photosensitive trait that has been selected through our evolution.

I don't think the genetic or chemo-photosensitive reactions we have to the colour red are dependent on mental activity. The cells respond autonomically. And personally, I'd tell you that any brain with its visual centre, eyes etc... in working order is primarily autonomic as well. I would point out the obvious here and say that when our brain detects red there is an intellectual signal to stop the car... and that would indicate a mental cause of an action/event. (?)
 
  • #12
Hi pftest,
pftest said:
I brought the atomic interactions up because it shows that even in atoms there is room for unknown causal powers. That room itself is enough to dismiss the idea that there is no room for mental causation.
Sorry if my last post sounded a bit abrupt. I actually agree that there may be room for mental causation at a quantum level, but I don’t yet see how and I’ve not read enough of the literature to locate a good argument in this regard. One issue is the explanatory gap – why should any physical process be accompanied by a mental one? This is equally applicable to quantum interactions. Another issue is that there could be violations of ‘entropy’ in the sense that I provided in the OP. I think what quantum models have in their favor is that they provide a physical substrate which is intrinsically inseparable. Phenomena that can be described in classical terms however are separable (depending on how you define separable).


Hi apeiron,
Let’s clarify one issue. In your first post you mentioned “top-down causation” and when I read that in context it seemed to me you meant “downward causation”. Hence the focus on the transistor. Downward causation may or may not be what you have in mind, but I’m assuming it is. Top-down causation has been defined in different ways in the literature, so perhaps you’d like to clarify.
1) Top-down causation can mean “downward causation”. I’ll define “downward causation” below as defined by Bedau.
or
2) it can mean that the boundary conditions of a physical system restrict the potential micro-states of that system. For example, a point on a wheel rolling down a hill has its motion restricted by the boundary conditions on the wheel. See Bedau’s paper for more on this. This kind of top-down causation is not problematic, but it also doesn’t lend any help to mental causation.
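
The wheel case can be written down explicitly (a standard parametrization, not quoted from Bedau): rolling without slipping ties the axle position to the rotation angle, x_c = Rθ, so a rim point is confined to a cycloid rather than roaming freely:

```latex
% Rim point of a wheel of radius R rolling without slipping:
% the macro-level constraint x_c = R\theta removes micro-level freedom,
% leaving the single degree of freedom \theta.
x(\theta) = R\,(\theta - \sin\theta), \qquad y(\theta) = R\,(1 - \cos\theta)
```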

You may have another meaning for top down causation in mind, so feel free to clarify.


Hi Pythagorean,
I’d asked for people to reference the literature, not so much because I like people to keep referencing things, but because the philosophy forum has a reputation for people ignoring the literature as if it doesn’t exist. The issues regarding cognition have already been considered in depth by others, so using our own intuitions about philosophy typically gets us in trouble.

Before getting into this, I want to define “downward causation” as given by Bedau:
The most stringent conception of emergence, which I call STRONG EMERGENCE, adds the requirement that emergent properties are supervenient properties with irreducible causal powers. These macro-causal powers have effects at both macro and micro levels, and macro-to-micro effects are termed “downward” causation. We saw above that micro determination of the macro is one of the hallmarks of emergence, and supervenience is a popular contemporary interpretation of this determination. Supervenience explains the sense in which emergent properties depend on their underlying bases, and irreducible macro-causal power explains the sense in which they are autonomous from their underlying bases.

By definition, such [downward] causal powers cannot be explained in terms of the aggregation of the micro-level potentialities; they are primitive or “brute” natural powers that arise inexplicably with the existence of certain macro-level entities. This contravenes causal fundamentalism – the idea that macro causal powers supervene on and are determined by micro causal powers, that is, the doctrine that “the macro is the way it is in virtue of how things are at the micro”.

Downward causation is now one of the main sources of controversy about emergence. There are at least three apparent problems. The first is that the very idea of emergent downward causation seems incoherent in some way. Kim (1999, p. 25) introduces the worry in this way:
The idea of downward causation has struck some thinkers as incoherent, and it is difficult to deny that there is an air of paradox about it: After all, higher-level properties arise out of lower-level conditions, and without the presence of the latter in suitable configurations, the former could not even be there. So how could these higher-level properties causally influence and alter the conditions from which they arise? Is it coherent to suppose that the presence of X is entirely responsible for the occurrence of Y (so Y’s very existence is totally dependent on X) and yet Y somehow manages to exercise causal influence on X?
The upshot is that there seems to be something viciously circular about downward causation.

The second worry is that even if emergent downward causation is coherent, it makes a difference only if it violates micro causal laws (Kim 1997).
I’ll end it there. Hopefully you get the idea.

If you agree that transistors only change state because of there being a current applied to the base, then you can successfully rule out downward causation (and very likely, mental causation) for such a system of switches. I believe you must agree with that, so hopefully the above discussion by Bedau helps provide an understanding of what is meant by downward causation. I’d strongly recommend reading Bedau’s paper (link above).

There are others who accept this but still try to defend mental causation in some fashion. I’ve seen various methods of attack. I’d categorize these as largely appealing to the complexity of such a system and glossing over the simple facts. Once you rule out downward causation, mental causation (using the standard computational paradigm) becomes not only indefensible, but it creates a very nasty paradox.

If we rule out mental causation, we have a very serious paradox that is almost ignored in the literature. (almost but not quite) The problem is that if mental causation is false, then any behavior we express or reporting of mental states cannot be shown to correlate reliably with the actual mental states. In fact, in the worst case, we may even be forced to accept the worst form of panpsychism, that all matter experiences every possible phenomenal experience at the same time!* The standard line of defense on this issue is to say that mental states ARE physical states. However, this doesn’t help with the paradox one bit IMHO. If the mental states really are epiphenomenal on the physical states, then there is nothing we can do to determine what those mental states are. We can’t discover them by observing behavior and we can’t find out by asking people about them.

Ultimately, the behavior and reports of those mental states in a computer are completely governed by the physical states, so there is no chance of the mental state being reliably reported. For example, consider a computer animation on a screen of a man in pain saying he’d like you to stop pressing the down arrow because each time you do he feels a stabbing pain. If the computer really feels this pain, how can we know? Did the computer say so because of the change in physical states of the switches? Or did the computer experience something and it told you what it was feeling?

We can take the machine apart and we’ll find a series of transistors that change state just like dominos falling over. There is a physical reason for the behavior (and the reporting) that the animated character provides. The animation MUST act and say those things because there is a physical reason for the changes in state of the computer. However, the figure could equally be experiencing anything or nothing at all. There is no way for the animated figure to do anything but act and talk as if it were experiencing pain because that’s what the physical changes of state resulted in. Those physical changes of state can’t report mental states in any way, shape or form, so even behavior does not reliably correspond to mental states if mental causation is ruled out.

Per the paradox above, I think we’re forced to conclude that mental causation is a fact of nature. But the computational paradigm rules this out since it insists that classical scale physical processes govern the actions of the brain, and those processes are both separable and locally causal such that the overall macro-state of the brain has no causal influence on any individual neuron any more than the macro-state of a computer has a causal influence on any individual transistor.

*This was brought out by Mark Bishop, "Dancing with Pixies"
 
  • #13
Q_Goest said:
Let’s clarify one issue. In your first post you mentioned “top-down causation” and when I read that in context it seemed to me you meant “downward causation”.

I would happily use the terms interchangeably. And I don't actually think either of them are the best way to put it.

A first issue is that this is "top-down" and "downwards" in spatiotemporal scale. So it is better to speak of global causality. The action is from global moments to the local ones. So it is from a larger size, but also a longer time. Thus it is as much from before and after as it is from "above" in spatial scale. Which is why there is such a stress on history, goals and anticipation - the global temporal aspects.

A second point is that I want to stress the primacy of constraint as the form of causality we are talking about. I am dividing causality not just by direction or scale but also by kind.

Local bottom-up causality has the nature of "construction" - additive action. Global top-down causality has the nature of "constraint" - a suppression of local degrees of freedom (free additive constructive action).

Note this is different from versions of cybernetics or complexity theory, for example, where the top-down action is thought of as "control". Another different kind of thing. Although autonomous systems (like us humans) can appear to act in controlling causal fashion on the world.

As you suggest, a lot of people see control as indeed the definition of what consciousness is all about if consciousness is a something that does anything. But this is a wrong idea on closer analysis.
 
  • #14
Q_Goest said:
<...>

If we rule out mental causation, we have a very serious paradox that is almost ignored in the literature. (almost but not quite) The problem is that if mental causation is false, then any behavior we express or reporting of mental states cannot be shown to correlate reliably with the actual mental states. In fact, in the worst case, we may even be forced to accept the worst form of panpsychism, that all matter experiences every possible phenomenal experience at the same time!* The standard line of defense on this issue is to say that mental states ARE physical states. However, this doesn’t help with the paradox one bit IMHO. If the mental states really are epiphenomenal on the physical states, then there is nothing we can do to determine what those mental states are. We can’t discover them by observing behavior and we can’t find out by asking people about them.

Ultimately, the behavior and reports of those mental states in a computer are completely governed by the physical states, so there is no chance of the mental state being reliably reported. For example, consider a computer animation on a screen of a man in pain saying he’d like you to stop pressing the down arrow because each time you do he feels a stabbing pain. If the computer really feels this pain, how can we know? Did the computer say so because of the change in physical states of the switches? Or did the computer experience something and it told you what it was feeling?

We can take the machine apart and we’ll find a series of transistors that change state just like dominos falling over. There is a physical reason for the behavior (and the reporting) that the animated character provides. The animation MUST act and say those things because there is a physical reason for the changes in state of the computer. However, the figure could equally be experiencing anything or nothing at all. There is no way for the animated figure to do anything but act and talk as if it were experiencing pain because that’s what the physical changes of state resulted in. Those physical changes of state can’t report mental states in any way, shape or form, so even behavior does not reliably correspond to mental states if mental causation is ruled out.

Per the paradox above, I think we’re forced to conclude that mental causation is a fact of nature. But the computational paradigm rules this out since it insists that classical scale physical processes govern the actions of the brain, and those processes are both separable and locally causal such that the overall macro-state of the brain has no causal influence on any individual neuron any more than the macro-state of a computer has a causal influence on any individual transistor.

*This was brought out by Mark Bishop, "Dancing with Pixies"

Background

OK, first some of my background: I have no formal education in anything related to mind science. I have an undergraduate degree in physics, so I'm very causal-minded. I am currently designing a master's degree in theoretical neuroscience and have been investigating the literature on my own (I start the relevant classes next semester).

I spent a little time looking at the top-down approach, but for the most part I've been looking at bottom-up approaches lately. I'm familiar with Koch (here's Christof Koch's laboratory home page: http://www.klab.caltech.edu/ ) and Daniel Dennett (a philosopher who has lots of talks available online).

Preconceived Notion

Here are some experiments that seem to suggest that top-down causation doesn't exist:

[Embedded YouTube clips; the second shows a subject choosing to press a left or right button while experimenters, monitoring brain activity, predict the choice about six seconds before the subject is aware of making it.]

Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennett that nobody's really running the wheelhouse (the problem of the Cartesian Theatre, as he calls it: http://en.wikipedia.org/wiki/Cartesian_theater ). If somebody's running the wheelhouse, then we still haven't answered the question of the mind, we've just specified the location (in the wheelhouse!).

I take the materialist view that our system of biological neural networks is handling inputs and transforming them into outputs. In this view, for instance, the interneural computations between sensory (input) neurons and motor (output) neurons might be responsible for higher-level consciousness, as well as the illusion of self-control, will-power, and other abstract ideas.

Paradox

If we define consciousness as a 1 and non-consciousness as a 0, then this paradox is sure to bother people, but Koch for example claims that there are many kinds of consciousness. (Though Koch also refrains from a pin-point definition of consciousness.)

If we assume that the many kinds of consciousness can be normalized and assigned a value between 1 and 0 instead of strictly 1 or 0, then it may be more palatable to say something like "The computer has a Class C consciousness rating of 0.3".

Like I said before though, I believe we will have to wait for people like Koch and other bottom-up theoretical neuroscientists to pin-down the physical system of events associated with consciousness before we can judge whether other systems experience some degree of consciousness.
 
  • #15
Pythagorean said:
I take the materialist view that our system of biological neural networks is handling inputs and transforming them into outputs. In this view, for instance, the interneural computations between sensory (input) neurons and motor (output) neurons might be responsible for higher-level consciousness, as well as the illusion of self-control, will-power, and other abstract ideas.
Suppose you are right and consciousness is the computation of neurons (or maybe I misunderstood what you meant by 'responsible'). Since computation has causal powers (it does something in the physical world), this would grant causal powers to consciousness. If C does cause things, why is the sense of control still an illusion? It may be that this causal power matches the subjective sense of self-control.

Btw, the free will issue can be disconnected from the mental causation issue. The Libet experiments, for example, may show that the decision feeling ("hey, I just made a decision") comes after the decision has been physically made. But even prior to the decision feeling, the subject was already conscious, and those conscious states may have influenced the physical processes anyway. So there may be mental causation, regardless of whether it felt like a decision or not. A simple example is just watching TV: you can have all kinds of experiences and your neurons will do all kinds of stuff, yet there is no feeling of "I just made a decision".

Pythagorean said:
Paradox

If we define consciousness as a 1 and non-consciousness as a 0, then this paradox is sure to bother people, but Koch for example claims that there are many kinds of consciousness. (Though Koch also refrains from a pin-point definition of consciousness.)

If we assume that the many kinds of consciousness can be normalized and assigned a value between 1 and 0 instead of strictly 1 or 0, then it may be more palatable to say something like "The computer has a Class C consciousness rating of 0.3".

Like I said before though, I believe we will have to wait for people like Koch and other bottom-up theoretical neuroscientists to pin-down the physical system of events associated with consciousness before we can judge whether other systems experience some degree of consciousness.
I like the idea of a spectrum, since that is how all of nature seems to work. But... if there is some minimum degree of consciousness (0.000001), then at the very least everything is conscious to some degree.
 
  • #16
I just have some questions. Perhaps I missed it, but I haven't seen a definition of "consciousness". The Turing Test was designed to test for something called "intelligence". If a machine is "intelligent", is it therefore "conscious"? If an entity is "conscious" does it therefore have some level of "intelligence"?
 
  • #17
Pythagorean said:
Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennett that nobody's really running the wheelhouse (the problem of the Cartesian Theatre, as he calls it: http://en.wikipedia.org/wiki/Cartesian_theater ).

Oh how my heart sinks at the mention of these names, at the whole tenor of what you already believe.

I spent 15 years in this area. And all I can say is that you are heading down the hugest blind alley.

If you want a flavour of the neuroscience debate over top-down causality, see for example this...
http://www.dichotomistic.com/mind_readings_molecular_turnover.html

Those YouTube clips are all about habit vs attention. It would be correct to think of habits as bottom-up in a sense. But habits would have originally been learned in the eye of (top-down) attention and then unfold within the context of some prevailing attentive state.

So if I know I am required to flex my finger, then that is the top-down anticipatory preparation. A whole lot of global brain set-up is taking place, and it would look quite different from when I want to be very sure I'm not about to make some unnecessary twitch. The actual flexing of a finger is a routinised habit and so has to arise within the prepared context via activity from the relevant sub-cortical paths: striatum, cerebellum, etc.

But hey, if you are going to be studying neuroscience, you will learn these things anyway.
 
  • #18
apeiron said:
So if I know I am required to flex my finger, then that is the top-down anticipatory preparation. A whole lot of global brain set-up is taking place, and it would look quite different from when I want to be very sure I'm not about to make some unnecessary twitch. The actual flexing of a finger is a routinised habit and so has to arise within the prepared context via activity from the relevant sub-cortical paths: striatum, cerebellum, etc.

But in the second clip I provided, the subject has a button in each hand and decides randomly to press the left or the right button. By the time the person has perceived his choice and pressed his button, the testers, with their technology, have already (six seconds beforehand) predicted which side he was going to press.

Was the choice really conscious? It seems from this experiment, that the conscious decision came six seconds after the brain had already made its choice, leading me to believe the "conscious decision" wasn't really a decision at all, but a sensation resulting from a decision made by the neural network.
 
  • #19
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).
 
  • #20
SW VandeCarr said:
I just have some questions. Perhaps I missed it, but I haven't seen a definition of "consciousness". The Turing Test was designed to test for something called "intelligence". If a machine is "intelligent", is it therefore "conscious"? If an entity is "conscious" does it therefore have some level of "intelligence"?

Conscious awareness: "The conscious aspect of the mind involving our awareness of the world and self in relation to it"

http://wps.pearsoned.co.uk/wps/media/objects/2784/2851009/glossary/glossary.html#C
 
  • #21
pftest said:
Suppose you are right and consciousness is the computation of neurons (or maybe I misunderstood what you meant by 'responsible'). Since computation has causal powers (it does something in the physical world), this would grant causal powers to consciousness. If C does cause things, why is the sense of control still an illusion? It may be that this causal power matches the subjective sense of self-control.

That's not quite what I meant. The computation is a description of the states of the neurons themselves. Consciousness is a phenomenal experience. In my argument, consciousness is a byproduct of the neural computation. That is, consciousness may be necessary for the computation to take place (I'm not sure whether it is or not!), but in my view it would be something like waste heat from a generator. The waste heat doesn't generate the energy, but it's a necessary byproduct of energy generation.

My point is not that I know specifically what consciousness is in this way; it is that consciousness need not be responsible for causation, but may still be a necessary byproduct of neural activity.
 
  • #22
pftest said:
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).

In this world one thing qualifies the other. For instance, an observer needs an observation to be an observer... so causation would appear to be equally distributed throughout all systems.

For instance, the "top" is a result of the "bottom", and it would be equally correct to say that the bottom is a result of the top. An example is wheat: when it dies and decomposes, it causes fertility in the soil that will produce more wheat. So this cycle nullifies the idea that there is one cause to any one event.

Which came first... egg or chicken?
 
  • #23
Pythagorean said:
But in the second clip I provided, the subject has a button in each hand and decides randomly to press the left or the right button. By the time the person has perceived his choice and pressed his button, the testers, with their technology, have already (six seconds beforehand) predicted which side he was going to press.

Was the choice really conscious? It seems from this experiment, that the conscious decision came six seconds after the brain had already made its choice, leading me to believe the "conscious decision" wasn't really a decision at all, but a sensation resulting from a decision made by the neural network.

This is conceptual confusion on your part (and the hammy guy in the clip). You are confusing consciousness with self-regulation. You are making the mistake of trying to localise consciousness to instants in time. Your familiarity with simple machines - like input-output computers - is blinding you to the complex causality of living and mindful systems.

Psychology started with exactly these kinds of "conscious control" questions being asked experimentally by Helmholtz, Wundt and Donders over 100 years ago.

Consciousness is not a real-time process. It is a hierarchical construction. To talk properly about "when" particular things happen, you have to have a correct model of how the brain actually functions.
 
  • #24
pftest said:
Is there a simple example of top-down causality? One not involving minds, but just a simple physical system (the simpler the better).
Good question. I believe you really meant "downward causation". There are plenty of examples of top-down causation (depending on how that is defined). Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state. Examples of top-down causation include the hinge on a door, which only allows the door to swing around a specific axis. Such examples of top-down causation are not particularly interesting. The question I suspect you want to ask regards downward causation. The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers, for example, states:
I do not know whether there are any examples of [downward causation] in the actual world... While it is certainly true that we can't currently deduce all high-level facts and laws from low-level laws plus initial conditions, I do not know of any compelling evidence for high-level facts and laws (outside the case of consciousness) that are not deducible in principle.
See Chalmers, "Strong and Weak Emergence"
 
  • #25
Pythagorean said:
That's not quite what I meant. The computation is a description of the states of the neurons themselves. Consciousness is a phenomenal experience. In my argument, consciousness is a byproduct of the neural computation. That is, consciousness may be necessary for the computation to take place (I'm not sure whether it is or not!), but in my view it would be something like waste heat from a generator. The waste heat doesn't generate the energy, but it's a necessary byproduct of energy generation.

My point is not that I know specifically what consciousness is in this way, it is that consciousness need not be responsible for causation, but may still be a necessary byproduct of neural activity.
So C is not the brain, it's not the computation or anything else physical, but it's something generated by the brain. But at this 'moment of generation', there is interaction between brain and C. It's like kicking a ball: you can't kick a ball without the ball also touching you.

I don't think noncausal byproducts exist. Heat from a generator may seem like a waste of energy from our perspective, but it still has causal powers; it could set a house on fire.
 
  • #26
Q_Goest said:
The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers, for example, states:

The whole universe can be thought of as downward causation. Unless someone wants to clarify for me how elementary particles/waves have the properties to cause the macro realm (the universe).
Is everyone living under the impression that indeterminate, fuzzy states, which for many-body systems are best treated as fields that stretch to infinity, are the cause of what we call the "Universe"?
 
  • #27
Q_Goest said:
Good question. I believe you really meant "downward causation". There are plenty of examples of top-down causation (depending on how that is defined). Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state. Examples of top-down causation include the hinge on a door, which only allows the door to swing around a specific axis. Such examples of top-down causation are not particularly interesting. The question I suspect you want to ask regards downward causation. The answer isn't simple, but basically the answer is that no clear cases of downward causation are known to exist. Chalmers, for example, states:

See Chalmers, "Strong and Weak Emergence"
Yes, I meant downward causation. I didn't know there were no examples of it. That doesn't really strengthen the idea that consciousness does it.
 
  • #28
pftest said:
<...>But... if there is some minimum degree of consciousness (0.000001), then at the very least everything is conscious to some degree.

That doesn't bother me.

apeiron said:
<...>You are making the mistake of trying to localise consciousness to instants in time. Your familiarity with simple machines - like input-output computers - is blinding you to the complex causality of living and mindful systems.<...>

Even if you globalize consciousness, we can still frame it as inputs and outputs. We've said nothing about how old the inputs are, whether they are being operated on by nonlinear functions, or how they relate to time or space.

Neither am I disputing the property of emergence itself.

"living" and "mindful" systems is kind of vague, define it for me.

apeiron said:
<...>Consciousness is not a real-time process. It is a hierarchical construction. To talk properly about "when" particular things happen, you have to have a correct model of how the brain actually functions.

I don't disagree with this. We'd be posting more in the medical sciences subforum if it were the case that we knew how the brain functioned. But there are people working on it from both sides (top-down and bottom-up), and I hold out hope that the bottom-up people will be able to verify and falsify top-down conclusions. I am interested in both sides, but my background seems more suitable for bottom-up (modeling real neurons).

But why must the mind necessarily be different from a community of cells? We can observe emergent properties in cell communities (some more interesting and complex than others, granted).

What about the weather? Is not temperature an emergent property?
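
Temperature is a convenient worked example of a weakly emergent macro-variable: by equipartition, (3/2) k_B T equals the mean kinetic energy per particle, so T is a statistic over the micro-states and no single particle "has" it. A sketch (particle count and speeds are illustrative):

```python
import random

K_B = 1.380649e-23  # Boltzmann constant, J/K

def kinetic_temperature(mass, velocities):
    """(3/2) k_B T = mean kinetic energy per particle, so T is defined
    only as an average over the whole ensemble."""
    mean_ke = sum(0.5 * mass * (vx**2 + vy**2 + vz**2)
                  for vx, vy, vz in velocities) / len(velocities)
    return (2.0 / 3.0) * mean_ke / K_B

# Toy gas: 1000 nitrogen-mass particles with Gaussian velocity components.
m_n2 = 4.65e-26  # kg
vels = [(random.gauss(0, 300), random.gauss(0, 300), random.gauss(0, 300))
        for _ in range(1000)]
print(round(kinetic_temperature(m_n2, vels)))  # ~300 K, a fact about the ensemble
```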

It's difficult since we personally experience events in our brains. We don't seem to experience all the emergent properties of our brain's functions. We can say a thousand things about what the neurons are doing, but tying them to experiences is trickier.

Something interesting to leave you with:
The Dictyosteliida, cellular slime molds, are distantly related to the plasmodial slime molds and have a very different lifestyle. Their amoebae do not form huge coenocytes, and remain individual. They live in similar habitats and feed on microorganisms. When food runs out and they are ready to form sporangia, they do something radically different. They release signal molecules into their environment, by which they find each other and create swarms. These amoebae then join up into a tiny multicellular slug-like coordinated creature, which crawls to an open, lit place and grows into a fruiting body. Some of the amoebae become spores to begin the next generation, but some of the amoebae sacrifice themselves to become a dead stalk, lifting the spores up into the air.
 
  • #30
Q_Goest said:
Let's define downward causation as Bedau (and many others) do, and top-down causation as what happens when there are boundary conditions on a macro-state that put limitations on a micro-state.

Bedau is attempting to make a distinction between weak and strong downward causation. And I agree with his basic approach - though calling it "weak" is a bit unnecessary, because bottom-up construction would also be "weak" for the same reasons in my book.

Strong upward and downward causality would be a dualistic situation. It would be claiming reality has been broken apart. Which is not really what a systems theorist wants to think.

So instead the argument is that causality is separated in two directions - a dichotomy. And they always remain in mutual interaction - a system.

Thus both upwards and downwards causation exist as "weak" versions. That is, they don't actually exist, just very nearly exist.
 
  • #31
Pythagorean said:
We'd be posting more in the medical sciences subforum if it were the case that we knew how the brain functioned.

But I do know how the brain functions. Therein seems to lie the difference.
 
  • #32
apeiron said:
But I do know how the brain functions. Therein seems to lie the difference.

This is kind of a meaningless post, isn't it? Why don't you display your knowledge in a more responsive post?
 
  • #33
Pythagorean said:
This is kind of a meaningless post, isn't it? Why don't you display your knowledge in a more responsive post?

Because you don't listen.

I posted a link to some actual neuroscience I wrote in my old Lancet Neurology column (now why would they ask me to be their commentator?) - a piece which the Association for Consciousness Studies also re-ran as a keynote.

So why don't you respond to some knowledge?
 
  • #34
apeiron said:
Because you don't listen.

I posted a link to some actual neuroscience I wrote in my old Lancet Neurology column (now why would they ask me to be their commentator?) - a piece which the Association for Consciousness Studies also re-ran as a keynote.

So why don't you respond to some knowledge?

Yes, you posted a link on molecular turnover. I don't see the conflict with anything I'm saying. I don't disagree with the facts you've presented. Your interpretation of what molecular turnover means differs from mine. A wave front is another example of a 'thing' that persists when its molecules do not. Again, how is this different from the weather?

I could even agree with your conclusion:
This kind of topsy-turvy picture can only be resolved by taking a more holistic view of the brain as the organ of consciousness. The whole shapes the parts as much as the parts shape the whole. No component of the system is itself stable, but the entire production locks together to have stable existence. This is how you can manage to persist even though much of you is being recycled by the day if not the hour.

without conflicting with my point (depending on how you define "whole").

If you are calling consciousness or 'the mind' the "whole", then you'd have to provide a valid argument for why you think these things represent the whole (which you have not done in this article).
 
  • #35
Pythagorean said:
If you are calling consciousness or 'the mind' the "whole", then you'd have to provide a valid argument for why you think these things represent the whole (which you have not done in this article).

Good luck with your future career then.
 
  • #36
I like the whole slime mold analogy. But it's just a metaphor for a much more complex and distant cousin, the brain. Today our brain cells even have the gall to turn off their p53 gene and refuse to die for any cause, even if it is to build a stalk to facilitate widespread sporulation. The mutation that turns off the p53 gene also starts a whole other group of "immortal cells" known as cancer.

If mutations are caused by micro-states/environments like EM radiation or chemical abrasion, would that be an example of upward causation?
 
  • #37
Pythagorean said:
That doesn't bother me.
Alright. I got the impression you were a materialist, so the idea of everything being conscious to some degree (not just brains) seemed to conflict with that.
 
  • #38
I am a materialist in the sense that I think everything we can ever experience can be explained in terms of physical events.

I don't find that restrictive on reality at all though. Physical interactions are rich, complex, and majestic.
 
  • #39
Hi Pythagorean,
Pythagorean said:
Here are some experiments that seem to suggest that top-down causation doesn't exist:
[embedded YouTube videos]
Personally, I don't think there's such a thing as top-down causation. I tend to agree with Dennett that nobody's really running the wheelhouse ...

The second YouTube video is pertinent to this discussion. Note that around 4:15 the guy states, "[consciousness and brain activity are] different aspects of the same physical process. ... so your consciousness IS your brain activity." Let's call this the standard computational explanation (SCE); it follows nicely from the exclusion argument per Yablo. The mental experience is what Yablo is calling x*. The physical activity is x, which causes y. So the exclusion argument concludes that y is not caused by x*: x* can't influence y; x causes y, not x*; x* doesn't cause anything and can't be the cause of anything physical. Only physical things can create physical causes.

We then get the SCE: "The two are the same, so there's no problem!" The thing that is causing you to flinch in pain and to tell people you are in pain, per the SCE, isn't the mental state x*; it's the physical state x that causes the flinching and talking, y. And the reason pain (x*) corresponds to the physical event where x causes y is because the physical state x has come about to try to prevent pain. The reaction to pain is due to the physical change in state from x to y, not because of any mental state x* which causes y.

So far, this seems reasonable. x causes y, and x* = x, so x* doesn't need to produce physical state y because x already did that. In fact, x* just drifts off by itself, analogous to a shadow that never enters the causal chain. The paradox arises because we have excluded x* from doing any causal work, but we still want to claim that there is a reliable correlation between our mental states and our reporting of them. The SCE still wants to claim that when physical state x reports x*, the cause of y is x and not x*.

To understand this properly, we have to make a clear distinction between x and x*, the physical state and the mental state.
1) The physical state is what happens, it is the behavior and what is spoken. The physical state includes anything that is objectively measurable.
2) The mental state is how something feels. It includes the qualia that we experience such as the color red or the feeling of pain or the smell of a rose. The mental state includes anything that is only subjectively measurable.

The paradox, therefore, is that the mental states, x*, can't be reliably reported by x. Here, the term "reliably reported" means that there is a reliable, one-to-one correlation between the physical state and the mental state, which is exactly what the SCE wants to claim. The paradox arises because the SCE wants to claim that x not only correlates with x*, but that once x causes y, it has also provided a reliable report of x*. But if x causes y, and y provides a reliable report of x*, then x* has entered the causal chain of events and has influenced something physical.

The SCE attempts to get around this paradox by suggesting that "[consciousness and brain activity are] different aspects of the same physical process. ... so your consciousness IS your brain activity." If the two are the same, then we should be able to objectively measure mental states; but remember that things defined as mental states are phenomena that are only subjectively measurable and are not about the physical movement of matter. These phenomena may be supervenient on the comings and goings of physical matter, but they are not the physical movements themselves. Mental states are additional phenomena that are not explained by explaining the measurable interactions between neurons, chemicals, molecules, atoms, or subatomic particles. The description of those physical movements will never tell us ANYTHING about what it is like to experience the color red, pain, or the smell of a rose. So the SCE's attempt to get around the paradox fails, and mental states cannot be reliably reported unless the mental states can enter the causal chain.

That's a real problem for anyone who wants to challenge mental causation. This issue hasn't been taken up in the literature to the extent I've done so here, though similar ideas have been published. If mental states are reliably reported by physical states, then we have to accept that somehow mental states enter the causal chain, and thus we may have a case for downward causation. Whether or not it really is downward causation requires another discussion that is out of the scope of this thread.
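To see the structure the SCE is committed to, here is a minimal toy sketch (my own illustration; the state names and functions are invented for the example, not taken from Yablo). The output depends only on the physical state, while the "mental" state is computed on the side and never consulted:

```python
# Toy sketch of the exclusion setup (illustrative only; all names invented).
# The physical state x deterministically produces the behavior y, while the
# "mental" state x* supervenes on x but never feeds back into the dynamics.

def physical_dynamics(x):
    """x -> y: the only causal route in this model (physical determinism)."""
    return "say 'red'" if x == "red_neurons_firing" else "stay silent"

def mental_state(x):
    """x -> x*: fixed by x, but causally inert here (epiphenomenalism)."""
    return "experience of red" if x == "red_neurons_firing" else None

x = "red_neurons_firing"
y = physical_dynamics(x)   # y is settled by x alone...
x_star = mental_state(x)   # ...while x* is produced "on the side"

# The paradox in miniature: y looks like a report of x_star, yet nothing in
# physical_dynamics ever consults x_star. The report tracks x, not x*.
print(f"x* = {x_star!r}, y = {y!r}")
```

If the report is nevertheless reliable, that reliability is doing exactly the causal work this model forbids x* from doing.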
 
  • #40
Q_Goest said:
The paradox, therefore, is that the mental states, x*, can't be reliably reported by x. Here, the term "reliably reported" means that there is a reliable, one-to-one correlation between the physical state and the mental state, which is exactly what the SCE wants to claim. The paradox arises because the SCE wants to claim that x not only correlates with x*, but that once x causes y, it has also provided a reliable report of x*. But if x causes y, and y provides a reliable report of x*, then x* has entered the causal chain of events and has influenced something physical.

I would say it is better to think of x as the basic process of awareness - the brain~mind activity that animals have too. So really the story is y~x.

Then the extra x* issue is self-awareness. The ability to introspect "objectively" on conscious states.

Introspection is of course a learned socialised habit, not an innate "hardware" feature.

http://en.wikipedia.org/wiki/Lev_Vygotsky
http://en.wikipedia.org/wiki/George_Herbert_Mead

And also x* would not be epiphenomenal. Except in a certain sense.

The socialisation of the human brain through the self-regulatory mechanism of language is in fact a good example of downward causation - constraint exerted from a cultural level to the individual level.

Society teaches you to mind your manners, pursue certain goals, think in particular ways. The causality is from the global scale to the local, so that you in your own head are negotiating your needs vs the social needs.

See http://www.dichotomistic.com/mind_readings_JCS%20freewill%20article.html

That is of course why du Sautoy was reacting with such feigned horror to the notion that he had no free will and his brain was deciding up to 10 seconds ahead of time. Society demands we be in control of our bodies. That is society's need - even if it is a fiction and leads to naive statements about the nature of consciousness.

(The length of the readiness potential in the precuneus is of course due to the task demand. The subject is being asked to "be random", so has to "load up" a preconscious intention, then sit on it long enough for it to appear to come after a decent "out of the blue" interval. The person is attentively conscious that "nothing has happened, nothing has happened" for long enough that the urge can be allowed to bubble up towards attentive execution. Note that the task demand could have been "feel the urge and then stop it". What would your interpretation of the "instant of consciousness" have been then?)

Anyway, the point is that x* examples like seeing the red of redness are pretty epiphenomenal because they are a fairly pointless and unnatural activity. The task demand now is just to attend to some aspect of things, for no other reason except to note that it is something you are not in total control of. Some aspects of your conscious states are just wired in during development and are indeed part of your species' genetic legacy. So they are very much bottom-up as far as you the individual are concerned, in your very moment-to-moment conscious way.

Of course, viewed over sufficient time (developmental and genetic) you will be able to see the top-down aspects of the causality involved in seeing redness. There were the evolutionary pressures (primates added a third retinal pigment, probably as an aid to picking out ripe fruit). And there were the more immediate developmental constraints. Does a newborn baby "see red"? Given the state of their cortexes at birth, plainly not. A world of red things is what is necessary to then constrain their neural development.

So the problem with physicists and philosophers is that they take an overly reductionist and mechanical approach to explaining anything. The only timescales they can see are the right here, right now ones of the smallest moments. But systems exist in thick time. They are multiscale in time. And if we want to talk about things or processes like consciousness, we have to respect that essential aspect of systems.

The study of timing issues - as with this "random decision of left or right" - is indeed rewarding and instructive. But I have to wonder why people are using YouTube clips as their sampling of what is a huge literature.
 
  • #41
Q_Goest said:
The second YouTube video is pertinent to this discussion. Note that around 4:15 the guy states, "[consciousness and brain activity are] different aspects of the same physical process. ... so your consciousness IS your brain activity." Let's call this the standard computational explanation (SCE); it follows nicely from the exclusion argument per Yablo. The mental experience is what Yablo is calling x*. The physical activity is x, which causes y. So the exclusion argument concludes that y is not caused by x*: x* can't influence y; x causes y, not x*; x* doesn't cause anything and can't be the cause of anything physical. Only physical things can create physical causes.

We then get the SCE: "The two are the same, so there's no problem!"
Maybe you already mentioned this in your own post, but I wasn't sure:

Two things are said here:
1. It is said that "mind = brain".
2. It is said that x is not x* (and that only x can cause y).

These two statements contradict each other.
If mind truly is brain, then x*=x and x* can cause y.
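
Spelled out as a minimal formal sketch (my own rendering, using the labels from above):

$$\text{(i) SCE identity: } x^* = x \qquad \text{(ii) exclusion: } x^* \neq x,\ \text{and } x \text{ alone causes } y$$

$$\text{If (i) holds, then substituting } x^* \text{ for } x \text{ gives } x^* \text{ causes } y,\ \text{contradicting (ii).}$$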

The socialisation of the human brain through the self-regulatory mechanism of language is in fact a good example of downward causation - constraint exerted from a cultural level to the individual level.
But this example involves mind, so we do not know if this is downward causation or not. If mind is as fundamental as some physical interaction, then any causation mind does is still the usual upward causation.
 
  • #42
pftest said:
But this example involves mind, so we do not know if this is downward causation or not. If mind is as fundamental as some physical interaction, then any causation mind does is still the usual upward causation.

That is a problem for people trying to argue the mind is dualistically fundamental, not me.

It would be another incoherence resulting from taking that stance.
 
  • #43
I must admit, I only remember the experiments themselves, not the introduction or commentary of the videos. I don't agree with the statement that consciousness IS brain activity. I'll go more into that later.

Q_Goest said:
So far, this seems reasonable. x causes y, and x* = x, so x* doesn't need to produce physical state y because x already did that. In fact, x* just drifts off by itself, analogous to a shadow that never enters the causal chain. The paradox arises because we have excluded x* from doing any causal work, but we still want to claim that there is a reliable correlation between our mental states and our reporting of them. The SCE still wants to claim that when physical state x reports x*, the cause of y is x and not x*.

I wouldn't say x* = x. I would say instead that x* is a different frame of reference on x (which means there would be a transform operation involved: x* = T(x)). This fits the analogy of the shadow in that way. One may argue that a shadow is somehow causal, but we generally put the blame on the owner of the shadow as being the cause, with the shadow itself being the effect (the owner is blocking the sun's photons from hitting the concrete sidewalk; what we call a shadow isn't a substance, it's a lack of 'substance': namely, photons).

To understand this properly, we have to make a clear distinction between x and x*, the physical state and the mental state.
1) The physical state is what happens, it is the behavior and what is spoken. The physical state includes anything that is objectively measurable.
2) The mental state is how something feels. It includes the qualia that we experience such as the color red or the feeling of pain or the smell of a rose. The mental state includes anything that is only subjectively measurable.

In 1), did you mean to include "what is spoken" as x? You had elsewhere defined it as y, which I would have agreed with more.

Subjective experience may be by far the most difficult thing to figure out how to measure, but is it truly impossible? I can't find the paper right now (I will look harder after this post, or maybe somebody else knows the study I'm referring to), but it showed that the way dogs store smells is very similar to the way we store musical notes. Just as we can detect octaves, a dog can detect an extra enzyme on an aroma.

Between humans, we can share the experience of red, and neurologists can measure brain activity in several test subjects imagining or observing red.

The more qualia we begin to map out in terms of neurological activity, the more chances we have of discovering emergent properties and of explaining (which we already can, in terms of core physics) why some members of our species don't experience red like we do.

If you were a neuroscientist and a musician, wouldn't it be enriching for you to play different kinds of music to many different types of subjects while using something like an fMRI? Or even to do experiments on the quale red (given that you're not colorblind)?

If we get a firm physical grasp of how we can experience the color red, and then we can physically alter someone who is colorblind to be able to experience red (using what we've discovered), have we not made the case?

What if we observed physically similar phenomena? Could we make a particular kind of weather pattern experience the color red? No, probably not, but that's not surprising, because it's not the same physical phenomenon; it's just similar (hypothetically, of course). It can never be the same physical phenomenon without actually having the components of the brain.

The paradox, therefore, is that the mental states, x*, can't be reliably reported by x. Here, the term "reliably reported" means that there is a reliable, one-to-one correlation between the physical state and the mental state, which is exactly what the SCE wants to claim. The paradox arises because the SCE wants to claim that x not only correlates with x*, but that once x causes y, it has also provided a reliable report of x*. But if x causes y, and y provides a reliable report of x*, then x* has entered the causal chain of events and has influenced something physical.

I find no reason to believe the transform x* = T(x) has a one-to-one correlation with x. The transform could map n-dimensional space to m-dimensional space for all we know. You'd also have to define "reliable". We can report emotions to each other in a way that's vaguely consistent using language. In the same way, most of us agree on what the color red is (and the failure of a colorblind person to do so can be explained physically). There's always some confidence less than 100% in our report, but that goes with any observation. Of course, when reporting emotions, our confidence is considerably lower than when reporting something like length.
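
As a minimal sketch of that point (my own illustration, assuming T is linear for simplicity): a transform from 3-D to 2-D is many-to-one, so distinct physical states can share a single "mental" description.

```python
# Sketch of x* = T(x) as a many-to-one map (illustrative; T is invented).
import numpy as np

T = np.array([[1.0, 1.0, 0.0],   # project a 3-D "physical" state down
              [0.0, 0.0, 1.0]])  # to a 2-D "mental" description space

x1 = np.array([0.3, 0.7, 0.5])   # two distinct physical states...
x2 = np.array([0.5, 0.5, 0.5])

print(T @ x1)  # [1.  0.5]
print(T @ x2)  # [1.  0.5] -- same x*, different x: no 1-to-1 correlation
```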


The SCE attempts to get around this paradox by suggesting that "[consciousness and brain activity are] different aspects of the same physical process. ... so your consciousness IS your brain activity." If the two are the same, then we should be able to objectively measure mental states; but remember that things defined as mental states are phenomena that are only subjectively measurable and are not about the physical movement of matter.

I personally don't agree that consciousness is brain activity. I only demand that consciousness results from brain activity. If you can stop all brain activity, you stop consciousness. I don't mean to say that consciousness exists in all brain activity; just that if you shut the whole thing down, you'll be sure to nail it.

These phenomena may be supervenient on the comings and goings of physical matter, but they are not the physical movements themselves. Mental states are additional phenomena that are not explained by explaining the measurable interactions between neurons, chemicals, molecules, atoms, or subatomic particles. The description of those physical movements will never tell us ANYTHING about what it is like to experience the color red, pain, or the smell of a rose. So the SCE's attempt to get around the paradox fails, and mental states cannot be reliably reported unless the mental states can enter the causal chain.

In physics, we have lots of things that aren't the physical movements themselves. They are a summation or a statistical abstract of the system. We choose such parameters, not because they're inherent to the system (though they may be), but because they're relevant to the way in which we view the system and our process of understanding it in a categorical way (because stereotyping makes learning faster, if flawed).
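
Temperature is the stock example of such a statistical abstract; here is a minimal sketch (my own illustration, with invented numbers) of a parameter that belongs to the ensemble rather than to any single particle.

```python
# Toy "emergent" parameter: temperature from mean kinetic energy
# (illustrative only; particle count and speeds are invented).
import random

k_B = 1.380649e-23                # Boltzmann constant, J/K
m = 6.6e-27                       # roughly a helium atom, kg
speeds = [random.gauss(0, 1300) for _ in range(10_000)]  # 1-D velocities, m/s

mean_ke = sum(0.5 * m * v**2 for v in speeds) / len(speeds)
T = 2 * mean_ke / k_B             # 1-D equipartition: <KE> = (1/2) k_B T

print(f"Ensemble temperature: {T:.0f} K")  # no single particle "has" this
```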


That's a real problem for anyone who wants to challenge mental causation. This issue hasn't been taken up in the literature to the extent I've done so here, though similar ideas have been published. If mental states are reliably reported by physical states, then we have to accept that somehow mental states enter the causal chain, and thus we may have a case for downward causation. Whether or not it really is downward causation requires another discussion that is out of the scope of this thread.

What if qualia are classification schemes that our brain uses to integrate and store sensory data? The definition of mind is vague, of course. If you would include all of the brain's activities and functions as mind, then I'd think you'd be taking it too far. I was always under the impression that "mind" was only the part that you're aware of.

For instance, we don't notice that the floor is pushing up on our feet as we sit here reading posts. That stimulus isn't being directed to the higher functions of the brain that we associate with mind. It's being handled by lower functions until the point where you begin to ponder, "hey... the floor is pushing up on my feet".

In the same way, short of us pondering it, the color red isn't brought to our mind's attention when we observe it. One of our memory functions classifies light (with a particular range of frequencies), files it away, and compares it to similar observations in the future. We can view the resulting discussion, later some day, on Physics Forums, as a result of many different brain functions all fulfilling their "duties" in exactly the way the neurons allow them to.

That is, there may be no single decision-making process in the brain that we can wrap together in a tidy bow and call "mind". And there's no reason for me to believe our experience as an individual encompasses a significant fraction of all the things our brain is doing at once.
 
  • #44
Trying to get a better grasp of the idea of downward causation, I ran across this:
http://www.ctnsstars.org/conferences/papers/The%20physics%20of%20downward%20causation.pdf
 
  • #45
apeiron said:
That is a problem for people trying to argue the mind is dualistically fundamental, not me.

It would be another incoherence resulting from taking that stance.
That's exactly why mind-examples of downward causation are disqualified as examples of downward causation. They depend on a metaphysical assumption, so such examples are simply begging the question: "mind uses downward causation, because my metaphysics assumes it uses downward causation".

Now of course it may be true that mind is a higher-level force using downward causation, but to support this it would be better to use a purely physical example to show that such a thing is possible. Otherwise downward causation becomes yet another unknown power attributed to the mind but not found anywhere else in the natural world.

By the way, what do you mean by "dualistically fundamental"? Panpsychism, neutral monism, idealism, and other metaphysics with mind as a lower-level causal power are not forms of dualism.
 
  • #46
pftest said:
That's exactly why mind-examples of downward causation are disqualified as examples of downward causation.

You have the wrong end of the stick so far as my own approach goes.

I've said often enough that I don't accept "consciousness" as any particular level - global or local. I stick close to the neuroscience facts as they have been uncovered over the past 150 years.

Therefore I find it meaningful to talk about local and global scales of causality in interaction. For example, the contrast between local impressions and global ideas, or local automatisms and global attentive states. Stuff which we can actually pin down to mechanisms and pathways and neural network models.

The totality of local~global interaction is the system.

pftest said:
"mind uses downward causation, because my metaphysics assumes it uses downward causation".

The history of it is the other way round. I started with the cognitive neuroscience and went looking for the meta-level of theory that would be best suited to modelling the mind. It was obvious that the prevailing computationalism didn't have a hope of cutting it.

I found people who knew what they were talking about in theoretical biology. They had been through the same issues in the 1960s, so have had more time to think all this through.

I see it as more of an issue of mathematics than metaphysics now. Hierarchy theory, category theory, dissipative structures, scalefree nets, generative neural nets - these are maths models.

It becomes a philosophy of science issue, of course, when you have to ask why so many people are still stuck in a mechanical, reductionist, atomistic mindset.

After all, it is not as if this approach to mind science has had any success :zzz:
 
  • #47
apeiron said:
You have the wrong end of the stick so far as my own approach goes.

I've said often enough that I don't accept "consciousness" as any particular level - global or local. I stick close to the neuroscience facts as they have been uncovered over the past 150 years.

Therefore I find it meaningful to talk about local and global scales of causality in interaction. For example, the contrast between local impressions and global ideas, or local automatisms and global attentive states. Stuff which we can actually pin down to mechanisms and pathways and neural network models.

The totality of local~global interaction is the system.
Those examples (impressions, ideas, attentive states) involve mind, just like the language and culture examples. If you say that these can all be pinned down to mechanisms and pathways, then try to pick a different example than a human brain (or any organism).

If it is all just mechanisms, then there must be purely physical systems which are capable of downward causation. The mathematics should work not just on brains and organisms.
 
  • #48
Some aspects from the integrated information theory of consciousness:
From 'The Neurology of Consciousness':
There are two main lessons to be learned from the study of consciousness in sleep. The first is that, during certain phases of sleep, the level of consciousness can decrease and at times nearly vanish, despite the fact that neural activity in the thalamocortical system is relatively stable. The second is that, during other phases of sleep, vivid conscious experience is possible despite the sensory and motor disconnection from the environment and the loss of self-reflective thought.

Why, then, does consciousness fade during certain phases of sleep and return during others?
An intriguing possibility is that the level of consciousness during sleep may be related to the degree of bistability of thalamocortical networks.

Why would the level of consciousness reflect the degree of bistability of thalamocortical networks?
A possible answer is offered by the integrated information theory of consciousness, which states that the level or quantity of consciousness is given by a system's capacity to generate integrated information. According to the theory, the brain substrate of consciousness is a complex of neural elements within the thalamocortical system that has a large repertoire of available states (information), yet cannot be decomposed into a collection of causally independent subsystems (integration). In this view, integrated information would be high during wakefulness because thalamocortical networks have a large repertoire of global firing patterns that are continuously available on a background of tonic depolarization. During early NREM sleep, by contrast, the ensuing bistability would reduce this global repertoire...


You can read more here: http://spectrum.ieee.org/computing/hardware/a-bit-of-theory-consciousness-as-integrated-information (Koch, Tononi).
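
As a very rough illustration of the "integration" idea (my own toy, emphatically not Tononi's actual phi calculation), one can measure how far the joint behavior of two halves of a system exceeds what independent parts would show, using mutual information:

```python
# Toy "integration" measure (NOT Tononi's phi): mutual information between
# two halves of a system, estimated from sampled joint states.
from collections import Counter
from math import log2

def mutual_information(samples):
    """I(A;B) in bits, estimated from a list of (a, b) state pairs."""
    n = len(samples)
    p_ab = Counter(samples)
    p_a = Counter(a for a, _ in samples)
    p_b = Counter(b for _, b in samples)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# "Integrated" regime: the halves always agree (one global firing pattern).
coupled = [(s, s) for s in (0, 1)] * 50
# Decomposed regime: the halves vary independently of one another.
independent = [(a, b) for a in (0, 1) for b in (0, 1)] * 25

print(mutual_information(coupled))      # 1.0 bit  -> cannot decompose
print(mutual_information(independent))  # 0.0 bits -> independent subsystems
```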
 
  • #49
Ferris_bg said:
In this view, integrated information would be high during wakefulness because thalamocortical networks have a large repertoire of global firing patterns that are continuously available on a background of tonic depolarization.

Globally integrated states = local~global integration = bottom-up~top-down integration.

This is basic neuroscience. Why does the brain have so many more top-down connections than bottom-up? Why did the human brain scale in power-law fashion so that "top-down" regions like the prefrontal cortex expand much more than "bottom-up" ones like the thalamus? You cannot study brain architecture without this staring you in the face.
 
  • #50
To the extent that this thread involves the "Mind-Body Problem", I don't think there is a problem. The mind is conceptually different from the brain, but is nevertheless physical in the sense that patterns, information, and entropy are aspects of the physical sciences. Patterns can, in principle, be transferred between suitably compatible objects which can support the necessary dynamics. If we could reverse engineer the brain (with its input and output organs), build several copies, and locate them in different places, we ought to be able to transfer the information content of one's brain to any one of these locations electromagnetically.

A dead brain obviously does not support a mind even if perfect anatomy is preserved. The mind is the information/entropy in the patterns of the electromagnetic field of a living brain which is supported by the architecture of the brain.
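
A minimal sketch of the substrate-independence being claimed here (my own illustration): Shannon entropy depends only on the statistics of a pattern, not on what physically carries the symbols.

```python
# Pattern information is substrate-independent (illustrative sketch).
from collections import Counter
from math import log2

def shannon_entropy(pattern):
    """H in bits per symbol, for any sequence of hashable symbols."""
    n = len(pattern)
    return -sum((c / n) * log2(c / n) for c in Counter(pattern).values())

as_voltages = ["high", "low", "high", "high", "low", "low", "high", "low"]
as_spikes   = [1, 0, 1, 1, 0, 0, 1, 0]    # same pattern, different carrier

print(shannon_entropy(as_voltages))  # 1.0 bit/symbol
print(shannon_entropy(as_spikes))    # 1.0 bit/symbol -- identical content
```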

It is the mind, acting through the architecture of the body, that creates facts in the world - facts that cannot be predicted or explained by science as we know it.

http://www.newdualism.org/papers/H.Morowitz/Morowitz-1987.htm
 