
The Mind is a Terrible Thing to WASTE

  1. Feb 24, 2008 #1

    Q_Goest

    Science Advisor
    Homework Helper
    Gold Member

    Are mental phenomena such as experience, thought and qualia just epiphenomena?

    The "exclusion argument" per Yablo:
    Ref: Yablo (1992) "Mental Causation"

    To put it even more simply, the cause of a switch in a computer changing position is the local effect of voltage on that particular switch. The switch is not affected by the configuration of the entire computer, only by the local causal actions.
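A minimal sketch of this point in Python (the gate and the names are my own illustration, not from any referenced paper): a switch's next state is a pure function of its local inputs, so the machine's global configuration never enters into it.

```python
# Hypothetical sketch: a NAND gate whose output depends only on its own
# input lines ("local voltages"), never on the global configuration of
# the machine it happens to sit in.

def nand_switch(in_a: bool, in_b: bool) -> bool:
    """Local rule: output is determined entirely by the two inputs."""
    return not (in_a and in_b)

# Two machines with wildly different "global configurations"...
machine_small = [nand_switch, nand_switch]
machine_huge = [nand_switch] * 1_000_000

# ...yet any individual switch behaves identically in both, because only
# its local inputs matter:
print(nand_switch(True, True))   # False, regardless of which machine it's in
print(nand_switch(True, False))  # True
```

The design point is simply that nothing in `nand_switch` can "see" the rest of the machine; that is the locality the exclusion argument leans on.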

    It seems the exclusion argument is perfectly valid when applied to computationalism. At the other extreme, it isn't valid for religious beliefs since religious beliefs accept a dualistic basis.

    There are, however, numerous attempts to avoid this conclusion. It seems engineers, philosophers and scientists are not ready to accept that all mental phenomena are epiphenomena. But if we try to avoid the exclusion argument* then don't we also have to give up computationalism? Or must we toss the baby (the mind) out with the bath water regardless?

    *For example: Alwyn Scott claims that "nonlinear phenomena are those for which the whole is greater than the sum of its parts" and attempts to claim that the mind, like every other nonlinear phenomenon, is irreducible and that medium and weak downward causation are applicable.
    See Scott, "Reductionism Revisited" also Emmeche et al, "Levels, Emergence and Three Versions of Downward Causation". PM for papers.
     
  3. Feb 25, 2008 #2
    Q_Goest, very interesting topic.

    To account for mind-body dualism, I would initially propose (no googling here) that mental events are the informative aspect of physical events. Qualia indeed seem like information overflow from a physical process.

    A really basic analogy consists of two rooms where one room is called the P-room and the other the M-room. All physical events occur in the P-room. For some events in P, M is informed and with other events in P, M is not informed.

    Epiphenomenalism claims there is no information transfer from the M-room, to the P-room , such that a new event in the P-room may occur.
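The one-way flow in this analogy can be rendered as a toy program (my own construction, hedged: the class names and the "even states inform M" rule are invented for illustration). The P-room evolves by its own rules; the M-room only receives reports about some P events and exposes no channel back into P.

```python
class MRoom:
    """Purely informed: it can record reports, but has no way to alter P."""
    def __init__(self):
        self.received = []

    def inform(self, report: int) -> None:
        self.received.append(report)


class PRoom:
    def __init__(self, m_room: MRoom):
        self.state = 0
        self.m_room = m_room

    def step(self, stimulus: int) -> None:
        self.state += stimulus            # physical dynamics
        if self.state % 2 == 0:           # only *some* P events inform M
            self.m_room.inform(self.state)


m = MRoom()
p = PRoom(m)
for s in [1, 1, 3, 2]:
    p.step(s)

print(p.state)       # 7  -- P's history is untouched by anything in M
print(m.received)    # [2] -- M was informed of only one event
```

Epiphenomenalism, on this rendering, is just the fact that `MRoom` has no method whose call could ever appear inside `PRoom.step`.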

    There are two cases to consider. The first is whether there are actually any causal events in the M-room. This question turns on whether the M-room is solely an informative state, intrinsically relying on the P-room for its computations. The second is whether causal events are permissible in the M-room.

    Let us suppose causal events are permissible in the M-room. Then either those causal events are meaningless, because only informed hermits live in the M-room, or there is some mechanism which allows the M-room to influence the activities of the P-room.

    It is quite possible mental events are only there to inform, making the M-room look like, "The Mind is a Terrible Thing to WASTE", otherwise the P-room and the M-room are entangled!

    p.s. Q_Goest, my brain states are supporting qualia, that wants to identify you as familiar!!
     
  4. Feb 26, 2008 #3

    Q_Goest


    Given computationalism, there doesn’t seem to be any way M can affect P. Would you agree? In which case, M consists of informed hermits, unable to influence the physical world.

    I think that’s a fundamental conclusion we have to accept, and a conclusion many computationalists would like to avoid. But I see no way to avoid it. To the best of my knowledge, no one has argued that computationalism allows for downward causation (ie: for mental phenomena to have influence on the physical).

    On the other hand, classical mechanics allows for nonlinear phenomena which Scott describes as “those for which the whole is greater than the sum of its parts”.

    Let’s assume computationalism is false, and suggest only that there are nonlinear or other classical phenomena, unlike computations, such that those phenomena might give us a ‘loophole’ to allow for mental causation.

    Does such a loophole exist? Or are we still stuck accepting the exclusion argument?
     
  5. Feb 26, 2008 #4
    This seems to presuppose that there is a significant difference between mental and physical events/processes.
     
  6. Feb 26, 2008 #5

    Q_Goest


    Yes and no. What is the purpose of qualia if the phenomenon has no influence on the behavior of the physical system? What is the purpose of having thoughts, desires, wishes or anything related to mental phenomena if none of that has any influence whatsoever on what we do?
     
  7. Feb 26, 2008 #6
    If consciousness and related phenomena are identical to certain physical brain states, then I see no issue with mental causation. Am I missing something?
     
  8. Feb 26, 2008 #7
    Demonstrate it. :approve:

    I can see the issues; its asymmetry, for one thing.
     
  9. Feb 26, 2008 #8
    I agree that seems the case for human babies.

    If all P entails M, then M cannot imply P.
    If only some P entails M, then for some M, M can possibly imply some P.
     
    Last edited: Feb 26, 2008
  10. Feb 26, 2008 #9

    Q_Goest


    I think it's important to first acknowledge what paradigm you wish to base a claim on. For example, let's base your claim on computationalism. In this case, you are essentially saying the physical computational state of the entire computer (equates to physical brain state) has a mental state which is causing something. But for the mental state to cause something, it must cause something physical, such as the change in state of a particular switch. This is called "downward causation".

    The exclusion argument simply acknowledges that for some physical state (P), there is some physical cause x which results in physical event y. If this physical state gives rise to a mental state (M), that's fine. Nothing wrong with that yet. But M is physically distinct from P. So if there is some event x* which is caused by M, it is irrelevant to any event y, since the physical cause x is sufficient to explain y.

    clear as mud?

    The exclusion argument is perfectly acceptable for computationalism as far as I can tell. However, there are various ways philosophers, physicists, biologists, ... engineers, would like to get around this and actually have mental states be something other than an epiphenomena. Hence the interest in various definitions of downward causation.
     
  11. Feb 26, 2008 #10
    There seems to be a fundamental assumption here that a mental state is somehow distinct from a physical state? I'm sorry, it is getting late, so my brain is getting heavy.
     
  12. Feb 27, 2008 #11
    Q_Goest,

    Let me define conscious physical action A, as P1 + P2, where P1 and P2 are physically distinct, but spatially proximate. P1 implies M and M does not imply P1, that is, there is no symmetry between P1 and M. However P2 takes its configuration from M, and there is also no symmetry between P2 and M, in the sense that P2 does not affect M.

    (1) For automata, aware action A is twice P1.
    (2) When in a coma: P1 does not imply M, therefore M cannot imply P2, and as such the patient is alive at <P1, but unaware because A has not reached its threshold value.
    (3) For life there is a minimum value/output of P1.

    This configuration seems a possible solution, when thinking about how automata can actually be aware in a sure and robust environment.

    The possibility for M to influence A lies in the extent to which P2 will configure itself in relation to M, and whether M is solely dependent on P1 for its own configuration.
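One loose way to formalize the A = P1 + P2 scheme above (all numbers, thresholds, and the "coma" cutoff here are invented for illustration, not part of the original proposal):

```python
# Hypothetical rendering: A = P1 + P2; M is read off P1 (one-way);
# P2 takes its entire configuration from M.

LIFE_MIN = 1.0         # (3) minimum P1 output for life (invented value)
AWARE_THRESHOLD = 2.0  # threshold A must reach for awareness (invented value)


def m_from_p1(p1):
    """P1 implies M, but M cannot write back to P1. Below a cutoff
    (the coma case), P1 does not imply M at all."""
    return p1 if p1 >= LIFE_MIN * 1.5 else None


def p2_from_m(m):
    """P2 configures itself from M; without M, P2 contributes nothing."""
    return m if m is not None else 0.0


def action(p1):
    m = m_from_p1(p1)
    a = p1 + p2_from_m(m)
    return a, a >= AWARE_THRESHOLD


print(action(2.0))  # (4.0, True)  -- M informed, A reaches threshold: aware
print(action(1.2))  # (1.2, False) -- alive (P1 >= LIFE_MIN) but M not implied
```

On this toy rendering the automaton case falls out naturally: when M simply mirrors P1, A is "twice P1", as in (1).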
     
  13. Feb 27, 2008 #12
    I find the idea of 'emergent' phenomena much more compelling than reducing things to the epiphenomenal, but that's mostly a matter of intuition on my part. Reductionist thinking falls apart, in my view, because it assumes a kind of 'atomism' is implicit in causation, which I don't think is really supported in modern science.

    I'm not trying to imply anything metaphysical here; I'm thinking it's more a matter of focus. We see objects causing things, but that's an artificial idea. Everything works within systems, and systems within systems. I'm not sure I would say that the sum is greater than the parts, but rather that the sum is simply of a different set. Although that last part is not strictly mathematical.

    I wouldn't say 'the mind' is anything special; it's just a very obvious manifestation of emergent phenomena as I understand it.

    Just my two cents though.
     
    Last edited: Feb 27, 2008
  14. Feb 27, 2008 #13
    Ah, this is a great topic. But it is almost impossible to give you an easily intelligible answer. Sorry!

    There is a great article by Sober & Shapiro 2007. See this page: http://philosophy.wisc.edu/shapiro/

    Shapiro is very much pro-mental causation. As I recall, Sober & Shapiro think that counterfactuals of the form: "If there was a change in mental state X, there'd be a change in physical [i.e. neural] supervenience base Y" can be true counterfactuals - and that this is a pretty good criterion for causation.
     
    Last edited: Feb 27, 2008
  15. Feb 27, 2008 #14
    You're correct (as far as I know). Mental causation is a problem for dualists.
     
  16. Feb 27, 2008 #15

    Q_Goest


    For Moridin, Lord Ping, and I think (not sure) this applies to basePARTICLE too.

    Consider an allegedly conscious computer named Hal with a hand that can feel pain. Hal sticks that hand on a hot surface and suddenly withdraws the hand. We might say the computer felt pain. The heat sensor activated when the temperature rose, which made various switches inside the computer change state, which made various other switches send a signal to say “ouch”, another set of switches to the hydraulics to withdraw the hand, another set of switches to … etc…

    To Moridin’s question, how is this mental state distinct from the physical state? We may say Hal felt pain, which caused Hal to withdraw his hand, say “ouch” and all the other things he did. But if Hal DIDN’T feel anything whatsoever (ie: was a p-zombie) then would his reaction be any different? This, assuming all the circuitry were identical of course. If the circuitry were identical, then Hal would have done the same thing, regardless of the fact he was a p-zombie, because it is the physical state which resulted in the behavior, NOT the mental state.

    If that still isn’t clear, consider that Hal may have felt something soft when he touched the hot plate, or had an orgasm, or somehow enjoyed the sensation of touching something hot. Regardless of what experience Hal had, his reaction would be the same, he would withdraw his hand and say “ouch” and when queried about the experience, he would say it hurt and would never do it again, because the behavior was determined by the interaction of various switches, not by some mental state.

    Basically, we can explain everything about the behavior without resorting to “mental states”, so we should consider the mental and physical states to be “distinct”.
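The Hal thought experiment can be sketched as a toy program (my own construction, not from the thread; the temperature trigger and action strings are invented). The reflex is a pure function of the sensor input, so the "experience" label, whatever it is, cannot change the behavior.

```python
# Hypothetical sketch: Hal's reflex circuit as a pure function of the
# physical input. The `experience` argument -- pain, pleasure, or nothing
# at all (the p-zombie case) -- is deliberately never consulted.

def hal_reflex(temperature_c, experience):
    """Behavior depends only on the sensor reading; `experience` is unused."""
    actions = []
    if temperature_c > 60.0:          # physical trigger
        actions.append("withdraw hand")
        actions.append('say "ouch"')
    return actions

# Hal feeling pain, Hal enjoying it, and p-zombie Hal all behave identically:
print(hal_reflex(120.0, "pain"))      # ['withdraw hand', 'say "ouch"']
print(hal_reflex(120.0, "pleasure"))  # ['withdraw hand', 'say "ouch"']
print(hal_reflex(120.0, None))        # ['withdraw hand', 'say "ouch"']
```

The exclusion point is visible in the code itself: delete the `experience` parameter and nothing about the output changes, which is exactly the sense in which the physical state, not the mental state, determines the behavior.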

    Regarding the paper, can you post a link to it and perhaps quote what you feel is relevant?
     
  17. Feb 27, 2008 #16

    jim mcnamara

    Science Advisor
    Gold Member

    Moridin - doesn't there have to be a perfect, inviolate, one-to-one relationship between a quale and a physical brain state? I believe Dennett offers a thought experiment as a counterargument. It supposes some of the correspondences of qualia to physical brain states or areas are "rerouted" - like red being routed to green.

    He then argues to the effect that 'we couldn't tell the difference, so therefore qualia cannot be defined'. I am reasonably sure this is not the case, from a scientific point of view. We could tell the difference. People who are color blind routinely report seeing colors, e.g. green, that they cannot perceive and have never perceived. These reports apparently occur either from an extremely vivid dream or from sleep deprivation.

    Given this phenomenon, the color interpretation must be more than just "red" coming through the optic nerve. It must route to a predestined or hardwired group of neurons - neurons just for red, not orange. Therefore sending in data about an object that was red yesterday and orange today is certainly going to cause a problem.

    Edit: oops, forgot the link :smile:
    http://ase.tufts.edu/cogstud/papers/quinqual.htm
     
    Last edited: Feb 27, 2008
  18. Feb 27, 2008 #17
    How would they know it is a color they are perceiving? If I never have seen a unicorn in my life or heard it described, how would I know I've seen a unicorn?

    I'm not sure I follow? It seems to be a version of Mary the color scientist?
     
  19. Feb 28, 2008 #18
    It's the paper with Sober on the link I gave earlier. "Epiphenomenalism". This one:

    http://philosophy.wisc.edu/shapiro/HomePage/shapiro and sober.pdf

    It's not an easy paper. But it does include a defense of mental causation on manipulationist grounds. A manipulation of mental state M1 would result in a change in mental state M2 (even if it's via the physical supervenience base) - and so M1 causes M2.
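A toy rendering of that manipulationist idea (my own sketch; the supervenience rule and the "dynamics" are invented for illustration): to wiggle mental state M1 we must wiggle its physical base P1, and if M2 then co-varies, the manipulationist criterion counts M1 as a cause of M2.

```python
# Hypothetical sketch of a manipulationist test for mental causation.

def m_of(p):
    """Supervenience: the mental state is fixed by the physical base."""
    return "calm" if p % 2 == 0 else "anxious"

def physics(p1):
    """Physical dynamics P1 -> P2 (an invented toy rule)."""
    return p1 + 1

def intervene_and_test(p1_a, p1_b):
    """Did changing M1 (necessarily via its base) change M2?"""
    m1_changed = m_of(p1_a) != m_of(p1_b)
    m2_changed = m_of(physics(p1_a)) != m_of(physics(p1_b))
    return m1_changed and m2_changed

# Wiggling the base from 4 to 5 flips M1 (calm -> anxious) and flips M2 too,
# so on this criterion M1 counts as a cause of M2:
print(intervene_and_test(4, 5))  # True
```

Note the hedge built into the function: the intervention on M1 goes *through* the supervenience base, which is exactly the "even if it's via the physical supervenience base" clause above.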
     
  20. Mar 1, 2008 #19

    Q_Goest

    User Avatar
    Science Advisor
    Homework Helper
    Gold Member

    Hi Lord Ping,
    Thanks for the link. Very much appreciated.

    I agree with your evaluation of the paper. As you say, Shapiro is trying to avoid epiphenomenalism. In so doing, he’s also arguing against Kim and thus against many very similar papers. Kim’s work is highly regarded, as you probably know, so if Shapiro wants to shoot down Kim’s work he has quite a serious task ahead of him. Personally, I like Kim’s “causal inheritance principle” that Shapiro refers to on page 10, and I disagree that Shapiro gets around it.

    Note that Shapiro, like many philosophers, doesn’t distinguish between different models of consciousness. I disagree with this approach, but it’s not uncommon. Had he carefully specified his fundamental assumptions, I might actually be in agreement. Shapiro wants as you say, to link M1 to P1 and thus suggest simply that M2 is a causal influence by appealing to what he calls “functional model of reduction”. On functional reduction:

    With functional reduction, Shapiro seems to agree with Kim. He uses this concept as he summarizes his case:

    I disagree that it is so simple. Let’s apply this to computationalism, such as a conventional computer made up of interconnected switches. In this case, the computer is P1, which allegedly has a mental state M1. The computer changes in some deterministic fashion to physical state P2 and alleged mental state M2. Shapiro would have us believe that since P1 is the physical substrate of M1, we can’t separate them. So when P1 changes to P2, it is just as acceptable to claim that M1 caused M2, which is the same as P2.

    There’s a problem with this, and I’m not sure if he addresses this or not. If so, it must be in the conclusion where he says:

    I’m not sure this is the same issue as my own, but thought I’d point it out just in case.

    The problem seems to be that P1 has what Shapiro describes as a microsupervenience base, MSB(X). In the case of a computer, MSB(X) consists of individual switches. Each switch then, is a single bit of binary information. We could reduce each switch to parts but since the amount of information that applies to P1 in a single switch is only 1 of 2 possible values, further reduction of the switch seems superfluous.

    Computationalism of course posits that this single bit of binary information represented by a switch has no corresponding mental state by itself. The mental state M1 “emerges” from the totality of the entire MSB(X). P1, on the other hand, is a summation of the entire MSB(X). P1 itself obtains its causal nature from the switches. The switches don’t change state because of physical state P1; they change state because of the local effect of voltage. Any state P1 therefore is entirely dependent on, and a summation of, the individual switches – what Shapiro is calling MSB(X). Therefore, any emergent property such as M1 is reliant on the MSB(X) of P1. If this is true, then M1 has no causal influence over the change of P1 to P2 – the change in state is dependent on the local, causal effects of voltage on the switches that make up the microsupervenience base.
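The objection can be made concrete (my construction; the XOR update rule and the "bits set" label are invented): the global state P is just a tuple of switch bits, each bit updates by a purely local rule, and the emergent label M is computed *from* P but never consulted by the update.

```python
# Hypothetical sketch: local switch updates with a derived, causally inert M.

def local_rule(left, me, right):
    """Each switch changes state from local 'voltages' only (toy XOR rule)."""
    return left ^ right

def step(p):
    """P1 -> P2: every bit updated from its neighbors, nothing global."""
    n = len(p)
    return tuple(local_rule(p[(i - 1) % n], p[i], p[(i + 1) % n])
                 for i in range(n))

def emergent_m(p):
    """M1 'emerges' from the totality of the switches -- a derived label,
    nowhere referenced by step()."""
    return f"M[{sum(p)} bits set]"

p1 = (1, 0, 0, 1, 0, 1)
p2 = step(p1)                 # the transition happens with no reference to M
print(emergent_m(p1), "->", emergent_m(p2))  # M[3 bits set] -> M[4 bits set]
```

Since `step` never calls `emergent_m`, the label M tracks the physics without ever entering into it; this is the sense in which M1 inherits everything from MSB(X) and contributes nothing back.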

    I’ve tried to apply Shapiro’s argument to computationalism, and I believe it fails. Perhaps Shapiro has addressed this concern, it’s hard to say.

    Note that the above argument might fail if we suggested that the physical base P1 was irreducible such as the argument provided by Scott for nonlinear phenomena. However, there is nothing physically nonlinear about a computer.

    It could be we are culturally predisposed as engineers or scientists to the reductionist point of view. Shapiro, on the other hand, seems to want to avoid reductionism. He’s not alone in this, but a computer is certainly reducible to its constituent parts; I see no way of arguing against that point. A computer does what it does just like a series of dominoes falling over. There’s no causal influence exerted by a mental state M1 on a series of dominoes falling over, and certainly the physical state of a set of dominoes can be mapped to a physical state of a computer. The problem of mental states being epiphenomenal for a computational device isn’t something that can be avoided, IMHO.
     