Descartz2000
Who still believes in a true 'soul-like' free will? Hasn't neuroscience done enough to refute this ancient idea? And if not true free will, then what guides human behavior?
If free will is physical, then it is based on biological processes. If that is true, then biological processes must occur prior to the experience of free will.
There is no reason to think that 'freewill' is not physical. And in fact, freewill depends on determinism.
I agree about the first part, but if we're right about that, then the whole concept of "free will" is meaningless. How would you even define it? As "the ability to sometimes feel like you're making a choice when an inevitable physical interaction takes place in your brain"?
I'd agree with all this. Also, what causes any physical interaction is only other physical interactions. For example, "What caused the transistor to change state?" The answer isn't "free will" or "because I wanted it to" or "because it was red" or anything like that. Transistors change state because there is a change in voltage on the base. Our emotions, beliefs, desires, etc. can't be invoked to prompt a physical change. Only physical changes can prompt other physical changes.

So what's the point of even discussing mental causation? There are no mental causes which change the state of anything physical. And if we claim that the mental state is an epiphenomenon, or that it simply follows the physical change of state (and thus we might claim the mental state "causes" the physical state change), then we have two separate causes for a single physical change of state; also, the mental state doesn't even "reliably correspond" to the physical change in state. In other words, it doesn't matter if we desire something or not – the physical change of state is not governed by this "emergent phenomenon" that we call desire or free will. It does not reliably correspond because there is a physical change of state that governs, not some "downward causation" that changes a switch position because of some overall configuration.

The overall configuration of a computational device can be reduced to the individual configurations of the parts. A computer is merely the sum of the parts, with no ability to change any particular part simply because there is some emergent phenomenon allegedly controlling the computer.
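To make the reduction claim concrete, here is a minimal sketch (the part names and threshold are hypothetical, not a claim about real hardware): the device's "overall state" is just a function of its parts' states, and the only thing that ever changes a part is a physical input.

```python
# Minimal sketch of the reduction claim: the "overall" configuration is just a
# function of the part configurations, and a part switches only because of a
# physical input (here, a made-up base-voltage threshold), never because of a
# desire or belief.

def overall_state(parts):
    """The whole-device description, fully determined by the parts."""
    return tuple(parts)

def apply_base_voltage(parts, index, voltage, threshold=0.7):
    """Change one part's state purely as a function of a physical input."""
    new = list(parts)
    new[index] = 1 if voltage > threshold else 0
    return new

parts = [0, 0, 1, 0]
print(overall_state(parts))                # (0, 0, 1, 0)
parts = apply_base_voltage(parts, 0, 0.9)  # a physical change to one part...
print(overall_state(parts))                # (1, 0, 1, 0) ...is what changes the "whole"
```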
Is there any meaningful way to interpret the question "Do we have free will?"? The only interpretation that feels intuitively right to me is this: "Is consciousness something more than a physical interaction?" However, there's a problem with that interpretation. We don't have an exact definition of "physical interaction". We're of course talking about interactions between pieces of matter, but we don't have an exact definition of "matter" either (or "pieces"). So how about this instead: "Is it possible to find a scientific theory of consciousness?" Unfortunately, this has problems too. How do you define "consciousness"? The only way seems to be to say that an entity is conscious if it behaves in a certain way, but in that case, our "theory of consciousness" is just a "theory of behavior", and that sounds a lot less impressive, especially considering that it's not difficult at all to come up with a really simple theory of behavior that can make some correct predictions.
So it seems to me that what we're really asking is if there exists a (still undiscovered) scientific theory that can predict all kinds of human behavior.
Most in the scientific world have the greatest problem not with behavior, but with mental causation and qualia/experience. Behavior is a study of how physical things change state. We can study how certain chemical interactions, for example, change behavior. That's all well and good. Qualia, however, don't have any measurable properties. Qualia include those things we experience, such as our beliefs (or our belief in free will). Also, mental causation is a problem, as pointed out above.
It is no wonder then that for most philosophers, the causal efficacy of the mental is something that absolutely cannot be given away no matter how great the pressures are from other quarters. Jerry Fodor is among these philosophers; he writes:
“… if it is literally true that my wanting is causally responsible for my reaching, and my itching is causally responsible for my scratching, and my believing is causally responsible for my saying… , if none of that is literally true, then practically everything I believe about anything is false and it’s the end of the world.”
If mental causation is only an illusion, that perhaps is not the end of the world, but it surely seems like the end of a world that includes Fodor and the rest of us as agents and cognizers. The problem of determinism threatens human agency, and the challenge of skepticism threatens human knowledge. The stakes seem even higher with the problem of mental causation, for this problem threatens to take away both agency and cognition.
How would you even define it?
Free will is the ability to do that which you want.
The ability to change one's intention into action.
You can't use undefined terms in a definition. The meaning of what you guys said depends on what "you/one" is, and on what a "want/intention" is. Are "you" a physical interaction or a soul? Is your "intention" a physical state of a subsystem or a kind of "qualia"?
You can't use undefined terms in a definition.
I'm happy to go with a dictionary definition.
The meaning of what you guys said depends on what "you/one" is, and on what a "want/intention" is.
We could argue about what any of those words mean, and whether words mean anything at all. But that's hardly productive.
Are "you" a physical interaction or a soul?
We have substantial evidence for physical interaction, none for soul. Soul is a secondary hypothesis with little explanatory, and no predictive, value.
Is your "intention" a physical state of a subsystem or a kind of "qualia"?
Even if one believes in a soul, a physical state would be necessary, or the soul could not interact with the (physical) brain. Which means a soul hypothesis is at best unnecessary, at least in terms of a general understanding of freewill. Now we can argue what 'physical' means... but in any meaningful dialogue some assumptions must be made, else communication is not possible and conversation reduces to semantics (a definition war). This is a poll about freewill, not physicality.
Even if one believes in a soul, a physical state would be necessary, or the soul could not interact with the (physical) brain. Which means a soul hypothesis is at best unnecessary, at least in terms of a general understanding of freewill.
Just as you've pointed out that a 'soul' cannot interact with the physical brain, there is a much more fundamental problem with any 'physical' theory of mind. That problem is often referred to as simply "mental causation". The problem is: how can phenomenal aspects of mind interact with, and be causally responsible for, our actions? At best, this causal influence is simply epiphenomenal. At worst, there is no reliable correspondence.
In defining the problem of mental causation, one should also define the paradigm one assumes.
Consider a computer that runs programs entirely within a command prompt, and then a second that uses a fully realized and realistic virtual 3d game space. The difference is non-trivial when it comes to both memory requirements and raw computing power.
Not sure what your point is here. Certainly there's a difference between one program that runs without output and another that runs with output, the second needing more memory space.
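For a sense of the scale being claimed here, a purely illustrative back-of-the-envelope comparison (all sizes below are made-up assumptions, not measurements of any actual program):

```python
# Back-of-the-envelope comparison with made-up sizes: a text-mode screen buffer
# versus a dense 3D voxel world. The point is only the order-of-magnitude gap.

text_buffer_bytes = 80 * 25 * 1      # 80x25 character screen, 1 byte per cell
voxel_world_bytes = 512 ** 3 * 4     # 512^3 voxel grid, 4 bytes per voxel

print(f"command prompt: {text_buffer_bytes:,} bytes")          # 2,000 bytes
print(f"3d game space:  {voxel_world_bytes:,} bytes")          # 536,870,912 bytes
print(f"ratio: {voxel_world_bytes // text_buffer_bytes:,}x")   # 268,435x
```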
If consciousness is entirely epiphenomenal, then it has no effects on the organism. If this is true, it seems rather extraordinary that we have evolved such a detailed virtual reality environment in our brains... one that has absolutely no effect, and therefore, no use. One would think that something requiring so much energy to maintain, but which provides no benefit, would quickly go the way of the human appendix. And yet, our consciousness seems to be the key feature that allows us, a fairly weak, slow and ill-equipped predator, to not only survive, but dominate the planet.
Agreed. Note that this 'mental causation' is something we need somehow to keep rather than give away as computationalism does.
The problem is, we simply don't have a viable model of 'mental causation', let alone a good understanding of what either word really means.
The term "mental causation" is well defined in the literature. It is the concept that phenomenal properties of mind, as defined by Chalmers for example, actually have causal influence over the physical world.
So picking a random paradigm isn't much use.
Actually, one MUST select a specific paradigm, and the literature is replete with those who argue for or against a given paradigm because of the logical conclusions each paradigm must contend with. For example, Kim points out that mental causation is a problem for computationalism.
Not sure what your point is here. Certainly there's a difference between one program that runs without output and another that runs with output, the second needing more memory space.
Without output?? No, I never said that.
Agreed. Note that this 'mental causation' is something we need somehow to keep rather than give away as computationalism does.
We don't need to do anything other than follow the data. Parts of our experience may be epiphenomenal, other parts may not be. We really don't know what the mind is. Strict epiphenomenalism doesn't seem to work though. It seems to be an oversimplification.
The term "mental causation" is well defined in the literature.
A definition is easy. A model that works is something different.
It is the concept that phenomenal properties of mind, as defined by Chalmers for example, actually have causal influence over the physical world.
If you mean 'emergence', it's not well understood, and often debated (Searle, for example). You seem to want to characterize opinion as fact.
Actually, one MUST select a specific paradigm,
You're not reading what I wrote. I never said you didn't.
For example, Kim points out that mental causation is a problem for computationalism.
So what? All that implies is that either our idea of mental causation is wrong and/or computationalism is wrong.
I agree with you, Emanresu56; however, I think compatibilism is a watered-down version of determinism. It is just what is focused on that is different: one feels that one is not psychologically or emotionally compelled or forced by any variable, yet at the same time one is guided and directed by physical laws. It's still determined.
Compatibilism simply eliminates the idea that a determined choice is equivalent to lacking choice.
It's modern notions of 'probability' and 'uncertainty' that are problematic for classical determinism.
You can't use undefined terms in a definition. The meaning of what you guys said depends on what "you/one" is, and on what a "want/intention" is. Are "you" a physical interaction or a soul? Is your "intention" a physical state of a subsystem or a kind of "qualia"?
How does compatibilism allow for any kind of mental causation, free or otherwise? Phenomenal properties of mind cannot cause a physical change in any deterministic physical system - only physical properties can invoke physical changes in deterministic physical systems (i.e., computational structures). If phenomenal properties can't cause physical changes, and yet we maintain that these phenomenal properties 'reliably correspond' to the physical changes, then we have a much more serious issue, which is how these phenomenal properties could have ever come about, since they are not needed and don't have any causal influence over a physical system.
This is like saying, "For a deterministic, computational chicken, why did the chicken cross the road?" and in response a compatibilist might incorrectly suggest "because he wanted to get to the other side." The problem here is in suggesting that a desire by the chicken (i.e., a phenomenal property of the chicken's mind) somehow influenced the deterministic physical system (i.e., the chicken and its behavior/physical states). The truth is that the chicken crossed the road because the individual parts of the chicken interacted and resulted in the chicken crossing the road.
Any other mechanism, such as the phenomenal property of desire, can't have causal influence over any part of the chicken when there is already a physical property which causes that response. This is the problem Kim has pointed out in a long series of papers and books. If one suggests that the phenomenal property corresponds to the behavior, as compatibilism suggests, then we have a problem to explain. Why should there be a reliable correspondence between the phenomenal property and the physical cause? Why should we WANT to do something and then magically find that the physical interactions in our body match that desire? What evolutionary benefit is there to such an ad-hoc system? The compatibilist has to respond to the concern of why these phenomenal experiences should correspond with physical states in a way that is appropriate and generally truth-conducive. Compatibilism fails on the grounds that it has no way of explaining this reliable correspondence.
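The structure of this objection can be illustrated with a toy model (a hedged sketch only, not anyone's actual theory of mind): a deterministic update rule that reads only physical variables, plus a "desire" that is merely read off the physical state, is never consulted by the update rule, and yet still "reliably corresponds" to what the system does.

```python
# Toy model of the epiphenomenalism worry: the next physical state depends only
# on the current physical state; "desire" is read off the state and never enters
# the update rule, yet it reliably corresponds to the behaviour. (Hypothetical
# variables; purely illustrative.)

def next_state(physical):
    """Deterministic 'physics': the only inputs are physical variables."""
    hunger, position = physical
    return (hunger - 1, position + 1) if hunger > 0 else (hunger, position)

def desire(physical):
    """An epiphenomenal read-out: defined from the physical state, causes nothing."""
    hunger, _ = physical
    return "wants to cross the road" if hunger > 0 else "content"

state = (3, 0)
for _ in range(4):
    print(state, desire(state))
    state = next_state(state)  # desire() is never consulted here
```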
“The thesis of the emergence of consciousness out of complex neurophysiological processes is commonplace (Freeman, 2001). Yet it raises two major problems which are far from being correctly addressed. The first problem is a "category problem" (by due reference to G. Ryle's notion of "category mistake"). Emergence concerns properties, to wit features that are intersubjectively accessible, and that can then be described in a third-person mode. Saying that a property "consciousness" emerges from a complex network of interacting neurophysiological properties therefore misses the crucial point: that consciousness is no ordinary "property" in this sense, but rather a situated, perspectival, first-person mode of access.”
So when you say:
We can refute this entire line of reasoning by pointing out that compatibilism holds that all phenomenal phenomena are emergent properties of the brain, just like surface tension is an emergent property of a bunch of water atoms in a bucket. You are basically claiming that explaining a floating cork with surface tension is wrong, because it's really just interacting atoms. You are making nothing but empty tautologies.
You are making a category mistake. Take for example a shadow. A shadow is an epiphenomenon, as you're well aware. However, in the case of a shadow, there is a reliable correspondence. There is an objectively measurable phenomenon (the blocking of light) and a subsequent movement of that blockage which is caused by physical interactions. There are measurable, physical properties that can be used to explain the epiphenomenon, just as there are measurable properties that can explain surface tension and floating corks. Shadows and floating corks are epiphenomena which reliably correspond. What are NOT objectively measurable are phenomenal properties. Therefore, to compare subjective phenomenal properties to objectively measurable ones is a category mistake, regardless of whether the objectively measurable ones are an epiphenomenon or not. Describing phenomenal properties as epiphenomena isn't very descriptive, as it misses the point that such properties are not objectively measurable.
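The shadow case, by contrast, can be written down explicitly, because every quantity involved is objectively measurable; a small illustration with hypothetical numbers (a point light at x = 0 and a vertical stick) shows the "reliable correspondence": the shadow's position is completely fixed by measurable physical properties.

```python
# Illustration of a measurable epiphenomenon: the shadow tip's ground position
# follows from light height, stick height and stick position by similar
# triangles. Move the stick and the shadow moves lawfully. (Made-up numbers.)

def shadow_tip(light_height, stick_height, stick_x):
    """Ground x-coordinate of the shadow tip for a point light at x = 0."""
    return stick_x * light_height / (light_height - stick_height)

for stick_x in (2.0, 3.0, 4.0):
    print(stick_x, "->", shadow_tip(light_height=5.0, stick_height=1.0, stick_x=stick_x))
# 2.0 -> 2.5, 3.0 -> 3.75, 4.0 -> 5.0
```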
The ability to change one's intention into action.
Hmm... I don't think this is satisfactory. I think freewill goes back to the intention itself. If we have the intention to eat, it's our capability that allows us to turn that intention into action.
People have competing motivations.
But the question of freewill is whether those intentions were actually chosen or not. I could very easily argue that the intention to eat probably has little to do with free will.
This is where the problem starts: you want to define freewill based on the idea that you can choose before choosing. You can't make a choice without intention, so choosing an intention makes no sense. This is more a linguistic problem than a philosophical problem; you're using the fact that you can use 'intend' as a verb and 'intention' as a noun to create a paradox.
How does compatibilism allow for any kind of mental causation, free or otherwise? Phenomenal properties of mind cannot cause a physical change in any deterministic physical system - only physical properties can invoke physical changes in deterministic physical systems (i.e., computational structures). If phenomenal properties can't cause physical changes, and yet we maintain that these phenomenal properties 'reliably correspond' to the physical changes, then we have a much more serious issue, which is how these phenomenal properties could have ever come about, since they are not needed and don't have any causal influence over a physical system.
Mental causation is a different problem. Without 'some kind' of mental causation, I agree freewill would be impossible. But there is still plenty of debate about mental causation.
This is like saying, "For a deterministic, computational chicken, why did the chicken cross the road?" and in response a compatibilist might incorrectly suggest "because he wanted to get to the other side." The problem here is in suggesting that a desire by the chicken (i.e., a phenomenal property of the chicken's mind) somehow influenced the deterministic physical system (i.e., the chicken and its behavior/physical states). The truth is that the chicken crossed the road because the individual parts of the chicken interacted and resulted in the chicken crossing the road.
This is a category error. You asked 'why the chicken(object) crossed the road', not why the chicken(parts) crossed the road. Which means you're not answering the question; you're criticizing the question from an extremely reductionist perspective. This is a linguistic issue: what is the object of the verb? And what does it mean to be an object?
The compatibilist has to respond to the concern of why these phenomenal experiences should correspond with physical states in a way that is appropriate and generally truth-conducive. Compatibilism fails on the grounds that it has no way of explaining this reliable correspondence.
This is a category error. You asked 'why the chicken(object) crossed the road', not why the chicken(parts) crossed the road. Which means you're not answering the question; you're criticizing the question from an extremely reductionist perspective.
Sorry JoeDawg, but I believe you've missed the point.
It's the fact that phenomenal properties are not measurable that makes my above line of reasoning so strong (it's not my line of reasoning, but I'll explain if you're interested). So these non-measurable phenomenal properties are supposed to reliably correspond, per compatibilism. That's why I say that compatibilists don't even understand the problem, because compatibilism makes this kind of category mistake.
To your second point regarding evolution: there still exists, therefore, a problem which must be explained by any evolutionary theory of consciousness. Why should non-measurable properties arise at all?
In other words, there must be psychophysical laws which are supervenient on the physical, and have causal influence over those physical processes which are seemingly random from an objective perspective. So rather than being a “death knell” as you put it, psychophysical laws which supervene on ‘emergent’ physical structures allow for moral responsibility without invoking an external agent.