Does moral responsibility entail libertarian free will?

  • Thread starter: moving finger
  • Tags: free will
In summary, moral responsibility is the idea that an agent is deserving of praise or blame for their actions, based on what they have done or failed to do. This concept is often linked to the notion of libertarian free will, which holds that individuals have the ability to make choices that are not determined by external factors. However, there is debate about whether libertarian free will is a necessary prerequisite for moral responsibility. Some argue that moral responsibility can exist even without libertarian free will, as long as individuals feel they have the freedom to make choices. Others argue that causal determinism is a necessary condition for moral responsibility, since individuals must be able to foresee the consequences of their actions. Ultimately, there is a tension between these two arguments, and it is left to the reader to judge which, if either, is sound.

Does moral responsibility entail libertarian free will?

  • yes

    Votes: 7 50.0%
  • no

    Votes: 7 50.0%
  • don't know

    Votes: 0 0.0%

  • Total voters
    14
  • #1
moving finger
Moral responsibility : When an agent performs or fails to perform a morally significant action, we sometimes think that a particular kind of response is warranted. Praise and blame are perhaps the most obvious forms this reaction might take. For example, one who encounters a car accident may be regarded as worthy of praise for having saved a child from inside the burning car, or alternatively, one may be regarded as worthy of blame for not having used one's cell phone at least to call for help. To regard such agents as worthy of one of these reactions is to ascribe moral responsibility to them on the basis of what they have done or left undone.

Libertarian free will : The ability of an agent making a "free will" choice to have chosen otherwise (sometimes called Could Have Done Otherwise, or CHDO) than it actually did choose in a given set of circumstances, if the precise circumstances (immediately prior to the moment of choice) could be "replayed" exactly as before.

Many defenders of the concept of libertarian free will seem to believe that such free will is a necessary pre-requisite for moral responsibility. In other words, if we deny the existence of libertarian free will, we must also deny that any agent is morally responsible for its actions.

Many determinists, on the other hand, argue that moral responsibility can have meaning if and only if our actions are precisely determined by our detailed and prevailing motives, wishes, desires, volitions, values, etc., and (given that such factors would be identical if the situation were replayed under identical circumstances) that the concept of CHDO is incoherent and certainly not a pre-requisite for moral responsibility.

The question, then, is:

Is libertarian free will an essential pre-requisite for moral responsibility, and (whether you think yes or no) why?

I'm interested to know the views of forum members on this question.

MF

(PS: I posted this in Metaphysics and Epistemology because the questions it raises, regarding the requirement or otherwise for the existence of libertarian free will, are metaphysical and epistemological.)
 
  • #2
In order for one to be held responsible for one's actions, good or bad, one must have the ability to choose his or her actions: free will.
If I have no choice of actions or inactions, as in hard determinism, then I have no responsibility, as my actions are compelled and beyond my control.
 
  • #3
As a compatibilist I say that libertarian free will is not necessary for moral responsibility, which I only acknowledge as INNER responsibility. Other people's opinions are meaningless in this context. And all that is required for me to feel I have a moral responsibility is that it appears to me that I have freedom of choice, and that I am not aware of any really strong evidence that I don't.
 
  • #4
Royce said:
In order for one to be held responsible for one's actions, good or bad, one must have the ability to choose his or her actions: free will.
If I have no choice of actions or inactions, as in hard determinism, then I have no responsibility, as my actions are compelled and beyond my control.
I think we need to distinguish between moral responsibility on the one hand and plain responsibility pure and simple on the other.

Example : A dog can be held responsible for intentionally attacking a child, but most people would say it cannot be held morally responsible. The distinction is in the fact that though the dog controls its actions (for which we can hold it responsible in the sense that we can choose to punish it or even have it put down), it cannot be expected to have any understanding of what is morally right or wrong (hence cannot be held morally responsible).

What do you think?

Best Regards

MF
 
  • #5
selfAdjoint said:
As a compatibilist I say that libertarian free will is not necessary for moral responsibility, which I only acknowledge as INNER responsibility. Other people's opinions are meaningless in this context. And all that is required for me to feel I have a moral responsibility is that it appears to me that I have freedom of choice, and that I am not aware of any really strong evidence that I don't.

Interesting position. You would seem to be a determinist, but at the same time you believe in the coherence of the concept of moral responsibility? (Please correct me if I have misunderstood.)

Let me present a couple of arguments based on work by Norman Swartz :

Argument #1 – There is No Moral Responsibility
Premise 1: Every action is either caused or uncaused (i.e. a random occurrence).
Premise 2: If an action is caused, then that action was not chosen freely and the person who performed that action is not morally responsible for what he/she has done.
Premise 3: If an action is uncaused (i.e. is a random occurrence), then the person who performed that action is not morally responsible for what he/she has done.

Thus: We are not morally responsible for what we do.


Argument #2 - Causal Determinism is a Necessary Condition for Moral Responsibility
Premise 1: Unless there are extenuating circumstances, persons are (to be) held morally responsible for their actions.
Premise 2: Being unable reasonably to have foreseen the consequences of their actions is one such extenuating circumstance. (Recall that young children who cannot reasonably foresee the consequences of their actions are not to be held morally responsible for the consequences.)
Premise 3: In order to be able to anticipate or foresee the likely (or even the remotely likely) consequences of one's actions, the world must not be random, i.e. the world must be fairly regular (or causally determined).

Thus: Moral responsibility requires that there be causal determinism.

There is a clear and unacceptable "tension" between these two arguments. In other words, one (or both) must be unsound.
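As a side note, both arguments are logically valid as stated; the dispute is over soundness. Argument #1 in particular has the form of a constructive dilemma, whose validity can be checked mechanically. A minimal sketch in Lean (the propositional labels C for "the action is caused" and R for "the agent is morally responsible" are my own shorthand, not Swartz's notation):

```lean
-- Argument #1 as a constructive dilemma.
-- Premise 1 (C ∨ ¬C) is supplied by classical logic (Classical.em).
theorem no_moral_responsibility (C R : Prop)
    (premise2 : C → ¬R)    -- caused ⇒ not morally responsible
    (premise3 : ¬C → ¬R)   -- uncaused ⇒ not morally responsible
    : ¬R :=
  (Classical.em C).elim premise2 premise3
```

Since the form is valid, any attack on the argument must target one of the premises, which is exactly where the debate below goes.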

Note that Argument #1 seems to be saying that the concept of moral responsibility is incoherent, whereas an implicit premise in Argument #2 is that the concept of moral responsibility is coherent. One obvious solution to the tension is thus to accept Argument #1 at face value, and to say that the concept of moral responsibility is indeed incoherent (thus Argument #2 is unsound, as it rests on an incoherent concept).

From your post, it would seem that you would agree with the soundness of argument #2, but you disagree with premise 2 of argument #1. Can you perhaps explain what is wrong with this premise?

Best Regards

MF
 
  • #6
If there were a deterministic moral responsibility, so that in principle an algorithm could tell me what I should do in every circumstance, then I would not be free. Oh I might be free to ignore the algorithm, but I would always know when I did so that I was "wrong", "non-optimal", or whatever sneer-word was current.

As it is, I feel that there is no essence of me or us, and therefore no algorithm. But I choose to act in a way that I believe "in my heart" to be right. Call that a leap of faith or existentialism or whatever.
 
  • #7
Hi MF. I like the two arguments you gave. Note that they examine the issue from a specific perspective: one of assuming hard determinism or not. Like the blind men who examined the elephant and came up with different descriptions because of their different perspectives, the arguments you gave also look at the issue from a very specific perspective.

The arguments you provided examine the situation assuming a specific model of physical reality, a model that compares deterministic mechanisms with indeterminate ones. I think that's more of a model than something which is 'real'.

Someone in the judicial system, for example, might have a different model and perspective, and might point out that without laws that govern society, if we didn't hold people responsible for their actions, the result would be disastrous. That model assumes people have something called free will. I wonder if free will is a phenomenon that may require emergent laws of physics that cannot be reduced to determinate or indeterminate mechanisms.

Regardless, the judicial system perspective thus concludes that people must be held accountable for their actions, that people are capable of 'deciding' what action they are going to take, and that people are capable of making those decisions. Note that this perspective is more of a 'big picture' perspective that doesn't try to determine what model we should use for microscopic physical phenomena.

I'd have to agree with this judicial system perspective. I think the model used forces us to accept responsibility for our actions regardless of what physically may empower those decisions we make and actions we take. The model used seems to 'work'.

I'm not sure what assumptions "libertarian free will" makes. How does the libertarian view model reality?
 
  • #8
Choices made by an algorithm can be "morally good", since acting in a way that others appreciate is a smart way for you (and others) to survive. Making others responsible for their actions is also beneficial for your survival. Robots can act morally, and therefore libertarian free will is not necessary.
 
  • #9
Lars Laborious said:
Choices made by an algorithm can be "morally good", since acting in a way that others appreciate is a smart way for you (and others) to survive. Making others responsible for their actions is also beneficial for your survival. Robots can act morally, and therefore libertarian free will is not necessary.

So if I have an algorithm that tells me "abortion doctor = murderer" and also "murderers should be executed" (and many, many people in our society have algorithms of this kind), then it is good for me to gun down an abortion doctor. And SOME people, the people I like and pay attention to, will like me for doing that, and it will make them happy.

Or if my algorithm says that Judaism is an evil disease of the blood which is infecting the true Noble Race with its poison, then... Well, SOME people will be happy.
 
  • #10
selfAdjoint said:
If there were a deterministic moral responsibility, so that in principle an algorithm could tell me what I should do in every circumstance, then I would not be free. Oh I might be free to ignore the algorithm, but I would always know when I did so that I was "wrong", "non-optimal", or whatever sneer-word was current.

As it is, I feel that there is no essence of me or us, and therefore no algorithm. But I choose to act in a way that I believe "in my heart" to be right. Call that a leap of faith or existentialism or whatever.
Do you believe that what is "in your heart" is uncaused (ie the source of some kind of libertarian free will), or do the feelings "in your heart" have causes just like everything else?

There is no need to leap. Even if the world is 100% deterministic, it is impossible in principle for an agent within that world to completely and accurately predict the future of the same world. An ontically deterministic world does not mean the world is epistemically determinable.

This supports the intuitive illusion that we are free agents.
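One way to see why self-prediction fails even under full determinism is by diagonalization. Here is a toy sketch of my own (the function names and two-option "world" are purely illustrative, not Swartz's argument): a deterministic agent that consults any predictor embedded in the same world and does the opposite of whatever was predicted.

```python
# Toy illustration: even in a fully deterministic world, an agent can be
# constructed that falsifies any predictor it can consult, simply by
# doing the opposite of the prediction (a diagonalization argument).

def contrarian_agent(predictor):
    """A deterministic agent: ask the predictor what I will do ("A" or "B"),
    then do the opposite."""
    predicted = predictor(contrarian_agent)
    return "B" if predicted == "A" else "A"

def naive_predictor(agent):
    # Any fixed answer the predictor commits to...
    return "A"

# ...is falsified by the agent's actual (fully determined) behaviour:
print(contrarian_agent(naive_predictor))  # prints "B", not the predicted "A"
```

The agent's behaviour is perfectly determined, yet no predictor that the agent can read can get it right: ontic determinism without epistemic determinability.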

Best Regards

MF
 
  • #11
Q_Goest said:
Hi MF. I like the two arguments you gave.
I should point out that I borrowed these from Norman Swartz.

Q_Goest said:
Note that they examine the issue from a specific perspective: one of assuming hard determinism or not. Like the blind men who examined the elephant and came up with different descriptions because of their different perspectives, the arguments you gave also look at the issue from a very specific perspective.
Agreed. Any argument must assume certain premises, both implicit and explicit.

Q_Goest said:
The arguments you provided examine the situation assuming a specific model of physical reality, a model that compares deterministic mechanisms with indeterminate ones. I think that's more of a model than something which is 'real'.

Someone in the judicial system, for example, might have a different model and perspective, and might point out that without laws that govern society, if we didn't hold people responsible for their actions, the result would be disastrous. That model assumes people have something called free will. I wonder if free will is a phenomenon that may require emergent laws of physics that cannot be reduced to determinate or indeterminate mechanisms.
I disagree. We do not need to assume "free will" (in the libertarian sense of free will) to understand why laws are necessary and efficacious from society's perspective. We need only assume that human agents can make choices, based on their rational wishes/desires/intentions, which determine their actions. Neither the choices, nor the rational wishes/desires/intentions which underlie them, need necessarily be based on any kind of libertarian free will. The entire judicial system and its objectives can be defended quite rationally from a purely deterministic perspective.

Q_Goest said:
Regardless, the judicial system perspective thus concludes that people must be held accountable for their actions, that people are capable of 'deciding' what action they are going to take, and that people are capable of making those decisions. Note that this perspective is more of a 'big picture' perspective that doesn't try to determine what model we should use for microscopic physical phenomena.
Agreed. But none of this requires libertarian free will to be effective. It works under a purely deterministic model.

Q_Goest said:
I'd have to agree with this judicial system perspective. I think the model used forces us to accept responsibility for our actions regardless of what physically may empower those decisions we make and actions we take. The model used seems to 'work'.
Agreed it works. But none of this requires libertarian free will to be effective. It works under a purely deterministic model.

Q_Goest said:
I'm not sure what assumptions "libertarian free will" makes. How does the libertarian view model reality?
A libertarian would presumably claim that, to be morally responsible for our actions, we must effectively be autonomous agents: the sources of our actions must be neither deterministic nor indeterministic but "something else" (this something else being a mystical free will which nobody can explain). The concept seems incoherent to me.

Best Regards

MF
 
  • #12
selfAdjoint said:
So if I have an algorithm that tells me "abortion doctor = murderer" and also "murderers should be executed" (and many, many people in our society have algorithms of this kind), then it is good for me to gun down an abortion doctor. And SOME people, the people I like and pay attention to, will like me for doing that, and it will make them happy.
Lars Laborious is arguing that an algorithm could make moral judgements, but he is not claiming that such moral judgements would always conform to everyone's idea of what is "right". The judgements made by the algorithm would depend on the underlying value systems (the raw data) that the algorithm is working on.

Humans work in just the same way. We all have different "raw data" that makes up our personal value systems, which is why there is no agreement between humans on whether abortion is morally right or wrong. If humans cannot unambiguously determine what is morally right or wrong, why should we expect an algorithm to be able to do so?

None of this, by the way, shows that our decision making is not deterministic.

Best regards

MF
 
  • #13
moving finger said:
Lars Laborious is arguing that an algorithm could make moral judgements, but he is not claiming that such moral judgements would always conform to everyone's idea of what is "right". The judgements made by the algorithm would depend on the underlying value systems (the raw data) that the algorithm is working on.

Humans work in just the same way. We all have different "raw data" that makes up our personal value systems, which is why there is no agreement between humans on whether abortion is morally right or wrong. If humans cannot unambiguously determine what is morally right or wrong, why should we expect an algorithm to be able to do so?

None of this, by the way, shows that our decision making is not deterministic.

Best regards

MF

I can't visualize a situation in which every individual forms a unique but deterministic moral code and an overall moral code emerges. Rawls' imaginary contract perhaps? But some individuals would not ever feel bound by such a contract. Calling them sociopaths doesn't make them go away, and in a deterministic case they are just as justified as anyone else.
 
  • #14
Hi MF.
You've provided us with Norman's Argument 1. Using premise 1 and 2, he's concluded "we are not morally responsible for what we do" if the world is ontically deterministic. I believe the conclusion stems from the concept that we could not have done otherwise. Our actions throughout life are ontically deterministic, even if the world is not epistemically determinable.

Similarly, I can understand his conclusion using premise 1 and 3 in which the world is NOT deterministic, but contains random elements. However, in this case, the conclusion reached stems from the fact there is no ability for some person to control their actions. I'm not sure this is reasonable and will attempt to address this in a moment.

About Argument 2, Norman says:
Causal determinism is [contrary to premise 2 of Argument #1] not only compatible with free will, it is a necessary condition of free will!
Ref: http://www.sfu.ca/philosophy/freewill3.htm

He then goes on to suggest that Argument 1 is faulty saying:
My own view is that the error occurs in premise 2 of Argument #1. I will argue (below) that it is false that causal determinism makes free will nonexistent. (I will argue that both arguments are valid, but only the second is sound [i.e. all of its premises are true]. The first argument, while valid, has a false premise, thus making that argument unsound.)

For me to try to show this, I will have to examine more closely the concept of what it is for an event or an action to be caused.

His argument for this is summed up on the next page of his notes where he talks about what he means by "causally determined".
Put another way, I think that the source of the problem of causal determinism and its supposed incompatibility with free will lies in the failure of many persons to fully shake off the historical view that laws of nature govern the world.

On a strictly descriptivist view, laws of nature do not govern the universe. To govern the universe, laws of nature would require unknown (dare I say, magical?) powers. Moreover, the view that laws of nature govern the universe turns the semantic theory of truth upside-down. It presupposes a theory which I think is, ultimately, unintelligible, namely an anti-Tarskian theory that propositions do not 'take their truth' from the way the world is, but rather 'impose' their truth on the world. We will do well to abandon this outmoded, supernatural, theory.
Ref: http://www.sfu.ca/philosophy/freewill4.htm

I can't be sure what he's really suggesting from all this. I understand his Argument 2 better, but I don't think he provides anything that refutes his Argument 1 as he claims. Regardless, the arguments are interesting.

I believe the following regards what you are calling "Libertarian free will". Regarding CHDO, along with using the model that the world is either ontically deterministic or just "fairly regular", I'm thinking there can be only the following conclusions:

1) Ontically deterministic: If we make the assumption that the world is ontically deterministic (ie: if we "model" the world as having only determinate mechanisms), then I come to the conclusion that the world history line is unchangeable. Nothing we do is going to change the future. We do what we do because there are mechanisms, which we have no control over, that create what happens, regardless of any subjective feelings to the contrary. From the second we are born to the second we die, there can't be any variation in anything that happens, regardless of what a judicial system does. If you are to die by the electric chair then, assuming the world is ontically deterministic, there is no way you are going to avoid that fate.
2) Fairly Regular: If we make the assumption that the world is only fairly regular but NOT ontically deterministic (ie: if we "model" the world as having a small number of random mechanisms), then perhaps we could make the case that there is some benefit to there being positive and negative outcomes to any given situation. In this case, there seems to be a way of changing a person's 'behavior' in the future to reduce the probability of certain unwanted actions. If the unwanted behavior resulted in a negative outcome (ie: the jury found the person guilty and sentenced them to a $5000 fine), then perhaps there is a chance that the person's negative behavior could be modified to become less likely in the future and their positive behaviors could become more likely in the future. Their behaviors would be modified in a fairly deterministic way depending on the positive or negative result of a random occurrence. So in this case, the random mechanism doesn't benefit the immediate decision, but it may result in determinate mechanisms in the future changing the likelihood of that particular random occurrence happening again. This is analogous to evolution, in which a random mutation may or may not benefit the individual but, more importantly, changes the likelihood of that particular occurrence happening again in the future, either for better or worse.

I don't see any way of modifying a person's behavior for Case 1, where we model the world as being ontically deterministic. Do you see any different conclusions to the two cases I've presented? When you say, "The entire judicial system and its objectives can be defended quite rationally from a purely deterministic perspective." are you making the same assumptions I am for an "ontically deterministic" world? If so, if you agree there is no change one can make to the future, that the future is just as determined as the past, then I'd like to understand how you come to that conclusion.

On a separate note, I think one might be able to argue that the world can't be properly modeled using only deterministic and/or random causal actions. I'd suggest there is another option to these two mechanisms, which regards strongly emergent phenomena; thus I think one can suggest that the model is faulty to begin with, and that Argument 1 provided by Norman is faulty because premise 1 is faulty. In this case, I think one has to clarify the question: does moral responsibility entail libertarian free will? The clarification is that the world can't be modeled as being subject only to deterministic and/or random causal actions; the world also depends on strongly emergent phenomena which can act to change the outcome of a random event. The world is dependent on phenomena which cannot be reduced to specific events or causal actions.
 
  • #15
selfAdjoint said:
I can't visualize a situation in which every individual forms a unique but deterministic moral code and an overall moral code emerges. Rawls' imaginary contract perhaps?
What happens if different individuals have different moral values, for example on the abortion question? The only way to arrive at a single overall moral code is then by some form of compromise. But there is no guarantee that such compromise could always be reached, which ultimately would mean the overall moral code is determined either by majority vote, or by dictatorship.

selfAdjoint said:
But some individuals would not ever feel bound by such a contract. Calling them sociopaths doesn't make them go away, and in a deterministic case they are just as justified as anyone else.
In a deterministic case yes everyone is "justified", because in the face of strict determinism (I believe) moral responsibility is an incoherent concept.

One needs to ask fundamental questions like "what is the purpose of a (secular) moral code in the first place", and "what is the purpose of secular law and the enforcement of secular law?". The answers to these questions I think are very revealing.

Best Regards

MF
 
  • #16
Q_Goest said:
You've provided us with Norman's Argument 1. Using premise 1 and 2, he's concluded "we are not morally responsible for what we do" if the world is ontically deterministic. I believe the conclusion stems from the concept that we could not have done otherwise. Our actions throughout life are ontically deterministic, even if the world is not epistemically determinable.
Agreed.

Q_Goest said:
Similarly, I can understand his conclusion using premise 1 and 3 in which the world is NOT deterministic, but contains random elements. However, in this case, the conclusion reached stems from the fact there is no ability for some person to control their actions.
Agreed.

Q_Goest said:
I'm not sure this is reasonable and will attempt to address this in a moment.
OK

Q_Goest said:
I can't be sure what he's really suggesting from all this.
I’ve studied Norman’s work and corresponded with him in some detail.
Norman believes (and I have no problem with accepting this idea) that the “laws of nature” are descriptivist rather than prescriptivist. But he then goes on to argue that though determinism may be true, determinism rests on the assumption of “laws of nature”, and human free will is somehow able to intervene to “determine” (some of) the laws of nature at the moment of choice. This is why he argues that premise 2 of Argument 1 is false, ie he is arguing (based on his notion of free will) that an action can be both caused and yet at the same time chosen freely, because in his explanation it is our free-will choice which actually determines (to some extent) the laws of nature.

Unfortunately, this to me just seems like so much handwaving. What he fails to do, to my mind, is to provide any rational or coherent explanation of “how” this mechanism (that free will is somehow a rational and yet “uncaused cause”) is supposed to work.

Q_Goest said:
I believe the following regards what you are calling "Libertarian free will". Regarding CHDO, along with using the model that the world is either ontically deterministic or just "fairly regular", I'm thinking there can be only the following conclusions:

1) Ontically deterministic: If we make the assumption that the world is ontically deterministic (ie: if we "model" the world as having only determinate mechanisms), then I come to the conclusion that the world history line is unchangeable. Nothing we do is going to change the future.
Yes, but be careful. What does it mean to say something like “change the future”, since the future has not happened yet?
In a deterministic world the future is determined by what we do, thus to simply say that “nothing we do is going to change the future” can be misleading. If I do A the future will be one way, if I do B the future will be another way. Therefore what I do determines the future.

Q_Goest said:
We do what we do because there are mechanisms which create what happens which we have no control over, regardless of any subjective feelings to the contrary. From the second we are born to the second we die, there can't be any variation to anything that happens regardless of what a judicial system does. If you are to die by electric chair, then assuming the world is ontically deterministic, there is no way you are going to avoid that fate.
Agreed. But this is not the same as saying “nothing we do is going to change the future”.
It also, by the way, is not a reason for saying that secular punishment for crimes committed is “wrong”, even if it always was determined that those crimes would be committed (we can get to that later).

Q_Goest said:
2) Fairly Regular: If we make the assumption that the world is only fairly regular but NOT ontically deterministic (ie: if we "model" the world as having a small number of random mechanisms), then perhaps we could make the case that there is some benefit to there being positive and negative outcomes to any given situation. In this case, there seems to be a way of changing a person's 'behavior' in the future to reduce the probability of certain unwanted actions. If the unwanted behavior resulted in a negative outcome (ie: the jury found the person guilty and sentenced them to a $5000 fine), then perhaps there is a chance that the person's negative behavior could be modified to become less likely in the future and their positive behaviors could become more likely in the future.
But exactly the same argument applies if the world is 100% deterministic. In a 100% deterministic case, if the unwanted behavior resulted in a negative outcome (ie: the jury found the person guilty and sentenced them to a $5000 fine), then perhaps there is a chance that the person's negative behavior could be modified (deterministically) to become less (epistemically) likely in the future and their positive behaviors could become more (epistemically) likely in the future.

Thus, invoking indeterminism does not change the situation.


Q_Goest said:
Their behaviors would be modified in a fairly deterministic way depending on the positive or negative result of a random occurrence. So in this case, the random mechanism doesn't benefit the immediate decision, but it may result in determinate mechanisms in the future changing the likelihood of that particular random occurrence happening again. This is analogous to evolution, in which a random mutation may or may not benefit the individual but, more importantly, changes the likelihood of that particular occurrence happening again in the future, either for better or worse.
The analogy to evolution by natural selection is misleading. Mutations could be 100% ontically deterministic, there is no a priori reason why they need be indeterministic. In which case the analogy is false.

Q_Goest said:
I don't see any way of modifying a person's behavior for Case 1, where we model the world as being ontically deterministic.
Sure we can. In a 100% deterministic scenario the person’s behaviour is determined by antecedent conditions. If those antecedent conditions include the fact that the person has been fined already for breaking the law, it is reasonable to assume that this could be a contributing determining factor which causes him to avoid breaking the law in future. There is no need to invoke indeterminism (which doesn’t actually change the situation in any useful way anyway), or unexplained metaphysical free will (which is an attempt at hocus pocus).
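This deterministic deterrence story can be made concrete with a toy sketch (entirely my own construction; the "policy" and event names are illustrative, not a model of real behaviour): the agent's choice is a pure function of its antecedent conditions, so adding a past fine to those conditions changes the determined outcome without invoking any indeterminism.

```python
# Toy sketch: a fully deterministic "agent" whose action is a pure
# function of its antecedent conditions, represented as an event history.

def breaks_law(history):
    """Deterministic policy: the agent offends unless a fine already
    appears among the antecedent conditions that determine its act."""
    return "fined" not in history

history = []
print(breaks_law(history))   # True: nothing yet deters the agent

history.append("fined")      # punishment becomes part of the antecedent conditions
print(breaks_law(history))   # False: same deterministic policy,
                             # different antecedents, different act
```

The point is that "modifying behaviour" under determinism just means changing the antecedent conditions that the (unchanged, law-like) policy operates on.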

Q_Goest said:
Do you see any different conclusions to the two cases I've presented? When you say, "The entire judicial system and its objectives can be defended quite rationally from a purely deterministic perspective." are you making the same assumptions I am for an "ontically deterministic" world? If so, if you agree there is no change one can make to the future, that the future is just as determined as the past, then I'd like to understand how you come to that conclusion.
My view on law and punishment is as follows :
Why do we have secular law in the first place, what is the purpose of secular law, and what is the purpose of enforcing secular law?

If we assume the premise of determinism, then any agent P which commits a morally wrong act A is causally determined to commit that act. If A occurs then, given determinism, P could not have failed to commit (was powerless to prevent itself from committing) A, hence A could not have not occurred. In other words, A was nomologically necessary. I think you will agree with this.

Apportioning “blame” or seeking “retribution” for the occurrence of A is therefore (as an end in itself) meaningless, and simply apportioning blame or seeking retribution is (I suggest) not the ultimate purpose of either secular law or secular punishment.

The purpose of secular law is, quite simply, to try to ensure that A does not occur in the first place. The presence of laws, and the enforcement of those laws via punishment, acts as part of the antecedent conditions which determine whether A either occurs or does not occur. If, despite the prevailing law and punishment, A still occurs then this points to a failure of the law or punishment, and not necessarily to a failure of the agent (the agent, after all, is acting deterministically – how can it be held responsible?). A perfectly functioning and "ideal" society is one in which the laws and punishments act as part of the deterministic antecedent conditions to ensure that no morally wrong acts occur (ie to prevent morally wrong acts occurring in the first place). The ultimate purpose of secular law in relation to moral responsibility is therefore preventive – its purpose is to prevent morally unacceptable acts from occurring in the first place.

Contrast this with divine law and divine judgement. In the case of divine law (eg the so-called day of judgement), the purpose IS to judge, it is to apportion blame and it is to seek retribution. In the case of divine judgement, divine law by definition cannot be wrong, therefore if an agent is found wanting under divine law then it must be the case that the agent is responsible. Theism therefore must assume free will, and even in a perfectly functioning society with free will, the laws and punishments may not necessarily prevent wrong acts from occurring (because the behaviour of the agents in that society is not determined). The ultimate purpose of divine law and punishment is therefore NOT simply preventive (ie its ultimate purpose, unlike secular law, is not solely to prevent moral wrongs from occurring); instead, part of its ultimate purpose is indeed to apportion blame and to punish those free will agents who commit morally wrong acts.

Q_Goest said:
On a separate note, I think one might be able to argue that the world can't be properly modeled using only deterministic and/or random causal actions.
I think that I have shown above that it can be.

If you think the deterministic explanation fails to accurately model the world I would be interested to know exactly how you think it fails, because I cannot see how it does. What does it fail to explain?

Q_Goest said:
In this case, I think one has to clarify the question: Does moral responsibility entail libertarian free will? The clarification is that the world can't be modeled as being subject to only deterministic and/or random causal actions, that the world also depends on strongly emergent phenomena which can act to change the outcome of a random event.
I do not deny that there are “emergent phenomena”. But I do not believe anyone can come up with any rational explanation of how such phenomena can give rise to free will “ab initio”. To suppose that free will exists seems to me to indulge in hand-waving; I have not seen anyone able to make a rational attempt at explaining how free will can possibly arise.

Q_Goest said:
The world is dependent on a phenomenon which cannot be reduced to specific events or causal actions.
Sounds metaphysical to me. Are you suggesting that free will exists, but the mechanism which gives rise to free will cannot be rationally understood or explained?

But why posit the existence of such metaphysical free will anyway? What is it exactly about the world that remains unexplained if we assume determinism is true and free will an illusion?

Best Regards

MF
 
  • #17
Hi MF,
Yes, but be careful. What does it mean to say something like “change the future”, since the future has not happened yet?
In a deterministic world the future is determined by what we do, thus to simply say that “nothing we do is going to change the future” can be misleading.
If all events in time are fixed, then there is no 'decision' being made in the sense that a decision allows the selection of more than one possibility. Determinism reduces a 'decision' to nothing more than another causal action. Which raises the question of why anyone should have the subjective experience of making a decision if they are nothing but a marionette dancing to the tugs of strings that they know nothing about. The concept of making a decision is meaningless.

If I do A the future will be one way, if I do B the future will be another way. Therefore what I do determines the future.
Similarly, there is no A or B. The future will not be one way or another in a deterministic world. The world will only ever be one way.

But exactly the same argument applies if the world is 100% deterministic. In a 100% deterministic case, if the unwanted behavior resulted in a negative outcome (ie: the jury found the person guilty and sentenced them to a $5000 fine) then perhaps there is a chance that a person's negative behavior could be modified (deterministically) to become less (epistemically) likely in the future and their positive behaviors could become more (epistemically) likely in the future.

Thus, invoking indeterminism does not change the situation.
I have to disagree with that; the two cases are different. In the case of a 100% deterministic world, there is no reason to suggest there is a 'me, you, we, I', because this implies the relatively ancient belief that there is some kind of 'soul' or responsible agent that the body is a container for.

If we claim the actions of a physical body are determinate, there is NO chance that a person's future actions can be changed, any more than the actions of the judicial system can have more than one possible future.


The purpose of secular law is, quite simply, to try to ensure that A does not occur in the first place. The presence of laws, and the enforcement of those laws via punishment, acts as part of the antecedent conditions which determine whether A either occurs or does not occur. If, despite the prevailing law and punishment, A still occurs then this points to a failure of the law or punishment, and not necessarily to a failure of the agent (the agent, after all, is acting deterministically – how can it be held responsible?). A perfectly functioning and "ideal" society is one in which the laws and punishments act as part of the deterministic antecedent conditions to ensure that no morally wrong acts occur (ie to prevent morally wrong acts occurring in the first place). The ultimate purpose of secular law in relation to moral responsibility is therefore preventive – it’s purpose is to prevent morally unacceptable acts from occurring in the first place
My impression is you're suggesting that future events can be influenced somehow by applying a legal system with laws and rules. You wrote "occurs or does not occur", which is incorrect. In a deterministic universe, these laws and rules have no influence over future events, they are simply the points in time leading up to a determined event. We can't change any future morally wrong act in a deterministic world. Those morally wrong acts, whatever they may be, will occur just as the acts of the judicial system will occur.

In the case of a deterministic world, there is no possibility to change future events. We are not preventing, nor reducing the chance of any event. You can claim we can't possibly know the future, but that doesn't change the fact that if we assume determinism, the future is fixed and no change can be made. Determinism (to me) clearly means there are no agents of any kind which have any influence over future events. To suggest there is a 'we, me, I, them' is to make the error of falling back on the idea there is some kind of responsible agent as suggested by religion or even before such concepts were created.

~

I believe our opinions also differ considerably about what a strongly emergent phenomenon is. Please correct me if I'm wrong, but I believe you are of the opinion that strongly emergent phenomena describe more than molecular interactions and conscious phenomena. As far as I know, science only recognizes some molecular interactions as being strongly emergent. I don't claim to understand solid state physics, but from what I've read, strongly emergent phenomena at the molecular level may be called for.

I don't believe consciousness is generally considered a strongly emergent phenomenon; it is only weakly emergent, meaning it is reducible to the interactions of its constituent parts (ie: neurons). To suggest it is strongly emergent (to me) indicates one cannot reduce the phenomenon to the interactions of its constituent parts, but that is ill defined. There is no theory at present to determine if and how something can be reduced to constituent parts in order to determine if it truly is strongly emergent or not.
 
  • #18
Q_Goest said:
Which raises the question of why anyone should have the subjective experience of making a decision if they are nothing but a marionette dancing to the tugs of strings that they know nothing about.
Do you mean “Why should anyone have the subjective experience of acting freely?”
Simply because we are not aware (imho) of the precise deterministic reasons (the causes behind) our decisions. The detailed reasons are hidden from us (what Metzinger calls the “hidden darkness” of our consciousness), hence we naively believe that we are somehow “free agents”.

But think about it carefully – if you believe our source of will is not deterministic, then what is it? Random? Stochastic? What exactly?

Q_Goest said:
The concept of making a decision is meaningless.
Not at all. Let us say that you do either A or B. If you do A then the future happens one way, if you do B then the future happens another way. The future depends on what you do, and as a causal agent you choose to do either A or B. I am sure you will not say that the concept of “doing A or B” is meaningless (since the future depends on whether you do A or B). What is meaningless (imho) is the notion that we act with (libertarian) free will when we choose to do either A or B.

Q_Goest said:
The future will not be one way or another in a deterministic world. The world will only ever be one way.
That is correct. The future “will only ever be one way” in just the same way that the past “only ever was one way”. But we do not know which way the future will be in advance, which gives us the illusion that we act with free will. Another way of saying this is that the world is ontically deterministic but epistemically indeterminable.
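The distinction MF draws here – "ontically deterministic but epistemically indeterminable" – has a familiar computational analogue, sketched below as an illustration (the recurrence and constants are just a standard textbook example, not something from the thread). A linear congruential generator is a pure function of its seed: replay the same antecedent conditions and the "future" repeats exactly, yet an observer who does not know the seed cannot predict the next value.

```python
# A deterministic recurrence x' = (a*x + c) mod m: every state is fully
# fixed by the seed (ontic determinism), but without knowing the seed the
# sequence is unpredictable in practice (epistemic indeterminability).

def lcg(seed, n):
    """Return the first n states generated from the given seed."""
    a, c, m = 1103515245, 12345, 2**31
    states, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        states.append(x)
    return states

run1 = lcg(42, 5)
run2 = lcg(42, 5)   # identical antecedent conditions => identical "future"
```

Replaying the seed always yields the same run, which is precisely the CHDO scenario from the thread's opening definition: the circumstances replayed exactly produce exactly the same outcome.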

Q_Goest said:
In the case of a 100% deterministic world, there is no reason to suggest there is a 'me, you, we, I', because this implies the relatively ancient belief that there is some kind of 'soul' or responsible agent that the body is a container for.
Incorrect. There is indeed an “I” and a “you” etc, in the sense of the “deterministic agent causing the action”. Where we are misled (imho) is in thinking that this agent is an “uncaused cause”, acting with (libertarian) free will. There is no “free will I” or “free will you”, but there is a deterministic agent which we call “I” and “you”.

Q_Goest said:
If we claim the actions of a physical body are determinate, there is NO chance that a person's future actions can be changed, any more than the actions of the judicial system can have more than one possible future.
Think carefully about what you have just written. “Changed” from what? A person’s future actions have not happened yet, how can it make sense to talk of “changing” actions which have not happened yet? The present determines the future (imho), therefore the presence of laws and the enforcement of those laws can have a real deterministic effect on the future actions of agents.

Q_Goest said:
My impression is you're suggesting that future events can be influenced somehow by applying a legal system with laws and rules.
And you don’t? Seriously? Do you instead think that the presence of laws and the enforcement of those laws is not an influential factor in encouraging people to be law-abiding citizens? (Think carefully)

The whole purpose of having secular law, and of enforcing that law, is precisely to encourage people to behave in accordance with certain standards. How can it possibly do this unless there is some “causal relationship” between the “law” and “people following the law”?

Q_Goest said:
You wrote "occurs or does not occur", which is incorrect.
Why is it incorrect? It must be the case that event A “either occurs or does not occur”, or do you think otherwise?

Q_Goest said:
In a deterministic universe, these laws and rules have no influence over future events, they are simply the points in time leading up to a determined event.
Incorrect. In a deterministic universe, each event in the present or past (eg the enactment of a law, the enforcement of a law) may be an antecedent and therefore causal event which influences events in the future.

Q_Goest said:
We can't change any future morally wrong act in a deterministic world.
There you go again. It makes no sense to talk of “changing the future”, because the future hasn’t happened yet. But the events of the present can and do have influence on the events of the future. The future “is what it is” only because the present “is what it is”. And secular law is part of the present.

Q_Goest said:
Those morally wrong acts, whatever they may be, will occur just as the acts of the judicial system will occur.
Yes, in a deterministic world, if event A happens then event A was always going to happen. But it is wrong to conclude from this that “the future is what it is regardless of what we do today”, because the future depends precisely on what we do today. Our actions today determine the future.

Q_Goest said:
We are not preventing, nor reducing the chance of any event.
Of course we are. The present determines the future. What we do today determines what will happen in the future. How can you say that this means “we are not preventing nor reducing the chance of any event”?

Q_Goest said:
Determinism (to me) clearly means there are no agents of any kind which have any influence over future events.
Incorrect. It means there are no “free agents” who can act to cause events “through their own free will”; but deterministic agents have causal powers, and their actions determine future events. It is instead the entire concept of libertarian free will which is incoherent.


Q_Goest said:
To suggest there is a 'we, me, I, them' is to make the error of falling back on the idea there is some kind of responsible agent as suggested by religion or even before such concepts were created.
You are, with respect, confusing “agent” here with “free will agent”. An agent by definition can be 100% deterministic, and can refer to itself as “I”, but you seem to think that all agents by definition must be “free will agents”; I’m not sure why.

Q_Goest said:
As far as I know, science only recognizes some molecular interactions as being strongly emergent. I don't claim to understand solid state physics, but from what I've read, strongly emergent phenomena at the molecular level may be called for.
Can anyone show how strongly emergent phenomena could account for libertarian free will? I don’t think so (which makes the appeal to emergent phenomena as an explanation for free will ineffectual). Any event A must be either deterministic (determined by antecedent events), or indeterministic (random), or stochastic (probabilistic). There is no logical or rational alternative to one, or a combination of, these. Would you like to explain how any combination of these might give rise to something we call libertarian free will?
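The trichotomy MF appeals to (deterministic / indeterministic / stochastic) can be made concrete with three toy update rules, sketched below purely for illustration (the function names and rules are hypothetical). The point of the sketch is that each rule is a well-defined mapping from antecedents to outcomes, and nothing in this space corresponds to an "uncaused chooser".

```python
import random

def deterministic_step(state):
    """Outcome fully fixed by the prior state."""
    return state + 1

def random_step(_state):
    """Outcome entirely independent of the prior state."""
    return random.choice([0, 1])

def stochastic_step(state, p=0.9):
    """Antecedents bias, but do not fix, the outcome."""
    return state + 1 if random.random() < p else state
```

Any mixture of these three kinds of rule still leaves every outcome either caused or random; the burden MF places on the libertarian is to say which category (or combination) "free will" is supposed to occupy.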

Q_Goest said:
I don't believe consciousness is generally considered a strongly emergent phenomenon; it is only weakly emergent, meaning it is reducible to the interactions of its constituent parts (ie: neurons). To suggest it is strongly emergent (to me) indicates one cannot reduce the phenomenon to the interactions of its constituent parts, but that is ill defined.
The mechanisms of emergent phenomena can be rationally and coherently explained, at least in principle, in a manner which is consistent with determinism. Would you like to propose one phenomenon that is generally agreed to be an emergent phenomenon but cannot be rationally explained in this way?

The problems with the notion of libertarian free will are (a) that it explains nothing that we observe about the world that cannot be explained by determinism, and (b) that there is no way, even in principle, that the mechanism of free will can be explained on a rational or coherent basis.

Best Regards

MF
 
Last edited:
  • #19
moving finger said:
Argument #1 – There is No Moral Responsibility
Premise 1: Every action is either caused or uncaused (i.e. a random occurrence).
Premise 2: If an action is caused, then that action was not chosen freely and the person who performed that action is not morally responsible for what he/she has done.
Premise 3: If an action is uncaused (i.e. is a random occurrence), then the person who performed that action is not morally responsible for what he/she has done.

Thus: We are not morally responsible for what we do.

I've voted no, simply because I hold that the only requirement a society needs to hold its members morally accountable for their actions is a fairly well agreed upon set of moral norms, as well as a fairly well agreed upon set of circumstances under which a person is to be held accountable. There may or may not be any metaethical justification for this practice, but I don't think there needs to be. A purely pragmatic justification is enough. The members of a society have the right to create a culture by which their society will continue to cohesively exist, and holding members of that society morally accountable for deviating from the norms of the society is one way of doing this. This does get us into very tricky waters of what exactly constitutes an acceptable norm and under what circumstances a member of a given society is justified in not accepting an unjust norm, but I don't think these need to be addressed for the purposes of this discussion.

The above argument is question-begging, though. Premises 2 and 3 both contain hypothetical conditionals concluding that one is not morally responsible for one's actions, and uses these to conclude that one is not morally responsible for one's actions. A further argument needs to be made for why we should accept these hypothetical conditionals. Why should an action being either caused or uncaused be reason to preclude its actor from moral responsibility? That seems to be the very question we are attempting to answer, and one cannot argue for a negative answer by assuming that answer from the outset.
 
  • #20
Hi loseyourname

Thank you for a very well written post.

moving finger said:
Argument #1 – There is No Moral Responsibility
Premise 1: Every action is either caused or uncaused (i.e. a random occurrence).
Premise 2: If an action is caused, then that action was not chosen freely and the person who performed that action is not morally responsible for what he/she has done.
Premise 3: If an action is uncaused (i.e. is a random occurrence), then the person who performed that action is not morally responsible for what he/she has done.

Thus: We are not morally responsible for what we do.

loseyourname said:
The above argument is question-begging, though. Premises 2 and 3 both contain hypothetical conditionals concluding that one is not morally responsible for one's actions, and uses these to conclude that one is not morally responsible for one's actions. A further argument needs to be made for why we should accept these hypothetical conditionals. Why should an action being either caused or uncaused be reason to preclude its actor from moral responsibility? That seems to be the very question we are attempting to answer, and one cannot argue for a negative answer by assuming that answer from the outset.
Agreed. The premises make implicit assumptions about the definition of moral responsibility.
Premise (2) is making an implicit assumption that moral responsibility entails free will.
Premise (3) is also making an implicit assumption that moral responsibility entails actions are not random.

I believe the assumption in Premise (3) is valid, but the question as to whether moral responsibility entails free will is the very question of this thread. I also believe it does not.

If we assume that moral responsibility does not entail free will, then Premise (2) is false, and the Argument is unsound.

Best Regards

MF
 
  • #21
MF says: "But why posit the existence of such metaphysical free will anyway? What is it exactly about the world that remains unexplained if we assume determinism is true and free will an illusion?"

This is exactly as if one said "it is God's will". Determinism is the same type of metaphysical assumption. It explains nothing.
Like solipsism it is incapable of logical proof or disproof.
If it were true, since everything is predetermined, it is only rational to behave as if it were not. As we do.

Ernies
 
  • #22
Ernies said:
Determinism is the same type of metaphysical assumption. It explains nothing.
Like solipsism it is incapable of logical proof or disproof.
To "explain" anything requires premises or assumptions. To expect to have "proof" of the very basic premises or assumptions of one's beliefs is the philosophical equivalent of tilting at windmills.

The premise of "free will" does not allow us to do away with the premise of determinism, but the premise of determinism does allow us to dispense with the premise of free will.

In other words, the additional premise of free will "adds nothing" in terms of explanatory power. As Laplace commented in a similar context : "I have no need of that hypothesis".

Best Regards
 
  • #23
moving finger said:
The premise of "free will" does not allow us to do away with the premise of determinism, but the premise of determinism does allow us to dispense with the premise of free will

Best Regards

We obviously mean different things by "free will". In my vocabulary it is totally incompatible with determinism. (Yes, I know Lutheran, Calvinist and other theologians thought differently, but they surely were not very rational on the subject.) We all --including you-- feel that we have free will, and act so. Why introduce the additional hypothesis of 'illusion'? You drive me to quoting the old saying "If it swims like a duck, quacks like a duck...".
Determinism seems to me to be simply a desperate attempt to avoid the logical puzzles which arise from our limited capabilities.
 
  • #24
Ernies said:
We obviously mean different things by "free will". In my vocabulary it is totally incompatible with determinism.

You misunderstand.

Positing human free will does not allow us to dispense with determinism in the world – unless you wish to suggest that computers also have free will?

In other words, one can posit that the world contains both free will agents and deterministic agents, or one can posit simply that all agents are deterministic agents.

This is what I meant by :

moving finger said:
The premise of "free will" does not allow us to do away with the premise of determinism, but the premise of determinism does allow us to dispense with the premise of free will

Best Regards
 
  • #25
moving finger said:
You misunderstand.

Positing human free will does not allow us to dispense with determinism in the world – unless you wish to suggest that computers also have free will?

In other words, one can posit that the world contains both free will agents and deterministic agents, or one can posit simply that all agents are deterministic agents.
Looking back I cannot find you saying so. You spoke in the last few posts as though the world was completely deterministic: but we are part of the world. Determinism in the non-sentient world is a matter for experiment, and QM says that it cannot be shown to be wholly so, only at most statistically. There is no evidence at all that the sentient world is deterministic. Your faith surprises me, and again reminds me of theology.

Ernies
 
  • #26
Ernies said:
Determinism in the non-sentient world is a matter for experiment, and QM says that it cannot be shown to be wholly so, only at most statistically. There is no evidence at all that the sentient world is deterministic.
Fundamentally, there is no “evidence” that the world is either deterministic or stochastic at the quantum level – the Heisenberg Uncertainty Principle tells us that there is a limit to our epistemic horizon – the world is fundamentally indeterminable (but it does not follow from this that it is either indeterministic or stochastic). If one believes in determinism or indeterminism (or stochastic behaviour) this is indeed a matter of faith.

But then, ALL logical arguments, and ALL explanations, rest on assumed premises. Assumed premises are "a matter of faith".

But what does this have to do with the topic of this thread?

Are you suggesting that “human free will” arises from indeterministic or stochastic behaviour?

Best Regards
 
  • #27
moving finger said:
But then, ALL logical arguments, and ALL explanations, rest on assumed premises. Assumed premises are "a matter of faith".

But what does this have to do with the topic of this thread?

Are you suggesting that “human free will” arises from indeterministic or stochastic behaviour?

Best Regards

Your choice of pseudonym, obviously from FitzGerald's translation of Omar Khayyám (a wonderful piece of verse), and the bulk of your arguments show your mind-set. If that is your faith, very well. And clearly you 'came out the same door as in you went'. That is precisely why I cannot accept a wholly deterministic world--it offers no possibility of change or acceptance of responsibility, and no convincing rational evidence for its truth.

I most certainly agree with the post (in another forum?) pointing out that 'explanation' is NOT a sub-set of 'description'. I will not quibble as to whether they partly overlap, though I cannot call to mind examples where they do.

I make no suggestion as to how human free will arises. I don't know. Whether we can ever know, I don't know either. But I have seen no convincing rational argument for the emergence of an 'illusion' of it.
Until I do, I will consider it as a reality.

Increasingly the arguments put forward remind me of 12th-century scholastic debates. All born from a desire to impose a single explanation for everything, and ----- but I'd better not quote Shakespeare on the point.

Ernies
 
  • #28
Ernies said:
That is precisely why I cannot accept a wholly deterministic world--it offers no possibility of change or acceptance of responsibility
In what sense does it offer "no possibility of change"?
Change with respect to what?
Of course it offers possibility of acceptance of responsibility – see post #16 of this thread. The notion that we must possess some incoherent metaphysical free will in order to accept responsibility for our actions is philosophically naïve.

Ernies said:
and no convincing rational evidence for its truth.
It fits the observed data, what better evidence in support of an hypothesis could one ask for?
Are you suggesting that there is better evidence for the truth of indeterminism, or of stochastic behaviour, or even some kind of metaphysical free will?

Ernies said:
I most certainly agree with the post (in another forum?) pointing out that 'explanation' is NOT a sub-set of 'description'. I will not quibble as to whether they partly overlap, though I cannot call to mind examples where they do.
Perhaps you could give me an example of an “explanation”, and I will be pleased to show you why it is also a “description”.

Ernies said:
I make no suggestion as to how human free will arises. I don't know. Whether we can ever know, I don't know either. But I have seen no convincing rational argument for the emergence of an 'illusion' of it.
Until I do, I will consider it as a reality.
Everyone is of course free to believe in their illusions.

Best Regards
 
  • #29
Hi MF, I enjoy your posts. You always have something thought provoking to say.

In what sense does it offer "no possibility of change"?
Change with respect to what?
Of course it offers possibility of acceptance of responsibility – see post #16 of this thread.

In your previous arguments, it seems you are suggesting change with respect to the present. That is, the future is a change of the existing state. I think the use of the concept "the future changes with respect to the present" doesn't properly address the argument regarding moral responsibility.

What I disagree with is the suggestion that moral responsibility can be applied to a deterministic world in which there is no possibility of a change with respect to what could be. I don't see how you can attach moral responsibility to an agent that has no capacity to change "the future with respect to different possible futures".

So we have two different meanings for the phrase "change with respect to". They are:
1. The future is a change of state with respect to the present.
2. The future is a change of state with respect to different possible futures.

I don't see how the use of the first one can be applied to the concept of moral responsibility. Certainly the future is a change with respect to the present, just as a tidal wave changes the landscape of the shore line, but that certainly doesn't have anything to do with moral responsibility. We don't say a tidal wave is morally responsible for its actions.

Applying the second one (to me at least) absolves any deterministic agent from moral responsibility simply because moral responsibility implies there is more than one possible future. Just as a tidal wave has no ability to change the future with respect to different possible futures, neither does ANY deterministic agent, regardless of how you define that agent. A person is no different than a tidal wave given a completely deterministic world. They are both deterministic agents. How can one be "morally responsible" but the other not? How do you define "moral responsibility" with respect to a deterministic agent? Perhaps more to the point, what is the definition of a deterministic agent which can be assigned moral responsibility as opposed to one which can not?

The following discussion hopefully will aid in your response.

Disclaimer: I won't claim tremendous knowledge regarding FSA's here. I'm still learning.

It seems there's considerable debate between computationalists and non-computationalists over whether any open system can be mapped onto an FSA (finite state automaton) (e.g. Putnam, Bishop, Maudlin). If this is the case, we're forced to concede either panpsychism is true or computationalism is dead. The problem I see is that if we add the premise of a deterministic world to this argument, I believe one can safely say that any system, regardless of what control volume* is used to define it, can be defined as an FSA. I'll try and explain that premise if you tell me it is unclear, but the following will utilize that premise.

If we simply hold that ONLY an FSA with the right input and output as argued by Chalmers can be conscious, as I believe you're assuming in making your arguments, and since an FSA with input and output is NOT necessarily conscious, then how do you define moral responsibility? Both FSAs (conscious and non-conscious ones) are equally deterministic. Do you say only the FSAs that support consciousness are morally responsible?

I'm not sure how we even define consciousness in the case of an FSA in a completely deterministic world. More difficult yet, I think the definition of "moral responsibility" stands on exceptionally questionable grounds. Both conscious and unconscious agents can be considered agents of change, so from that perspective they are completely equal: both are deterministic, and both are equally capable of making change in any way the term "change" might be defined. Thus, if an unconscious FSA is NOT morally responsible, one will need to prove that a conscious one IS. The presumption has always been that an unconscious agent is NOT morally responsible. But regardless of whether an FSA is conscious or not, each is equal in its ability to make a change; the conscious FSA is no different in that regard than an unconscious one, so neither of them is morally responsible (unless we redefine moral responsibility).

* The term "control volume" here roughly means any three-dimensional volume around which we wish to identify causal inputs and outputs; thus, for a deterministic world, any control volume is an FSA.
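To make the "deterministic agent" claim concrete, here is a minimal sketch of a deterministic FSA as a transition table. All the names and states are my own illustrative assumptions, not anything from the literature cited above; the only point is that the same starting state plus the same inputs always reproduces the same trajectory, which is all "deterministic" means here.

```python
# Illustrative sketch: a deterministic FSA as a transition table.
# States, symbols and names are assumptions for the example.

def run_fsa(transitions, start, inputs):
    """Run a deterministic FSA; same start state + same inputs -> same path."""
    state, path = start, [start]
    for symbol in inputs:
        state = transitions[(state, symbol)]  # exactly one successor state
        path.append(state)
    return path

# A toy two-state machine.
T = {("s0", 0): "s0", ("s0", 1): "s1",
     ("s1", 0): "s0", ("s1", 1): "s1"}

# "Replaying" the same circumstances reproduces the same trajectory.
assert run_fsa(T, "s0", [1, 0, 1]) == run_fsa(T, "s0", [1, 0, 1])
```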
 
  • #30
Q_Goest said:
I don't see how you can attach moral responsibility to an agent that has no capacity to change "the future with respect to different possible futures".
I’m not sure the notion “change the future with respect to different possible futures” makes sense?

Do you mean “select one future from a range of different (epistemically) possible futures”? (which does seem to make sense to me).

Q_Goest said:
So we have two different meanings for the phrase "change with respect to". They are:
1. The future is a change of state with respect to the present.
2. The future is a change of state with respect to different possible futures.
Why invoke the phrase “change with respect to”? How can the future “change” with respect to different possible futures if that future has not happened yet? The “different possible futures” are not existing physical entities, they are epistemic possibilities.
I would simply say, for (2), the future is a state which is selected from different (epistemically) possible states.

Q_Goest said:
Applying the second one (to me at least) absolves any deterministic agent from moral responsibility simply because moral responsibility implies there is more than one possible future. Just as a tidal wave has no ability to change the future with respect to different possible futures, neither does ANY deterministic agent, regardless of how you define that agent. A person is no different than a tidal wave given a completely deterministic world. They are both deterministic agents. How can one be "morally responsible" but the other not? How do you define "moral responsibility" with respect to a deterministic agent? Perhaps more to the point, what is the definition of a deterministic agent which can be assigned moral responsibility as opposed to one which can not?
I believe free will is an incoherent concept, but that would be a separate discussion.

Below is an explanation of how I believe moral responsibility arises in deterministic agents, followed by my suggestion for the necessary and sufficient conditions for moral responsibility :

We do not hold a very young child to be morally responsible for her actions, but we do normally hold a mature adult to be morally responsible for her actions. Moral responsibility is clearly something we are not born with, but which is learned and acquired over time.

How does this happen? Here is a suggested mechanism.

Maria is a very young child. Initially, Maria’s parents will make decisions for her. As Maria starts to become self-aware and she experiments within her world, her parents may allow Maria to make some minor exploration of her own, but all the while they will continue to provide external parental control and guidance. Maria’s parents retain responsibility for her actions.

By exploring and interacting with her world, under the care and supervision of her parents, Maria begins to understand and appreciate some of the consequences of various actions which she carries out. She learns how to exercise control over particular bodily actions, and she begins to understand that these actions often have consequences beyond the direct result of the action itself. She may discover for example that knocking over a glass of milk causes a messy result on the carpet. Over a period of time, by a process of trial and error, Maria builds up a fairly comprehensive internal mental database (though she need not necessarily be consciously aware of that fact) consisting of “various actions and the reasonably expected consequences of those actions”.

In parallel with this, Maria’s parents and other adults will be providing “value feedback” to Maria on the desirability and appropriateness of various actions and behaviour under particular circumstances, guiding her and correcting her in her actions and her behaviour where appropriate. The implicit objective in such guidance is to provide Maria with basic information on which actions and behaviour (and consequences thereof) are desirable, and which are undesirable, according to the mores of her parents and other adults. This information also becomes digested and assimilated by Maria, and becomes incorporated into her mental database of “various actions, the reasonably expected consequences of those actions, and the relative desirability or undesirability of those consequences”. This is the way that Maria’s parents start to provide her with the beginnings of a moral value and belief framework.

At this stage, Maria is still not expected to be held morally responsible for her actions. Once Maria has acquired significant experience in both the consequences of her actions and the relative desirability/undesirability of those consequences, she should be able to start making rational judgements of her own about whether to take a particular action or not. Such rational judgements may not always be conscious; most will likely be subconsciously evaluated.

Let us summarise Maria’s developmental position at this stage :

Anticipation : Using her internal mental database built from experience, Maria is now able to anticipate at least some of the reasonably expected (predictable) consequences of a certain action A.

Evaluation : Using the same mental database, Maria is also able to evaluate at least some of the expected consequences from action A against an internal standard of what is desirable or undesirable.

Determination : According to her evaluation of the anticipated consequences, Maria is also able to determine (choose) whether to carry out action A or not.

Maria now has the basic elements in place (anticipation, evaluation and determination) to enable her to start making rational decisions about possible actions. Again, it is important to understand that such steps may not be consciously taken by Maria, much of the processing involved in the anticipation, evaluation and determination of actions and behaviour is likely to be subconscious.

What happens from this point on is that through a continuous process of experimentation and feedback, Maria is able to reinforce and consolidate her internal “database” of actions, consequences and desirability, to build upon that database and to expand it to incorporate new scenarios, new environments and new circumstances. At the same time, Maria is also becoming more and more subconsciously and consciously familiar with the rationality of the process steps of anticipation, evaluation and determination. It is important to understand that though Maria may not always formally or consciously carry out such an explicit evaluation, the rational evaluation is nevertheless being carried out within her subconscious, and the results are then being presented to her conscious mind.

As Maria is doing this, she is also gradually experimenting with “taking ownership and responsibility” for the internal database and the results of the evaluation process within her mind. This phase will also be encouraged and reinforced by her parents, who will start to praise her for making good judgements, and possibly chastise her for making bad judgements. Maria is still not being held morally responsible, but she is effectively “praised and blamed” for her various actions simply as a means to both develop and reinforce the actions/consequences/desirability database and the associated rational processing that is already taking place.

There now follows an informal handover period. As Maria’s parents realize that she is indeed able to start acting “responsibly” (she controls her actions and behaviour according to the value standards that her parents and other adults have taught her), they start to allow her to take more and more decisions for herself. In parallel with this, as they see that Maria is making rational and responsible decisions, Maria’s parents will also start to transfer responsibility for these decisions more and more to Maria. The question arises : At what point does Maria actually start to take moral responsibility, and how exactly does this happen?

The answer is that it does not happen “all at once”. There is no point in time at which we can say “before this time Maria is not responsible, and after this time she is fully responsible”. There is a period of transition, of handover, where Maria is effectively testing and trialling her decision making process, taking various actions and checking that the consequences are indeed in line with expectations and desirability. In a sense, Maria is “test-driving” her decision-making process, making sure that both she and her parents and other adults are happy and comfortable with the outcomes in a range of circumstances. There is no formal handover point at which her parents say “OK Maria, now you are ready to take responsibility for your own actions”. Instead there is a gradual and phased handover, step by step allowing Maria to take increasing levels of responsibility. At the same time Maria is becoming more and more internally confident that she does actually accept and “own” the internal mental database and decision-making process which guides her actions, to the eventual point where she indeed takes responsibility.

Ownership is key to taking responsibility. At a certain point in her development, Maria will accept that the internal rational decision-making process which has developed in her mind over a period of years (including the anticipation/evaluation/determination stages and the associated actions/consequences/desirability database) are now “hers”, they are owned by her as a part of “what she is”, internally and mentally. Of course Maria does not do this explicitly by saying “I now take ownership of my internal decision-making process”. Instead she gradually develops certain dispositional beliefs about herself, namely she comes to believe that she is a rational and causative agent with the power to initiate and control her actions in appropriate ways. In addition, the people around her (parents, friends, other adults) also develop corresponding beliefs about her, they also come to believe that she is a rational and causative agent with the power to initiate and control her actions in appropriate ways. It is from this point forward that Maria can and does start to accept moral responsibility for her actions.

The entire human process of “learning to own one’s decision-making process and learning to take responsibility for one’s actions” can and does take a very long time. For example, in some parts of the world the law does not recognise an individual’s right to “take responsibility for drinking alcohol” until that individual is 21 years old.

Note that the process described herein is entirely deterministic, there is no point in any of this where we need to invoke any kind of mystical or metaphysical notion of “free will”.

From the above, we can extract and summarise the necessary and jointly sufficient conditions for moral responsibility :

We say that an agent is responsible, and may be held accountable, for an action A when the agent is able to own and to follow the following rational process:
1) anticipate the reasonably expected (predictable) consequences of both A and ~A
2) evaluate the expected consequences from (1) against an internally accepted standard of what is desirable or undesirable
3) determine (choose) either A or ~A according to the outcome of the evaluations in conditions (1) and (2) above.

These three conditions, the ability to anticipate, evaluate and determine, are the three necessary and jointly sufficient conditions for moral responsibility; these are all that is required to generate ownership and responsibility.
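As an illustration of how little machinery the three conditions require, they can be sketched as a purely deterministic procedure. This is a toy of my own construction (the action names, "mental database" and value scores are all assumed for the example); it simply shows that anticipation, evaluation and determination need nothing beyond table lookups and comparisons:

```python
# Toy sketch of the three conditions as a deterministic decision procedure.
# All names and values are illustrative assumptions.

def anticipate(action, model):
    """Condition 1: look up the reasonably expected consequences of an action."""
    return model.get(action, [])

def evaluate(consequences, values):
    """Condition 2: score consequences against an internal desirability standard."""
    return sum(values.get(c, 0) for c in consequences)

def determine(action, model, values):
    """Condition 3: choose A or ~A according to the evaluations; fully deterministic."""
    score_a = evaluate(anticipate(action, model), values)
    score_not_a = evaluate(anticipate("not_" + action, model), values)
    return action if score_a >= score_not_a else "not_" + action

# Maria's toy "mental database" of actions -> consequences, and her values.
model = {"knock_glass": ["milk_on_carpet"], "not_knock_glass": ["clean_carpet"]}
values = {"milk_on_carpet": -1, "clean_carpet": +1}

assert determine("knock_glass", model, values) == "not_knock_glass"
```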

Best Regards
 
  • #31
moving finger said:
Everyone is of course free to believe in their illusions.

Tu quoque! 'Free'! You certainly are.

Ernies
 
  • #32
Hi MF,
We say that an agent is responsible, and may be held accountable, for an action A when the agent is able to own and to follow the following rational process:
1) anticipate the reasonably expected (predictable) consequences of both A and ~A
2) evaluate the expected consequences from (1) against an internally accepted standard of what is desirable or undesirable
3) determine (choose) either A or ~A according to the outcome of the evaluations in conditions (1) and (2) above.
I guess we'll have to simply agree to disagree on #3 above. "Choice" implies many things which don't apply here IMHO. In the case of a deterministic agent (an FSA or Touring machine), that "choice" can, in principle, be reduced to a single change in some entry on a table, placed there by a read/write head whose action is ontically deterministic, regardless of whether it's knowable. I see no more reason to assign "moral responsibility" to the change in that particular entry than to any other entry change. "Moral responsibility" also implies many things which can't be applied to the deterministic action of any FSA or Touring machine.
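To illustrate what I mean by reducing a "choice" to a single table entry, here is a toy sketch (names and rules are my own, purely illustrative) of one deterministic machine step: read a symbol, look up the single matching table entry, write, and move. At every step there is exactly one possible successor:

```python
# Toy sketch of one deterministic machine step (illustrative names only).

def step(table, state, tape, head):
    """One step: read the symbol, look up the one matching entry, write, move."""
    symbol = tape[head]
    new_state, write, move = table[(state, symbol)]  # exactly one entry applies
    tape[head] = write
    return new_state, tape, head + move

# Toy rule: in state "q0" reading 0, write 1, move right, stay in "q0".
table = {("q0", 0): ("q0", 1, 1)}
state, tape, head = step(table, "q0", [0, 0], 0)
assert (state, tape, head) == ("q0", [1, 0], 1)
```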
 
  • #33
Please don't be offended: but I think you mean Turing machine, named after the famous Alan Turing, not "Touring machine".
 
  • #34
Ernies said:
Tu quoque! 'Free'! You certainly are.

Ernies
Yes, we all are.

Deterministically free :biggrin:

(which to me means "the ability to choose, unconstrained by external forces")

Best Regards
 
  • #35
moving finger said:
We say that an agent is responsible, and may be held accountable, for an action A when the agent is able to own and to follow the following rational process:
1) anticipate the reasonably expected (predictable) consequences of both A and ~A
2) evaluate the expected consequences from (1) against an internally accepted standard of what is desirable or undesirable
3) determine (choose) either A or ~A according to the outcome of the evaluations in conditions (1) and (2) above.
Q_Goest said:
I guess we'll have to simply agree to disagree on #3 above. "Choice" implies many things which don't apply here IMHO.
Such as? Can you elaborate?

Do you deny that a deterministic machine can make choices?

(many seem to assume that “ability to choose” is inconsistent with determinism, but that is simply incorrect)

Q_Goest said:
"Moral responsibility" also implies many things which can't be applied to the deterministic action of any FSA or Touring machine.
Such as? Can you elaborate?
What else do you think is needed before an agent can be deemed responsible for its actions?

(I suspect you are going to suggest that moral responsibility cannot arise from determinism alone, and must entail something called "free will"?)

But before you answer, consider the following :

Imagine, if you will, a thought experiment.
Let's say that you could have chosen to have either tea or coffee with your breakfast this morning.
Your choice was unconstrained by external factors, you were able to choose according to your will.
Let's say (for the sake of argument) that you chose coffee.

Suppose that now we could "rewind the clock", and set absolutely everything back to precisely the same way that it was just before your decision (including all your internal neurophysiological states etc). (I know this is impossible in practice - it's a thought experiment after all). Would your decision be the same again (would you again choose coffee) the "second time around"?

If you think it would not be the same, what explanation would you suggest for it being different to the first time (ie what rational or logical reason can you give for it being different)?

Suppose you could now repeat this thought experiment 100 times, so that you get 100 results. What do you think would be the outcome?

Would you choose "coffee" 100 times out of 100? (this would imply causal determinism)

Would you choose "tea" 50 times and "coffee" 50 times? (this would imply simple random behaviour).

Would you choose "tea" perhaps 20 times and "coffee" 80 times? (this would imply stochastic behaviour).

What empirical outcome would you expect from the above experiment if you really had "genuine free will", and why?

You may say "but I always had the option to choose tea!"
Yes, you had the option to choose tea. Nobody, and no force, external to you forced you to choose coffee “against your will”. You acted according to your will. And your will determined that you wanted coffee. And if you could reset the clock and take precisely the same decision all over again, it would still be the case that you could choose tea if you wanted to, but it would also still be the case that your “will” determines that you choose coffee. To suggest otherwise (to suggest that we would choose differently if we could repeat the choice) implies either that our will is NOT in control, or it implies that our will is acting randomly.

Who wants to believe that their willed decisions are random? How could random willed decisions be the basis for taking responsibility for our actions? The only way we can claim responsibility for our actions is if those actions are determined precisely by what we want to happen. And what we want to happen will be exactly the same if we rewind the clock and repeat the choice.
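The three hypothetical outcomes of the replay experiment can be sketched as sampling rules. This is a toy illustration with assumed names; only the deterministic case is asserted, since the other two are random by construction:

```python
import random

# Toy sampling rules for the three hypothetical replay outcomes (names assumed).

def deterministic_choice(_state):
    return "coffee"  # same prior state -> same choice, on every replay

def simple_random_choice(_state):
    return random.choice(["tea", "coffee"])  # 50/50, unrelated to the will

def stochastic_choice(_state):
    return "coffee" if random.random() < 0.8 else "tea"  # biased, not fixed

# 100 "replays" from an identical prior state, under causal determinism:
replays = [deterministic_choice("breakfast_state") for _ in range(100)]
assert replays.count("coffee") == 100  # coffee 100 times out of 100
```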

It’s not easy to give up the notion that we are non-deterministic agents. But if you think carefully and logically about the issue, and try to avoid jumping to irrational emotional conclusions, you’ll see that it’s the only rational course to take.

Best Regards
 
