The so-called Darwinian model of free will

  • Thread starter: moving finger
  • Tags: Free will, Model
Summary
The discussion centers on the "Darwinian model" of free will, which is proposed to demonstrate how indeterminacy can grant free will to deterministic agents. The model is criticized and rebranded as the "Accidental model," as it introduces randomness through a "random idea generator" (RIG) and a deterministic "sensible idea selector" (SIS). Critics argue that this model does not genuinely provide free will, as it can lead to capricious behavior and non-optimal decision-making. The conversation also touches on the flawed assumption that free will can emerge solely from the interplay of determinism and indeterminism, suggesting instead that free will is a feature of consciousness. Ultimately, the Accidental model fails to convincingly endow free will, raising questions about the relationship between consciousness, determinism, and indeterminism.
  • #31
moving finger said:
I believe that free will defined in this way can explain everything that we know and experience in respect of free will.
Tournesol said:
No, it leaves at least one thing out...
moving finger said:
What this definition does NOT show is that “we could have done differently to what we actually did (given identical circumstances)”.
The important question is: Is the concept “we could have done differently to what we actually did (given identical circumstances)” something that we genuinely “know and experience in respect of free will”, or is it simply an assumption?
Tournesol said:
You are confusing two different issues here:
a)whether could-have-done-otherwise is part of our concept of free will, and
b) whether there is real evidence for it.
(Hereafter “could have done otherwise” is abbreviated to CHDO)
Statement (a) containing “our concept of free will” is a subjective statement. Tournesol’s “concept of free will” may be different to moving finger’s “concept of free will”.
I have provided my definition of free will, which I claim encapsulates everything that we genuinely experience when we say that we have free will.
I would argue that “CHDO” is not something that we “genuinely experience”, rather it is a belief that (some of us) hold (which belief may or may not be true).
Tournesol said:
We can see that a) is the case simply by noting that there have been centuries of dispute between the claims of FW and determinism; if CHDO had never been part of our concept of FW, that would not have been the case.
CHDO has never been part of moving finger’s concept of free will, and I would guess that it is also not part of many others’ concept of free will. I am not sure whom you refer to when you say that “CHDO is part of our concept of free will”, and regardless of how many “votes” you have in support of your view, “argumentum ad numerum” (arguing on the basis that more people support your particular proposition) is logically fallacious. To establish the soundness of the CHDO concept, it needs to be SHOWN that the hypothesis "CHDO is a necessary element of free will" is a logically sound hypothesis, and not “dictated by popular vote”.
Tournesol said:
You may have removed it from your definition, but all that shows is that you are using an idiosyncratic definition.
CHDO has not been “removed from my definition”. It never has been part of my definition, because I see no logical basis to support CHDO.
To suggest that a particular definition is simply “idiosyncratic” is merely an unsubstantiated matter of opinion. Even if it could be shown that the “majority of people believe that CHDO is a necessary element of free will” this would prove nothing – this is equivalent to an “argumentum ad populum” (arguing by appealing to the people) and/or “argumentum ad numerum” (arguing on the basis that more people support your particular proposition) – both of which are logically fallacious arguments. Logical truth is not determined by democratic vote.
Tournesol said:
As to b), the only evidence against CHDO is evidence for strict ontic determinism -- which, you say, is lacking, along with evidence for indeterminism. But if (in)determinism is an open question, so is CHDO.
The real problem with the concept of CHDO, which disqualifies it as a scientific hypothesis, is explained in the following :

The Reason Why “CHDO is a necessary element of free will” Is An Unsupportable Hypothesis
1) CHDO is supposed to be a necessary element of free will.
2) It follows from (1) that any agent without CHDO does not possess free will. CHDO thus “endows” the ability to act freely on an otherwise “unfree” agent.
3) CHDO is incompatible with strict determinism. I expect that you will agree with this.
4) Therefore IF CHDO does exist, it must be based on indeterminism.
5) The fundamental problem is that nobody can come up with any workable hypothesis (including a coherent model) which shows just how indeterminism is supposed to endow free will on an otherwise “unfree” agent. In other words, there is no workable hypothesis which shows how CHDO “works in respect of endowing free will”.
(By “show how CHDO works” I do not mean simply show how indeterminism allows different possible futures, I mean show how this indeterminism can be translated into anything that we could recognise as "free will in action". In other words, CHDO as the basis of free will is an empty concept with no real explanatory power).

Why should we logically reject CHDO as being necessary for free will?
It is incorrect to reject CHDO on the grounds of personal preference.
It is incorrect to reject CHDO on the grounds of popular vote.
It is incorrect to reject CHDO on the grounds of being falsified experimentally.
It is possible to reject CHDO on the basis of Occam’s razor (though this is not done here).
It is correct to reject "CHDO as being necessary for free will" on the basis (as explained above) that it cannot provide a coherent and workable hypothesis which has any explanatory power in respect of free will.
Tournesol said:
To say "there is no evidence for indeterminism, therefore determinism is true, and CHDO is false" is a dubious manouvre
This statement assumes incorrectly the reasons for rejection of “CHDO as the basis for free will”, hence is irrelevant.

May your God go with you

MF
 
  • #32
I believe that free will defined in this way can explain everything that we know and experience in respect of free will.


No, it leaves at least one thing out...

What this definition does NOT show is that “we could have done differently to what we actually did (given identical circumstances)”.

No, of course not. No definition shows the actual existence of anything. If we are unable to discover unicorns, the conclusion is that unicorns, as defined, do not exist; we do not react by changing the definition. Likewise, if we make the empirical discovery that there is no CHDO, the conclusion is that FW does not exist (as determinists indeed claim), not that it needs to be re-defined.



The important question is: Is the concept “we could have done differently to what we actually did (given identical circumstances)” something that we genuinely “know and experience in respect of free will”, or is it simply an assumption?

It's part of the definition of FW. Defining a word in a particular way assumes nothing about what is or is not true.

You are confusing two different issues here:
a)whether could-have-done-otherwise is part of our concept of free will, and
b) whether there is real evidence for it.

(Hereafter “could have done otherwise” is abbreviated to CHDO)
Statement (a) containing “our concept of free will” is a subjective statement. Tournesol’s “concept of free will” may be different to moving finger’s “concept of free will”.

You find out what the conventional definition of a word is by reference to dictionaries etc. I have argued that the traditional definition must include CHDO, or there would have been no centuries-long debate between libertarians and determinists.

I have provided my definition of free will, which I claim encapsulates everything that we genuinely experience when we say that we have free will.

Do we experience someone else's inability to predict our actions?

You obviously half-way agree that CHDO is part of FW, or you would not have included unpredictability as a substitute. Without that extra element, FW would just reduce to rationality, and it is difficult to see how people could wrangle for 2,000 years about whether rationality is compatible with the laws of nature.


I would argue that “CHDO” is not something that we “genuinely experience”, rather it is a belief that (some of us) hold (which belief may or may not be true).

And you repeat your usual error here: one can believe that CHDO is part of the definition of FW without believing it actually exists; indeed, determinists believe that it is part of the definition, and that FW doesn't exist precisely because of the incompatibility of CHDO with the nature of physical law, as they see it.



We can see that a) is the case simply by noting that there have been centuries of dispute between the claims of FW and determinism; if CHDO had never been part of our concept of FW, that would not have been the case.
CHDO has never been part of moving finger’s concept of free will, and I would guess that it is also not part of many others’ concept of free will. I am not sure whom you refer to when you say that “CHDO is part of our concept of free will”,

All the people who have taken sides on the free-will vs determinism debate
over the ages.

and regardless of how many “votes” you have in support of your view, “argumentum ad numerum” (arguing on the basis that more people support your particular proposition) is logically fallacious.

Only if you make your usual mistake of confusing matters of definition with
matters of fact.

Empirical statements are true or false, and their truth or falsehood is not given by popular assent.

Definitions are not so much true or false as conventional or unusual. Popular assent is indeed enough to establish that a definition is conventional; indeed, it is the only way.

To establish the soundness of the CHDO concept, it needs to be SHOWN that the hypothesis "CHDO is a necessary element of free will" is a logically sound hypothesis, and not “dictated by popular vote”.

That statement is entirely based on your usual mistake.


You may have removed it from your definition, but all that shows is that you are using an idiosyncratic definition.
CHDO has not been “removed from my definition”. It never has been part of my definition, because I see no logical basis to support CHDO.
To suggest that a particular definition is simply “idiosyncratic” is merely an unsubstantiated matter of opinion.

I have substantiated my claim by reference to the existence of a
free-will/determinism debate in philosophy.


Even if it could be shown that the “majority of people believe that CHDO is a necessary element of free will” this would prove nothing – this is equivalent to an “argumentum ad populum” (arguing by appealing to the people) and/or “argumentum ad numerum” (arguing on the basis that more people support your particular proposition) – both of which are logically fallacious arguments. Logical truth is not determined by democratic vote.

No, it does not prove anything as a matter of fact, by itself.



As to b), the only evidence against CHDO is evidence for strict ontic determinism -- which, you say, is lacking, along with evidence for indeterminism. But if (in)determinism is an open question, so is CHDO.

The real problem with the concept of CHDO, which disqualifies it as a scientific hypothesis, is explained in the following :

The Reason Why “CHDO is a necessary element of free will” Is An Unsupportable Hypothesis
1) CHDO is supposed to be a necessary element of free will.




2) It follows from (1) that any agent without CHDO does not possess free will. CHDO thus “endows” the ability to act freely on an otherwise “unfree” agent.

As a necessary but not sufficient criterion. Other things, such as rationality, are needed too.

3) CHDO is incompatible with strict determinism. I expect that you will agree with this.

Yes.


4) Therefore IF CHDO does exist, it must be based on indeterminism.


Yes.

5) The fundamental problem is that nobody can come up with any workable hypothesis (including a coherent model) which shows just how indeterminism is supposed to endow free will on an otherwise “unfree” agent. In other words, there is no workable hypothesis which shows how CHDO “works in respect of endowing free will”.

Indeterminism implies that there is more than one possible outcome to a given situation, whether or not that situation involves a rational agent.

The difficult bit is to see how indeterminism fails to undermine rationality (i.e. how it fails to result in the situation of an agent possessing CHDO, but not possessing rationality, and therefore lacking FW, since rationality is a necessary criterion of FW).

That hard problem is precisely what "Buridan vs Darwin" addresses.

(By “show how CHDO works” I do not mean simply show how indeterminism allows different possible futures, I mean show how this indeterminism can be translated into anything that we could recognise as "free will in action". In other words, CHDO as the basis of free will is an empty concept with no real explanatory power).


*Your* definition of FW requires:
1) unpredictability in the eyes of an observer
2) rationality


1) is given a fortiori by indeterminism (epistemic unpredictability is a corollary of ontic indeterminism).

2) is explained by my argument.

What is your actual objection? Are you saying that I have failed to rescue rationality? Or are you appealing to some further alleged feature of FW (i.e. to a *third* definition)?


Why should we logically reject CHDO as being necessary for free will?
It is incorrect to reject CHDO on the grounds of personal preference.
It is incorrect to reject CHDO on the grounds of popular vote.
It is incorrect to reject CHDO on the grounds of being falsified experimentally.
It is possible to reject CHDO on the basis of Occam’s razor (though this is not done here).
It is correct to reject "CHDO as being necessary for free will" on the basis (as explained above) that it cannot provide a coherent and workable hypothesis which has any explanatory power in respect of free will.

It is part of the definition of FW, not something proposed in order to explain it. (But what underpins CHDO, namely indeterminism, *would* explain your substitute for CHDO, namely epistemic unpredictability -- ontic indeterminism explains epistemic unpredictability.)
 
  • #33
Tournesol said:
No definition shows the actual existence of anything. If we are unable to discover unicorns, the conclusion is that unicorns, as defined, do not exist; we do not react by changing the definition.
Here you are confusing “definition of CHDO” with “definition of free will”
Your logic would seem to run as follows :
“If we are unable to discover unicorns, the conclusion is that unicorns, as defined, do not exist”
The corollary in the case of CHDO is :
If we are unable to discover CHDO, the conclusion is that CHDO, as defined, does not exist.
The non-existence of CHDO says nothing necessarily about the existence or non-existence of free will.
Tournesol said:
if we make the empirical discovery that there is no CHDO, the conclusion is that FW does not exist (as determinists indeed claim), not that it needs to be re-defined.
The correct conclusion in this case is that “TO-FW does not exist” (where by TO-FW we mean “free will as defined by Tournesol”).
Free will in the sense defined by MF (let us call this MF-FW) does not require CHDO, hence showing that CHDO does not exist would have no implications for the existence of MF-FW.
moving finger said:
The important question is: Is the concept “we could have done differently to what we actually did (given identical circumstances)” something that we genuinely “know and experience in respect of free will”, or is it simply an assumption?
Tournesol said:
It's part of the definition of FW.
It's part of the definition of TO-FW. It is not part of the definition of MF-FW.
Tournesol said:
I have argued that the traditional definition must include CHDO, or there would have been no centuries-long debate between libertarians and determinists.
And what about compatibilists?
The reasons for the debate are complex; imho it is not simply the case that all sides in the debate agree on the concept of CHDO. A compatibilist, for example, would not necessarily agree that CHDO exists, but also would not agree that FW does not exist.
The debate on “free will” actually has its origins long before terms like libertarian, determinist and compatibilist were coined. It has long been thought by some that “we are captains of our own fate” in the sense that we humans can somehow act more or less independently of the physical world. Philosophers have argued for centuries how or whether such a concept can possibly be coherent – witness the ongoing debate on Cartesian dualism. This, and not CHDO, is at the root of free will. The concept of CHDO is simply one small component in this ongoing debate.
My position here is that the hypothesis of CHDO must rest on indeterminism (which you agree with), that (as far as I am aware) nobody has ever shown how indeterminism can endow anything to an agent apart from an element of random behaviour, and free will (if it is anything to do with being “captains of our own fate”) is not endowed by random behaviour. My claim is therefore that the hypothesis of CHDO has no explanatory power in respect of what we think of as free will, and certainly the hypothesis of CHDO does nothing to support the position that “we are captains of our own fate”.
I would be very happy to be shown that I am wrong in this.
Can you show how indeterminism (which we agree must be at the foundation of CHDO) endows anything to an agent which explains how the agent might be “captain of its own fate”?
Does the hypothesis of CHDO make any testable predictions?
Tournesol said:
Do we experience someone else's inability to predict our actions ?
Yes. I can test by experiment the predictions of the hypothesis “Tournesol is unable to consistently predict my actions”– and show that it is indeed true.
Tournesol said:
You obviously half-way agree that CHDO is part of FW , or you would not have included unpredictability as a substitute.
Actually the reverse is true. Unpredictability is an epistemic property, whereas determinism is an ontic one. The world could be 100% deterministic (which, I think you agree, would rule out CHDO) and yet still be unpredictable.
Tournesol said:
Without that extra element, FW would just reduce to rationality, and it is difficult to see how people could wrangle for 2,000 years about whether rationality is compatible with the laws of nature.
With respect, FW is not “just about rationality”, FW is about the question “how can we be captains of our own fate?” and thus (in many ways) it is about complexity, chaos, game theory, evolution, survival of the fittest, consciousness – concepts that we are only just beginning to understand.
Fundamentally, imho the free will question is “how can we define and model free will such that both the definition and model explain how we can be captains of our own fate?”, and how can this model and definition of free will at the same time be coherent, consistent, explanatory, and fit with what we actually observe?
It is not obvious to me that CHDO is an essential or even useful part of either this model or definition.
Tournesol said:
Empirical statemens are true or false, and their truth or falsehood is not given by popular assent.
The truth of any statement depends on the definitions of the terms used. If the agents debating the truth of the statement do not agree on the definitions of the terms used then they may not agree on the truth of the statement.
moving finger said:
To suggest that a particular definition is simply “idiosyncratic” is merely an unsubstantiated matter of opinion.
Tournesol said:
I have substantiated my claim by reference to the existence of a free-will/determinism debate in philosophy.
And I have answered that claim above.
Even if it were the case that “CHDO has always been part of the concept of free will” (which I have disputed), it is clear that the basic free will question (“how can we define and model free will such that it explains how we can be captains of our own fate?”) is still unresolved – CHDO contributes nothing to the explanatory power of the model - perhaps it's time for a new paradigm.
Tournesol said:
The difficult bit is to see how indeterminism fails to undermine rationality (i.e. how it fails to result in the situation of an agent possessing CHDO, but not possessing rationality, and therefore lacking FW, since rationality is a necessary criterion of FW).
The difficult bit imho is seeing how indeterminism allows us to be “captains of our own fate”. I can’t see how it does, can you?
Tournesol said:
That hard problem is precisely what "Buridan vs Darwin" addresses.
With respect, Buridan vs Darwin may address, but does not provide an answer to, the free will question. It shows how indeterminism must be at the root of CHDO, but it does not show how indeterminism endows any agent with the ability to be “captain of its own fate”.
So far, I have not seen any explanation of how indeterminism can endow anything that could be called free will – in other words, the “CHDO hypothesis” is an empty hypothesis – there is nothing behind it which actually explains anything useful about free will.
Tournesol said:
*Your* definition of FW requires:
1) unpredictability in the eyes of an observer
2) rationality
1) is given a fortiori by indeterminism (epistemic unpredictability is a corollary of ontic indeterminism).
An ontically indeterministic system is epistemically unpredictable, but that does not allow us to conclude that epistemic unpredictability implies ontic indeterminism.
A deterministically chaotic system is by definition deterministic, but nevertheless it is unpredictable.
MF-FW requires unpredictability, and unpredictability is consistent both with determinism and with indeterminism, hence MF-FW is consistent both with a deterministic and with an indeterministic world.
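To make the point about deterministic chaos concrete, here is a minimal sketch (in Python; the logistic map and the starting values are purely illustrative assumptions on my part) of a system that is 100% deterministic and yet practically unpredictable – two initial states that no observer could tell apart soon yield completely different trajectories:
Code:
# Sketch: determinism without predictability.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is fully deterministic,
# yet a tiny uncertainty in the initial state grows until prediction fails.

def logistic_trajectory(x0, r=4.0, steps=41):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

true_state = logistic_trajectory(0.2)        # the world as it actually is
estimate   = logistic_trajectory(0.2000001)  # an observer's best measurement

for n in (0, 10, 20, 30, 40):
    print(n, round(true_state[n], 6), round(estimate[n], 6))
# After a few dozen steps the two columns bear no resemblance to each other,
# even though every step was computed by the same deterministic rule.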
Tournesol said:
What is your actual objection? Are you saying that I have failed to rescue rationality? Or are you appealing to some further alleged feature of FW (i.e. to a *third* definition)?
My objection is that IF CHDO exists, the “source” of CHDO must be indeterminism – but indeterminism simply causes random behaviour - neither you nor anyone else can come up with an explanatory model which shows how indeterminism endows anything on an agent which we would recognise as free will in action (ie which “allows us to be captains of our own fate”). Therefore CHDO has no real explanatory power in respect of free will.
If you believe that you can come up with an explanatory model which shows how such indeterminism endows anything on an agent which we would recognise as free will in action (ie how it is that we could claim to be “captains of our own fate”) then I would be only too pleased to take a look at it (the explanations of Buridan vs Darwin I have seen so far do not explain how indeterminism endows anything that could be called free will in this sense).
MF
 
  • #34
moving finger said:
Here you are confusing “definition of CHDO” with “definition of free will”
Your logic would seem to run as follows :
“If we are unable to discover unicorns, the conclusion is that unicorns, as defined, do not exist”
The corollary in the case of CHDO is :
If we are unable to discover CHDO, the conclusion is that CHDO, as defined, does not exist.
The non-existence of CHDO says nothing necessarily about the existence or non-existence of free will.
If CHDO is part of the definition of FW, it says something about the existence of FW (just as the presence or absence of a horn on an animal's forehead says something about the (non)existence of unicorns).
The correct conclusion in this case is that “TO-FW does not exist” (where by TO-FW we mean “free will as defined by Tournesol”).
...and by philosophical tradition.
Free will in the sense defined by MF (let us call this MF-FW) does not require CHDO, hence showing that CHDO does not exist would have no implications for the existence of MF-FW.
It's part of the definition of TO-FW. It is not part of the definition of MF-FW.
The reasons for the debate are complex; imho it is not simply the case that all sides in the debate agree on the concept of CHDO. A compatibilist, for example, would not necessarily agree that CHDO exists, but also would not agree that FW does not exist.
Compatibilists feel the need to come up with substitutes for CHDO.
Even you do, in the form of epistemic unpredictability.
The debate on “free will” actually has its origins long before terms like libertarian,
determinist and compatibilist were coined. It has long been thought by some that “we are captains of our own fate” in the sense that we humans can somehow act more or less independently of the physical world. Philosophers have argued for centuries how or whether such a concept can possibly be coherent – witness the ongoing debate on Cartesian dualism. This, and not CHDO, is at the root of free will. The concept of CHDO is simply one small component in this ongoing debate.
It is an extraordinary claim that CHDO is quite separate from being "captains of our own fate". It is a concept arrived at precisely by putting the poetic concept "captains of our own fate" on a precise footing. If we are not COFs, presumably we are causally determined by the rest of the universe - and therefore have no CHDO.
If you have an alternative analysis, fine -- but don't just pretend that the two concepts have nothing to do with each other.
My position here is that the hypothesis of CHDO must rest on indeterminism (which you agree with), that (as far as I am aware) nobody has ever shown how indeterminism can endow anything to an agent apart from an element of random behaviour,
That is of course precisely what I have shown.
and free will (if it is anything to do with being “captains of our own fate”) is not endowed by random behaviour.
That of course depends on one's definition of FW. I have shown that indeterminism does endow my definition (CHDO + a realistic amount of rationality). I can't see how it can fail to endow your definition (epistemic unpredictability + rationality), since that is essentially a weaker variant of my definition.
I have asked you before where you think the specific failure to "endow free will" lies, and you seem to be avoiding the question.
My claim is therefore that the hypothesis of CHDO has no explanatory power in respect of what we think of as free will, and certainly the hypothesis of CHDO does nothing to support the position that “we are captains of our own fate”.
You clearly have no alternative way of putting the vague and poetic
COF requirement on a precise and logical footing, so that is a vacuous complaint.
I would be very happy to be shown that I am wrong in this.
Can you show how indeterminism (which we agree must be at the foundation of CHDO) endows anything to an agent which explains how the agent might be “captain of its own fate”?
It shows a) how an agent is not a causal puppet of the universe in general (simply by being indeterministic), and
b) how an agent is nonetheless not a puppet of indeterminism itself, since the RIG's output is filtered and selected by the SIS -- the SIS is the element of control.
What else is there? When are you going to stop saying that I am wrong and start saying why I am wrong?
Does the hypothesis of CHDO make any testable predictions?
Yes. I can test by experiment the predictions of the hypothesis “Tournesol is unable to consistently predict my actions”– and show that it is indeed true.
Actually the reverse is true. Unpredictability is an epistemic property. The world could be 100% deterministic, an ontic property, (and if it is I think you agree this would rule out CHDO), and yet it still would be unpredictable.
Whatever. The traditional theory of FW requires ontic indeterminism, and I have shown how this can exist without endangering rationality of a realistic
kind.
With respect, FW is not “just about rationality”, FW is about the question “how can we be captains of our own fate?” and thus (in many ways) it is about complexity, chaos, game theory, evolution, survival of the fittest, consciousness – concepts that we are only just beginning to understand.
Your previously stated definition does require rationality.
You seem to be withdrawing that definition in favour of a statement to the effect that we can't even specify what FW is (“concepts that we are only just beginning to understand”).
Is that correct?
Would it not be more honest to make your abandonment of your previous definition of FW explicit, if that is what you are indeed doing?
Fundamentally, imho the free will question is “how can we define and model free will such that both the definition and model explain how we can be captains of our own fate?”, and how can this model and definition of free will at the same time be coherent, consistent, explanatory, and fit with what we actually observe?
I believe I have done all that. Replacing my definition of FW with another, less tractable definition does not overturn that at all. If the name of the game is to come up with a definition that works, that is what I have done. The fact that you can come up with another definition that doesn't work doesn't affect that.
It is not obvious to me that CHDO is an essential or even useful part of either this model or definition.
Then why halfway buy into it with epistemic unpredictability?
The truth of any statement depends on the definitions of the terms used. If the agents debating the truth of the statement do not agree on the definitions of the terms used then they may not agree on the truth of the statement.
OK. What is the definition of "Captain of one's Fate"?
Even if it were the case that “CHDO has always been part of the concept of free will” (which I have disputed), it is clear that the basic free will question (“how can we define and model free will such that it explains how we can be captains of our own fate?”) is still unresolved – CHDO contributes nothing to the explanatory power of the model - perhaps it's time for a new paradigm.
CHDO is part of the definition. Ontic indeterminism is part of the model.
The difficult bit imho is seeing how indeterminism allows us to be “captains of our own fate”. I can’t see how it does, can you?
That depends on what you mean by COF. What do you mean by COF?
With respect, Buridan vs Darwin may address, but does not provide an answer to, the free will question. It shows how indeterminism must be at the root of CHDO, but it does not show how indeterminism endows any agent with the ability to be “captain of its own fate”.
It does explain that, if being COF is a combination of CHDO and rationality.
It also does if COF is your alternative definition of FW.
It only doesn't if COF is maintained as a completely fuzzy, ill-defined concept -- or rather, we can't tell whether it does or not. But that is a completely spurious manoeuvre. It is not a valid form of argument to reject definitions where they are offered in favour of a vague, illogical approach.
So far, I have not seen any explanation of how indeterminism can endow anything that could be called free will – in other words, the “CHDO hypothesis” is an empty hypothesis – there is nothing behind it which actually explains anything useful about free will.
I have written hundreds of words of explanation. There is no substance to your rejection because you either don't know what you mean by FW, or you mean something I can explain easily. Either way, you have no real argument.
An ontically indeterministic system is epistemically unpredictable, but that does not allow us to conclude that epistemic unpredictability implies ontic indeterminism.
I didn't say it did. But if I can succeed in showing how ontic indeterminism is hypothetically compatible with FW, I have succeeded a fortiori in showing how epistemic unpredictability is also compatible.
MF-FW requires unpredictability, and unpredictability is consistent both with determinism and with indeterminism, hence MF-FW is consistent both with a deterministic and with an indeterministic world.
Irrelevant. The question is whether my model explains FW as defined by you: it does. (Bearing in mind that COF is not a definition, but an attempt at avoiding precision.)
My objection is that IF CHDO exists, the “source” of CHDO must be indeterminism – but indeterminism simply causes random behaviour - neither you nor anyone else can come up with an explanatory model which shows how indeterminism endows anything on an agent which we would recognise as free will in action (ie which “allows us to be captains of our own fate”).
Whether it does or not depends on the definition of FW being used. It does according to my definition and your clear definition. Whether it does or not according to COF is unclear, since no-one knows what COF means. You can't have it both ways. You can't employ a deliberately vague concept, and then make definitive statements about whether it has been explained or not.
therefore CHDO has no real explanatory power in respect of free will.
If you believe that you can come up with an explanatory model which shows how such indeterminism endows anything on an agent which we would recognise as free will in action (ie how it is that we could claim to be “captains of our own fate”) then I would be only too pleased to take a look at it (the explanations of Buridan vs Darwin I have seen so far do not explain how indeterminism endows anything that could be called free will in this sense).
No-one knows what "this sense" is. You clearly cannot specify what
the problem is. You expect us to take your word that a concept
you cannot specify logically has not been explained, but there is no
reason why anyone should.
 
  • #35
Tournesol said:
If you have an alternative analysis, fine -- but don't just pretend that the two concepts have nothing to do with each other.
With respect, Tournesol, I’m not “pretending” anything – I am simply asking if anyone can SHOW how these two concepts – CHDO and free will – are actually associated with each other (apart from the rather meaningless method of defining one in terms of the other).
In other words, and to be specific, can anyone show how the basic principle underlying CHDO, ie indeterminism, results in an agent which is now COF, where it is NOT COF in the absence of indeterminism?
moving finger said:
My position here is that the hypothesis of CHDO must rest on indeterminism (which you agree with), that (as far as I am aware) nobody has ever shown how indeterminism can endow anything to an agent apart from an element of random behaviour,
Tournesol said:
That is of course precisely what I have shown.
Where have you shown this? If you are referring here to the Darwinian model, I don’t see how this model shows how the basic principle underlying CHDO, ie indeterminism, results in an agent which is now COF, where it is NOT COF in the absence of indeterminism.
moving finger said:
and free will (if it is anything to do with being “captains of our own fate”) is not endowed by random behaviour.
Tournesol said:
That of course depends on one's definition of FW. I have shown that indeterminism does endow my definition (CHDO + a realistic amount of rationality).
Indeterminism endows CHDO, yes. But that is not the point I am making here.
Can you or anyone else show how indeterminism (which is necessary for CHDO) also results in an agent which is now COF, where it is NOT COF in the absence of indeterminism?
Tournesol said:
I have asked you before where you think the specific failure to "endow free will" lies, and you seem to be avoiding the question.
With respect, I am trying to state the essential question as clearly as possible. Let me repeat it:
Can you or anyone else show how indeterminism (which is necessary for CHDO) also results in an agent which is now COF, where it is NOT COF in the absence of indeterminism?
Now, which question is it that you think I am avoiding?
Tournesol said:
You clearly have no alternative way of putting the vague and poetic COF requirement on a precise and logical footing, so that is a vacuous complaint.
Imho the only way to truly understand free will is first to identify and then abandon the ideas which are not working, such as the idea that CHDO is a part of free will – and then to move to a new paradigm which involves a definition of free will in terms of concepts that DO work. But I cannot explain how this can be done as long as you insist on clinging to the concept that “CHDO must be part of free will”. First of all we have to establish whether or not CHDO provides any real answers.
moving finger said:
Can you show how indeterminism (which we agree must be at the foundation of CHDO) endows anything to an agent which explains how the agent might be COF?
Tournesol said:
It shows a) how an agent is not a causal puppet of the universe in general (simply by being indeterministic)
OK, I can see how indeterminism results in “random behaviour” in an otherwise rational machine, but how is it that “tacking on indeterminism” suddenly makes an agent “captain of its own fate” where it was not COF before? How exactly is it supposed to work?
Tournesol said:
b) how an agent is nonetheless not a puppet of indeterminism itself, since the RIG's output is filtered and selected by the SIS -- the SIS is the element of control.
Imho this still does not show how an otherwise deterministic agent has now suddenly been transformed into an agent which is COF.
Tournesol said:
What else is there? When are you going to stop saying that I am wrong and start saying why I am wrong?
I am trying to show you why it is wrong. I am trying to show you that “this is not enough”. What you suggest only endows indeterminism, it does not endow free will in the sense of the agent now being COF.
The Darwinian model, as far as I can see, is simply a random idea generator tacked onto the front of a deterministic decision-maker. I can see how the entire system would then no longer be deterministic, but how is it that you think such a system is NOW suddenly COF where it was NOT COF in the absence of the random idea generator?
Tournesol said:
The traditional theory of FW requires ontic indeterminism, and I have shown how this can exist without endangering rationality of a realistic kind.
You have shown how indeterminism can exist, yes, and also how it could endow CHDO. I do not dispute that. But the indeterminism you have introduced is nothing more than a “random element” in an otherwise deterministic machine – how is it that this machine can now claim to be COF?
Tournesol said:
Your previously stated definition does require rationality.
You seem to be withdrawing that definition in favour of a statement to the effect that we can't even specify what FW is (“concepts that we are only just beginning to understand”).
I am withdrawing nothing. My belief is that we have to re-think what we actually mean by free will, because the “traditional concept” of free will as you like to call it, is incoherent. We need a new paradigm – but we cannot work towards a new paradigm as long as we continue to cling onto ideas (such as indeterminism and CHDO) that do not work (ie have no explanatory power).
Tournesol said:
Would it not be more honest to make your abandonment of your previous definition of FW explicit, if that is what you are indeed doing?
I am trying to understand firstly whether or not I am correct in my belief that indeterminism and CHDO are vacuous concepts in terms of explaining free will – once that is established we may be able to work towards a new definition that does explain what is going on.
moving finger said:
Fundamentally, imho the free will question is “how can we define and model free will such that both the definition and model explain how we can be captains of our own fate?”, and how can this model and definition of free will at the same time be coherent, consistent, explanatory, and fit with what we actually observe?
Tournesol said:
I believe I have done all that. Replacing my definition of FW with another, less tractable definition does not overturn that at all. If the name of the game is to come up with a definition that works, that is what I have done.
You believe you have done that?
Great!
Then you surely can explain exactly how indeterminism (which is necessary for CHDO) also results in an agent which is now COF, where it is NOT COF in the absence of indeterminism?
moving finger said:
It is not obvious to me that CHDO is an essential or even useful part of either this model or definition.
Tournesol said:
Then why halfway buy into it with epistemic unpredictability?
We can verify experimentally that human agents behave unpredictably. Therefore it must be the case EITHER that they are at least epistemically unpredictable OR that they are indeed ontically indeterministic. One or the other (or both) must be true in order to fit the observed facts.
Tournesol said:
OK. What is the definition of "Captain of one's Fate"?
My brief suggestion : An agent which is captain of its fate can be said to be acting rationally and at the same time not controlled or unduly influenced by external factors.
I am very happy to accept improvements to the definition, or even a completely different definition, if you have any to suggest.
Tournesol said:
I have written hundreds of words of explanation. There is no substance to your rejection because you either don't know what you mean by FW, or you mean something I can explain easily.
I agree you have written hundreds of words, but I cannot see the explanation in there anywhere.
OK, If you are so convinced you are right and I am wrong, and that your concept of CHDO is part of free will, what exactly do you mean by free will?
MF
 
  • #36
There are two disparate assumptions here.

Assumption #1: "Free will is endowed by indeterminism."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that consciousness is not computational, that it relies on quantum mechanics and that quantum mechanics is indeterminate. I say this because no other known natural phenomenon can provide for indeterminate processes.*

Assumption #2: "Free will is endowed by determinate processes."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that one of the most contentious features of consciousness is determinate, which would imply, but not prove, that consciousness is computational.

Would you agree? How can one prove either case? It doesn't seem like there's a resolution to be had, because in the end, the results of what you have proven speak volumes about consciousness itself. You'll need more than a good argument if you're to prove either. You need a theory which can examine the phenomenon analytically and determine if it is possible or not.

Personally, I think the best you can do is to suggest free will is a feature of consciousness, and attempt to disprove/prove that. But that seems like an axiom as opposed to something which needs to be proven. You could also create definitions around that assumption, such as what I've suggested earlier, that free will is the sensation of making a decision, and one can then argue whether that sensation feels as if it is determinate or not, but not if it is truly determinate or not. Certainly the sensation feels 'random', but can you also say that the sensation feels indeterminate? It seems the argument is based on a gut feel regarding this sensation - more than any strict logic which can be built upon to prove either case.

*Note: Yes, MF, I know, I know. <grin> Determinism/indeterminism is beyond our ability to know because of non-local hidden variables, etc… We must however make the assumption that if we prove something is indeterminate, then we've also proven indeterminacy exists and the most likely candidate is QM.
 
  • #37
Why the Darwinian Model Does Not Work

Tournesol said:
What else is there ? When are you going to stop saying that I am wrong and start saying why I am wrong.

As succinctly as possible - here is what is wrong with the Darwinian model (in fact, here is what is wrong with the whole idea that "indeterminism endows free will").

Taking the definitions of RIG and SIS as before, let us suppose we "run" the model twice under identical conditions. In other words, we are simply testing what the model claims to provide, namely the ability to "do otherwise" given identical circumstances. Let us call these two runs "Run 1" and "Run 2".

In Run 1 the RIG throws up (randomly) a finite number of possible courses of action. Let us suppose that included in the possible courses of action thrown up by the RIG are actions A and B. The SIS examines these and selects Action A as the "best choice" out of the ones made available by the RIG. Therefore, given a "free choice" between A and B, the SIS deterministically chooses A.

In Run 2 the RIG throws up (randomly) another finite number of possible courses of action. This time action B is included in the actions thrown up, but action A is not. The SIS examines these and now selects Action B as the "best choice" out of the ones made available by the RIG.

Clearly the model has indeed "done otherwise" in Run 2 compared to Run 1, just as (it would seem) CHDO requires - in Run 1 it chose A and in Run 2 it chose B.

But did the model “do otherwise” in run 2 out of a “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG, remember, is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!

Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?

The important question that we need to ask is : Since the model chose A rather than B in run 1, exactly WHY did it choose B in Run 2?

It is the answer to this question "WHY" which shows us where the Darwinian model fails.

Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No, clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
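
To make this concrete, here is a toy simulation of repeated runs (a sketch in Python; the option names, the utilities, the 50% proposal probability and the seed are illustrative assumptions of mine, not part of the model as Tournesol has stated it). The SIS always picks the best option it is offered, so any variation between runs comes entirely from what the RIG happens to withhold:
Code:
import random

# Toy sketch of the so-called Darwinian model:
# a Random Idea Generator (RIG) proposes a subset of the possible actions,
# and a deterministic Sensible Idea Selector (SIS) picks the best one offered.

OPTIONS = {"A": 10, "B": 7, "C": 3}   # illustrative utilities: A is always the rational best

def rig(rng):
    """Randomly propose a non-empty subset of the possible actions."""
    proposed = [option for option in OPTIONS if rng.random() < 0.5]
    return proposed or [rng.choice(sorted(OPTIONS))]

def sis(proposals):
    """Deterministically select the highest-utility proposal."""
    return max(proposals, key=OPTIONS.get)

rng = random.Random(0)                # seed chosen only so the runs can be replayed
for run in range(1, 6):
    proposals = rig(rng)
    choice = sis(proposals)
    note = "  <-- FDO: best action blocked by the RIG" if "A" not in proposals else ""
    print(f"Run {run}: offered {sorted(proposals)}, chose {choice}{note}")
# Whenever A is offered, the SIS chooses A. Any variation between runs therefore
# comes from what the RIG withheld, not from a free choice - restriction, not freedom.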

Which kind of "free will" would you prefer to have?

One where you can choose rationally between all possible courses of action (the deterministic model)...
Or one where your choices are necessarily restricted by indeterminism such that you may be forced to take a non-optimum choice, whether you like it or not (the Darwinian model)?

Free will is supposed to be "acting free of restrictions"...
Please do point out any errors in interpretation or conclusion.

MF

Postscript : For the avoidance of doubt, I am not here assuming or asserting that the world is either deterministic or indeterministic. I am simply looking at some of the characteristics and implications of the so-called Darwinian model of "free will", to discover whether it has any explanatory power in the sense of endowing anything that could be described as free will in any sense of the word. I think it is quite clear from the above example that the Darwinian model (and this fact is shared by all indeterministically-driven models of free will), rather than endowing anything that we might wish to have in the form of free will, in fact "robs us" of the possibility of making optimum choices. The Darwinian model is designed to deliberately restrict (via the RIG) some of the choices available to an agent, thereby forcing the agent to make non-optimal choices under the misnomer of "CHDO".

MF
 
  • #38
Q_Goest said:
There are two disparate assumptions here.
Assumption #1: "Free will is endowed by indeterminism."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that consciousness is not computational, that it relies on quantum mechanics and that quantum mechanics is indeterminate. I say this because no other known natural phenomenon can provide for indeterminate processes.*
Assumption #2: "Free will is endowed by determinate processes."
- If you can prove that, and if you further assume that free will is a feature of consciousness, you have proven that one of the most contentious features of consciousness is determinate, which would imply, but not prove, that consciousness is computational.
Before we can do either, we need to agree a definition of free will. The basic problem is that "libertarians" will not accept a definition of free will which is based on determinism (as evidenced by my debate with Tournesol above), and determinists will of course not accept a definition of free will which is based on indeterminism (or - determinists simply deny the existence of free will).

Q_Goest said:
Would you agree? How can one prove either case?
imho the solution to the problem is to
(1) remain open-minded on exactly "what free will is" (ie do not rule anything in or anything out), then
(2) explore the implications of various models and paradigms (as I have been examining the implications of the Darwinian model in post #37 above), to see what kind of free will (i.e. what properties of free will) these models endow, then
(3) ask oneself, for each model, "is this the kind of free will that is worth having?" (as I have done for the Darwinian model in #37 above)
(4) repeat steps 1 to 3 for all possible models (indeterministic and deterministic), and decide which one(s) is(are) best

As far as I can see, my analysis of the Darwinian model (above) can be applied generally to all indeterministic models of decision making. The general conclusion is that indeterminism does not endow anything that we would "want" to have as free will - it endows at worst only random behaviour or (at best, in the case of the Darwinian model) it necessarily restricts the choices available to us, so that we are always forced (by the random behaviour) to make non-optimum choices.

Q_Goest said:
It doesn't seem like there's a resolution to be had, because in the end, the results of what you have proven speak volumes about consciousness itself.
There is no resolution as long as people remain dogmatic and prejudiced in their definitions, along the lines of "free will MUST contain an element of CHDO, by definition!".

I am asking people to free their minds of preconceptions, free themselves of dogma, and start looking at the world objectively and scientifically. Only this way can we arrive at a true understanding.

Q_Goest said:
You'll need more than a good argument if you're to prove either.
The "proof" is in demonstrating the properties of various models - using the 4-step process I have outlined above, and eliminating those models which do not work.

Q_Goest said:
You need a theory which can examine the phenomenon analytically and determine if it is possible or not.
Personally, I think the best you can do is to suggest free will is a feature of consciousness, and attempt to disprove/prove that.
That may turn out to be the case. But I think we can do more than that. I think we can ask questions like "is indeterminism necessary for free will, and what are the consequences of this hypothesis?"

Q_Goest said:
But that seems like an axiom as opposed to something which needs to be proven. You could also create definitions around that assumption, such as what I've suggested earlier, that free will is the sensation of making a decision, and one can then argue whether that sensation feels as if it is determinate or not, but not if it is truly determinate or not. Certainly the sensation feels 'random', but can you also say that the sensation feels indeterminate? It seems the argument is based on a gut feel regarding this sensation - more than any strict logic which can be built upon to prove either case.
I feel the road to understanding free will is NOT to get locked in dogmatic definitions of "what free will is" or "what free will is not", and then get backed into a corner trying to defend those definitions. The road to understanding is to rise above definitional prejudice and dogma, and examine the real consequences of some of the proposed models.

Q_Goest said:
*Note: Yes, MF, I know, I know. <grin> Determinism/indeterminism is beyond our ability to know because of non-local hidden variables, etc… We must however make the assumption that if we prove something is indeterminate, then we've also proven indeterminacy exists and the most likely candidate is QM.
lol - yes, ok, point taken :biggrin: . I know I can be like a stuck record. The important point is that we have NOT proven that QM is indeterminate (only that it might be).

MF
 
  • #39
An agent which is captain of its fate can be said to be acting rationally and at the same time not controlled or unduly influenced by external factors

If "not controlled by" means "not causally determined by" that is
simply a re-statement of my definition of free will. If it does not
mean that...what does it mean ?
 
  • #40
Tournesol said:
If "not controlled by" means "not causally determined by" that is
simply a re-statement of my definition of free will. If it does not
mean that...what does it mean ?
I'm happy to agree that my suggested definition of "captain of one's fate" is essentially the same as your definition of free will. I have no problem with that at all.

I have pointed out why I consider the "Darwinian model" fails to endow anything that could be considered to be "free will" in post #37 of this thread, would you care to respond?

With respect

MF
 
  • #41
MF, your argument in post #37 above seems reasonable to me, and I'd agree there seem to be no obvious benefits to having a random or indeterminate ability to select between any given number of choices. But on the other hand, this doesn't seem like an argument that can provide any insight into why free will should emerge from, or be endowed by, the process of making a choice, regardless of whether that choice is deterministic or not. I don't suppose it was intended for that, though; it is an attempt to dispute that indeterministic processes are beneficial, which could be true. But being beneficial doesn't do anything to suggest why something would be endowed by a physical process.

If a choice is nothing more than a switch comparing two inputs and either making a determinate switch position or an indeterminate switch position, then how is this magical switch which gives rise to this feeling of free will any different from another switch? Do all switches produce the sensation of free will? (rhetorical question, don't answer <lol>)

If you are a computationalist, you might suggest that free will emerges from the sum total of all switch positions or in other words, by the sum of all computations. There is no single switch which endows anything.

If you disagree that computationalism can provide for consciousness (what is the term for that - "anti-computationalist"? hehe) then you might complain that deterministic processes in the brain can't provide for consciousness because there's no need to be aware of making a decision when the decision is simply the result of a calculation. The determinist's "free will" doesn't have any meaning whatsoever. There is no choice made and there is no such thing as a choice, so why should one expect a sensation from such a thing?

The comeback would seem to be that the determinist would say a choice WAS made, that two or more concepts emerged in the computation and a selection was made. I think this largely proves the point that it doesn't matter if you're a computationalist or an anti-computationalist, free will doesn't emerge except from conscious experience.

Before we can do either, we need to agree a definition of free will.

Def: Free will is a feature of consciousness. It is not a process, so suggesting it is determinate or indeterminate is false IMHO. Suggesting free will is endowed by determinate or indeterminate processes is no different than suggesting love, hate, curiosity or any other emotion is endowed by determinate or indeterminate processes. The question about free will being endowed by a process is nonsensical to begin with. That's my story and I'm stickin' to it! lol

I think we can ask questions like "is indeterminism necessary for free will, and what are the consequences of this hypothesis?"

I respectfully disagree. To suggest you can ask questions as you've proposed presupposes they can be answered in such terms as determinate and indeterminate processes. Since free will is a feature of consciousness, you can't answer the question without having some pre-defined concept of consciousness.

I feel the road to understanding free will is NOT to get locked into dogmatic definitions of "what free will is" or "what free will is not", and then get backed into a corner trying to defend those definitions. The road to understanding is to rise above definitional prejudice and dogma, and examine the real consequences of some of the proposed models.

Yep. Doing my best to sidestep any dogmatic concepts of processes endowing such things! <lol>
 
  • #42
Try to herd cats.

The answer comes out
like an arrow.
 
  • #43
meL said:
Try to herd cats.
The answer comes out
like an arrow.
that cats are unpredictable, yes :smile:

MF
 
  • #44
Modelling Decision-Making Machines

We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

What now follows is a description of a slightly more complex model based on a parallel deterministic/random idea generator combination, which not only CAN endow genuine CHDO but ALSO is one in which the random idea generator creates new possible courses of action for the agent, rather than restricting possible courses of action.

Firstly let us define a Deterministic Idea Generator (DIG) as one in which alternate ideas (alternate possible courses of action) are generated according to a rational, deterministic procedure. Since it is deterministic the DIG will produce the same alternate ideas if it is re-run under identical circumstances.

Next we define a Random Idea Generator (RIG) as one in which alternate ideas (alternate possible courses of action) are generated according to an epistemically random procedure. Since it is epistemically random the RIG may produce different ideas if it is re-run under epistemically identical circumstances.
Note that the RIG may be either epistemically random and ontically deterministic (hereafter d-RIG), or it may be epistemically random and ontically indeterministic (hereafter i-RIG). Both the d-RIG and the i-RIG will produce different ideas when re-run under epistemically identical circumstances. (For clarification – a d-RIG behaves similarly to a computer random number generator (RNG). The RNG produces epistemically random numbers, but if the RNG is reset then it will produce the same sequence of numbers that it did before).
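
To make the d-RIG/RNG analogy concrete, here is a tiny Python sketch (purely illustrative – the seed value is arbitrary and my own choice):

import random

d_rig = random.Random(42)                  # a d-RIG: epistemically random, ontically deterministic
first_run = [d_rig.randint(1, 100) for _ in range(5)]

d_rig.seed(42)                             # resetting the generator, as described above
second_run = [d_rig.randint(1, 100) for _ in range(5)]

print(first_run == second_run)             # True: a reset d-RIG repeats its earlier "random" output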

Next we define a Deterministic Idea Selector (DIS) as a deterministic mechanism for taking alternate ideas (alternate possible courses of action), evaluating these ideas in terms of payoffs, costs and benefits etc to the agent, and rationally choosing one of the ideas as the preferred course of action.

Finally we define a Random Idea Selector (RIS) as a mechanism for taking alternate ideas (alternate possible courses of action), and choosing one of the ideas as the preferred course of action according to an epistemically random procedure. Since it is epistemically random the RIS may produce a different choice if it is re-run under epistemically identical circumstances (ie with epistemically identical input ideas).

These four basic building blocks, the DIG, RIG, DIS and RIS, may then be assembled in different ways to create various forms of “idea-generating and decision-making” models with differing properties.
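
For anyone who finds code easier to follow than prose, here is one possible way of sketching the four building blocks in Python. This is only a toy illustration – the “situation” dictionary, the option pools and the payoff function are invented placeholders, not part of the model definitions above:

import random

def dig(situation):
    # Deterministic Idea Generator: a fixed rational procedure, so identical
    # circumstances always yield identical ideas.
    return sorted(situation["rational_options"])

def rig(situation, rng):
    # Random Idea Generator: an epistemically random sample of candidate ideas.
    # Pass random.Random(seed) for a d-RIG, random.SystemRandom() as a stand-in for an i-RIG.
    pool = situation["rational_options"] + situation["wild_options"]
    return rng.sample(pool, rng.randint(1, len(pool)))

def dis(ideas, payoff):
    # Deterministic Idea Selector: rationally picks the idea with the highest payoff.
    return max(ideas, key=payoff)

def ris(ideas, rng):
    # Random Idea Selector: picks one idea at random, with no evaluation at all.
    return rng.choice(ideas)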

In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities). As we have seen in post #37, the so-called Darwinian model is an example of FDO.

Deterministic Agent
DIG -> DIS

The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
The Deterministic agent will always make the same choice given the same (identical) circumstances.
Clearly there is no possibility of CHDO.
A Libertarian would claim that such an agent does not possess free will, but a Compatibilist might not agree.

Capricious (Random) Agent
DIG -> RIS

The Capricious agent comprises a DIG which outputs possible courses of action which are then input to a RIS.
Also known as the Buridan’s Ass model.
The Capricious agent will make epistemically random choices, even under the same (epistemically identical) circumstances.
Clearly there is the possibility for the agent to “choose otherwise” given epistemically identical circumstances, but since the choice is made randomly and not rationally this is an example of FDO.
I doubt whether even a Libertarian would claim that such an agent possesses free will.
(Note that making the agent ontically random, ie indeterministic, rather than epistemically random does not change the conclusion.)
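
A minimal, self-contained Python sketch of the two agents described so far (Deterministic and Capricious), with invented options and payoffs, may help make the contrast concrete:

import random

options = ["A", "B", "C"]
payoff = {"A": 3, "B": 2, "C": 1}.get          # A is always the rationally best option

def deterministic_agent():                     # DIG -> DIS
    ideas = sorted(options)                    # DIG: the same rational ideas every run
    return max(ideas, key=payoff)              # DIS: rational selection

def capricious_agent(rng):                     # DIG -> RIS (Buridan's Ass)
    ideas = sorted(options)                    # DIG: the same rational ideas every run
    return rng.choice(ideas)                   # RIS: random selection, no evaluation

print([deterministic_agent() for _ in range(5)])              # always ['A', 'A', 'A', 'A', 'A']
print([capricious_agent(random.Random()) for _ in range(5)])  # varies from run to run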

So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.
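
The FDO point can be seen in a few lines of Python (again a toy sketch with invented options and payoffs; whether the RIG is a d-RIG or an i-RIG makes no difference to the outcome):

import random

payoff = {"A": 3, "B": 2, "C": 1}.get          # A is always the rationally best option

def darwinian_agent(rng):                      # RIG -> DIS
    pool = ["A", "B", "C"]
    ideas = rng.sample(pool, rng.randint(1, len(pool)))   # RIG: a random subset of ideas
    return max(ideas, key=payoff)              # DIS: rational, but only over what the RIG offered

rng = random.Random()
print([darwinian_agent(rng) for _ in range(5)])
# On any run where the RIG fails to throw up "A", the agent settles for "B" or "C":
# it has "done otherwise", but not because it chose to -- this is FDO, not CHDO.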

Parallel DIG-RIG Agent
DIG -> }
..... -> } DIS
RIG –> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic), and therefore presumably it would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable to Determinists and Compatibilists alike (since it is deterministic).
If the Parallel DIG-RIG is based on an i-RIG then it is both indeterministic and unpredictable. It therefore does endow CHDO (and this time it is GENUINE CHDO, not the FDO offered by the Darwinian model), and presumably it would be accepted by a Libertarian (but obviously not by either a Determinist or a Compatibilist) as an explanatory model for free will.
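
Here is a toy Python sketch of the Parallel DIG-RIG agent, in the same invented setting as the earlier sketches (a seeded generator would stand in for a d-RIG, an unseeded or hardware source for an i-RIG):

import random

rational_options = ["A", "B"]                  # what the DIG generates by rational procedure
wild_options = ["C", "D"]                      # ideas only the RIG might throw up
payoff = {"A": 3, "B": 2, "C": 5, "D": 0}.get  # occasionally a "wild" idea (C) beats the rational ones

def parallel_agent(rng):                       # DIG + RIG -> DIS
    ideas = set(rational_options)              # DIG: every rational option, every run
    pool = rational_options + wild_options
    ideas |= set(rng.sample(pool, rng.randint(0, len(pool))))   # RIG: may add extra candidates
    return max(ideas, key=payoff)              # DIS: rational choice over the combined set

rng = random.Random()
print([parallel_agent(rng) for _ in range(5)])
# The agent never does worse than the best rational option "A" (the DIG guarantees that),
# but on runs where the RIG surfaces "C" it does even better -- unpredictable, yet never forced.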

Conclusion
We have shown that the so-called Darwinian model incorporating a single random idea generator gives rise to an indeterministic agent without endowing genuine CHDO. However, a suitable combination of deterministic and indeterministic idea generators, in the form of the Parallel DIG-RIG model, can form the basis of a model decision making machine which does endow genuine CHDO, and is genuinely both indeterministic yet rational.

Constructive criticism welcome!

MF
 
Last edited:
  • #45
Moving finger, I really appreciated this careful post. This essay, http://cscs.umich.edu/~crshalizi/notebooks/symbolic-dynamics.html (which I found via Marcus's Atiyah thread on Strings Branes and LQG, and the comments on Peter Woit's blog which Marcus links to), is another way to bring stochastic behavior (even the "real" kind) out of continuous deterministic physics by means of limiting coarse-graining strategies. The essay is by Cosma Shalizi, a very respected mathematician in this area (nonlinear stochastic models), and I especially want to tout the many links he gives, which taken together amount to a copious training resource on it.
 
Last edited by a moderator:
  • #46
MF, I like the way you've laid this out. Your thinking is clear, and it forms a fairly good basis to discuss advantages/disadvantages of deterministic and indeterministic processes to AI. Taking the focus of the discussion away from simply "what processes endow free will" and focusing on what advantages/disadvantages there are to AI would IMHO be of great benefit to this discussion.

I noticed you also broke out the DIS from the RIS, as up until now the logic you've used to berate indeterminism was focused on JUST the RIS and not the RIG, which is where indeterminism may actually be of use. More on this in a moment.

Finally, thanks also for putting in bold the abbreviations as my memory is as useful as a spaghetti strainer for drinking coffee. Now I only need to remind myself to scroll up!

It's interesting you've concluded that indeterminate processes can be of value in decision making. The conclusion you've reached is echoed by the "Darwin" model, here:
Objection 3: "Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having."
Is that so ? Computer programmes can consult random-number generators where needed (including 'real' ones implemented in hardware). The rest of their operation is perfectly deterministic. Why should the brain not be able to call on indeterminism as and when required, and exclude it the rest of the time ? And if random numbers are useful for computers, why should indeterministic input be useless for brains ? Is human rationality that much more hidebound than a computer ? Even including all the stuff about creativity ? Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random.

I'd agree that an AI computer program could potentially make use of such a feature through some type of RIG, whether determinate or not.
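
(Just to make the "call on randomness only where needed" pattern from the quote concrete, here is a rough Python sketch – the route-picking example and its names are entirely my own invention:)

import random

def pick_route(routes, cost, rng=random.SystemRandom()):
    # Perfectly deterministic evaluation of every candidate...
    best = min(cost(r) for r in routes)
    tied = [r for r in routes if cost(r) == best]
    # ...consulting a random source only when the deterministic part cannot decide.
    return tied[0] if len(tied) == 1 else rng.choice(tied)

print(pick_route(["east", "north", "south"], cost=len))   # "east" -- no randomness needed
print(pick_route(["north", "south"], cost=len))           # a random tie-break between equals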

One thought on reading this - the insinuation seems to be that, when talking about a RIG, the ideas generated may or may not be of value at all:
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.

Similarly, you've insinuated that the DIG only provides useful solutions:
The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.

Note also you've similarly implied the DIS chooses optimal solutions and the RIS chooses poor ones, or at best, random ones. This seems fairly reasonable and you may have some good logic behind why you hold these views. However, it seems to me the reasoning you may have is that anything indeterministic or random will weigh every choice (RIS) or create every solution (RIG) on equal footing. That is, a RIS for example, won't try to determine the best, it will pick one solution at random. Similarly the RIG will simply bubble up ideas and provide as many useless ones as useful ones. The thing is, I don't see why that should necessarily be the case. Certainly a RIG or RIS could be designed to do so, but that doesn't mean it is an optimal solution for its function.

Take for example radioactive decay. What is indeterministic (or random if you don't like the i word) is WHEN it decays. Despite that, the decay process must still remain probabilistic. That feature of indeterminism can also be made use of, and if I'm not mistaken it largely is for computer programs that require the use of a random number generator. One can 'design' a RIG that incorporates a process that minimizes useless ideas and maximizes useful ones while at the same time coming up with ideas a DIG might not. Similarly, a RIS could be designed to incorporate a process that minimizes useless ideas and maximizes useful ones. The benefit of such a RIS would be the ability to keep a predator guessing, so to speak. If we always made the most logical choice, wouldn't a predator use that to its advantage by assuming for example, you will always run away instead of fight? It would seem to me that there are benefits to doing non-optimal things at times, since honestly we can't and don't always (or even usually) try to weigh choices as logically the best or worst. People and animals often do what is 'not optimal' and in so doing glean an advantage of surprise.
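
(A biased-but-random selector of the sort I'm describing is easy to sketch – say, a payoff-weighted random choice, where better options are more likely but nothing is guaranteed. The options and weights below are invented purely for illustration:)

import random

def weighted_ris(options, payoff):
    # Neither a pure RIS (which ignores payoffs) nor a DIS (which always picks the best):
    # the choice is random, but better-looking options are proportionally more likely.
    weights = [payoff(option) for option in options]
    return random.choices(options, weights=weights, k=1)[0]

payoff = {"run away": 8, "fight": 2}.get
print([weighted_ris(["run away", "fight"], payoff) for _ in range(10)])
# Mostly "run away", occasionally "fight" -- a predator cannot bank on either.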

Hope that was constructive. :smile:

PS: The link in your last post doesn't work, you might want to check it.
 
  • #48
Q_Goest said:
It's interesting you've concluded that indeterminate processes can be of value in decision making.
Yes, and that conclusion was totally counter-intuitive to me! :blushing:
In fact the conclusion is more than just “indeterminate processes can be of value in decision making”, it’s that “random (both epistemically and ontically random) processes can be of value in decision making”. It’s no secret that up until that post I had taken the position that randomness simply makes decision-making random, and that’s it. I have to “eat humble pie”, but I’m happy to do so because I now feel that I have a much better understanding of what’s going on.
Q_Goest said:
The conclusion you've reached is echoed by the "Darwin" model, here:
Quote:
Objection 3: "Indeterminism would disrupt the process of rational thought, and result in a capricious, irrational kind of freedom not worth having."
Is that so ? Computer programmes can consult random-number generators where needed (including 'real' ones implemented in hardware). The rest of their operation is perfectly deterministic. Why should the brain not be able to call on indeterminism as and when required, and exclude it the rest of the time ? And if random numbers are useful for computers, why should indeterministic input be useless for brains ? Is human rationality that much more hidebound than a computer ? Even including all the stuff about creativity ? Pseudo-random numbers (which are really deterministic) may be used in computers, and any indeterminism the brain calls on might be only pseudo-random.
Yes, I agree.
The reason why randomness can “add value” to an otherwise deterministic decision-making machine is simply because the random idea generator may be able to throw up possible solutions which are not included in the “set of possible solutions” afforded by a deterministic idea generator – the total set of possible solutions available to the agent is therefore possibly greater if it uses both deterministic and random idea generators.
Q_Goest said:
I'd agree that for an AI computer program, some type of RIG, whether determinate or not could potentially make use of such a feature.
One thought on reading this - it seems the insinuation you've provided is that when talking about a RIG is that the ideas generated may or may not be of value at all:
moving finger said:
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
I think this follows. If the RIG is simply “throwing out random ideas” then it is quite possible (in one particular run) that all of the ideas generated in that run may be of no value; equally it is possible that some of the ideas may be of value, hence it is true that “the ideas generated may or may not be of value at all”.
Q_Goest said:
Similarly, you've insinuated that the DIG only provides useful solutions:
moving finger said:
The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
I didn’t actually say “useful solutions” – I said “rational solutions” – and this seems quite reasonable to me. It is quite possible that, for a particular problem, the DIG will produce no “useful” solutions at all (even though it still provides rational possible solutions). I am assuming of course that the deterministic idea generator is operating according to a rational deterministic algorithm – given this it seems reasonable that it will produce rational possible solutions. It is exactly because the DIG produces only rational solutions that the RIG might add value – by providing random or non-rational solutions which might be more useful than the rational solutions of the DIG.
Q_Goest said:
Note also you've similarly implied the DIS chooses optimal solutions and the RIS chooses poor ones, or at best, random ones. This seems fairly reasonable and you may have some good logic behind why you hold these views. However, it seems to me the reasoning you may have is that anything indeterministic or random will weigh every choice (RIS) or create every solution (RIG) on equal footing. That is, a RIS for example, won't try to determine the best, it will pick one solution at random.
I guess this is correct – because this is how I define the RIS (it picks a solution at random – it does not try to evaluate the solutions).
Q_Goest said:
Similarly the RIG will simply bubble up ideas and provide as many useless ones as useful ones. The thing is, I don't see why that should necessarily be the case. Certainly a RIG or RIS could be designed to do so, but that doesn't mean it is an optimal solution for its function.
I completely agree that we could explore more complex models, mixing random and deterministic behaviour in the idea generators and idea selectors for example – my intention here was to take the simplest possible cases to see if they provide an agent with the properties we are looking for (eg CHDO and unpredictability combined with rational behaviour), and I think I have done that. We can develop more complex models of course, but the conclusion stays the same – indeterminism and randomness can add value for decision making agents.
Q_Goest said:
Take for example radioactive decay. What is indeterministic (or random if you don't like the i word) is WHEN it decays. Despite that, the decay process must still remain probabilistic. That feature of indeterminism can also be made use of, and if I'm not mistaken it largely is for computer programs that require the use of a random number generator. One can 'design' a RIG that incorporates a process that minimizes useless ideas and maximizes useful ones while at the same time coming up with ideas a DIG might not. Similarly, a RIS could be designed to incorporate a process that minimizes useless ideas and maximizes useful ones. The benefit of such a RIS would be the ability to keep a predator guessing, so to speak. If we always made the most logical choice, wouldn't a predator use that to its advantage by assuming for example, you will always run away instead of fight? It would seem to me that there are benefits to doing non-optimal things at times, since honestly we can't and don't always (or even usually) try to weigh choices as logically the best or worst. People and animals often do what is 'not optimal' and in so doing glean an advantage of surprise.
Agreed. I am not suggesting that my simple model of parallel and pure DIG-RIG is the way that people and animals actually behave (that never was my intention), only that this simple model is an example of indeterminism and randomness “adding value” for decision making agents.
Q_Goest said:
Hope that was constructive.
Very much so! Thanks :smile:
MF
 
  • #49
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
It was not constrained by the R.I.G. because the RIG is not external.
Whatever the internal causal basis of your actions is, it is not
something external to you that is overriding your wishes and pushing you around.
Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?
That agent is the totality of SIS, RIG and everything else. One part
of you does not constrain or force another.
Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".
After all, I only have to come up with a model that works.
 
  • #50
moving finger said:
We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

Of course RIG+SIS is not restricted compared to pure determinism, because
under pure determinism there is always exactly one (physically) possible choice. Other options may be considered as theories or ideas, but
they will inevitably be rejected.




In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).

I think that is misleading. It would be better to talk about
sub-optimal CHDO and irrational CHDO. In particular it is
a conceptual error to talk about agents being "forced"
by internal processes that constitute them.


As we have seen in post #37, the so-called Darwinian model is an example of FDO.

It turns out to be sub-optimal CHDO only by making unnecessary assumptions.
So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.

There is no reason to suppose that the RIG, having succeeded in coming up
with option A at time t will fail to come up with it again -- this objection
is based on an arbitrary limitation.

Parallel DIG-RIG Agent
DIG -> }
..... -> } DIS
RIG –> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic) therefore presumably would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable both to Determinists and Compatibilists alike (since it is deterministic).

It doesn't explain the subjective sensation of having multiple possibilities that are open
to you at the present moment, nor the phenomenon of regret, which
implies CHDO. Of course, I have argued against compatibilism (and
hence against d-RIG as adequate for a fully-fledged idea of FW)
in my article.
 
  • #51
More thoughts about “CHDO"

I now agree (see post #44) that incorporating random elements into a rational and otherwise deterministic decision making agent may provide a wider “set of alternate possibilities” for the agent to choose from.
But I was mistaken in thinking (as stated in my post #44 ) that such random elements could somehow give rise to genuine CHDO.
What exactly do we mean by CHDO in the context of an agent’s will?
Do we mean simply that “things could have turned out differently, whether I wanted them to or not?”. This is effectively the kind of CHDO that we have in the case of a RIG. The RIG is throwing up random possible courses of action, and the idea selector is (via the RIG) being restricted from choosing certain courses of action. This is precisely how the RIG is supposed to endow the so-called CHDO. But I suggest that this is not what we “free will agents” really mean when we say we “Could Have Done Otherwise”.

What is GENUINE CHDO?
I humbly suggest that what an agent really means by CHDO in the context of free will is the following :
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.

In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.
Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS. The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities. If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG? The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
Tournesol said:
It was not constrained by the R.I.G. because the RIG is not external.
It makes no difference whether one places the RIG “external” or “internal” to the agent – the simple fact remains that the way the RIG works is to offer up a limited and random number of alternate possibilities to the idea selector – this is true even if one places the RIG “internal to the agent”. Given a choice between A and B, the agent can only “choose freely” between A and B if both A and B are offered up (externally or internally) as alternate possibilities. If either A or B is not offered up (because the RIG only throws up one and not the other) then the agent is making a restricted or constrained choice, and NOT a free choice.
Tournesol said:
Whatever the internal causal basis of your actions is, it is not
something external to you that is overriding your wishes and pushing you around.
Agreed. But whatever the causal basis of my actions, I would rather have a free will which is based on a rational evaluation of all possible alternatives, rather than one which is somehow forced to make decisions because the choice is restricted by some random element. Thus to suggest that a random idea generator which acts to restrict choices somehow endows the ability to “have freely done otherwise” is incoherent and false. The “doing otherwise” in the case of the RIG is compelled upon the agent by the random nature of the RIG; it is not something the agent rationally and freely chooses to do.
moving finger said:
Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?
Tournesol said:
That agent is the totality of SIS, RIG and everything else. One part
of you does not constrain or force another.
We are trying here to “model” the causal basis of our actions. It has been suggested that a random element may somehow be the source of CHDO. But as discussed at the beginning of this post, REAL CHDO would be an agent choosing freely between two alternate possibilities A and B in both runs.
As I showed above, it makes no difference whether that random element is external or internal to the agent, if the random element acts to “restrict the possibilities being considered by the agent”, such that the agent no longer has a free choice between A and B, then it is no longer a case of CHDO, it is instead FDO.
moving finger said:
Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?
Tournesol said:
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".
But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
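
(In code terms – a tiny illustrative sketch, where “rewinding the clock” is modelled as restoring the generator's complete internal state, and the seed is arbitrary:)

import random

d_rig = random.Random(7)
run_1 = d_rig.sample(["A", "B", "C", "D"], 2)

d_rig.seed(7)                        # rewind the clock: identical circumstances, identical internal state
run_2 = d_rig.sample(["A", "B", "C", "D"], 2)
print(run_1 == run_2)                # True -- but only because this RIG is in fact deterministic

i_rig = random.SystemRandom()        # a stand-in for an ontically indeterministic source
# There is no seed or state to restore: nothing carries over from run 1 to run 2,
# so nothing guarantees that the same options come up again.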
Tournesol said:
After all, I only have to come up with a model that works.
Yep. And I think I have shown that so far you haven’t come up with a model that works for genuine CHDO.

MF
 
  • #52
moving finger said:
In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).
Tournesol said:
I think that is misleading. It would be better to talk about sub-optimal CHDO and irrational CHDO.
With respect, I think suggesting that “being forced to do otherwise results in some kind of CHDO” is misleading.
We need to focus on whether ANY kind of model can endow genuine CHDO, which is where the agent says : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
I do not believe any model based on indeterminism can do this. The Darwinian model does not. The Parallel DIG-RIG does not. If you think you have such a model then I would love to see the details.
Tournesol said:
it is a conceptual error to talk about agents being "forced" by internal processes that constitute them.
OK. Therefore it follows from your statement that we cannot say a completely deterministic agent is lacking in free will?
EITHER an agent “chooses freely” between alternate possible courses of action, A and B, or it does not. If one of two possible courses of action, either A or B, is not available to the agent then the agent is not choosing freely.
moving finger said:
As we have seen in post #37, the so-called Darwinian model is an example of FDO.
Tournesol said:
It turns out to be sub-optimal CHDO only by making unnecessary assumptions.
It turns out NOT to be genuine CHDO. The agent did not CHOOSE to do otherwise, it was constrained to do otherwise because it no longer had a free choice.
Tournesol said:
There is no reason to suppose that the RIG, having succeeded in coming up with option A at time t will fail to come up with it again -- this objection is based on an arbitrary limitation.
But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, if I could reset the clock, if I could have my time over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the SAME time t - the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
If this does not convince you, then let us just reverse the sense of the argument – let us say that in the first run the RIG throws up only B, and in the second run it throws up both A and B. The DIS chooses A as better than B. Why then did it choose B in the first run? Certainly not because it “selected B from a free choice between A and B”.
Tournesol said:
It doesn't explain the subjective sensation of having multiple possibilities that are open to you at the present moment,
The sensation of “having multiple possibilities available” (which you rightly say is subjective) is very easily explained through epistemic uncertainty. No agent has certain knowledge of the future, every agent has an epistemic horizon beyond which it cannot see, therefore it may simply have the illusion that there are multiple possible futures. There is no way the agent can ever know for sure that multiple possible futures actually existed.
Tournesol said:
nor the phenomenon of regret, which implies CHDO
It implies nothing of the sort.
“regret” is simply the feeling that we may have made a “bad” decision or a “bad” choice. But the choice we made at the time was (or should have been) the best we could have made given the circumstances. It does not follow from this that we would then genuinely “choose differently” if we could turn the clock back – because turning the clock back would simply reset everything to exactly the same way it was before, and we would then make the same “bad” choice again. We can only learn from our bad choices in a linear timeline – resetting the clock from run 1 to run 2 does not allow for any learning to be carried over from run 1 to run 2.
Tournesol said:
Of course, I have argued against compatibilism (and hence against d-RIG as adequate for a fully-fledged idea of FW) in my article.
I did not expect that Tournesol would accept a d-RIG. But a determinist or compatibilist might.

An agent which believes it is acting freely will say : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
“genuine CHDO” is an incoherent concept in the face of the above. So far none of the random or indeterministic models considered would result in an agent with genuine CHDO.
MF
 
Last edited:
  • #53
Why CHDO is an empty concept

A Libertarian agent which believes in CHDO, having selected option A over option B, would say “if I could have the chance all over again, with conditions EXACTLY as they were before, if I could turn the clock back, in other words if we re-run the exact same situation again under identical circumstances, then I would still be free to choose option B, and I could still select option B – in other words I could have done otherwise to what I actually did”

But any agent which believes it is acting freely will say : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.” This applies to Libertarian free will agents just as much as any other free will agent.

It follows that in “re-running” the selection, the free will Libertarian agent will select option A ONLY if it chooses to select option A; it will select option B ONLY if it chooses to select option B. But IF the circumstances are identical in the re-run then it follows that the agent (if it is behaving rationally and not capriciously) will wish to choose the same way that it chose in the first run. Nothing has changed in the second run – by definition it is a precise re-run of the first run – WHY would the agent therefore WANT to choose any differently than it did in the first run? What possible reason would the agent have for wanting to choose any differently – unless of course its very choice is somehow influenced by random or indeterministic behaviour…… But adding indeterminism to the selection process simply adds capriciousness to the agent – it detracts from the rational behaviour of the agent – it is equivalent once again to the Buridan’s ass model – adding indeterminism into the selection process has nothing to do with a free will choice.

CHDO implies “if we re-run the exact same situation again under identical circumstances, then I would be free to select option B in the second run, even though I selected option A in the first run”

Free will implies “If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”

From free will we can see that when we re-run the selection between A and B, there is no reason why my choice should be any different to what it was before – the circumstances are exactly the same as before, therefore why (unless I am a random or capricious agent) would I wish to choose any differently? It follows that I would NOT do otherwise than what I did before, because (if I am free) I do what I choose to do, and there is no rational reason why my choice should be any different to what it was in the first run.

Conclusion
Thus, it matters not whether or not “I could really have done otherwise”. What happened is that I was free to choose, and I chose to do what I wished to do, without constraint. This is free will. If I re-run the situation there is absolutely no rational reason why my wishes or my choice should be any different to the way it was before.

MF
 
Last edited:
  • #54
What is GENUINE CHDO?
I humbly suggest that what an agent really means by CHDO in the context of free will is the following :
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.

I suggest that this is already part of the analysis of FW I am using under the
rubric of "ultimate origination" or "ultimate responsibility". Since it is
already part of the definition of FW, it does not need to be added again as
part of the definition of CHDO (which is of course itself already part of the
definition of FW).



In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.

It is not an example of constraint either, since it is not external.
What is the alternative ? Having every possible option available to you at all
times ? As I have pointed out before, that is a kind of god-like omniscience.


Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS.

No it doesn't. A deterministic mechanism cannot come up with the rich and
original set of choices that an indeterministic mechanism can come up with.
Unplugging the RIG does not unleash some hidden creativity in the SIS.


The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities.

If the RIG comes up with more than one option, the SIS can make a free (in the
compatibilist sense -- no external constraint) choice between them. What the
RIG adds to the SIS is the extra, incompatibilist, freedom of CHDO.

The SIS cannot choose an option that is not presented to it by the RIG.
It can only choose from what is on the menu -- that is what we normally
mean by a free choice. The alternative -- an ultra-genius level of insight
and innovation for every possible situation -- may be worth wanting,
but is not naturalistically plausible.



If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.

1) The fact that the SIS (or any other component) of the agent is causally
influenced by other components does not constitute freedom-negating constraint
because it is not external.

2) Not being able to choose an option that is not presented to
you is not lack of free choice. A finite, natural, agent will
have internal limitations; they are not limitations on freedom
because they are not external. A caged bird is unfree because it
cannot fly; a pig cannot fly either, but that is not an example
of unfreedom because it is an inherent, internal limitation.

But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG?

Well, the RIG is part of the model, and you can't be constrained by something
internal.


The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!

The RIG did not block choice A -- choice A was never on the menu. It
certainly *failed* to come up with choice A. Failures and limitations
are part of being a finite, natural being.



It was not constrained by the R.I.G. because the RIG is not external.
It makes no difference whether one places the RIG “external” or “internal” to the agent – the simple fact remains that the way the RIG works is to offer up a limited and random number of alternate possibilities to the idea selector – this is true even if one places the RIG “internal to the agent”. Given a choice between A and B, the agent can only “choose freely” between A and B if both A and B are offered up (externally or internally) as alternate possibilities. If either A or B is not offered up (because the RIG only throws up one and not the other) then the agent is making a restricted or constrained choice, and NOT a free choice.

It is not a constrained choice because nothing is doing the constraining. All
realistic choices are from a finite, limited, list of options. You are asking
for god-like omnipotence.

Whatever the internal causal basis of your actions is, it is not
something external to you that is overriding your wishes and pushing you around.

Agreed. But whatever the causal basis of my actions, I would rather have a free will which is based on a rational evaluation of all possible alternatives,

What natural mechanism can provide all possible choices ex nihilo ? How is Ug
the caveman to know that rubbing two sticks together and starting a fire
is the way to keep warm ? I don't doubt for a minute that what you want is
desirable; but how do *you* think it is possible ?


rather than one which is somehow forced to make decisions because the choice is restricted by some random element. Thus to suggest that a random idea generator which acts to restrict choices somehow endows the ability to “have freely done otherwise” is incoherent and false.
It doesn't restrict choices, because the choices don't exist a priori to be
restricted. The RIG is a GENERATOR not a filter. It does endow CHDO
(absent your modifications) by generating choices. It doesn't generate
all optimal choices, but optimality in all situations is not part
of any definition of FW except your own.

The “doing otherwise” in the case of the RIG is compelled upon the agent by the random nature of the RIG, it is not something the agent rationally and freely chooses to do.


The RIG is not separate from the agent.


Is this an example of “Could Have Done Otherwise”? Or would a more accurate description be “Forced to Do Otherwise”? The agent in run 2 was effectively forced to choose B rather than A (it was forced to do otherwise in run 2 than it had done in run 1) because of the limited choices afforded to it by the RIG. Perhaps a better acronym for this kind of model is thus FDO (forced to do otherwise) rather than CHDO?

That agent is the totality of SIS, RIG and everything else. One part
of you does not constrain or force another.


We are trying here to “model” the causal basis of our actions. It has been suggested that a random element may somehow be the source of CHDO. But as discussed at the beginning of this post, REAL CHDO would be an agent choosing freely between two alternate possibilities A and B in both runs.
As I showed above, it makes no difference whether that random element is external or internal to the agent, if the random element acts to “restrict the possibilities being considered by the agent”, such that the agent no longer has a free choice between A and B, then it is no longer a case of CHDO, it is instead FDO.

1) The internality or externality does make a difference
2) The RIG does not restrict the SIS, it provides a range of possibilities
which the SIS is not able to provide itself
3) Failure by the RIG to provide an option which looks desirable with 20:20
hindsight is not failure of CHDO, or of FW, it is failure of omniscience.


Did the model choose B rather than A in run 2 because it was "acting freely" in its choice, and preferred B to A? No clearly not (because in a straight choice between A and B the model always chooses A).
Or did the model choose B rather than A in run 2 because its choices were actually being RESTRICTED by the RIG, such that it was NOT POSSIBLE for it to choose to do A in Run 2, even though A was always a better choice than B? Yes, this is indeed clearly the case.
Which kind of "free will" would you prefer to have?

Originally Posted by Tournesol
The kind you are describing does not sound very attractive, but I can always amend the model so that "If the RIG succeeds in coming up with an option on one occasion, it will always include it on subsequent occasions".

But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.


We are talking about both. The idea that the RIG might fail to come up with an
option it succeeded in coming up with before is an engineering issue.

Fixing it does not affect CHDO; the agent could have done otherwise because
the RIG could have come up with a different, and preferable, option;
amending it so that it does not "forget" options does not affect that.
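
(In toy code terms, the amendment would amount to giving the RIG a memory of the options it has already produced – a rough sketch only, with invented names:)

import random

class RememberingRIG:
    # The amended RIG: once an option has come up, it stays on the menu thereafter.
    def __init__(self, pool, rng=None):
        self.pool = pool
        self.rng = rng or random.Random()
        self.seen = set()

    def generate(self):
        fresh = self.rng.sample(self.pool, self.rng.randint(1, len(self.pool)))
        self.seen.update(fresh)
        return sorted(self.seen)

rig = RememberingRIG(["A", "B", "C"])
print([rig.generate() for _ in range(4)])    # the offered set only ever grows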

If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……

The objection being what ? That it isn't guaranteed to come up with the best
possible option out of all the options every time ? True, but that is human
frailty, not constraint.


Yep. And I think I have shown that so far you haven’t come up with a model that works for genuine CHDO.

It works for CHDO as standardly defined.
 
  • #55
In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities).

I think that is misleading. It would be better to talk about sub-optimal CHDO and irrational CHDO.


With respect, I think suggesting that “being forced to do otherwise results in some kind of CHDO” is misleading.


And I am suggesting that making sub-optimal choices is not the same thing as
being forced.

We need to focus on whether ANY kind of model can endow genuine CHDO, which is where the agent says : “What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the possibility to do otherwise, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.”
I do not believe any model based on indeterminism can do this. The Darwinian model does not. The Parallel DIG-RIG does not. If you think you have such a model then I would love to see the details.


You haven't genuinely established this. You are just appealing to your
favourite -- if not sole -- manoeuvre of tendentious redefinition.
Sub-optimal choice is not compulsion.

it is a conceptual error to talk about agents being "forced" by internal processes that constitute them.

OK. Therefore it follows from your statement that we cannot say a completely deterministic agent is lacking in free will?
EITHER an agent “chooses freely” between alternate possible courses of action, A and B, or it does not. If one of two possible courses of action, either A or B, is not available to the agent then the agent is not choosing freely.


Choosing freely means choosing without compulsion between options that are actually available.
The absence of options is not the same as the presence of external compulsion.

There is no reason to suppose that the RIG, having succeeded in coming up with option A at time t will fail to come up with it again -- this objection is based on an arbitrary limitation.

But we are not talking about “subsequent occasions” in a normal linear timeline, are we? What a Libertarian means by free will is “if I could have the choice all over again, if I could reset the clock, if I could have my time over again, with conditions EXACTLY as they were before, in other words if we re-run the model again under identical circumstances, then I could still choose to do otherwise”.
If we “re-run the model under identical circumstances” this is equivalent to rewinding the clock back to the original start point – the SAME time t - the RIG will have absolutely no “memory” of its earlier selection of choices, there will be no possible mechanism for ensuring that it comes up with the same choices as before (unless it is in fact deterministic)……
If this does not convince you, then let us just reverse the sense of the argument – let us say that in the first run the RIG throws up only B, and in the second run it throws up both A and B. The DIS chooses A as better than B. Why then did it choose B in the first run? Certainly not because it “selected B from a free choice between A and B”.

Maybe it chose B from a free choice between B, C and D. What's the alternative, anyway? That when Ug wants to cross a stream, his RIG throws up not just "Ug swim" and "Ug float on log" but also "Ug build suspension bridge" and "Ug fly in helicopter"?


It doesn't explain the subjective sensation of having multiple possibilities that are open to you at the present moment,
The sensation of “having multiple possibilities available” (which you rightly say is subjective) is very easily explained through epistemic uncertainty. No agent has certain knowledge of the future, every agent has an epistemic horizon beyond which it cannot see, therefore it may simply have the illusion that there are multiple possible futures. There is no way the agent can ever know for sure that multiple possible futures actually existed.

But why that particular illusion? Why don't we see our decisions as random,
or caused by forces beyond our control?


nor the phenomenon of regret, which implies CHDO

It implies nothing of the sort.
“regret” is simply the feeling that we may have made a “bad” decision or a “bad” choice. But the choice we made at the time was (or should have been) the best we could have made given the circumstances.

There would be nothing to regret if it were.

It does not follow from this that we would then genuinely “choose differently” if we could turn the clock back – because turning the clock back would simply reset everything to exactly the same way it was before, and we would then make the same “bad choice” again.

If determinism is true, we would make the same choice. But then what does the
determinist regret? The inevitable?


It follows that in “re-running” the selection, the free will Libertarian agent will select option A ONLY if it chooses to select option A; it will select option B ONLY if it chooses to select option B. But IF the circumstances are identical in the re-run then it follows that the agent (if it is behaving rationally and not capriciously) will wish to choose the same way that it chose in the first run.

The agent will want to come up with the best solution, as judged by her personal
SIS, to the problem. If her RIG comes up with a better solution on the re-run
the agent would wish to choose that.

Nothing has changed in the second run – by definition it is a precise re-run of the first run

Only if determinism is true. By the definition of indeterminism, a re-run
situation will probably turn out different.

- WHY would the agent therefore WANT to choose any differently than it did in the first run?


Who wouldn't want a better solution to a problem ?

What possible reason would the agent have for wanting to choose any differently – unless of course its very choice is somehow influenced by random or indeterministic behaviour…… But adding indeterminism to the selection process simply adds capriciousness to the agent – it simply detracts from the rational behaviour of the agent – it is equivalent once again to the Buridan’s ass model - adding indeterminism into the selection process has nothing to do with a free will choice.

Which is why I put the randomness into the RIG; the RIG can come up with
different inspirations, and the SIS would be motivated to choose differently
if different choices are available, so long as the new choices are
better by its weightings.

From free will we can see that when we re-run the selection between A and B, there is no reason why my choice should be any different to what it was before – the circumstances are exactly the same as before, therefore why (unless I am a random or capricious agent) would I wish to choose any differently? It follows that I would NOT do otherwise than what I did before, because (if I am free) I do what I choose to do, and there is no rational reason why my choice should be any different to what it was in the first run.

Assuming that the RIG will come up with the same options. But why should it ?
 
  • #56
moving finger said:
CHDO Definition : What I do, I do because I CHOOSE to do it, and not otherwise. If I have free will and I have the “possibility to do otherwise”, I will ONLY do otherwise if I CHOOSE to do otherwise, and not because I am FORCED to do otherwise.
Tournesol said:
I suggest that this is already part of the analysis of FW I am using under the
rubric of "ultimate origination" or "ultimate responsibility". Since it is
already part of the definition of FW, it does not need to be added again as
part of the definition of CHDO (which is of course itself already part of the
definition of FW).
If it is already part of your definition of FW, then there surely can be no objection to reinforcing this in the definition of CHDO? Or do you think this would invalidate your arguments?
moving finger said:
In other words, given a free choice between action A or action B, I will select action A if and only if I CHOOSE to select action A. If the situation is re-run under identical circumstances, both choices A and B must be available to me once again, and I would then “do otherwise than select A” if and only if I then CHOOSE to select action B rather than A. This, to me, is what we mean when we say that we choose freely. It is a choice free of constraint. After all, why would I WANT to do B unless I freely CHOOSE to do B? Being “forced” to do B because the option of doing A “is no longer available to me” (this is what the RIG does) is NOT an example of free will.
Tournesol said:
It is not an example of constraint either, since it is not external.
(see end of post for response)
Tournesol said:
What is the alternative ? Having every possible option available to you at all
times ? As I have pointed out before, that is a kind of god-like omniscience.
I have not suggested “all possible options are available”.
But to suggest that our options to choose are necessarily limited by some “indeterministic idea generator”, and that this is the source of our free will and “CHDO”, is a gross misconception and misrepresentation of both free will and CHDO.
moving finger said:
Incorporating a random idea generator (RIG) (even in parallel with a DIG) does NOT result in an agent which possesses the above GENUINE CHDO properties. The RIG acts to RESTRICT the choices available to the DIS.
Tournesol said:
No it doesn't. A deterministic mechanism cannot come up with the rich and
original set of choices that an indeterministic mechanism can come up with.
Unplugging the RIG does not unleash some hidden creativity in the SIS.
I have never suggested that a deterministic mechanism does!
A deterministic idea generator does not endow CHDO. But my point is that neither does a random idea generator! BOTH generators endow “forced to do otherwise” – and not “could have done otherwise”
A random idea generator gives the “possibility that things could turn out differently”, NOT because we FREELY CHOOSE them to turn out differently, of our free will, but because the RIG FORCES them to turn out differently! The RIG constrains the choices available, just as much as the DIG does, regardless of our rational will. And this is true regardless of whether the RIG is internal or external.
moving finger said:
The DIS is therefore once again FORCED to do otherwise. The DIS can only make a FREE CHOICE between A and B if the idea generator offers up both A and B as alternate possibilities.
Tournesol said:
If the RIG comes up with more than one option, the SIS can make a free (in the
compatibilist sense -- no external constraint) choice between them
The key here is the “If”……
What if the RIG does not come up with option A, when the DIS prefers A to B?
Tournesol said:
What the RIG adds to the SIS is the extra, incompatibilist, freedom of CHDO.
If the RIG offers up both A and B, then let us say the DIS chooses A.
The only reason the DIS would choose B rather than A is NOT from some “free will choice of the agent”, but because it is CONSTRAINED by the RIG, because the RIG does not offer up A as a choice in the first place!
Do you call this free will?
Do you call this “could have done otherwise”?
I call it “forced to do otherwise”.
Tournesol said:
The SIS cannot choose an option that is not presented to it by the RIG.
It can only choose from what is on the menu -- that is what we normally
mean by a free choice. The alternative -- an ultra-genius level of insight
and innovation for every possible situation -- may be worth wanting,
but is not naturalistically plausible.
My point is that it is precisely CHDO that “may be worth wanting”, but is not naturalistically plausible. CHDO does not exist.
I can ALWAYS do what I wish to do if I have free will. Why would I then want to have some kind of random idea generator which constrains the choices available to me, just so that it can provide the artificial conditions necessary for your alleged “CHDO”, which is actually FDO after all?
moving finger said:
If the idea generator does not throw up both A and B then the “choice” is effectively being forced on the DIS by the random nature of the RIG, rather than the DIS “choosing freely”.
Tournesol said:
1) The fact that the SIS (or any other component) of the agent is causally
influenced by other components does not constitute freedom-negating constraint,
because it is not external.
(see reply at end of post)
Tournesol said:
2) Not being able to choose an option that is not presented to
you is not lack of free choice.
Of course it is!
“not presenting an option” is a constraint on my free will to choose. The whole function of the RIG is to randomly present options – some will be available and some will not – on a random basis.
Tournesol said:
A finite, natural, agent will
have internal limitations; they are not limitations on freedom
because they are not external. A caged bird is unfree because it
cannot fly; a pig cannot fly either, but that is not an example
of unfreedom because it is an inherent, internal limitation.
In our example of A and B, both A and B are possible choices that the agent might make. The agent always chooses A rather than B if given a free will choice between A and B. The ONLY reason the agent would choose otherwise (ie the only reason the agent would choose B) is simply because the option of doing A is NOT MADE AVAILABLE (by the RIG). Whether the RIG is internal to the agent or external the effect is the same - the agent chooses B simply because A is not "deemed to be available", and NOT because the agent prefers (freely chooses) B compared to A.
Do you call this a free will choice between A and B? Do you call this CHDO?
I call it an artificial constraint (albeit maybe internal) which is FORCING the agent to choose B rather than A. The agent is not choosing B rather than A because it freely wishes to choose B rather than A. This is FDO, not CHDO.
moving finger said:
But did the model “do otherwise” in run 2 out of “free will choice to do otherwise”, or was it “constrained to do otherwise” by the RIG?
Tournesol said:
Well, the RIG is part of the model, and you can't be constrained by something
internal.
(see reply at end of post)
moving finger said:
The RIG remember is responsible for “throwing up possible alternative choices”. In run 2, the RIG did NOT throw up the possibility of choice A, thus in effect the RIG BLOCKED the agent from the possibility of choosing A, even when A would have been (rationally) a better choice than B!
Tournesol said:
The RIG did not block from choice A -- choice A was never on the menu.
Choice A was certainly on the menu in the first run. Why not on the second run?
Tournesol said:
It
certainly *failed* to come up with choice A. Failures and limitations
are part of being a finite, natural being.
Thus you are saying that the agent “chose to do B rather than A simply because it failed to come up with the option of doing A”, and NOT because “it chose to do B rather than A out of a free will choice to do B rather than A”?
This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
Tournesol said:
It was not constrained by the R.I.G. because the RIG is not external.
(see reply at end of post)
Tournesol said:
It is not a constrained choice because nothing is doing the constraining. All
realistic choices are from a finite, limited, list of options. You are asking
for god-like omnipotence
No. I am asking whether CHDO genuinely exists. It clearly does not.
Tournesol said:
What natural mechanism can provide all possible choices ex nihilo? How is Ug
the caveman to know that rubbing two sticks together and starting a fire
is the way to keep warm? I don't doubt for a minute that what you want is
desirable; but how do *you* think it is possible?
I am not asking for omniscience. I am asking whether CHDO exists.
I will never know if I have considered all possible alternatives, that is why I have already acknowledged that a RIG CAN add value to a decision-making agent by perhaps throwing up some additional possible alternatives.
But that is ALL it does. The RIG does NOT endow CHDO, the most it can ever endow is FDO.
Tournesol said:
It doesn't restrict choices, because the choices don't exist a priori to be
restricted. The RIG is a GENERATOR, not a filter.
In one run the RIG might throw up A and B. In another run it might throw up only B. Thus the RIG controls whether A is made available to the agent or not. Whether you look upon this as a filter or as a generator makes no difference – the fact is that in one run A is made available, in another it is not.
This is what you understand by free will?
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
The agent is not constrained by the RIG because the RIG is not external.
Most of your objections to my argument seem to be based on the idea that the RIG is not external to the agent – it is supposed to be internal. Therefore the RIG cannot be looked upon as an external “constraint” to the agent’s free will. Correct?
OK. But then you are saying the indeterminism (in the RIG) is internal to the agent. That the source of the agent’s free will is based on some kind of internal indeterminism in the agent’s decision-making process.
But if the indeterminism is supposed to be internal to the agent, this must surely undermine the rationality of the agent. How can an agent believe that it is acting rationally if it at the same time thinks its choices are somehow controlled by an indeterministic mechanism?
Speaking for myself, I certainly would not like to think that my rational decision-making processes were based on an indeterministic mechanism. How on Earth could I believe that such a thing is the source of my free will?
The net result is the same. In your model, free will means the following :
“I freely choose to do B rather than A, not because I WANT to do B rather than A, but simply because I did not think of doing A in the first place”?
This is a very strange kind of free will, and not one that many would recognise!
MF
 
  • #57
moving finger said:
We have seen (post #37) that the simple so-called Darwinian model, which comprises a single Random Idea Generator followed by a Sensible Idea Selector, does not endow any properties to an agent which we might recognise as “properties of free will”. In particular, rather than endowing the ability of Could Have Done Otherwise (CHDO), the simple RIG-SIS combination acts to RESTRICT the number of possible courses of action, thus forcing the agent to make non-optimal choices (a feature I have termed Forced to Do Otherwise, FDO, rather than CHDO).

What now follows is a description of a slightly more complex model based on a parallel deterministic/random idea generator combination, which not only CAN endow genuine CHDO but ALSO is one in which the random idea generator creates new possible courses of action for the agent, rather than restricting possible courses of action.

Firstly let us define a Deterministic Idea Generator (DIG) as one in which alternate ideas (alternate possible courses of action) are generated according to a rational, deterministic procedure. Since it is deterministic the DIG will produce the same alternate ideas if it is re-run under identical circumstances.

Next we define a Random Idea Generator (RIG) as one in which alternate ideas (alternate possible courses of action) are generated according to an epistemically random procedure. Since it is epistemically random the RIG may produce different ideas if it is re-run under epistemically identical circumstances.
Note that the RIG may be either epistemically random and ontically deterministic (hereafter d-RIG), or it may be epistemically random and ontically indeterministic (hereafter i-RIG). Both the d-RIG and the i-RIG will produce different ideas when re-run under epistemically identical circumstances. (For clarification – a d-RIG behaves similarly to a computer random number generator (RNG). The RNG produces epistemically random numbers, but if the RNG is reset then it will produce the same sequence of numbers that it did before.)
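To illustrate the d-RIG point in code (a minimal Python sketch of my own; the seed value and variable names are purely illustrative, not part of the model):

```python
import random

# Minimal sketch (illustrative only): a d-RIG behaves like a seeded pseudo-random generator.
d_rig = random.Random(42)                         # hypothetical fixed seed
run_1 = [d_rig.randint(0, 9) for _ in range(5)]   # epistemically random "ideas"

d_rig = random.Random(42)                         # "rewind the clock" to the same ontic state
run_2 = [d_rig.randint(0, 9) for _ in range(5)]

print(run_1 == run_2)   # True: a d-RIG repeats itself when reset to identical circumstances
# An i-RIG (a genuine entropy source) would carry no such guarantee.
```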

Next we define a Deterministic Idea Selector (DIS) as a deterministic mechanism for taking alternate ideas (alternate possible courses of action), evaluating these ideas in terms of payoffs, costs and benefits etc to the agent, and rationally choosing one of the ideas as the preferred course of action.

Finally we define a Random Idea Selector (RIS) as a mechanism for taking alternate ideas (alternate possible courses of action), and choosing one of the ideas as the preferred course of action according to an epistemically random procedure. Since it is epistemically random the RIS may produce a different choice if it is re-run under epistemically identical circumstances (ie with epistemically identical input ideas).

These four basic building blocks, the DIG, RIG, DIS and RIS, may then be assembled in various ways to create various forms of “idea-generating and decision-making” models with differing properties.
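For concreteness, here is one way the four building blocks might be sketched in Python (again my own rough illustration; the 50% inclusion probability and the payoff argument are placeholders, not part of the model):

```python
import random

def dig(known_options):
    """Deterministic Idea Generator: same situation in, same ideas out, every run."""
    return sorted(known_options)

def rig(conceivable_options, seed=None):
    """Random Idea Generator: a random subset of conceivable ideas.
    A fixed seed makes this a d-RIG (reproducible on reset); seed=None
    stands in for an i-RIG."""
    r = random.Random(seed)
    return [opt for opt in conceivable_options if r.random() < 0.5]

def dis(ideas, payoff):
    """Deterministic Idea Selector: rationally pick the highest-payoff idea."""
    return max(ideas, key=payoff) if ideas else None

def ris(ideas, seed=None):
    """Random Idea Selector: pick one offered idea at random (the Buridan's ass move)."""
    return random.Random(seed).choice(ideas) if ideas else None
```

The agents described below are then just different wirings of these pieces (for example, the Deterministic agent is dis(dig(...), payoff), the Capricious agent is ris(dig(...))).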

In what follows I shall distinguish between genuine Could Have Done Otherwise (CHDO, where the agent can rationally choose between a set of possibilities which includes at least all of the rational possibilities) and Forced to Do Otherwise (FDO, where the agent EITHER simply chooses randomly, ie the choice is not rational, OR is forced to choose from a set of random possibilities which may not include all of the rational possibilities). As we have seen in post #37, the so-called Darwinian model is an example of FDO.

Deterministic Agent
DIG -> DIS

The Deterministic agent comprises a DIG which outputs rational possible courses of action which are then input to a DIS, which rationally chooses one of the ideas as the preferred course of action.
The Deterministic agent will always make the same choice given the same (identical) circumstances.
Clearly there is no possibility of CHDO.
A Libertarian would claim that such an agent does not possess free will, but a Compatibilist might not agree.

Capricious (Random) Agent
DIG -> RIS

The Capricious agent comprises a DIG which outputs possible courses of action which are then input to a RIS.
Also known as the Buridan’s Ass model.
The Capricious agent will make epistemically random choices, even under the same (epistemically identical) circumstances.
Clearly there is the possibility for the agent to “choose otherwise” given epistemically identical circumstances, but since the choice is made randomly and not rationally this is an example of FDO.
I doubt whether even a Libertarian would claim that such an agent possesses free will.
(Note that making the agent ontically random, ie indeterministic, rather than epistemically random does not change the conclusion.)
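As a trivial illustration (my own sketch; the bale names are just placeholders), the Capricious wiring amounts to nothing more than:

```python
import random

# Buridan's-ass sketch (illustrative): deterministic ideas, random selection.
options = ["left bale", "right bale"]    # DIG output: the same every run
print(random.choice(options))            # RIS: an epistemically random pick
# Re-running under (epistemically) identical circumstances can give a different
# answer -- "doing otherwise", but not for any reason.
```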

So-called Darwinian Agent
RIG -> DIS

The so-called Darwinian agent comprises a RIG which outputs possible courses of action which are then input to a DIS.
See http://www.geocities.com/peterdjones/det_darwin.html#introduction for a more complete description of this model.
The so-called Darwinian agent will make rational choices from a random selection of possibilities.
As shown in post #37 of this thread, even if the RIG is truly indeterministic (i-RIG) the random nature of generation of the alternative possibilities means that this model forcibly restricts the choices available to the agent, such that non-optimum choices may be made. The model thus endows FDO and not CHDO.
Because of this property (ie FDO rather than CHDO) no true-thinking Libertarian should claim that such an agent possesses free will, even in the case of an i-RIG where the agent clearly behaves indeterministically.

Parallel DIG-RIG Agent
DIG -> }
.......} DIS
RIG -> }

The parallel DIG-RIG agent comprises TWO separate idea generators, one deterministic and one random, working in parallel. The deterministic idea generator outputs rational possible courses of action which are then input to a DIS. Also input to the same DIS are possible courses of action generated by the random idea generator. The DIS then evaluates all of the possible courses of action, those generated deterministically and those generated randomly, and the DIS then rationally chooses one of the ideas as the preferred course of action.
Since a proportion of the possible ideas is generated randomly, the Parallel DIG-RIG agent (just like the Capricious and Darwinian agents) can appear to act unpredictably. If the RIG is deterministic (a d-RIG) then the Parallel DIG-RIG agent behaves deterministically but still unpredictably. If the RIG is indeterministic (an i-RIG) then the Parallel DIG-RIG behaves indeterministically (and therefore also unpredictably).
Since a proportion of the possible ideas is also generated deterministically and rationally (by the DIG), not only does the agent NOT behave capriciously but also the agent is NOT in any way restricted or forced by the RIG to choose a non-optimal or irrational course of action (all rational courses of action are always available as possibilities to the DIS via the DIG, even if the RIG throws up totally irrational or non-optimum possibilities).
The Parallel DIG-RIG model therefore combines the advantages of the Deterministic model (completely rational behaviour) along with the advantages of the Darwinian model (unpredictable behaviour) but with none of the drawbacks of the Darwinian model (the agent is not restricted or forced by the RIG to make non-optimal choices).
If the Parallel DIG-RIG is based on a d-RIG then it behaves deterministically but unpredictably. Importantly, it does NOT then endow CHDO (since it is deterministic), therefore presumably it would not be accepted by a Libertarian as an explanatory model for free will. Interestingly though, this model (DIG plus d-RIG) explains everything that we observe in respect of free will (it produces a rational yet not necessarily predictable agent), and the model should be acceptable to Determinists and Compatibilists alike (since it is deterministic).
If the Parallel DIG-RIG is based on an i-RIG then it is both indeterministic and unpredictable. Therefore it does endow CHDO (and this time it is GENUINE CHDO, not the FDO offered by the Darwinian model), therefore presumably it would be accepted by a Libertarian (but obviously not by either a Determinist or a Compatibilist) as an explanatory model for free will.
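A short sketch of the parallel wiring (illustrative only; the option names and payoff weightings are invented) shows why the RIG here can only add options to the menu, never take away the options the DIG would have supplied:

```python
import random

def parallel_dig_rig_agent(dig_options, conceivable_options, payoff, seed=None):
    """Parallel DIG-RIG agent: the DIS chooses over the UNION of the DIG's rational
    options and whatever the RIG happens to throw up, so the rational options are
    always on the menu."""
    rng = random.Random(seed)                    # fixed seed ~ d-RIG, None ~ i-RIG stand-in
    rig_options = [o for o in conceivable_options if rng.random() < 0.5]
    menu = set(dig_options) | set(rig_options)   # DIG options are always included
    return max(menu, key=payoff)                 # deterministic, rational selection (DIS)

payoff = {"A": 2, "B": 1, "C": 3}.get            # hypothetical weightings
# The DIG always supplies A and B; the RIG may or may not add C.  The result is
# never worse than the purely deterministic choice (A), and is better (C) on any
# re-run where the RIG happens to offer C.
for _ in range(5):
    print(parallel_dig_rig_agent(["A", "B"], ["C"], payoff))
```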

Conclusion
We have shown that the so-called Darwinian model incorporating a single random idea generator gives rise to an indeterministic agent without endowing genuine CHDO. However, a suitable combination of deterministic and indeterministic idea generators, in the form of the Parallel DIG-RIG model, can form the basis of a model decision-making machine which does endow genuine CHDO, and which is genuinely indeterministic yet rational.

Constructive criticism welcome!

MF
Good grief!

Anything said in the impermanence
about existence is a lie...of course, you know how to make toast...

We talk about something
that doesn't exist.

Be simple.
Don't faint.
It's OK.
The universe is perfect...go figure:-p

JC said we are satan.
What does THAT really mean?
satan is a liar.
satan is NOT EVIL.

i am not an xtian or any
other bs type of believer.
:shy:
 
  • #58
Let me illustrate my argument that "CHDO is an empty concept" with a simple example.

Suppose that Mary is a Libertarian faced with a simple binary decision – let us say either “to have an egg for breakfast” (let us call this choice A) or “NOT to have an egg for breakfast” (let us call this choice B). Clearly in this case Mary must choose either to have an egg for breakfast, or not to have an egg for breakfast. There are no other possibilities.

Suppose in our example that Mary chooses A.

Having enjoyed her breakfast, Mary (believing as she does in CHDO) would presumably claim that “if I could have the chance to make that decision again, if I could rewind the clock and set everything exactly back as it was before, then I would still have the free and unconstrained ability to choose either A or B, and I could indeed freely choose to do B rather than A”

Mary’s belief seems to be that she possesses “genuine CHDO”. Mary believes that she could have freely chosen NOT to have an egg for breakfast if she could choose again and everything was reset exactly the way it was before.

Would you agree this is what CHDO actually means? It certainly looks like CHDO to me.

Now let us look at how the “Darwinian model” would work in this scenario.

Presumably the “first” time the model is run, the RIG throws up both A and B as possible courses of action, and Mary selects A rather than B via the deterministic DIS. This indeed explains why Mary actually chose to have an egg for breakfast.

But what about Mary’s claim that she “could have done otherwise” – in other words that if she could have the chance to make that decision again, if she could rewind the clock and set everything exactly back as it was before, then she would still have the free and unconstrained ability to choose either A or B, and she could indeed freely choose to do B rather than A?

How could we re-run the Darwinian model and generate the outcome that Mary chooses B instead of A, to support Mary’s claim that she could indeed have done otherwise?

The DIS is deterministic – it will always choose A given a straight choice between A and B. So no solution here.

The only way to generate the desired outcome from the Darwinian model so that Mary could indeed “do otherwise” is to suggest that the RIG must come up with only one possible course of action – B. If we want Mary to “be able to do otherwise”, the RIG must NOT throw up the possibility of doing A in the second run, such that Mary then has no choice but to do B. But in this case, Mary is NOT choosing rationally and freely between the two options A and B – the “choice” of doing B is actually already made for her – by the random nature of the RIG (which is not under her control).

We thus have three possible ways that the Darwinian model could play itself out in this example –
EITHER the RIG throws up both A and B (in which case, as we have seen, Mary will always deterministically choose A),
OR the RIG throws up only A (in which case Mary must choose A)
OR the RIG throws up only B (in which case Mary must choose B).

There are no other possibilities.

Thus the outcome “whether Mary chooses A or B” is actually precisely determined by the random nature of the RIG. If the RIG throws up “only A” or “A and B” then Mary will choose A; if the RIG throws up “only B” then Mary will choose B.
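A quick simulation of this example (a sketch only; the 50% probabilities are arbitrary) makes the same point: the outcome tracks the RIG, not Mary's preference.

```python
import random

def mary_breakfast(rng):
    """So-called Darwinian agent for Mary's binary choice: the RIG randomly offers
    A ('egg'), B ('no egg'), or both; the deterministic DIS always prefers A."""
    menu = [opt for opt in ("A", "B") if rng.random() < 0.5]   # RIG: random menu
    if not menu:
        menu = [rng.choice(("A", "B"))]     # assume the RIG offers at least one idea
    return "A" if "A" in menu else "B"      # DIS: A beats B whenever A is offered

rng = random.Random()                       # stand-in for an i-RIG
print([mary_breakfast(rng) for _ in range(10)])
# B appears only on runs where the RIG never offered A -- the "doing otherwise"
# is settled by the RIG, not chosen by Mary.
```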

Whether the RIG is “internal” to Mary or not makes no difference to the way it all works.
(Having an "internal RIG" actually means that there are random processes involved in our internal decision-making - ie our decision making is neither completely rational nor under our complete control)

By suggesting that the RIG is the source of “CHDO” we are actually saying that the ultimate “choice” of whether to do A or B is purely random.

Is this free will?
Is this CHDO?

MF
 
  • #59
MF

You have choice...you move away from pain toward pleasure.

Fush the logic. Be simple.

ENJOY
.
 
  • #60
meL said:
MF
You have choice...you move away from pain toward pleasure.
Fush the logic. Be simple.
ENJOY
.
yes all agents have choice

even a thermostat chooses whether and when to switch on or off

this is not the question here

the question is "could we have freely done otherwise than what we actually did"

the world is only as simple as some would like it to be if one wears rose-tinted spectacles

:smile:

MF
 
