vector03 said:
Yes. I understand all the explanations presented and I understand the mathematics behind the calculation. Each one who has attempted an explanation has presented what I would consider a reasonably good explanation (for whatever that's worth). My big "hang-up" is that I'm not sure I agree, at least yet, that it's possible for one player to have a 1/2 chance of winning while another player, at the same point in time and under exactly the same set of conditions, has a 2/3 chance of winning.
Either something is wrong with the theory, or something is wrong with an application that doesn't consider (or account for) the "hypothetical" new player (3rd observer).
I think achieving a 2/3 chance of winning requires and depends on an event that has a 100% chance of occurring --> "mechanically" requiring the original player to "switch" every time, which takes away, in my opinion, some of the "randomness". An event which must occur 100% of the time is, in my opinion, not random.
So bottom line, yes... I've understood your explanations and appreciate them yet I'm just having a hard time "wrapping" my thoughts 100% around them.
While it is true that the second player has a 1/2 probability of winning, and the first player has a 2/3 probability of winning if he switches doors, this is not a contradiction, because the probability values describe different events. The 1/2 probability describes the probability that the second player will win if he selects one of the two remaining doors at random. The 2/3 probability describes the probability that the first player will win if he switches doors. Note that it does not actually matter who is playing in order for these probability values to hold. The second player also has a 2/3 probability of winning if he selects the door that player 1 can switch to, and the first player has a 1/2 probability of winning if, after being asked whether he wishes to switch, he makes his selection at random.

No inconsistency arises from the fact that the probability values are unequal, because they describe events occurring under separate conditions. The 2/3 probability of winning applies only to a player who selects the door that was neither selected initially by the first player nor opened by the host. The 1/2 probability of winning applies only to a player who selects one of the two remaining doors at random after the host has opened the third door. Similarly, the 1/3 probability of winning applies only to a player who stays with the door that was selected before the losing door was opened.
To summarize, let door 1 represent the door that player 1 first selects, door 2 represent the door that is opened by the host, and door 3 represent the door that player 1 has an opportunity to switch to.

There is a 1/3 probability that either player will win if he selects door 1.
There is a 2/3 probability that either player will win if he selects door 3.
There is a 1/2 probability that either player will win if he selects one of the remaining doors at random after the host has revealed one of the goats.

Note that each distinct probability value is associated with a distinct condition, so there is internal consistency between them. Does this clarify things?
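The three conditions above are easy to check empirically. Here is a short Monte Carlo sketch (the helper name `simulate` and the trial count are my own choices, assuming the standard rules: the host knows where the car is and always opens a goat door that the player did not pick):

```python
import random

def simulate(trials=100_000, seed=1):
    """Estimate win rates for staying, switching, and picking at random."""
    rng = random.Random(seed)
    stay = switch = coin_flip = 0
    for _ in range(trials):
        car = rng.randrange(3)            # door hiding the car
        pick = rng.randrange(3)           # player 1's initial selection (door 1)
        # Host opens a door that is neither the car nor the pick (door 2).
        opened = next(d for d in range(3) if d != car and d != pick)
        # The one remaining unopened door (door 3).
        other = next(d for d in range(3) if d != pick and d != opened)
        if pick == car:
            stay += 1                     # staying wins
        if other == car:
            switch += 1                   # switching wins
        # A second player choosing at random between the two unopened doors.
        if rng.choice([pick, other]) == car:
            coin_flip += 1
    return stay / trials, switch / trials, coin_flip / trials

p_stay, p_switch, p_random = simulate()
print(f"stay: {p_stay:.3f}  switch: {p_switch:.3f}  random: {p_random:.3f}")
```

Over 100,000 trials the three frequencies settle near 1/3, 2/3, and 1/2 respectively, matching the three conditions listed above.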
vector03 said:
Personally, I would vote for the host's chances as 0 since the host is generally not allowed to play.
I note the qualifications of "long run" or "repeated many times" and respectfully submit that the theory is based on the "long run" assumption. In this particular case, that assumption is not met. This experiment is set up as a one-time chance of winning. If the player had hundreds of chances, in the long run his chances would approach a limit of 2/3. However, the player only gets 1 chance, and that invalidates any use of the "repeated many times" assumption. The player only has one chance.
Applying a theory that is based on certain assumptions to the solution of a problem where those assumptions are not met does not seem consistent.
The derivation of the solution to this problem is not predicated on the assumption of repeated trials. I only broached the topic of repeated trials to bring a deeper understanding of the implications of the asserted probability value: the theoretical probability represents the frequency of occurrence that a hypothetical experiment would converge to in the limit as the number of trials approaches infinity, regardless of whether or not such an experiment is actually conducted. My discussion of a large number of trials was only meant as another means of interpreting theoretical probability values.
In a similar sense, I might say that there is a .5 probability of landing heads on a coin flip, and expand on what this means by asserting that if we conduct many trials, we can reliably expect to obtain heads approximately 50% of the time. However, the fact that we do not actually conduct these trials does not change the fact that there is a .5 probability of obtaining heads in a single trial. The notion of many trials simply furnishes us with another perspective for understanding what a theoretical probability value means.
Because we know that the results of an experiment of many trials will tend to converge toward the theoretical probability value for a single trial, we can use our expectations of the results of such an experiment to determine whether or not our theoretical value seems intuitively reasonable.
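The convergence described above can be illustrated with the coin-flip example. The snippet below (helper name `empirical_heads_freq` and the particular sample sizes are my own, for illustration) shows the observed frequency of heads drifting toward the theoretical value of 0.5 as the number of trials grows, even though each individual flip still has exactly a 0.5 probability of heads:

```python
import random

def empirical_heads_freq(n, seed=42):
    """Flip a fair coin n times and return the observed frequency of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

# Larger samples cluster more tightly around the theoretical value 0.5.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} flips: observed frequency = {empirical_heads_freq(n):.4f}")
```

The theoretical value 0.5 never depends on running this experiment; the simulation is just one way to see what the number means.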