What should Sleeping Beauty's credence be in a life-or-death coin toss?

  • Thread starter: Moes
  • Tags: Beauty
Summary:
The discussion centers on the Sleeping Beauty problem, where the focus is on determining the credence Sleeping Beauty should assign to the outcome of a coin toss upon waking. A referenced article argues that credence does not need to align with betting odds, suggesting that new information can alter perceived probabilities. Participants debate the implications of anthropic reasoning and the nature of "new information" in the context of the problem, ultimately concluding that Sleeping Beauty's confidence in heads should be 1/3. The conversation highlights the complexity of defining credence and the limitations of using betting scenarios to illustrate belief. Overall, the problem remains a contentious topic in philosophical discussions about probability and belief.
  • #31
Dale said:
There is no doubt about either of these probabilities.
Then the doubt is which one of them to use when she is awake and reasoning about the probabilities. But I actually don’t think it’s in doubt: she cannot be any more sure that the coin landed tails than that it landed heads. So her credence is 1/2.
 
  • #32
Moes said:
Then the doubt is which one of them to use when she is awake and reasoning about the probabilities. But I actually don’t think it’s in doubt: she cannot be any more sure that the coin landed tails than that it landed heads. So her credence is 1/2.
So, what in the definition of credence requires the use of the unconditional probability? Surely people are allowed to let their degree of belief depend on the conditions.

When she is awakened she clearly knows that she is awake, so why should she be forced to believe ##P(heads)## and not ##P(heads|awake)##?
 
  • #33
Dale said:
To all in this thread. There have already been several very long threads on the Sleeping Beauty problem. Those previous threads have been closed, and if this is to simply be another then it will be closed quickly.

If this thread is to remain open then it must be different from these others. The OP is about a specific reference and its concept of credence. That is a more focused and a distinct topic which can remain.
I don’t think I should keep arguing anymore, but even if I do I would like it if you can just tell me to stop when you don’t think we are getting anywhere. I would like this thread to remain open so others can later give their opinions.
 
  • #34
Moes said:
I don’t think I should keep arguing anymore, but even if I do I would like it if you can just tell me to stop when you don’t think we are getting anywhere. I would like this thread to remain open so others can later give their opinions.
Then let’s discuss credence and not sleeping beauty.
 
  • #35
Dale said:
Then let’s discuss credence and not sleeping beauty.
Ok, one way I think I could explain it is that you can only let your belief depend on the conditions if the conditions could have been different. In this case she could not have been thinking about the probabilities when she was sleeping, so the condition that she is awake cannot be used to decide her credence. The probability of the coin toss was 50/50; the condition that she is now awake shouldn’t change anything. So the probability should remain 50/50.
 
  • #36
@Moes, her credence is equal to her assessment of the probability; that doesn't change between Sunday and Wednesday ##-## on Sunday she knows that ##P(heads)## = 1/2 and ##P(heads|awake)## = 1/3, just as she does when the ##awake## condition is being fulfilled. If asked on Sunday what the probability is that the coin toss is heads, she will report ##P(heads)## = 1/2, and if asked what she will report when asked the same question upon being awakened on Monday or Tuesday, she will say ##P(heads|awake)## = 1/3, because there are 2 chances for tails, i.e. tails Monday or tails Tuesday, but only one for heads, i.e. heads Monday.
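The counting in that last sentence can be made concrete with a quick enumeration (my sketch, not from the thread; the long-run-frequency reading of the conditional probability is assumed):

```python
from fractions import Fraction

# Per coin flip, the standard protocol produces these awakenings:
#   heads -> awakened Monday only
#   tails -> awakened Monday and Tuesday
awakenings = [('heads', 'mon'), ('tails', 'mon'), ('tails', 'tue')]

# Over many repetitions each awakening type occurs equally often
# (per two flips: one heads-Monday, one tails-Monday, one tails-Tuesday),
# so the fraction of awakenings that follow a heads flip is:
p_heads_given_awake = Fraction(
    sum(1 for coin, _ in awakenings if coin == 'heads'),
    len(awakenings))
print(p_heads_given_awake)  # 1/3
```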
 
  • #37
Moes said:
Ok, one way I think I could explain it is that you can only let your belief depend on the conditions if the conditions could have been different. In this case she could not have been thinking about the probabilities when she was sleeping, so the condition that she is awake cannot be used to decide her credence. The probability of the coin toss was 50/50; the condition that she is now awake shouldn’t change anything. So the probability should remain 50/50.
Why? There is nothing in any definition of credence that requires that. In fact, to me it seems the opposite. If the condition could not be different then you cannot use the unconditional probability.

In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is ##P(laws|intelligence)## precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.

So in general discussions of probability the restriction you mention does not exist, and there is no such restriction in the definition of credence. So it seems that this is a custom-built restriction pulled out of nowhere.

Edit: one other problem besides the general non-existence of such a restriction, is that even if such a restriction existed it wouldn’t apply to the SB problem. Here the “awake” condition is shorthand for “awake and being interviewed on Monday or Tuesday as part of the experiment”. The condition is in fact different both before and after the experiment.
 
  • #38
Dale said:
In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is P(laws|intelligence) precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.
The anthropic principle is exactly what I think confirms my claim. It would say that whether the coin landed heads or tails, P(heads|awake)=1 [Edit: P(awake)=1], so neither is more probable.

Dale said:
So in general discussions of probability the restriction you mention does not exist, and there is no such restriction in the definition of credence. So it seems that this is a custom-built restriction pulled out of nowhere.
https://en.wikipedia.org/wiki/Anthropic_Bias_(book)#Self-sampling_assumption

https://en.wikipedia.org/wiki/Anthropic_principle
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1.
As I mentioned in my first post I think Nick Bostrom‘s book Anthropic Bias is a good one to read to understand anthropic reasoning.

https://www.anthropic-principle.com/q=book/chapter_11/#11d
On the other hand, the intuition that Beauty’s credence of Heads should be [1/2] is justified in cases where there is only one run of the experiment and there are no other observer-moments in the awakened Beauty’s reference class than her other possible awakenings in that experiment. For in that case, the awakened Beauty does not get any relevant information from finding that she has been awakened, and she therefore retains the prior credence of 1/2.

Those who feel strongly inclined to answer P(Heads) = 1/2 on Beauty’s behalf even in cases where various outsiders are known to be present are free to take that intuition as a reason for choosing a reference class that places outsiders (as well as Beauty’s own pre- and post-experiment observer-moments) outside the reference class they would use as awakened observer-moments in the experiment. It is, hopefully, superfluous to here reemphasize that such a restriction of one’s reference class also needs to be considered in the broader context of other inferences that one wishes to make from indexical statements or observations about one’s position in the world. For instance, jumping to the extreme view that only subjectively indistinguishable observer-moments get admitted into one’s reference class would be unwise, because it would bar one from deriving observational consequences from Big-World cosmologies.

I don’t think I can explain my opinion any better than I already did. I still haven’t understood any argument for the thirder position, and I have found sources that support my view. So I think I should stop discussing this further.
 
  • #39
I don't see how anything reasonable could lead one to conclude that ##P(heads)=P(heads|awake)## for an epistemically-sound respondent ##-## anthropic principle, indexicality, etc. notwithstanding.
 
  • #40
Moes said:
The anthropic principle is exactly what I think confirms my claim. It would say that whether the coin landed heads or tails P(heads|awake)=1 so neither is more probable.
No, there is no doubt whatsoever that ##P(heads|awake)=1/3##. If you have any doubt whatsoever about that, simply run a Monte Carlo simulation and prove it to yourself. Any claim to the contrary is not an argument, it is simply misinformation.

While you are free to argue that the credence should be equal to ##P(heads)## or that the credence should be calculated in some entirely different manner, there is simply no avoiding the fact that ##P(heads)=1/2## and ##P(heads|awake)=1/3##.
the conditional probability of finding yourself in a universe compatible with your existence is always 1.
So calculating such conditional probabilities is indeed valid, contrary to your argument.
 
  • #41
@Moes, please understand that the representations ##P(heads)## and ##P(heads|awake)## have strictly defined meanings; that's why @Dale can be so unequivocal about their values. Incidentally, his Monte Carlo simulation shows that in this case, as usual, there's no significant/appreciable distance between the Bayesian and frequentist interpretations of probability.

@Dale, it might be instructive or entertaining to @Moes and to others if you were to post your Monte Carlo code for this simulation. :smile:
 
  • #42
sysprog said:
I don't see how anything reasonable could lead one to conclude that ##P(heads)=P(heads|awake)## for an epistemically-sound respondent ##-## anthropic principle, indexicality, etc. notwithstanding.
Agreed. The argument is about credence. The probabilities are indisputable. They can be directly determined as the long run frequencies in a Monte Carlo simulation of a million Sleeping Beauty experiments. I did that simulation previously and reported the results. I could probably find it, but it would be easier to re-do the simulation.
 
  • #43
Dale said:
Agreed. The argument is about credence. The probabilities are indisputable. They can be directly determined as the long run frequencies in a Monte Carlo simulation of a million Sleeping Beauty experiments. I did that simulation previously and reported the results. I could probably find it, but it would be easier to re-do the simulation.
Oh, you mean like this?
Python:
import numpy as np

n = int(input('Number of samples: '))  # input() returns a string; cast to int
# Monte Carlo estimate of pi: fraction of random points (x, y) in the unit
# square with x^2 + y^2 < 1 (inside the quarter circle), times 4
print(np.sum(np.random.rand(n)**2 + np.random.rand(n)**2 < 1) / n * 4)
(from https://rosettacode.org/wiki/Monte_Carlo_methods#Python ##-## uses Monte Carlo method to calculate the value of ##\pi##)
 
  • #44
sysprog said:
@Dale, it might be instructive or entertaining to @Moes and to others if you were to post your Monte Carlo code for this simulation. :smile:
Sure, this is Mathematica code:

[CODE title="Sleeping Beauty Monte Carlo"]In[1]:= flips = RandomChoice[{heads, tails}, 1000000];

In[2]:= runs =
Flatten[Table[{{i, mon, awake}, {i, tue,
If[i === heads, asleep, awake]}}, {i, flips}], 1];

In[3]:= N[heads/(tails + heads) /. Counts[runs[[All, 1]]]]

Out[3]= 0.50053

In[4]:= N[
heads/(tails + heads) /.
Counts[Select[runs, (#[[3]] == awake) &][[All, 1]]]]

Out[4]= 0.333805[/CODE]

Line 1 flips a million coins. Line 2 runs the standard Sleeping Beauty experiment for each flip. Line 3 calculates ##P(heads)## and line 4 calculates ##P(heads|awake)##. This is standard frequentist probability.
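For readers without Mathematica, a rough Python analogue of the same simulation might look like this (my sketch, not Dale's original code; the variable names are mine):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
N = 1_000_000

# One record per (flip, day): (coin outcome, day, state)
records = []
for _ in range(N):
    coin = random.choice(['heads', 'tails'])
    records.append((coin, 'mon', 'awake'))  # always awakened Monday
    # awakened Tuesday only if the coin landed tails
    records.append((coin, 'tue', 'awake' if coin == 'tails' else 'asleep'))

# P(heads): fraction of flips landing heads (Monday rows, one per flip)
mondays = [r for r in records if r[1] == 'mon']
p_heads = sum(r[0] == 'heads' for r in mondays) / len(mondays)

# P(heads|awake): fraction of awake records where the coin was heads
awake = [r for r in records if r[2] == 'awake']
p_heads_given_awake = sum(r[0] == 'heads' for r in awake) / len(awake)

print(p_heads, p_heads_given_awake)  # close to 0.5 and 0.333
```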
 
  • #46
Moes said:
It would say that whether the coin landed heads or tails P(heads|awake)=1
Sorry I meant it would say that whether the coin landed heads or tails P(awake)=1

The results of the simulation are obvious.
 
  • #47
Moes said:
The results of the simulation are obvious.
Agreed.

So we are back to the question of why the conditional probability should be forbidden in the calculation of credence. Your justification above seems both invalid in general and inapplicable in the specific case of Sleeping Beauty.

The simple fact is that people’s beliefs are highly conditional. A definition of credence which seeks to restrict that seems obviously wrong.
 
  • #48
Dale said:
Agreed.

So we are back to the question of why the conditional probability should be forbidden in the calculation of credence. Your justification above seems both invalid in general and inapplicable in the specific case of Sleeping Beauty.

The simple fact is that people’s beliefs are highly conditional. A definition of credence which seeks to restrict that seems obviously wrong.
Using the anthropic principle, as I was saying, it comes out that there is a 100% chance that she would be awake whether the coin landed heads or tails. So how do you think this condition of being awake could make tails more probable than heads?
 
  • #49
Moes said:
Using the anthropic principle, as I was saying, it comes out that there is a 100% chance that she would be awake whether the coin landed heads or tails.
That is not correct. I am not sure how you come to that conclusion.

Moes said:
So how do you think this condition of being awake could make tails more probable than heads?
Because the Monte Carlo simulation shows it so. Again, the credences are disputable, but the probabilities are not.

I thought you said the results of the simulation were obvious. Then why do you say things that are obviously wrong?
 
  • #50
Dale said:
Moes said:
Using the anthropic principle, as I was saying, it comes out that there is a 100% chance that she would be awake whether the coin landed heads or tails.
That is not correct. I am not sure how you come to that conclusion.
I think that the intended meaning is that there's an awakening no matter what, and, erroneously, that the anthropic principle is something to which recourse may be had for the purpose of negating the consequence of there being either 1 or 2 awakenings for tails, and only 1 for heads.
 
  • #51
Dale said:
That is not correct. I am not sure how you come to that conclusion.
Dale said:
In fact, this type of reasoning is explicitly seen in discussions of fine tuning. In that context it is called the anthropic principle, and basically says that the relevant probability for our laws of physics is P(laws|intelligence) precisely because if there were no intelligent observers there would be nobody to calculate the probability of the laws of physics.
The same way you understand that in the discussions of fine tuning the condition “intelligence” can be added to figure out the probability of us living in a universe with our laws of physics, likewise in the Sleeping Beauty problem, when figuring out the probability of her being awake given that the coin landed heads, you should understand that we need to add the condition of intelligence.

So it comes out P(awake)= P(awake|intelligence)=P(awake|awake)=1

Therefore,
Moes said:
The anthropic principle is exactly what I think confirmes my claim. It would say that whether the coin landed heads or tails P(awake)=1 so neither is more probable.

I don‘t know how to write a condition on a condition, but it should come out that we are looking for P(heads) with the condition that she is awake, but only on condition that she is awake, which is the same as P(heads), which is 1/2.
 
  • #52
@Moes, please re-read that post, viewing it as if someone else had written it, and see if it doesn't look like nonsense to you. It's hard to avoid writing nonsense if you're trying to embrace a false idea. And if you don't know how to write conditional probabilities, then please learn how before writing them. Thanks.
 
  • #53
Moes said:
So it comes out P(awake)= P(awake|intelligence)=P(awake|awake)=1
That isn't what it says. So going back to this:
the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1
So if we denote a universe with laws of physics that are compatible with life as ##NiceLaws## and the fact that I exist as ##IExist## then the anthropic principle is just pointing out that ##P(NiceLaws|IExist)=1##. The point is that it is valid to form conditional probabilities, a statement that you have previously opposed.

For the Sleeping Beauty problem, the equivalent statement, in my opinion, would be that if she is being interviewed and asked about her credence then she is awake. So ##P(awake|interview)=1##.

Your statement that ##P(awake|awake)=1## is true, but it is tautologically true for any proposition, whereas the anthropic principle doesn't apply for all propositions. So although ##P(awake|awake)=1## is true, I would not associate its truth with the anthropic principle in any way.

Furthermore, the claim that ##P(awake)=P(awake|awake)## is simply wrong. The conditional and the unconditional probabilities are not the same, and even if they were they provide no information on ##P(heads|awake)## which is the relevant probability.

Moes said:
I don‘t know how to write a condition on the condition but it should come out that we are looking for P(heads) with the condition that she is awake but only on condition that she is awake which is the same as P(heads) which is 1/2.
There is no such thing as conditions on conditions. There are just multiple conditions. Multiple conditions are typically written as ##P(event|Condition1,Condition2)## or as ##P(event|Condition1 \cap Condition2)##. In general ##P(A|B,B) = P(A|B \cap B) =P(A|B) \ne P(A)##.
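Dale's last point can be checked with a toy example (my illustration, not from the thread): take a fair die, with ##A## = "rolled a six" and ##B## = "roll is even".

```python
from fractions import Fraction

rolls = set(range(1, 7))   # equally likely outcomes of a fair die
A = {6}                    # event A: rolled a six
B = {2, 4, 6}              # condition B: roll is even

def p(event, given=rolls):
    # probability of `event` among the equally likely outcomes in `given`
    return Fraction(len(event & given), len(given))

p_a = p(A)                  # P(A) = 1/6
p_a_given_b = p(A, B)       # P(A|B) = 1/3
p_a_given_bb = p(A, B & B)  # P(A|B,B): B ∩ B = B, so this equals P(A|B)

print(p_a, p_a_given_b, p_a_given_bb)  # 1/6 1/3 1/3
```

Repeating the condition changes nothing, but the conditional probability still differs from the unconditional one, matching ##P(A|B,B) = P(A|B) \ne P(A)##.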
 
  • #54
sysprog said:
I think that saying that SB's contingent awakening is "new information" to her introduces an inclarity. It's not new information with respect to her knowing on Sunday what her answer should or will be if/when she awakens on Monday or Tuesday. For that, she needs only the information that is given to her on Sunday.
What you are objecting to are elements Elga added to describe his solution to the problem he published. Not the problem as he published it, or the way I tried to address the problem he published. The only inclarity is due to trying to use these elements when they are not present in what I asked.

I said nothing about Sunday, Monday, or Tuesday. That's information Elga added as part of his solution. I said nothing about knowing, before the experiment starts, what you (original problem) or SB (Elga's solution) would answer. The new information I described is not "relative to" what you know before being put to sleep[1], it is about comparing the current state to the state you know, right now, was used to decide if a waking occurs.

When you (not SB) are awake in the experiment I described:
  1. You know that a decision was made, while you were asleep at time T0, about whether to wake you.
  2. You know that the state of a dime and a quarter, at time T0, was well-described by the sample space {(H,H), (T,H), (H,T), (T,T)} with probability distribution {1/4,1/4,1/4,1/4}.
  3. You know that the decision was made to wake you. That would not have happened if the actual state, at time T0, had been (H,H).
  4. So you know that the sample space that describes the state of the coins now, at time T1>T0, is {(T,H), (H,T), (T,T)}.
This information, about the difference in the probabilistic states at time T0 and T1, is new information. Since only the state (H,H) was affected, you can update the probability distribution to {1/3,1/3,1/3}. Since the only remaining state where the Quarter is currently showing Heads is (T,H), your degree of belief that the Quarter is showing Heads should be 1/3.

+++++

[1] The same is true in Elga's solution, which is where halfers go wrong. The new information is about what SB knows about the coin at the moment she answers the question, compared to what she knows was true when it was flipped. This is AFTER she was put to sleep. Elga's introduction of days apparently muddles that issue for some. That's why I used two coins, and asked about the current state compared to the state when the decision was made to awaken you.
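The two-coin experiment described above can also be run as a Monte Carlo check (my sketch of the protocol as I read it: the four T0 states are equally likely, and you are awakened unless the state is (H,H)):

```python
import random

random.seed(1)  # fixed seed so the run is reproducible
N = 1_000_000

awake_runs = 0
quarter_heads = 0
for _ in range(N):
    dime = random.choice('HT')
    quarter = random.choice('HT')
    if (dime, quarter) == ('H', 'H'):
        continue          # (H,H) at T0: no waking, run contributes nothing
    awake_runs += 1       # awakened: the three remaining states are equally likely
    quarter_heads += (quarter == 'H')

# Among awakenings, only (T,H) has the quarter showing heads
print(quarter_heads / awake_runs)  # close to 1/3
```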
 
  • #55
I agree with you regarding the correctness of the 1/3 conclusion.
 
  • #56
Dale said:
So if we denote a universe with laws of physics that are compatible with life as NiceLaws and the fact that I exist as IExist then the anthropic principle is just pointing out that P(NiceLaws|IExist)=1. The point is that it is valid to form conditional probabilities, a statement that you have previously opposed.
You are missing the point of the anthropic principle. We don’t need the anthropic principle to tell us that it is valid to form conditional probabilities. That is obvious. The point of the anthropic principle is to explain fine tuning. I don’t see how you’re explaining how we just happened to be in a universe that’s fine-tuned. Part of the problem is why this condition “IExist” actually exists despite the fact that it was so improbable.
Dale said:
Your statement that P(awake|awake)=1 is true, but it is tautologically true for any proposition, whereas the anthropic principle doesn't apply for all propositions. So although P(awake|awake)=1 is true, I would not associate its truth with the anthropic principle in any way.

Furthermore, the claim that P(awake)=P(awake|awake) is simply wrong. The conditional and the unconditional probabilities are not the same, and even if they were they provide no information on P(heads|awake) which is the relevant probability.
What I was trying to say is that when she wants to figure out the probability of her being awake she needs to account for the precondition that she must be awake to be trying to figure out the probability. So she needs to add a condition to P(awake) so that the probability that she is awake is P(awake|awake). If you understood the anthropic principle you should understand why it applies here.
Dale said:
There is no such thing as conditions on conditions
I’m not sure what that means. It definitely makes sense to ask what the probability of A is conditioned on the fact that B is true, but B is only true if B is true. It might be pointless, since it’s the same as asking what the probability of A is, but the statement makes sense.
sysprog said:
@Moes, please re-read that post, viewing it as if someone else had written it, and see if it doesn't look like nonsense to you. It's hard to avoid writing nonsense if you're trying to embrace a false idea. And if you don't know how to write conditional probabilities, then please learn how before writing them. Thanks.
I guess I just don’t know how to write and explain things well. But I fully understood Nick Bostrom’s argument for the halfer position which I think is exactly the way I understand it. If you are interested in understanding it maybe try reading his book.
 
  • #57
Moes said:
We don’t need the anthropic principle to tell us that it is valid to form conditional probabilities. That is obvious.
Then I don’t understand your previous statement:
Moes said:
Ok, one way I think I could explain it is that you can only let your belief depend on the conditions if the conditions could have been different.
The point of the anthropic principle is that conditional probabilities are indeed valid, even when such conditions could not be any other way, in contradiction to your earlier claim.

Moes said:
I’m not sure what that means. It definitely makes sense to ask what the probability of A is conditioned on the fact that B is true, but B is only true if B is true. It might be pointless, since it’s the same as asking what the probability of A is, but the statement makes sense.
No, the statement doesn’t make any sense. If you are indeed making a valid point here then you will need to find a statistical (not philosophical) reference that explains what you are trying to say. I have never heard of conditions on conditions, just multiple conditions.

Moes said:
So she needs to add a condition to P(awake) so that the probability that she is awake is P(awake|awake).
##P(awake|awake)=1## has no bearing on ##P(heads|awake)=1/3\ne P(heads)##
 
  • #58
Dale said:
I have never heard of conditions on conditions, just multiple conditions.
##(p\Rightarrow(a\Rightarrow b))\iff((p\wedge a) \Rightarrow b)##
 
  • #59
Maybe it would help if we drop the weird thing about being awake and having no memory.

Once a week someone flips a coin. If it's heads, they turn a light on and leave it on for one day, then go back and turn it off; if it's tails, they turn a light on for two days, then go back and turn it off. You're aware of this, but you don't remember which day of the week they flip the coin on. You walk into the room one day and see the light is on. What is the probability the coin flip was tails?
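This restatement is easy to simulate (my sketch; I've assumed the flip happens at the start of the lit period and that you walk in on a uniformly random day of the 7-day week):

```python
import random

random.seed(2)  # fixed seed so the run is reproducible
N = 1_000_000

light_on = 0
tails_and_on = 0
for _ in range(N):
    coin = random.choice('HT')
    lit_days = 1 if coin == 'H' else 2  # heads: lit one day; tails: lit two days
    day = random.randrange(7)           # the day of the week you walk in
    if day < lit_days:                  # the light happens to be on
        light_on += 1
        tails_and_on += (coin == 'T')

# Tails accounts for twice as many lit days, so the conditional answer is 2/3,
# even though the flip itself was 50/50
print(tails_and_on / light_on)  # close to 2/3
```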
 
  • #60
Very nice restatement. If that doesn't do it perhaps we should stop beating this dead horse.
I am amused that the re-opened thread essentially recapitulated the initial thread.
 
