Stephen Tashi said:
The fact that a probability is called a "Bayesian" probability does not exempt it from being defined on a probability space.
But it means that any question you're uncertain about, such as "Is today Monday?", can have an associated probability. The issue is then just to assign probabilities to the exclusive and exhaustive cases: Monday & Heads, Monday & Tails, Tuesday & Heads, Tuesday & Tails. So your claim that Monday isn't an event is not true for Bayesian probability: any statement with uncertainty can be an event.
As you know, the usual scenario in applying Bayesian probability is to have a "prior" distribution on some probability space and then to compute a "posterior distribution" by conditioning on some event that can be defined in that space.
Yes, the obvious prior for the coin toss is P(H) = P(T) = 1/2. The difficulty is choosing priors P(Monday) and P(Tuesday). However, there is really only one choice that makes sense: P(Monday) = P(Tuesday) = 1/2 (argument at the end).
Given these priors, we compute:
P(Awake) = P(Monday) + P(Tuesday & T) = P(Monday) + P(Tuesday) P(T)
(she's always awake on Monday, and she's awake on Tuesday only if the coin landed tails; the product form uses the fact that the day is independent of the coin under these priors, which the argument at the end also supports).
So we conclude: P(Awake) = 1/2 + (1/2)(1/2) = 3/4
Now finally we compute:
P(H | Awake) = P(H & Awake)/P(Awake)
Since the event H & Awake can occur only on Monday, we can write: P(H & Awake) = P(H & Monday) = P(H) P(Monday) = 1/4 (again using independence).
So we conclude:
P(H | Awake) = (1/4)/(3/4) = 1/3
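To make the arithmetic concrete, here is a minimal sketch (mine, not from the thread) that enumerates the four exclusive and exhaustive cases under the priors above and recomputes P(Awake) and P(H | Awake). The case labels and the `awake` helper are just names I chose for illustration.

```python
# Minimal sketch: enumerate the four (day, coin) cases under the priors above
# and recompute P(Awake) and P(H | Awake). Names are illustrative, not canonical.
from fractions import Fraction

half = Fraction(1, 2)

# Prior probability of each case: day and coin independent, each 1/2.
prior = {
    ("Monday", "H"): half * half,
    ("Monday", "T"): half * half,
    ("Tuesday", "H"): half * half,
    ("Tuesday", "T"): half * half,
}

# Sleeping Beauty is awake on Monday always, and on Tuesday only if tails.
def awake(day, coin):
    return day == "Monday" or coin == "T"

p_awake = sum(p for (day, coin), p in prior.items() if awake(day, coin))
p_heads_and_awake = sum(p for (day, coin), p in prior.items()
                        if coin == "H" and awake(day, coin))

print(p_awake)                      # 3/4
print(p_heads_and_awake / p_awake)  # 1/3
```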
Here's an argument for why P(Monday) = 1/2:
First, note that if you told Sleeping Beauty that today is Monday, then she would have no reason to think heads more likely than tails, since the difference between them only shows up on Tuesday. So P(H | Monday) = 1/2.
Second, note that if you told Sleeping Beauty that the coin result was tails, then she would have no reason to think Monday more likely than Tuesday, since they are only different in the case of heads. So P(Monday | T) = 1/2.
Now, compute the conditional probability: P(Monday | H)
We can use Bayes' theorem to write that as:
P(Monday | H) = P(H | Monday) P(Monday)/P(H)
But we already have P(H | Monday) = P(H) = 1/2. So we get:
P(Monday | H) = P(Monday)
Finally, we compute:
P(Monday) = P(Monday | H) P(H) + P(Monday | T) P(T) = (1/2) P(Monday) + 1/4
which implies that P(Monday) = 1/2
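As a quick algebra check on that last step (a sketch of my own using sympy, not something from the thread), solving the self-consistency equation for P(Monday) does give 1/2:

```python
# Sketch: solve the self-consistency equation from the argument above,
#   P(Monday) = (1/2) P(Monday) + 1/4,
# for P(Monday). The symbol name p_monday is my own shorthand.
from sympy import Eq, Rational, solve, symbols

p_monday = symbols("p_monday")
equation = Eq(p_monday, Rational(1, 2) * p_monday + Rational(1, 4))
print(solve(equation, p_monday))  # [1/2]
```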
As far as I can see, those statements have no mathematical interpretation in the theory of probability. The usual measure of "uncertainty" is the standard deviation of a random variable, which is not a probability. People do say informally that a probability is a "measure of uncertainty", but what is the mathematical definition for a "measure of uncertainty"?
It's subjective. There isn't a right or wrong answer, except that the assignment has to obey the laws of conditional probability. However, in many cases of interest (such as this one), we can appeal to symmetry: if there are two possibilities and there is no reason to think one more likely than the other, then each should get probability 1/2. That principle alone is enough to pin down unique probabilities in this case.
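To illustrate that claim, here is a small sketch (again mine, using sympy; the names mh, mt, th, tt for the four cases are made up) showing that the symmetry conditions stated above, together with the fair coin and normalization, force a unique assignment in which every case gets probability 1/4:

```python
# Sketch: check that the symmetry constraints pin down a unique prior.
# mh, mt, th, tt stand for P(Monday & H), P(Monday & T), P(Tuesday & H),
# P(Tuesday & T); the names are my own shorthand.
from sympy import Eq, Rational, solve, symbols

mh, mt, th, tt = symbols("mh mt th tt")

constraints = [
    Eq(mh + mt + th + tt, 1),     # the four cases are exclusive and exhaustive
    Eq(mh + th, Rational(1, 2)),  # fair coin: P(H) = 1/2
    Eq(mh, mt),                   # P(H | Monday) = 1/2
    Eq(mt, tt),                   # P(Monday | T) = 1/2
]

print(solve(constraints, [mh, mt, th, tt]))
# {mh: 1/4, mt: 1/4, th: 1/4, tt: 1/4}
# Hence P(Monday) = mh + mt = 1/2 and P(H | Awake) = mh / (mh + mt + tt) = 1/3.
```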