I've been reading a bit about Bayesian statistics and am a little confused. I never really covered Bayesian methods as an undergrad, and as a physicist I haven't had to learn them since.
I have been trying to work through more interesting examples than the trivial coin-flipping / ball-drawing ones I've seen in books, but am apparently a little confused about how to interpret everything.
Let's suppose I have two events,
H = I do something.
E = someone tries to stop me from doing it.
Let my prior probability of doing something be P(H) = h.
Then, the probability I do something given that someone tries to stop me is,
P(H|E) = \frac{P(E|H)P(H)}{P(E)}
where
P(E) = P(E|H)P(H) + P(E|!H)P(!H), by the law of total probability, since the only options are that I do it or I don't (H and !H).
Let's call P(E|H) = x, P(E|!H)=y.
Then,
P(H|E) = \frac{P(E|H)P(H)}{P(E|H)P(H) + P(E|!H)P(!H)} = \frac{h x}{h x + y (1-h)}.
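To make the arithmetic concrete, here is a quick numerical check of that formula (the function name `posterior` and the numbers are arbitrary, purely for illustration):

```python
def posterior(h, x, y):
    """Posterior P(H|E) from the prior h = P(H) and the
    likelihoods x = P(E|H), y = P(E|!H), via Bayes' theorem."""
    return (x * h) / (x * h + y * (1 - h))

# Arbitrary illustrative values with prior h = 0.5:
print(posterior(0.5, 0.8, 0.2))  # 0.8 -> observing E raises the probability of H
print(posterior(0.5, 0.2, 0.8))  # 0.2 -> observing E lowers it
```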
But this means that if x \approx 1 and y \approx 0, then P(H|E) \approx 1 > h.
Recall that P(E|H) = x is the probability they try to stop me given that I do something, and P(E|!H) = y is the probability they try to stop me given that I don't.
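For example, plugging numbers near this limit into the same posterior function sketched above (again, purely illustrative values):

```python
# Near the limit x ≈ 1, y ≈ 0: they almost always try to stop me when I act,
# and almost never try when I don't.
print(posterior(0.5, 0.99, 0.01))  # ~0.990, far above the prior h = 0.5
print(posterior(0.1, 0.99, 0.01))  # ~0.917, again well above the prior h = 0.1
```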
So the probability that I do something increases given that they try to stop me from doing it, provided that they wouldn't try to stop me if I weren't doing it?
This doesn't make sense to me, because it seems to contradict the whole purpose of event E: someone trying to stop me should make it less likely that I do the thing.
Where has my interpretation gone wrong? How would I properly model something like this?