Bayesian Inference: Understanding the Relationship between Events and Causality

In summary, the thread discusses Bayesian statistics and the confusion that can arise when interpreting Bayes' law in a concrete example. The original poster tries to build a more interesting example than the usual coin-flipping and ball-drawing ones but cannot make sense of the resulting posterior. The other posters explain the correct interpretation of Bayes' law, distinguish statistical dependence from causation, and note that Bayesian inference assumes the observation does not directly affect the hidden state; modeling the scenario properly requires separate variables for the attempt, the intervention, and the outcome.
  • #1
cfgauss
I've been reading some about Bayesian statistics, and am a little confused. I never really covered Bayesian stuff as an undergrad, and am now a physicist, so I haven't had to learn it.

I have been trying to think of things in terms of more interesting examples than the trivial coin flipping / ball grabbing ones I've seen in books, but am apparently a little confused about how to interpret everything.

Let's suppose I have two events,
[tex]H[/tex] = I do something.
[tex]E[/tex] = someone tries to stop me from doing it.

Let my prior probability of doing something, [tex]P(H)=h[/tex].
Then, the probability I do something given that someone tries to stop me is,
[tex]P(H|E) = \frac{P(E|H)P(H)}{P(E)}[/tex]
where
[tex]P(E) = P(E|H)P(H) + P(E|!H)P(!H)[/tex] because the only options are me doing it or not ([tex]H[/tex] and [tex]!H[/tex]).

Let's call [tex]P(E|H) = x[/tex], [tex]P(E|!H)=y[/tex].

Then,
[tex]P(H|E) = \frac{P(E|H)P(H)}{P(E|H)P(H) + P(E|!H)P(!H)} = \frac{h x}{h x + y (1-h)}[/tex].

But, this means that if [tex]x\approx 1, y \approx 0[/tex], then [tex]P(H|E) > h[/tex].
[tex]P(E|H) = x = [/tex] the probability they try to stop me, given I do something,
[tex]P(E|!H) = y = [/tex] the probability they try to stop me, given I don't do something.

So, the probability of me doing something increases given that they try to stop me from doing it, provided that they would not try to stop me if I didn't do it?
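This can be checked numerically. A minimal sketch (plain Python, with illustrative values for h, x, and y that are not from the text):

```python
def posterior(h, x, y):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|!H)P(!H)]."""
    return (x * h) / (x * h + y * (1 - h))

# If they almost always try to stop me when I act (x ~ 1) and almost
# never otherwise (y ~ 0), the posterior exceeds the prior h:
print(posterior(h=0.3, x=0.95, y=0.05))  # → about 0.89, well above 0.3
```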

This doesn't make any sense to me, because it clearly contradicts the purpose of event E.

Where has my interpretation gone wrong? How would I properly model something like this?
 
  • #2
That's not really a correct interpretation of Bayes' law. Here's how I interpret the statement:

P(H|E) = probability of H given observation E

That is, Bayes law tells you how your knowledge of H changes due to making observation E.

For example, let's say you have two coins. One is a perfectly fair coin with a 50% chance of heads and a 50% chance of tails. The other is a weighted coin with an 80% chance of heads and a 20% chance of tails. Unfortunately, you forgot which coin is which, so you decide to perform an experiment. You pick up one of the coins, flip it 10 times, and get 7 heads and 3 tails. What is the probability that the coin you picked up is the weighted coin? Here your H is the statement "the coin is weighted" while E is the observation of 7 heads and 3 tails.

I'll leave the answer as an exercise to you. If you need help, I can post the solution later.
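For checking your work on the exercise afterward, the posterior can be computed with a short sketch (assuming equal priors on the two coins; the binomial coefficient is omitted because it cancels in the ratio):

```python
def coin_posterior(heads=7, tails=3, prior_weighted=0.5):
    like_weighted = 0.8 ** heads * 0.2 ** tails  # likelihood under weighted coin
    like_fair = 0.5 ** (heads + tails)           # likelihood under fair coin
    num = like_weighted * prior_weighted
    # Bayes' rule: P(weighted | data)
    return num / (num + like_fair * (1 - prior_weighted))

print(coin_posterior())
```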
 
  • #3
Yes, I've seen plenty of those examples done, and they seem to make sense to me.

But there's nothing stopping me from rewording my questions as:
H = Someone does something
E = I observe someone trying to stop them,
P(H|E) = probability of H (someone doing something) given observation E (I observe someone trying to stop them).

This has the same form as what you say the correct interpretation should be, but my example still leads to conclusions that don't make sense, even when written this way.
 
  • #4
Here's how I interpret the results of your first calculation (with p(E|H) = 1 and p(E|!H)=0 for simplicity):

Person A will try to stop person B whenever person B performs an action. You have no knowledge of what person B is doing, but if you observe person A going to stop person B, then you know that person B is performing his action.

Now, if you interpret H as performing an action successfully, this also makes sense. Since p(E|!H) = 0, person A will never try to stop person B if person B is unsuccessful. Therefore, whenever person A tries to stop person B, you know that person B will be successful. However, this model is implausible because it assumes that person A somehow knows in advance whether person B will be successful.

[edit adding the following paragraph]
I think the problem here is that Bayesian inference depends on the assumption that the observations have no direct effect on the hidden state H. H is assumed to be in the same state before and after the observation E, and observation E affects only your information about the state of H. I don't have any insight into how this assumption arises in the derivation of Bayes' law, but this is my best guess at why you are confused.
 
  • #5
cfgauss said:
Then,
[tex]P(H|E) = \frac{P(E|H)P(H)}{P(E|H)P(H) + P(E|!H)P(!H)} = \frac{h x}{h x + y (1-h)}[/tex].

But, this means that if [tex]x\approx 1, y \approx 0[/tex], then [tex]P(H|E) > h[/tex].

[tex]P(E|H) = x = [/tex] the probability they try to stop me, given I do something,
[tex]P(E|!H) = y = [/tex] the probability they try to stop me, given I don't do something.

So, the probability of me doing something increases given that they try to stop me from doing it, provided that they would not try to stop me if I didn't do it?

Right. If they only try to stop you if you try, and we know that they *did* try to stop you, it follows that you must have tried to do something. I don't see why this bothers you. It's not the probability of "you doing something in the first place" that changes; that's still just h. It's the "posterior probability that you did something given that they tried to stop you" that comes out higher than h. By specifying that they did try to stop you, we've removed part of the possible sample space from the problem (the part where you don't do anything).
 
  • #6
Ygggdrasil said:
I think the problem here is that Bayesian inference depends on the assumption that the observations have no direct effect on the hidden state H. H is assumed to be in the same state before and after the observation E, and observation E affects only your information about the state of H. I don't have any insight into how this assumption arises in the derivation of Bayes' law, but this is my best guess at why you are confused.

Do you have any insight on a good way to model this kind of situation, then?

quadraphonics said:
It's the "posterior probability that you did something given that they tried to stop you" that comes out higher than h.

I'm confused about what you're saying. My confusion is because this seems to imply that trying to stop them *increases* the probability that they will succeed. Which really would mean any attempt to stop them would only be counterproductive.
 
  • #7
cfgauss said:
I'm confused about what you're saying. My confusion is because this seems to imply that trying to stop them *increases* the probability that they will succeed. Which really would mean any attempt to stop them would only be counterproductive.

I think the issue here is that you haven't defined the variables to correspond to what you seem to want to describe. The one variable is "I do something," not "I try to do something." It's only a question of whether you actually bother to do it in the first place. There's no analysis to be done of how successful the attempts to stop you are, as the way the problem is defined implies that they are *always* unsuccessful at stopping you.

On the other hand, if what you actually meant was not "I do something" but "I try to do something," then you have a similar problem. This leaves open the possibility that they succeed in stopping you, but since there is no variable assigned to the final outcome, there's no way to make statements about how different actions affect the final outcome. You may or may not try to do something, and they may or may not try to stop you, but there would be no mention of the actual outcome, and so you can't get any paradoxes about attempting to stop you being counterproductive.

The bottom line is that you need three separate variables to describe the situation you are talking about. One would be "I try to do something." The second would be "they try to stop me." And the third would be "I succeed." Then you can pose questions about how different strategies would affect the final outcome, and the answers will make sense.

It probably would have been better to just stick with flipping coins or drawing colored balls out of a hat. There's a reason those examples are so widely employed.
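A minimal numerical sketch of the three-variable setup, with made-up probabilities (none of these numbers come from the thread), shows why intervening is not counterproductive once the outcome is a separate variable:

```python
# Illustrative numbers only.
p_try = 0.5            # P(I try to do something)
p_win_stopped = 0.3    # P(I succeed | I try and they try to stop me)
p_win_unopposed = 0.8  # P(I succeed | I try and nobody intervenes)

# I can only succeed if I try, so compare the opponent's two strategies
# by the overall probability that I succeed:
p_succeed_if_always_stopped = p_try * p_win_stopped    # they always intervene
p_succeed_if_never_stopped = p_try * p_win_unopposed   # they never intervene

# Intervening lowers the overall success probability, as expected.
print(p_succeed_if_always_stopped, p_succeed_if_never_stopped)
```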
 
  • #8
cfgauss said:
Do you have any insight on a good way to model this kind of situation, then?

You'd probably have to have two variables assigned to your hidden states. For example, you can describe the hidden state in terms of (x,y) where x = {you are trying to perform an action, you are NOT trying to perform an action} and y = {someone is trying to stop you, someone is NOT trying to stop you}, and assign prior probabilities and probabilities of success to each hidden state.
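As a sketch of this suggestion (with entirely made-up priors and success probabilities), the hidden state can be a pair, and each joint state can carry its own probability of success:

```python
# Hypothetical priors over the four joint hidden states (they sum to 1).
prior = {
    ("trying", "opposed"): 0.30,
    ("trying", "unopposed"): 0.20,
    ("not trying", "opposed"): 0.05,
    ("not trying", "unopposed"): 0.45,
}
# Assumed probability of success in each hidden state;
# no attempt means no success.
p_success = {
    ("trying", "opposed"): 0.3,
    ("trying", "unopposed"): 0.8,
    ("not trying", "opposed"): 0.0,
    ("not trying", "unopposed"): 0.0,
}
# Marginal probability of success under these priors:
overall = sum(prior[s] * p_success[s] for s in prior)
print(overall)
```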
 
  • #9
cfgauss said:
I'm confused about what you're saying. My confusion is because this seems to imply that trying to stop them *increases* the probability that they will succeed. Which really would mean any attempt to stop them would only be counterproductive.
You are confusing the occurrence of the event E with the existence of a causal link between E and H. H here more or less causes E; E does not cause H. However, observing E is nonetheless a very good indicator that H has occurred.
 

1. What is Bayesian statistics?

Bayesian statistics is a statistical approach that uses prior knowledge or beliefs about a topic, along with observed data, to make statistical inferences about that topic. It is based on Bayes' theorem, which calculates the probability of an event occurring based on prior knowledge and new evidence.

2. How is Bayesian statistics different from traditional statistics?

Unlike traditional statistics, which relies solely on observed data, Bayesian statistics incorporates prior knowledge or beliefs into the analysis. This allows for a more flexible and personalized approach to statistical inference, as it takes into account an individual's prior knowledge and beliefs about a topic.

3. What are the advantages of using Bayesian statistics?

Some advantages of using Bayesian statistics include the ability to incorporate prior knowledge, the flexibility to adjust and update beliefs as new evidence is obtained, and the ability to quantify uncertainty in the form of probabilities. This approach can also be useful for small sample sizes and complex data structures.

4. What are the limitations of Bayesian statistics?

One limitation of Bayesian statistics is that it requires prior knowledge or beliefs about a topic, which may be subjective and can vary among individuals. Additionally, it can be computationally intensive and may not be suitable for large datasets. It also relies on assumptions about the underlying data distribution.

5. In what fields is Bayesian statistics commonly used?

Bayesian statistics is commonly used in fields such as biology, medicine, psychology, and finance. It is also increasingly being used in areas such as machine learning, natural language processing, and artificial intelligence.
