Question regarding probability of observation

  • Context: Undergrad
  • Thread starter: sri_newbie
  • Tags: Observation, Probability
Discussion Overview

The discussion revolves around the interpretation of conditional probabilities in the context of two binary random variables, A and B, where B is dependent on A. Participants explore the implications of observing these variables and how it affects the calculation of probabilities, particularly using Bayes' theorem.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • The initial poster questions whether observing B=T implies P(B=T)=1.0, contrasting it with the observation of A=T where P(A=T) becomes 1.0.
  • Some participants suggest that if A is observed as true, then P(A) becomes 1.0, leading to confusion about the meaning of probabilities after observation.
  • One participant proposes constructing a probability tree to analyze the situations that could lead to B being true, which may help clarify the relationships between A and B.
  • There is a discussion about the meaning of P(A) after observing A, with some arguing that it loses its significance as a standalone probability.
  • Another participant illustrates a coin analogy to explain how observations affect probabilities, emphasizing the need to consider the reverse process of observation.

Areas of Agreement / Disagreement

The discussion reflects a lack of consensus on the interpretation of observed probabilities and how they should be applied in calculations. Participants express differing views on the implications of observations on the probabilities of A and B.

Contextual Notes

Participants have not reached a resolution on how to handle the probabilities after observations, and there are unresolved questions about the definitions and implications of conditional probabilities in this context.

Who May Find This Useful

This discussion may be useful for individuals interested in probability theory, particularly those exploring conditional probabilities and Bayesian reasoning in the context of dependent random variables.

sri_newbie
Hi Everyone,

I am a newbie in probability theory, and here is my question:

Suppose we have two binary random variables A and B, where B depends on A. We then have a probability table P(A) and a conditional probability table (CPT) P(B|A) with the following parameters:

A P(A)
----------
F 0.3
T 0.7


A P(B=T|A)
------------------
F 0.4
T 0.6

Suppose that A=T is observed. Now the probability of A being True is 1.0 instead of 0.7, and P(A=F) = 0.0 instead of 0.3. Having observed A=T, the probability of B=T is 0.6, found by just looking up the corresponding row in B's CPT. Now consider a different scenario where we observe only B=T. My first question is,

1) Is P(B=T) going to be 1.0, since we have observed it, just as in the first scenario of observing A=T? I know that if nothing is observed, then P(B=T) is calculated as P(A=T) x P(B=T|A=T) + P(A=F) x P(B=T|A=F), which is the marginal probability of B=T.

My second question follows from the first one:
2) If P(B=T) = 1.0 when B=T has been observed, then when calculating the posterior probability of A=T given B=T, i.e. P(A=T|B=T), why don't we put P(B=T) = 1.0 in the denominator of the following Bayes' rule:


P(A=T|B=T) = P(A=T) x P(B=T|A=T) / P(B=T)

Why do we use the marginal value of P(B=T) [when nothing is observed] computed by the expression
P(A=T) x P(B=T|A=T) + P(A=F) x P(B=T|A=F)?
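To make the question concrete with the numbers above (a worked check I have added; the arithmetic follows directly from the tables):
$$\small P(B{=}T) = 0.7 \times 0.6 + 0.3 \times 0.4 = 0.54, \qquad P(A{=}T|B{=}T) = \frac{0.7 \times 0.6}{0.54} \approx 0.78,$$
whereas putting 1.0 in the denominator would instead give 0.42.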

Thanks in advance.
newbie
 
OK - denote ##\small A## meaning ##\small A\leftarrow T## and ##\small \lnot A## meaning ##\small A \leftarrow F##.
$$\small P(A)=0.7\\
\small P(\lnot A)=0.3\\
\small P(B|A)=0.6\\
\small P(B|\lnot A)=0.4$$
So, if ##\small A## then ##\small P(B)=0.6##
We want to ask, if ##\small B## then what is ##\small P(A)## ?

Construct a tree and figure out how many situations could lead to B (=T).
That should allow you to confirm or refute your assertions.
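For example, the tree can be enumerated in a few lines of code (a minimal sketch using the thread's numbers; the variable names are my own):

```python
# Tree for the two-coin setup: first branch on A, then on B given A.
# Numbers from the thread: P(A)=0.7, P(B|A)=0.6, P(B|not A)=0.4.
p_a = 0.7
p_b_given_a = 0.6
p_b_given_not_a = 0.4

# The two branches of the tree that end in B:
p_a_and_b = p_a * p_b_given_a                # A then B:     0.7 * 0.6 = 0.42
p_not_a_and_b = (1 - p_a) * p_b_given_not_a  # not-A then B: 0.3 * 0.4 = 0.12

# Marginal P(B) is the total weight of the branches reaching B,
# and Bayes' rule is the share of that weight coming through A.
p_b = p_a_and_b + p_not_a_and_b
p_a_given_b = p_a_and_b / p_b

print(p_b, p_a_given_b)  # ≈ 0.54 and ≈ 0.778, up to float rounding
```

Note that 0.54 is exactly the marginal expression from the original post, recovered as the sum over branches of the tree.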
 
Thank you Simon for your reply. If A=T then is P(A) = 1.0? I can understand that if A=T then P(B) = 0.6. I am just thinking what the probability of an observed event should be, not the unobserved event given some observation. This is actually my first question.
 
My response was in two parts.
The first part hoped to tidy up the notation in a way that may help you think about the problem.
The second part hoped to point you in the direction of finding the answers to your questions.

If you observed A, then P(A) no longer has any meaning by itself.
Subsequent observations may see A or not depending on the nature of the system.

If you tossed a (biased) coin and left it there, then subsequent measurements will be the same as the first one. E.g. if A="the coin shows heads side up when I look at it" and the result was A, then you can say that P(A)=1 for subsequent measurements (observations of the coin without tossing it).

i.e. P(A|A)=1.

You use coin A to select which of two possible B-coins to pick.
You toss the indicated one ...

But you wanted to consider the consequences of doing the math in reverse.
To understand how that works, you need to think what the reverse process is.
The reverse experiment would be that someone else follows the procedure and I see only the result on the B coin ... what does this tell me about the probable states of the A coin?
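That reverse experiment can also be simulated directly (a rough sketch under the thread's numbers; the seed and trial count are arbitrary choices of mine):

```python
import random

random.seed(0)
trials = 100_000
b_count = 0   # runs where the B coin showed heads (B=T)
a_and_b = 0   # of those, runs where the A coin was also heads (A=T)

for _ in range(trials):
    a = random.random() < 0.7    # toss the A coin: P(A=T) = 0.7
    p_b = 0.6 if a else 0.4      # A selects which B coin gets tossed
    b = random.random() < p_b    # toss the selected B coin
    if b:
        b_count += 1
        if a:
            a_and_b += 1

# Seeing only B=T, the overall fraction of B=T runs estimates the
# marginal P(B=T) ≈ 0.54, and the share of those runs with A=T
# estimates the posterior P(A=T|B=T) ≈ 0.78.
print(b_count / trials, a_and_b / b_count)
```

The marginal 0.54 is the long-run frequency of seeing B=T at all, which is why Bayes' rule divides by it rather than by 1.0: the conditioning restricts attention to those runs.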

If you were thinking of a different experiment, then please describe it.
 
