Bayesian Conditionalization

  • #1
markology
Hello,

I'm stuck at the following problem on Bayesian conditionalization:

Prove that conditionalizing once on (A or B) is equal to conditionalizing first on A and then on B. As a hint, I was asked to define H(X) = P(X given A).

Any help would be appreciated.
 
  • #2
markology said:
Hello,

Prove that conditionalizing once on (A or B) is equal to conditionalizing first on A and then on B.

Did your text materials say that or is "conditionalizing" some terminology that you invented? Is the problem to prove that P(C | A and B) = P( (C|A)|B ) ? (i.e. "and" instead of "or")
 
  • #3
Sorry. I did mean "and", not "or". Also, "conditionalizing" is what is written on the homework.
 
  • #4
Since the usual term is "conditioning" rather than "conditionalizing", it's possible that your instructor or text has a specific technique in mind and has invented this term for it. In that case I'm not enough of a mind reader to know what the problem wants!

Can you give an example from your class notes where conditionalizing was used?
Perhaps the problem expects you to do some kind of mechanical manipulation that avoids detailed thought.

-------------

Some other thoughts (which may not be what the problem has in mind):

The problem of showing P( (C|A)|B ) = P(C | A and B) is actually hard to reason about in a logical fashion. (In an advanced course that uses measure theory, it is very hard! Let's assume you are in an introductory course.) The notation P(C) is usually used to denote the probability of an event, which is a set in some "space of outcomes". However, when we write P(C|A), the "C|A" has no convenient interpretation as an event in the "space of outcomes" where the events C and A take place.

P(C|A) is usually defined to be P(C and A) / P(A), so it's a P-of-something that isn't an event, at least not an event in the original space of outcomes.

If you want to think about C|A as an event, you must think about a different space of outcomes. The appropriate space of outcomes for C|A is the set A, not the original space. Within the space A, "C|A" is the set consisting of the outcomes in C that are also in A. As a set, C|A is the set (A and C), but we can't say P(C|A) = P(C and A), since, as events, C|A and (C and A) denote events in different spaces of outcomes.

We still face the task of interpreting notation like (C|A)|B. One line of reasoning is that the C|A changes the space of outcomes to A. Then the "|B" changes the space of outcomes to that part of B which lies in the space of outcomes A, i.e. the space of outcomes is (A and B). So (C|A)|B denotes the same thing as C|(A and B). Hence P( (C|A)|B ) = P(C|(A and B)).
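This set-level reasoning can be checked on a small finite space. Here is a minimal sketch in Python, using a uniform space of outcomes 1 through 12 and three made-up example events C, A, and B (the specific sets are illustrative, not from the problem):

```python
from fractions import Fraction

# A small uniform space of outcomes; the events are hypothetical examples.
omega = set(range(1, 13))
A = {1, 2, 3, 4, 5, 6}
B = {4, 5, 6, 7, 8, 9}
C = {2, 4, 6, 8, 10, 12}

def prob(event, space):
    """Probability of `event` under the uniform measure on `space`."""
    return Fraction(len(event & space), len(space))

# Conditionalizing once on (A and B): work directly in the space A ∩ B.
once = prob(C, A & B)

# Conditionalizing first on A (shrinking the space of outcomes to A),
# then on B (shrinking the space A further to A ∩ B).
space_after_A = A
space_after_B = space_after_A & B
twice = prob(C, space_after_B)

print(once, twice)  # both routes give 2/3
```

Here A ∩ B = {4, 5, 6} and C picks out {4, 6} within it, so both routes give 2/3, consistent with reading (C|A)|B as C|(A and B).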

The hint to let H(X) = P(X given A) may be some way to detour around having to interpret notation like P( (X|A)|B ). I'm not sure why this is necessary.
 
  • #5


Hello, thank you for reaching out for assistance with your problem on Bayesian conditionalization. This is a fundamental concept in Bayesian statistics and understanding it can greatly enhance your ability to make informed decisions based on data.

To prove that conditionalizing once on (A and B) is equal to conditionalizing first on A and then on B, we can use the definition of conditional probability together with the hint. (As noted earlier in the thread, the problem should read "and" rather than "or".)

Following the hint, define H(X) = P(X|A) = P(X and A) / P(A), assuming P(A) > 0. The function H is itself a probability measure: it assigns to each event X the probability of X given that A has occurred. Conditionalizing first on A simply replaces the original measure P by H.

Conditionalizing next on B means conditioning H on B, using the same definition of conditional probability:

H(X|B) = H(X and B) / H(B).

Now expand the numerator and the denominator using the definition of H:

H(X and B) = P(X and B and A) / P(A), and H(B) = P(B and A) / P(A).

Dividing, the factor P(A) cancels:

H(X|B) = P(X and A and B) / P(A and B) = P(X | A and B),

assuming P(A and B) > 0 so that every conditional probability here is defined. The right-hand side is exactly the result of conditionalizing once on (A and B). Therefore conditionalizing first on A and then on B yields the same probabilities as conditionalizing once on (A and B), which is what we wanted to prove.
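The claim that conditionalizing first on A and then on B agrees with conditionalizing once on (A and B) can also be checked numerically. Here is a sketch in Python with a made-up non-uniform distribution (the weights and events are arbitrary illustrations), using the hinted function H(X) = P(X|A):

```python
# Outcomes 0..5 with made-up non-uniform weights (an illustrative example).
weights = {0: 0.10, 1: 0.25, 2: 0.15, 3: 0.20, 4: 0.05, 5: 0.25}
A = {0, 1, 2, 3}
B = {2, 3, 4}
X = {1, 3, 5}

def P(event):
    """Probability of an event under the weighted measure."""
    return sum(weights[w] for w in event)

def H(event):
    """The hinted function: H(Y) = P(Y | A), itself a probability measure."""
    return P(event & A) / P(A)

# Conditionalizing once on (A and B).
once = P(X & A & B) / P(A & B)

# Conditionalizing first on A (giving H), then conditioning H on B.
twice = H(X & B) / H(B)

print(abs(once - twice) < 1e-9)  # prints True: the two routes agree
```

The factor P(A) cancels in the two-step route, which is why the agreement holds for any choice of weights, not just these.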

I hope this explanation helps you better understand Bayesian conditionalization. If you have any further questions, please feel free to reach out. Best of luck with your studies!
 

1. What is Bayesian Conditionalization?

Bayesian Conditionalization is a mathematical framework used to update beliefs or probabilities in light of new evidence. It is based on Bayes' theorem, which states that the posterior probability of an event is equal to the prior probability of the event multiplied by the likelihood of the evidence given the event, divided by the overall probability of the evidence.
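The theorem can be illustrated with a small worked example in Python. The numbers here are made up for illustration: a diagnostic test with 99% sensitivity and 95% specificity for a condition with 1% prevalence.

```python
# Hypothetical numbers for illustration only.
prior = 0.01          # P(condition), the prior probability
sensitivity = 0.99    # P(positive | condition), the likelihood
specificity = 0.95    # P(negative | no condition)

# Bayes' theorem: posterior = prior * likelihood / P(evidence),
# where P(evidence) is found by summing over both hypotheses.
p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 3))  # prints 0.167
```

Even with an accurate test, a positive result only raises the probability from 1% to about 17%, because the prior is so small; this is the characteristic interplay of prior and likelihood that conditionalization captures.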

2. How does Bayesian Conditionalization differ from other methods of updating beliefs?

Unlike other methods, Bayesian Conditionalization takes into account both prior beliefs and new evidence to update probabilities. It also allows for the incorporation of subjective beliefs and the revision of those beliefs as new evidence is obtained.

3. What is the role of prior probabilities in Bayesian Conditionalization?

Prior probabilities represent an individual's initial beliefs or knowledge about a particular event before any new evidence is obtained. These probabilities are updated using Bayesian Conditionalization to reflect the impact of the new evidence.

4. Can Bayesian Conditionalization be applied to any type of problem?

Yes, Bayesian Conditionalization can be applied to any type of problem where there is uncertainty and the need to update beliefs or probabilities based on new evidence.

5. Is Bayesian Conditionalization a deterministic or probabilistic method?

Bayesian Conditionalization is a probabilistic method, as it involves assigning probabilities to different events and updating those probabilities based on new evidence. However, the results can become more certain as more evidence is obtained.
