Law of total expectation (VECTOR case)

  • Thread starter: kingwinner
  • Tags: Expectation, Law
kingwinner
" The law of total expectation is: E(Y) = E[E(Y|X)].
It can be generalized to the vector case: E(Y) = E[E(Y|X1,X2)].

Further extension:
(i) E(Y|X1) = E[E(Y|X1,X2)|X1]
(ii) E(Y|X1,X2) = E[E(Y|X1,X2,X3)|X1,X2] "
====================

I understand the law of total expectation itself, but I don't understand the generalizations to the vector case and the extensions.

1) Is E(Y|X1,X2) a random variable? Is E(Y|X1,X2) a function of both X1 and X2? i.e. E(Y|X1,X2)=g(X1,X2) for some function g?

2) Are (i) and (ii) direct consequences of the law of total expectation? (Are they related at all?) I don't see how (i) and (ii) can be derived as special cases of it... can someone please show me how?

Any help is much appreciated!
 
kingwinner said:
I understand the law of total expectation itself, but I don't understand the generalizations to the vector case and the extensions.

1) Is E(Y|X1,X2) a random variable? Is E(Y|X1,X2) a function of both X1 and X2? i.e. E(Y|X1,X2)=g(X1,X2) for some function g?

2) Are (i) and (ii) direct consequences of the law of total expectation? (Are they related at all?) I don't see how (i) and (ii) can be derived as special cases of it... can someone please show me how?

E(Y|X) and E(Y|X1,X2) are tricky to define for general random variables. Which definition are you working with?
 
bpet said:
E(Y|X) and E(Y|X1,X2) are tricky to define for general random variables. Which definition are you working with?

E(Y|X=x) = ∫_{-∞}^{∞} y f(y|x) dy
for continuous random variables X and Y (similarly in the discrete case).

General definition:
E(Y|A)=E(Y I_A)/E(I_A)=E(Y I_A)/P(A)
where I_A is the indicator function of A.
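For example, taking Y to be a fair die roll and A the event that Y is even, this definition gives

```latex
E(Y \mid A) \;=\; \frac{E(Y\,I_A)}{P(A)} \;=\; \frac{(2+4+6)/6}{1/2} \;=\; 4.
```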

If it's too hard to show in general, can you please show me how we can derive (i) and (ii) from the law of total expectation for the case of CONTINUOUS random variables?

Thanks!
 
kingwinner said:
... for the case of CONTINUOUS random variables?

Even then it's tricky - try some examples first with X1 and X2 i.i.d., and then with X1 = X2, and you'll see how those definitions break down. The only robust proofs I've seen work with an implicit definition of E(Y|X), etc. Maybe someone else here can suggest a simpler way?
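As a concrete example of the kind suggested above: for discrete variables, conditioning is just averaging over a subset of the sample, so the two sides of (i) can be compared by simulation. The toy distributions below (X1 and X2 independent Bernoulli, Y depending on both) are invented purely for illustration and are not from the thread.

```python
# Monte Carlo check of (i) at the point X1 = 1:
#   E(Y | X1 = 1)  vs  E[ E(Y | X1, X2) | X1 = 1 ]
# For discrete variables, conditioning is just sub-setting the sample.
import random

random.seed(0)
n = 200_000
samples = []
for _ in range(n):
    x1 = random.randint(0, 1)
    x2 = random.randint(0, 1)               # here X2 is independent of X1
    y = x1 + 2 * x2 + random.randint(0, 1)  # Y depends on both
    samples.append((x1, x2, y))

def cond_mean(data, pred):
    """Empirical E(Y | pred(X1, X2)): average Y over the matching subset."""
    sub = [y for (a, b, y) in data if pred(a, b)]
    return sum(sub) / len(sub)

# Left-hand side of (i) at X1 = 1: E(Y | X1 = 1)
lhs = cond_mean(samples, lambda a, b: a == 1)

# Right-hand side: average the inner conditional means E(Y | X1 = 1, X2 = b)
# with weights P(X2 = b | X1 = 1), all estimated from the same sample.
counts = {b: sum(1 for (a, bb, y) in samples if a == 1 and bb == b)
          for b in (0, 1)}
total = sum(counts.values())
rhs = sum(cond_mean(samples, lambda a, bb, b=b: a == 1 and bb == b)
          * counts[b] / total
          for b in (0, 1))

print(lhs, rhs)  # the two estimates agree (here exactly, up to float rounding)
```

In this discrete setting the right-hand side is literally a re-averaging of the same subset, which is why the two numbers coincide; the subtleties bpet mentions only appear for general (e.g. continuous or degenerate) conditioning.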
 
First of all, is E(Y|X1,X2) a function of X1 and X2?

Is E(Y|X1) = E[E(Y|X1,X2)|X1] a special case of the law of total expectation E(Y) = E[E(Y|X)]?
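For what it's worth, in the jointly continuous case (i) can be checked directly from the density definition quoted earlier in the thread. A sketch, evaluated at a point X1 = x1 (all integrals over the full range):

```latex
\begin{aligned}
E\!\left[\,E(Y\mid X_1,X_2)\,\middle|\,X_1=x_1\right]
  &= \int E(Y\mid X_1=x_1,\,X_2=x_2)\, f(x_2\mid x_1)\, dx_2 \\
  &= \int\!\!\int y\, f(y\mid x_1,x_2)\, f(x_2\mid x_1)\, dy\, dx_2 \\
  &= \int y \left[\int f(y,x_2\mid x_1)\, dx_2\right] dy \\
  &= \int y\, f(y\mid x_1)\, dy \;=\; E(Y\mid X_1=x_1).
\end{aligned}
```

The key step is f(y|x1,x2) f(x2|x1) = f(y,x2|x1); integrating out x2 leaves f(y|x1). Identity (ii) is the same computation with the extra conditioning variables carried along, and taking expectations over X1 in (i) recovers the scalar law E(Y) = E[E(Y|X1)].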
 