MHB Proving conditional expectation

AI Thread Summary
The discussion revolves around proving that if E(U|X) = E(U) = 0 for random variables U and X, then it follows that E(U^2|X) = E(U^2). A participant challenges this claim with a counterexample in which the conditional variance of U given X depends on X. The conversation then shifts to the context of the homoskedasticity assumption in simple linear regression, questioning how the conclusion σ² = E(u²|x) implies σ² = E(u²) is reached. The clarification provided emphasizes that this conclusion is precisely what the homoskedasticity assumption states. The thread highlights the subtleties involved in understanding conditional expectations and their implications in statistical models.
Usagi
Hi guys, assume we have two random variables U and X such that E(U|X) = E(U) = 0. I was told that this assumption implies E(U^2|X) = E(U^2), but I'm not sure how to prove it. If anyone could show me, that'd be great!
 
Usagi said:
Hi guys, assume we have two random variables U and X such that E(U|X) = E(U) = 0. I was told that this assumption implies E(U^2|X) = E(U^2), but I'm not sure how to prove it. If anyone could show me, that'd be great!

Not sure this is true. Suppose \(U|(X=x) \sim N(0,x^2)\), and \(X\) has whatever distribution we like.

Then \(E(U|X=x)=0\) and \( \displaystyle E(U)=\int \int u f_{U|X=x}(u) f_X(x)\;dudx =\int E(U|X=x) f_X(x) \; dx=0\).

Now \(E(U^2|X=x)={\text{Var}}(U|X=x)=x^2\), while \( \displaystyle E(U^2)=\int E(U^2|X=x) f_X(x) \; dx= \int x^2 f_X(x) \; dx\), so \(E(U^2|X=x)\) varies with \(x\) and cannot equal the constant \(E(U^2)\) in general.

Or have I misunderstood something?

CB
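CB's counterexample can be checked numerically. The sketch below picks \(X \sim \text{Uniform}(1,3)\) purely for concreteness (any distribution for \(X\) would do) and draws \(U \mid X = x \sim N(0, x^2)\):

```python
import numpy as np

# Numerical sketch of the counterexample: U | X=x ~ N(0, x^2),
# with X ~ Uniform(1, 3) chosen arbitrarily for illustration.
rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(1.0, 3.0, size=n)
u = rng.normal(loc=0.0, scale=x)    # sd of U given X=x is x

# E(U|X=x) = 0 for every x, so E(U) = 0:
print(np.mean(u))                   # close to 0

# But E(U^2|X=x) = x^2 depends on x, so it cannot equal the constant E(U^2):
print(np.mean(u[x < 1.5] ** 2))     # conditional second moment for small x
print(np.mean(u[x > 2.5] ** 2))     # much larger for large x

# E(U^2) = E(X^2) = (1/2) * integral of x^2 over [1,3] = 13/3:
print(np.mean(u ** 2))              # close to 13/3 ≈ 4.33
```

The two conditional averages differ by roughly a factor of five, confirming that \(E(U^2|X)\) is not constant even though \(E(U|X)\) is.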
 
Hi CB,

Actually the problem arose from the following passage regarding the homoskedasticity assumption for simple linear regression:

http://img444.imageshack.us/img444/6892/asdfsdfc.jpg
I do not understand how they came to the conclusion that \(\sigma^2 = E(u^2|x) \implies \sigma^2 = E(u^2)\)

Thanks for your help!
 
Usagi said:
Hi CB,

Actually the problem arose from the following passage regarding the homoskedasticity assumption for simple linear regression:

http://img444.imageshack.us/img444/6892/asdfsdfc.jpg
I do not understand how they came to the conclusion that \(\sigma^2 = E(u^2|x) \implies \sigma^2 = E(u^2)\)

Thanks for your help!

It is the assumed homoskedasticity (that is what it means): homoskedasticity says \(E(u^2|x)=\sigma^2\) for every value of \(x\), so by the law of iterated expectations \(E(u^2)=E\big(E(u^2|x)\big)=E(\sigma^2)=\sigma^2\). The constancy of the conditional second moment is exactly what was missing in the general case above.

CB
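For contrast with the earlier counterexample, here is a numerical sketch of the homoskedastic case, where the conditional variance of the error is the same constant \(\sigma^2\) for every \(x\) (the choices \(\sigma = 2\) and \(X \sim \text{Uniform}(0,10)\) are arbitrary):

```python
import numpy as np

# Homoskedastic case: Var(u|x) = sigma^2 regardless of x, so by the law of
# iterated expectations E(u^2) = E[E(u^2|x)] = sigma^2.
rng = np.random.default_rng(1)
sigma = 2.0
n = 1_000_000
x = rng.uniform(0.0, 10.0, size=n)
u = rng.normal(0.0, sigma, size=n)  # error distribution does not depend on x

# E(u^2|x) is (approximately) sigma^2 = 4 in every slice of x ...
print(np.mean(u[x < 5] ** 2), np.mean(u[x >= 5] ** 2))

# ... hence the unconditional E(u^2) equals sigma^2 as well:
print(np.mean(u ** 2))
```

Unlike the counterexample, every conditional slice gives the same second moment, which is why the unconditional moment inherits the value \(\sigma^2\).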
 