Calculating Conditional Expectation for IID Normal Variables

In summary: After imposing the condition, the conditional distribution for each toss assigns probability 1 to heads and probability 0 to tails. This is just a simple coin example, but it should make the point that what was once an N(0,1) variable becomes an N(x,1) variable when the condition is imposed.
  • #1
redflame34
If I have x1, x2 iid normal with N(0,1)

and I want to find E(x1*x2 | x1 + x2 = x), can I simply say x1 = x - x2 and thus

E(x1*x2 | x1 + x2 = x) = E[(x - x2)*x2] = E[(x*x2) - (x2^2)]
= x*E[x2] - E[x2^2]
= 0 - 1
= -1?
 
  • #2
No.
You need to transform the problem to one where x1 + x2 is a random variable.
Let u = (x1 + x2)/√2 and v = (x1 - x2)/√2. (I am using √2 so that u and v will be iid N(0,1).)
Then x1 = (u + v)/√2 and x2 = (u - v)/√2, so x1*x2 = (u^2 - v^2)/2.

Your original problem is now E((u^2 - v^2)/2 | u = x/√2).
Since u and v are independent, the result is x^2/4 - 1/2.
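As a quick sanity check of the result above, the substitution in post #2 can be simulated directly: fix u at x/√2, draw v as N(0,1), and average (u^2 - v^2)/2. This is a sketch with a hypothetical conditioning value x = 1.5; the seed and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 1.5                 # hypothetical value to condition on
n = 1_000_000

# Post #2's rotation: conditioning on x1 + x2 = x fixes u = x/sqrt(2),
# while v = (x1 - x2)/sqrt(2) stays N(0,1) and independent of u.
u = x / np.sqrt(2)
v = rng.standard_normal(n)

# x1*x2 = (u^2 - v^2)/2, so average that with u held fixed.
estimate = np.mean((u**2 - v**2) / 2)
exact = x**2 / 4 - 0.5

print(estimate, exact)
```

With a million draws the Monte Carlo average lands within a few thousandths of x^2/4 - 1/2, while the naive answer -1 is nowhere close.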
 
  • #3
I'm confused. Can you please explain more why the first solution (-1) is wrong?
[tex]X_1[/tex] and [tex]X_2[/tex] are 2 RVs, and we're told that [tex]X_1+X_2=y[/tex].

1) I understand that the sum of two RVs will be a RV, but here, when we are told the sum is [tex]y[/tex], it is something like a constant, right? (A realization of [tex]Y[/tex], that is [tex]Y=y[/tex], I mean.)

2) If I'm right about (1), then when we are told the sum is [tex]y[/tex], it means we can focus on only one RV (more precisely, I mean we can think we have two RVs: [tex]y-X_1[/tex] and [tex]X_1[/tex]). Now the solution, as redflame said, would be:

[tex]\int_{x_1} (y-x_1)(x_1)\, p(x_1)\, dx_1 = \ldots = -1[/tex]

Thanks in advance. (;
 
  • #4
The argument against this approach is complicated, but a simple illustration will show that it is wrong.
Let x1 be N(0,a) and x2 be N(0,b), where a and b are different. Then we have the following:
E(x1*x2 | x1+x2=x) = E((x-x2)*x2) = E(x*x2) - E(x2^2) = -b
E(x1*x2 | x1+x2=x) = E(x1*(x-x1)) = E(x1*x) - E(x1^2) = -a
The same expectation cannot equal both -a and -b, so the approach is obviously incorrect!
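The contradiction can also be checked empirically. A rough way to simulate conditioning on a continuous event is to keep only sample pairs whose sum lands in a narrow window around x. The sketch below uses hypothetical values a = 1, b = 2, x = 1; the closed-form answer it compares against comes from the standard bivariate-normal conditioning formula (given x1+x2 = x, x1 is normal with mean ax/(a+b) and variance ab/(a+b)), not from the thread itself.

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, x = 1.0, 2.0, 1.0        # hypothetical variances and conditioning value
n, eps = 2_000_000, 0.05       # sample size and conditioning window

x1 = rng.normal(0.0, np.sqrt(a), n)
x2 = rng.normal(0.0, np.sqrt(b), n)

# Approximate the condition x1 + x2 = x by keeping sums within eps of x.
mask = np.abs(x1 + x2 - x) < eps
estimate = np.mean(x1[mask] * x2[mask])

# Closed form from bivariate-normal conditioning; note it is symmetric in a, b.
exact = a * b * x**2 / (a + b)**2 - a * b / (a + b)

print(estimate, exact)
```

The simulated value agrees with the symmetric closed form (here -4/9), not with -a = -1 or -b = -2, which confirms that the substitution argument gives the wrong answer either way it is applied.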
 
  • #5
Thanks.
I'm really confused!
Can you please suggest a solution through the definition of E{x1*x2 | x1+x2=x}?
I want to correct my beliefs about these kinds of problems (working with sums of RVs and so on).
I think E{x1*x2 | x1+x2=x} = ∬ x1*x2 * p(x1, x2 | x1+x2=x) dx1 dx2

(BTW, is it true?)
Can you continue it?
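One way to carry that definition through numerically, as a sketch: under the condition x1 + x2 = x, the double integral collapses to a single integral over x1, with x2 = x - x1. The conditional law of x1 given the sum is N(x/2, 1/2) — a standard bivariate-normal fact, which also drops out of post #2's rotation (x1 = (u+v)/√2 with u fixed at x/√2). The value x = 1.5 below is a hypothetical choice.

```python
import numpy as np

x = 1.5                                   # hypothetical conditioning value
m, s2 = x / 2, 0.5                        # x1 | (x1 + x2 = x) is N(x/2, 1/2)

# Quadrature grid for x1, wide enough that the truncated tails are negligible.
t = np.linspace(m - 10.0, m + 10.0, 200_001)
dt = t[1] - t[0]
pdf = np.exp(-(t - m)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# E(x1*x2 | x1+x2=x) = integral of t*(x - t) * p(t | sum = x) dt
f = t * (x - t) * pdf
estimate = float(np.sum((f[:-1] + f[1:]) / 2) * dt)   # trapezoid rule
exact = x**2 / 4 - 0.5

print(estimate, exact)
```

The quadrature reproduces x^2/4 - 1/2 to high precision, matching the answer from post #2.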
 
  • #6
And please tell me if you know a good reference for understanding these subjects. I am trying to shore up my probability/statistics background in order to understand stochastic processes and estimation theory deeply.
 
  • #7
The basic problem with your original approach is that by writing x2 = x - x1, you were redefining x2. This new x2 is N(x,1), not N(0,1), and it is no longer independent of x1.

As for texts I haven't been keeping up (I'm retired). I hope someone else can help.
 
  • #8
I finally understand the underlying problem with your original approach. x1 and x2 are N(0,1) when there is no condition. However, their distributions change when you impose the condition x1 + x2 = x.

I can give you a very simple example to show what happens. Toss a fair coin twice, and for this analysis, let heads be called +1 and tails -1. Then each toss has mean 0 and variance 1.
Now impose the condition the sum of the tosses = 2. The resulting conditional distribution for both tosses is now +1, so the conditional means are now 1 and the conditional variances are 0.
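The coin example above is small enough to enumerate exhaustively. A minimal sketch of that enumeration:

```python
from itertools import product

# The four equally likely outcomes of two tosses, with heads = +1, tails = -1.
outcomes = list(product([1, -1], repeat=2))

# Unconditionally, each toss has mean 0 (and variance 1).
mean1 = sum(t1 for t1, t2 in outcomes) / len(outcomes)

# Impose the condition sum == 2: only the outcome (+1, +1) survives.
cond = [(t1, t2) for t1, t2 in outcomes if t1 + t2 == 2]
cond_mean = sum(t1 for t1, t2 in cond) / len(cond)
cond_var = sum((t1 - cond_mean)**2 for t1, t2 in cond) / len(cond)

print(cond, cond_mean, cond_var)
```

Conditioning shrinks the sample space to a single outcome, so the conditional mean of each toss jumps from 0 to 1 and the conditional variance drops to 0 — exactly the change in distribution the post describes.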
 

1. What is conditional expectation?

Conditional expectation is the expected value of one random variable given the known value of another. It is often denoted E(X|Y): the average of X taken over only those outcomes in which Y takes its given value. It is used to model the relationship between two variables.

2. How is conditional expectation calculated?

For discrete variables, conditional expectation is calculated by summing each possible value of the variable of interest, weighted by its conditional probability given the conditioning value: E(X|Y=y) = Σ_x x·P(X=x|Y=y). For continuous variables, the sum becomes an integral against the conditional density. It can also be estimated using regression analysis or other statistical techniques.
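As a sketch of the discrete calculation, the helper below computes E(X|Y=y) from a joint probability table. The joint distribution is hypothetical, chosen only for illustration.

```python
# Hypothetical joint pmf p(x, y) on a small discrete support.
joint = {(0, 0): 0.1, (1, 0): 0.3, (0, 1): 0.4, (1, 1): 0.2}

def cond_expectation(joint, y):
    """E(X | Y=y) = sum over x of x * P(X=x | Y=y)."""
    # Marginal P(Y=y): total probability of all cells with that y.
    py = sum(p for (xv, yv), p in joint.items() if yv == y)
    # Weight each x value by its conditional probability p(x, y) / P(Y=y).
    return sum(xv * p / py for (xv, yv), p in joint.items() if yv == y)

print(cond_expectation(joint, 0))   # (0*0.1 + 1*0.3) / 0.4 = 0.75
```

Note how the answer changes with the conditioning value: conditioning on Y=1 instead gives 0.2/0.6, illustrating that E(X|Y=y) is a function of y rather than a single number.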

3. What is the difference between conditional expectation and unconditional expectation?

Unconditional expectation is the expected value of a variable without considering any other variables, while conditional expectation takes into account the relationship between two variables. In other words, unconditional expectation is the average value of a variable, while conditional expectation is the average value of a variable given the value of another variable.

4. How is conditional expectation used in real-world applications?

Conditional expectation is commonly used in fields such as economics, finance, and machine learning to model relationships between variables. It can be used to make predictions, calculate risk, and inform decision making.

5. What are some limitations of conditional expectation?

Conditional expectation itself makes no modeling assumptions, but common methods of estimating it do. Linear regression, for example, assumes a linear relationship and may not accurately capture more complex dependence. Reliable estimates also require enough data near each conditioning value, so small sample sizes are a problem; relevant variables may be omitted; and sample estimates can be sensitive to outliers in the data.
