Conditional Expectation of Multiple Independent Random Variables

  • #1
whitejac

Homework Statement


Given X,Y,Z are 3 N(1,1) random variables,

(1)
Find E[ XY | Y + Z = 1]

Homework Equations

The Attempt at a Solution


I'm honestly completely lost in statistics... I didn't quite grasp the intuitive aspect of expectation because my professor lives on the numbers side rather than the side that relates it to anything else. So these processes are turning into foggy abstract generalities, and any light would be superb.

My book's explanation of the multi-variable case is that it's essentially the same as the joint case, which is the same as the single-variable case.

So:
For two jointly continuous random variables,
[tex] E[X \mid Y = y] = \int_{-\infty}^{\infty} x\, f_{X|Y}(x \mid y)\, dx [/tex]
*note that ##X## could easily be replaced by ##g(X)##, and that ##f_{X|Y}(x \mid y)## is defined as ##f_{XY}(x,y) / f_Y(y)##

My thoughts are to define two variables
W = XY
V = Y + Z
**note: Y + Z ~ N(2, 2), and we condition on the event Y + Z = 1
This allows us to write (1) as
E[W | V = 1]
which would be defined as listed above for the X,Y case.

At this point I'm stuck because finding the pdf seems to essentially be the struggle and I don't know how.
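One way to sanity-check whatever the algebra eventually gives is a rough Monte Carlo estimate. The sketch below assumes ##X, Y, Z## are independent N(1,1) (independence is stated in the thread title, not in the problem statement above) and approximates the zero-probability event ##Y+Z=1## by a narrow band; the band half-width eps and the sample size are arbitrary choices for illustration.

[code]
import numpy as np

# Monte Carlo sketch: estimate E[XY | Y + Z = 1] for independent N(1,1) variables.
# The event {Y + Z = 1} has probability zero, so approximate it by the band
# {|Y + Z - 1| < eps} and average XY over the samples that fall inside it.
rng = np.random.default_rng(0)
n, eps = 2_000_000, 0.01

x = rng.normal(1.0, 1.0, n)   # mean 1, standard deviation 1
y = rng.normal(1.0, 1.0, n)
z = rng.normal(1.0, 1.0, n)

keep = np.abs(y + z - 1.0) < eps   # samples near the conditioning event
print("kept samples:", keep.sum())
print("estimate of E[XY | Y+Z=1]:", np.mean(x[keep] * y[keep]))
[/code]

Shrinking eps (and increasing n) should push the estimate toward the exact conditional expectation.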
 
  • #2
Does the problem statement specify that X, Y and Z are mutually independent?
 
  • #3
whitejac said:

At this point I'm stuck because finding the pdf seems to essentially be the struggle and I don't know how.
For an event ##A## you can think of ##P(A|Y+Z=1)## as
[tex] \begin{array}{rcl}P(A|Y+Z=1) &=&\displaystyle \lim_{h \to 0} P(A| 1 < Y+Z < 1+h)\\
&=&\displaystyle \lim_{h \to 0} \frac{P(A \: \& \: Y+Z \in (1,1+h) )}{P(Y+Z \in (1,1+h))}
\end{array}[/tex]
If ##X,Y,Z## are independent, and ##A = \{ XY \in (w,w+ \Delta w) \}## we have
[tex] P( XY \in (w,w+\Delta w )\: \& \: Y+Z \in (1,1+h) ) \approx \int_{y=-\infty}^{\infty}f_{YZ}(y,1-y) P(Xy \in (w,w+\Delta w) |Y=y, Z=1-y)\, dy [/tex]
(with errors of order ##h \cdot \Delta w##, because ##Z## is not exactly ##1-y##, but differs from it by order ##< h##). Anyway, when we take ratios and go to the limit of small ##h## and ##\Delta w## this will all become exact. Now if ##X,Y,Z## are independent everything becomes a lot simpler from this point onward.
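Here is a sketch of where that limit goes under the independence assumption (the full grind is left for the reader): taking the ratio in the first display and letting ##h \to 0## and ##\Delta w \to 0## leaves the conditional density of ##Y## given ##Y+Z=1##,
[tex] f_{Y|Y+Z=1}(y) = \frac{f_Y(y)\, f_Z(1-y)}{\int_{-\infty}^{\infty} f_Y(u)\, f_Z(1-u)\, du}, \qquad E[XY \mid Y+Z=1] = E[X]\int_{-\infty}^{\infty} y\, f_{Y|Y+Z=1}(y)\, dy, [/tex]
where the last equality uses the independence of ##X## from ##(Y,Z)##.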
 
  • #4
Ray Vickson said:
[tex] \int_{y=-\infty}^{\infty}f_{YZ}(y,1-y)\, P(Xy \in (w,w+\Delta w) \mid Y = y, Z= 1-y)\, dy [/tex]

This looks quite complicated! I said they were independent in the title, so how does knowing they're independent make this simpler? Looking at it, it seems like I'm expanding the problem into a series of problems: first I have to find the pdf ##f_{YZ}##, and then multiply it by ##P(XY \in (w, w+\Delta w) \mid Y = y, Z = 1-y)##, and both of those alone sound difficult to get from just the expectation.
 
  • #5
A simplification that can be done right at the start is to notice that if X is independent of Y and Z, then we have

$$E[XY|Y+Z=1]=E[X]\cdot E[Y|Y+Z=1]=1\cdot E[Y|Y+Z=1]=E[Y|Y+Z=1]$$

The problem then reduces to finding the mean of y along the line y+z=1 in the yz number plane, when the distribution of the pair (y,z) is a bivariate normal with no correlation. If you note that the pdf of that distribution is rotationally symmetric about the point (y=1,z=1) and plot both that point and the line y+z=1 on a number plane, it should be easy enough to find the location of the probability centre of mass of the line with respect to the distribution.
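A one-line justification of that factorization, assuming ##X## is independent of the pair ##(Y,Z)##: condition on ##(Y,Z)## first, use ##E[XY \mid Y,Z] = Y\,E[X]##, and then apply the tower property,
[tex] E[XY \mid Y+Z=1] = E\big[\, E[XY \mid Y,Z] \;\big|\; Y+Z=1 \big] = E\big[\, Y\,E[X] \;\big|\; Y+Z=1 \big] = E[X]\, E[Y \mid Y+Z=1]. [/tex]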
 
  • #6
andrewkirk said:
If you note that the pdf of that distribution is rotationally symmetric about the point (y=1,z=1) and plot both that point and the line y+z=1 on a number plane, it should be easy enough to find the location of the probability centre of mass of the line with respect to the distribution.

I don't really understand the probability centre of mass... but the center of mass of the line z = 1-y would be:
[tex] \int_0^1 ds = \int_0^1 \sqrt{dz^2 + dy^2} = \int_0^1 \sqrt{1 + 1}\; dy = \sqrt{2} [/tex]

This answer doesn't make sense unless it's scaled by the distribution to be smaller... but I didn't quite comprehend your analogy of rotational symmetry and finding the mean along the number line as a method of solving this. It's probably a very intuitive way to derive my answer... but I'm not quite experienced enough to follow it.
 
  • #7
whitejac said:
I don't really understand the probability centre of mass
By this I meant to refer to the point on that line at which the integral of the pdf of y and z from that point to the 'end' of the line is the same in both directions. That is, if we parametrise the line by the y coordinate, it is the point ##(y=\alpha,z=1-\alpha)## such that ##\int_{-\infty}^\alpha p_Y(y)p_Z(1-y)\,dy=\int_\alpha^{\infty} p_Y(y)p_Z(1-y)\,dy##.

Since the joint pdf of Y and Z, which is ##p_Y(y)p_Z(z)##, is rotationally symmetric around (1,1), that centre of mass must be at the foot of the perpendicular from (1,1) to the line, so the expected value of y must be the value of y at that foot.
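Working out that foot explicitly is routine coordinate geometry: the perpendicular from (1,1) to the line ##y+z=1## runs in the direction (1,1), so points on it have the form ##(1-t, 1-t)##, and requiring ##(1-t)+(1-t)=1## gives ##t=\tfrac12##,
[tex] \text{foot of the perpendicular} = \left(\tfrac{1}{2}, \tfrac{1}{2}\right). [/tex]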
 
  • #8
whitejac said:
This looks quite complicated! I said they were independent in the title, so how does knowing they're independent make this simpler? Looking at it, it seems like I'm expanding the problem into a series of problems: first I have to find the pdf ##f_{YZ}##, and then multiply it by ##P(XY \in (w, w+\Delta w) \mid Y = y, Z = 1-y)##, and both of those alone sound difficult to get from just the expectation.

Typically, there are several ways to do such problems. One way is to just calculate everything in detail and apply the definitions; that was essentially what I was outlining in the previous post. Even if that is not necessarily the fastest or easiest way to do it, I think it is important for you to grasp all the concepts involved, as part of the learning process regarding probability.

Of course, there are easier, trickier ways to do it, essentially without calculation and almost "by inspection". These, too, use probability concepts, but different ones from the detailed grind-it-out approach of computing the conditional density functions. Post #5 shows you the start of such an approach; however, I cannot show you the rest of the solution without violating PF rules about being too helpful. Rest assured, though, that practically no computations are needed to finish the problem.
 

1. What is conditional expectation of multiple independent random variables?

Conditional expectation of multiple independent random variables is the expected value of one random variable, or of a function of several of them, given the observed values of other random variables. It represents the average value of the quantity of interest once the conditioning information is taken into account.

2. How is conditional expectation calculated?

Conditional expectation is calculated by taking the expected value of the dependent variable, given a specific value of the independent variable. This can be represented mathematically as E(Y|X=x), where Y is the dependent variable and X is the independent variable.
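For a concrete, made-up discrete example of that formula (the probabilities below are purely illustrative), E(Y|X=x) is just the probability-weighted average of y over the conditional distribution of Y given X = x:

[code]
# Hypothetical joint pmf p(x, y); the probabilities are illustrative only.
joint = {
    (0, 0): 0.10, (0, 1): 0.30,
    (1, 0): 0.25, (1, 1): 0.35,
}

def cond_expectation_y_given_x(joint, x):
    """E[Y | X = x] = sum over y of y * p(x, y) / p_X(x)."""
    p_x = sum(p for (xi, _), p in joint.items() if xi == x)
    return sum(y * p for (xi, y), p in joint.items() if xi == x) / p_x

print(cond_expectation_y_given_x(joint, 0))  # 0.30 / 0.40 = 0.75
print(cond_expectation_y_given_x(joint, 1))  # 0.35 / 0.60 ≈ 0.583
[/code]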

3. What is the relationship between conditional expectation and conditional probability?

Conditional expectation and conditional probability are closely related: the conditional probability of an event is the conditional expectation of that event's indicator variable. The difference is that a conditional expectation can be any numerical value, while a conditional probability always lies between 0 and 1.

4. Can conditional expectation be negative?

Yes, conditional expectation can be negative. If the variable of interest can take negative values, then its expected value given the conditioning information may be below zero; the conditioning simply shifts the average into negative territory.

5. How is conditional expectation used in statistical analysis?

Conditional expectation is often used in statistical analysis to model the relationship between multiple variables. It can be used to predict the value of a dependent variable based on the values of one or more independent variables, and can also help identify any potential relationships or patterns between variables.
