Some help on a tough math topic

  • Thread starter: irony of truth
  • Tags: Topic
AI Thread Summary
The discussion revolves around calculating the expected value and variance of the product of two independent random variables, X and Y, using statistical differentials. The participants clarify that the function g(x,y) should be defined as g(x,y) = xy, which simplifies the calculations for expected values and variances. The expected value E[XY] and variance Var(XY) were found to be 0.4 and 0.04, respectively, consistent with the results obtained using the PDF of Z. The conversation also highlights the importance of correctly applying the formulas for statistical differentials and understanding the relationship between the variables involved. The final consensus is that the derived results are not coincidental but stem from the proper transformation of the random variables.
irony of truth
"Two independent random variables x and y have PDF's given by f(x) = 12x^2 (1 - x) for 0 <= x <= 1 and f(y) = 2y for 0 <= y <= 1. Their product Z = XY has PDF defined as f(z) = 12z(1 - z)^2 for 0 <= y <= 1. Find the approximate values obtained by the method of statistical differentials."

I have already got the answer using the direct formulas (not the method required in the problem above) for the expected value and the variance of the density 12z(1 - z)^2... I got 0.4 and 0.04 respectively, and the same answers resulted when I solved for E[XY] and Var(XY).

But I am going crazy over the method of statistical differentials because I have not encountered it before. The formula I have seen truncates a Taylor series after three terms... and I don't quite understand how to apply it to this problem... the answers, according to the book, for the approximate mean and variance are 0.4 and 0.0377, respectively. What must my g(x,y) be in this problem? Maybe from this, I can get an idea...

How do I solve this problem?

The book says to let m_1 = E[X] and m_2 = E[Y]... the formula for statistical differentials requires known values of Var(X), Var(Y), and Cov(X,Y)...

E[g(X,Y)] ≈ g(m_1, m_2) + ½ g_xx(m_1, m_2) Var(X) + ½ g_yy(m_1, m_2) Var(Y) + g_xy(m_1, m_2) Cov(X,Y)

where g_xx is the second partial derivative of g(x,y) with respect to x,
g_yy is the second partial derivative of g(x,y) with respect to y, and
g_xy is the mixed partial derivative (with respect to y, then with respect to x),
all evaluated at (m_1, m_2).
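In case it helps anyone spot my mistake, here is a rough mechanical transcription of that truncated formula in Python with sympy (the symbols are mine, not the book's), so I can plug in candidate g's once I know what g should be:

Code:
# A rough sketch of the truncated "statistical differentials" expansion
# (a second-order Taylor approximation about the means).  Symbols are mine.
import sympy as sp

x, y = sp.symbols('x y')
m1, m2, VarX, VarY, CovXY = sp.symbols('m_1 m_2 Var_X Var_Y Cov_XY')

def approx_mean(g):
    """E[g(X,Y)] ~ g(m1,m2) + 1/2 g_xx VarX + 1/2 g_yy VarY + g_xy CovXY."""
    at_means = {x: m1, y: m2}
    return (g.subs(at_means)
            + sp.Rational(1, 2) * sp.diff(g, x, 2).subs(at_means) * VarX
            + sp.Rational(1, 2) * sp.diff(g, y, 2).subs(at_means) * VarY
            + sp.diff(g, x, y).subs(at_means) * CovXY)

# For example, with the (arbitrary) candidate g(x, y) = x*y:
print(sp.simplify(approx_mean(x * y)))   # prints Cov_XY + m_1*m_2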
 
irony of truth said:
What must my g(x,y) be in this problem? Maybe from this, I can get an idea...

How do I solve this problem?

Looks to me like you are supposed to be looking at g(x,y) = xy and using the PDFs of x and y to compute E(xy) and Var(xy). The given answers suggest that will be a good approximation for the values you obtained using the given PDF for z. Have you tried this?
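
If it helps, here is a quick numerical check of those moments from the stated densities (just my own sketch in Python with scipy):

Code:
# Rough numerical check of the moments, using the given densities
# f(x) = 12x^2(1 - x) and f(y) = 2y on [0, 1].
from scipy.integrate import quad

fx = lambda x: 12 * x**2 * (1 - x)
fy = lambda y: 2 * y

EX  = quad(lambda x: x * fx(x), 0, 1)[0]        # 0.6
EX2 = quad(lambda x: x**2 * fx(x), 0, 1)[0]     # 0.4
EY  = quad(lambda y: y * fy(y), 0, 1)[0]        # 2/3
EY2 = quad(lambda y: y**2 * fy(y), 0, 1)[0]     # 0.5

VarX, VarY = EX2 - EX**2, EY2 - EY**2           # 0.04 and 1/18
# X and Y are independent, so E[XY] = E[X]E[Y] and E[(XY)^2] = E[X^2]E[Y^2]
EXY   = EX * EY                                 # 0.4
VarXY = EX2 * EY2 - EXY**2                      # 0.04
print(EX, EY, VarX, VarY, EXY, VarXY)

So Var(X) = 0.04 and Var(Y) = 1/18, which you will want for the statistical-differentials formula anyway.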
 
I think I have not tried it... Hmm, my g(x,y) = xy only? Before, I used g(x,y) = 12x^2(1 - x) · 2y (the product of the two functions f given above), but I didn't get the correct answer...

By the PDFs of x and y, do you mean the functions f(x) and f(y)? Also, I hope I am right here... does E[XY] mean I take the expected value of xy · 12x^2(1 - x) · 2y, integrating from 0 to 1 "twice" (once in x and once in y)?
 
irony of truth said:
I think I have not tried it... Hmm, my g(x,y) = xy only? Before, I used g(x,y) = 12x^2(1 - x) · 2y (the product of the two functions f given above), but I didn't get the correct answer...

By the PDFs of x and y, do you mean the functions f(x) and f(y)? Also, I hope I am right here... does E[XY] mean I take the expected value of xy · 12x^2(1 - x) · 2y, integrating from 0 to 1 "twice" (once in x and once in y)?

I didn't just make up the idea that g = xy. It came from the problem statement that defined Z = XY with the given f(z). Using f(x) and f(y) and f(z) to find E(X) and E(Y) and E(Z), I find E(X) = 0.6, E(Y) = 2/3, and E(Z) = 0.4. It follows from Z=XY that

Cov(X,Y) = E(XY) - E(X)E(Y) = 0.4 - 0.6*(2/3) = 0, as it must be, since X and Y are independent,

and if g(x,y) is in fact xy, then g_xx = g_yy = 0, and since Cov(X,Y) = 0 the mixed term also vanishes, so your equation

E[g(X,Y)] ≈ g(m_1, m_2) + ½ g_xx(m_1, m_2) Var(X) + ½ g_yy(m_1, m_2) Var(Y) + g_xy(m_1, m_2) Cov(X,Y)

reduces to

E[g(X,Y)] ≈ g(m_1, m_2) = g(0.6, 2/3) = 0.6*(2/3) = 0.4 = E(XY)

Looks good so far. You did not give the equation for Var[g(X,Y)], but I believe it is

Var[g(X,Y)] ≈ g_x(m_1, m_2)^2 Var(X) + 2 g_x(m_1, m_2) g_y(m_1, m_2) Cov(X,Y) + g_y(m_1, m_2)^2 Var(Y)

which reduces to the given answer by my calculation.
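
For what it's worth, plugging the numbers in explicitly (a quick check of my own, in Python):

Code:
# Plugging numbers into the first-order variance formula above.
# For g(x, y) = x*y the first partials are g_x = y and g_y = x, so evaluated
# at (m1, m2) they are just m2 and m1.
m1, m2 = 0.6, 2.0 / 3.0
VarX, VarY, CovXY = 0.04, 1.0 / 18.0, 0.0   # Var(Y) = E[Y^2] - E[Y]^2 = 1/2 - 4/9

gx, gy = m2, m1
Var_approx = gx**2 * VarX + 2 * gx * gy * CovXY + gy**2 * VarY
print(Var_approx)   # 0.03777..., i.e. the book's 0.0377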

I think I should be able to verify that f(z) follows from the given f(x) and f(y), but I am not remembering exactly how to do that, and not finding it with a quick google search. Please post that if you have it handy.
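
One way to at least check it, short of the formal derivation, is the standard product-density formula for two independent variables on (0, 1); here is a quick sympy sketch of that check (the code is my own):

Code:
# Check that the claimed f(z) follows from f(x) and f(y) via the standard
# product-density formula for independent variables on (0, 1):
#   f_Z(z) = integral over x of f_X(x) * f_Y(z/x) / x,  with z <= x <= 1
#   (so that y = z/x stays inside [0, 1]).
import sympy as sp

x, z = sp.symbols('x z', positive=True)
fX = 12 * x**2 * (1 - x)
fY = lambda y: 2 * y

fZ = sp.integrate(fX * fY(z / x) / x, (x, z, 1))
print(sp.factor(fZ))   # 12*z*(z - 1)**2, which is 12 z (1 - z)^2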
 
OlderDan,

Thank you for the help... I got the idea how to solve for the variance when you showed how to find the expected value... 0.0377...

Now, something came to mind... I solved for the expected value and the variance of the density 12z(1 - z)^2... I got 0.4 and 0.04 respectively, and the same answers resulted when I solved for E[XY] and Var(XY). The integrand I used in finding E[XY] was xy · 12x^2(1 - x) · 2y dx dy... integrating each variable from 0 to 1... and I got the same answer as the one mentioned above... and the same thing with the variance.

Are these matching answers for my expected value and variance just a coincidence, or is there a real reason? I want to know why...
 
irony of truth said:
OlderDan,

Thank you for the help... I got the idea how to solve for the variance when you showed how to find the expected value... 0.0377...

Now, something came to mind... I solved for the expected value and the variance of the density 12z(1 - z)^2... I got 0.4 and 0.04 respectively, and the same answers resulted when I solved for E[XY] and Var(XY). The integrand I used in finding E[XY] was xy · 12x^2(1 - x) · 2y dx dy... integrating each variable from 0 to 1... and I got the same answer as the one mentioned above... and the same thing with the variance.

Are these matching answers for my expected value and variance just a coincidence, or is there a real reason? I want to know why...

It is not a coincidence at all. You get those results because f(z) is the appropriate density for the transformation from x, y coordinates to z, w coordinates. The problem does not tell you what the auxiliary variable w should be, or how the integral over w yields the result. I was hoping you would come up with it, but I think I've finally found it.

$$f(x) = 12x^2(1 - x)$$

$$f(y) = 2y$$

$$z = xy, \qquad w = \frac{1}{y}$$

$$x = zw, \qquad y = \frac{1}{w}$$

$$\frac{\partial x}{\partial z} = w, \qquad \frac{\partial x}{\partial w} = z, \qquad \frac{\partial y}{\partial z} = 0, \qquad \frac{\partial y}{\partial w} = -\frac{1}{w^2}$$

The Jacobian is

$$J = -\frac{1}{w}$$

The tricky part is getting the limits of integration. I think this is right:

$$\int_0^1 \int_0^1 12x^2(1 - x)\,2y\,dy\,dx = 24\int_0^1 \int_1^{1/z} z^2 w^2 \frac{1}{w}(1 - zw)\,\frac{dw}{w}\,dz = 24\int_0^1 z^2 \int_1^{1/z} (1 - zw)\,dw\,dz$$

$$\int_0^1 \int_0^1 12x^2(1 - x)\,2y\,dy\,dx = 24\int_0^1 z^2\left(\frac{1}{2z} - 1 + \frac{z}{2}\right)dz = 12\int_0^1 z\left(1 - 2z + z^2\right)dz$$

$$\int_0^1 \int_0^1 12x^2(1 - x)\,2y\,dy\,dx = 12\int_0^1 z(1 - z)^2\,dz$$
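
As a final brute-force sanity check (my own, in Python): f(x) = 12x^2(1 - x) is a Beta(3, 2) density and f(y) = 2y is a Beta(2, 1) density, so one can simply simulate Z = XY and compare against the claimed f(z):

Code:
# Monte Carlo sanity check that Z = XY has density 12 z (1 - z)^2,
# and that its mean and variance are 0.4 and 0.04.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.beta(3, 2, n)   # density 12 x^2 (1 - x) on [0, 1]
y = rng.beta(2, 1, n)   # density 2 y on [0, 1]
z = x * y

print(z.mean(), z.var())   # should come out near 0.4 and 0.04

# Compare an empirical histogram with the claimed density at a few points.
hist, edges = np.histogram(z, bins=50, range=(0, 1), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
zs = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
print(np.c_[zs, 12 * zs * (1 - zs)**2, np.interp(zs, centers, hist)])

The simulated mean and variance should come out near 0.4 and 0.04, and the histogram tracks 12z(1 - z)^2, consistent with everything above.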
 