Proving Variance of a Random Variable with Moment-Generating Functions

  • Thread starter: dmatador
  • Tags: Proof, Variance
Summary
The discussion focuses on proving that the variance of a transformed random variable W, defined as W = aY + b, satisfies V(W) = a^2 V(Y), using moment-generating functions (mgfs). The mean of W is E(W) = aE(Y) + b, and the second derivative of the mgf, evaluated at t = 0, supplies the second moment needed for the variance. The final result shows that the variance of W reduces to a^2 V(Y), and the thread illustrates how to apply the mgf correctly in such variance calculations.
dmatador
Suppose that Y is a random variable with moment-generating function m(t), and let W = aY + b, which has moment-generating function m(at) * e^(tb). Prove that V(W) = a^2 V(Y). I have done an absurd amount of work on this problem, and I know the actual solution shouldn't take a page and a half of work. I tried finding the variances and the expected values separately, but that just gave me a big mess of equations, and I need some advice.
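For reference, the given form of the mgf of W follows in one line from the definition of the mgf:

$$m_W(t) = E\left[e^{tW}\right] = E\left[e^{t(aY+b)}\right] = e^{tb}\, E\left[e^{(at)Y}\right] = e^{tb}\, m(at).$$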
 
E(W) = aE(Y) + b
E(W^2) = a^2 E(Y^2) + 2abE(Y) + b^2
V(W) = E(W^2) - (E(W))^2 = a^2 (E(Y^2) - E(Y)^2) = a^2 V(Y)
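As a quick numerical sanity check of this identity (a sketch not from the original post, using Python with NumPy; the choice of Y ~ Exponential(1) and the constants a, b are arbitrary):

```python
# Numerical check of V(aY + b) = a^2 V(Y) for an arbitrary example distribution.
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=1.0, size=1_000_000)   # Y ~ Exponential(1), so V(Y) = 1

a, b = 3.0, -2.0
w = a * y + b

print(np.var(w))          # ~9.0
print(a**2 * np.var(y))   # matches to within sampling error
```

Any distribution with finite variance works here; the exponential is just a convenient choice.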
 
Thanks a lot man. I kept trying to use the moment-generating function for W. This is a big help.
 
Are you sure you were not supposed to use the mgf?

$$\begin{align*}
m_W(t) &= e^{tb} m_Y(at) \\
m'_W(t) &= be^{tb} m_Y(at) + ae^{tb} m'_Y(at) \\
\mu_W &= m'_W(0) = b + a\mu_Y
\end{align*}$$

so the mean of W is ##a\mu_Y + b##.

Remember that the second derivative of the mgf, evaluated at t = 0, is ##\sigma^2 + \mu^2##.

$$\begin{align*}
m'_W(t) &= \left(b\, m_Y(at) + a\, m'_Y(at)\right) e^{tb} \\
m''_W(t) &= b\left(b\, m_Y(at) + a\, m'_Y(at)\right) e^{tb} + \left(ab\, m'_Y(at) + a^2 m''_Y(at)\right) e^{tb} \\
m''_W(0) &= b(b + a\mu_Y) + \left(ab\mu_Y + a^2 (\sigma^2_Y + \mu^2_Y)\right) \\
&= a^2 \sigma_Y^2 + a^2\mu_Y^2 + 2ab\mu_Y + b^2 \\
&= a^2 \sigma_Y^2 + (a\mu_Y + b)^2
\end{align*}$$

The final line in the second bit is ##E[W^2]##, so the variance of W is

$$a^2 \sigma^2_Y + (a\mu_Y+b)^2 - (a\mu_Y + b)^2 = a^2 \sigma^2_Y.$$
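As a supplementary check (not part of the original post), the same algebra can be verified symbolically with SymPy. The standard normal mgf ##m_Y(t) = e^{t^2/2}## is used here purely as an assumed concrete example, so ##\mu_Y = 0## and ##\sigma_Y^2 = 1##:

```python
# Symbolic check of the mgf derivation, with Y ~ N(0, 1) as the example.
import sympy as sp

t, a, b = sp.symbols('t a b', real=True)

m_Y = sp.exp(t**2 / 2)                       # mgf of Y ~ N(0, 1)
m_W = sp.exp(b * t) * m_Y.subs(t, a * t)     # mgf of W = aY + b

E_W  = sp.diff(m_W, t).subs(t, 0)            # first moment  E[W]  = m'_W(0)
E_W2 = sp.diff(m_W, t, 2).subs(t, 0)         # second moment E[W^2] = m''_W(0)
V_W  = sp.simplify(E_W2 - E_W**2)            # V(W) = E[W^2] - E[W]^2

print(V_W)   # prints a**2, i.e. a^2 * sigma_Y^2 with sigma_Y^2 = 1
```

The printed result is a**2, which agrees with ##a^2\sigma_Y^2## for this choice of Y.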
 
The standard _A " operator" maps a Null Hypothesis Ho into a decision set { Do not reject:=1 and reject :=0}. In this sense ( HA)_A , makes no sense. Since H0, HA aren't exhaustive, can we find an alternative operator, _A' , so that ( H_A)_A' makes sense? Isn't Pearson Neyman related to this? Hope I'm making sense. Edit: I was motivated by a superficial similarity of the idea with double transposition of matrices M, with ## (M^{T})^{T}=M##, and just wanted to see if it made sense to talk...

Similar threads

  • · Replies 1 ·
Replies
1
Views
1K
  • · Replies 1 ·
Replies
1
Views
1K
  • · Replies 7 ·
Replies
7
Views
2K
  • · Replies 2 ·
Replies
2
Views
2K
  • · Replies 1 ·
Replies
1
Views
2K
  • · Replies 2 ·
Replies
2
Views
2K
  • · Replies 5 ·
Replies
5
Views
3K
  • · Replies 2 ·
Replies
2
Views
1K
  • · Replies 16 ·
Replies
16
Views
3K
  • · Replies 6 ·
Replies
6
Views
4K