Bivariate normal distribution from normal linear combination

SUMMARY

The discussion centers on deriving the bivariate normal distribution from linear combinations of independent normal random variables (rv's). Participants confirm that the moment-generating function (mgf) of a sum, $$E(e^{t(X_1+X_2+\ldots+X_n)})$$, factors into a product of normal mgfs and therefore corresponds to a normal distribution with mean $$\mu_1+\mu_2+\ldots+\mu_n$$ and variance $$\sigma_1^2+\sigma_2^2+\ldots+\sigma_n^2$$. The conversation also touches on using matrix theory and moment-generating functions to derive the means and variance-covariance matrix of linear combinations of these variables, emphasizing the need to find specific coefficients that produce a prescribed mean vector and covariance matrix.

PREREQUISITES
  • Understanding of moment-generating functions (mgf) in probability theory
  • Knowledge of normal distribution properties and definitions
  • Familiarity with variance-covariance matrices
  • Basic skills in algebraic manipulation and matrix theory
NEXT STEPS
  • Study the properties of moment-generating functions in depth
  • Explore the derivation of variance-covariance matrices for linear combinations of random variables
  • Learn about the application of matrix theory in probability and statistics
  • Investigate the use of computer algebra systems like Maple for solving complex algebraic problems
USEFUL FOR

Statisticians, data scientists, and researchers in fields requiring advanced statistical analysis, particularly those working with normal distributions and linear combinations of random variables.

fisher garry

I can't prove this proposition. I have, however, managed to prove that the sum of independent normal rv's is also normal by looking at its mgf:

$$E(e^{t(X_1+X_2+\cdots+X_n)})=E(e^{tX_1})E(e^{tX_2})\cdots E(e^{tX_n})$$
The mgf of a normal distribution is $$e^{\mu t}e^{\frac{t^2 \sigma^2}{2}}$$ so
$$E(e^{t(X_1+X_2+\cdots+X_n)})=e^{\mu_1 t}e^{\frac{t^2 \sigma_1^2}{2}}e^{\mu_2 t}e^{\frac{t^2 \sigma_2^2}{2}}\cdots e^{\mu_n t}e^{\frac{t^2 \sigma_n^2}{2}}=e^{(\mu_1+\mu_2+\cdots+\mu_n) t}e^{\frac{t^2 (\sigma_1^2+\sigma_2^2+\cdots+\sigma_n^2)}{2}}$$

which is the mgf of a normal distribution with mean $$\mu_1+\mu_2+\cdots+\mu_n$$ and variance $$\sigma_1^2+\sigma_2^2+\cdots+\sigma_n^2$$

I know about the theory in section 5.4. Some of it is presented here:
$$g(y_1,y_2)=f(x_1,x_2)\left|\frac{\partial(x_1,x_2)}{\partial(y_1,y_2)}\right|$$
Can anyone show how the proof they refer to, via section 5.4 and matrix theory, goes?
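As a quick numerical sanity check of the mgf argument above (not part of the thread; the particular means and standard deviations are arbitrary example values), one can simulate a sum of independent normals and compare its sample moments against $\mu_1+\cdots+\mu_n$ and $\sigma_1^2+\cdots+\sigma_n^2$:

```python
import numpy as np

# Sanity check of the mgf result: a sum of independent normals should be
# normal with mean sum(mu_i) and variance sum(sigma_i^2).
# The particular mu_i and sigma_i below are made-up example values.
rng = np.random.default_rng(0)
mus = np.array([1.0, -2.0, 0.5])      # mu_1, mu_2, mu_3
sigmas = np.array([1.0, 2.0, 0.5])    # sigma_1, sigma_2, sigma_3

n_samples = 200_000
# Each row is one draw of (X_1, X_2, X_3); summing along the row gives X_1+X_2+X_3.
samples = rng.normal(mus, sigmas, size=(n_samples, 3)).sum(axis=1)

print(samples.mean())  # close to mus.sum() = -0.5
print(samples.var())   # close to (sigmas**2).sum() = 5.25
```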
 


I think you are looking at the wrong problem. The description implies that
$$
\begin{array}{rcl}
U&=& a_1 X_1 + a_2 X_2 + \cdots + a_n X_n \\
V&=& b_1 X_1 + b_2 X_2 + \cdots + b_n X_n
\end{array}
$$
Here the ##a_i## and ##b_i## are constants.

You can work out ##EU, EV, \text{Var}(U), \text{Var}(V)## and ##\text{Cov}(U,V)## in terms of the ##a_i, b_i## and the original ##\mu, \sigma## of the ##X_i##. Thus, you have a formula for the means and variance-covariance matrix in terms of ##a_i , b_i, i=1,2,\ldots, n##.

I think that what the question is asking for is the converse: given ##\mu, \sigma##, the means and the variance-covariance matrix of ##U,V##, it wants you to either find appropriate ##a_i, b_i## that will work (that is, give the right ##U,V##), or at least to show that such ##a_i, b_i## exist, even if not easy to find. Using moment-generating functions (as you did above) should be very helpful for this purpose.
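For iid $X_i$ with common mean $\mu$ and variance $\sigma^2$, those moments reduce to simple sums over the coefficients. A small numerical illustration (the parameters and coefficients below are made-up example values, not from the thread) comparing the formulas against a Monte Carlo estimate:

```python
import numpy as np

# For iid X_i ~ N(mu, sigma^2) and U = sum a_i X_i, V = sum b_i X_i:
#   E[U] = mu * sum(a_i),          Var(U) = sigma^2 * sum(a_i^2),
#   Cov(U, V) = sigma^2 * sum(a_i * b_i)  (cross terms vanish by independence).
mu, sigma = 1.0, 2.0                  # example parameters for the X_i
a = np.array([1.0, -1.0, 0.5])        # example coefficients a_i
b = np.array([2.0, 0.0, 1.0])         # example coefficients b_i

EU, EV = mu * a.sum(), mu * b.sum()
VarU = sigma**2 * (a**2).sum()
VarV = sigma**2 * (b**2).sum()
CovUV = sigma**2 * (a * b).sum()

# Monte Carlo cross-check of the closed-form moments
rng = np.random.default_rng(1)
X = rng.normal(mu, sigma, size=(500_000, 3))
U, V = X @ a, X @ b
print(EU, U.mean())                   # should agree to roughly 2 decimals
print(CovUV, np.cov(U, V)[0, 1])
```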

I won't give a lot of details, but will expand a bit on my previous post.

Presumably, if you are given the means ##EU, EV## and the variance-covariance matrix of the pair ##(U,V)##, you are allowed to specify just how to make up ##U,V## in terms of some iid random variables ##X_1, X_2, \ldots, X_n##. That is, we are allowed to choose an ##n## in an attempt to prove the result.

When we are given the above information about ##U## and ##V##, we are given five items of data: two means, two variances, and one covariance. So, we will need at least five "coefficients" altogether.

I tried ##U = a_1 X_1 + a_2 X_2## and ##V = b_1 X_1 + b_2 X_2 + b_3 X_3##. The means, variances and covariance of ##(U,V)## can be expressed algebraically in terms of the ##a_i, b_j## and the underlying ##\mu, \sigma## of the ##X_k.## Thus, we obtain five equations in the five variables ##a_1, a_2, b_1, b_2,b_3##.

I managed to get a solution, thus proving the result asked for; however, it would possibly take many hours (perhaps days) of algebraic work to do it manually, so I saved myself a lot of grief by using the computer algebra package Maple to do the heavy lifting.

I don't see how matrix properties would be useful here, but maybe the person setting the problem had another method in mind.
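The five-equation system described above can also be handed to a free numerical solver instead of Maple. A sketch using SciPy's `fsolve`, assuming $\mu = \sigma = 1$ for the underlying $X_i$; the target means, variances, and covariance are made-up example values:

```python
import numpy as np
from scipy.optimize import fsolve

# Assume X_1, X_2, X_3 iid N(mu, sigma^2) with mu = sigma = 1, and
#   U = a1 X_1 + a2 X_2,   V = b1 X_1 + b2 X_2 + b3 X_3.
# Example targets: E[U]=1, E[V]=2, Var(U)=2, Var(V)=3, Cov(U,V)=1.
def equations(p):
    a1, a2, b1, b2, b3 = p
    return [
        a1 + a2 - 1,                    # E[U]   = mu*(a1+a2)
        b1 + b2 + b3 - 2,               # E[V]   = mu*(b1+b2+b3)
        a1**2 + a2**2 - 2,              # Var(U) = sigma^2*(a1^2+a2^2)
        b1**2 + b2**2 + b3**2 - 3,      # Var(V) = sigma^2*(b1^2+b2^2+b3^2)
        a1*b1 + a2*b2 - 1,              # Cov(U,V) = sigma^2*(a1*b1 + a2*b2)
    ]

# Initial guess chosen near a root of the system
sol = fsolve(equations, [1.4, -0.4, 1.1, 1.3, -0.4])
print(sol)                              # one valid coefficient vector
print(np.max(np.abs(equations(sol))))   # residuals, essentially zero
```

Any root of this system gives coefficients realizing the requested means and variance-covariance matrix, which is exactly the existence claim discussed above.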
