Bivariate normal distribution - converse question

SUMMARY

The discussion centers on the bivariate normal distribution and the justification for deriving the joint probability density function from known marginal distributions and their correlation coefficient, ρ. The user questions the validity of assuming the converse relationship, especially in cases where ρ equals zero, referencing both a textbook and a Wikipedia article that highlight the nuances of independence versus correlation. The conversation emphasizes the importance of understanding the conditions under which marginal normality implies joint normality.

PREREQUISITES
  • Understanding of bivariate normal distribution
  • Knowledge of marginal distributions in probability theory
  • Familiarity with correlation coefficients and their implications
  • Basic concepts of independence in statistics
NEXT STEPS
  • Study the properties of bivariate normal distributions in detail
  • Explore the implications of correlation coefficients on independence
  • Review the proof of independence for sample mean and sample variance
  • Examine counterexamples where marginal normality does not imply independence
USEFUL FOR

Statisticians, data scientists, and researchers interested in probability theory, particularly those working with bivariate distributions and their applications in statistical modeling.

bobby2k
Bivariate normal distribution - "converse question"

Hello, I have a theoretical question on how to use the bivariate normal distribution. First I will define what I need, then I will ask my question.

Formulas from: http://mathworld.wolfram.com/BivariateNormalDistribution.html

We define the bivariate normal distribution, (1):

$$P(x_1,x_2)=\frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-\rho^2}}\exp\!\left[-\frac{z}{2(1-\rho^2)}\right],\qquad z=\frac{(x_1-\mu_1)^2}{\sigma_1^2}-\frac{2\rho(x_1-\mu_1)(x_2-\mu_2)}{\sigma_1\sigma_2}+\frac{(x_2-\mu_2)^2}{\sigma_2^2}$$

From this we get the marginal distributions, (2) and (3):

$$P(x_1)=\frac{1}{\sigma_1\sqrt{2\pi}}\,e^{-(x_1-\mu_1)^2/(2\sigma_1^2)},\qquad P(x_2)=\frac{1}{\sigma_2\sqrt{2\pi}}\,e^{-(x_2-\mu_2)^2/(2\sigma_2^2)}$$


Now comes my question:

Let's say that we have two random variables x_1 and x_2, and we know that each marginal distribution satisfies (2) and (3); that is, we know they are normal, and we know their means and variances. Suppose we also know their correlation coefficient \rho. How can we now say that equation (1) is the joint probability density function? We defined the joint density one way and derived the marginals from it; what justifies going the other way, from the two marginals and \rho back to (1)? The converse of a statement is not always true, so why can we assume the converse here?

They used this technique in my book when proving that \bar{X} and S^{2} are independent.
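The forward direction the question starts from (joint density (1) integrates out to the marginals (2) and (3)) can be checked numerically. The sketch below, which assumes scipy is available and uses arbitrary illustrative parameter values, builds the bivariate normal from \mu_1, \mu_2, \sigma_1, \sigma_2, \rho and integrates out x_2 on a grid:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Illustrative parameters (arbitrary choices, not from the thread)
mu1, mu2 = 1.0, -0.5
s1, s2 = 2.0, 1.5
rho = 0.6

# Equation (1) is the bivariate normal density with this covariance matrix
cov = np.array([[s1**2,       rho*s1*s2],
                [rho*s1*s2,   s2**2]])
joint = multivariate_normal(mean=[mu1, mu2], cov=cov)

# Integrate the joint density over x2 at a fixed x1; the result should
# match the univariate normal marginal (2) at that point.
x1 = 0.7
x2_grid = np.linspace(mu2 - 10*s2, mu2 + 10*s2, 20001)
dx = x2_grid[1] - x2_grid[0]
pts = np.column_stack([np.full_like(x2_grid, x1), x2_grid])
marginal_numeric = joint.pdf(pts).sum() * dx

print(marginal_numeric, norm(mu1, s1).pdf(x1))  # the two values should agree closely
```

This only confirms joint-implies-marginal; it says nothing about the converse, which is exactly the point of the question.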
 

bobby2k said:
How can we now say that equation (1) is the joint probability density function.

A Wikipedia article claims we can't say that, at least in the case \rho = 0: http://en.wikipedia.org/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent

However, it is possible for two random variables X and Y to be so distributed jointly that each one alone is marginally normally distributed, and they are uncorrelated, but they are not independent;

Perhaps you should give the exact statement of what your book proves.
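The counterexample that Wikipedia article describes can be simulated. A minimal sketch (assuming numpy/scipy): take X ~ N(0,1) and set Y = X when |X| < c, Y = -X otherwise. By symmetry Y is exactly N(0,1), and c can be chosen so that Cov(X, Y) = 0, yet (X, Y) is not bivariate normal:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

# Cov(X, Y) = E[X^2; |X|<c] - E[X^2; |X|>=c], and E[X^2] = 1, so we need
# E[X^2; |X|<c] = 1/2.  Using the antiderivative ∫ x^2 φ(x) dx = Φ(x) - x φ(x):
def truncated_second_moment(c):
    return 2 * norm.cdf(c) - 1 - 2 * c * norm.pdf(c)

c = brentq(lambda t: truncated_second_moment(t) - 0.5, 0.1, 3.0)  # c ≈ 1.538

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = np.where(np.abs(x) < c, x, -x)

# Both marginals are standard normal and the correlation is ≈ 0, yet
# (x, y) cannot be jointly normal: x + y equals 2x when |x| < c and is
# exactly 0 otherwise, so it has a point mass at zero instead of a
# normal distribution.
print(abs(np.corrcoef(x, y)[0, 1]))  # near zero
print(np.mean(x + y == 0))           # positive: the atom at 0
```

So normal marginals plus a known \rho do not pin down the joint density; the converse needs joint normality as an extra hypothesis.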
 
Here I have scanned the proof.
http://i.imgur.com/naRsk9s.jpg

The proof starts inside the green line, and the quote I am interested in starts inside the red line. I have also included some of the preceding material so you can see where everything comes from.
 
