Hi All,
I think I have some idea of how to interpret covariance and correlation. But some doubts remain:
1) What joint distribution do we assume? A standard example of uncorrelated variables is points on a circle: the variables ##X## and ##Y = \sqrt{1 - X^2}## are uncorrelated, i.e., they have ##Cov(X,Y) = 0##.
##Cov(X,Y) = E(XY) - \mu_X \mu_Y##. Now, each of these terms presupposes a distribution: a marginal for each of ##X## and ##Y##, and a joint for ##(X,Y)##. But I have never seen either of them specified, even after searching.
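For concreteness, here is a quick numerical check I put together. Note that the choice of ##X## uniform on ##[-1,1]## is my own assumption; the usual statement of the example never specifies a marginal, which is exactly what I'm asking about:
```python
import numpy as np

rng = np.random.default_rng(0)

# Assumption (mine): X uniform on [-1, 1]. The usual statement of the
# example never specifies this, which is exactly what I'm asking about.
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = np.sqrt(1.0 - x**2)  # points on the upper unit semicircle

# Cov(X, Y) = E[XY] - E[X]E[Y], estimated from the sample
cov = np.mean(x * y) - np.mean(x) * np.mean(y)
print(cov)  # ~0: x*y is odd in x, and E[X] = 0 by symmetry
```
The symmetry makes it clear why this particular check comes out to zero, but the choice of marginal still seems arbitrary.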
2) Is there a way of "going backwards" and deciding which joints/marginals produce uncorrelated variables, i.e., can we find all ##f_{XY}(x,y)## such that
## \sum_{i,j} x_i y_j \, f_{XY}(x_i, y_j) - \left( \sum_i x_i f_X(x_i) \right) \left( \sum_j y_j f_Y(y_j) \right) = 0 ##
or, in the continuous case:
## \iint xy \, f_{XY}(x,y) \, dx \, dy - \int x f_X(x) \, dx \int y f_Y(y) \, dy = 0 ## ?
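The only examples I can produce are hand-built symmetric ones. Here is a sketch of one discrete joint (my own construction) that satisfies the sum identity above while the variables are completely dependent:
```python
import numpy as np

# My own hand-built discrete joint: mass 1/3 on each of (-1, 1),
# (0, 0), (1, 1), i.e. X uniform on {-1, 0, 1} and Y = X^2,
# so the variables are completely dependent.
xy = np.array([(-1.0, 1.0), (0.0, 0.0), (1.0, 1.0)])
p = np.full(3, 1 / 3)

x, y = xy[:, 0], xy[:, 1]
E_xy = np.sum(x * y * p)                 # E[XY] = (-1 + 0 + 1)/3 = 0
E_x, E_y = np.sum(x * p), np.sum(y * p)  # E[X] = 0, E[Y] = 2/3
print(E_xy - E_x * E_y)                  # 0.0: uncorrelated, yet Y = X^2
```
But this is just one symmetric example; I don't see how to characterize all of them.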
3) In what sense is correlation a measure of linear dependence? I don't see how this follows from the formulas.
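Numerically the claim is easy to illustrate (again assuming a uniform marginal for ##X##, which is my assumption): an exact linear relation gives correlation ##1##, while the circle example gives ##\approx 0##:
```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100_000)

# Exact linear relation: correlation is +1 (and -1 for negative slope)
print(np.corrcoef(x, 2.0 * x + 1.0)[0, 1])        # ~1.0

# The circle example: perfect nonlinear dependence, correlation ~0
print(np.corrcoef(x, np.sqrt(1.0 - x**2))[0, 1])  # ~0.0
```
So it clearly distinguishes the two cases, but I would like to see why from the definition.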
Thanks.