Product of correlated random variables

Summary
The discussion centers on the relationship between correlated random variables and their joint probabilities. While the individual probabilities P(X1), P(X2), ..., P(Xn) may be computable, the joint probability P(X1=x1, X2=x2, ..., Xn=xn) cannot be determined from the correlation coefficients and marginal distributions alone. The correlation matrix provides n^2 values and the marginals provide mn values, where m is the number of discrete outcomes per variable, while the full joint distribution requires m^n values. The correlation matrix and individual distributions therefore do not capture the full complexity of the joint distribution, a distinction that matters when analyzing correlated random variables.
benjaminmar8
Hi, all,

Let X1, X2, ..., Xn be correlated random events (or variables). Say P(X1), P(X2), ..., P(Xn) can be computed, and in addition the covariance and correlation between all the Xi can be computed. My question is: what is P(X1) * P(X2) * ... * P(Xn)?
 
Well, if P(X1=x1), ..., P(Xn=xn) can each be computed, then obviously P(X1=x1) * ... * P(Xn=xn) can also be computed: it is just the product of those numbers. Perhaps you meant to ask, "what is the joint probability P(X1=x1, X2=x2, ..., Xn=xn)?" In that case, the correlation coefficients don't give you enough information to compute it.

I could give a specific example of a situation where P(X1=x1, X2=x2) can't be computed from your given information, but a counting argument is more intuitive. The correlation coefficients give you n^2 numbers. If you also know every marginal probability P(X_i=x_j), that adds mn numbers, where m is the number of discrete values each variable may take. But knowing the full joint distribution, i.e. P(X1=x_j1, ..., Xn=x_jn) for every sequence of indices j1, ..., jn, means knowing m^n numbers, potentially far more than n^2 + mn. The full joint distribution usually has many more "degrees of freedom" than the correlation matrix plus the individual distributions, so specifying the latter can't tell you everything about the former. That's not exactly a proof, but it should give an intuitive idea.
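To make this concrete, here is a minimal sketch in Python (the helper name `marginals_and_corr` is just for illustration, and numpy is assumed) comparing two joint distributions of three binary variables: three independent fair coins versus two independent fair coins with X3 = X1 XOR X2. Both have identical marginals and identical correlation matrices, yet they assign different joint probabilities.

```python
import itertools
import numpy as np

# Joint distribution A: three independent fair coins.
# Every outcome (x1, x2, x3) in {0,1}^3 has probability 1/8.
joint_a = {bits: 1 / 8 for bits in itertools.product([0, 1], repeat=3)}

# Joint distribution B: X1, X2 independent fair coins, X3 = X1 XOR X2.
# Only the 4 outcomes consistent with the XOR constraint get mass 1/4.
joint_b = {bits: (1 / 4 if bits[2] == bits[0] ^ bits[1] else 0.0)
           for bits in itertools.product([0, 1], repeat=3)}

def marginals_and_corr(joint):
    """Return the marginal means E[Xi] and the 3x3 correlation matrix."""
    outcomes = np.array(list(joint.keys()), dtype=float)  # shape (8, 3)
    probs = np.array(list(joint.values()))                # shape (8,)
    mean = probs @ outcomes                               # E[Xi]
    centered = outcomes - mean
    cov = centered.T @ (centered * probs[:, None])        # E[(Xi-mi)(Xj-mj)]
    std = np.sqrt(np.diag(cov))
    return mean, cov / np.outer(std, std)

for name, joint in [("independent", joint_a), ("XOR", joint_b)]:
    mean, corr = marginals_and_corr(joint)
    print(name, "marginals:", mean)
    print(np.round(corr, 12))
    print("P(X1=0, X2=0, X3=0) =", joint[(0, 0, 0)])
```

Both cases print marginals of 0.5 and the identity correlation matrix (the XOR construction is pairwise independent), but P(X1=0, X2=0, X3=0) is 1/8 in the independent case and 1/4 in the XOR case, so the marginals and correlation matrix cannot pin down the joint distribution.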
 
