Covariance matrix for transformed variables

Summary
The discussion centers on calculating the covariance matrix for transformed variables derived from a set of correlated experimental measurements. The user seeks confirmation on whether the off-diagonal entries of the covariance matrix can be obtained by summing the corresponding blocks of the original covariance matrix, particularly when dealing with weighted sums of the original variables. They express challenges in using rotation matrices for transformations and emphasize that principal component analysis, while informative, does not directly solve their problem. The conversation also touches on the mathematical properties of covariance, suggesting that understanding these properties could aid in deriving the necessary transformations. Overall, the need for a clear method to transition from the original covariance matrix to that of the transformed variables is highlighted.
This sounds like a common application, but I didn't find a discussion of it.

Simple case:
I have 30 experimental values, and I have the full covariance matrix for the measurements (they are correlated). I am now interested in the sum of the first 5 measured values, the sum of the following 5 measured values, and so on. In total I want 6 values and their covariance matrix. The diagonal entries of the covariance matrix are easy - just sum the corresponding 5x5 blocks along the diagonal of the original covariance matrix. Do I get the other entries also as sum of the corresponding 5x5 blocks? I would expect so but a confirmation would be nice.

General case:
More generally, if my transformed variables are weighted sums of the original variables (the weights are non-negative), how do I get the off-diagonal elements of the covariance matrix? As long as two transformed variables do not share a common measured value, I can rescale everything in the covariance matrix to get back to the previous case. But what if they do? I was playing around with rotation matrices (going to a basis of the transformed variables plus some dummy variables), but somehow it didn't work, and constructing 30x30 rotation matrices is ugly.
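If it helps, here is a quick numerical check of the simple case (a minimal numpy sketch; the random 30x30 covariance matrix is just a stand-in for the measured one): draw many correlated samples, form the six sums of five consecutive values, and compare their sample covariance with the block sums of the original matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the measured 30x30 covariance matrix (symmetric, positive semi-definite).
M = rng.normal(size=(30, 30))
C = M @ M.T

# Draw many correlated samples, then form the six sums of five consecutive values.
X = rng.multivariate_normal(np.zeros(30), C, size=200_000)   # shape (N, 30)
Y = X.reshape(-1, 6, 5).sum(axis=2)                          # shape (N, 6)

# Candidate answer: sum the corresponding 5x5 blocks of C (diagonal and off-diagonal).
D_blocks = C.reshape(6, 5, 6, 5).sum(axis=(1, 3))            # shape (6, 6)

# Compare with the sample covariance of the transformed variables.
D_sample = np.cov(Y, rowvar=False)
print(np.max(np.abs(D_sample - D_blocks)) / np.abs(D_blocks).max())
# relative difference well below a percent, i.e. consistent up to Monte Carlo noise
```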
 
May I ask whether you are trying to create a risk-minimal DAX portfolio made out of five paper subportfolios? If so, there is vast literature on the issue. E.g. I've found a dissertation, Risk Proportion and Eigen-Risk-Portfolio. I haven't read it and simply relied on the source, but it looks helpful (if I understood you correctly and the main problem here is the diagonalization).

Edit: And https://en.wikipedia.org/wiki/Principal_component_analysis provides a lot of information and links, including software tools.
 
I know about principal component analysis but that is not what I want to do. I could diagonalize the covariance matrix that way, but then I still have to produce a new covariance matrix for the target variables, which is yet another transformation that is very similar to the original problem, and I don't know how stable the principal component analysis would be.

The application is in physics.
 
I had a look at another special case interesting to me. Let ##X_i## be the measured variables and ##Y_i## the transformed variables. The original covariance matrix is ##C##, the new one is ##D##.

Let ##Y_1 = X_1+X_2## and ##Y_2=X_2##. Then ##D_{11} = C_{11}+ C_{12}+ C_{21}+ C_{22}##, ##D_{22}=C_{22}## and ##D_{12}=D_{21}=C_{12}+C_{22}##.
I don't find a transformation that would produce this result.
 
Perhaps I don't understand the question, but if you have the full covariance matrix, can't you find everything you need from properties like ##COV(X_1+X_2,X_3) = COV(X_1,X_3) + COV(X_2,X_3)## and ##COV(\alpha X_1,X_3) = \alpha COV(X_1,X_3)## ?

Or are we asking how the consequences of those properties are implemented with block matrix manipulations?
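In code, those two properties just say that covariance is bilinear in its arguments. A tiny numpy check of the special case above (##Y_1 = X_1+X_2##, ##Y_2=X_2##), with a random 2x2 matrix standing in for ##C##:

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(2, 2))
C = B @ B.T                                   # random symmetric 2x2 stand-in for C

# Expand the special case term by term with those properties:
# Cov(Y1,Y1) = Cov(X1+X2, X1+X2), Cov(Y1,Y2) = Cov(X1+X2, X2), Cov(Y2,Y2) = Cov(X2, X2).
D11 = C[0, 0] + C[0, 1] + C[1, 0] + C[1, 1]
D12 = C[0, 1] + C[1, 1]
D22 = C[1, 1]

# The same expansion written as one linear transformation D = A C A^T,
# where the rows of A hold the coefficients of Y1 and Y2 in terms of X1 and X2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = A @ C @ A.T
print(np.allclose(D, [[D11, D12], [D12, D22]]))   # True
```

So the transformation that reproduces the quoted entries is the usual ##D = A C A^T## form, which is exactly these bilinearity properties applied entry by entry.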
 
You are right, that is sufficient to cover all those cases. Thanks.

A nice transformation could simplify the practical side, but computers are good at adding tons of stuff anyway.
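For the practical side, the whole general case collapses to one matrix product: collecting the weights of the transformed variables into the rows of a matrix ##W##, the new covariance matrix is ##W C W^T##, whether or not the variables share measured values. A minimal sketch with made-up dimensions and weights:

```python
import numpy as np

rng = np.random.default_rng(2)

n_meas, n_new = 30, 6
M = rng.normal(size=(n_meas, n_meas))
C = M @ M.T                                   # placeholder for the measured covariance matrix

# Each row of W holds the (non-negative) weights defining one transformed variable.
W = rng.uniform(0.0, 1.0, size=(n_new, n_meas))
D = W @ C @ W.T                               # 6x6 covariance matrix of the weighted sums

# The simple case from the start of the thread is the 0/1 weight matrix that
# sums groups of five; this reproduces the 5x5 block sums of C.
W_sum = np.kron(np.eye(n_new), np.ones(n_meas // n_new))
D_sum = W_sum @ C @ W_sum.T
print(np.allclose(D_sum, C.reshape(6, 5, 6, 5).sum(axis=(1, 3))))   # True
```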
 