Covariance matrix for transformed variables


Discussion Overview

The discussion revolves around the calculation of the covariance matrix for transformed variables derived from a set of experimental measurements. Participants explore both specific cases and general approaches to derive the covariance matrix for sums of measured values and weighted sums of original variables, addressing challenges related to off-diagonal elements and transformations.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes a simple case involving 30 experimental values and seeks confirmation on how to derive the covariance matrix for sums of groups of these values, specifically questioning the treatment of off-diagonal entries.
  • Another participant suggests that diagonalization might be relevant but expresses uncertainty about its applicability to the original problem.
  • A third participant clarifies that while they are aware of principal component analysis, they are looking for a different method to derive a new covariance matrix for transformed variables, expressing concerns about the stability of PCA.
  • One participant presents a specific transformation example involving measured and transformed variables, detailing how to compute certain entries of the new covariance matrix but notes the lack of a general transformation method to achieve this.
  • Another participant proposes using properties of covariance to derive necessary values from the full covariance matrix, questioning whether the discussion is about implementing these properties through block matrix manipulations.
  • A later reply acknowledges that the proposed properties are sufficient for the cases discussed but suggests that a nice transformation could simplify practical applications.

Areas of Agreement / Disagreement

Participants express differing views on the best approach to derive the covariance matrix for transformed variables, with no consensus reached on a single method or solution. Some agree on the sufficiency of certain properties of covariance, while others seek more specific transformations.

Contextual Notes

Participants mention challenges related to the stability of methods like principal component analysis and the complexity of constructing rotation matrices, indicating potential limitations in their approaches.

Messages
37,444
Reaction score
14,313
This sounds like a common application, but I didn't find a discussion of it.

Simple case:
I have 30 experimental values, and I have the full covariance matrix for the measurements (they are correlated). I am now interested in the sum of the first 5 measured values, the sum of the following 5 measured values, and so on. In total I want 6 values and their covariance matrix. The diagonal entries of the new covariance matrix are easy - just sum the corresponding 5x5 blocks along the diagonal of the original covariance matrix. Do I also get the off-diagonal entries as sums of the corresponding 5x5 blocks? I would expect so, but a confirmation would be nice.
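To confirm: yes, every entry of the new covariance matrix (not just the diagonal) is the sum of the corresponding 5x5 block. A quick numerical sketch, assuming numpy and using a randomly generated positive semidefinite matrix as a stand-in for the experimental covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 30x30 covariance matrix: M @ M.T is symmetric positive semidefinite.
M = rng.normal(size=(30, 30))
C = M @ M.T

# Aggregation matrix A: row i sums measurements 5i .. 5i+4, so Y = A X.
A = np.kron(np.eye(6), np.ones((1, 5)))  # shape (6, 30)

# Covariance of Y = A X is D = A C A^T.
D = A @ C @ A.T

# Check that every entry D[i, j] equals the sum of the corresponding 5x5 block of C.
for i in range(6):
    for j in range(6):
        block_sum = C[5 * i:5 * i + 5, 5 * j:5 * j + 5].sum()
        assert np.isclose(D[i, j], block_sum)
```

The check passes for every entry, diagonal and off-diagonal alike, because summing a block of C is exactly what the matrix product ##A C A^T## does for 0/1 weights.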

General case:
More generally, if my transformed variables are a weighted sum of the original variables (weights are not negative), how do I get the off-diagonal elements of the covariance matrix? As long as two transformed variables do not share a common measured value, I can scale everything in the covariance matrix to get back to the previous case. But what if they do? I was playing around with rotation matrices (going to a basis of transformed variables plus some dummy variables) but somehow it didn't work, and constructing 30x30 rotation matrices is ugly.
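For the general case, no rotation matrix is needed: if the transformed variables are ##Y = A X## for a weight matrix ##A## (which need not be square, and the rows may share measured values), the new covariance matrix is ##D = A C A^T##. A minimal sketch with illustrative numbers, where the two weighted sums share ##X_2##:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
C = M @ M.T  # illustrative 4x4 covariance matrix of (X1, X2, X3, X4)

# Two weighted sums sharing a variable: Y1 = X1 + 2*X2, Y2 = X2 + X3.
A = np.array([[1.0, 2.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0]])

D = A @ C @ A.T  # 2x2 covariance matrix of (Y1, Y2)

# Cross-check the off-diagonal entry by expanding Cov by bilinearity:
# Cov(X1 + 2 X2, X2 + X3) = C12 + C13 + 2 C22 + 2 C23
expected = C[0, 1] + C[0, 2] + 2 * C[1, 1] + 2 * C[1, 2]
assert np.isclose(D[0, 1], expected)
```

The formula works for arbitrary real weights (negative ones included) and regardless of whether the transformed variables share measured values.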
 
May I ask whether you are trying to create a risk-minimal DAX portfolio made out of five paper subportfolios? If so, there is vast literature on the issue. E.g. I've found a dissertation, Risk proportion and Eigen-Risk-Portfolio. I haven't read it and simply relied on the source, but it looks helpful (if I understood you correctly and the main problem here is the diagonalization).

Edit: And https://en.wikipedia.org/wiki/Principal_component_analysis provides a lot of information and links, including software tools.
 
I know about principal component analysis but that is not what I want to do. I could diagonalize the covariance matrix that way, but then I still have to produce a new covariance matrix for the target variables, which is yet another transformation that is very similar to the original problem, and I don't know how stable the principal component analysis would be.

The application is in physics.
 
I had a look at another special case interesting to me. Let ##X_i## be the measured variables and ##Y_i## be the transformed variables. The original covariance matrix is C, the new one is D.

Let ##Y_1 = X_1+X_2## and ##Y_2=X_2##. Then ##D_{11} = C_{11}+ C_{12}+ C_{21}+ C_{22}##, ##D_{22}=C_{22}##, and ##D_{12}=D_{21}=C_{12}+C_{22}##.
I don't find a transformation that would produce this result.
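There is a transformation that produces exactly these entries: writing ##Y = AX## with ##A = \begin{pmatrix}1 & 1\\ 0 & 1\end{pmatrix}## gives ##D = ACA^T##. A concrete check, assuming numpy and an illustrative 2x2 covariance matrix:

```python
import numpy as np

# Illustrative symmetric 2x2 covariance matrix of (X1, X2).
C = np.array([[4.0, 1.5],
              [1.5, 3.0]])

# Y1 = X1 + X2, Y2 = X2  =>  Y = A X with:
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

D = A @ C @ A.T

# D11 = Var(X1 + X2) = C11 + C12 + C21 + C22
assert np.isclose(D[0, 0], C[0, 0] + C[0, 1] + C[1, 0] + C[1, 1])
# D22 = Var(X2) = C22
assert np.isclose(D[1, 1], C[1, 1])
# D12 = Cov(X1 + X2, X2) = C12 + C22
assert np.isclose(D[0, 1], C[0, 1] + C[1, 1])
```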
 
Perhaps I don't understand the question, but if you have the full covariance matrix, can't you find everything you need from properties like ##COV(X_1+X_2,X_3) = COV(X_1,X_3) + COV(X_2,X_3)## and ##COV(\alpha X_1,X_3) = \alpha COV(X_1,X_3)## ?

Or are we asking for how the consequences of those properties are implemented with block matrix manipulations ?
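Those two properties, applied term by term, give ##D_{kl} = \sum_i \sum_j a_{ki} a_{lj} C_{ij}##, which is precisely the entry ##(A C A^T)_{kl}##. A sketch, with illustrative weights and a randomly generated covariance matrix, showing the explicit double sum and the matrix product agree:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3))
C = M @ M.T  # illustrative covariance matrix of (X1, X2, X3)

# Weights: Y_k = sum_i a[k, i] * X_i
a = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.5, 2.0]])

# Applying the two covariance properties term by term:
# Cov(sum_i a_ki X_i, sum_j a_lj X_j) = sum_i sum_j a_ki a_lj Cov(X_i, X_j)
D_sum = np.zeros((2, 2))
for k in range(2):
    for l in range(2):
        for i in range(3):
            for j in range(3):
                D_sum[k, l] += a[k, i] * a[l, j] * C[i, j]

# The same double sum, written as a single matrix product:
D_matrix = a @ C @ a.T
assert np.allclose(D_sum, D_matrix)
```

So the block-matrix manipulation implementing those properties is just ##D = A C A^T##; no diagonalization or rotation is required.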
 
You are right, that is sufficient to cover all those cases. Thanks.

A nice transformation could simplify the practical side, but computers are good at adding tons of stuff anyway.
 