Simple Covariance Matrix Question

In summary, the conversation concerns a time-varying random vector, \underline{m}(t), whose elements have unity power and are mutually uncorrelated, so its covariance matrix is the identity matrix. The original poster asks what can be said about the factors \underline{b}(t) and m_0(t) in the decomposition \underline{m}(t) = \underline{b}(t)m_0(t), and in particular whether the covariance matrix of \underline{b}(t) must also be proportional to the identity matrix. The replies ask for clarification of the terminology and assumptions used.
  • #1
weetabixharry
I have a time-varying random vector, [itex]\underline{m}(t)[/itex], whose elements are unity power and uncorrelated. So, its covariance matrix is equal to the identity matrix.

Now, if I separate [itex]\underline{m}(t)[/itex] into two separate components (a vector and a scalar):

[itex]\underline{m}(t)\triangleq\underline{b}(t)m_0(t)[/itex]

I'm confused as to what I can say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]. In particular, I feel that the covariance matrix of [itex]\underline{b}(t)[/itex] should be proportional to the identity matrix. Therefore, I also feel that [itex]m_0(t)[/itex] should be uncorrelated with the elements of [itex]\underline{b}(t)[/itex]. However, I cannot see how to prove or disprove these things. Where can I start?!

Any help is greatly appreciated!
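One special case I can at least check numerically (under extra assumptions of my own, not anything I can prove in general): if the elements of [itex]\underline{b}(t)[/itex] are i.i.d. unit-modulus random phases and [itex]m_0(t)[/itex] is an independent unit-modulus scalar, then the sample covariance of [itex]\underline{m}(t)[/itex] does come out as the identity. A quick sketch in Python/NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 4, 200_000

# Assumed construction (not proven to be the only one): b has i.i.d.
# unit-modulus elements (uniform random phases) and m0 is an
# independent unit-modulus scalar.
b = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(trials, n)))
m0 = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(trials, 1)))
m = b * m0  # each row is one realization of m(t)

# Sample covariance E{m m^H}: should be close to the identity.
R = m.T @ m.conj() / trials
print(np.round(R.real, 2))
```

The diagonal entries are exactly 1 here because every element of [itex]\underline{m}(t)[/itex] has unit modulus by construction; the off-diagonal entries vanish only on average.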
 
  • #2
weetabixharry said:
I have a time-varying random vector, [itex]\underline{m}(t)[/itex], whose elements are unity power

What does "unity power" mean?

and uncorrelated. So, its covariance matrix is equal to the identity matrix.

You didn't say what the random variables involved in the covariance matrix have to do with a time varying vector. Are they the coordinate values of the time varying vector at different times?

Now, if I separate [itex]\underline{m}(t)[/itex] into two separate components (a vector and a scalar):

[itex]\underline{m}(t)\triangleq\underline{b}(t)m_0(t)[/itex]

You can write a single random vector that way. But, as far as I know, the term covariance is a concept involving scalar random variables. So if you are dealing with a covariance matrix for random vectors, the random variables involved will be coordinates of the vectors. Suppose we are dealing with two dimensional vectors and the cartesian coordinates are independent random variables. They will be uncorrelated. But if you change coordinates, the coordinates in the new coordinate system may not be independent of each other and they may be correlated.

In two dimensions, suppose the x coordinate has a normal distribution with mean 0 and standard deviation 100, and the y coordinate (independently) has a normal distribution with mean 0 and standard deviation 1. Suppose the vector is expressed via the 3 random variables (ux, uy, r), where (ux, uy) is the unit direction vector and r is the magnitude of the vector. Suppose I get a realization where (ux, uy) points almost due north (i.e. in the direction of the positive y axis). Then it is unlikely that the x coordinate had a relatively large value, because for the vector to point north the y value would then have to be huge, and y has a standard deviation of only 1. So, in a manner of speaking, the more likely north-pointing vectors have relatively small x values and medium-sized y values. This informally indicates that there can be dependence between the unit direction vector and the magnitude.
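This intuition is easy to check with a quick simulation (using the same standard deviations, 100 and 1, as in the example): the sample correlation between the north component of the unit direction vector and the magnitude comes out clearly negative.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 100_000

# x has standard deviation 100, y has standard deviation 1, independently.
x = rng.normal(0, 100, trials)
y = rng.normal(0, 1, trials)

r = np.hypot(x, y)   # magnitude of the vector
uy = y / r           # north component of the unit direction vector

# Vectors that point nearly north (|uy| near 1) require |x| to be small,
# which forces the magnitude r to be small too -- so direction and
# magnitude are dependent, and |uy| and r are negatively correlated.
corr = np.corrcoef(np.abs(uy), r)[0, 1]
print(corr)
```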
 
  • #3
weetabixharry said:
I have a time-varying random vector, [itex]\underline{m}(t)[/itex], whose elements are unity power and uncorrelated. So, its covariance matrix is equal to the identity matrix.

Now, if I separate [itex]\underline{m}(t)[/itex] into two separate components (a vector and a scalar):

[itex]\underline{m}(t)\triangleq\underline{b}(t)m_0(t)[/itex]

I'm confused as to what I can say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]. In particular, I feel that the covariance matrix of [itex]\underline{b}(t)[/itex] should be proportional to the identity matrix. Therefore, I also feel that [itex]m_0(t)[/itex] should be uncorrelated with the elements of [itex]\underline{b}(t)[/itex]. However, I cannot see how to prove or disprove these things. Where can I start?!

Any help is greatly appreciated!

Are you saying that the vector components are assumed to be time varying independently such that all off-diagonal elements of the variance-covariance matrix are 0? If so, this in no way entails that the diagonal elements should all be 1. Since the main diagonal is the variance of each component, you are suggesting a system where each component varies randomly with mean 0 and variance 1, or that each component has an identical standard normal distribution. If so, exactly what are you trying to prove beyond what you have already defined?
 
  • #4
There are clearly a lot of problems with my use of terminology here. Furthermore, I neglected to mention that I assume all my variables to have zero mean.

The covariance matrix of [itex]\underline{m}(t)[/itex] is:

[itex]\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I}[/itex]

where [itex]\mathcal{E}\{\}[/itex], [itex]()^H[/itex] and [itex]\textbf{I}[/itex] denote the expectation, Hermitian transpose (conjugate transpose) and identity matrix, respectively.

It seems as though I'm not even speaking the right language. This is perhaps because I have no understanding of what the consequences would be if [itex]\underline{m}(t)[/itex] were deterministic. I don't see that it really matters for my particular problem.

I'm saying that if I write [itex]\underline{m}(t)[/itex] as:

[itex]\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)[/itex]

where [itex]\odot[/itex] and [itex]\underline{1}[/itex] denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]?
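For instance, here is a toy construction I tried (all the numbers are just ones I picked, and the scales of [itex]\underline{b}(t)[/itex] are deliberately made dependent on [itex]m_0(t)[/itex]): it suggests that [itex]\underline{m}(t)[/itex] can have identity covariance while the covariance of [itex]\underline{b}(t)[/itex] is diagonal but not proportional to the identity, so my conjecture seems to need an independence assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
trials = 400_000

# Hypothetical counterexample: m0 takes values 1 or 2 with equal
# probability, and the per-element scales of b depend on m0, chosen so
# that m = b * m0 still has unit power in every element:
#   element 1: scale 1.0 when m0 == 1, scale 0.5 when m0 == 2
#              -> E|m_1|^2 = (1*1 + 0.25*4)/2 = 1
#   element 2: scale sqrt(0.5) when m0 == 1, sqrt(0.375) when m0 == 2
#              -> E|m_2|^2 = (0.5*1 + 0.375*4)/2 = 1
m0 = rng.choice([1.0, 2.0], size=(trials, 1))
g = (rng.standard_normal((trials, 2))
     + 1j * rng.standard_normal((trials, 2))) / np.sqrt(2)

scale1 = np.where(m0 == 1.0, 1.0, 0.5)
scale2 = np.where(m0 == 1.0, np.sqrt(0.5), np.sqrt(0.375))
b = g * np.concatenate([scale1, scale2], axis=1)
m = b * m0

R_m = m.T @ m.conj() / trials  # sample E{m m^H}: close to identity
R_b = b.T @ b.conj() / trials  # diagonal, but NOT proportional to I
print(np.round(R_m.real, 2))
print(np.round(np.diag(R_b).real, 3))  # analytically 0.625 and 0.4375
```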
 
  • #5
weetabixharry said:
There are clearly a lot of problems with my use of terminology here. Furthermore, I neglected to mention that I assume all my variables to have zero mean.

The covariance matrix of [itex]\underline{m}(t)[/itex] is:

[itex]\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I}[/itex]

where [itex]\mathcal{E}\{\}[/itex], [itex]()^H[/itex] and [itex]\textbf{I}[/itex] denote the expectation, Hermitian transpose (conjugate transpose) and identity matrix, respectively.

It seems as though I'm not even speaking the right language. This is perhaps because I have no understanding of what the consequences would be if [itex]\underline{m}(t)[/itex] were deterministic. I don't see that it really matters for my particular problem.

I'm saying that if I write [itex]\underline{m}(t)[/itex] as:

[itex]\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)[/itex]

where [itex]\odot[/itex] and [itex]\underline{1}[/itex] denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]?

I think you're talking about something other than random vectors and covariance matrices as I understand them. There is no column vector of 1s in the case you originally described; as you said, the covariance matrix is the identity matrix. I'll let someone else answer your question. The fact that you've ended up with a scalar matrix probably has something to do with whatever you're trying to prove.
 
  • #6
weetabixharry said:
if I write [itex]\underline{m}(t)[/itex] as:

[itex]\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)[/itex]

where [itex]\odot[/itex] and [itex]\underline{1}[/itex] denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]?

Perhaps you could clear this up with a 2-dimensional example. What confuses me about your terminology is that the dimensions don't seem to make sense. If I have a two dimensional vector and I wish to separate the magnitude and direction information, I'm going to end up with 3 scalar variables.

On the other hand, if I have a 2 dimensional column vector of complex scalars and factor each individual scalar into magnitude and direction information, I could express each scalar as a magnitude times a complex number of unit magnitude. That would be expressible as the element-by-element product of two 2 dimensional column vectors. But it is not what I would call factoring out the magnitude of the vector from its direction.
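For a concrete 2-dimensional instance of that element-by-element factoring (with arbitrarily chosen numbers):

```python
import numpy as np

# A 2-element complex vector, factored element-by-element into
# magnitudes and unit-modulus factors.
v = np.array([3 + 4j, 1 - 1j])
mag = np.abs(v)   # elementwise magnitudes
unit = v / mag    # elementwise unit-modulus factors

assert np.allclose(mag * unit, v)      # Hadamard product recovers v
assert np.allclose(np.abs(unit), 1.0)  # each factor has modulus 1
print(mag, unit)
```

Note that `mag` here is a vector of per-element magnitudes, not the single magnitude of the vector, which is the distinction I was drawing above.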
 

1. What is a simple covariance matrix?

A simple covariance matrix is a square matrix that measures the linear relationship between two or more variables. It is used to determine the strength and direction of the relationship between variables.

2. How is a simple covariance matrix calculated?

A simple covariance matrix is calculated by finding the covariance between each pair of variables in a dataset and arranging the results in matrix form. The covariance between two variables is the average of the products of their deviations from their respective means: for a population covariance, the sum of these products is divided by the number of observations n; for a sample covariance, it is divided by n − 1.
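As a small sketch (with made-up numbers), the hand calculation can be compared against NumPy's `np.cov`, which by default uses the sample (n − 1) normalization:

```python
import numpy as np

# Two variables (rows), four observations each -- values made up.
data = np.array([[2.1, 2.5, 3.6, 4.0],      # variable X
                 [8.0, 10.0, 12.0, 14.0]])  # variable Y

# "By hand": average the products of deviations from the mean,
# using the sample normalization (n - 1).
mean = data.mean(axis=1, keepdims=True)
dev = data - mean
cov_hand = dev @ dev.T / (data.shape[1] - 1)

cov_np = np.cov(data)  # np.cov also divides by n - 1 by default
print(np.allclose(cov_hand, cov_np))  # True
```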

3. What does a positive covariance value indicate?

A positive covariance value indicates a positive relationship between two variables, meaning that as one variable increases, the other variable also tends to increase. This suggests a direct or positive correlation between the variables.

4. Can a simple covariance matrix be used to measure the strength of a relationship?

Yes, a simple covariance matrix can be used to measure the strength of a linear relationship between variables. However, it only measures the strength of a linear relationship and does not provide information about the form of the relationship.

5. How is a simple covariance matrix useful in data analysis?

A simple covariance matrix is useful in data analysis as it helps identify patterns and relationships between variables. It can also be used to identify redundant variables and select the most relevant variables for further analysis.
