Simple Covariance Matrix Question

AI Thread Summary
The discussion revolves around the properties of a time-varying random vector, \underline{m}(t), which is defined to have unity power and uncorrelated elements, resulting in a covariance matrix equal to the identity matrix. The user seeks clarification on how to analyze the components when separating \underline{m}(t) into a vector \underline{b}(t) and a scalar m_0(t), particularly regarding the covariance matrix of \underline{b}(t) and its relationship with m_0(t). There is confusion about the implications of this separation, especially concerning the independence and correlation of the components. Participants highlight the need for precise terminology and suggest that the covariance matrix's properties depend on the definitions and assumptions made about the random variables involved. The conversation emphasizes the importance of understanding the relationship between vector components and their statistical properties in the context of covariance.
weetabixharry
I have a time-varying random vector, \underline{m}(t), whose elements are unity power and uncorrelated. So, its covariance matrix is equal to the identity matrix.

Now, if I separate \underline{m}(t) into two separate components (a vector and a scalar):

\underline{m}(t)\triangleq\underline{b}(t)m_0(t)

I'm confused as to what I can say about \underline{b}(t) and m_0(t). In particular, I feel that the covariance matrix of \underline{b}(t) should be proportional to the identity matrix. Therefore, I also feel that m_0(t) should be uncorrelated with the elements of \underline{b}(t). However, I cannot see how to prove or disprove these things. Where can I start?!

Any help is greatly appreciated!
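One possible starting point (a sketch, using the expectation notation defined later in the thread, and under an extra assumption the thread does not establish): substituting the decomposition into the covariance gives

\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \mathcal{E}\{|m_0(t)|^2\,\underline{b}(t)\underline{b}^H(t)\}

If m_0(t) is assumed independent of \underline{b}(t), the expectation factors as

\mathcal{E}\{|m_0(t)|^2\}\,\mathcal{E}\{\underline{b}(t)\underline{b}^H(t)\} = \textbf{I}

so that \mathcal{E}\{\underline{b}(t)\underline{b}^H(t)\} = \textbf{I}/\mathcal{E}\{|m_0(t)|^2\}, which is proportional to the identity, as conjectured. Without the independence assumption the expectation need not factor, and the conclusion can fail (a counterexample is sketched further down the thread).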
 
weetabixharry said:
I have a time-varying random vector, \underline{m}(t), whose elements are unity power

What does "unity power" mean?

and uncorrelated. So, its covariance matrix is equal to the identity matrix.

You didn't say what the random variables involved in the covariance matrix have to do with a time-varying vector. Are they the coordinate values of the time-varying vector at different times?

Now, if I separate \underline{m}(t) into two separate components (a vector and a scalar):

\underline{m}(t)\triangleq\underline{b}(t)m_0(t)

You can write a single random vector that way. But, as far as I know, covariance is a concept involving scalar random variables. So if you are dealing with a covariance matrix for random vectors, the random variables involved will be coordinates of the vectors. Suppose we are dealing with two-dimensional vectors and the Cartesian coordinates are independent random variables. They will be uncorrelated. But if you change coordinates, the coordinates in the new coordinate system may not be independent of each other, and they may be correlated.

In two dimensions, suppose the x coordinate has a normal distribution with mean 0 and standard deviation 100, and the y coordinate (independently) has a normal distribution with mean 0 and standard deviation 1. Suppose the vector is expressed as the 3 random variables (ux, uy, r), where (ux, uy) is a unit vector and r is the magnitude of the vector. Suppose I get a realization where (ux, uy) points almost due north (i.e. in the direction of the positive y axis). Then it isn't likely that the x coordinate had a relatively large value, because for the vector to point north the y value would have to be huge, and y has a standard deviation of only 1. So, in a manner of speaking, the more likely north-pointing vectors have relatively small x values and medium-sized y values. This informally indicates that there can be dependence between the unit direction vector and the magnitude.
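A quick numerical illustration of this dependence (a minimal NumPy sketch, not part of the original discussion):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Independent Cartesian coordinates: x ~ N(0, 100^2), y ~ N(0, 1^2)
x = rng.normal(0.0, 100.0, n)
y = rng.normal(0.0, 1.0, n)

r = np.hypot(x, y)         # magnitude of each realization
ux, uy = x / r, y / r      # components of the unit direction vector

# Near-north realizations (|ux| small) can only occur when |x| is small,
# which also forces r to be small, so |ux| and r are positively correlated.
print(np.corrcoef(np.abs(ux), r)[0, 1])  # clearly positive => dependence
```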
 
weetabixharry said:
I have a time-varying random vector, \underline{m}(t), whose elements are unity power and uncorrelated. So, its covariance matrix is equal to the identity matrix.

Now, if I separate \underline{m}(t) into two separate components (a vector and a scalar):

\underline{m}(t)\triangleq\underline{b}(t)m_0(t)

I'm confused as to what I can say about \underline{b}(t) and m_0(t). In particular, I feel that the covariance matrix of \underline{b}(t) should be proportional to the identity matrix. Therefore, I also feel that m_0(t) should be uncorrelated with the elements of \underline{b}(t). However, I cannot see how to prove or disprove these things. Where can I start?!

Any help is greatly appreciated!

Are you saying that the vector components are assumed to vary randomly and independently in time, such that all off-diagonal elements of the variance-covariance matrix are 0? If so, this in no way entails that the diagonal elements should all be 1. Since the main diagonal holds the variance of each component, you are suggesting a system where each component varies randomly with mean 0 and variance 1 (an identical standard normal distribution for each component would be one example). If so, exactly what are you trying to prove beyond what you have already defined?
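As a quick sanity check of that reading (a hypothetical NumPy sketch, with standard normal components chosen as one example):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three i.i.d. zero-mean, unit-variance components per realization
# (standard normal is just one distribution with these properties)
m = rng.standard_normal((3, 100_000))

# Sample estimate of E{m m^T}: approximately the 3x3 identity matrix
print(m @ m.T / m.shape[1])
```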
 
There are clearly a lot of problems with my use of terminology here. Furthermore, I neglected to mention that I assume all my variables to have zero mean.

The covariance matrix of \underline{m}(t) is:

\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I}

where \mathcal{E}\{\}, ()^H and \textbf{I} denote the expectation, Hermitian transpose (conjugate transpose) and identity matrix, respectively.

It seems as though I'm not even speaking the right language. This is perhaps because I have no understanding of what the consequences would be if \underline{m}(t) were deterministic. I don't see that it really matters for my particular problem.

I'm saying that if I write \underline{m}(t) as:

\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)

where \odot and \underline{1} denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about \underline{b}(t) and m_0(t)?
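For what it's worth, a constructed counterexample (a hypothetical sketch, not from the thread) suggests that \mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I} does not by itself force the covariance of \underline{b}(t) to be proportional to the identity:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# b2 is uniform on {+1, -1, +2, -2}; b1 = +-sqrt(1.6) with an independent
# fair sign; m0 = (independent fair sign) / |b2|, so all variables are
# zero-mean, and m0 is uncorrelated with (but dependent on) b2.
b2 = rng.choice([1.0, -1.0, 2.0, -2.0], size=n)
b1 = np.sqrt(1.6) * rng.choice([1.0, -1.0], size=n)
m0 = rng.choice([1.0, -1.0], size=n) / np.abs(b2)

b = np.vstack([b1, b2])
m = b * m0  # element-wise scaling: m(t) = b(t) Hadamard-times (1 * m0(t))

print(m @ m.T / n)  # ~ identity: the covariance of m is I
print(b @ b.T / n)  # ~ diag(1.6, 2.5): NOT proportional to the identity
```

Here m_0(t) is uncorrelated with the elements of \underline{b}(t) yet not independent of them, and the covariance of \underline{b}(t) comes out diagonal but with unequal diagonal entries. Combined with the factorization sketched earlier in the thread, this suggests that some extra assumption, such as independence of m_0(t) and \underline{b}(t), is needed for the conjecture to hold.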
 
weetabixharry said:
There are clearly a lot of problems with my use of terminology here. Furthermore, I neglected to mention that I assume all my variables to have zero mean.

The covariance matrix of \underline{m}(t) is:

\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I}

where \mathcal{E}\{\}, ()^H and \textbf{I} denote the expectation, Hermitian transpose (conjugate transpose) and identity matrix, respectively.

It seems as though I'm not even speaking the right language. This is perhaps because I have no understanding of what the consequences would be if \underline{m}(t) were deterministic. I don't see that it really matters for my particular problem.

I'm saying that if I write \underline{m}(t) as:

\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)

where \odot and \underline{1} denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about \underline{b}(t) and m_0(t)?

I think you're talking about something other than random vectors and covariance matrices. There is no column vector of 1s in the case you described. As you said, it looks like an identity matrix. I'll let someone else answer your question. The fact that you've defined a scalar matrix probably has something to do with whatever you're trying to prove.
 
weetabixharry said:
if I write \underline{m}(t) as:

\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)

where \odot and \underline{1} denote the Hadamard (element-by-element) product and column vector of ones, respectively... then what can I say about \underline{b}(t) and m_0(t)?

Perhaps you could clear this up with a 2-dimensional example. What confuses me about your terminology is that the dimensions don't seem to make sense. If I have a two-dimensional vector and I wish to separate the magnitude and direction information, I'm going to end up with 3 scalar variables.

On the other hand, if I have a 2-dimensional column vector of complex scalars and factor each individual scalar into magnitude and direction information, I could express each scalar as a magnitude times a complex number of unit magnitude. That would be expressible as the element-by-element product of two 2-dimensional column vectors. But it is not what I would call factoring out the magnitude of the vector from its direction.
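An illustration of that element-by-element factoring (a minimal sketch, assuming i.i.d. unit-power circular complex Gaussian elements, which is one model consistent with the original post):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Two i.i.d. unit-power circular complex Gaussian elements per realization
m = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)

mag = np.abs(m)   # element-wise magnitudes: a 2-d vector, not a single scalar
phs = m / mag     # element-wise unit-modulus (phase) factors
print(np.allclose(mag * phs, m))  # True: m is the Hadamard product mag * phs

# For this model the unit-modulus factor happens to have identity covariance:
print(phs @ phs.conj().T / n)     # ~ 2x2 identity matrix
```

Note that the magnitude factor here is itself a vector, which echoes the dimension mismatch described above: element-by-element factoring is not the same as factoring out a single scalar m_0(t).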
 