Simple Covariance Matrix Question 
#1
Oct 30, 2011, 02:20 PM

P: 108

I have a time-varying random vector, [itex]\underline{m}(t)[/itex], whose elements have unit power and are mutually uncorrelated, so its covariance matrix equals the identity matrix.
Now, if I factor [itex]\underline{m}(t)[/itex] into two components (a vector and a scalar), [itex]\underline{m}(t)\triangleq\underline{b}(t)m_0(t)[/itex], I'm unsure what I can say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]. In particular, I feel that the covariance matrix of [itex]\underline{b}(t)[/itex] should be proportional to the identity matrix, and therefore that [itex]m_0(t)[/itex] should be uncorrelated with the elements of [itex]\underline{b}(t)[/itex]. However, I cannot see how to prove or disprove either claim. Where can I start? Any help is greatly appreciated!
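To make the setup concrete, here is a small numerical sketch (Python/NumPy, with an assumed vector length and number of time samples) of a complex random vector whose elements have unit power and are uncorrelated, so its sample covariance comes out close to the identity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 4, 200_000  # assumed vector length and number of time samples

# Unit-power, mutually uncorrelated circular complex Gaussian elements
m = (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T))) / np.sqrt(2)

# Sample estimate of the covariance matrix E{m(t) m^H(t)}, averaged over t
R = (m @ m.conj().T) / T
print(np.round(R.real, 2))  # close to the identity matrix
```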


#2
Oct 31, 2011, 06:50 PM

Sci Advisor
P: 3,295

In two dimensions, suppose the x coordinate has a normal distribution with mean 0 and standard deviation 100, and the y coordinate (independently) has a normal distribution with mean 0 and standard deviation 1. Express the vector as the three random variables (ux, uy, r), where (ux, uy) is a unit vector giving the direction and r is the magnitude. Suppose I get a realization where (ux, uy) points almost due north (i.e., in the direction of the positive y axis). Then the x coordinate probably did not have a relatively large value: for the vector to point north despite a large x, the y value would have to be huge, and y has a standard deviation of only 1. So, in a manner of speaking, the more likely north-pointing vectors have relatively small x values and moderately sized y values, which also makes their magnitudes small. This informally indicates that there can be dependence between the direction components and the magnitude.
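That informal argument can be checked by simulation. The sketch below (assumed sample size, and conditioning on uy > 0.95 as a stand-in for "nearly north") draws from the anisotropic Gaussian described above and compares the mean magnitude of near-north realizations with the overall mean magnitude:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500_000
x = 100.0 * rng.standard_normal(T)  # standard deviation 100
y = 1.0 * rng.standard_normal(T)    # standard deviation 1

r = np.hypot(x, y)            # magnitude of the vector
ux, uy = x / r, y / r         # unit-vector (direction) components

# Condition on "pointing nearly north": uy close to 1
north = uy > 0.95
print(r[north].mean(), r.mean())  # conditional mean magnitude is far smaller
```

The conditional mean magnitude comes out on the order of 1 while the unconditional mean is on the order of 100, confirming that direction and magnitude are dependent here.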


#3
Nov 1, 2011, 05:05 PM

P: 2,499




#4
Nov 1, 2011, 06:17 PM

P: 108

There are clearly some problems with my use of terminology here. Furthermore, I neglected to mention that I assume all my variables have zero mean.
The covariance matrix of [itex]\underline{m}(t)[/itex] is [itex]\mathcal{E}\{\underline{m}(t)\underline{m}^H(t)\} = \textbf{I}[/itex], where [itex]\mathcal{E}\{\}[/itex], [itex]()^H[/itex] and [itex]\textbf{I}[/itex] denote the expectation, Hermitian (conjugate) transpose and identity matrix, respectively. It seems as though I'm not even speaking the right language, perhaps because I have no understanding of what the consequences would be if [itex]\underline{m}(t)[/itex] were deterministic; I don't see that it really matters for my particular problem. What I am asking is: if I write [itex]\underline{m}(t)[/itex] as [itex]\underline{m}(t) \triangleq \left(\underline{b}(t) \odot \underline{1}m_0(t)\right)[/itex], where [itex]\odot[/itex] and [itex]\underline{1}[/itex] denote the Hadamard (element-by-element) product and the column vector of ones, respectively, then what can I say about [itex]\underline{b}(t)[/itex] and [itex]m_0(t)[/itex]?
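One direction of the question is easy to check numerically: if [itex]\underline{b}(t)[/itex] has uncorrelated elements with covariance proportional to the identity, and [itex]m_0(t)[/itex] is independent of it with complementary power, then the Hadamard product does have identity covariance. The sketch below (assumed dimensions and an assumed proportionality constant c) verifies this sufficient condition only; it does not show that the decomposition forces those properties:

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 4, 200_000  # assumed vector length and number of time samples
c = 2.0            # assumed constant: Cov(b) = c*I and E{|m0|^2} = 1/c

# b: uncorrelated zero-mean elements with power c per element
b = np.sqrt(c / 2) * (rng.standard_normal((N, T)) + 1j * rng.standard_normal((N, T)))
# m0: independent zero-mean scalar with power 1/c
m0 = np.sqrt(1 / (2 * c)) * (rng.standard_normal(T) + 1j * rng.standard_normal(T))

m = b * m0  # Hadamard product with the scalar m0(t) broadcast over elements
R = (m @ m.conj().T) / T
print(np.round(R.real, 2))  # ≈ I, since E{m m^H} = E{|m0|^2} E{b b^H} = I
```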


#5
Nov 1, 2011, 08:01 PM

P: 2,499




#6
Nov 3, 2011, 10:42 AM

Sci Advisor
P: 3,295

On the other hand, if I have a 2-dimensional column vector of complex scalars and factor each individual scalar into magnitude and direction information, I could express each scalar as a magnitude times a complex number of unit magnitude. That is expressible as the element-by-element product of two 2-dimensional column vectors, but it is not what I would call factoring out the magnitude of the vector from its direction.
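The distinction can be written out explicitly. The following sketch contrasts the per-element magnitude/phase factorization (a Hadamard product of two vectors) with factoring out the vector's overall norm from its direction:

```python
import numpy as np

rng = np.random.default_rng(3)
m = rng.standard_normal(2) + 1j * rng.standard_normal(2)  # 2-D complex vector

# Per-element factorization: each scalar = magnitude * unit-magnitude phase
mag = np.abs(m)
direction = np.exp(1j * np.angle(m))
# Recombines as an element-by-element (Hadamard) product
elementwise = mag * direction

# Contrast: factoring out the vector's overall magnitude from its direction
r = np.linalg.norm(m)   # scalar norm of the whole vector
u = m / r               # unit-norm direction vector
overall = r * u
```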

