weetabixharry
I have an $m \times n$ complex matrix, $\textbf{N}$, whose elements are zero-mean random variables. I have a covariance-like expression:
$$\mathcal{E}\left\{\textbf{N}\textbf{N}^H\right\} = \textbf{I}$$
where $\mathcal{E}\left\{\cdot\right\}$ denotes expectation, $(\cdot)^H$ denotes conjugate transpose, and $\textbf{I}$ is the identity matrix.
Basically, I want to know exactly what this tells me about the second-order statistics of the elements of $\textbf{N}$. For example, I know that if instead I just had an $m \times 1$ vector, $\textbf{n}$, then an identity covariance matrix would imply that all the elements of $\textbf{n}$ have unit variance and are uncorrelated.
Can I make any similar deductions from the matrix equation above? Many thanks for any help!
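For concreteness, here is a quick numerical sanity check I tried (this is just a sketch: I assume i.i.d. circularly-symmetric complex Gaussian entries, each with variance $1/n$, which is one ensemble that should satisfy the condition, since the $(i,i)$ entry of $\textbf{N}\textbf{N}^H$ is $\sum_{k=1}^{n}|N_{ik}|^2$):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, trials = 4, 6, 20000

# i.i.d. circularly-symmetric complex Gaussian entries,
# real and imaginary parts each with variance 1/(2n),
# so each element has total variance 1/n
N = (rng.standard_normal((trials, m, n))
     + 1j * rng.standard_normal((trials, m, n))) / np.sqrt(2 * n)

# Monte Carlo estimate of E{N N^H}: average N @ N^H over all trials
est = np.einsum('tik,tjk->ij', N, N.conj()) / trials

print(np.round(est, 2))  # close to the m x m identity
```

The empirical average does come out close to $\textbf{I}$ for this ensemble, but of course that only shows this distribution is consistent with the condition, not what the condition implies in general.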