sanctifier
Suppose X1, X2, ..., XN are mutually independent random vectors; that is, each Xi is a sample of a random vector X whose components are random variables indexed by position. The sample mean is [itex]\hat{M} = \frac{1}{N}\sum_{i=1}^{N} X_i[/itex], and the true mean of the Xi is M. To estimate the covariance matrix of the Xi, we employ the following formula:
[itex]\hat{\Sigma} = \frac{1}{N}\sum_{i=1}^{N} (X_i - \hat{M})(X_i - \hat{M})^T[/itex]
[itex]\phantom{\hat{\Sigma}} = \frac{1}{N}\sum_{i=1}^{N} \left((X_i - M) - (\hat{M} - M)\right)\left((X_i - M) - (\hat{M} - M)\right)^T[/itex]
[itex]\phantom{\hat{\Sigma}} = \frac{1}{N}\sum_{i=1}^{N} (X_i - M)(X_i - M)^T - (\hat{M} - M)(\hat{M} - M)^T[/itex]
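As a sanity check before asking, I verified the last equality numerically. This is a minimal NumPy sketch; the sample size N, dimension d, and true mean M are made-up values, and the samples are drawn from a normal distribution just for illustration:

```python
# Numerically check the identity
#   (1/N) * sum_i (X_i - Mhat)(X_i - Mhat)^T
#     == (1/N) * sum_i (X_i - M)(X_i - M)^T  -  (Mhat - M)(Mhat - M)^T
import numpy as np

rng = np.random.default_rng(0)
N, d = 50, 3
M = np.array([1.0, -2.0, 0.5])          # true mean (arbitrary choice)
X = rng.normal(loc=M, size=(N, d))      # rows are the sample vectors X_i
Mhat = X.mean(axis=0)                   # sample mean

lhs = sum(np.outer(x - Mhat, x - Mhat) for x in X) / N
rhs = sum(np.outer(x - M, x - M) for x in X) / N - np.outer(Mhat - M, Mhat - M)

print(np.allclose(lhs, rhs))  # True: the two sides agree for this data
```

The two sides agree to floating-point precision regardless of the seed, so the equality is an algebraic identity of the data, not something that holds only in expectation.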
My question is: how does the equality hold in the last step?
I did some work on this question. First I note that the transpose is a linear operation, i.e., for two vectors V and U, [itex](V + U)^T = V^T + U^T[/itex]; it follows that the expansion below is valid.
[itex](V - U)(V - U)^T = VV^T - VU^T - UV^T + UU^T[/itex]
Let V = (Xi - M) and U = ([itex]\hat{M}[/itex] - M). The terms missing in the last step of [itex]\hat{\Sigma}[/itex] are [itex]-VU^T[/itex] and [itex]-UV^T[/itex]. Now, I know the entries of [itex]E[VU^T][/itex] are covariances between Xi and [itex]\hat{M}[/itex], and if I assume these are all zero, the terms [itex]-VU^T[/itex] and [itex]-UV^T[/itex] would indeed drop out once we take the expectation of [itex]\hat{\Sigma}[/itex]. But in the last step they vanished before taking any expectation! Why?
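To make the puzzle concrete, I also checked numerically what the summed cross terms do. This is a minimal NumPy sketch with made-up values of N, d, and M; it computes the average [itex]\frac{1}{N}\sum_{i=1}^{N} V_i U^T[/itex] with [itex]V_i = X_i - M[/itex] and [itex]U = \hat{M} - M[/itex]:

```python
# Compute the averaged cross term (1/N) * sum_i (X_i - M)(Mhat - M)^T
# for random data, and compare it with (Mhat - M)(Mhat - M)^T.
import numpy as np

rng = np.random.default_rng(1)
N, d = 20, 2
M = np.zeros(d)                   # true mean (arbitrary choice)
X = rng.normal(size=(N, d))       # rows are the sample vectors X_i
Mhat = X.mean(axis=0)             # sample mean
U = Mhat - M

avg_cross = sum(np.outer(x - M, U) for x in X) / N
print(np.allclose(avg_cross, np.outer(U, U)))  # True for any sample
```

Empirically the averaged cross term equals [itex]UU^T[/itex] exactly, for every sample I tried, without any expectation being taken.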
Finally, I also notice that the sign of [itex]UU^T = (\hat{M} - M)(\hat{M} - M)^T[/itex] has changed from + to -. How does that happen?