Quote by kasraa
Thanks for your reply.
Actually I've read it.
My question is about MMSE estimation in general (with the Kalman filter only as one of its implementations for a particular case).
[tex] E \left[ \left( x - \hat{x} \right) \left( x - \hat{x} \right)^{T} \right] [/tex]
(where [tex] Z [/tex] is the observation (or sequence of observations, as in Kalman) and [tex] \hat{x}=E \left[ x \mid Z \right] [/tex]).
Again, if we look at Kalman as an implementation of MMSE estimator, in some references the conditional MSE is expanded to reach Kalman's covariances, and in some others, the unconditional MSE is used to do so.
(BTW, I won't be surprised if someone shows that they're equal in the Gaussian/linear case, and both references are right.)
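For what it's worth, here is a quick numerical sketch (my own toy scalar model, not from either reference) that supports exactly that. For a linear/Gaussian model the posterior variance does not depend on the observed value of Z, so the conditional MSE is the same for every Z and therefore equals the unconditional MSE:

```python
import numpy as np

# Toy scalar linear/Gaussian model (illustrative numbers):
#   x ~ N(0, p0),  z = x + v,  v ~ N(0, r)
rng = np.random.default_rng(0)
p0, r = 2.0, 0.5
n = 200_000

x = rng.normal(0.0, np.sqrt(p0), n)
z = x + rng.normal(0.0, np.sqrt(r), n)

k = p0 / (p0 + r)        # Kalman gain for this scalar model
x_hat = k * z            # MMSE estimate E[x | z] (prior mean is 0)

# Unconditional MSE: average squared error over the joint density of (x, z).
mse_uncond = np.mean((x - x_hat) ** 2)

# Conditional MSE: posterior variance p0 - k*p0, which contains no z at all.
p_cond = p0 - k * p0

print(mse_uncond, p_cond)   # both should be close to 0.4
```

The key point is the last two lines: because `p_cond` is independent of the observation, averaging it over Z changes nothing, which is why both expansions in the references land on the same Kalman covariance recursion.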
Thanks a lot.

I think this article may help.
http://cnx.org/content/m11267/latest/
I take it that p(Z) is your unconditional probability density and p(Z|x) is your likelihood function. Then, taking the joint density p(x)p(Z|x), you can use Bayes' theorem to get the posterior density, which is the conditional p(x|Z) = p(Z|x)p(x)/p(Z).
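As a concrete sketch of that Bayes step (a hypothetical scalar Gaussian example with made-up numbers, just for illustration), you can compute p(x|Z) on a grid and check the posterior mean against the closed-form conjugate-Gaussian answer:

```python
import numpy as np

# Hypothetical example: prior p(x) = N(0, p0), likelihood p(Z|x) = N(x, r),
# one observed value Z.  All numbers are illustrative assumptions.
p0, r, Z = 2.0, 0.5, 1.3

xs = np.linspace(-10.0, 10.0, 20001)
dx = xs[1] - xs[0]
prior = np.exp(-xs**2 / (2 * p0)) / np.sqrt(2 * np.pi * p0)           # p(x)
likelihood = np.exp(-(Z - xs)**2 / (2 * r)) / np.sqrt(2 * np.pi * r)  # p(Z|x)

evidence = np.sum(prior * likelihood) * dx          # p(Z), normalizing constant
posterior = prior * likelihood / evidence           # p(x|Z) = p(Z|x)p(x)/p(Z)

post_mean = np.sum(xs * posterior) * dx             # MMSE estimate E[x | Z]
closed_form = p0 / (p0 + r) * Z                     # conjugate-Gaussian posterior mean
print(post_mean, closed_form)                       # these should agree
```

The grid posterior mean matches the closed-form result, and that posterior mean is exactly the MMSE estimate [tex] \hat{x}=E \left[ x \mid Z \right] [/tex] discussed above.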
I'm not sure why you think the unconditional and conditional probability densities would be equal unless, of course, the prior density and the posterior density were equal. It appears that the MMSE estimate applies to the posterior density p(x|Z).
EDIT: The link is a bit slow, but it worked when I tested it at edit time.