
Fisher matrix for multivariate normal distribution

hdb
#1
Mar17-11, 01:47 PM
P: 3
The Fisher information matrix for the multivariate normal distribution is said in many places to simplify to:
[tex]\mathcal{I}_{m,n} = \frac{\partial \mu^\mathrm{T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n}.\ [/tex]
even on
http://en.wikipedia.org/wiki/Fisher_...l_distribution
I am trying to come up with the derivation, but no luck so far. Does anyone have any ideas / hints / references, how to do this?
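For concreteness, the setup I am assuming (following the Wikipedia page) is [tex] z \sim \mathcal{N}(\mu(\theta), \Sigma) [/tex] with [tex] \Sigma [/tex] known, so the log-likelihood is
[tex] \log p(z ; \theta) = -\tfrac{1}{2} (z-\mu(\theta))^\mathrm{T} \Sigma^{-1} (z-\mu(\theta)) - \tfrac{1}{2} \log \det (2\pi\Sigma), [/tex]
and the question is how the expected outer product of the score of this expression collapses to the formula above.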

Thank you
edmundfo
#2
May18-11, 10:47 PM
P: 5
Using matrix derivatives one has
[tex] D_x(x^\mathrm{T} A x) = x^\mathrm{T}(A+A^\mathrm{T}), [/tex]
from which it follows that
[tex] D_{\theta} \log p(z ; \mu(\theta), \Sigma) = (z-\mu(\theta))^\mathrm{T} \Sigma^{-1} D_{\theta} \mu(\theta). [/tex]
For simplicity let's write [tex] D_{\theta} \mu(\theta) = H. [/tex] The FIM is then found as
[tex] J = E\left[ \left( D_{\theta} \log p \right)^\mathrm{T} D_{\theta} \log p \right] = E\left[ H^\mathrm{T} \Sigma^{-1} (z - \mu(\theta)) (z - \mu(\theta))^\mathrm{T} \Sigma^{-1} H \right] = H^\mathrm{T} \Sigma^{-1} \Sigma \Sigma^{-1} H = H^\mathrm{T} \Sigma^{-1} H, [/tex]
which is equivalent to the given formula. Notice that this formula is only valid as long as [tex] \Sigma [/tex] does not depend on [tex] \theta [/tex]. I'm still struggling to find a derivation of the more general case where [tex] \Sigma [/tex] also depends on [tex] \theta [/tex].
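As a sanity check, here is a minimal Python sketch with an arbitrary toy model of my own (not from any reference): the Monte Carlo average of the score outer products should reproduce [tex] H^\mathrm{T} \Sigma^{-1} H [/tex].
[code]
# Numerical check: Monte Carlo E[score^T score] vs H^T Sigma^{-1} H (fixed Sigma).
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary toy model: z ~ N(mu(theta), Sigma) in R^2, theta in R^2,
# with mu(theta) = (theta_0 + theta_1, theta_0 * theta_1).
theta = np.array([1.0, 2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

def mu(t):
    return np.array([t[0] + t[1], t[0] * t[1]])

# Jacobian H = D_theta mu(theta), computed by hand for this toy mu.
H = np.array([[1.0, 1.0],
              [theta[1], theta[0]]])

# Closed-form FIM from the formula under discussion.
fim_formula = H.T @ Sigma_inv @ H

# Monte Carlo: each row of `scores` is one score (row) vector
# (z - mu)^T Sigma^{-1} H evaluated at one sample z.
n = 200_000
z = rng.multivariate_normal(mu(theta), Sigma, size=n)
scores = (z - mu(theta)) @ Sigma_inv @ H
fim_mc = scores.T @ scores / n

print(fim_formula)
print(fim_mc)  # should agree to about two decimal places
[/code]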

For some reason my tex code is not correctly parsed. I cannot understand why.
edmundfo
#3
May18-11, 11:00 PM
P: 5
Actually the general proof can apparently be found in Porat & Friedlander: Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol ASSP-34, No. 1, Feb. 1986.
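If I remember the general result correctly (it is also quoted on the Wikipedia page linked above), when both [tex] \mu [/tex] and [tex] \Sigma [/tex] depend on [tex] \theta [/tex] the FIM picks up an extra trace term:
[tex] \mathcal{I}_{m,n} = \frac{\partial \mu^\mathrm{T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \operatorname{tr}\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n} \right). [/tex]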

SW VandeCarr
#4
May19-11, 08:16 PM
P: 2,499
Quote by edmundfo:
R^{-1} H] = H^T R^{-1} R R^{-1} H = H^T R^{-1} H [\tex]

For some reason my tex code is not correctly parsed. I cannot understand why.
For one thing, you're using a backslash ([\tex]) instead of a forward slash ([/tex]) at the end of your code.
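For example, [tex] x^2 [/tex] closes correctly and renders, while [tex] x^2 [\tex] leaves the raw code visible.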
hdb
#5
May20-11, 03:56 AM
P: 3
Quote by edmundfo:
Actually the general proof can apparently be found in Porat & Friedlander: Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol ASSP-34, No. 1, Feb. 1986.
Thank you for the answers. In the meantime I have found another reference, which gives a direct derivation of the same result; to me this one seems easier to interpret:

Klein, A., and H. Neudecker, "A direct derivation of the exact Fisher information matrix of Gaussian vector state space models," Linear Algebra and its Applications 321, no. 1-3 (2000).

