Fisher matrix for multivariate normal distribution

by hdb
Tags: distribution, fisher, matrix, multivariate, normal
 P: 3 The Fisher information matrix for a multivariate normal distribution is said in many places to simplify to $$\mathcal{I}_{m,n} = \frac{\partial \mu^\mathrm{T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n},$$ even on http://en.wikipedia.org/wiki/Fisher_...l_distribution. I am trying to come up with the derivation, but no luck so far. Does anyone have any ideas / hints / references on how to do this? Thank you.
 P: 5 Using matrix derivatives one has $$D_x(x^\mathrm{T} A x) = x^\mathrm{T}(A+A^\mathrm{T}),$$ from which it follows that $$D_{\theta} \log p(z ; \mu(\theta) , \Sigma) = (z-\mu(\theta))^\mathrm{T} \Sigma^{-1} D_{\theta} \mu(\theta).$$ For simplicity let's write $$D_{\theta} \mu(\theta) = H.$$ The FIM is then found as $$J = E\!\left[ ( D_{\theta} \log p(z ; \mu(\theta) , \Sigma))^\mathrm{T} \, D_{\theta} \log p(z ; \mu(\theta) , \Sigma)\right] = E\!\left[ H^\mathrm{T} \Sigma^{-1} (z - \mu(\theta)) (z - \mu(\theta))^\mathrm{T} \Sigma^{-1} H\right] = H^\mathrm{T} \Sigma^{-1} \Sigma \Sigma^{-1} H = H^\mathrm{T} \Sigma^{-1} H,$$ which is equivalent to the given formula. Notice that this formula is only valid as long as $\Sigma$ does not depend on $\theta$. I'm still struggling to find a derivation of the more general case where $\Sigma$ also depends on $\theta$. For some reason my tex code is not correctly parsed. I cannot understand why.

 P: 5 Actually the general proof can apparently be found in Porat & Friedlander, "Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-34, No. 1, Feb. 1986.

 P: 2,490
 Quote by edmundfo: ... = H^T R^{-1} H [\tex] ... For some reason my tex code is not correctly parsed. I cannot understand why.
For one thing, you're using the backslash [\tex] instead of the forward slash [/tex] at the end of your code.
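The identity above is easy to sanity-check numerically. The sketch below (my own illustration, not from the thread; the model $\mu(\theta) = H\theta$ with a fixed $\Sigma$ is an assumed example) averages the outer product of the score over Monte Carlo samples and compares it with the closed form $H^\mathrm{T} \Sigma^{-1} H$:

```python
import numpy as np

# Illustrative check: for a linear mean model mu(theta) = H @ theta with a
# fixed covariance Sigma, the Monte Carlo average of the outer product of
# the score should approach the closed form H^T Sigma^{-1} H.
rng = np.random.default_rng(0)

H = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, -1.0]])        # D_theta mu(theta): 3 observations, 2 parameters
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 1.5]])
Sigma_inv = np.linalg.inv(Sigma)
theta = np.array([0.7, -0.2])
mu = H @ theta

# Score of the Gaussian log-likelihood with respect to theta:
#   D_theta log p(z) = (z - mu)^T Sigma^{-1} H
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
scores = (samples - mu) @ Sigma_inv @ H          # shape (N, 2)

J_mc = scores.T @ scores / len(scores)           # E[score^T score], Monte Carlo
J_closed = H.T @ Sigma_inv @ H                   # the formula from the derivation

print(np.max(np.abs(J_mc - J_closed)))           # shrinks like O(1/sqrt(N))
```

The agreement improves as the sample count grows, which is consistent with the expectation step $E[(z-\mu)(z-\mu)^\mathrm{T}] = \Sigma$ used in the derivation.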
P: 3
 Quote by edmundfo Actually the general proof can apparently be found in Porat & Friedlander: Computation of the Exact Information Matrix of Gaussian Time Series with Stationary Random Components, IEEE Transactions on Acoustics, Speech and Signal Processing, Vol ASSP-34, No. 1, Feb. 1986.
Thank you for the answers; in the meantime I have found another reference, which is a direct derivation of the same result. For me this one seems easier to interpret:

Klein, A., and H. Neudecker, "A direct derivation of the exact Fisher information matrix of Gaussian vector state space models," Linear Algebra and its Applications 321, no. 1-3.
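For completeness, the general expression stated on the Wikipedia page linked above (and derived in the references in this thread) adds a trace term when $\Sigma$ also depends on $\theta$:

$$\mathcal{I}_{m,n} = \frac{\partial \mu^\mathrm{T}}{\partial \theta_m} \Sigma^{-1} \frac{\partial \mu}{\partial \theta_n} + \frac{1}{2} \operatorname{tr}\!\left( \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_m} \Sigma^{-1} \frac{\partial \Sigma}{\partial \theta_n} \right).$$

When $\Sigma$ is independent of $\theta$, the trace term vanishes and the formula reduces to the one discussed above.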
