Fisher Information Matrix: Equivalent Expressions

In summary, the original poster is estimating a non-random real parameter vector, [itex]\boldsymbol{\theta}[/itex], from discrete observations of a complex Gaussian random vector, [itex]\textbf{x}[/itex], and asks why two standard expressions for the Fisher information matrix, [itex]\textbf{J}[/itex], are equivalent.
  • #1
weetabixharry
I don't understand the following step regarding the [itex](i,j)^{th}[/itex] element of the Fisher Information Matrix, [itex]\textbf{J}[/itex]:
[tex]J_{ij}\triangleq\mathcal{E}\left\{ \frac{\partial}{\partial\theta_{i}}L_{\textbf{x}}(\boldsymbol{\theta})\,\frac{\partial}{\partial\theta_{j}}L_{\textbf{x}}(\boldsymbol{\theta})\right\} =-\mathcal{E}\left\{ \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}}L_{\textbf{x}}(\boldsymbol{\theta})\right\}[/tex]

which is given as Eq. 8.26 (p. 926) of "Optimum Array Processing" by Harry L. Van Trees. I don't know if the details matter, but [itex]L_{\textbf{x}}[/itex] is the log-likelihood function and he is looking at the problem of estimating the non-random real vector, [itex]\boldsymbol{\theta}[/itex], from discrete observations of a complex Gaussian random vector, [itex]\textbf{x}[/itex].

Am I missing something obvious? I'm not very sharp on partial derivatives.
 
  • #2
[itex]L_{\textbf{x}}[/itex] is the log of the pdf. Write down the explicit expressions for the expectations in terms of [itex]L_{\textbf{x}}[/itex] and its derivatives and use integration by parts.
 
  • #3
DrDu said:
Write down the explicit expressions for the expectations in terms of [itex]L_{\textbf{x}}[/itex] and its derivatives and use integration by parts.

The expressions are rather intimidating functions of complex matrices. I don't think I want to try integration by parts. I tried just evaluating the derivatives to see if they would come out the same, but I ran out of paper before I had even scratched the surface.

If the result is specific to this problem, then I would be willing to take it at face value. It's just frustrating that I keep seeing the result stated without proof. It's as though it's too obvious to warrant a formal proof.
 
  • #6
Six months is completely unacceptable. As I said, this topic is covered in every introductory statistics textbook.
Otherwise, it is a good idea to search for lecture notes covering the problem: I searched for "fisher information lecture notes" and almost every set of notes contained a proof of the statement, e.g. the first one I got:
http://ocw.mit.edu/courses/mathemat...ications-fall-2006/lecture-notes/lecture3.pdf
 
  • #7
DrDu said:
this topic is covered in every introductory statistics textbook.
This is obviously incorrect.

DrDu said:
Otherwise, it is a good idea to search for lecture notes covering the problem
This is no longer necessary. The topic is resolved.
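
For anyone who finds this thread later, the proof turns out to be short. Here is a sketch, assuming [itex]L_{\textbf{x}}(\boldsymbol{\theta})=\ln p(\textbf{x};\boldsymbol{\theta})[/itex] and that differentiation under the integral sign is permitted (the usual regularity conditions). Since the pdf integrates to one,

[tex]\int p(\textbf{x};\boldsymbol{\theta})\,d\textbf{x}=1\quad\Rightarrow\quad\int\frac{\partial p}{\partial\theta_{j}}\,d\textbf{x}=\int p\,\frac{\partial L_{\textbf{x}}}{\partial\theta_{j}}\,d\textbf{x}=0.[/tex]

Differentiating once more, now with respect to [itex]\theta_{i}[/itex], and using [itex]\frac{\partial p}{\partial\theta_{i}}=p\,\frac{\partial L_{\textbf{x}}}{\partial\theta_{i}}[/itex] again:

[tex]\int\left(p\,\frac{\partial L_{\textbf{x}}}{\partial\theta_{i}}\frac{\partial L_{\textbf{x}}}{\partial\theta_{j}}+p\,\frac{\partial^{2}L_{\textbf{x}}}{\partial\theta_{i}\,\partial\theta_{j}}\right)d\textbf{x}=0\quad\Rightarrow\quad\mathcal{E}\left\{\frac{\partial L_{\textbf{x}}}{\partial\theta_{i}}\frac{\partial L_{\textbf{x}}}{\partial\theta_{j}}\right\}=-\mathcal{E}\left\{\frac{\partial^{2}L_{\textbf{x}}}{\partial\theta_{i}\,\partial\theta_{j}}\right\}.[/tex]

Nothing in this argument is specific to the complex Gaussian case; it holds for any sufficiently smooth pdf.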
 

What is the Fisher Information Matrix?

The Fisher Information Matrix is a mathematical tool used in statistics to measure the amount of information that a set of data provides about the parameters of a statistical model. It is often used to assess the precision of estimators in statistical inference.

What are equivalent expressions for the Fisher Information Matrix?

There are several ways to express the Fisher Information Matrix: as the negative of the expected Hessian matrix of the log-likelihood function, as the covariance matrix of the score function (the gradient of the log-likelihood), or as the expected value of the outer product of the score with itself. Under standard regularity conditions, all of these expressions are mathematically equivalent, as written out below.
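
In symbols (a standard statement, with [itex]\ell(\boldsymbol{\theta})=\ln p(\textbf{x};\boldsymbol{\theta})[/itex] the log-likelihood and [itex]\nabla[/itex] the gradient with respect to [itex]\boldsymbol{\theta}[/itex]):

[tex]\textbf{J}(\boldsymbol{\theta})=\mathcal{E}\left\{\nabla\ell\,(\nabla\ell)^{T}\right\}=\mathrm{Cov}\left\{\nabla\ell\right\}=-\mathcal{E}\left\{\nabla^{2}\ell\right\},[/tex]

where the second equality holds because the score has zero mean, [itex]\mathcal{E}\{\nabla\ell\}=\mathbf{0}[/itex], under the same regularity conditions.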

What is the importance of the Fisher Information Matrix in statistical analysis?

The Fisher Information Matrix plays a crucial role in statistical analysis as it provides a way to quantify the amount of information contained in a set of data. It is used to assess the efficiency of estimators, to derive standard errors and confidence intervals, and to make comparisons between different statistical models.
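
For instance, under the same regularity conditions, the Cramér–Rao bound states that any unbiased estimator [itex]\hat{\boldsymbol{\theta}}[/itex] satisfies

[tex]\mathrm{Cov}\{\hat{\boldsymbol{\theta}}\}\succeq\textbf{J}^{-1}(\boldsymbol{\theta}),[/tex]

so the square roots of the diagonal entries of [itex]\textbf{J}^{-1}[/itex] serve as lower bounds on the standard errors of the individual parameter estimates.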

How is the Fisher Information Matrix calculated?

The Fisher Information Matrix is typically calculated by taking expected values of derivatives of the log-likelihood function, either as the outer product of the score or as the negative Hessian. When these expectations are intractable in closed form, the matrix can be estimated numerically, for example by Monte Carlo simulation, as sketched below.
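
As a minimal illustration (a sketch, not tied to the array-processing problem above; all variable names are illustrative), the following Python snippet estimates the scalar Fisher information for the mean of a Gaussian with known variance by Monte Carlo, where the analytic answer is [itex]1/\sigma^{2}[/itex] per observation:

[code]
import numpy as np

# Model: x ~ N(theta, sigma^2) with known sigma.
# Score: d/dtheta ln p(x; theta) = (x - theta) / sigma^2
# Analytic Fisher information per observation: 1 / sigma^2
theta, sigma = 2.0, 1.5
rng = np.random.default_rng(0)
x = rng.normal(theta, sigma, size=200_000)

score = (x - theta) / sigma**2

# Outer-product (squared-score) form, estimated by Monte Carlo.
J_score = np.mean(score**2)

# Negative-Hessian form: d^2/dtheta^2 ln p = -1/sigma^2 is constant here,
# so its expectation is exact.
J_hessian = 1.0 / sigma**2

print(f"Monte Carlo squared-score estimate: {J_score:.4f}")
print(f"Analytic negative-Hessian value:    {J_hessian:.4f}")
[/code]

The squared-score average converges to the analytic value as the number of draws grows, illustrating on an easily checked case that the outer-product and negative-Hessian forms agree.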

What are some limitations of the Fisher Information Matrix?

While the Fisher Information Matrix is a powerful tool in statistical analysis, it has some limitations. It assumes that the data follow a specific parametric model, and it must be evaluated at a particular parameter value, typically the true or estimated value. The classical theory also assumes that the observations are independent and identically distributed, which may not hold for real-world data sets.
