I don't understand the following step regarding the [itex](i,j)^{th}[/itex] element of the Fisher Information Matrix, [itex]\textbf{J}[/itex]:

[tex]J_{ij}\triangleq\mathcal{E}\left\{ \frac{\partial}{\partial\theta_{i}}L_{\mathbf{x}}(\boldsymbol{\theta})\,\frac{\partial}{\partial\theta_{j}}L_{\mathbf{x}}(\boldsymbol{\theta})\right\} =-\mathcal{E}\left\{ \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}}L_{\mathbf{x}}(\boldsymbol{\theta})\right\}[/tex]

which is given as Eq. 8.26 on p. 926 of "Optimum Array Processing" by Harry van Trees. I don't know if the details matter, but [itex]L_{\mathbf{x}}[/itex] is the log-likelihood function, and he is looking at the problem of estimating a non-random real vector, [itex]\boldsymbol{\theta}[/itex], from discrete observations of a complex Gaussian random vector, [itex]\mathbf{x}[/itex].

Am I missing something obvious? I'm not very sharp on partial derivatives.
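For what it's worth, here is a sketch of the standard argument, assuming the usual regularity conditions that allow expectation and differentiation to be interchanged (van Trees assumes these hold). Writing [itex]p(\mathbf{x};\boldsymbol{\theta})[/itex] for the density, so that [itex]L_{\mathbf{x}}(\boldsymbol{\theta})=\ln p(\mathbf{x};\boldsymbol{\theta})[/itex], the product rule gives

[tex]\frac{\partial^{2}L_{\mathbf{x}}}{\partial\theta_{i}\,\partial\theta_{j}}=\frac{\partial}{\partial\theta_{i}}\left(\frac{1}{p}\frac{\partial p}{\partial\theta_{j}}\right)=\frac{1}{p}\frac{\partial^{2}p}{\partial\theta_{i}\,\partial\theta_{j}}-\frac{\partial L_{\mathbf{x}}}{\partial\theta_{i}}\frac{\partial L_{\mathbf{x}}}{\partial\theta_{j}}[/tex]

Taking expectations, the first term on the right vanishes because

[tex]\mathcal{E}\left\{\frac{1}{p}\frac{\partial^{2}p}{\partial\theta_{i}\,\partial\theta_{j}}\right\}=\int\frac{\partial^{2}p}{\partial\theta_{i}\,\partial\theta_{j}}\,d\mathbf{x}=\frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}}\int p\,d\mathbf{x}=0,[/tex]

since [itex]\int p\,d\mathbf{x}=1[/itex] for every [itex]\boldsymbol{\theta}[/itex]. What remains is exactly the identity in question.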

**Physics Forums | Science Articles, Homework Help, Discussion**


# Fisher Information Matrix: Equivalent Expressions