weetabixharry
I don't understand the following step regarding the [itex](i,j)^{th}[/itex] element of the Fisher Information Matrix, [itex]\textbf{J}[/itex]:
[tex]J_{ij} \triangleq \mathcal{E}\left\{ \frac{\partial}{\partial\theta_{i}} L_{\mathbf{x}}(\boldsymbol{\theta}) \, \frac{\partial}{\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\} = -\mathcal{E}\left\{ \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\}[/tex]
which is Eq. 8.26 on p. 926 of "Optimum Array Processing" by Harry L. Van Trees. I don't know if the details matter, but [itex]L_{\mathbf{x}}[/itex] is the log-likelihood function, and he is looking at the problem of estimating the non-random real vector [itex]\boldsymbol{\theta}[/itex] from discrete observations of a complex Gaussian random vector [itex]\mathbf{x}[/itex].
Am I missing something obvious? I'm not very sharp on partial derivatives.
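For what it's worth, I did convince myself numerically that the identity holds, at least in a toy scalar case. Below is a quick Monte Carlo sketch of my own, assuming [itex]x \sim \mathcal{N}(\theta, \sigma^2)[/itex] with known [itex]\sigma[/itex] (not the complex Gaussian array model from the book), where the score is [itex](x-\theta)/\sigma^2[/itex] and the second derivative is the constant [itex]-1/\sigma^2[/itex]:
[code]
import numpy as np

# Toy check of E{(dL/dtheta)^2} = -E{d^2L/dtheta^2} for x ~ N(theta, sigma^2)
# with known sigma, estimating the scalar mean theta. (My own toy example,
# not the array-processing model in Van Trees.)
rng = np.random.default_rng(0)
theta, sigma = 2.0, 1.5
x = rng.normal(theta, sigma, size=1_000_000)

# Score: dL/dtheta = (x - theta) / sigma^2
score = (x - theta) / sigma**2

# Second derivative: d^2L/dtheta^2 = -1 / sigma^2 (constant for this model)
second_deriv = -1.0 / sigma**2

print("E[(dL/dtheta)^2]  =", np.mean(score**2))  # ~ 1/sigma^2 (Monte Carlo)
print("-E[d^2L/dtheta^2] =", -second_deriv)      # exactly 1/sigma^2
[/code]
Both expectations come out to [itex]1/\sigma^2[/itex], the Fisher information for this model, so I believe the result; it's the proof of the general step that I can't follow.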