weetabixharry
I don't understand the following step regarding the (i,j)^{th} element of the Fisher Information Matrix, \mathbf{J}:

J_{ij} \triangleq \mathcal{E}\left\{ \frac{\partial}{\partial\theta_{i}} L_{\mathbf{x}}(\boldsymbol{\theta}) \, \frac{\partial}{\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\} = -\mathcal{E}\left\{ \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\}
which is given as Eq. (8.26) on p. 926 of "Optimum Array Processing" by Harry L. Van Trees. I don't know if the details matter, but L_{\mathbf{x}}(\boldsymbol{\theta}) is the log-likelihood function, and he is looking at the problem of estimating the non-random real vector \boldsymbol{\theta} from discrete observations of a complex Gaussian random vector \mathbf{x}.
Am I missing something obvious? I'm not very sharp on partial derivatives.
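For what it's worth, my guess is that this is the standard score/information identity, obtained by differentiating the normalization condition of the density twice. A sketch, writing p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta}) for the density (my notation, not necessarily Van Trees'), so that L_{\mathbf{x}}(\boldsymbol{\theta}) = \ln p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta}) and hence \frac{\partial p_{\mathbf{x}}}{\partial\theta_{j}} = p_{\mathbf{x}} \frac{\partial L_{\mathbf{x}}}{\partial\theta_{j}}, and assuming the usual regularity conditions (differentiation under the integral sign is permitted):

\begin{align}
% Normalization of the density:
\int p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta})\, d\mathbf{x} &= 1 \\
% Differentiate w.r.t. \theta_j and use \partial p/\partial\theta_j = p \, \partial L/\partial\theta_j:
\int \frac{\partial L_{\mathbf{x}}(\boldsymbol{\theta})}{\partial\theta_{j}}\, p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta})\, d\mathbf{x} &= 0 \\
% Differentiate again w.r.t. \theta_i, applying the product rule to the integrand:
\int \frac{\partial^{2} L_{\mathbf{x}}(\boldsymbol{\theta})}{\partial\theta_{i}\,\partial\theta_{j}}\, p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta})\, d\mathbf{x} + \int \frac{\partial L_{\mathbf{x}}(\boldsymbol{\theta})}{\partial\theta_{i}}\, \frac{\partial L_{\mathbf{x}}(\boldsymbol{\theta})}{\partial\theta_{j}}\, p_{\mathbf{x}}(\mathbf{x};\boldsymbol{\theta})\, d\mathbf{x} &= 0
\end{align}

The two integrals in the last line are \mathcal{E}\left\{ \frac{\partial^{2}}{\partial\theta_{i}\,\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\} and \mathcal{E}\left\{ \frac{\partial}{\partial\theta_{i}} L_{\mathbf{x}}(\boldsymbol{\theta}) \frac{\partial}{\partial\theta_{j}} L_{\mathbf{x}}(\boldsymbol{\theta}) \right\}, so rearranging gives (8.26). Is that the argument Van Trees intends?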