I am confused about calculating errors. I have learned that if you take the variance-covariance matrix $\Sigma_{ij}$ of a fit of a function $f(x,p)$ to data, with parameters $p_i$ (for example using Levenberg-Marquardt), then the one-sigma error interval for $p_i$ is
$$\sigma_{p_i}=\sqrt{\Sigma_{ii}}.$$
I only understand this if there are no covariance terms. Why do we do this?

I would have thought a better way to find the error would be to diagonalize $\Sigma$; say the diagonal form is $\Xi$, with normalized eigenvectors $(\vec{v})_k$. Then we would have independent variables with a Gaussian distribution, and one could calculate the error on $p_i$ using error propagation, i.e.
$$\sigma_{p_i} = \sqrt{\sum_k \Xi_{kk}\left\langle(\vec{v})_k\mid l_i \right\rangle^2},$$
where $\left\langle(\vec{v})_k\mid l_i \right\rangle$ is the $i^\text{th}$ component of $(\vec{v})_k$. If this is permissible, is there a name for it?
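
For concreteness, here is a minimal NumPy sketch of the two computations I mean (the $2\times 2$ covariance matrix is made up purely for illustration):

```python
import numpy as np

# Illustrative 2x2 variance-covariance matrix from a fit (values are made up).
Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])

# Standard prescription: sigma_{p_i} = sqrt(Sigma_ii).
sigma_standard = np.sqrt(np.diag(Sigma))

# Proposed alternative: diagonalize Sigma, then propagate the
# eigen-variances Xi_kk back onto each parameter direction.
Xi, V = np.linalg.eigh(Sigma)  # Xi[k]: eigenvalue, V[:, k]: eigenvector v_k
# sigma_{p_i}^2 = sum_k Xi_kk * <v_k | l_i>^2, with <v_k | l_i> = V[i, k]
sigma_propagated = np.sqrt((V**2) @ Xi)

print(sigma_standard)    # errors read off the diagonal of Sigma
print(sigma_propagated)  # errors from diagonalization + error propagation
```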