The tie to Levenberg-Marquardt is as follows. The (square) matrix [itex] J^T J [/itex] can often be ill-conditioned at initial guesses of the function coefficients, making the equation [itex] J^T J z = J^T (y-f) [/itex] difficult if not impossible to solve. The (simpler) version of Levenberg-Marquardt modifies the problem by adding a small number [itex] \lambda [/itex] to the *diagonal* of [itex] J^T J [/itex], resulting in the following modified equation:

[tex] (J^T J + \lambda I) z = J^T (y-f) [/tex]

which is more numerically stable the larger [itex] \lambda [/itex] is. The trick is to make [itex] \lambda [/itex] only as large as needed to solve the problem, then gradually reduce its magnitude to zero as the solution progresses.
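Here's a minimal sketch of that damped solve in NumPy (my choice of language — nothing in the thread dictates it). The matrices J and the residual (y-f) are just random placeholders standing in for a real Jacobian and residual:

```python
import numpy as np

# Placeholder data standing in for a real Jacobian J and residual (y - f)
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 3))   # Jacobian, m x n
r = rng.standard_normal(6)        # residual vector (y - f)
lam = 1e-3                        # damping parameter lambda

# Solve the damped normal equations (J^T J + lambda*I) z = J^T (y - f)
A = J.T @ J + lam * np.eye(J.shape[1])
z = np.linalg.solve(A, J.T @ r)

# Verify z satisfies the damped system
print(np.allclose(A @ z, J.T @ r))  # True
```

In a real implementation you would recompute J and (y-f) at each iteration and adjust lam up or down depending on whether the step reduced the residual.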

One other note: the matrix [itex] J^T J [/itex] doesn't need to be computed at all if QR factorization is used. All that's needed is to factor [itex] J = QR [/itex] and then solve the triangular system [itex] R z = Q^T (y-f) [/itex]. See the link below for more explanation.

http://www.alkires.com/teaching/ee10...torization.htm
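A quick NumPy sketch of the undamped QR route (again, the language choice and placeholder data are mine, not from the thread):

```python
import numpy as np

# Placeholder Jacobian and residual
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 3))
r = rng.standard_normal(6)

# Reduced QR factorization: J = Q R, with R (3x3) upper triangular
Q, R = np.linalg.qr(J)

# Solve the triangular system R z = Q^T (y - f)
# (scipy.linalg.solve_triangular would exploit the structure of R)
z = np.linalg.solve(R, Q.T @ r)

# Same z as the normal equations J^T J z = J^T (y - f), but J^T J was never formed
print(np.allclose(J.T @ J @ z, J.T @ r))  # True
```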

The next question might be how to add [itex] \lambda [/itex] to the diagonals of [itex]J^T J[/itex] if [itex]J^T J[/itex] is never computed? Actually, it is done by augmenting J and (y-f) as follows:

[tex] J^* = \begin{bmatrix} J \\ \sqrt{\lambda} I \end{bmatrix} \qquad \qquad (y-f)^* = \begin{bmatrix} (y-f) \\ 0 \end{bmatrix} [/tex]

Then factor [itex] J^* = QR [/itex] and solve [itex] R z = Q^T (y-f)^* [/itex].
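Putting the augmentation and the QR solve together as a NumPy sketch (placeholder data as before). The check at the end confirms that the augmented least-squares solution matches the damped normal equations, since [itex] (J^*)^T J^* = J^T J + \lambda I [/itex] and [itex] (J^*)^T (y-f)^* = J^T (y-f) [/itex]:

```python
import numpy as np

# Placeholder Jacobian and residual
rng = np.random.default_rng(2)
m, n = 6, 3
J = rng.standard_normal((m, n))
r = rng.standard_normal(m)
lam = 1e-3

# Augment J with sqrt(lambda)*I and (y - f) with zeros
J_star = np.vstack([J, np.sqrt(lam) * np.eye(n)])
r_star = np.concatenate([r, np.zeros(n)])

# Factor J* = QR, then solve R z = Q^T (y - f)*
Q, R = np.linalg.qr(J_star)
z = np.linalg.solve(R, Q.T @ r_star)

# z satisfies (J^T J + lambda*I) z = J^T (y - f), without ever forming J^T J
print(np.allclose((J.T @ J + lam * np.eye(n)) @ z, J.T @ r))  # True
```

A nice side effect of the augmentation: the [itex] \sqrt{\lambda} I [/itex] rows guarantee [itex] J^* [/itex] has full column rank even when J itself is rank-deficient, so R is always invertible.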

This is probably more than you wanted to know........