Have you tried using ridge regression to solve for a singular Jacobian matrix?

AI Thread Summary
The discussion revolves around using ridge regression to address issues with a singular Jacobian matrix in a nonlinear system of equations solved via the Newton-Raphson method. The primary challenge is that the Jacobian matrix (J) is singular, preventing the calculation of its inverse. Suggestions include LU decomposition and the potential use of singular value decomposition (SVD) for finding a least norm solution. Ridge regression is proposed as a technique to modify the singular matrix by adding a small value to its diagonal, which may help in achieving convergence, although results can be inconsistent. Ultimately, the effectiveness of these methods depends on the specific characteristics of the system being analyzed.
top40
I have a nonlinear system of equations and am using the Newton-Raphson method to solve it. The method is basically as follows:

J*dx = -F, where J is the Jacobian matrix, dx is a vector whose elements are the changes to x, and F is a vector whose elements are the function values at x.

To find dx, I compute the inverse of J and multiply both sides by it, yielding dx = J^-1 * -F.

The only problem is that I keep getting a singular matrix J, so I can't find the inverse. Any suggestions?
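For reference, here is roughly what my current setup looks like in NumPy (just a sketch; f and jacobian are placeholders standing in for my actual system):

import numpy as np

def newton_raphson(f, jacobian, x0, tol=1e-10, max_iter=50):
    # f(x) returns the residual vector F, jacobian(x) returns the matrix J
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        F = f(x)
        J = jacobian(x)
        # This is where it breaks down: inv() raises LinAlgError when J is singular
        dx = np.linalg.inv(J) @ (-F)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x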
 
I can't answer directly, but it looks like you've run into the vector analog of applying this method at a point where the derivative is zero (in one dimension).
 
I just read wikipedia, and this is what it had to say:

"Rather than actually computing the inverse of this matrix [they are talking about J], one can save time by solving the system of linear equations

J * dx = -F

for the unknown dx."

So then I did more reading and found that a singular matrix CAN be LU decomposed: "If the matrix is singular, then an LU factorization may still exist. In fact, a square matrix of rank k has an LU factorization if the first k leading principal minors are non-zero, although the converse is not true."

So basically, instead of finding the inverse of J and multiplying both sides by J^-1, maybe I need to decompose J and use back substitution. However, with my luck, I'll bet that some of the first k leading principal minors will be 0, making the matrix undecomposable.
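If I go the LU route, I'm picturing something like this with SciPy (just a sketch; the 2x2 J and F are made-up stand-ins for my real Jacobian and residual):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Made-up nonsingular example standing in for the Jacobian and residual
J = np.array([[4.0, 3.0],
              [6.0, 3.0]])
F = np.array([1.0, 2.0])

# Factor once, then solve J*dx = -F by forward/back substitution
lu, piv = lu_factor(J)          # warns/fails if J is effectively singular
dx = lu_solve((lu, piv), -F)
print(dx)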
 
Assuming J is properly calculated (you may want to verify), there is either no solution or an infinite number of solutions. I doubt LU decomposition will get around the problem. In the case of infinitely many solutions, I believe SVD (singular value decomposition) can be used to find a 'least norm' solution, though I don't know the details of how. It could also be that you need a different initial guess as the starting point for the solution. These kinds of problems can be sensitive to the starting point.

If you want to post (or PM) the equations and they aren't too nasty, I'd be happy to take a look at them.
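As a rough illustration of the least-norm idea (a sketch with made-up numbers): NumPy's lstsq uses the SVD under the hood and returns the minimum-norm solution when the matrix is rank-deficient.

import numpy as np

# Made-up singular system standing in for J*dx = -F
J = np.array([[1.0, 2.0],
              [3.0, 6.0]])      # rank 1
F = np.array([-4.0, -12.0])

dx, residuals, rank, sing_vals = np.linalg.lstsq(J, -F, rcond=None)
print(rank)   # 1, so J is rank-deficient
print(dx)     # minimum-norm solution, roughly [0.8, 1.6]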
 
hotvette said:
Assuming J is properly calculated (you may want to verify), there is either no solution or an infinite number of solutions.

Why is that?
 
A system of linear equations Ax = b either has a unique solution, no solution, or an infinite number of solutions depending on the rank of [A] compared to the rank of the augmented matrix [A b] and the number of unknowns. The following link might be helpful.

http://www.facstaff.bucknell.edu/maneval/help211/linalg.html

Here are some simple examples to illustrate.

1. The following system has a unique solution of x=1, y=2 (i.e. the matrix A has an inverse):

x + 2y = 5
2x + 3y = 8


2. The following system is singular (i.e. the matrix A has no inverse):

x + 2y = 5
2x + 4y = 12

and has no solution because the 2nd equation simplifies to x + 2y = 6 which clearly conflicts with the first equation x + 2y = 5.


3. The following system is also singular:

x + 2y = 5
2x + 4y = 10

Both equations simplify to x + 2y = 5 and there are infinitely many solutions (for any arbitrary value of x you can determine a corresponding value of y). Even though there are an infinite number of solutions, there is one solution (x=1, y=2) that results in a minimum norm (i.e. minimum x^2 + y^2).

Thus, if J is singular, there is either no solution or infinitely many solutions.
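If it helps, here's a small NumPy sketch that classifies the three examples above by comparing the rank of A with the rank of the augmented matrix [A b], and pulls out the minimum-norm solution for the last one via the pseudoinverse:

import numpy as np

# Classify each example by comparing rank(A) with rank([A | b])
examples = {
    "unique":   (np.array([[1.0, 2.0], [2.0, 3.0]]), np.array([5.0, 8.0])),
    "none":     (np.array([[1.0, 2.0], [2.0, 4.0]]), np.array([5.0, 12.0])),
    "infinite": (np.array([[1.0, 2.0], [2.0, 4.0]]), np.array([5.0, 10.0])),
}

for name, (A, b) in examples.items():
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    print(name, rank_A, rank_Ab)
    # rank_A == rank_Ab == n  -> unique solution
    # rank_A <  rank_Ab       -> no solution
    # rank_A == rank_Ab <  n  -> infinitely many solutions

# For the last case, the pseudoinverse (built from the SVD) gives the
# minimum-norm solution:
A, b = examples["infinite"]
print(np.linalg.pinv(A) @ b)   # approximately [1., 2.]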
 
So a matrix that 1) can be LU decomposed and 2) does not have an inverse does not have a unique solution?
 
top40 said:
So a matrix that 1) can be LU decomposed and 2) does not have an inverse does not have a unique solution?

Correct. In my previous post, the LU decomposition exists for each example but that doesn't help in the last two examples where the matrix is singular.
 
There is something you can try that may or may not be useful: ridge regression (also called Tikhonov regularization), which adds a small numerical quantity to the diagonal of the singular matrix. If a poor choice of starting point is the reason J is singular, it might help the iteration eventually converge to a point where you can reduce the added quantity to zero and end up with a valid solution. But it may also produce results that make no sense.

In the case of problem #3 in my previous post, adding 0.001 to the diagonal produces the result x=0.9998, y=1.9996, which is a solution that makes sense. However, for problem #2, the result is x=-798.83, y=402.32 which doesn't make any sense.
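Something along these lines reproduces those numbers (just a NumPy sketch, adding 0.001 to the diagonal of the singular matrix from problems #2 and #3):

import numpy as np

lam = 0.001                                # small ridge term added to the diagonal
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])                 # singular matrix from problems #2 and #3

for b in (np.array([5.0, 10.0]),           # problem #3: infinitely many solutions
          np.array([5.0, 12.0])):          # problem #2: no solution
    x = np.linalg.solve(A + lam * np.eye(2), b)
    print(x)
# prints roughly [0.9998, 1.9996] and then [-798.8, 402.3]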
 