  • #1
top40
I have a nonlinear system of equations and am using the Newton-Raphson method to solve it. The method is basically as follows:

J*dx = -F, where J is the Jacobian matrix, dx is a vector whose elements are the changes to x, and F is a vector whose elements are the function values at x.

To find dx, I find the inverse of J and multiply it by both sides, yielding dx = J^-1 * -F

The only problem is that I keep getting a singular matrix J, so I can't find the inverse. Any suggestions?
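
For reference, one common way to code the basic iteration in Python/numpy looks roughly like this (the two-equation system below is made up for illustration; it is not my actual system). Note that np.linalg.solve raises a LinAlgError exactly when J is singular, which is the situation I'm hitting:

Code:
import numpy as np

# Illustrative system (not the actual equations):
#   f1(x, y) = x^2 + y^2 - 4 = 0
#   f2(x, y) = x*y - 1 = 0
def F(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x*y - 1.0])

def J(v):
    x, y = v
    return np.array([[2.0*x, 2.0*y],
                     [y,     x    ]])

v = np.array([2.0, 0.5])                 # initial guess
for _ in range(20):
    dx = np.linalg.solve(J(v), -F(v))    # solve J*dx = -F; raises LinAlgError if J is singular
    v = v + dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(v)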
 
  • #2
I can't answer directly, but it looks like you've encountered the vector analog of applying this method at a point where the derivative = 0 (in one dimension).
 
  • #3
I just read wikipedia, and this is what it had to say:

"Rather than actually computing the inverse of this matrix [they are talking about J], one can save time by solving the system of linear equations

J * dx = -F

for the unknown dx."

So then I did more reading and found that a singular matrix CAN be LU decomposed: "If the matrix is singular, then an LU factorization may still exist. In fact, a square matrix of rank k has an LU factorization if the first k leading principal minors are non-zero, although the converse is not true."

So basically, instead of finding the inverse of J and multiplying both sides by J^-1, maybe I need to decompose J and use back substitution. However, with my luck, I'll bet that some of the first k leading principal minors will be 0, making the matrix undecomposable.
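
If it helps, here is roughly what I mean in code (a sketch assuming SciPy is available; the matrix and right-hand side are just placeholders standing in for J and -F at the current iterate):

Code:
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Solve J*dx = -F via an LU factorization instead of forming J^-1 explicitly.
J_val = np.array([[4.0, 1.0],
                  [0.5, 2.0]])     # placeholder Jacobian
F_val = np.array([0.25, 0.0])      # placeholder function values

lu, piv = lu_factor(J_val)         # may warn or produce useless factors if J_val is singular
dx = lu_solve((lu, piv), -F_val)
print(dx)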
 
  • #4
Assuming J is properly calculated (you may want to verify), then there is either no solution or an infinite number of solutions. I doubt LU will get around the problem. In the case of infinitely many solutions, I believe SVD (singular value decomposition) can be used to find a 'least norm' solution, though I don't know the details of how. It could also be that you need a different initial guess as a starting point; these kinds of problems can be sensitive to the starting point.
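
Roughly, the least-norm idea looks like this in numpy (just a sketch with made-up numbers standing in for J and -F; small singular values are dropped before 'inverting'):

Code:
import numpy as np

J_val = np.array([[1.0, 2.0],
                  [2.0, 4.0]])     # a singular matrix, purely for illustration
rhs   = np.array([5.0, 10.0])      # stands in for -F

U, s, Vt = np.linalg.svd(J_val)
tol   = max(J_val.shape) * np.finfo(float).eps * s[0]
s_inv = np.where(s > tol, 1.0 / s, 0.0)       # invert only the significant singular values
dx = Vt.T @ (s_inv * (U.T @ rhs))
print(dx)                                     # -> approximately [1., 2.], the least-norm solution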

If you want to post (or PM) the equations and they aren't too nasty, I'd be happy to take a look at them.
 
  • #5
hotvette said:
Assuming J is properly calculated (you may want to verify) then there is either no solution or an infinite # of solutions.

Why is that?
 
  • #6
A system of linear equations Ax = b either has a unique solution, no solution, or an infinite number of solutions depending on the rank of [A] compared to the rank of the augmented matrix [A b] and the number of unknowns. The following link might be helpful.

http://www.facstaff.bucknell.edu/maneval/help211/linalg.html

Here are some simple examples to illustrate.

1. The following system has a unique solution of x=1, y=2 (i.e. the matrix A has an inverse):

x + 2y = 5
2x + 3y = 8


2. The following system is singular (i.e. the matrix A has no inverse):

x + 2y = 5
2x + 4y = 12

and has no solution because the 2nd equation simplifies to x + 2y = 6 which clearly conflicts with the first equation x + 2y = 5.


3. The following system is also singular:

x + 2y = 5
2x + 4y = 10

Both equations simplify to x + 2y = 5 and there are infinitely many solutions (for any arbitrary value for x you can determine a corresponding value for y). Even though there are an infinite number of solutions, there is one solution (x=1, y=2) that results in a minimum norm (i.e. minimum x^2 + y^2).

Thus, if J is singular, there is either no solution or infinitely many solutions.
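
To check these cases numerically, here is a small numpy sketch using the matrices above (matrix_rank and lstsq are just one convenient way to do it; lstsq returns the minimum-norm least-squares answer when the matrix is rank deficient):

Code:
import numpy as np

A  = np.array([[1.0, 2.0], [2.0, 4.0]])   # singular coefficient matrix from examples 2 and 3
b2 = np.array([5.0, 12.0])                # example 2: no solution
b3 = np.array([5.0, 10.0])                # example 3: infinitely many solutions

for b in (b2, b3):
    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(rA, rAb, x)

# Example 2: rank(A)=1 < rank([A b])=2, inconsistent, so x is only a least-squares fit.
# Example 3: rank(A)=1 = rank([A b])=1, and x comes out near [1, 2], the minimum-norm solution.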
 
  • #7
So a matrix that 1) can be LU decomposed and 2) does not have an inverse does not have a unique solution?
 
  • #8
top40 said:
So a matrix that 1) can be LU decomposed and 2) does not have an inverse does not have a unique solution?

Correct. In my previous post, the LU decomposition exists for each example but that doesn't help in the last two examples where the matrix is singular.
 
  • #9
There is something you can try that may or may not be useful. It's called ridge regression (also known as Tikhonov regularization): you add a small numerical quantity to the diagonal of the singular matrix. If a poor choice of starting point is the reason J is singular, it might help the iteration eventually converge to a point where you can reduce the added quantity to zero and end up with a valid solution. But it may also produce results that make no sense.

In the case of example 3 in my previous post, adding 0.001 to the diagonal produces the result x=0.9998, y=1.9996, which is a solution that makes sense. However, for example 2, the result is x=-798.83, y=402.32, which doesn't make any sense.
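
In code, the diagonal-loading idea is something like this rough numpy sketch (it just reproduces the two results above for the 2x2 examples; in the Newton context A would be J and b would be -F):

Code:
import numpy as np

A   = np.array([[1.0, 2.0], [2.0, 4.0]])  # singular matrix from examples 2 and 3
b2  = np.array([5.0, 12.0])               # example 2 right-hand side
b3  = np.array([5.0, 10.0])               # example 3 right-hand side
eps = 0.001

A_reg = A + eps * np.eye(2)               # add a small quantity to the diagonal
print(np.linalg.solve(A_reg, b3))         # -> approx [0.9998, 1.9996], a sensible answer
print(np.linalg.solve(A_reg, b2))         # -> approx [-798.8, 402.3], nonsense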
 

1. What is the Newton-Raphson Method?

The Newton-Raphson Method is an algorithm used to find the roots of a given equation. It is also known as Newton's method or Newton's iterative method. It is a powerful numerical method that uses derivatives to find successively better approximations to the roots of a given equation.

2. How does the Newton-Raphson Method work?

The method works by starting with an initial guess for the root of the equation and then using the derivative of the function at that point to find the slope of the tangent line. The point where this tangent line intersects the x-axis is taken as the next approximation for the root. This process is repeated until a desired level of accuracy is achieved.
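
As a rough illustration in Python (the function f(x) = x^2 - 2 below is just an example; its positive root is sqrt(2)):

Code:
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    # Scalar Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # breaks down if fprime(x) == 0 (the scalar analog of a singular J)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0))   # -> 1.4142135623...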

3. What are the advantages of using the Newton-Raphson Method?

The method is very efficient and can converge to a root quickly (quadratically near a simple root), especially when the initial guess is close to the actual root. To find multiple roots of an equation, the iteration can be restarted from different initial guesses.

4. What are the limitations of the Newton-Raphson Method?

The method may fail to converge, or may converge to the wrong root, if the initial guess is not close enough to the actual root or if the function has multiple roots in the same region. It also breaks down when the derivative (or, for systems, the Jacobian) is zero or singular at an iterate, as in the thread above, and it requires the calculation of derivatives, which can be computationally expensive.
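
For example, the following sketch shows a well-known textbook case, f(x) = x^3 - 2x + 2 started at x0 = 0, where the iterates simply cycle between 0 and 1 and never converge:

Code:
f      = lambda x: x**3 - 2.0*x + 2.0
fprime = lambda x: 3.0*x**2 - 2.0

x = 0.0
for i in range(6):
    x = x - f(x) / fprime(x)
    print(i, x)        # alternates 1.0, 0.0, 1.0, 0.0, ...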

5. In what fields is the Newton-Raphson Method commonly used?

The method is commonly used in various fields of science and engineering, such as physics, mathematics, and economics. It is particularly useful in solving optimization problems and in finding the roots of nonlinear equations in these fields.
