Understanding Solutions and Singular Matrices

  • Context: Undergrad
  • Thread starter: Plasmarobo
  • Tags: Matrices

Discussion Overview

The discussion revolves around challenges faced in implementing an iterative nonlinear least squares problem related to trilateration using the JAMA library in Java. Participants explore issues with singular matrices and the implications for solving equations, as well as the robustness of different linear algebra systems.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • The original poster (OP) encounters a singular matrix when trying to solve an equation derived from a nonlinear least squares problem.
  • Some participants suggest that the issue may stem from the matrices being provided to the JAMA library rather than the library itself.
  • There is discussion about the importance of matrix conditioning and the potential for ill-conditioned matrices affecting results.
  • One participant proposes that matrix factorization methods may be more appropriate than direct inversion for solving the equations.
  • The OP expresses uncertainty about how to translate residual equations into a usable vector for the next guess in the iterative process.
  • Participants mention various methods such as QR Decomposition, Singular Value Decomposition, and the simplex method, with mixed opinions on their applicability to the OP's problem.
  • There is a clarification that the simplex method is primarily for linear programming, which raises questions about its relevance to the OP's nonlinear least squares problem.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best approach to address the singular matrix issue, with multiple competing views on the methods and libraries to use. The discussion remains unresolved regarding the most effective solution strategy.

Contextual Notes

Participants note that the original algorithm may not be robust against poorly conditioned data, and there is uncertainty about the specific mathematical steps required to adapt the algorithm for the OP's needs.

Plasmarobo
Hey folks.
I'm working on a project which seems to be encountering a problem.
I took Linear Algebra a few years ago in college, and haven't really applied it very much so I'm at a bit of a loss here.

I have a solution to an iterative nonlinear least squares problem: trilateration with n distances.
I'm implementing the algorithm found in this PDF in java using the JAMA library.

I don't understand much of the math here, and I need to figure out why my matrices are singular (unfortunately the inverse is part of the equation handed to me).
So I guess I'm asking a few things.

1. For Equation (40) (p. 18): R_{k+1} = R_k - (J_k^T J_k)^{-1} J_k^T f
I'm ending up with a J_k^T J_k term which is singular, which means I cannot directly solve this equation. While I'm familiar with solving equations of the form Ax = B, I'm not really sure how to find an alternate solution for equation (40).
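Since the explicit inverse is the fragile step here, one alternative is to solve the normal equations (J_k^T J_k) s = -J_k^T f directly with a factorization-style elimination rather than forming an inverse. The sketch below is my own minimal plain-Java illustration (not JAMA code and not the paper's implementation; JAMA's `Matrix.solve` does the analogous thing internally via LU or QR):

```java
// Sketch: solve A x = b by Gaussian elimination with partial pivoting,
// instead of computing A.inverse().times(b). Here A would play the role
// of J^T J and b the role of -J^T f in the Gauss-Newton step.
public class NormalEquationsDemo {
    // Solve A x = b in place; A is n x n, b has length n.
    static double[] solve(double[][] A, double[] b) {
        int n = b.length;
        for (int col = 0; col < n; col++) {
            // Partial pivoting: pick the row with the largest entry in this column.
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
            double[] tmpRow = A[col]; A[col] = A[piv]; A[piv] = tmpRow;
            double tmpB = b[col]; b[col] = b[piv]; b[piv] = tmpB;
            if (Math.abs(A[col][col]) < 1e-12)
                throw new ArithmeticException("matrix is singular (or nearly so)");
            // Eliminate entries below the pivot.
            for (int r = col + 1; r < n; r++) {
                double f = A[r][col] / A[col][col];
                for (int c = col; c < n; c++) A[r][c] -= f * A[col][c];
                b[r] -= f * b[col];
            }
        }
        // Back substitution.
        double[] x = new double[n];
        for (int r = n - 1; r >= 0; r--) {
            double s = b[r];
            for (int c = r + 1; c < n; c++) s -= A[r][c] * x[c];
            x[r] = s / A[r][r];
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] A = {{4, 1}, {1, 3}};
        double[] b = {1, 2};
        double[] x = solve(A, b);
        System.out.printf("x = (%.4f, %.4f)%n", x[0], x[1]);
    }
}
```

If J^T J really is singular (not just ill-conditioned), no solver of this kind can succeed, which is where the SVD/pseudoinverse discussion later in the thread comes in.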

2. How robust is the JAMA library in general, is there a better linear algebra system for java?

Thanks!
 
My 2 cents:
It would surprise me if any commercial software math package had unusual trouble inverting a matrix. I would be more suspicious of the matrices you are giving it. You might try putting your matrix into another package and see if it has trouble.

The article you reference says that their C++ code is available from the authors. You might check into that.

For debugging your own code:
1) Have you tried the example in the article? It would be interesting to see if your code works on their data.
2) Having the author's C++ implementation would give you something to compare intermediate results against.
3) Throughout the paper, the problem of ill-conditioned matrices is central. So your problem may be common if the data is not as good as their example. In their example, it looks like they have 8 positions in a nice circle. Do you have that in the data you are using?
 
I'm in touch with the author. The C++ code is actually proprietary and controlled by the mining company who commissioned it. My first instinct was also to examine it directly. Porting C++ to java is trivial.

The matrices may be poorly conditioned. Unfortunately my real-world data may be poorly conditioned too. I am running against their data. The algorithm fails my random tests (randomly). It's possible I need to modify the algorithm to deal with more uncertainty, but that's not possible until I understand exactly how the math functions (I thought I had a decent handle on it until I tried to implement it).
 
FactChecker said:
It would surprise me if any commercial software math package had unusual trouble inverting a matrix.

I would be suspicious of any implementation of an algorithm that explicitly inverts the matrix, in any case. You don't need to invert a matrix to solve a set of equations, which presumably* is what you are trying to do here. You need to factorize the matrix in some way (and that can be well conditioned even for singular matrices), but inverting the matrix explicitly is usually poorly conditioned, and almost always unnecessary and inefficient.

* I say "presumably," because the PDF reference gives some details of two standard methods for solving linear least squares problems (though it's hard to see what the motivation for writing those sections was - if you understand the methods they don't tell you anything, and if you don't understand them they probably also don't tell you anything useful) but nothing of any substance about their preferred method of solving the nonlinear least squares problem.

It's hard to say anything constructive given the lack of information about the problem, but I guess the most likely explanation is that
Porting C++ to java is trivial
was over-optimistic, though it should certainly be possible.
 
Yeah, I'm doing nonlinear minimization of squares in relation to a reference point.

Maybe this is the part I don't really understand. I get that I have a set of n spheres (where n > 4) and 3 unknowns. I can figure out the residual equations, and compute the differentials.
I understand that we model the nonlinear equation as a linear one and refine by iteration, but I'm not sure how to do that with a system of simultaneous equations. Not sure what the solution from the residual equations to the vector for the next guess would be. Conceptually I'm missing the translation there.
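The translation from residuals to the next guess can be made concrete under the usual Gauss-Newton reading (this is my own hypothetical 2-D example, not the paper's algorithm): for sphere center c_i and measured distance d_i, the residual is f_i(x) = ||x - c_i|| - d_i, the Jacobian row is the unit vector (x - c_i)^T / ||x - c_i||, and each iteration solves (J^T J) s = -J^T f, then updates x <- x + s.

```java
// Hypothetical 2-D Gauss-Newton trilateration sketch (my own illustration).
// Each iteration: build residual vector f and Jacobian J row by row,
// form the normal equations (J^T J) s = -J^T f, solve, and step the guess.
public class GaussNewtonTrilateration {
    static double[] estimate(double[][] centers, double[] dists,
                             double[] guess, int iterations) {
        double[] x = guess.clone();
        for (int it = 0; it < iterations; it++) {
            double[][] JtJ = new double[2][2];
            double[] Jtf = new double[2];
            for (int i = 0; i < centers.length; i++) {
                double dx = x[0] - centers[i][0], dy = x[1] - centers[i][1];
                double r = Math.hypot(dx, dy);
                double fi = r - dists[i];           // residual for this circle
                double[] Ji = {dx / r, dy / r};     // Jacobian row (unit vector)
                for (int a = 0; a < 2; a++) {
                    Jtf[a] += Ji[a] * fi;
                    for (int b = 0; b < 2; b++) JtJ[a][b] += Ji[a] * Ji[b];
                }
            }
            // Solve the 2x2 normal equations (J^T J) s = -J^T f by Cramer's rule.
            double det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0];
            double sx = (-Jtf[0] * JtJ[1][1] + Jtf[1] * JtJ[0][1]) / det;
            double sy = (-Jtf[1] * JtJ[0][0] + Jtf[0] * JtJ[1][0]) / det;
            x[0] += sx;
            x[1] += sy;
        }
        return x;
    }

    public static void main(String[] args) {
        double[][] centers = {{0, 0}, {10, 0}, {0, 10}};
        double[] truth = {3, 4};
        double[] dists = new double[centers.length];
        for (int i = 0; i < centers.length; i++)
            dists[i] = Math.hypot(truth[0] - centers[i][0], truth[1] - centers[i][1]);
        double[] est = estimate(centers, dists, new double[]{1, 1}, 25);
        System.out.printf("estimate = (%.4f, %.4f)%n", est[0], est[1]);
    }
}
```

In the real 3-D, n-sphere problem the Jacobian has n rows and 3 columns, so J^T J is 3x3 and a library solve (e.g. JAMA's `Matrix.solve`) would replace the hand-written Cramer step; the iteration structure is the same.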

I've looked at Wikipedia and Mathematica's nonlinear least squares minimization and found them both intractable. I'm not sure I understand half of the notation there. I've seen a few more explanations of Nonlinear Least Squares and I vaguely understand the procedure.

I think the thing I'm missing is the linearized estimate. I don't have enough linear algebra to create the Ax=B form required by the apache math3 library or JAMA.
 
AlephZero said:
I would be suspicious of any implementation of an algorithm that explicitly inverts the matrix, in any case. You don't need to invert a matrix to solve a set of equations, which presumably* is what you are trying to do here. You need to factorize the matrix in some way (and that can be well conditioned even for singular matrices), but inverting the matrix explicitly is usually poorly conditioned, and almost always unnecessary and inefficient.

I should have said that a mature matrix inversion algorithm should not have any unnecessary problem inverting a matrix. In these days of double-precision calculation, it takes a very ill-conditioned matrix to cause problems. But a singular matrix cannot be inverted by any algorithm. A pseudoinverse is the best you can hope for.

One way or another, any algorithm will do the same thing that inverting the matrix would do. If the matrix is ill-conditioned, then the algorithm will have to deal with it. The ill-condition is in the equations being solved, not in the method, and there is no way around it.

That being said, there are ways to mitigate the problem. The simplex method of linear programming will invert a nonsingular, well-conditioned, matrix. If the matrix is singular, the same algorithm generates a pseudoinverse. But then the answers are not unique.
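One standard mitigation in nonlinear least squares, not named in the thread, is Levenberg-Marquardt-style damping: replace (J^T J) s = -J^T f with (J^T J + lambda*I) s = -J^T f, which is nonsingular for any lambda > 0 and shrinks the step along directions where J^T J is rank-deficient. A minimal 2x2 sketch of my own (not code from the thread or the paper):

```java
// Hedged illustration of Levenberg-Marquardt damping: the undamped normal
// matrix below is rank-1 (singular), but adding lambda to the diagonal
// makes the step solvable.
public class DampedStepDemo {
    // Solve (JtJ + lambda*I) s = -Jtf for a 2x2 system via Cramer's rule.
    static double[] dampedStep(double[][] JtJ, double[] Jtf, double lambda) {
        double a = JtJ[0][0] + lambda, b = JtJ[0][1];
        double c = JtJ[1][0], d = JtJ[1][1] + lambda;
        double det = a * d - b * c; // strictly positive for lambda > 0 here
        return new double[]{
            (-Jtf[0] * d + Jtf[1] * b) / det,
            (-Jtf[1] * a + Jtf[0] * c) / det
        };
    }

    public static void main(String[] args) {
        // Rank-1 normal matrix: det = 1*4 - 2*2 = 0, so the undamped
        // system has no unique solution.
        double[][] JtJ = {{1, 2}, {2, 4}};
        double[] Jtf = {1, 2};
        double[] s = dampedStep(JtJ, Jtf, 1e-3);
        System.out.printf("step = (%.6f, %.6f)%n", s[0], s[1]);
    }
}
```

In practice lambda is adjusted per iteration (increased when a step fails to reduce the residual, decreased when it succeeds), which is what makes the method robust against the poorly conditioned data discussed above.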
 
I'll look into the simplex method. Hopefully the answers will be close enough. This is a non-linear least squares problem, and I don't need an exact solution. I only need the statistically least wrong solution. Or a decent approximation.

I'm trying to use QR Decomposition to solve the systems first, then I fail over to SingularValueDecomposition, and if that throws an exception I'll see if I can apply the simplex method.
Thanks!
 
Plasmarobo said:
I'll look into the simplex method. Hopefully the answers will be close enough. This is a non-linear least squares problem, and I don't need an exact solution. I only need the statistically least wrong solution. Or a decent approximation.

I'm trying to use QR Decomposition to solve the systems first, then I fail over to SingularValueDecomposition, and if that throws an exception I'll see if I can apply the simplex method.
Thanks!

I don't think that the simplex method will help you. It's for linear programming. I was just responding to another post and was digressing. Sorry.
 
FactChecker said:
That being said, there are ways to mitigate the problem. The simplex method of linear programming will invert a nonsingular, well-conditioned, matrix. If the matrix is singular, the same algorithm generates a pseudoinverse. But then the answers are not unique.

I have no idea what FactChecker means by that (and I do know what the technical terms mean).

FactChecker said:
I don't think that the simplex method will help you. It's for linear programming.

There is another "simplex method" which has nothing to do with linear programming, also known as the Nelder-Mead method: http://en.wikipedia.org/wiki/Nelder–Mead_method

Nelder-Mead could probably be used to solve your nonlinear optimization problem, but from the OP's PDF, I don't think that was the method used there.
 
AlephZero said:
I have no idea what FactChecker means by that (and I do know what the technical terms mean).

There is another "simplex method" which has nothing to do with linear programming, also known as the Nelder-Mead method: http://en.wikipedia.org/wiki/Nelder–Mead_method

Nelder-Mead could probably be used to solve your nonlinear optimization problem, but from the OP's PDF, I don't think that was the method used there.

Sorry. I wasn't very clear and I am afraid that this is diverging from the original question.

The simplex method for solving a linear programming problem will find the optimum of linear function c \cdot x subject to linear constraints Ax = b, x>0 . The method is a sequence of "pivot" calculations. It is possible to start the algorithm with an identity matrix included as additional columns in a table. If A is nonsingular, the inverse of A is produced in the location where the identity matrix was. If the matrix A is singular, the end result is a pseudoinverse of A in that position.
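The pivot construction described above can be sketched directly; this is my own minimal illustration, not FactChecker's code. Gauss-Jordan pivoting on the augmented block [A | I] (the same pivot arithmetic a simplex tableau uses) leaves A^{-1} where the identity started:

```java
// Sketch: pivot the tableau [A | I] to the identity on the left; the
// right-hand block then holds A^{-1}. For a singular A one of the pivot
// columns has no usable pivot, which is where a pseudoinverse variant
// would have to take over.
public class PivotInverseDemo {
    static double[][] invert(double[][] A) {
        int n = A.length;
        double[][] T = new double[n][2 * n]; // the tableau [A | I]
        for (int i = 0; i < n; i++) {
            System.arraycopy(A[i], 0, T[i], 0, n);
            T[i][n + i] = 1.0;
        }
        for (int col = 0; col < n; col++) {
            // Partial pivoting for stability.
            int piv = col;
            for (int r = col + 1; r < n; r++)
                if (Math.abs(T[r][col]) > Math.abs(T[piv][col])) piv = r;
            double[] tmp = T[col]; T[col] = T[piv]; T[piv] = tmp;
            double p = T[col][col];
            for (int c = 0; c < 2 * n; c++) T[col][c] /= p;
            // Clear the pivot column everywhere else (the "pivot" step).
            for (int r = 0; r < n; r++) {
                if (r == col) continue;
                double f = T[r][col];
                for (int c = 0; c < 2 * n; c++) T[r][c] -= f * T[col][c];
            }
        }
        double[][] inv = new double[n][n];
        for (int i = 0; i < n; i++) System.arraycopy(T[i], n, inv[i], 0, n);
        return inv;
    }

    public static void main(String[] args) {
        double[][] inv = invert(new double[][]{{2, 1}, {1, 1}});
        System.out.printf("[[%.1f, %.1f], [%.1f, %.1f]]%n",
                inv[0][0], inv[0][1], inv[1][0], inv[1][1]);
    }
}
```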

I don't think that the Nelder-Mead method for nonlinear programming is related, but I am not familiar with it.
 
