How to judge the singularity of a matrix in numerical methods?

In summary, different methods can give inconsistent answers about whether a matrix is singular. The underlying reason is that the set of regular (invertible) matrices is dense in the set of all matrices, so even tiny errors in the input data or in the computation can turn an exactly singular matrix into a numerically regular one. For larger matrices it is therefore not advisable to rely on the determinant; computing the rank (via the SVD) is more robust. The condition number of the matrix can also be used to detect near-singularity and should be checked when working with numerically sensitive systems.
  • #1
nenyan
TL;DR Summary: different methods give different results. They are not consistent.

I use two different methods to detect whether a matrix is singular. The determinant of a 9×9 matrix comes out far greater than zero, but the computed rank of the same matrix is 8. Why are the results inconsistent?
singularity:
import numpy as np
from numpy.linalg import inv

a = np.loadtxt('data/d0.txt')    # load the attached 9x9 matrix
print(a.shape)

print(np.linalg.det(a))          # method 1: determinant
print(np.linalg.matrix_rank(a))  # method 2: rank (computed via SVD)
the result:
(9, 9)
2.3274769078093922e+52
8


The attached file is one of those matrices; there are hundreds of them.
 

Attachments

  • d0.txt
    2 KB
  • #2
You might find these links helpful:
Numerical rank of matrices
https://www.mathworks.com/help/symbolic/mupad_ref/numeric-rank.html
From what I see, this is precisely how your rank function works: it uses the singular value decomposition of the matrix. In certain cases the approximations behind the rank and the determinant can disagree. I'm not an expert on numerics, but my inclination would be to go with the determinant to decide whether the matrix is singular, simply because it is probably easier to compute, so the approximation may be more precise (I guess). Maybe people here who have worked more on numerics can give a better explanation, but this is what I found so far.

Regards!
Antarres
 
  • #3
nenyan said:
Why are the results inconsistent?
That is because you are looking for a solution in a set which is thin, like a line in a plane.

Both tests are equations: ##\det A = 0## and ##\operatorname{rk}A=8##, and both are very sensitive to errors. As soon as you vary your input data only a little, you get completely different results. Mathematically speaking: the set of regular matrices is dense in the set of all matrices, which means you almost always end up with a regular matrix as soon as you make a small error, e.g. by rounding numbers. Compare
$$
\begin{bmatrix}0&0\\0&0\end{bmatrix} \text{ and } \begin{bmatrix}\varepsilon& 2 \varepsilon\\-\varepsilon&3 \varepsilon \end{bmatrix}
$$
Numerical algorithms in particular have a hard time here, because they normally cannot compute exactly, so you end up in the dense subset of regular matrices.
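To see this effect concretely, here is a minimal NumPy sketch (the 3×3 matrix is just a toy example, not the attached data): a matrix with two equal rows is exactly singular, but perturbing every entry by rounding-sized noise makes the determinant nonzero and the numerical rank full.

Python:
import numpy as np

# Exactly singular: the second row equals the first, so the rank is 2 and det is 0.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
print(np.linalg.det(A), np.linalg.matrix_rank(A))   # 0.0, 2

# Perturb every entry by noise not far above floating-point rounding error.
rng = np.random.default_rng(0)
B = A + 1e-12 * rng.standard_normal(A.shape)
print(np.linalg.det(B), np.linalg.matrix_rank(B))   # small but nonzero det; rank is typically reported as 3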
 
  • Like
Likes nenyan and Antarres
  • #4
Calculating determinants of large matrices is inefficient. One option is to use det(sym(data)), but that might take too long to execute, or throw errors with floating-point data. I tried it in Octave, and after waiting for a long time, I got an error:

[screenshot of the Octave error message]


MATLAB might have performed better.

See this answer on Stack Overflow:
https://stackoverflow.com/a/13146750/8387076

It is better to rely on the rank method for larger matrices.
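For reference, here is a rough Python analogue of det(sym(data)) (a sketch, assuming the attached d0.txt and using SymPy as the symbolic engine): each stored double is converted to an exact rational, so the elimination itself introduces no further round-off, although the entries are of course already rounded by the time they sit in the text file.

Python:
import numpy as np
from fractions import Fraction
from sympy import Matrix, Rational

def exact(x):
    # Represent the stored double exactly as a rational number.
    f = Fraction(float(x))
    return Rational(f.numerator, f.denominator)

a = np.loadtxt('data/d0.txt')
M = Matrix([[exact(x) for x in row] for row in a])

# Exact determinant of the stored values (can be slow for large matrices).
print(M.det())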
 
  • #5
Funny, with Excel's MDETERM I get nearly the same determinant:
2.327477552992E+52

The M_RANK function in http://web.bowdoin.edu/~rdelevie/excellaneous/#downloads returns a clear 9

And MINVERSE gives me an inverse without problems.

So all OK? I don't know for sure: multiplying the two with MMULT does show defects of size ##10^{-4}##, so something seems to be sensitive here:



Here are the numbers from the MMULT product:
Code:
1    3.15482E-21    5.19577E-19    -8.36678E-18    2.46666E-16    1.02344E-12    -8.69165E-13    -8.74799E-12    1.92627E-14
-2.79397E-09    1    -1.45519E-11    -7.45058E-09    0    1.90735E-06    7.62939E-06    -6.10352E-05    7.62939E-06
-2.91038E-11    3.97904E-13    1    2.09548E-09    0    -1.19209E-07    -4.17233E-07    2.86102E-06    5.96046E-07
-3.72529E-09    0    -5.82077E-11    1    0    -3.8147E-06    -7.62939E-06    -6.10352E-05    3.05176E-05
0    5.82077E-11    0    2.38419E-07    1.000000477    0    -0.00012207    0.000488281    0
-2.27374E-12    3.55271E-15    -2.84217E-14    1.45519E-11    7.27596E-12    1    4.65661E-10    5.21541E-08    1.86265E-09
-2.72848E-12    7.10543E-15    -4.26326E-14    2.91038E-11    1.45519E-11    -4.65661E-10    0.999999994    7.45058E-08    3.72529E-09
3.41061E-13    -1.11022E-15    7.10543E-15    -5.45697E-12    0    2.91038E-10    9.31323E-10    0.999999994    -1.39698E-09
1.16415E-10    2.27374E-13    0    0    -1.86265E-09    -2.38419E-07    -4.76837E-07    0    1.000000477
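The same check is easy to reproduce in NumPy (a sketch, assuming the attached d0.txt): form ##A\,A^{-1}## and look at the largest deviation from the identity matrix.

Python:
import numpy as np
from numpy.linalg import inv

a = np.loadtxt('data/d0.txt')
residual = a @ inv(a) - np.eye(9)

# Largest absolute deviation from the identity; deviations around 1e-4, as in the
# spreadsheet above, point to a badly conditioned (numerically near-singular) matrix.
print(np.abs(residual).max())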
 
  • #6
fresh_42 said:
That is because you are looking for a solution in a set which is thin, like a line in a plane. [...]
That's precisely what I was getting at with those links. You phrased it better than me though.
 
  • #7
When working in a numerical environment, using det(M) to detect singularity is discouraged, because det is very sensitive to the scale of the matrix and to round-off. This is easy to show: for example, we can create a matrix that is singular by construction and see how often each test detects that.
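As a quick NumPy illustration of the scale issue (a toy example with a scaled 9×9 identity, not the thread's data): the scaled identity is as well conditioned as a matrix can be, yet its determinant can be made arbitrarily small or large just by rescaling.

Python:
import numpy as np

I = np.eye(9)
print(np.linalg.det(0.1 * I))    # ~1e-9: looks "nearly singular", but the matrix is perfectly conditioned
print(np.linalg.det(10.0 * I))   # 1e+9: looks huge, same perfectly conditioned matrix
print(np.linalg.cond(0.1 * I), np.linalg.cond(10.0 * I))   # both 1.0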

In MATLAB:

MATLAB code to test effectiveness of singularity detecting methods:
Dcnt = 0;    % Count of successful singularity detections via the determinant.
Rcnt = 0;    % ... via the rank.
Ccnt = 0;    % ... via the condition number.
N = 100000;  % The number of tests to run.

for k = 1:N
   A = randi(10000,8,9);   % An 8-by-9 matrix of integers drawn from [1,10000].
%    A = rand(8,9)*1000;   % An 8-by-9 matrix of floating point numbers drawn from [0,1000].
   A = [A; sum(A,1)];      % This 9-by-9 matrix is singular BY CONSTRUCTION (last row = sum of the others).
   D = det(A);             % Test for singularity using det(A) == 0.
   R = rank(A);            % Test for singularity using rank(A) ~= 9.
   C = cond(A);            % Test for singularity using condition number >> 1.

   if D == 0
       Dcnt = Dcnt + 1;
   end

   if R ~= 9
       Rcnt = Rcnt + 1;
   end

   if C > 1e12             % A condition number this large flags numerical singularity.
       Ccnt = Ccnt + 1;
   end
end

fprintf('Determinant success rate: %4.2f%%\n',Dcnt/N*100)
fprintf('Rank success rate: %4.2f%%\n',Rcnt/N*100)
fprintf('Condition success rate: %4.2f%%\n',Ccnt/N*100)

The results for integer matrices come back as:

Results with integer matrix:
Determinant success rate: 0.45%
Rank success rate: 100.00%
Condition success rate: 100.00%

When we use the floating point singular matrix, things get worse:

Results with non-integer matrix:
Determinant success rate: 0.04%
Rank success rate: 100.00%
Condition success rate: 100.00%
So the lesson is:

Do not use the determinant to discover matrix singularity when working with numerical systems. It is unreliable.
 
  • #8
A closely related idea: the (2-norm based) condition number of the matrix is given by ##\frac{\sigma_{\max}}{\sigma_{\min}}##.

What the OP has posted strongly suggests horrific conditioning here, so even if the matrix is formally invertible, for (almost) all purposes one should treat it as rank deficient and consider pruning the smallest singular value(s).
- - - -
Note: we don't know what the entries of the matrix look like, but to the extent that they are rational numbers, the OP could load the matrix into, say, Sage, use QQ as the field, and compute the rank exactly.
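As a rough Python stand-in for that idea (a sketch using SymPy instead of Sage, assuming the attached d0.txt): convert each stored double to an exact rational and compute the rank without any round-off.

Python:
import numpy as np
from fractions import Fraction
from sympy import Matrix, Rational

def exact(x):
    # Represent the stored double exactly as a rational number (the analogue of working over QQ).
    f = Fraction(float(x))
    return Rational(f.numerator, f.denominator)

a = np.loadtxt('data/d0.txt')
M = Matrix([[exact(x) for x in row] for row in a])
print(M.rank())   # exact rank of the stored values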
 
  • #9
I second mfig's reply above NOT to use the determinant to detect singularity in a numerical context. I added info to the det page in MATLAB on this:
https://www.mathworks.com/help/matlab/ref/det.html#bubi9tw
The reason: most numerical algorithms calculate the determinant from an LU decomposition of the matrix, and LU is notoriously susceptible to round-off because it performs a lot of arithmetic. The upshot is that the determinant can come out arbitrarily close to zero for some well-conditioned matrices, or arbitrarily large even though the matrix is singular.

So you're better off calculating the rank of the matrix via SVD (rank is the number of nonzero singular values using a tolerance), or calculating the condition number (or reciprocal condition number) of the matrix (which are also calculated from the SVD). The condition numbers are nice since they tell you how close to singular the matrix is.
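In NumPy that looks roughly like this (a sketch, assuming the attached d0.txt): inspect the singular values directly, count how many exceed a tolerance, and form the condition number as the ratio of the largest to the smallest.

Python:
import numpy as np

a = np.loadtxt('data/d0.txt')
s = np.linalg.svd(a, compute_uv=False)   # singular values, largest first

# Tolerance in the style of numpy.linalg.matrix_rank:
# largest singular value * largest dimension * machine epsilon.
tol = s.max() * max(a.shape) * np.finfo(a.dtype).eps
rank = int(np.sum(s > tol))

print(s)                   # the smallest value shows how close to singular the matrix is
print(rank)                # numerical rank (8 for the attached matrix, per post #1)
print(s.max() / s.min())   # 2-norm condition number, same as np.linalg.cond(a)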
 

1. How do you define a singular matrix in numerical methods?

A singular matrix in numerical methods is a square matrix that does not have an inverse. This means that the matrix cannot be inverted or solved for a unique solution. In other words, the matrix is not full rank and there are linearly dependent rows or columns.

2. What is the significance of determining the singularity of a matrix in numerical methods?

Determining the singularity of a matrix is important in numerical methods because it affects the accuracy and stability of the solution. A singular matrix can lead to computational errors and make it impossible to obtain a unique solution. It is also used in various algorithms such as Gaussian elimination and LU decomposition.

3. How is the singularity of a matrix calculated in numerical methods?

In exact arithmetic, a square matrix is singular precisely when its determinant is zero. In floating-point arithmetic this test is unreliable, so the singularity of a matrix is usually judged from an LU decomposition (inspecting the pivots), from the SVD (counting singular values above a tolerance), or from the condition number.

4. What are the possible solutions for a singular matrix in numerical methods?

There are a few approaches that can be taken for a singular matrix in numerical methods. One option is to use a regularization technique to modify the matrix and make it non-singular. Another approach is to use a pseudoinverse, which is a generalized inverse that can be used for singular matrices. Additionally, some algorithms may be able to handle singular matrices without any special treatment.
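As a small illustration of the pseudoinverse approach (a sketch with a made-up 3×3 singular system, not the thread's data): np.linalg.pinv returns the minimum-norm least-squares solution where an ordinary inverse does not exist.

Python:
import numpy as np

# A singular system: the second row duplicates the first, so inv(A) does not exist.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([6.0, 6.0, 15.0])

x = np.linalg.pinv(A) @ b   # minimum-norm least-squares solution
print(x)
print(A @ x)                # reproduces b here, since this particular system is consistent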

5. How does singularity affect the stability of a numerical method?

Singularity can greatly affect the stability of a numerical method. In some cases, it can lead to a catastrophic failure of the algorithm and make it impossible to obtain a solution. In other cases, it may cause the solution to be very sensitive to small changes in the input, resulting in a less accurate solution. This is why it is important to identify and handle singular matrices appropriately in numerical methods.
