Finding the inverse of a rank-deficient matrix

  • Thread starter: Shaddyab
  • Tags: Matrix, rank

Summary
The discussion revolves around solving for Phi in the equation A * Phi = Ax' * Sx + Ay' * Sy, where A is a rank-deficient symmetric sparse matrix. It is clarified that a rank-deficient matrix does not have a standard inverse, and the determinant's near-zero value indicates numerical instability. Users suggest utilizing the pseudo-inverse or SVD for a least squares solution, while emphasizing the importance of understanding the dimensions of the matrices involved. MATLAB's mldivide function is recommended for solving least squares problems more effectively than directly computing the inverse. The conversation highlights the challenges of working with poorly conditioned matrices in numerical computations.
Shaddyab
I have the following problem:

A * Phi = Ax' * Sx + Ay' * Sy

where,
A= Ax' * Ax + Ay' * Ay + Axy' * Axy

and I would like to solve for Phi.

Matrix A is:
1) symmetric
2) [89x89]
3) rank(A) = 88 (I guess it means that there is no unique solution)
4) det(A) ~= 0 (I guess it means that A is not singular)
5) sparse (673 non-zero elements out of 7921, i.e. 8.5%)

How can I find the inverse of A and solve for Phi ?

Thank you
 
You don't. A "rank deficient" matrix, which necessarily has determinant 0, does NOT have an inverse!

Now, what are "Ax' ", "Sx", etc.?
 
Det(A) is NOT equal to zero.

I thought that I can solve it using SVD or pseudo-inverse, but I do not know how to implement this solution.

I am trying to find the least-squares phase (Phi) fit to a given set of slopes.

Ax, Ay, and Axy are derivative matrix operators.

Can I impose a boundary condition (B.C.) to solve the problem? And how would I do this?

Thank you
 
If the determinant is not zero, the matrix has to have full rank. You should double-check that the problem is intended as written.
 
the determinant is not zero BUT it is almost -Inf.
 
This is a problem you are solving numerically on a computer, yes?

How are you estimating the rank?

How did you compute the determinant?

What does "the determinant is not zero BUT it is almost -Inf" mean?

It sounds like at the very least this is a poorly conditioned matrix. Yes, the pseudo-inverse can be used for these kinds of things to give a kind of solution (although I do not understand your notation at all so don't know the exact problem you are trying to solve ...). Wikipedia has a reasonable page on the Moore-Penrose pseudoinverse that may be worth your while.

jason
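To make the pseudo-inverse suggestion concrete, here is a minimal NumPy sketch (the thread works in MATLAB, where `pinv` has the same meaning); the small rank-deficient matrix below is invented for illustration, not the thread's A:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small symmetric rank-deficient matrix (rank 4 in a 5x5 space),
# standing in for the thread's 89x89, rank-88 matrix.
B = rng.standard_normal((5, 4))
A = B @ B.T                         # symmetric, rank 4, det(A) = 0

# Right-hand side chosen inside the column space of A, so a solution
# exists even though A has no inverse.
b = A @ rng.standard_normal(5)

# Moore-Penrose pseudo-inverse via SVD: invert only the singular
# values above a cutoff, zero out the rest.
U, s, Vt = np.linalg.svd(A)
cutoff = 1e-10 * s.max()            # the tolerance is a judgment call
s_inv = np.where(s > cutoff, 1.0 / s, 0.0)
x = Vt.T @ (s_inv * (U.T @ b))      # x = pinv(A) @ b

# np.linalg.pinv builds the same truncated-SVD inverse.
assert np.allclose(x, np.linalg.pinv(A, rcond=1e-10) @ b)
assert np.allclose(A @ x, b)        # b is in range(A), so residual ~ 0
```

In MATLAB the equivalent one-liner is `Phi = pinv(A) * rhs`, which returns the minimum-norm least-squares solution.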
 
I am running a Matlab code to solve the problem.
The rank and Determinant are estimated using Matlab commands 'rank' and 'det'

By saying that "the determinant is not zero BUT it is almost -Inf" I mean that the result of det(A) is around -1e24.
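A large det(A) alongside rank 88 is not a contradiction: the determinant is the product of all eigenvalues, so one near-zero eigenvalue can hide behind 88 healthy ones. A NumPy sketch (the diagonal matrix here is invented purely to show the effect):

```python
import numpy as np

# Hypothetical 89x89 diagonal matrix: 88 healthy eigenvalues of 2.0
# and one near-zero eigenvalue. Not the thread's matrix - just a
# construction showing the determinant says nothing about conditioning.
d = np.full(89, 2.0)
d[0] = 1e-15
A = np.diag(d)

print(np.linalg.det(A))          # ~3.1e11: large, and yet...
print(np.linalg.matrix_rank(A))  # 88: numerically rank-deficient
print(np.linalg.cond(A))         # ~2e15: inv(A) would be garbage
```

This is why `rank` (an SVD-based, tolerance-aware test) and `det` can appear to disagree on the same matrix.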

I am solving the following problem:

\vec{S} = \nabla \Phi

Where
\vec{S} = Sx \hat{x} + Sy \hat{y}

Ax, Ay, and Axy are matrix operators such that:
Ax = \partial / \partial x
Ay = \partial / \partial y
Axy = \partial^2 / \partial x \partial y

Solving the least-squares estimate, I end up with the following equation:
A * Phi = Ax^{T} * Sx + Ay^{T}* Sy

where,
A=Ax^{T} * Ax + Ay^{T} * Ay + Axy^{T} * Axy

Hence I need the inverse of A to find Phi,
such that:
Phi = A^{-1} * Ax^{T} * Sx + A^{-1} * Ay^{T} * Sy

But matrix A is rank deficient and I cannot find the inverse in the normal way.
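The explicit inverse is not actually needed; a rank-deficient but consistent system can be solved directly as a minimum-norm least-squares problem. A NumPy sketch (the matrix below is invented, but it has the same flavor: symmetric, with a one-dimensional null space, as expected when slope data determine a phase only up to an additive constant):

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the normal-equations system A * Phi = rhs: a symmetric
# positive semidefinite A of rank n-1, matching the thread's
# one-dimensional rank deficiency.
n = 10
B = rng.standard_normal((n, n - 1))
A = B @ B.T                          # symmetric, rank n-1
rhs = A @ rng.standard_normal(n)     # consistent right-hand side

# Minimum-norm least-squares solution; no explicit inverse is formed.
phi, residuals, rank, sv = np.linalg.lstsq(A, rhs, rcond=None)

assert rank == n - 1
assert np.allclose(A @ phi, rhs)
```

In MATLAB, `Phi = pinv(A) * rhs` (or `lsqminnorm(A, rhs)` in R2017b and later) does the same job. Alternatively, pinning one value of Phi, e.g. a boundary condition Phi(1) = 0, removes the null space so the reduced system becomes invertible.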
 
I still don't understand your problem. From what you wrote, is Phi a scalar function? Then \mathbf{S} is a vector (a column vector?). Ax, Ay, Axy are all matrices, so Ax*Sx is a vector? ...

So to me your equation looks like: matrix * scalar = vector.

Which seems crazy, since you state that the matrix A is square ...

What are the dimensions of each term in your equation? It still makes absolutely no sense to me.

jason
 
For some reason the fact that you are doing least squares just sunk in ...

I still do not understand your problem at all, but my hunch is that you are doing least squares and are showing us the "normal equations" you derived that way. That approach is usually very poor numerically. If you have a linear least-squares problem with a set of parameters \mathbf{x} (n x 1) that you want to choose to minimize
\| A \mathbf{x} - \mathbf{b} \|^2,
where the matrix A (m x n, with m > n) and the vector \mathbf{b} (m x 1) are both known, then in MATLAB you solve it using mldivide (the \ operator). Just like this:

x = A\b;

Inside MATLAB, type

doc mldivide

to see the algorithm it uses. For the m > n case (ordinary least squares) it uses the QR factorization, which is a numerically sound way to do least squares.

My hunch is that you are trying to solve (using MATLAB notation)
A'*A*x = A'*b

Mathematically that is an exact way to solve the least-squares problem, but it is usually a disaster numerically, since A'*A often has a very poor condition number (cond(A'*A) = cond(A)^2) and the solve can be unstable.
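The squaring of the condition number can be seen directly. A NumPy sketch with an invented matrix (SVD-based `lstsq` stands in here for backslash's QR solve):

```python
import numpy as np

rng = np.random.default_rng(2)

# An invented tall least-squares problem min ||A x - b|| whose singular
# values span six decades, so A is mildly ill-conditioned.
m, n = 50, 10
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)            # singular values 1 down to 1e-6
A = U @ np.diag(s) @ V.T
b = rng.standard_normal(m)

# Forming A'*A squares the condition number:
print(np.linalg.cond(A))             # ~1e6
print(np.linalg.cond(A.T @ A))       # ~1e12

# Factorization-based solve (the analogue of MATLAB's A\b):
x_good, *_ = np.linalg.lstsq(A, b, rcond=None)

# Normal-equations solve: the same answer on paper, but it loses
# roughly twice as many digits as the factorization-based solve.
x_bad = np.linalg.solve(A.T @ A, A.T @ b)
```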

jason
 
