Call for help in finding an approximate inverse matrix


Discussion Overview

The discussion centers on the challenge of finding an approximate inverse matrix S for a given matrix A, under the condition that the product AB equals the identity matrix I, where A (m×n) and B (n×m) are non-square with m < n. Participants explore the feasibility of achieving an approximate identity matrix through various methods, particularly in the context of applications like image sequence compression and recovery.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant inquires about the existence of a matrix S such that SA approximates I(n,n), without a specific definition of "approximate."
  • Another participant asserts that if m < n, it is impossible for SA to equal I(n,n) due to rank limitations.
  • A suggestion is made to find S such that SA is diagonal with m ones and the rest zeros, prompting a request for clarification on the purpose of S.
  • The original poster expresses a desire to use the result for image sequence compression, specifically mentioning Principal Components.
  • One participant proposes that S can be chosen to create any nxn rank m matrix, including a matrix filled with ones, but questions if this aligns with the original intent.
  • A method involving minimizing the Frobenius norm of the difference between SA and I is suggested as a potential least squares approach.
  • Another participant expresses curiosity about the extent of approximation achievable through least squares and references research on the topic.
  • Reiteration of the impossibility of finding S such that SA equals I(n,n) is made, emphasizing the need for numerical approximation methods.
  • A participant introduces the concept of compressed sensing as a relevant framework, distinguishing it from Principal Component Analysis and discussing its implications for recovering information from projections.
  • The original poster acknowledges the relevance of compressed sensing to their problem and expresses gratitude for the insights shared.

Areas of Agreement / Disagreement

Participants generally agree that finding a matrix S such that SA equals I(n,n) is impossible when m < n. However, there are multiple competing views on how to approach the problem of finding an approximate solution, with some advocating for least squares methods and others suggesting alternative frameworks like compressed sensing.

Contextual Notes

Participants note limitations regarding the definitions of "approximate" and the specific structures of matrices involved, as well as the implications of rank constraints on the existence of the desired matrix S.

genxium
I'm looking for solutions to this problem:

Matrices A(m,n) and B(n,m) satisfy AB=I(m,m) where n isn't equal to m.

Can I find a matrix S(n,m) such that SA=I(n,n), or such that SA approximates I(n,n)?

By "approximate" I don't have a preferred definition in mind, so any suggestion is welcome!
 
From the first line I'm assuming that m<n. In this case no, as the rank of a product of matrices is no larger than the rank of any individual factor, so SA can have rank no larger than m. In particular, I(n,n) has rank n, which is too large.
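The rank argument above is easy to check numerically. A minimal sketch in NumPy (the matrices below are arbitrary random examples; B = Aᵀ(AAᵀ)⁻¹ is one convenient right inverse of a full-row-rank A):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5

# Random full-row-rank A (m x n) and a right inverse B = A^T (A A^T)^{-1},
# so that AB = I(m,m) as in the original question.
A = rng.standard_normal((m, n))
B = A.T @ np.linalg.inv(A @ A.T)
assert np.allclose(A @ B, np.eye(m))

# For any S (n x m), rank(SA) <= min(rank(S), rank(A)) <= m < n,
# so SA can never equal the rank-n identity I(n,n).
S = rng.standard_normal((n, m))
print(np.linalg.matrix_rank(S @ A))  # at most m = 3, strictly less than n = 5
```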
 
Yes, that's why I say approximate; I'm interested in finding an approximate, numerical solution to this problem.
 
You can find a matrix S such that SA is diagonal with m ones and the rest zeros on the diagonal. Do you consider that approximate?

It would help if you described why you want to find such a matrix S.
 
Uhm... I'm afraid not, because I would like to use this result to perform image sequence compression and recovery, so in practice m will be small (I'm talking about Principal Components, actually). Can I get all positive elements on the diagonal of the product matrix?
 
You can basically pick S to get any n×n rank-m matrix. So you could have a matrix that is all 1's everywhere, for example, but I'm guessing that's not what you want either.

You could do something like finding S such that

[tex]||SA-I||^2_{F},[/tex]

the sum of squares of the entrywise differences, is minimal. This is a least-squares problem, so it is easily solved numerically and probably algebraically as well. I would assume that S is the pseudo-inverse of A in this case, because that's always the answer.
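The least-squares suggestion above can be sketched in a few lines of NumPy. For a full-row-rank A, the minimizer of the Frobenius norm is indeed the Moore-Penrose pseudo-inverse, and the product A⁺A is the orthogonal projector onto the row space of A (the random A here is just a demo matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 5
A = rng.standard_normal((m, n))

# min_S ||S A - I(n,n)||_F is solved by the Moore-Penrose pseudo-inverse (n x m).
S = np.linalg.pinv(A)

P = S @ A                      # n x n, rank m
# P is the orthogonal projection onto the row space of A:
assert np.allclose(P @ P, P)   # idempotent
assert np.allclose(P, P.T)     # symmetric

# Best achievable residual: ||A^+ A - I||_F^2 = n - m when A has full row rank,
# which quantifies "how approximate" SA can get to I(n,n).
print(np.linalg.norm(P - np.eye(n), "fro") ** 2)  # = n - m = 2 here
```

The residual n − m also answers the "how approximate" question numerically: no choice of S can do better in the Frobenius norm, since SA is missing n − m dimensions of rank.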
 
I do think this approach is on the right track, but I'm wondering how close an approximation I can get using least squares. Has anyone done research on this problem, and is there a state-of-the-art conclusion?
 
Given that m<n, you *cannot* find a matrix S such that SA = In×n. Such a matrix does not exist.
 
Thinking about it a bit more I realize that you are likely trying to do compressed sensing:
http://en.wikipedia.org/wiki/Compressed_sensing

where what you really want to do is: given y = Ax, solve for x. There are known algorithms for calculating x assuming it has certain structures (such as sparsity) that images typically satisfy, or come close to satisfying in the right basis.

The typical setup is a bit different from principal component analysis. Once you've done principal component analysis, the whole point is that the extra information is noise and you throw it away entirely; there is no way to recover that information, and no reason you should want to. Compressed sensing works a bit differently: you take a (typically random) projection of the data down to a lower dimension, and the inversion problem is then to find a sparse representation under a certain basis (usually not the standard basis, as that would give weird-looking results) that projects down to give the same projection as your original one.
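To illustrate the recovery step, here is a minimal sketch of orthogonal matching pursuit, one of the simplest greedy algorithms used in compressed sensing. The dimensions, seed, and sparsity level below are arbitrary choices for the demo, not anything prescribed by the thread:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A x.
    A is m x n with m < n. A bare-bones sketch, not a production solver."""
    m, n = A.shape
    support, residual = [], y.copy()
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the chosen columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
m, n, k = 30, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                      # m measurements of a k-sparse signal

x_hat = omp(A, y, k)
print(np.linalg.norm(x_hat - x))  # small: exact support recovery is typical here
```

With a random Gaussian sensing matrix and k much smaller than m, recovery of the exact support is the typical outcome, which is the sense in which one can "invert" the wide matrix A despite m < n.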
 
D H said:
Given that m<n, you *cannot* find a matrix S such that SA = In×n. Such a matrix does not exist.

Yes, you're right; I'm actually looking for numerical approximation methods to handle this problem.
 
Office_Shredder said:
Thinking about it a bit more I realize that you are likely trying to do compressed sensing:
http://en.wikipedia.org/wiki/Compressed_sensing

where what you really want to do is: given y = Ax, solve for x. There are known algorithms for calculating x assuming it has certain structures (such as sparsity) that images typically satisfy, or come close to satisfying in the right basis.

The typical setup is a bit different from principal component analysis. Once you've done principal component analysis, the whole point is that the extra information is noise and you throw it away entirely; there is no way to recover that information, and no reason you should want to. Compressed sensing works a bit differently: you take a (typically random) projection of the data down to a lower dimension, and the inversion problem is then to find a sparse representation under a certain basis (usually not the standard basis, as that would give weird-looking results) that projects down to give the same projection as your original one.

Thank you very much Office_Shredder! You saved my day! The link you gave tackles exactly the problem I have. I'm reading it now, and it seems it could provide useful information for my project :)
 
