Call for help in finding approximate inverse matrix

genxium
I'm looking for solutions to this problem:

Matrices A(m,n) and B(n,m) satisfy AB=I(m,m) where n isn't equal to m.

Can I find a matrix S(n,m) such that SA=I(n,n), or such that SA approximates I(n,n)?

By "approximate" I don't have a preferred definition in mind, so any suggestion is welcome!
 
From the first line I'm assuming that m<n. In this case, no: the rank of a product of matrices is no larger than the rank of any individual factor, so SA can have rank at most m. In particular, I(n,n) has rank n, which is too large.
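The rank argument is easy to check numerically. A minimal sketch (my own illustration with assumed dimensions, using NumPy): with m < n, the product S @ A never reaches rank n.

```python
# Illustration of the rank bound above (assumed setup, not from the thread):
# with m < n, rank(S @ A) <= min(rank(S), rank(A)) <= m < n.
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5                      # m < n
A = rng.standard_normal((m, n))  # A is m x n
S = rng.standard_normal((n, m))  # S is n x m, so S @ A is n x n

rank_SA = np.linalg.matrix_rank(S @ A)
print(rank_SA)  # at most m = 3, never n = 5
```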
 
Yes, that's why I said approximate; I'm interested in an approximate, numerical solution to this problem.
 
You can find a matrix S such that SA is diagonal with m 1's and the rest 0 on the diagonal, do you consider that approximate?

It would help if you described why you want to find such a matrix S.
 
Uhm... I'm afraid not, because I would like to use this result for image sequence compression & recovery, so in practice m will be small (I'm talking about principal components, actually). Can I get all positive elements on the diagonal of the product matrix?
 
You can pick S to get essentially any n×n matrix of rank at most m (the rows of SA must lie in the row space of A). So you could have a matrix that's all 1's, for example, but I'm guessing that's not what you want either.

You could do something like find S such that

||SA - I||_F^2

(the sum of squares of the differences of the entries) is minimal. This is a least-squares problem, so it is easily solved numerically, and probably algebraically as well. I would assume that S is the pseudoinverse of A in this case, because that's always the answer.
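A sketch of this least-squares suggestion (assumed setup, using NumPy): since AB = I(m,m) forces A to have full row rank m, the minimizer of ||SA - I||_F^2 over S is indeed the Moore-Penrose pseudoinverse, and the best achievable squared error works out to exactly n - m.

```python
# Sketch of the least-squares idea above: minimize ||S A - I_n||_F^2 over S.
# With A of full row rank m, the minimizer is the Moore-Penrose pseudoinverse
# S = A^+ = A.T @ inv(A @ A.T).
import numpy as np

rng = np.random.default_rng(1)
m, n = 4, 10
A = rng.standard_normal((m, n))   # full row rank with probability 1

S = np.linalg.pinv(A)             # n x m

# S is an exact right inverse: A @ S = I_m ...
print(np.allclose(A @ S, np.eye(m)))

# ... while S @ A is only the orthogonal projection onto the row space of A,
# so the minimal squared Frobenius error is exactly n - m.
err = np.linalg.norm(S @ A - np.eye(n), "fro") ** 2
print(err)  # ~ n - m = 6
```

The n - m residual quantifies "how approximate" this can get: the projection S @ A is the identity on the row space of A and zero on its orthogonal complement, and nothing better is possible at rank m.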
 
I do think this approach is on the right track, but I'm wondering how close an approximation least squares can give. Has anyone done research on this problem, and is there a state-of-the-art conclusion?
 
Given that m<n, you *cannot* find a matrix S such that SA = I(n,n). Such a matrix does not exist.
 
Thinking about it a bit more I realize that you are likely trying to do compressed sensing:
http://en.wikipedia.org/wiki/Compressed_sensing

where what you really want to do is given y = Ax solve for what x is. There are known algorithms for calculating x assuming it has certain structures (such as sparsity) that images typically satisfy or come close to satisfying in the right basis.

The typical setup is a bit different from principal component analysis - once you've done principal component analysis, the whole point is that the extra information is noise and you are throwing it away entirely - there is no way to recover it and no reason you should want to. Compressed sensing works a bit differently - you take a (typically random) projection of the data down to a lower dimension, and then the inversion problem is finding some principal components under a certain basis (which is usually not the standard basis, as that would give weird-looking results) that project down to give the same projection as your original one.
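A minimal sketch of compressed-sensing-style recovery (my own illustration, not from the thread): given y = Ax with m < n and x sparse, one common approach is to solve the l1-regularized least-squares problem with ISTA (iterative soft-thresholding). All parameters below are illustrative choices, not recommendations.

```python
# Sketch: recover a sparse x from underdetermined measurements y = A x by
# solving  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1  with ISTA.
import numpy as np

def ista(A, y, lam=0.05, n_iters=500):
    """Iterative soft-thresholding for l1-regularized least squares."""
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = largest eig of A^T A
    for _ in range(n_iters):
        g = x - step * A.T @ (A @ x - y)     # gradient step on the quadratic
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(2)
m, n, k = 40, 100, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # m < n: underdetermined

x_hat = ista(A, y)
print(np.linalg.norm(x_hat - x_true))         # small when A and sparsity cooperate
```

This is exactly the regime where a left inverse of A cannot exist, yet x is still (approximately) recoverable because of its sparse structure.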
 
  • #10
D H said:
Given that m<n, you *cannot* find a matrix S such that SA = I(n,n). Such a matrix does not exist.

Yes, you're right; that's why I'm actually looking for numerically approximate methods to handle this problem.
 
  • #11
Office_Shredder said:
Thinking about it a bit more I realize that you are likely trying to do compressed sensing:
http://en.wikipedia.org/wiki/Compressed_sensing

where what you really want to do is given y = Ax solve for what x is. There are known algorithms for calculating x assuming it has certain structures (such as sparsity) that images typically satisfy or come close to satisfying in the right basis.

The typical setup is a bit different from principal component analysis - once you've done principal component analysis, the whole point is that the extra information is noise and you are throwing it away entirely - there is no way to recover it and no reason you should want to. Compressed sensing works a bit differently - you take a (typically random) projection of the data down to a lower dimension, and then the inversion problem is finding some principal components under a certain basis (which is usually not the standard basis, as that would give weird-looking results) that project down to give the same projection as your original one.

Thank you very much, Office_Shredder! You saved my day! The link you gave tackles exactly the problem I'm facing. I'm reading it now, and it seems it could provide useful information for my project :)
 