Inverse of the sum of two matrices

Luck0
Suppose I have a matrix M = A + εB, where ε << 1.

If A is invertible, under some assumptions I can write a Neumann series

##M^{-1} = \left(I - \varepsilon A^{-1}B + \varepsilon^2 (A^{-1}B)^2 - \cdots\right)A^{-1}##

But if A is not invertible, how can I expand ##M^{-1}## in powers of ε?

Thanks in advance
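For the invertible case, a quick numerical sanity check of the truncated Neumann series (the matrices ##A## and ##B## below are made up purely for illustration):

```python
import numpy as np

# Hypothetical example: any invertible A and any B will do.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[0.5, -1.0],
              [1.5,  0.7]])
eps = 1e-3

M = A + eps * B
Ainv = np.linalg.inv(A)

# Neumann series: M^{-1} = (I - C + C^2 - ...) A^{-1} with C = eps * A^{-1} B,
# valid whenever the spectral radius of C is below 1.
C = eps * Ainv @ B
approx = (np.eye(2) - C + C @ C) @ Ainv  # truncated after the eps^2 term

err = np.linalg.norm(approx - np.linalg.inv(M))
print(err)  # of order eps^3
```

With ε = 10⁻³ the truncation error is of order ε³, which is why the series is useful when ε ≪ 1.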
 
Luck0 said:
Suppose I have a matrix M = A + εB, where ε << 1.

If A is invertible, under some assumptions I can write a Neumann series

##M^{-1} = \left(I - \varepsilon A^{-1}B + \varepsilon^2 (A^{-1}B)^2 - \cdots\right)A^{-1}##

But if A is not invertible, how can I expand ##M^{-1}## in powers of ε?

Thanks in advance
I smell Zariski, my favorite topology. The first thing you should ask is: what for? Since invertible matrices are Zariski-dense in the space of all matrices, a small variation of ##\varepsilon## should give you an invertible matrix ##M##. However, this process isn't stable for algebraic properties like eigenvalues, nilpotency, or similar. Of course not, as we have already changed the determinant in the first place. So the "what for" is essential to any answer. Are we interested in topological properties or algebraic properties? What's fine for the first might be a no-go for the second, and vice versa.

Also commutation properties between ##A## and ##B## could play a role. In general, you'll always find an element ##X \in \{A+\delta \cdot I\,\vert \,\delta \ll 1\}## which is invertible.
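A small sketch of that last remark, with a made-up singular ##A##: shifting by ##\delta \cdot I## restores invertibility as long as ##-\delta## is not an eigenvalue of ##A##.

```python
import numpy as np

# Hypothetical singular matrix: rank 1, so det(A) = 0.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

delta = 1e-6
X = A + delta * np.eye(2)

print(np.linalg.det(A))  # 0 (up to rounding)
print(np.linalg.det(X))  # ~5e-6: nonzero, so X is invertible
# This works because the eigenvalues of A here are 0 and 5,
# and -delta is not among them.
```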
 
fresh_42 said:
I smell Zariski, my favorite topology. The first thing you should ask is: what for? Since invertible matrices are Zariski-dense in the space of all matrices, a small variation of ##\varepsilon## should give you an invertible matrix ##M##. However, this process isn't stable for algebraic properties like eigenvalues, nilpotency, or similar. Of course not, as we have already changed the determinant in the first place. So the "what for" is essential to any answer. Are we interested in topological properties or algebraic properties? What's fine for the first might be a no-go for the second, and vice versa.

Also commutation properties between ##A## and ##B## could play a role. In general, you'll always find an element ##X \in \{A+\delta \cdot I\,\vert \,\delta \ll 1\}## which is invertible.

I'm more interested in algebraic properties. In fact, I want a closed form for the coefficients of ##M^{-1}## in powers of ##\epsilon##. The problem is that in my calculation, if I make ##A \to A + \delta I##, I'll have to keep terms in ##\delta##, which is something I want to avoid, because it looks a bit like cheating.
 
Luck0 said:
I'm more interested in algebraic properties. In fact, I want a closed form for the coefficients of ##M^{-1}## in powers of ##\epsilon##. The problem is that in my calculation, if I make ##A \to A + \delta I##, I'll have to keep terms in ##\delta##, which is something I want to avoid, because it looks a bit like cheating.
It is cheating, if you're interested in algebraic properties. But the ##\delta## is arbitrary to some extent, so you can just as well take an expression in ##\varepsilon##. I remember that Volker Strassen used similar concepts to generically determine the computational rank of tensors. Unfortunately, I can't remember an exact citation. But the keyword here is "generic", which means "most" in a topological sense, and thus results in quantitative rather than qualitative statements.

As an example: the zero matrix isn't invertible, whereas all ##\operatorname{diag}(\varepsilon, \delta)## are, although their distance from the zero matrix is arbitrarily small. However, they have few algebraic properties in common.
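A quick illustration of that example, with arbitrary hypothetical values of ##\varepsilon## and ##\delta##: the perturbed diagonal matrix is as close to the zero matrix as we like in norm, yet has full rank, and its inverse blows up as the perturbation shrinks.

```python
import numpy as np

eps, delta = 1e-8, 2e-8  # arbitrary small hypothetical values
D = np.diag([eps, delta])

print(np.linalg.norm(D))         # tiny: D is arbitrarily close to the zero matrix
print(np.linalg.matrix_rank(D))  # 2, whereas the zero matrix has rank 0
print(np.linalg.inv(D))          # diag(1/eps, 1/delta): entries blow up as eps, delta -> 0
```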
 
fresh_42 said:
It is cheating, if you're interested in algebraic properties. But the ##\delta## is arbitrary to some extent, so you can just as well take an expression in ##\varepsilon##. I remember that Volker Strassen used similar concepts to generically determine the computational rank of tensors. Unfortunately, I can't remember an exact citation. But the keyword here is "generic", which means "most" in a topological sense, and thus results in quantitative rather than qualitative statements.

As an example: the zero matrix isn't invertible, whereas all ##\operatorname{diag}(\varepsilon, \delta)## are, although their distance from the zero matrix is arbitrarily small. However, they have few algebraic properties in common.

I see. Thanks for the answers!
 