Inverse of the sum of two matrices

  • Context: Undergrad
  • Thread starter: Luck0
  • Tags: Inverse, Matrices, Sum

Discussion Overview

The discussion revolves around the expansion of the inverse of a matrix ##M = A + \varepsilon B##, particularly focusing on the case when ##A## is not invertible. Participants explore the implications of this scenario for algebraic and topological properties, as well as the challenges in deriving a closed form for the coefficients of ##M^{-1}## in powers of ##\varepsilon##.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • Some participants propose using a Neumann series expansion for ##M^{-1}## when ##A## is invertible, but question how to proceed when ##A## is not invertible.
  • One participant mentions that invertible matrices are Zariski-dense, suggesting that small variations in ##\varepsilon## could yield an invertible matrix ##M##, but raises concerns about the stability of algebraic properties.
  • Another participant expresses a desire for a closed form for the coefficients of ##M^{-1}## in powers of ##\varepsilon##, while noting the complications introduced by keeping terms in ##\delta## when modifying ##A##.
  • It is suggested that using ##\delta## could be seen as "cheating" when focusing on algebraic properties, yet it is acknowledged that ##\delta## is arbitrary to some extent.
  • One participant recalls that similar concepts have been used by Volker Strassen to generically determine the computational rank of tensors, emphasizing the distinction between topological and algebraic properties.
  • Examples are provided to illustrate the differences between invertibility and algebraic properties, particularly regarding matrices close to the zero matrix.

Areas of Agreement / Disagreement

Participants express differing views on the implications of modifying ##A## and the relevance of topological versus algebraic properties. There is no consensus on how to approach the expansion of ##M^{-1}## when ##A## is not invertible, and the discussion remains unresolved.

Contextual Notes

Participants highlight limitations related to the stability of algebraic properties when altering ##A## and the implications of using ##\delta## in calculations. The discussion also reflects uncertainty regarding the appropriate framework for addressing the problem.

Luck0
Suppose I have a matrix ##M = A + \varepsilon B##, where ##\varepsilon \ll 1##.

If ##A## is invertible, then under some assumptions (convergence requires ##\varepsilon \lVert A^{-1}B \rVert < 1##) I can write the Neumann series

##M^{-1} = (I + \varepsilon A^{-1}B)^{-1}A^{-1} = \sum_{k=0}^{\infty} (-\varepsilon)^{k} (A^{-1}B)^{k} A^{-1},##

which to first order gives ##M^{-1} \approx (I - \varepsilon A^{-1}B)A^{-1}##.

But if ##A## is not invertible, how can I expand ##M^{-1}## in powers of ##\varepsilon##?

Thanks in advance
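As a quick numerical sanity check of the invertible case, a minimal NumPy sketch (the matrices ##A##, ##B## and the value of ##\varepsilon## are illustrative choices, not from the thread) compares the partial sums of the Neumann series against the exact inverse:

```python
import numpy as np

# Hypothetical small example: for invertible A and small eps, the
# partial sums of the Neumann series
#   M^{-1} = sum_{k>=0} (-eps)^k (A^{-1} B)^k A^{-1}
# converge to the exact inverse of M = A + eps*B.
A = np.array([[2.0, 1.0], [0.0, 3.0]])
B = np.array([[1.0, 4.0], [2.0, 1.0]])
eps = 1e-3
M = A + eps * B

Ainv = np.linalg.inv(A)
C = Ainv @ B  # convergence needs eps * ||C|| < 1

approx = np.zeros_like(A)
term = np.eye(2)
for k in range(10):            # sum the first 10 terms of the series
    approx += term @ Ainv
    term = term @ (-eps * C)

exact = np.linalg.inv(M)
print(np.max(np.abs(approx - exact)))  # residual is negligible
```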
 
Luck0 said:
Suppose I have a matrix ##M = A + \varepsilon B##, where ##\varepsilon \ll 1##.

If ##A## is invertible, then under some assumptions (convergence requires ##\varepsilon \lVert A^{-1}B \rVert < 1##) I can write the Neumann series

##M^{-1} = (I + \varepsilon A^{-1}B)^{-1}A^{-1} = \sum_{k=0}^{\infty} (-\varepsilon)^{k} (A^{-1}B)^{k} A^{-1},##

which to first order gives ##M^{-1} \approx (I - \varepsilon A^{-1}B)A^{-1}##.

But if ##A## is not invertible, how can I expand ##M^{-1}## in powers of ##\varepsilon##?

Thanks in advance
I smell Zariski, my favorite topology. The first thing you should ask is: what for? Since invertible matrices are Zariski-dense in the space of all matrices, a small variation of ##\varepsilon## should give you an invertible matrix ##M##. However, this process isn't stable for algebraic properties like eigenvalues, nilpotency, or similar. Of course not, as we already changed the determinant in the first place. So the 'what for' is essential to any answer. Are we interested in topological properties or algebraic properties? What's fine for the first might be a no-go for the second, and vice versa.

Also, commutation properties between ##A## and ##B## could play a role. In general, you'll always find an element ##X \in \{A+\delta \cdot I\,\vert \,\delta \ll 1\}## that is invertible.
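The instability described here can be illustrated numerically. In this hedged sketch (the nilpotent ##A## and the value of ##\delta## are made-up choices), shifting a singular ##A## to ##A + \delta I## makes it invertible but destroys its nilpotency:

```python
import numpy as np

# Hypothetical illustration of the Zariski-density remark: a singular
# (here even nilpotent) A becomes invertible as soon as we pass to
# A + delta*I with delta != 0, but algebraic properties such as
# nilpotency are destroyed by the shift.
A = np.array([[0.0, 1.0], [0.0, 0.0]])       # nilpotent: A @ A == 0
delta = 1e-6
X = A + delta * np.eye(2)

print(np.linalg.matrix_rank(A))              # 1 -> A is singular
print(np.linalg.det(X))                      # delta**2 != 0 -> X is invertible
print(np.allclose(X @ X, np.zeros((2, 2)))) # False -> X is not nilpotent
```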
 
fresh_42 said:
I smell Zariski, my favorite topology. The first thing you should ask is: what for? Since invertible matrices are Zariski-dense in the space of all matrices, a small variation of ##\varepsilon## should give you an invertible matrix ##M##. However, this process isn't stable for algebraic properties like eigenvalues, nilpotency, or similar. Of course not, as we already changed the determinant in the first place. So the 'what for' is essential to any answer. Are we interested in topological properties or algebraic properties? What's fine for the first might be a no-go for the second, and vice versa.

Also, commutation properties between ##A## and ##B## could play a role. In general, you'll always find an element ##X \in \{A+\delta \cdot I\,\vert \,\delta \ll 1\}## that is invertible.

I'm more interested in algebraic properties. In fact, I want a closed form for the coefficients of ##M^{-1}## in powers of ##\varepsilon##. The problem is that in my calculation, if I make ##A \to A + \delta I##, I'll have to keep terms in ##\delta##, which is something I want to avoid because it looks a bit like cheating.
 
Luck0 said:
I'm more interested in algebraic properties. In fact, I want a closed form for the coefficients of ##M^{-1}## in powers of ##\varepsilon##. The problem is that in my calculation, if I make ##A \to A + \delta I##, I'll have to keep terms in ##\delta##, which is something I want to avoid because it looks a bit like cheating.
It is cheating if you're interested in algebraic properties. But ##\delta## is arbitrary to some extent, so you can just as well take an expression in ##\varepsilon##. I remember that Volker Strassen used similar concepts to generically determine the computational rank of tensors. Unfortunately, I can't remember an exact citation. But the keyword here is "generic", which means 'almost all' in a topological sense, and thus results in quantitative rather than qualitative statements.

As an example: the zero matrix isn't invertible, whereas all matrices ##\operatorname{diag}(\varepsilon, \delta)## are, although their distance from the zero matrix is arbitrarily small. However, they have few algebraic properties in common.
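The ##\operatorname{diag}(\varepsilon, \delta)## example can be checked numerically; in this minimal sketch, the particular values of ##\varepsilon## and the choice ##\delta = \varepsilon/2## are arbitrary illustrations:

```python
import numpy as np

# diag(eps, delta) is invertible for any nonzero eps, delta, yet can
# be made arbitrarily close to the zero matrix, which is singular.
for eps in [1e-2, 1e-5, 1e-8]:
    delta = eps / 2                        # arbitrary nonzero choice
    D = np.diag([eps, delta])
    dist = np.linalg.norm(D)               # Frobenius distance to the zero matrix
    print(dist, np.linalg.det(D) != 0)     # distance shrinks, det stays nonzero
```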
 
fresh_42 said:
It is cheating if you're interested in algebraic properties. But ##\delta## is arbitrary to some extent, so you can just as well take an expression in ##\varepsilon##. I remember that Volker Strassen used similar concepts to generically determine the computational rank of tensors. Unfortunately, I can't remember an exact citation. But the keyword here is "generic", which means 'almost all' in a topological sense, and thus results in quantitative rather than qualitative statements.

As an example: the zero matrix isn't invertible, whereas all matrices ##\operatorname{diag}(\varepsilon, \delta)## are, although their distance from the zero matrix is arbitrarily small. However, they have few algebraic properties in common.

I see. Thanks for the answers!
 
