Continuity of matrix multiplication and inversion in a normed vector space?

Homework Help Overview

The discussion revolves around proving the continuity of matrix inversion in the context of a normed vector space, specifically focusing on the map defined by \varphi: GL(n,R) \to GL(n,R) where \varphi(A) = A^{-1}. The participants are also exploring the continuity of matrix multiplication.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • The original poster attempts to use the epsilon-delta definition of continuity to establish the continuity of matrix inversion, expressing concerns about how to derive a suitable delta from epsilon. Other participants suggest considering the norm of the inverse and manipulating inequalities to find a relationship between delta and epsilon. There are also inquiries about the continuity of matrix multiplication and how to approach proving it.

Discussion Status

Participants are actively engaging with the problem, providing insights and suggestions for approaching the continuity proof. Some have noted that the choice of delta could be constrained by certain conditions, while others are exploring the implications of viewing GL(n,R) as an open subspace of R^{n^2} to simplify the continuity arguments.

Contextual Notes

There is a mention of a typo regarding the norm definition, which has been clarified. The discussion also reflects on the challenges of applying the epsilon-delta definition in this context, with participants questioning the specifics of the continuity proof and the implications of their findings.

Arian.D

Homework Statement



Hi guys,

I'm trying to prove that matrix inversion is continuous. In other words, I'm trying to show that in a normed vector space the map \varphi: GL(n,R) \to GL(n,R) defined by \varphi(A) = A^{-1} is continuous.

Homework Equations



The norm that we're working with in class is ||A|| = \sup\{ |AX| : |X| \leq 1 \}, where |X| refers to the Euclidean length of the vector X. The topology we define on L(R^n) (the set of all linear transformations from R^n to itself) is the metric topology this norm induces.

The results that I already know are:

1- If A is a linear transformation in GL(n,R) and ||B-A|| \cdot ||A^{-1}|| < 1, then B is also in GL(n,R).
2- GL(n,R) is an open set in L(R^n).

The Attempt at a Solution



Well, I attempted to prove it by the classical epsilon-delta definition of continuity. Although I failed, I arrived at some partial results that might be useful for a solution:

What I should prove is:

\forall \epsilon>0 , \exists \delta>0: ||X-A||< \delta \implies ||X^{-1} - A^{-1}||< \epsilon

We know that if ||X|| < 1 then I-X is invertible and its inverse is given by the Neumann series:
(I-X)^{-1} = \sum_{k=0}^\infty X^k

By writing X = I-(I-X), it's easy to see that whenever ||I-X|| < 1:

X^{-1} = \sum_{k=0}^\infty (I-X)^k
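As a quick numerical sanity check of this series (a sketch I'm adding, not from the thread; NumPy, n = 3, with the spectral norm standing in for the operator norm):

```python
# Check: when ||I - X|| < 1 (operator norm), sum_{k>=0} (I - X)^k converges
# to X^{-1}.  All names here are illustrative, not from the thread.
import numpy as np

rng = np.random.default_rng(0)
I = np.eye(3)
X = I + 0.1 * rng.standard_normal((3, 3))   # X close to the identity
assert np.linalg.norm(I - X, 2) < 1         # hypothesis of the series

S = np.zeros((3, 3))
term = I.copy()
for _ in range(200):                        # partial sums of sum (I - X)^k
    S += term
    term = term @ (I - X)

err = np.linalg.norm(S - np.linalg.inv(X), 2)
print(err)                                  # should be near machine precision
```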

So we have:

(X^{-1}A)^{-1} = A^{-1}X = \sum_{k=0}^\infty (I-X^{-1}A)^k

This implies that for any matrix X close enough to A in GL(n,R) that ||I - X^{-1}A|| < 1 (we have defined a topology on GL(n,R), so I can talk about closeness), I can write down:

A^{-1} = (\sum_{k=0}^\infty (I-X^{-1}A)^k) X^{-1} = \sum_{k=0}^\infty (X^{-1}(X-A))^k X^{-1}

Now we can see that:

||X^{-1} - A^{-1}|| = ||X^{-1} - \sum_{k=0}^\infty (X^{-1}(X-A))^k X^{-1}|| = ||\sum_{k=1}^\infty (X^{-1}(X-A))^k X^{-1}|| \leq \sum_{k=1}^\infty (||X^{-1}|| \cdot ||X-A||)^k ||X^{-1}||

On the other hand, if ||X-A|| < \delta (with \delta small enough that \delta ||X^{-1}|| < 1, so the geometric series below converges), we can conclude that:

||X^{-1}(X-A)||^k \leq (||X^{-1}|| \cdot ||X-A||)^k < ||X^{-1}||^k \delta^k
\implies \sum_{k=1}^\infty (||X^{-1}|| \cdot ||X-A||)^k ||X^{-1}|| < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}||

so far I've shown that:

||X^{-1} - A^{-1}|| < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}||

Now, if for any given epsilon I can find delta in this inequality, then I'm done:

\sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}|| = \frac{\delta ||X^{-1}||^2}{1-\delta||X^{-1}||}< \epsilon

If I could express delta as a function of epsilon, I'd be done, but unfortunately I have no idea how to do that.

Maybe whatever I've done so far is nonsense, or maybe I'm making it too hard. Any ideas on how to go further with my proof are appreciated. Also, if you know a shorter way to prove that matrix inversion is continuous, that would be great. I think I'm pretty bad at writing epsilon-delta continuity proofs.

I have another question too, is matrix multiplication continuous?
Oops, I noticed it just now: I posted this in the wrong section :/ Please move it to the homework section. Sorry for that.
 
Arian.D said:
The norm that we're working in the class is ||A|| = \sup\{ Ax : |x| \leq 1 \}
I guess you mean ||A|| = \sup\{ |Ax| : |x| \leq 1 \}
Now, if for any given epsilon I can find delta in this inequality, then I'm done:

\sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}|| = \frac{\delta ||X^{-1}||^2}{1-\delta||X^{-1}||}< \epsilon

If I could express delta as a function of epsilon, I'd be done, but unfortunately I have no idea how to do that.
For sufficiently small delta (how small?) you can ensure the denominator is positive. That allows you to multiply out and rearrange the inequality.
 
haruspex said:
I guess you mean ||A|| = \sup\{ |Ax| : |x| \leq 1 \}
Yup. Sorry for the typo.

For sufficiently small delta (how small?) you can ensure the denominator is positive. That allows you to multiply out and rearrange the inequality.

But I still need to know how small it should be! And rearranging the inequality doesn't help; at least I can't see how it helps at this point :(
 
Writing K for the norm of X^{-1}, you have
\frac{\delta K^2}{1-\delta K} < \epsilon
Now, this is really working it backwards, but all the steps are reversible...
If \delta < 1/K:
\delta K^2 < (1-\delta K)\epsilon
\delta K^2 + \delta K \epsilon < \epsilon
\delta < \frac{\epsilon}{K^2+K\epsilon}
So the choice of δ must be ...?
 
haruspex said:
Writing K for the norm of X^{-1}, you have
\frac{\delta K^2}{1-\delta K} < \epsilon
Now, this is really working it backwards, but all the steps are reversible...
If \delta < 1/K:
\delta K^2 < (1-\delta K)\epsilon
\delta K^2 + \delta K \epsilon < \epsilon
\delta < \frac{\epsilon}{K^2+K\epsilon}
So the choice of δ must be ...?

Ah... How naive of me not to have seen that already!

Delta could be any number less than the minimum of 1/K and \frac{\epsilon}{K^2+K\epsilon}. Right?
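A numerical sanity check of this choice (my own sketch, not from the thread; I estimate K crudely as twice ||A^{-1}||, which bounds ||X^{-1}|| on the small ball used here):

```python
# Check: with delta strictly below min(1/K, eps/(K^2 + K*eps)), every X with
# ||X - A|| < delta satisfies ||X^{-1} - A^{-1}|| < eps.
# Assumptions: n = 3, spectral norm, K padded to cover ||X^{-1}|| near A.
import numpy as np

rng = np.random.default_rng(1)
A = np.eye(3) + 0.2 * rng.standard_normal((3, 3))
eps = 1e-2
K = 2 * np.linalg.norm(np.linalg.inv(A), 2)     # crude bound on inverse norms near A
delta = min(1 / K, eps / (K**2 + K * eps)) / 2  # strictly below the minimum

worst = 0.0
for _ in range(100):
    E = rng.standard_normal((3, 3))
    E *= 0.9 * delta / np.linalg.norm(E, 2)     # so that ||X - A|| < delta
    X = A + E
    worst = max(worst, np.linalg.norm(np.linalg.inv(X) - np.linalg.inv(A), 2))

print(worst < eps)
```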
 
Another question, is matrix multiplication continuous as well? If yes, how can I prove that? In general, how do we show that a function from G \times G \to G is continuous?
 
Arian.D said:
Delta could be any number less than the minimum of 1/K and \frac{\epsilon}{K^2+K\epsilon}. Right?
Yes.
For multiplication, I'm pretty sure that would be continuous as well.
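For the multiplication question, one standard route (my sketch; the thread doesn't carry it out) is to add and subtract a cross term and use submultiplicativity of the operator norm, ||MN|| \leq ||M|| \cdot ||N||:

```latex
XY - AB = (X - A)Y + A(Y - B)
\implies \|XY - AB\| \leq \|X - A\|\,\|Y\| + \|A\|\,\|Y - B\|
         \leq \delta\,(\|B\| + \delta) + \|A\|\,\delta
```

whenever ||X-A|| < \delta and ||Y-B|| < \delta (so that ||Y|| \leq ||B|| + \delta); the right-hand side goes to 0 as \delta \to 0, which gives continuity of (X,Y) \mapsto XY when G \times G carries, say, the norm \max(||X||, ||Y||).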
 
You are doing these problems the hard way. The easy way is to view GL(n,R) as an open subspace of R^{n^2} in the obvious way. This means that determining the continuity of multiplication and inversion is equivalent to determining whether each of their component functions is continuous. For multiplication this is obvious, since each component of the product is a polynomial in the entries, and for inversion it follows easily by considering the adjugate matrix.

Edit: The nice thing about looking at things this way is that you get smoothness automatically as well.
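To make the adjugate remark concrete, here is a small numerical sketch (my addition; the helper name below is illustrative). Every entry of adj(A) is a polynomial (a signed minor) in the entries of A, and A^{-1} = adj(A)/det(A), so each entry of A^{-1} is a rational function whose denominator doesn't vanish on GL(n,R), hence continuous (indeed smooth):

```python
# A^{-1} = adj(A) / det(A): entrywise a polynomial divided by a nonvanishing
# polynomial on GL(n,R), so inversion is continuous componentwise.
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix; every entry is a polynomial in A's entries."""
    n = A.shape[0]
    C = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[2.0, 1.0], [5.0, 3.0]])  # det(A) = 1
inv_via_adj = adjugate(A) / np.linalg.det(A)
print(np.allclose(inv_via_adj, np.linalg.inv(A)))  # True
```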
 
