Continuity of matrix multiplication and inversion in a normed vector space?

In summary: You are doing these problems the hard way. The easy way to do these is to view GL(n,R) as an open subspace of R^(n^2) in the obvious way. This means that determining the continuity of multiplication and inversion is equivalent to determining if each of the component functions for multiplication and inversion is continuous. For multiplication this is obvious since polynomial arithmetic is continuous, and for inversion it follows easily by considering the adjugate matrix. Edit: The nice thing about looking at things this way is that you get smoothness automatically as well.
  • #1
Arian.D

Homework Statement



Hi guys,

I'm trying to prove that matrix inversion is continuous. In other words, I'm trying to show that in a normed vector space the map [itex] \varphi: GL(n,R) \to GL(n,R) [/itex] defined by [itex] \varphi(A) = A^{-1}[/itex] is continuous.

Homework Equations



The norm we're working with in class is [itex] ||A|| = \sup\{ |AX| : |X| \leq 1 \} [/itex], where |X| refers to the Euclidean length of the vector X. So the topology we define on L(Rn) (the set of all linear transformations from Rn to itself) is the metric topology this norm induces on it.

The results that I already know are:

1- If A is a linear transformation in GL(n,R) and we have [itex] ||B-A|| . ||A^{-1}|| < 1 [/itex] then B is also in GL(n,R).
2- GL(n,R) is an open set in L(Rn).

The Attempt at a Solution



Well, I attempted to prove it by the classical epsilon-delta definition of continuity. Even though I failed, I concluded some things that might be useful for a solution:

What I should prove is:

[itex] \forall \epsilon>0 , \exists \delta>0: ||X-A||< \delta \implies ||X^{-1} - A^{-1}||< \epsilon [/itex]

We know that if [itex] ||X|| < 1 [/itex] then I-X is invertible and its inverse is given by the series:
[tex](I-X)^{-1} = \sum_{k=0}^\infty X^k [/tex]

By writing [itex] X=I-(I-X)[/itex] it's easy to see that, whenever [itex] ||I-X|| < 1 [/itex]:

[tex] X^{-1} = \sum_{k=0}^\infty (I-X)^k[/tex]
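As a quick numerical sanity check (my own illustration, not part of the proof): for a matrix X with [itex]||I-X|| < 1[/itex], the partial sums of this series really do converge to the inverse. The 2x2 matrix below is made up for the check.

```python
# Check that partial sums of sum_k (I - X)^k converge to X^{-1}
# when ||I - X|| < 1.  The 2x2 matrix X here is made up for the check.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

I = [[1.0, 0.0], [0.0, 1.0]]
X = [[1.1, 0.2], [0.1, 0.9]]                      # close to I, so ||I - X|| < 1

E = [[I[i][j] - X[i][j] for j in range(2)] for i in range(2)]   # E = I - X

total, power = I, I                               # accumulates sum_{k=0}^{N} E^k
for _ in range(200):
    power = mat_mul(power, E)
    total = mat_add(total, power)

# Exact inverse of X via the 2x2 cofactor formula, for comparison
det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
inv = [[ X[1][1] / det, -X[0][1] / det],
       [-X[1][0] / det,  X[0][0] / det]]

err = max(abs(total[i][j] - inv[i][j]) for i in range(2) for j in range(2))
print(err < 1e-9)   # True: the series has converged to X^{-1}
```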

So we have:

[tex] (X^{-1}A)^{-1} = A^{-1}X = \sum_{k=0}^\infty (I-X^{-1}A)^k[/tex]

This implies that for any matrix X close to A in GL(n,R) (we have defined a topology on GL(n,R), so I can talk about closeness), I can write down:

[tex] A^{-1} = (\sum_{k=0}^\infty (I-X^{-1}A)^k) X^{-1} = \sum_{k=0}^\infty (X^{-1}(X-A))^k X^{-1} [/tex]

Now we can see that:

[tex] ||X^{-1} - A^{-1}|| = ||X^{-1} - \sum_{k=0}^\infty (X^{-1}(X-A))^k X^{-1}|| = ||\sum_{k=1}^\infty (X^{-1}(X-A))^k X^{-1}|| \leq \sum_{k=1}^\infty (||X^{-1}||.||(X-A)||)^k ||X^{-1}||[/tex]

On the other hand, if [itex] ||X-A|| < \delta [/itex] we can conclude that:

[tex] ||X-A||^k < \delta^k \implies ||X^{-1}||^k ||X-A||^k < ||X^{-1}||^k \delta^k \implies \sum_{k=1}^\infty ||X^{-1}||^k||X-A||^k < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k \implies \sum_{k=1}^\infty ||X^{-1}(X-A)||^k < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k [/tex]
[tex]\implies \sum_{k=1}^\infty (||X^{-1}||.||(X-A)||)^k ||X^{-1}|| < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}||[/tex]

so far I've shown that:

[tex] ||X^{-1} - A^{-1}|| < \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}||[/tex]

Now if for any given epsilon I can find a delta in this inequality, then I'm done:

[tex] \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}|| = \frac{\delta ||X^{-1}||^2}{1-\delta||X^{-1}||}< \epsilon[/tex]

If I could show that delta can be expressed as a function of epsilon I'd be done, but unfortunately I have no idea how to do that.

Maybe what I've done so far is nonsense, or maybe I'm making it too hard. Any ideas on how to go further with my proof are appreciated. Also, if you know a shorter way to prove that matrix inversion is continuous, that would be great. I think I'm pretty bad at writing epsilon-delta continuity proofs.

I have another question too, is matrix multiplication continuous?
Oops, I noticed it just now: I posted this in the wrong section :/ Please move it to the homework section. Sorry about that.
 
  • #2
Arian.D said:
The norm that we're working in the class is [itex] ||A|| = \sup\{ Ax : |x| \leq 1 \} [/itex]
I guess you mean [itex] ||A|| = \sup\{ |Ax| : |x| \leq 1 \} [/itex]
Now if for any give epsilon I find delta in this inequality then I'm done:

[tex] \sum_{k=1}^\infty ||X^{-1}||^k \delta^k ||X^{-1}|| = \frac{\delta ||X^{-1}||^2}{1-\delta||X^{-1}||}< \epsilon[/tex]

If I could show that delta could be valuated as a function of epsilon then I was done, but unfortunately I have no idea on how to do that.
For sufficiently small delta (how small?) you can ensure the denominator is positive. That allows you to multiply out and rearrange the inequality.
 
  • #3
haruspex said:
I guess you mean [itex] ||A|| = \sup\{ |Ax| : |x| \leq 1 \} [/itex]
Yup. Sorry for the typo.

For sufficiently small delta (how small?) you can ensure the denominator is positive. That allows you to multiply out and rearrange the inequality.

But I still need to know how small it should be! And rearranging the inequality doesn't help, at least I can't see how it helps at this point :(
 
  • #4
Writing K for the norm of [itex]X^{-1}[/itex], you have
[itex]\frac{δK^2}{1-δK} < ε[/itex]
Now, this is really working it backwards, but all the steps are reversible...
If δ < 1/K:
[itex]δK^2<(1-δK)ε[/itex]
[itex]δK^2+δKε<ε[/itex]
[itex]δ<\frac{ε}{K^2+Kε}[/itex]
So the choice of δ must be ...?
 
  • #5
haruspex said:
Writing K for the norm of [itex]X^{-1}[/itex], you have
[itex]\frac{δK^2}{1-δK} < ε[/itex]
Now, this is really working it backwards, but all the steps are reversible...
If δ < 1/K:
[itex]δK^2<(1-δK)ε[/itex]
[itex]δK^2+δKε<ε[/itex]
[itex]δ<\frac{ε}{K^2+Kε}[/itex]
So the choice of δ must be ...?

Ah... How naive of me not to have seen that already!

Delta could be any number less than the minimum of 1/K and [itex]\frac{ε}{K^2+Kε}[/itex]. Right?
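As a sanity check of the algebra above (my own, not part of the thread), the delta choice can be verified numerically: for any K and epsilon, every delta strictly below the minimum of 1/K and ε/(K²+Kε) keeps the denominator 1−δK positive and the whole bound below epsilon.

```python
# Arithmetic check of the delta choice: for several K = ||X^{-1}|| and eps,
# any delta strictly below min(1/K, eps/(K**2 + K*eps)) keeps the
# denominator 1 - delta*K positive and the bound delta*K^2/(1 - delta*K)
# strictly below eps.
for K in (0.5, 1.0, 3.0, 10.0):
    for eps in (0.01, 0.1, 1.0):
        delta = 0.9 * min(1.0 / K, eps / (K * K + K * eps))
        assert 1.0 - delta * K > 0.0                   # denominator positive
        assert delta * K * K / (1.0 - delta * K) < eps
print("ok")
```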
 
  • #6
Another question, is matrix multiplication continuous as well? If yes, how can I prove that? In general, how do we show that a function from [itex] G \times G \to G[/itex] is continuous?
 
  • #7
Arian.D said:
Delta could be any number less than the minimum of 1/K and [itex]\frac{ε}{K^2+Kε}[/itex]. Right?
Yes.
For multiplication, I'm pretty sure that would be continuous too.
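One way to see why multiplication is continuous (my own illustration, not from the thread): the identity AB − A′B′ = A(B − B′) + (A − A′)B′ plus the triangle inequality gives ||AB − A′B′|| ≤ ||A||·||B − B′|| + ||A − A′||·||B′|| for any submultiplicative norm, so small perturbations of A and B perturb the product only a little. A sketch in Python using the Frobenius norm (which is submultiplicative, like the operator norm); the matrices are made up for the check:

```python
def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(a, b):
    return [[a[i][j] - b[i][j] for j in range(2)] for i in range(2)]

def fro(a):
    # Frobenius norm: submultiplicative, so the bound below applies
    return sum(x * x for row in a for x in row) ** 0.5

A  = [[1.0, 2.0], [0.5, -1.0]]
B  = [[0.3, 1.0], [2.0, 0.7]]
A2 = [[1.01, 2.0], [0.5, -0.98]]   # small perturbations of A and B
B2 = [[0.3, 1.02], [1.99, 0.7]]

# AB - A2*B2 = A(B - B2) + (A - A2)B2, so by the triangle inequality
# and submultiplicativity:
lhs = fro(sub(mul(A, B), mul(A2, B2)))
rhs = fro(A) * fro(sub(B, B2)) + fro(sub(A, A2)) * fro(B2)
print(lhs <= rhs)   # True
```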
 
  • #8
You are doing these problems the hard way. The easy way to do these is to view GL(n,R) as an open subspace of R^(n^2) in the obvious way. This means that determining the continuity of multiplication and inversion is equivalent to determining if each of the component functions for multiplication and inversion is continuous. For multiplication this is obvious since polynomial arithmetic is continuous, and for inversion it follows easily by considering the adjugate matrix.

Edit: The nice thing about looking at things this way is that you get smoothness automatically as well.
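To make the adjugate point concrete (my own sketch, not from the thread): in the 2x2 case, A^{-1} = adj(A)/det(A) writes each entry of the inverse as a rational function of the entries of A, which is smooth wherever det(A) ≠ 0.

```python
# 2x2 inverse via the adjugate: each output entry is a rational function
# of the input entries, hence smooth on the set where det(A) != 0.
def inverse_2x2(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    if det == 0:
        raise ValueError("matrix is singular")
    # adj(A) for 2x2: swap the diagonal, negate the off-diagonal
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

A = [[2.0, 1.0], [1.0, 1.0]]       # det = 1
inv = inverse_2x2(A)
print(inv)   # [[1.0, -1.0], [-1.0, 2.0]]
```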
 

1. What is continuity in matrix multiplication and inversion?

Continuity of matrix multiplication and inversion is the property that small changes in the input produce small changes in the output. In other words, if we perturb the input matrices or vectors slightly, the resulting change in the output should also be small.

2. How is continuity of matrix multiplication and inversion defined in a normed vector space?

In a normed vector space, continuity of matrix multiplication and inversion is defined using the norm as the measure of distance. A norm is a function that assigns a non-negative length to a vector or matrix, and it satisfies certain properties. In this context, a map is continuous at A if for every ε > 0 there is a δ > 0 such that every input within distance δ of A (in norm) produces an output within distance ε of the output at A.

3. Why is continuity important in matrix multiplication and inversion?

Continuity is important because it ensures that small changes in the input do not result in large, unpredictable changes in the output. This is particularly useful in applications where the input data may have small errors or uncertainties, as it guarantees that the resulting output will also be close to the true value.

4. How does the continuity of matrix multiplication and inversion affect the accuracy of the solutions?

The continuity of matrix multiplication and inversion is directly related to the accuracy of the solutions. In a normed vector space, the continuity property guarantees that the solutions obtained through matrix multiplication and inversion will be close to the true solutions, even if the input data is slightly perturbed.

5. Can continuity be violated in matrix multiplication and inversion?

On GL(n,R) itself, inversion is continuous, so continuity is never violated there. However, the inverse map behaves badly near the boundary of GL(n,R): as det(A) approaches zero, ||A^{-1}|| blows up, and arbitrarily small changes to a nearly singular matrix can produce enormous changes in its inverse. At a singular matrix, inversion is not defined at all, which is why such problems are called ill-conditioned or ill-posed.
