About invertible and diagonalizable matrices

  • Context: Graduate
  • Thread starter: jostpuur
  • Tags: Matrices

Discussion Overview

The discussion revolves around the properties of invertible and diagonalizable matrices, particularly in the context of the inverse function theorem and its proof as presented in a book on differential manifolds. Participants explore the implications of assuming invertibility and diagonalizability, as well as the application of the mean value theorem in a specific proof.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether the assumption that all invertible matrices are diagonalizable is valid in the context of the inverse function theorem.
  • Another participant clarifies that the transformation mentioned in the proof is simply taking the inverse of the Jacobian matrix, not an assumption about diagonalizability.
  • A later post shifts focus to the mean value theorem, raising concerns about the proof's treatment of the norm of a matrix and the inclusion of the term ||g(0)||, suggesting it may not have been adequately addressed.
  • Concerns are expressed regarding the differentiability of the mapping involved in the mean value theorem application, particularly if the function reaches zero at some point.
  • One participant expresses confidence in their understanding of the matter, indicating progress in their reasoning.

Areas of Agreement / Disagreement

Participants do not reach consensus on the validity of assuming diagonalizability for invertible matrices, and there are differing views on the application of the mean value theorem in the proof discussed. The discussion remains unresolved regarding these points.

Contextual Notes

There are limitations in the discussion regarding the assumptions made about the differentiability of the function g and the treatment of the norm without explicit definitions. The proof's handling of the term ||g(0)|| is also noted as potentially problematic.

jostpuur
Hello, I'm reading this book http://freescience.info/go.php?pagename=books&id=1041 about differential manifolds. In the appendix this book gives a proof of the inverse function theorem. It assumes that the Jacobian matrix $Df_a$ is invertible (where $a$ is the point at which it is evaluated), and then it says: "By an affine transformation $x \mapsto Ax + b$ we can assume that $a = 0$ and $Df_a = 1$." Isn't this the same thing as assuming that all invertible matrices are diagonalizable? And isn't that assumption wrong?
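(For the record, the worry is justified as a statement about matrices in isolation: invertibility does not imply diagonalizability. A minimal numerical sketch, using the standard 2×2 shear counterexample, which is not from the book:)

```python
import numpy as np

# The shear matrix: invertible (det = 1), but its only eigenvalue is 1
# with algebraic multiplicity 2, while the eigenspace ker(A - I) is
# one-dimensional -- so there is no basis of eigenvectors.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

assert abs(np.linalg.det(A) - 1.0) < 1e-12  # invertible

# geometric multiplicity of eigenvalue 1 = dim ker(A - I) = 2 - rank(A - I)
geometric_multiplicity = 2 - np.linalg.matrix_rank(A - np.eye(2))
print(geometric_multiplicity)  # 1, which is < 2, so A is not diagonalizable
```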
 
No, it is not the same thing. It is simply taking $A$ to be the inverse of $Df_a$.
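(This normalization can be checked numerically. A minimal sketch, where the test map $f$ and all names are illustrative, not the book's: given $f$ with invertible Jacobian $Df_a$, the map $F(x) = (Df_a)^{-1}\,(f(a+x) - f(a))$ satisfies $F(0) = 0$ and $DF_0 = I$, with no diagonalization anywhere.)

```python
import numpy as np

def f(x):
    # an arbitrary smooth test map R^2 -> R^2 (illustrative choice)
    return np.array([np.exp(x[0]) + x[1], x[0] * x[1] + 2.0 * x[1]])

def jacobian(g, x, h=1e-6):
    # central finite-difference approximation of the Jacobian of g at x
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (g(x + e) - g(x - e)) / (2.0 * h)
    return J

a = np.array([0.3, -0.7])
Dfa_inv = np.linalg.inv(jacobian(f, a))  # the "A" of the affine change

def F(x):
    # normalized map: translate a to 0, subtract f(a), undo Df_a
    return Dfa_inv @ (f(a + x) - f(a))

print(F(np.zeros(2)))            # ~ [0, 0]
print(jacobian(F, np.zeros(2)))  # ~ 2x2 identity matrix
```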
 
All right :blushing:
 
about use of the mean value theorem

I haven't made much progress with this proof. The problem is no longer about diagonalizability (well, it wasn't in the beginning either...), but since I started discussing this proof here, I might as well continue here.

First it says that there exists an $r$ such that $\|Dg_x\| < 1/2$ holds for $\|x\| < 2r$. A small question: when the norm of a matrix is written without explanation, does it usually mean the operator norm $\|A\| := \sup_{\|x\| \le 1} \|Ax\|$? Anyway, it then says that "It follows from the mean value theorem that $\|g(x)\| < \|x\|/2$". I ran into some problems at this step.
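(On the norm question: yes, an undecorated matrix norm in this setting usually means the operator norm induced by the vector norm, which for the Euclidean norm equals the largest singular value. A quick numerical sanity check with a random matrix, as a sketch:)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# operator (spectral) norm: largest singular value of A
op_norm = np.linalg.norm(A, 2)

# brute-force the sup of ||Ax|| over many random unit vectors x
xs = rng.standard_normal((3, 100_000))
xs /= np.linalg.norm(xs, axis=0)
brute = np.max(np.linalg.norm(A @ xs, axis=0))

print(op_norm, brute)  # brute never exceeds op_norm, and nearly attains it
```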

Doesn't the mean value theorem in this case say that there exists a $\lambda$, $0 \le \lambda \le 1$, such that
$$\|g(x)\| = \left.\frac{d}{d\lambda'}\|g(\lambda' x)\|\right|_{\lambda'=\lambda} + \|g(0)\|\,?$$
I computed
$$\frac{d}{d\lambda'}\|g(\lambda' x)\| = \sum_{i=1}^n \frac{g_i(\lambda' x)\,\bigl(x \cdot \nabla g_i(\lambda' x)\bigr)}{\|g(\lambda' x)\|} = \frac{g^T(\lambda' x)\,(Dg_{\lambda' x})\,x}{\|g(\lambda' x)\|}$$
after which I could estimate
$$\|g(x)\| \le \|Dg_{\lambda x}\|\,\|x\| + \|g(0)\| \le \frac{1}{2}\|x\| + \|g(0)\|$$
A big difference is that the proof in the book didn't have the $\|g(0)\|$ term. Perhaps that is not a big problem; we can get rid of it by redefining the original function with a translation of the image. Still, it is strange that this was not mentioned in the proof, so is there something I'm already getting wrong here?
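(The derived bound $\|g(x)\| \le \frac{1}{2}\|x\| + \|g(0)\|$ can be sanity-checked numerically. A minimal sketch with a hypothetical map $g$ whose derivative norm stays below $1/2$ everywhere; the particular $g$ is my example, not the book's:)

```python
import numpy as np

def g(x):
    # componentwise 0.4*sin(x) + 0.1: the Jacobian is diag(0.4*cos(x_i)),
    # so ||Dg_x|| <= 0.4 < 1/2 for all x, and g(0) = (0.1, 0.1, 0.1) != 0
    return 0.4 * np.sin(x) + 0.1

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(-2.0, 2.0, size=3)
    lhs = np.linalg.norm(g(x))
    rhs = 0.5 * np.linalg.norm(x) + np.linalg.norm(g(np.zeros(3)))
    assert lhs <= rhs + 1e-12
print("bound holds on all samples")
```

Note the $\|g(0)\|$ term is genuinely needed for this $g$: at $x = 0$ the left side is $\|g(0)\| > 0$ while $\frac{1}{2}\|x\| = 0$.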

Another matter is that the mapping $\lambda \mapsto \|g(\lambda x)\|$ is not necessarily differentiable if $g$ reaches zero at some $\lambda$, and I cannot see how to justify that $g$ would remain nonzero here. So the use of the mean value theorem doesn't seem fully justified.
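(One standard way to sidestep the differentiability of the norm, suggested here as a sketch rather than something stated in the book, is the integral form of the mean value inequality:
$$g(x) - g(0) = \int_0^1 Dg_{tx}\, x \, dt \quad\Longrightarrow\quad \|g(x) - g(0)\| \le \int_0^1 \|Dg_{tx}\|\,\|x\|\, dt \le \frac{1}{2}\|x\|$$
valid for all $\|x\| < 2r$, since every point $tx$ on the segment satisfies $\|tx\| < 2r$. This yields the book's bound directly when $g(0) = 0$, without ever differentiating $\lambda \mapsto \|g(\lambda x)\|$.)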
 
I think I got this matter handled.

At least my problem is not that I cannot think in complicated enough terms.
 
