About invertible and diagonalizable matrices

In summary, the conversation discusses a proof of the inverse function theorem in a book about differential manifolds. The proof assumes that the Jacobian matrix [itex]Df_a[/itex] is invertible and uses the mean value theorem to show that [itex]||g(x)|| < ||x||/2[/itex] for small [itex]||x||[/itex]. However, there are some issues with this step, including the presence of a [itex]||g(0)||[/itex] term and the differentiability of the mapping [itex]\lambda\mapsto ||g(\lambda x)||[/itex].
  • #1
jostpuur
Hello, I'm reading this book http://freescience.info/go.php?pagename=books&id=1041 about differential manifolds. In the appendix the book gives a proof of the inverse function theorem. It assumes that the Jacobian matrix [tex]Df_a[/tex] is invertible (where [itex]a[/itex] is the point at which it is evaluated), and then it says: "By an affine transformation [itex]x\mapsto Ax+b[/itex] we can assume that [itex]a=0[/itex] and [itex]Df_a=1[/itex]." Isn't this the same thing as assuming that all invertible matrices are diagonalizable? And isn't that assumption wrong?
 
  • #2
No, it is not the same thing. It is simply taking [itex]A[/itex] to be the inverse of [itex]Df_a[/itex].
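
To spell this out (a standard reconstruction of the normalization; the book's exact wording may differ): define

[tex]
F(x) := (Df_a)^{-1}\big(f(x+a) - f(a)\big).
[/tex]

Then [itex]F(0)=0[/itex], and by the chain rule [itex]DF_0 = (Df_a)^{-1}\,Df_a = 1[/itex]. No diagonalization is involved anywhere; [itex](Df_a)^{-1}[/itex] exists precisely because [itex]Df_a[/itex] is assumed invertible.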
 
  • #3
All right :blushing:
 
  • #4
About the use of the mean value theorem

I haven't made much progress with this proof. The problem is no longer about diagonalizability (well, it wasn't in the beginning either...), but since I started discussing this proof here, I might as well continue it here.

First it says that there exists an [itex]r[/itex] such that [itex]||Dg_x|| < 1/2[/itex] holds for [itex]||x||<2r[/itex]. A small question: when the norm of a matrix is written without explanation, does it usually mean the operator norm [itex]||A||:=\sup_{||x||\leq 1} ||Ax||[/itex]? Anyway, it then says: "It follows from the mean value theorem that [itex]||g(x)|| < ||x||/2[/itex]." I encountered some problems in this step.

Doesn't the mean value theorem in this case say that there exists some [itex]0<\lambda<1[/itex] such that
[tex]
||g(x)|| = \big(\frac{d}{d\lambda'}||g(\lambda' x)||\big) \Big|_{\lambda'=\lambda} + ||g(0)||
[/tex]
I computed
[tex]
\frac{d}{d\lambda'}||g(\lambda' x)|| = \sum_{i=1}^n \frac{g_i(\lambda'x) (x\cdot\nabla g_i(\lambda'x))}{||g(\lambda'x)||} = \frac{g^T(\lambda'x) (Dg_{\lambda'x}) x}{||g(\lambda'x)||}
[/tex]
after which, using the Cauchy–Schwarz inequality and [itex]||Dg_{\lambda x}x||\leq ||Dg_{\lambda x}||\,||x||[/itex], I could estimate
[tex]
||g(x)|| \leq ||Dg_{\lambda x}||\; ||x|| + ||g(0)|| \leq \frac{1}{2}||x|| + ||g(0)||
[/tex]
A big difference is that the proof in the book didn't have the [itex]||g(0)||[/itex] term. Perhaps that is not a big problem; we can get rid of it by composing the original function with a translation of the image. Still, it is strange that this was not mentioned in the proof, so is there something I'm already getting wrong here?
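
(A guess at why the term is absent; this is a common setup in proofs of the inverse function theorem, not necessarily this book's. After the normalization [itex]f(0)=0[/itex], [itex]Df_0=1[/itex], one typically defines

[tex]
g(x) := f(x) - x, \qquad\text{so that}\qquad g(0) = 0,\quad Dg_0 = 0,
[/tex]

in which case the [itex]||g(0)||[/itex] term vanishes from the estimate.)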

Another matter is that the mapping [itex]\lambda\mapsto ||g(\lambda x)||[/itex] is not necessarily differentiable if [itex]g[/itex] reaches zero at some [itex]\lambda[/itex], and I cannot see how to justify that [itex]g[/itex] would remain nonzero here. So the use of the mean value theorem doesn't seem fully justified.
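
(A standard way around this, sketched under the assumption that [itex]g[/itex] is [itex]C^1[/itex] on the ball [itex]||x||<2r[/itex]: avoid differentiating the norm altogether and use the integral form of the mean value inequality,

[tex]
g(x) - g(0) = \int_0^1 Dg_{tx}\,x\,dt
\qquad\Longrightarrow\qquad
||g(x) - g(0)|| \leq \sup_{0\leq t\leq 1} ||Dg_{tx}||\;||x|| \leq \frac{1}{2}||x||,
[/tex]

which needs no assumption that [itex]g[/itex] stays nonzero, and gives the book's bound directly once [itex]g(0)=0[/itex]. Alternatively, one can apply the one-dimensional mean value theorem to [itex]\varphi(\lambda) = u\cdot g(\lambda x)[/itex] for a fixed unit vector [itex]u[/itex], which is differentiable regardless of where [itex]g[/itex] vanishes.)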
 
  • #5
I think I got this matter handled.

At least my problem was not an inability to think in a complicated enough way.
 

1. What is the difference between an invertible and diagonalizable matrix?

An invertible matrix is a square matrix with non-zero determinant; equivalently, the system [itex]Ax=b[/itex] has a unique solution for every [itex]b[/itex]. A diagonalizable matrix is one that is similar to a diagonal matrix, i.e. [itex]A = PDP^{-1}[/itex] for some invertible [itex]P[/itex] and diagonal [itex]D[/itex]. The two properties are independent of each other.
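
Two standard textbook examples illustrate the independence:

[tex]
\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}
\quad\text{is invertible but not diagonalizable, while}\quad
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\quad\text{is diagonalizable but not invertible.}
[/tex]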

2. How can I determine if a matrix is invertible or diagonalizable?

A matrix is invertible if and only if its determinant is non-zero; the determinant can be computed, for example, by row reduction. An [itex]n\times n[/itex] matrix is diagonalizable if and only if it has [itex]n[/itex] linearly independent eigenvectors; having [itex]n[/itex] distinct eigenvalues is sufficient for this, but not necessary.
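
As a worked example, take

[tex]
A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}:
\qquad \det A = 3 \neq 0,
\qquad \det(A - \lambda I) = (\lambda - 1)(\lambda - 3),
[/tex]

so [itex]A[/itex] is invertible, and since its two eigenvalues [itex]1[/itex] and [itex]3[/itex] are distinct, it is also diagonalizable.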

3. Can a matrix be both invertible and diagonalizable?

Yes, a matrix can be both invertible and diagonalizable; the identity matrix is the simplest example. Such a matrix has a non-zero determinant and can be written as [itex]PDP^{-1}[/itex] with [itex]D[/itex] diagonal.

4. What are the benefits of using invertible and diagonalizable matrices in scientific applications?

Invertible matrices are useful for solving systems of linear equations, while diagonalizable matrices allow for easier computation of matrix powers and inverses. Both are used throughout physics, engineering, and computer science for solving systems and analyzing data.
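
To illustrate the claim about powers: if [itex]A = PDP^{-1}[/itex] with [itex]D[/itex] diagonal, then

[tex]
A^k = PD^kP^{-1},
[/tex]

where [itex]D^k[/itex] is computed by simply raising each diagonal entry to the [itex]k[/itex]-th power; likewise [itex]A^{-1} = PD^{-1}P^{-1}[/itex] whenever all diagonal entries of [itex]D[/itex] are non-zero.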

5. Are all square matrices invertible and diagonalizable?

No, not all square matrices are invertible or diagonalizable. A matrix is invertible exactly when its determinant is non-zero, and diagonalizable exactly when it has a full set of linearly independent eigenvectors; neither condition implies the other. Note that repeated eigenvalues do not rule out diagonalizability (the identity matrix has a repeated eigenvalue but is already diagonal), and a zero determinant does not rule it out either (the zero matrix is diagonal but not invertible).
