
About invertible and diagonalizable matrices

  1. Jul 15, 2007 #1
    Hello, I'm reading this book http://freescience.info/go.php?pagename=books&id=1041 about differential manifolds. In the appendix the book gives a proof of the inverse function theorem. It assumes that the Jacobian matrix [tex]Df_a[/tex] is invertible (where [itex]a[/itex] is the point at which it is evaluated), and then says: "By an affine transformation [itex]x\mapsto Ax+b[/itex] we can assume that [itex]a=0[/itex] and [itex]Df_a=1[/itex]." Isn't this the same as assuming that all invertible matrices are diagonalizable? And isn't that assumption wrong?
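    To make the second half of my question concrete: there are invertible matrices that are not diagonalizable, e.g. a shear. A quick pure-Python sanity check of that counterexample (my own illustration, not from the book):

```python
# The shear matrix M = [[1, 1], [0, 1]] is invertible (det = 1) but not
# diagonalizable: its only eigenvalue is 1, so if it were diagonalizable
# it would have to equal the identity. Instead N = M - I is nonzero
# while N^2 = 0.

def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(A, B):
    """Subtract two 2x2 matrices entrywise."""
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

M = [[1, 1], [0, 1]]
I = [[1, 0], [0, 1]]

det_M = M[0][0] * M[1][1] - M[0][1] * M[1][0]
N = mat_sub(M, I)    # the nilpotent part of M
N2 = mat_mul(N, N)

print(det_M)  # 1, so M is invertible
print(N)      # [[0, 1], [0, 0]], nonzero
print(N2)     # [[0, 0], [0, 0]]: M - I is nilpotent, so M is not diagonalizable
```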
  3. Jul 15, 2007 #2

    Science Advisor

    No, it is not the same thing. It is simply taking [itex]A[/itex] to be the inverse of [itex]Df_a[/itex] (with [itex]b[/itex] chosen so that [itex]a[/itex] is sent to the origin).
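    In other words, the normalization uses only invertibility of [itex]Df_a[/itex], never diagonalization: setting [itex]g(x) = A(f(x+a) - f(a))[/itex] with [itex]A = (Df_a)^{-1}[/itex] gives [itex]g(0)=0[/itex] and [itex]Dg_0 = A\,Df_a = 1[/itex]. A numerical sanity check with a made-up example function (the function and the finite-difference helper are illustrative, not from the book):

```python
# Sanity check of the affine normalization g(x) = A (f(x + a) - f(a)),
# A = (Df_a)^{-1}, for a made-up f: R^2 -> R^2. We verify g(0) = 0 and
# that the Jacobian of g at 0 is (numerically) the identity.
import math

def f(x, y):
    # hypothetical test function with invertible Jacobian at a
    return (math.exp(x) + y, x - y * y)

a = (0.5, -0.3)
h = 1e-6

def jacobian(F, p):
    """Central-difference Jacobian of F: R^2 -> R^2 at point p."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        dp = [0.0, 0.0]
        dp[j] = h
        fp = F(p[0] + dp[0], p[1] + dp[1])
        fm = F(p[0] - dp[0], p[1] - dp[1])
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

J = jacobian(f, a)
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
A = [[ J[1][1] / det, -J[0][1] / det],     # inverse of Df_a
     [-J[1][0] / det,  J[0][0] / det]]

fa = f(*a)

def g(x, y):
    """Normalized function: g(0) = 0 and Dg_0 = identity."""
    u, v = f(x + a[0], y + a[1])
    du, dv = u - fa[0], v - fa[1]
    return (A[0][0] * du + A[0][1] * dv, A[1][0] * du + A[1][1] * dv)

print(g(0.0, 0.0))              # (0.0, 0.0)
Jg = jacobian(g, (0.0, 0.0))
print(Jg)                       # approximately the identity matrix
```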
  4. Jul 15, 2007 #3
    All right :blushing:
  5. Jul 15, 2007 #4
    about use of the mean value theorem

    I haven't made much progress with this proof. The problem is no longer about diagonalizability (well, it never really was), but since I started discussing this proof here, I might as well continue here.

    First it says that there exists an [itex]r[/itex] such that [itex]||Dg_x|| < 1/2[/itex] holds for [itex]||x||<2r[/itex]. A small question: when the norm of a matrix is written without explanation, does it usually mean the operator norm [itex]||A||:=\textrm{sup}_{||x||<1} ||Ax||[/itex]? Anyway, it then says: "It follows from the mean value theorem that [itex]||g(x)|| < ||x||/2[/itex]". I ran into some problems with this step.
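    To convince myself about the operator norm, I tried a small numerical check (example matrix and sampling approach are my own, not from the book). Note that by homogeneity the supremum over the open unit ball equals the supremum over the unit sphere, and for a real matrix both equal the largest singular value:

```python
# Numerical illustration of the operator norm ||A|| = sup_{||x||=1} ||A x||
# for a 2x2 example matrix: maximize over sampled unit vectors and compare
# with the closed-form value sqrt(lambda_max(A^T A)) (largest singular value).
import math

A = [[2.0, 1.0], [0.0, 1.0]]   # arbitrary example matrix

def apply(A, x):
    return (A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1])

def norm(v):
    return math.hypot(v[0], v[1])

# brute force: maximize ||A x|| over unit vectors x = (cos t, sin t)
sampled = max(norm(apply(A, (math.cos(2 * math.pi * k / 10000),
                             math.sin(2 * math.pi * k / 10000))))
              for k in range(10000))

# exact: largest eigenvalue of B = A^T A via the 2x2 characteristic polynomial
B = [[A[0][0]**2 + A[1][0]**2, A[0][0]*A[0][1] + A[1][0]*A[1][1]],
     [A[0][0]*A[0][1] + A[1][0]*A[1][1], A[0][1]**2 + A[1][1]**2]]
tr = B[0][0] + B[1][1]
det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
exact = math.sqrt(lam_max)

print(sampled, exact)   # the two values agree to several decimals
```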

    Doesn't the mean value theorem in this case say that there exists a [itex]0\leq\lambda\leq 1[/itex] such that
    [tex]||g(x)|| = \Big(\frac{d}{d\lambda'}||g(\lambda' x)||\Big)\Big|_{\lambda'=\lambda} + ||g(0)||[/tex]
    I computed
    [tex]\frac{d}{d\lambda'}||g(\lambda' x)|| = \sum_{i=1}^n \frac{g_i(\lambda'x)\,(x\cdot\nabla g_i(\lambda'x))}{||g(\lambda'x)||} = \frac{g^T(\lambda'x)\,(Dg_{\lambda'x})\, x}{||g(\lambda'x)||}[/tex]
    after which I could estimate
    [tex]||g(x)|| \leq ||Dg_{\lambda x}||\; ||x|| + ||g(0)|| \leq \frac{1}{2}||x|| + ||g(0)||[/tex]
    A big difference is that the proof in the book didn't have the [itex]||g(0)||[/itex] term. Perhaps that is not a big problem: we can get rid of it by redefining the original function with a translation of the image. Still, it is strange that this was not mentioned in the proof, so is there something I'm already getting wrong here?

    Another matter is that the mapping [itex]\lambda\mapsto ||g(\lambda x)||[/itex] is not necessarily differentiable if [itex]g[/itex] reaches zero for some lambda, and I cannot see how to justify that [itex]g[/itex] would remain nonzero here. So the use of the mean value theorem doesn't seem fully justified.
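    One standard way around this difficulty (a sketch, using the fundamental theorem of calculus componentwise instead of the mean value theorem) would be

    [tex]g(x) - g(0) = \int_0^1 \frac{d}{dt}\, g(tx)\, dt = \int_0^1 Dg_{tx}\, x\, dt[/tex]

    so that, taking norms and using [itex]||Dg_{tx}|| < 1/2[/itex] for [itex]||x|| < 2r[/itex],

    [tex]||g(x)|| \leq ||g(0)|| + \int_0^1 ||Dg_{tx}||\,||x||\, dt \leq ||g(0)|| + \frac{1}{2}||x||[/tex]

    which gives [itex]||g(x)|| \leq ||x||/2[/itex] once [itex]g(0)=0[/itex], without ever differentiating the possibly nonsmooth map [itex]\lambda\mapsto ||g(\lambda x)||[/itex].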
    Last edited: Jul 15, 2007
  6. Jul 16, 2007 #5
    I think I got this matter handled.

    At least my problem was not that I couldn't think in complicated enough ways.