
Morse Index

  1. Sep 18, 2007 #1
I started reading Morse Theory by Milnor and am not understanding something.
    I am reading the proof of Theorem on page 25:
    Let M be a compact manifold and f be a differentiable function on M with only two critical points, both of which are non-degenerate, then M is homeomorphic to a sphere.
We may assume that 0 is the minimum and 1 is the maximum of f.
In the proof he says that, by the Morse lemma, for small epsilon f^-1[0, epsilon] and f^-1[1-epsilon, 1] are closed n-cells.
I guess he is using a fact about the Morse index at f^-1(0) and f^-1(1).
But is that fact obvious?
I know 0 and 1 are the minimum and maximum respectively, so the Hessian is positive definite at f^-1(0) and negative definite at f^-1(1).
But why does that determine the Morse index at these points?
     
  3. Sep 18, 2007 #2

    mathwonk


isn't it sort of obvious that if a linear combination of squares takes both signs, the origin can't be an extreme point? i.e. a saddle surface does not have a max or min at the origin.

look up your second-year calc criteria for local extrema. you look at more than the determinant of the matrix of second partials (what do you mean by hessian, the matrix or the determinant?); you look at whether or not it is definite.
     
  4. Sep 19, 2007 #3
    Morse

Could you explain why, if a linear combination of squares takes both signs, the origin can't be an extreme point?
By Hessian I mean the matrix of second partial derivatives.
I suppose you are saying that:
If the Hessian is positive definite (i.e. all the eigenvalues are positive) at some critical point p, then f achieves a local minimum at p.
If the Hessian is negative definite (i.e. all the eigenvalues are negative) at some critical point p, then f achieves a local maximum at p.
If the Hessian has both positive and negative eigenvalues at p, then p is a saddle point.

Could you prove that those are true? Or is there a book I can look up for the proof?
    Thanks.
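A quick numerical sketch of the three criteria listed above (the second derivative test), using numpy; the helper name classify_critical_point is my own, just for illustration:

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a critical point by the eigenvalues of the
    Hessian matrix there (the second derivative test)."""
    eig = np.linalg.eigvalsh(np.asarray(hessian, dtype=float))
    if np.any(np.isclose(eig, 0.0)):
        return "degenerate"          # test is inconclusive
    if np.all(eig > 0):
        return "local minimum"       # Morse index 0
    if np.all(eig < 0):
        return "local maximum"       # Morse index n
    return "saddle point"            # index strictly between 0 and n

# f(x, y) = x^2 + y^2 has Hessian 2*I at the origin: a minimum, index 0.
print(classify_critical_point([[2, 0], [0, 2]]))   # local minimum
# f(x, y) = x^2 - y^2: a saddle, index 1.
print(classify_critical_point([[2, 0], [0, -2]]))  # saddle point
```

The Morse index is just the number of negative eigenvalues, so "positive definite" is exactly "index 0" and "negative definite" is exactly "index n".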
     
  5. Sep 19, 2007 #4

    mathwonk


    well it is negative at some places and positive at others, but zero at the origin.


those statements are called the second derivative test. they must be in every calculus book.
    e.g. p. 759, university calculus, hass, weir, thomas. (proved two sections later.)

but if not, the morse lemma, proved in the morse theory book you are reading by milnor, says that in some coordinates a function with a nondegenerate critical point actually equals its approximating quadratic form. this proves those criteria.

i.e. those criteria are obvious for quadratic forms, and the morse lemma (that if the second derivative is nondegenerate at a singular point then the function, in some coordinates, actually equals its second derivative) implies the criteria hold for the function as well.


    i think you should do a little exercise to convince yourself that those criteria do hold for quadratic forms, since if you cannot do that exercise then you are reading the wrong book.
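The exercise above amounts to checking, for a diagonalized quadratic form, that mixed signs among the coefficients kill any extremum. A minimal sketch for the simplest indefinite form:

```python
# The indefinite quadratic form q(x, y) = x^2 - y^2 takes both signs
# arbitrarily close to the origin, so the origin can be neither a
# local max nor a local min, even though q(0, 0) = 0.
def q(x, y):
    return x**2 - y**2

eps = 1e-3
print(q(eps, 0) > 0)   # True: q is positive along the x-axis
print(q(0, eps) < 0)   # True: q is negative along the y-axis
```

Since eps can be taken as small as you like, every neighborhood of the origin contains points where q > q(0, 0) and points where q < q(0, 0).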

    if indeed you decide you are reading the wrong book, you might read guillemin and pollack instead, or andrew wallace, differential topology first steps. milnor is very advanced.
     
    Last edited: Sep 20, 2007
  6. Jul 22, 2010 #5
    Re: Morse

    1. Look at Morse's lemma (contained in Milnor's book at a page p < 25).

    2. Show that x_1^2 + ... + x_n^2 is minimal at (x_1, ..., x_n) = (0, ..., 0).

Hint: the square of a non-zero real number is strictly positive.

    3. Show that - (x_1^2 + ... + x_n^2) is maximal at (x_1, ..., x_n) = (0, ..., 0).

Hint: the square of a non-zero real number is strictly positive.

    4. Show that e_1 x_1^2 + ... + e_n x_n^2 does not have a minimum/maximum at (x_1, ..., x_n) = (0, ..., 0), where the e_i are (non-zero) real numbers (for i = 1, ..., n) such that there exist 1 <= i, j <= n with e_i > 0 and e_j < 0.

Hint: look at the restriction of the function to the x_i-axis and to the x_j-axis. What does the sign of e_i (resp. e_j) tell you about that restriction near 0 (strictly increasing as you move away from 0, or strictly decreasing)?
     