
N-dimensional Taylor's Theorem and Dynamics

  1. Jul 28, 2004 #1
    I've tried MathWorld and Wikipedia, but I can't find the n-dimensional version of Taylor's Theorem. Is it formulated in terms of the Jacobian?

    In my dynamics book, it states that a map f from R^2 to itself has an attracting fixed point p if f(p) = p and all eigenvalues of the Jacobian of f at p lie inside the unit circle, a repelling fixed point if they all lie outside the unit circle, and a saddle point if one is inside and one is outside.

    I'm going to try to justify that terminology for myself, and I think I need a higher-dimensional analog of the mean value theorem: an n-dimensional Taylor's theorem. I guess my ultimate goal is to prove the following:

    Let f be a map from R^n to itself. If p is a fixed point of f and all eigenvalues of the Jacobian of f at p have absolute value less than 1 (i.e. lie inside the unit circle), then p is an attracting fixed point. By this, I mean that there is a neighborhood of p for which all points in the neighborhood converge to p upon iteration of f.

    Likewise, if all eigenvalues have absolute value greater than 1 (i.e. lie outside the unit circle), then there is a neighborhood N around p such that f(N) contains N and for all x in N\{p}, (f^m)(x) is not in N for some m>0.

    Finally, I want to show that for saddle points, there exist functions f such that f has an attracting fixed point and functions g such that g has a repelling fixed point.
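
    To make the attracting case in the first claim concrete, here is a minimal numerical sketch (the particular map, starting point, and iteration count are arbitrary choices for illustration): a map of R^2 with a fixed point at the origin whose Jacobian there has both eigenvalues inside the unit circle. Iterates from a nearby point should converge to the fixed point.

[code]
import numpy as np

# Illustrative map f: R^2 -> R^2 with a fixed point at the origin.
# The quadratic terms have zero derivative at 0, so the Jacobian at
# the origin is J = [[0.5, -0.3], [0.2, 0.4]].
def f(v):
    x, y = v
    return np.array([0.5 * x - 0.3 * y + 0.1 * x * y,
                     0.2 * x + 0.4 * y - 0.05 * x ** 2])

J = np.array([[0.5, -0.3],
              [0.2, 0.4]])
print(np.abs(np.linalg.eigvals(J)))   # both moduli are about 0.51 < 1

v = np.array([0.3, -0.2])             # start near the fixed point
for m in range(25):
    v = f(v)
print(v)                              # essentially (0, 0): the orbit was attracted
[/code]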
     
  3. Jul 28, 2004 #2
  4. Jul 28, 2004 #3
    Oh I suppose I just need an n-dimensional Mean Value Theorem, not Taylor's Theorem.
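
    For reference, the form that seems to be wanted is the mean value inequality for a map f from R^n to R^m that is differentiable on the segment joining x and y (an inequality rather than an equality, since an exact intermediate point need not exist for vector-valued maps):

    [tex]
    |f(x) - f(y)| \leq \sup_{0 \leq t \leq 1} \| Df(y + t(x-y)) \| \, |x - y|
    [/tex]

    where ||Df|| is the operator norm of the Jacobian.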
     
  5. Jul 28, 2004 #4
  6. Jul 28, 2004 #5

    Hurkyl


    Bah, I was writing a nice post on the n-dimensional Taylor series. :tongue2:

    Do you really need a multidimensional mean value theorem? Can't you do the whole thing one dimension at a time?
     
  7. Jul 28, 2004 #6
    Thanks anyway. But I'm not sure how I'd do it one dimension at a time. Can you please explain that in general terms? You don't mean induction do you?

    After all this, I found that I had an analysis book in my pathetic little library anyway. I took one look at the mean value theorem for maps from R^n to R^m and knew that the proof would look roughly the same as it does in one dimension. Oh wait... I'm not sure how the eigenvalues being in the unit circle relate. *ponders
     
  8. Jul 28, 2004 #7

    Hurkyl


    Well, in the target space, isn't the mapping attractive if and only if it is attractive in each dimension?

    Analyzing the source space one dimension at a time may be possible, but it's not obvious, and I'm not sure it's necessary.
     
  9. Jul 28, 2004 #8

    Hurkyl


    Also, what about taking a simple differential approximation? (f, x, a are all vectors)

    f(x) = f(a) + df (x-a) + R(x-a)

    where R -> 0 as x -> a
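
    As a quick numerical check of that remainder condition (reusing the illustrative map from the sketch after the first post, an arbitrary choice), the ratio |f(x) - f(a) - df (x - a)| / |x - a| shrinks as x approaches a, which is exactly the statement that R -> 0:

[code]
import numpy as np

# Same illustrative map as before, with fixed point a = 0.
def f(v):
    x, y = v
    return np.array([0.5 * x - 0.3 * y + 0.1 * x * y,
                     0.2 * x + 0.4 * y - 0.05 * x ** 2])

a = np.zeros(2)
J = np.array([[0.5, -0.3],            # df, the Jacobian of f at a
              [0.2, 0.4]])

direction = np.array([1.0, 1.0]) / np.sqrt(2.0)
for h in [1e-1, 1e-2, 1e-3, 1e-4]:
    x = a + h * direction
    remainder = f(x) - f(a) - J @ (x - a)
    # the printed ratio is the size of R at x; it tends to 0 as x -> a
    print(h, np.linalg.norm(remainder) / np.linalg.norm(x - a))
[/code]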
     
  10. Jul 28, 2004 #9
    Oh yeah, that's true. Hmm... That's good enough for me. Thanks, Hurkyl.
     
  11. Jul 28, 2004 #10
    I'm curious to know what the eigenvalues of df have to do with it. Do you know?
     
  12. Jul 28, 2004 #11

    Hurkyl


    Well, if a is the fixed point, then the goal is:

    |f(x) - a| < |x - a|

    Since a is a fixed point, f(a) = a, so we can write the differential approximation as:

    f(x) - a = (df + R) (x - a)

    |f(x) - a| <= |df + R| |x - a| <= (|df| + |R|) |x - a|

    Where |A| denotes the operator norm (matrix norm) of A.

    That is,

    [tex]
    |A| = \sup_{x \neq 0} \frac{|Ax|}{|x|}
    [/tex]

    If A is normal (symmetric, for instance), then |A| is simply the largest absolute value of its eigenvalues. In general the Euclidean operator norm is the largest singular value, which can be bigger; but if A has a complete set of eigenvectors (that is, n linearly independent eigenvectors), you can measure lengths in the norm adapted to the eigenbasis instead, and in that norm |A| is exactly the largest absolute value of the eigenvalues, which is all the argument below needs.
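
    A small numerical sanity check of that point (the matrix here is an arbitrary non-normal example): the Euclidean operator norm is the largest singular value, which can be much bigger than the largest |eigenvalue|, but measuring lengths in the eigenbasis brings the norm back down to the spectral radius.

[code]
import numpy as np

A = np.array([[0.5, 10.0],            # diagonalizable (eigenvalues 0.5 and 0.25)
              [0.0, 0.25]])           # but far from normal

print(np.max(np.abs(np.linalg.eigvals(A))))   # spectral radius: 0.5
print(np.linalg.norm(A, 2))                   # Euclidean operator norm: about 10.0

# In the norm |v|_P = |P^{-1} v|, where P's columns are eigenvectors of A,
# the operator norm of A equals the spectral radius:
eigvals, P = np.linalg.eig(A)
print(np.linalg.norm(np.linalg.inv(P) @ A @ P, 2))   # about 0.5 again
[/code]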

    Also, we have that |R| --> 0 as x --> a since R --> 0 as x --> a

    So, if |df| < 1, we can pick a neighborhood of a such that |R| < 1 - |df|, and thus

    |f(x) - a| <= |df + R| |x - a| <= (|df| + |R|) |x - a| < |x - a|

    And, thus, a is an attractive fixed point.
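
    Putting the pieces together numerically (again with the illustrative map from earlier, and a neighborhood radius picked by hand): on a small ball around the fixed point, the one-step contraction |f(x) - a| < |x - a| does hold.

[code]
import numpy as np

def f(v):
    x, y = v
    return np.array([0.5 * x - 0.3 * y + 0.1 * x * y,
                     0.2 * x + 0.4 * y - 0.05 * x ** 2])

a = np.zeros(2)                        # the fixed point
J = np.array([[0.5, -0.3],
              [0.2, 0.4]])
print(np.linalg.norm(J, 2))            # |df| is about 0.59 < 1 for this example

rng = np.random.default_rng(0)
radius = 0.5                           # a hand-picked neighborhood of a
worst = 0.0
for _ in range(10000):
    x = a + rng.uniform(-radius, radius, size=2)
    worst = max(worst, np.linalg.norm(f(x) - a) / np.linalg.norm(x - a))
print(worst)                           # stays below 1: every sampled point moves closer to a
[/code]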
     
    Last edited: Jul 28, 2004
  13. Jul 28, 2004 #12
    Very cool. I had forgotten that |A| comes down to the largest absolute value of the eigenvalues in the right norm. Doh!
     
  14. Jul 28, 2004 #13

    Hurkyl


    It's easy to forget a lot of things until you start writing them down. :smile:
     