If f(z) is 1-1, then f'(z) is not zero.

  1. Jun 20, 2011 #1
    Esteemed Analysts:

    I am trying to rigorize the result that if f is analytic and 1-1 in a region R, then f'(z) is not zero in R.

    This is what I have: assume, for contradiction, that f'(z0) = 0 for some z0 in R. Then f can be expressed locally as

    f(z) - f(z0) = (z - z0)^k g(z), with k >= 2,

    for g(z) analytic and non-zero on some open ball B(z0, r).

    From this, we have to somehow use the fact that (z - z0)^k is k-to-1 near z0, contradicting the assumption that f is 1-1.
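
    (For concreteness, I believe this factorization just comes from the Taylor expansion at z0: if k >= 2 is the smallest index with [itex]f^{(k)}(z_0) \neq 0[/itex] (f nonconstant), then
    [tex]
    f(z) - f(z_0) = \sum_{n \ge k} \frac{f^{(n)}(z_0)}{n!}(z - z_0)^n = (z - z_0)^k \sum_{m \ge 0} \frac{f^{(k+m)}(z_0)}{(k+m)!}(z - z_0)^m =: (z - z_0)^k g(z),
    [/tex]
    with [itex]g(z_0) \neq 0[/itex], so g is non-zero on a small enough ball by continuity.)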

    I don't know if we can use the fact that an open ball is simply-connected to define a branch of log, from which we can define a holomorphic k-th root of g, and then conclude with the contradiction that f(z) is not 1-1.

    Any suggestions for rigorizing?

    Thanks.
     
  3. Jun 20, 2011 #2
    One argument goes as follows:

    WLOG z0=0 and f(0)=f'(0)=0. So f(z) = z^k g(z) for some integer k > 1, with g analytic and nonzero in a neighbourhood of 0. So we can take a kth root of g, say g(z) = h(z)^k. So f(z) = (z h(z))^k. Now z h(z) is a nonconstant holomorphic function (if it were constant it would be identically zero, and then f would be identically zero), and so it is an open map. So the image of any open neighbourhood of zero is open, and hence contains some open disc around 0. This disc must contain points differing by a factor of [itex]\exp(2\pi i/k)[/itex], from which it follows that f is not injective.
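
    To spell out that last step: pick any w ≠ 0 in that disc and let [itex]\zeta = \exp(2\pi i/k)[/itex]; then w and [itex]\zeta w[/itex] both lie in the image of z h(z), say [itex]z_1 h(z_1) = w[/itex] and [itex]z_2 h(z_2) = \zeta w[/itex]. Since these image values differ, [itex]z_1 \neq z_2[/itex], but
    [tex]
    f(z_1) = (z_1 h(z_1))^k = w^k = (\zeta w)^k = (z_2 h(z_2))^k = f(z_2),
    [/tex]
    so f takes the same value at two distinct points.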
     
  4. Jun 20, 2011 #3
    Thanks, henry_m:

    Yes, that is along the lines I was thinking, but I did not know how to justify the
    existence of a holomorphic k-th root in a ball about 0; I was thinking of using
    the fact that a ball is simply-connected, so that we can define a branch of log,
    from which we can define a root, but maybe there are other ways of showing the
    existence of a k-th root in B(z0, r)?
     
  5. Jun 20, 2011 #4
    Are you worried about the step of finding h given g? You don't quite need to use the fact that a disc is simply connected, basically because we only need to be able to work in a tiny disc around z0.

    In more detail:

    Pick some disc around g(z0) not containing zero. It should be clear that we can define a branch of the logarithm in this disc. Then h(z) = exp(log(g(z))/k) defines h on the preimage of this disc, which is an open set containing z0.
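
    (Concretely, one possible choice: take the disc [itex]D = D(g(z_0), |g(z_0)|)[/itex], which does not contain 0 and in fact lies in a half-plane missing 0, so a branch of the logarithm can be written down on it directly. Then on the open set [itex]g^{-1}(D)[/itex], which contains z0,
    [tex]
    h(z) = \exp\!\left(\tfrac{1}{k}\log g(z)\right) \quad\Rightarrow\quad h(z)^k = \exp\!\left(\log g(z)\right) = g(z).
    [/tex]
    )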
     
  6. Jun 21, 2011 #5
    Question: Consider f(z)=z^3. Then, isn't f(z) 1-1 and f'(0)=0?
     
  7. Jun 21, 2011 #6
    Not in the complex plane. For instance, let [itex] \zeta = e^{2 \pi i/3}[/itex]. Then [itex] \zeta [/itex] is a primitive third root of unity, so the three distinct points [itex]1, \zeta, \zeta^2[/itex] all map to 1:
    [tex]
    1 = \zeta^3 = (\zeta^2)^3 = 1^3.
    [/tex]

    So f is actually 3-to-1. You are correct for the real line, though: f(x) = x^3 is one-to-one on the reals.
     
  8. Jun 21, 2011 #7
    As for the original question:

    If f is one-to-one on R, then it has an inverse [itex] f^{-1} \colon f(R) \to R[/itex]. I think the derivative of the inverse is given by
    [tex]
    (f^{-1})'(w) = \frac{1}{f'(z)}
    [/tex]
    where f(z) = w. So f'(z) can't be 0, or (f^{-1})' will blow up at w = f(z).

    Does this work? I might have forgotten a condition...
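
    (Assuming for the moment that the inverse is differentiable, the formula would follow from the chain rule applied to [itex]f(f^{-1}(w)) = w[/itex]:
    [tex]
    f'\!\left(f^{-1}(w)\right)\,(f^{-1})'(w) = 1 \quad\Rightarrow\quad (f^{-1})'(w) = \frac{1}{f'(z)}, \qquad z = f^{-1}(w).
    [/tex]
    )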
     
  9. Jun 21, 2011 #8
    The problem with this argument is that you assume that the inverse is analytic, which must be proved.

    The normal complex inverse function theorem assumes nonzero derivative, and proves existence of a continuous inverse, and then analyticity of the inverse and the formula for the inverse. You are trying to use the formula without the first assumption. To see what goes wrong if we try to use the formula here, the argument for the last part goes like this:

    Let g be an inverse for f, and w, w' be in the domain of g and not equal. Let z=g(w), z'=g(w'). Then z and z' are not equal, and:
    [tex]\frac{g(w)-g(w')}{w-w'}=\frac{z-z'}{f(z)-f(z')}[/tex]
    Now as w tends to w', z tends to z' (continuity of g needed) and so the RHS tends to 1/f'(z'), which proves what we wanted; g is differentiable with g'(w')=1/f'(z').

    BUT if we haven't assumed that f' is nonzero, we have proved only that the limit, and hence the derivative of g, does not exist if f' vanishes somewhere. There is no contradiction since we haven't proved that the inverse is differentiable.
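
    A real-variable illustration of the same point (not part of the complex argument): f(x) = x^3 is a bijection of the reals with continuous inverse g(y) = y^{1/3}, but g is not differentiable at 0, precisely because f'(0) = 0:
    [tex]
    \frac{g(y) - g(0)}{y - 0} = \frac{y^{1/3}}{y} = y^{-2/3} \to \infty \quad\text{as } y \to 0.
    [/tex]
    The blow-up of the difference quotient contradicts nothing; it just shows that the inverse fails to be differentiable there.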
     
  10. Jun 21, 2011 #9
    Spamiam:

    The problem is that I don't know how to tell if 1/f'(z) is analytic or not.

    My approach (to showing the inverse is analytic) is this, assuming f is 1-1 and f'(z) is not 0:

    The Jacobian matrix of an analytic function f = U + iV has, by the Cauchy-Riemann equations U_x = V_y and U_y = -V_x, the special rotation-scaling form with rows (U_x, U_y) and (-U_y, U_x). Since f'(z) is not zero, the determinant J(f) := U_x^2 + U_y^2 = |f'(z)|^2 is itself not zero, so the Jacobian is invertible. You can then show that the inverse matrix has the same special form, that its entries are the partials of f^-1(z) with respect to x, y, and that these therefore satisfy Cauchy-Riemann, by the structure of the inverse matrix.
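
    (Written out explicitly, assuming f'(z) ≠ 0:
    [tex]
    J(f) = \begin{pmatrix} U_x & U_y \\ -U_y & U_x \end{pmatrix}, \qquad
    J(f)^{-1} = \frac{1}{U_x^2 + U_y^2} \begin{pmatrix} U_x & -U_y \\ U_y & U_x \end{pmatrix},
    [/tex]
    and the inverse is again of the same rotation-scaling form, so the component functions of the inverse map satisfy the Cauchy-Riemann equations.)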
     
  11. Jun 22, 2011 #10
    Ah okay, I thought I was forgetting something. Thanks for the correction.
     
  12. Jun 23, 2011 #11
    Actually, Spamiam, I think your statement is true if f(z) is real (though the names are changed to protect the innocent :) ):

    If f(z) = U + iV is analytic, then:

    U_x = V_y

    U_y = -V_x

    But, since (with the bar denoting complex conjugation)
    [tex]
    \frac{1}{f'(z)} = \frac{\overline{f'(z)}}{f'(z)\,\overline{f'(z)}} = \frac{U_x - iV_x}{|f'(z)|^2},
    [/tex]
    this forces U_x = V_y = -V_y, so that V_y = 0; and the same for the other component functions.
     