
Proof involving Derivatives

  1. Dec 4, 2008 #1
    1. The problem statement, all variables and given/known data

    Suppose that f : R -> R is differentiable, f(0) = 0, and |f'(x)| <= |f(x)| for all x. Show that f(x) = 0 for all x.

    2. Relevant equations

    f'(a) = [tex]\lim_{x \to a} \frac{f(x) - f(a)}{x - a}[/tex]

    3. The attempt at a solution

    I feel like this should be something simple, but I don't know how to go about it.
    I thought maybe I could somehow show that f' is a constant and thus f(x)= 0 since f(0)=0.
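
    One thing I can get straight from the definition and the hypotheses, if I'm reading them right: plugging x = 0 into the given inequality,

    [tex]|f'(0)| \le |f(0)| = 0 \quad\Rightarrow\quad f'(0) = 0,[/tex]

    but I don't see how to push that away from 0, since for x ≠ 0 the bound |f'(x)| <= |f(x)| doesn't pin anything down by itself.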

    Anyone have any ideas?
     
  3. Dec 4, 2008 #2

    tiny-tim

    User Avatar
    Science Advisor
    Homework Helper

    Welcome to PF!

    Hi wackikat! Welcome to PF! :smile:

    (have a ≤ :wink:)

    I expect there's a mean-value-theorem way of doing it …

    but the one that immediately catches my eye is to rewrite it as |f'(x)/f(x)| ≤ 1, and to pick a value for which f ≠ 0.
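
    A rough sketch of how that might go (details for you to check!): suppose f(c) ≠ 0 for some c > 0 (c < 0 works the same way), and let a = sup{x in [0,c] : f(x) = 0}. That set is non-empty (it contains 0), f(a) = 0 by continuity, so a < c and f has no zeros on (a, c]. There ln|f| is differentiable with |(ln|f|)'| = |f'/f| ≤ 1, so the MVT on [x, c] gives

    [tex]\Bigl|\,\ln|f(c)| - \ln|f(x)|\,\Bigr| \le c - x \le c - a \quad\Rightarrow\quad |f(x)| \ge |f(c)|\,e^{-(c-a)} > 0[/tex]

    for every x in (a, c]. Letting x → a and using continuity forces |f(a)| ≥ |f(c)| e^{-(c-a)} > 0, contradicting f(a) = 0.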
     
  4. Dec 4, 2008 #3

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    It's obvious if there's a convergent power series expansion. But that's asking a lot out of a function that's only differentiable. I'd LOVE to see an MVT way to do this.
     
  5. Dec 4, 2008 #4
    Oops, what I meant to say was that I was wondering if there was a way to show f'(x) = 0, which would imply f is constant.
     
  6. Dec 4, 2008 #5

    lurflurf

    User Avatar
    Homework Helper

    hint

    assume wlog x <= y
    let k be an integer, k = 0, 1, 2, ..., n
    let z_k = x + k(y-x)/n
    thus
    z_0 = x
    z_n = y
    z_k - z_{k-1} = (y-x)/n
    x <= z_k <= y
    |f(y)-f(x)| = |f(z_n)-f(z_0)|
    = |Σ(f(z_k)-f(z_{k-1}))|
    <= Σ|f(z_k)-f(z_{k-1})|
    = Σ|f'(t_k)|(y-x)/n (mean value theorem with z_{k-1} < t_k < z_k)
    <= Σ|f(t_k)|(y-x)/n
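
    possible finish from the last line (just one way to read where this is headed): each |f(t_k)| <= sup of |f| on [x,y], and there are n terms each weighted (y-x)/n, so

    [tex]|f(y)-f(x)| \le \Bigl(\sup_{[x,y]}|f|\Bigr)(y-x)[/tex]

    now take x = 0 and y in [0, 1/2]. with M = sup of |f| on [0, 1/2] (finite, since f is continuous) this gives |f(y)| <= M y <= M/2 for every such y, so M <= M/2, i.e. M = 0. so f = 0 on [0, 1/2]; repeat from 1/2 (where f is now known to vanish) to cover [1/2, 1], and so on, and argue symmetrically for x < 0.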
     
    Last edited: Dec 4, 2008
  7. Dec 4, 2008 #6
    Thanks for trying, lurflurf, but I have no idea what you are trying to do.
     
  8. Dec 4, 2008 #7

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    It is a pretty dense hint. I'm still trying to puzzle it out myself. But I usually take what lurflurf says seriously. The last line is a Riemann sum for the integral from x to y of |f(t)|dt, and lurflurf says |f(y)-f(x)| is less than or equal to it. I'm still not sure I quite get it. Isn't this fun!? I haven't gotten how "wlog x <= y" fits in, or a number of other things. But I think there is something going on there.
     
  9. Dec 4, 2008 #8
    It's actually *extremely* simple. Before I tell you what it is, I'll give you some hints...

    Try drawing what you know about this function... that is, draw a set of axes and your point (0,0). Then suppose that at some point c, f(c) does not equal 0.
     
  10. Dec 4, 2008 #9

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    *extremely* simple? Ok, then. c*f(c) not equal to zero tells me (using the MVT on x*f(x)) that there is a 0<=d<=c such that f(c)=d*f'(d)+f(d) is not equal to zero. Please continue with the *extremely* simple part.
     
  11. Dec 4, 2008 #10
    I figured it must be something really simple. For some reason I seem to do better with the difficult ones.

    I can convince myself that f(x) must equal zero in order for the inequality to hold, but I still can't grasp how to actually prove it.
    I have a page full of scratch work trying to use the definition of a derivative and the MVT, but I keep proving things that I already know to be true.
     
  12. Dec 4, 2008 #11

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    You mean f'(0)=0? Sure, that's definitely true. And *extremely* simple. If I say this might be one of the more difficult ones, could you solve it then? I think I've pounded my head on the desk before about this problem, you aren't alone.
     
  13. Dec 4, 2008 #12
    Suppose ∃c such that f(c)[tex]\neq[/tex]0.

    Then by the Mean Value Theorem, ∃c0[tex]\in[/tex] (0, c) such that |f '(c0)|=|[tex]\frac{f(0)-f(c)}{0-c}[/tex]|=|[tex]\frac{0-f(c)}{-c}[/tex]|=|[tex]\frac{f(c)}{c}[/tex]|>0.

    Then |f '(c0)|>0=f(0), which is a contradiction. Therefore, there is no c such that f(c)[tex]\neq[/tex]0. In other words, f(x)=0 [tex]\forall[/tex] x.
     
  14. Dec 4, 2008 #13
    I don't see how this creates a contradiction.
    We need |f '(c0)| > |f (c0)|
     
  15. Dec 4, 2008 #14

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    How, in the sweet name of the lord, do you conclude |f'(c0)|=f(0)? You just made that up. That's baloney. And if you don't know it, I feel sorry for you. If you inject one more non sequitur into this thread, I'm going to hit "Report".
     
  16. Dec 4, 2008 #15
    I never said that |f'(c0)|=|f(0)|. I said:

    |f'(c0)|>0=|f(0)|

    This is a shorthand/symbolic way of saying that |f'(c0)| > 0, and that in turn 0 = f(0). In other words, |f'(c0)| > f(0).

    That being said, wackikat, you're right; I misread the thing, or misthought it anyhow. For some reason I was reading the question as asking to prove that |f '(x)| is less than or equal to every possible value of f, rather than just the value at x.

    Very sorry... I'll look at it again.
     
  17. Dec 4, 2008 #16
    I would assume that Lazorlike meant that |f'(c0)| > f(0), which is true since f(0) = 0 and that absolute value is positive, but it's not what we need to show.


    -------I see you've already explained this.
    I understand how you interpreted the problem--if only life were that easy.
     
    Last edited: Dec 4, 2008
  18. Dec 4, 2008 #17

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    Ok, but as wackikat (you) knows from his pages of notes, observations like |f(c)| >= 0 will get us nowhere. As you (you) know. Gotta sleep. See you tomorrow.
     
  19. Dec 5, 2008 #18

    Hex

    User Avatar

    We have to turn in this homework tomorrow, actually in about 8 hours.
     
  20. Dec 5, 2008 #19
    It's very late so perhaps I'm not thinking this through, but try this...

    First, let me give a general description of what I'm trying to say. Take your calculator, computer, whatever, and graph f(x) = x^2. Now evaluate the derivative at some point close to 0, say, .01. At .01, the derivative is .02. Now at .01, the function's value is .0001, so we see that the derivative is greater than the function value. Now this may be particular to f(x) = x^2 (I don't believe it is, though I am not completely sure), but it's just an example. Think about functions where f(0) = 0, and think about them close to 0. If they are ever going to be something other than 0, they have to "get up off the mat" so to speak... they have to move up or down. Now when you get to infinitesimally small values very close to 0, think about the derivative... is it possible for the derivative to remain smaller than those tiny, tiny, infinitesimally small values, and for the function still to increase from 0 to something else? Think about it... if the derivative is less than the function, then the function's absolute value will shrink moving forward (I believe - it's late), and the function will never get any bigger.

    That's the point of the argument I've made here, or how it works. The point is to show that when you're dealing with this function close to 0, the derivative will always be greater than the function itself. Now in words, I don't know how convincing that is, but the analytical argument works as far as I can tell. So here it is:

    _____________________________________________________________________

    f(0)=0. Now because f is differentiable, we know that it is continuous.

    Given that, suppose that there is some point y where f(y) ≠ 0. Because f is continuous, the Intermediate Value Theorem requires that there is some point c between 0 and y where f(c) = L, for every L between 0 and f(y). (In other words, whether f(y) is positive or negative, every value between f(y) and 0 must be on the function somewhere between y and 0.)

    Choose any c0 where |c0 - 0| < 1. In other words, |c0| < 1.

    Now by the Mean Value Theorem, there is some point c1 where |c1| < |c0| such that f ' (c1)=[tex]\frac{f(c_{0})-f(0)}{c_{0}-0}[/tex]. That is, f ' (c1)=[tex]\frac{f(c_{0})}{c_{0}}[/tex]. (In other words, there's some point c1 between 0 and c0 where the derivative at c1 equals the slope from f(c0) to f(0).)

    Now |c0| < 1, which means that [tex]\left|\frac{f(c_{0})}{c_{0}}\right|[/tex] > |f(c0)|. In other words, |f ' (c1)| > |f(c0)|.

    Now, we are given that |f ' (x)|≤ |f(x)| for all x. This requires that |f(c0)| < |f(c1)|, because |f ' (c1)| > |f(c0)|, and if |f(c0)| > |f(c1)|, then |f ' (c1)| > |f(c1)|.

    This means that for every x and x0 such that |x|> |x0| > 0, |f(x)|< |f(x0)|. (In other words, for all points within 1 of 0, the absolute value of the function must decrease as the points get further away from 0.)

    Now because f is continuous, this requires that for all ε > 0 (and < 1, but that's rather trivial for the area of the number line we're talking about here), there is some x1 such that |x1| < ε and |f(x1)| > |f(ε)|. (In other words, no matter how close you get to 0, there is some point even closer where the function has a greater absolute value.)

    This creates a contradiction, because f is continuous and f(0)=0. This is fairly obvious, because the closer to 0 we get, the greater the absolute value of the function must be, and so there will be a jump - a discontinuity - between 0 and the function's value.

    In technical language, I *think* - and it's *really* late now so don't hold me to this - we explain it this way: If f is continuous, then for all ε > 0, there must be some δ > 0 such that when |x-0| < δ, |f(x)-f(0)| < ε. However, in our function, for all ε > 0, there is no δ which will meet the requirement. For every possible ε and δ, we can find some x so that |f(x)-f(0)| > ε. (In other words, no matter how big an epsilon-neighborhood we try, and no matter how small a delta-neighborhood we come up with to pair with it, there is some point inside that delta-neighborhood which is outside of the epsilon-neighborhood.)

    I'm pretty sure this works, but I've been wrong before...
     
    Last edited: Dec 5, 2008
  21. Dec 5, 2008 #20

    Dick

    User Avatar
    Science Advisor
    Homework Helper

    Isn't that really the same information I would get from:

    [tex]|f(y)-f(x)|=|\int_x^y f'(t) dt| \le \int_x^y |f'(t)| dt \le \int_x^y |f(t)| dt [/tex]

    What does wlog x <= y have to do with anything?
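
    Granting that integral form (one caveat: for a function that's only differentiable, f' needn't be Riemann integrable, so strictly it's lurflurf's MVT sums that give |f(y)-f(x)| ≤ ∫|f|; |f| itself is continuous, so that integral is fine), the bound can be iterated Gronwall-style once you use f(0) = 0: fix Y > 0, let M = sup of |f| on [0,Y], and for 0 ≤ y ≤ Y,

    [tex]|f(y)| \le \int_0^y |f(t)|\,dt \le My,\qquad\text{then}\qquad |f(y)| \le \int_0^y Mt\,dt = \frac{My^2}{2},\ \dots,\ |f(y)| \le \frac{My^n}{n!} \to 0,[/tex]

    so f vanishes on [0,Y] for every Y, and the same argument works on the negative axis. Just a sketch, but it seems to close the loop on the hint.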
     
    Last edited: Dec 5, 2008