How Can We Prove That a Function is Constant Using Derivatives?

In summary: f is differentiable with f(0)=0 and |f'(x)| ≤ |f(x)| for all x. Using the Mean Value Theorem, any point c with f(c) ≠ 0 leads to a contradiction with this inequality, so f(x)=0 for all x.
  • #1
wackikat

Homework Statement



Suppose that f:R->R is differentiable, f(0)=0, and |f'(x)|<=|f(x)| for all x. Show that f(x)=0 for all x

Homework Equations



f'(a) = limit as x->a [f(x) - f(a)]/[x-a]

The Attempt at a Solution



I feel like this should be something simple, but I don't know how to go about it.
I thought maybe I could somehow show that f' is a constant and thus f(x)= 0 since f(0)=0.

Anyone have any ideas?
 
  • #2
Welcome to PF!

wackikat said:
Suppose that f:R->R is differentiable, f(0)=0, and |f'(x)|<=|f(x)| for all x. Show that f(x)=0 for all x

Hi wackikat! Welcome to PF! :smile:

(have a ≤ :wink:)

I expect there's a mean-value-theorem way of doing it …

but the one that immediately catches my eye is to rewrite it f'(x)/f(x) ≤ 1, and to pick a value for which f ≠ 0.
 
  • #3
It's obvious if there's a convergent power series expansion. But that's asking a lot of a function that's only differentiable. I'd LOVE to see an MVT way to do this.
 
  • #4
wackikat said:
I thought maybe I could somehow show that f' is a constant and thus f(x)= 0 since f(0)=0.

Oops, what I meant to say was that I was wondering if there was a way to show f'(x)=0, which would imply f is constant
 
  • #5
hint

assume wlog x<=y
let k be an integer, k=0,1,2,...,n
let z_k=x+k(y-x)/n
thus
z_0=x
z_n=y
z_k-z_{k-1}=(y-x)/n
x<=z_k<=y
|f(y)-f(x)|=|f(z_n)-f(z_0)|
=|Σ(f(z_k)-f(z_{k-1}))|
<=Σ|f(z_k)-f(z_{k-1})|
=Σ|f'(t_k)|(y-x)/n (mean value theorem with z_{k-1}<t_k<z_k)
<=Σ|f(t_k)|(y-x)/n
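As a numerical sanity check of the telescoping step in this hint, here is a minimal sketch. The sample function f(x) = x**2 is a hypothetical choice for illustration only; it does not satisfy the problem's hypothesis |f'| <= |f| near 0, but the telescoping identity and triangle inequality are generic:

```python
# Numeric check of the telescoping identity and triangle inequality from
# the hint above, using the hypothetical sample f(x) = x**2.
def f(x):
    return x ** 2

x, y, n = 0.2, 0.8, 1000
h = (y - x) / n
z = [x + k * h for k in range(n + 1)]   # z_0 = x, ..., z_n = y

telescoped = sum(f(z[k]) - f(z[k - 1]) for k in range(1, n + 1))
triangle = sum(abs(f(z[k]) - f(z[k - 1])) for k in range(1, n + 1))

print(abs(telescoped - (f(y) - f(x))) < 1e-9)  # telescoping sum equals f(y)-f(x)
print(abs(f(y) - f(x)) <= triangle)            # triangle inequality
```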
 
  • #6
Thanks for trying lurflurf, but I have no idea what you are trying to do.
 
  • #7
It is a pretty dense hint. I'm still trying to puzzle it out myself. But I usually take what lurflurf says seriously. The last line is a Riemann sum for the integral from x to y of |f(t)|dt. lurflurf says that is less than |f(y)-f(x)|. I'm still not sure I quite get it. Isn't this fun!? I haven't gotten how wlog(x)<y fits in, or a number of other things. But I think there is something going on there.
 
  • #8
It's actually *extremely* simple. Before I tell you what it is, I'll give you some hints...

Try drawing what you know about this function... that is, draw an axis and your point (0,0). Then, suppose that at some point c f(c) does not equal 0.
 
  • #9
*extremely* simple? Ok, then. c*f(c) not equal to zero tells me (using the MVT on x*f(x)) that there is a 0<=d<=c such that f(c)=d*f'(d)+f(d) is not equal to zero. Please continue with the *extremely* simple part.
 
  • #10
I figured it must be something really simple. For some reason I seem to do better with the difficult ones.

I can convince myself that f(x) must equal zero in order for the inequality to hold, but still can't grasp how to actually prove it.
I have a page full of scratch work trying to use the definition of a derivative and MVT, but keep proving things that I already know to be true.
 
  • #11
You mean f'(0)=0? Sure, that's definitely true. And *extremely* simple. If I say this might be one of the more difficult ones, could you solve it then? I think I've pounded my head on the desk before about this problem, you aren't alone.
 
  • #12
Suppose ∃c such that f(c)[tex]\neq[/tex]0.

Then by the Mean Value Theorem, ∃c0[tex]\in[/tex] (0, c) such that |f '(c0)|=|[tex]\frac{f(0)-f(c)}{0-c}[/tex]|=|[tex]\frac{0-f(c)}{-c}[/tex]|=|[tex]\frac{f(c)}{c}[/tex]|>0.

Then |f '(c0)|>0=f(0), which is a contradiction. Therefore, there is no c such that f(c)[tex]\neq[/tex]0. In other words, f(x)=0 [tex]\forall[/tex] x.
 
  • #13
Lazerlike42 said:
Suppose ∃c such that f(c)[tex]\neq[/tex]0.

Then |f '(c0)|>0=f(0), which is a contradiction. [tex]\forall[/tex] x.

I don't see how this creates a contradiction.
We need |f '(c0)| > |f (c0)|
 
  • #14
Lazerlike42 said:
Suppose ∃c such that f(c)[tex]\neq[/tex]0.

Then by the Mean Value Theorem, ∃c0[tex]\in[/tex] (0, c) such that |f '(c0)|=|[tex]\frac{f(0)-f(c)}{0-c}[/tex]|=|[tex]\frac{0-f(c)}{-c}[/tex]|=|[tex]\frac{f(c)}{c}[/tex]|>0.

Then |f '(c0)|>0=f(0), which is a contradiction. Therefore, there is no c such that f(c)[tex]\neq[/tex]0. In other words, f(x)=0 [tex]\forall[/tex] x.

How, in the sweet name of the lord, do you conclude |f'(c0)|=f(0)? You just made that up. That's baloney. And if you don't know it, I feel sorry for you. If you inject one more nonsequitur into this thread, I'm going to hit "Report".
 
  • #15
I never said that |f'(c0)|=|f(0)|. I said:

|f'(c0)|>0=|f(0)|

This is a shorthand/symbolic/whatever-the-proper-term-is way of saying that |f'(c0)|>0, and that in turn 0=f(0). In other words, |f'(c0)|>f(0).

That being said, wackikat you're right I misread the thing, or misthought it anyhow. For some reason I was looking at the question to be needing to prove that |f '(x)| is less than or equal to every possible value of f(x), rather than just the particular value at x.

Very sorry... I'll look at it again.
 
  • #16
I would assume that Lazerlike meant that |f'(c0)| > f(0), which is true since f(0)=0 and an absolute value is nonnegative, but it's not what we need to show.


-------I see you've already explained this.
I understand how you interpreted the problem--if only life were that easy.
 
  • #17
Ok, but as wackikat (you) knows from his pages of notes, observations like |f(c)|>=0 will get us nowhere. As you (you) know. Gotta sleep. See you tomorrow.
 
  • #18
Tomorrow we will have to turn in this homework, actually in about 8 hours.
 
  • #19
It's very late so perhaps I'm not thinking this through, but try this...

First, let me give a general description of what I'm trying to say. Take your calculator, computer, whatever, and graph f(x)=x^2. Now evaluate the derivative at some point close to 0, say, .01. At .01, the derivative is .02. Now at .01, the function's value is .0001, so we see that the derivative is greater than the function value. Now this may be particular to f(x)=x^2 (I don't believe it is, though I am not completely sure), but it's just an example. Think about functions where f(0)=0, and think about them close to 0. If they are ever going to be something other than 0, they have to "get up off the mat" so to speak... they have to move up or down. Now when you get to infinitesimally small values very close to 0, think about the derivative... is it possible for the derivative to remain smaller than those tiny, tiny, infinitesimally small values, and for the function still to increase from 0 to something else? Think about it... if the derivative is less than the function, then the function's absolute value will shrink moving forward (I believe - it's late), and the function will never get any bigger.
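A quick numeric check of the picture in the paragraph above, using the same example f(x) = x^2 (sketch only; the whole point is that a function like this fails the problem's hypothesis close to 0):

```python
# For f(x) = x**2, the derivative 2x dominates the function value x**2
# for all 0 < x < 2, so this function cannot satisfy |f'(x)| <= |f(x)|
# close to 0.
def f(x):
    return x ** 2

def fprime(x):
    return 2 * x

dominates = all(abs(fprime(x)) > abs(f(x)) for x in [0.1, 0.01, 0.001])
print(dominates)
```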

That's the point of the argument I've made here, or how it works. The point is to show that when you're dealing with this function close to 0, the derivative will always be greater than the function itself. Now in words, I don't know how convincing that is, but the analytical argument works as far as I can tell. So here it is:

_____________________________________________________________________

f(0)=0. Now because f is differentiable, we know that it is continuous.

Given that, suppose that there is some point y where f(y) /= 0. Because f is continuous, the Intermediate Value Theorem requires that for every L < |f(y)| there is some point c where f(c) = L. (In other words, whether f(y) is positive or negative, every value between f(y) and 0 must be attained by the function somewhere between y and 0).

Choose any c0 where |f(c0)-0| < 1. In other words, |f(c0)| < 1.

Now by the Mean Value Theorem, there is some point c1 where |c1| < |c0| such that f ' (c1)=[tex]\frac{f(c_{0})-f(0)}{c_{0}-0}[/tex]. That is, f ' (c1)=[tex]\frac{f(c_{0})}{c_{0}}[/tex]. (In other words, there's some point c1 between 0 and c0 where the derivative at c1 equals the slope from f(c0) to f(0).)

Now |c0| < 1, which means that [tex]\left|\frac{f(c_{0})}{c_{0}}\right|[/tex] > |f(c0)|. In other words, |f ' (c1)| > |f(c0)|.

Now, we are given that |f ' (x)|≤ |f(x)| for all x. This requires that |f(c0)| < |f(c1)|, because |f ' (c1)| > |f(c0)|, and if |f(c0)| > |f(c1)|, then |f ' (c1)| > |f(c1)|.

This means that for every x and x0 such that |x|> |x0| > 0, |f(x)|< |f(x0)|. (In other words, for all points within 1 of 0, the absolute value of the function must decrease as the points get further away from 0.)

Now because f is continuous, this requires that for all ε > 0 (and < 1, but that's rather trivial for the area of the number line we're talking about here), there is some x1 such that |x1| < ε and |f(x1)| > |f(ε)|. (In other words, no matter how close you get to 0, there is some point even closer where the function has a greater absolute value.)

This creates a contradiction, because f is continuous and f(0)=0. This is fairly obvious, because the closer to 0 we get, the greater the absolute value of the function must be, and so there will be a jump - a discontinuity - between 0 and the function's value.

In technical language, I *think* - and it's *really* late now so don't hold me to this - we explain it this way: If f is continuous, then for all ε > 0, there must be some δ > 0 such that when |x-0| < δ, |f(x)-f(0)| < ε. However, in our function, for all ε > 0, there is no δ which will meet the requirement. For every possible ε and δ, we can find some x so that |f(x)-f(0)| > ε. (In other words, no matter how big a epsilon-neighborhood we try, and no matter how small a delta-neighborhood we come up with to pair with it, there is some point inside that delta-neighborhood which is outside of the epsilon-neighborhood.)

I'm pretty sure this works, but I've been wrong before...
 
  • #20
lurflurf said:
hint

assume wlog x<=y
let k be an integer, k=0,1,2,...,n
let z_k=x+k(y-x)/n
thus
z_0=x
z_n=y
z_k-z_{k-1}=(y-x)/n
x<=z_k<=y
|f(y)-f(x)|=|f(z_n)-f(z_0)|
=|Σ(f(z_k)-f(z_{k-1}))|
<=Σ|f(z_k)-f(z_{k-1})|
=Σ|f'(t_k)|(y-x)/n (mean value theorem with z_{k-1}<t_k<z_k)
<=Σ|f(t_k)|(y-x)/n

Isn't that really the same information I would get from:

[tex]|f(y)-f(x)|=|\int_x^y f'(t) dt| \le \int_x^y |f'(t)| dt \le \int_x^y |f(t)| dt [/tex]

What does wlog x<=z have to do with anything?
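A numeric illustration of the first inequality in that chain (the last inequality is the problem's hypothesis itself, which no nonzero f satisfies near 0). The sample f' = cos is a hypothetical choice:

```python
import math

# Check |∫ f'| <= ∫ |f'| numerically over [0, 2] for a hypothetical sample
# f'(t) = cos(t), using the midpoint rule. ∫ f' should come out to sin(2).
def fprime(t):
    return math.cos(t)

x, y, n = 0.0, 2.0, 100_000
h = (y - x) / n
ts = [x + (k + 0.5) * h for k in range(n)]   # midpoint sample points

int_fprime = sum(fprime(t) for t in ts) * h       # ≈ ∫ f' dt = sin(2)
int_abs_fprime = sum(abs(fprime(t)) for t in ts) * h

print(abs(int_fprime) <= int_abs_fprime)
```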
 
  • #21
Dick said:
Isn't that really the same information I would get from:

[tex]|f(y)-f(x)|=|\int_x^y f'(t) dt| \le \int_x^y |f'(t)| dt \le \int_x^y |f(t)| dt [/tex]

What does wlog x<=z have to do with anything?

I'm not sure if you realize this, and I apologize if I'm just telling you something you already know... it looked like in an earlier post on this problem you treated wlog as a function of some kind, perhaps a logarithm, when you wrote wlog(x)<y. If I'm mistaken, most sincere apologies.

wlog x <= z is shorthand for "without loss of generality, x <=z." In other words, the proof would work the same way were z <=x, you'd just have to exchange the variables. Or, put yet another way, you're picking two values and calling the smaller one x and the larger one z.
 
  • #22
Why did nobody use TinyTim's hint?
|f'(x)|<=|f(x)| means
[tex]
\frac{|f'(x)|}{|f(x)|}\leq 1
[/tex]
If we neglect the absolute signs (I leave the fine tuning to the OP:smile:) we get
[tex]
\frac{d}{dx}\log f(x)\leq 1
[/tex]
and so
[tex]
\log f(x)\leq x+c \Leftrightarrow f(x)\leq e^{x+c}
[/tex]
Since f(0)=0, c must be "-infinity", so
[tex]
f(x)\leq e^{x-\infty}=0
[/tex]
 
  • #23
Pere Callahan said:
Why did nobody use TinyTim's hint?

o:) why does nobody take me seriously? o:)

i see a lot :smile: from my little bowl :wink:
 
  • #24
Lazerlike42 said:
I'm not sure if you realize this, and I apologize if I'm just telling you something you already know... it looked like in an earlier post on this problem you treated wlog as a function of some kind, perhaps a logarithm, when you wrote wlog(x)<y. If I'm mistaken, most sincere apologies.

wlog x <= z is shorthand for "without loss of generality, x <=z." In other words, the proof would work the same way were z <=x, you'd just have to exchange the variables. Or, put yet another way, you're picking two values and calling the smaller one x and the larger one z.

Oohhhhh! I didn't even think of interpreting that is as an abbreviation. Thanks!
 
  • #25
Pere's solution seems much simpler than mine, but for the curious I've checked it with my Analysis professor who agrees with it.

However, I made one error that requires correcting. I said to choose any c0 where |f(c0)-0| < 1. I made two blunders: first, I confused f(c0) with c0, and second, I didn't take into account that the function could equal 0 over some large interval containing 0. For example, if f(x)=0 over [0,4], my argument wouldn't work.

The correct thing to do would be to say:

choose a c0 and a d such that f(d)=0, f(c0) /= 0 and |c0-d| < 1. (In other words, choose a point c0 where the function doesn't equal 0 within a distance of 1 from some point d where the function equals 0).

Then by the Mean Value Theorem we know that there is some c1 such that |d| < |c1| < |c0| and f ' (c1) = [tex]\frac{f(c_{0})-f(d)}{c_{0}-d}[/tex]= [tex]\frac{f(c_{0})}{c_{0}-d}[/tex].

With that, we know that |f'(c1)| > |f(c0)|, because |c0-d| < 1. The rest of the proof follows the same way. What I've done here is just adjust the choice of c0 so that the proof works even if the function doesn't start moving away from 0 within (-1,1).
 
  • #26
Pere Callahan said:
Why did nobody use TinyTim's hint?
|f'(x)|<=|f(x)| means
[tex]
\frac{|f'(x)|}{|f(x)|}\leq 1
[/tex]
If we neglect the absolute signs (I leave the fine tuning to the OP:smile:) we get
[tex]
\frac{d}{dx}\log f(x)\leq 1
[/tex]
and so
[tex]
\log f(x)\leq x+c \Leftrightarrow f(x)\leq e^{x+c}
[/tex]
Since f(x)=0, c must be "-infinity", so
[tex]
f(x)\leq e^{x-\infty}=0
[/tex]

With all due respect to Tiny Tim, I really don't like that 'proof'. The problem only states that f is differentiable and f(0)=0. They can be pretty wild functions. x^2*sin(1/x) is an example. I don't like the looks of the 'fine tuning' problem. There really ought to be something more direct.
 
  • #27
What about this:

Let
[tex]
M=\sup\{|f(x)|:0\leq x\leq 1\}
[/tex]
and
[tex]
x_0=\inf\{x:|f(x)|=M\}
[/tex]
The function f is differentiable, hence continuous, so there is at least one x in [0,1] with |f(x)|=M. By the mean value theorem there exists [itex]\xi\in (0,x_0)[/itex] with [itex]f'(\xi)=f(x_0)[/itex]. By assumption it follows that
[tex]
|f(\xi)|\geq|f'(\xi)|=|f(x_0)|=M
[/tex]
So, [itex]|f(\xi)|\geq M[/itex] and [itex]\xi<x_0[/itex], but [itex]x_0[/itex] was the smallest such x, so in fact [itex]x_0[/itex] must be zero, and so must M.

Any objections to this proof?:smile:
 
  • #28
Lazerlike42 said:
This means that for every x and x0 such that |x|> |x0| > 0, |f(x)|< |f(x0)|. (In other words, for all points within 1 of 0, the absolute value of the function must decrease as the points get further away from 0.)

Now because f is continuous, this requires that for all ε > 0 (and < 1, but that's rather trivial for the area of the number line we're talking about here), there is some x1 such that |x1| < ε and |f(x1)| > |f(ε)|. (In other words, no matter how close you get to 0, there is some point even closer where the function has a greater absolute value.)

This creates a contradiction, because f is continuous and f(0)=0. This is fairly obvious, because the closer to 0 we get, the greater the absolute value of the function must be, and so there will be a jump - a discontinuity - between 0 and the function's value.

In technical language, I *think* - and it's *really* late now so don't hold me to this - we explain it this way: If f is continuous, then for all ε > 0, there must be some δ > 0 such that when |x-0| < δ, |f(x)-f(0)| < ε. However, in our function, for all ε > 0, there is no δ which will meet the requirement. For every possible ε and δ, we can find some x so that |f(x)-f(0)| > ε. (In other words, no matter how big a epsilon-neighborhood we try, and no matter how small a delta-neighborhood we come up with to pair with it, there is some point inside that delta-neighborhood which is outside of the epsilon-neighborhood.)

I'm pretty sure this works, but I've been wrong before...

Your epsilon-delta reasoning is not correct. Given an epsilon>0 and assuming you have an x with f(x)>epsilon, there is no reason, in general, that prevents delta=x/2 from working for this epsilon. It is not certain that you can "propagate" this high value of f at x arbitrarily close to zero by only finding a number y<x with f(y)>=f(x). For example the successive y's you find in this way may have distances of 1/2, 1/4, 1/8, ..., 1/2^n, ... so the farthest you would come would be x-1...
 
  • #29
Pere Callahan said:
Your epsilon-delta reasoning is not correct. Given an epsilon>0 and assuming you have an x with f(x)>epsilon there is no reason, in general, that prevents delta=x/2 to work for this epsilon. It is not certain that you can "propagate" this high value of f at x arbitrarily close to zero by only finding a number y<x with f(x)>=f(y). For example the successive y's you find in this way may have a distance of 1/2, 1/4, 1/16,...1/2^n, ... so the farthest you would come would be x-1...

I think you are correct. The point we would have to look at if we were to work it out with epsilon and delta would be 0, that is, prove that the resulting function would not be continuous at 0. As I said, I was working on that pretty late and was fuzzy about that portion. The point is that there is clearly a discontinuity there, because every value of f(x) moving closer to that 0 point must be greater than any further away, and yet at the 0 point it must equal 0. There's a jump.

My professor had a simpler suggestion, which was to approach it using the sequential criterion. I'm going to use my fixed argument, where we're not talking about 0 but about d, which is a point where f(d)=0. As I said, this eliminates the problem that my original argument wouldn't work if in fact the function remained at 0 for a while before increasing or decreasing. Basically, if we take a sequence xn which approaches d, we find that |f(xn)| approaches some limit not equal to 0. However, f(d)=0, and so by the sequential criterion, the function is discontinuous at d. However, we know the function is continuous, and so the supposed point y where f(y) /= 0 must not exist.

Also, now with a more lucid mind, here is the proper epsilon-delta reasoning. We know from my previous work that in order to maintain our property that |f ' (x)| <= |f(x)| for all x, then given any two points y and y0 in (d, c0), |y0|<|y| implies that |f(y0)| > |f(y)|.

Given that, suppose f is continuous at d. Then for every ε > 0, there must be some δ > 0 such that whenever |x-d| < δ, |f(x) - f(d)| < ε. Now choose ε < |f(c0)|. Then no matter what δ we choose, there is some x with |x-d| < δ and |f(x)-f(d)| > ε. In other words, the second we move an infinitesimally small amount to the right of d, |f(x)| > |f(c0)| > ε.
 
  • #30
Pere Callahan said:
What about this:

Let
[tex]
M=\sup\{|f(x)|:0\leq x\leq 1\}
[/tex]
and
[tex]
x_0=\inf\{x:|f(x)|=M\}
[/tex]
The function f is differentiable, hence continuous, so there is at least one x in [0,1] with |f(x)|=M. By the mean value theorem there exists [itex]\xi\in (0,x_0)[/itex] with [itex]f'(\xi)=f(x_0)[/itex]. By assumption it follows that
[tex]
|f(\xi)|\geq|f'(\xi)|=|f(x_0)|=M
[/tex]
So, [itex]|f(\xi)|\geq M[/itex] and [itex]\xi<x_0[/itex], but [itex]x_0[/itex] was the smallest such x, so in fact [itex]x_0[/itex] must in fact be zero, and so must M.

Any objections to this proof?:smile:


I think there are a few problems... It looks like you are trying to do something very similar to what I did, so I like it :). That being said, here are the issues I see.

First, I'm not sure how you conclude that the Mean Value Theorem requires there to be a point [itex]\xi\in (0,x_0)[/itex] with [itex]f'(\xi)=f(x_0)[/itex]. It would require a point [itex]\xi\in (0,x_0)[/itex] with [itex]f'(\xi)=\frac{f(x_0)}{x_0}[/itex]. In other words, somewhere between x0 and 0, there is a point where f ' is equal to the slope from f(x0) to 0.

Second, this argument only holds if we assume that there is some point x in (0,1) with f(x) /= 0. It's entirely possible that this function is, for example:

f(x) = {0 if x < 12, or (x-12)^2 if x >= 12}

That was the big mistake I made last night, and which I corrected this afternoon.

While I'm writing to you, I'd like to ask about your first proof... how did you get from [tex]\frac{f'(x)}{f(x)}[/tex] <= 1 to [tex]\frac{d}{dx}log f(x)[/tex] <=1? I'm not accusing you of being wrong (though I suppose you could be :p ). I just don't see it and want to understand.
 
  • #31
Thank you for your reply. Oh, yes I forgot about the [itex]x_0[/itex] in the denominator.. stupid...anyway: so there exists [itex]\xi\in(0,1)[/itex] with
[tex]
f'(\xi)=\frac{f(x_0)}{x_0}
[/tex]
and so
[tex]
|f(\xi)|\geq|f'(\xi)|=\left|\frac{f(x_0)}{x_0}\right|\geq|f(x_0)|
[/tex]
Luckily, [itex]|x_0|\leq 1[/itex], so this mistake is easily corrected. The proof then shows, you're right, that f=0 on [0,1], in particular that f(1)=0. But then you can do the same type of thing to show that f=0 on [1,2]... and inductively on the whole of R.

As for your question: Just differentiate log f(x) using the chain rule:smile:
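For the record, the inductive extension to all of R mentioned above can be sketched like this (a sketch under the stated hypotheses, not a full write-up):

```latex
% If f \equiv 0 on [0,n], define g(x) := f(x+n). Then
%   g(0) = f(n) = 0, \qquad |g'(x)| = |f'(x+n)| \le |f(x+n)| = |g(x)|,
% so g satisfies the same hypotheses, and the [0,1] argument gives
% f \equiv 0 on [n, n+1]. Inductively, f \equiv 0 on [0,\infty).
% For the negative axis, apply the same reasoning to h(x) := f(-x),
% which satisfies h(0) = 0 and |h'(x)| = |f'(-x)| \le |f(-x)| = |h(x)|.
```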
 
  • #32
Pere Callahan said:
Thank you for your reply. Oh, yes I forgot about the [itex]x_0[/itex] in the denominator.. stupid...anyway: so there exists [itex]\xi\in(0,1)[/itex] with
[tex]
f'(\xi)=\frac{f(x_0)}{x_0}
[/tex]
and so
[tex]
|f(\xi)|\geq|f'(\xi)|=\left|\frac{f(x_0)}{x_0}\right|\geq|f(x_0)|
[/tex]
Luckily, [itex]|x_0|\leq 1[/itex], so this mistake is easily corrected. The proof then shows, you're right, that f=0 on [0,1], in particular that f(1)=0. But then you can do the same type of thing to show that f=0 on [1,2]... and inductively on the whole of R.

As for your question: Just differentiate log f(x) using the chain rule:smile:

Pere, I updated my post responding to yours about my epsilon-delta reasoning with proper epsilon-delta, if you happen to care all that much. :)
 
  • #33
Shouldn't there be some sort of simple proof by induction?

perhaps something along the lines of the following:

If you move an infinitesimally small distance [itex]\epsilon[/itex] from the point [itex]x_0[/itex] (to the left or the right), then Taylor's theorem tells you that [itex]f(x_0 \pm \epsilon)\approx f(x_0)\pm f'(x_0)\epsilon[/itex]. Since [itex]|f'(x)|\leq |f(x)|[/itex] and f(0)=0, f'(0)=0 and therefore [itex]f(\pm \epsilon)\approx 0+(0)\epsilon=0[/itex]. And [itex]|f'(x)|\leq |f(x)|\implies f'(\pm \epsilon)=0[/itex] and using the same argument as before, [itex]f(\pm 2\epsilon)=0[/itex]...repeat ad nauseam
 
  • #34
Nah. Don't like that much either. That's actually sort of a Riemann sum version of Tiny Tim's original suggestion of just integrating f'(t)/f(t) and ignoring potential pathologies of the function f. I was really hoping there was a way of just stating this in a simple way. But maybe there's not. You can certainly reduce the problem to saying there is an x0 such that f(x0)=0 and an x1 such that f(x1)>0 and f(x)>0 for x between x0 and x1. Can anyone show f'(x)/f(x) is even integrable between x0 and x1? This is really more of a real analysis type question. If f(x) is a 'normal average everyday function' (i.e. has a convergent power series), as I've pointed out before, it's easy.
 
  • #35
Dick said:
Isn't that really the same information I would get from:

[tex]|f(y)-f(x)|=|\int_x^y f'(t) dt| \le \int_x^y |f'(t)| dt \le \int_x^y |f(t)| dt [/tex]

What does wlog x<=z have to do with anything?
without loss of generality assume x<=y to avoid considering the cases y<x

The integral was a wrong turn.
I noticed this is Baby Rudin exercise 5.26.

Show f=0 on [0,1] and f=0 on R will follow.
Fix 0<x<1 and let M:=sup{|f(s)| : 0<=s<=x} (finite, since f is continuous).
For any s in (0,x]:
f(s)=f(s)-f(0)=(s-0)f'(t)=s f'(t) (mean value theorem with 0<t<s<1)
|f(s)|=s|f'(t)|<=s|f(t)|<=x M (by the given inequality)
Taking the sup over s: M<=x M
(1-x)M<=0
M=0
So f=0 on [0,x] for every x<1, and
f(1)=f(1-)=0 by continuity.
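The key step here (M<=xM with x<1 forces M=0) can also be seen by iterating the bound; a minimal numeric sketch, with a hypothetical starting bound M = 1.0:

```python
# Iterating the bound |f| <= x * M on [0, x] with x < 1: each pass of the
# MVT argument multiplies the bound by x, so it contracts to 0, which is
# why M = sup |f| must be 0. The starting value M = 1.0 is hypothetical.
x = 0.9
M = 1.0
for _ in range(500):
    M = x * M          # one more pass of the MVT bound
print(M < 1e-20)
```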
 
