Composition goes to 0? f(x)/x -> 0 and g(x) -> 0 prove g(f(x))/x->0

  • Thread starter: brian44
  • Tags: Composition
brian44
I've been looking at different math books with analysis problems to get more perspectives on how to approach them. I was following along in the "Derivatives" section of the book "Mathematical Thinking" by D'Angelo and West (second edition) and arrived at a lemma, with accompanying proof, that doesn't make sense to me.

16.5 Lemma: Suppose e is an error function (any function such that \lim_{h \rightarrow 0} \frac{e(h)}{h} = 0). If s(h) \rightarrow 0 then the composition e \circ s is an error function.

The proof given goes like this: for every \epsilon > 0 we can choose \delta > 0 such that |t| < \delta \Rightarrow |e(t)| \le |t| \epsilon. Therefore |e(s(h))| \le |s(h)| \epsilon whenever |s(h)| < \delta. Since s(h) \rightarrow 0 we can choose \delta' such that |h| < \delta' \Rightarrow |s(h)| < \delta, and hence |h| < \delta' \Rightarrow |e(s(h))/h| < \epsilon. This proves that (e \circ s)(h)/h \rightarrow 0. \blacksquare
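To make the definition concrete, here is a quick numerical sketch of the error-function property; the sample function e(h) = h^2 is my own illustration, not from the book:

```python
# Sketch: e(h) = h^2 is an error function, since e(h)/h = h -> 0.
def e(h):
    return h ** 2

# The ratio e(h)/h shrinks along with h.
for h in [0.1, 0.01, 0.001]:
    print(h, e(h) / h)  # the ratio here is just h itself
```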

QUESTION:
However, I don't follow the last step. It seems to me that we only have |h| < \delta' \Rightarrow |e(s(h))/s(h)| \le \epsilon; how can we say this implies |e(s(h))/h| < \epsilon? The only way this would be true is if |s(h)| \le |h|, but as s(h) is some function of h, I don't see how this could hold universally.

I would greatly appreciate help; it's bugging me that I can't figure it out.

Maybe the proof is wrong and the error was overlooked, if so is there another way to prove it?

Thanks,
Brian
 
|h| < \delta \Rightarrow |e(s(h))/h| < \epsilon

Note that

|h| < \delta \Rightarrow |e(s(h))/h| = |e(\delta)/\delta| < \epsilon

which of course implies what was to be proven.
 
I don't see why this is so...

|h| < \delta \Rightarrow |e(h)/h| < \epsilon

That holds for that specific h; it does not imply |e(h)/d| < \epsilon for any d < \delta, and obviously that is false because we could make d arbitrarily small while fixing e(h). Similarly, |h| < \delta and |s(h)| < \delta in no way implies they are equal; e.g. define s(h) = h/2, so that s(h) < \delta if h < \delta. In fact, even if f(x) goes to 0, this does not imply f(x) \le x for small x: consider sqrt(x) on [0, \infty). sqrt(x) goes to 0 as x goes to 0, but sqrt(x) > x for 0 < x < 1!
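The sqrt observation is easy to check numerically; a minimal sketch, with sample points of my own choosing:

```python
import math

# For 0 < x < 1, sqrt(x) tends to 0 along with x, yet still exceeds x.
for x in [0.25, 0.04, 0.0001]:
    print(x, math.sqrt(x), math.sqrt(x) > x)  # the comparison is True each time
```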

I might be misunderstanding what you are trying to say, in any case I fail to see your point.

Note the problem I was having with understanding the proof is due to this implication:

|h| < \delta' \Rightarrow |e(s(h))/h| < \epsilon

I don't see why it holds; obviously, if it does hold, the proof is finished.
 
An idea that may work:

For a particular h (with absolute value less than both \delta and \delta'): if |s(h)| \le |h| then obviously |e(s(h))/h| \le |e(s(h))/s(h)| < \epsilon. On the other hand, if |s(h)| > |h|, then because e(x) \rightarrow 0 as x \rightarrow 0, if we could say |e(h)| > |e(s(h))| then we could say |e(s(h))/h| \le |e(h)/h| < \epsilon. However, I am not sure this holds.
 
I realized my previous example (using sqrt(x)) gives a counterexample which disproves the given lemma, i.e. the book is WRONG. I wasted all that time on a careless error in a book... They probably meant to require that s(h) is also an error function; then it would hold, since s(h)/h \rightarrow 0 forces |s(h)| \le |h| for small h. Just going to 0 isn't enough.

Counter-example: take e(h) = h^2, s(h) = sqrt(h) (or sqrt(|h|) on the reals). Then e(h)/h = h \rightarrow 0 as h \rightarrow 0, and sqrt(h) \rightarrow 0 as h \rightarrow 0. But e(s(h)) = h (or |h| on the reals), so e(s(h))/h \rightarrow 1 (or the limit does not exist on the reals) as h \rightarrow 0, hence e \circ s is not an error function.
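Both the counterexample and the proposed fix (requiring s to be an error function as well) can be checked numerically; a sketch, with helper names of my own:

```python
import math

def e(h):
    return h ** 2        # an error function: e(h)/h = h -> 0

def s(h):
    return math.sqrt(h)  # s(h) -> 0 as h -> 0+, but s is NOT an error function

def s_err(h):
    return h ** 2        # also an error function

for h in [0.1, 0.001, 1e-6]:
    # e(s(h))/h is identically 1 for h > 0, so e o s is not an error function,
    # while e(s_err(h))/h = h^3 -> 0, so composing two error functions works here.
    print(h, e(s(h)) / h, e(s_err(h)) / h)
```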
 