Limit Problem

Homework Helper
1. From $f(x)= f(0) + E(x)$, setting $x = 0$ gives $f(0)=f(0) + E(0)$, so $E(x)=0$ when $x$ is zero, and that limit follows.
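For the record, the continuity step can be written in one line: since $f$ is continuous at $0$ and $E(x) = f(x) - f(0)$,

$$\lim_{x \to 0} E(x) = \lim_{x \to 0} \left( f(x) - f(0) \right) = f(0) - f(0) = 0.$$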

2. This one completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C where |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I'm correct.

I would also like to say that I really appreciate your help, Hurkyl; it has been a long time since I've actually learned any new mathematics. For the last year I've been picking up a new fact or a few tricks here and there, but I'm actually doing something now. Thanks a lot.

mjsd
Homework Helper
mjsd - But look at Hurkyl's comment in that thread... Sure, he was not replacing the polynomials with the leading term right from the start, but he did on an equivalent expression! The fact that he couldn't do it at the start but can on an equivalent expression confuses me even more!
With all those subsequent posts after this one, I don't think I need to add much more, except to say that I understood what your concern was; I just wasn't clear enough in my response. My point was that I believe what master Hurkyl actually did in that part was to divide the top and bottom by x and then look at the limit as x -> infty of each individual term... and the terms that did disappear did so because 1/x -> 0, and NOT because x^2 >> x, which would amount to approximating (x^2 + x) by x^2, etc.
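To make that distinction concrete, here is a small numerical sketch (my own illustrative rational function, not the exact expression from the original thread):

```python
# Illustrative only: lim_{x -> infty} (x^2 + x) / (2x^2 - 3).
# Dividing top and bottom by x^2 gives (1 + 1/x) / (2 - 3/x^2);
# the 1/x and 3/x^2 terms vanish on their own because 1/x -> 0,
# with no step of the form "x^2 + x is approximately x^2".
def ratio(x):
    return (x**2 + x) / (2 * x**2 - 3)

def rewritten(x):
    # the same expression after dividing numerator and denominator by x^2
    return (1 + 1 / x) / (2 - 3 / x**2)

for x in [10.0, 100.0, 10000.0]:
    print(x, ratio(x), rewritten(x))
# Both columns approach the limit 1/2 as x grows.
```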

Hurkyl
Staff Emeritus
Gold Member
2. This one completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C where |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I'm correct.
I thought this might give some trouble, but I figured I'd give you a shot at it before giving the big hints.

The key is this theorem (the extreme value theorem):
Let f be continuous on the interval I = [a, b]. Then f has a maximum and a minimum value on I.​

I don't remember if it's taught in elementary calculus. It might be tough to prove, since it makes essential use of the completeness of the reals. It is false over the rationals: $f(x) = 1 / (x^2 - 2)$ is an example of a continuous function on the rationals that is unbounded on [1, 2]. So if you haven't seen this theorem, it might be worth simply accepting it for now, and worrying about proving it as a separate exercise.
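To see that unboundedness concretely, one can evaluate $1/(x^2 - 2)$ at the continued-fraction convergents of $\sqrt{2}$ (a quick numerical sketch of my own; the convergents themselves are standard):

```python
from fractions import Fraction

def f(x):
    # f(x) = 1 / (x^2 - 2): defined at every rational x, since
    # x^2 = 2 has no rational solution
    return 1 / (x * x - 2)

# Continued-fraction convergents of sqrt(2), all lying in [1, 2]:
convergents = [Fraction(3, 2), Fraction(7, 5), Fraction(17, 12), Fraction(41, 29)]
for q in convergents:
    print(q, abs(f(q)))
# |f| = 4, 25, 144, 841: it blows up as the rationals close in on
# sqrt(2), even though f is continuous at every rational in [1, 2].
```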

The desired inequality then follows from the mean value theorem. Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.
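Spelling out the mean-value-theorem route (a sketch, assuming $f'$ is also continuous on the interval so that the theorem above applies to it): for $x \neq 0$,

$$\frac{E(x)}{x} = \frac{f(x) - f(0)}{x} = f'(\xi) \quad \text{for some } \xi \text{ between } 0 \text{ and } x,$$

so taking $C$ to be the maximum value of $|f'|$ on the interval gives $|E(x)| \le C|x|$.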

(Drawing a picture might be interesting too)

I Would also like to ask that I really appreciate your help Hurkyl, it has been a long time since I've actually learned any new mathematics. For the last year I've been learning a new fact or some tricks here and there, but I'm actually doing something now. Thanks alot.
I had that problem once; I had no idea that there was mathematics beyond trigonometry, so all I could do was study that. Alas, I know of too many things to study now.

My point was that I believe what master Hurkyl actually did in that part was to divide top and bottom by x
If I were to write a formal proof, that's probably what I would do. But it's not how I think of it: my actual thought processes are actually what I said in that other post. And one can write a proof along those lines too. Typically, this is all you need:

$$\frac{f + o(f)}{g + o(g)} = \frac{f}{g}\left(1 + o(1) \right),$$

though, if necessary, you can improve the error term on the r.h.s. if you have better bounds on the errors on the l.h.s.
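One way to justify that identity (a standard manipulation, written out here for completeness): factor the leading terms out of the numerator and denominator,

$$\frac{f + o(f)}{g + o(g)} = \frac{f\left(1 + o(1)\right)}{g\left(1 + o(1)\right)} = \frac{f}{g} \cdot \frac{1 + o(1)}{1 + o(1)} = \frac{f}{g}\left(1 + o(1)\right),$$

using that $\frac{1}{1 + o(1)} = 1 + o(1)$ once the denominator is bounded away from zero.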

Homework Helper
Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.
Ok. From f(x) = f(0) + E(x), I get that proving |E(x) / x| < C is the same as proving $\frac{f(x)-f(0)}{x} < C$ for some C. Taking the limit as x approaches zero, this becomes $f'(0) \le C$.

I'm a tiny bit lost as to what that actually achieves... Does L mean anything in particular, or is it just any number?

I don't really understand what I'm trying to prove. I know f is continuous, therefore all its values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some x, we just have to choose a really large C. Is that somewhat correct, even if not rigorous?

I prefer to see this from the standpoint of Newton's generalized binomial formula:

$$\sqrt{x^2+x} = (x^2)^{1/2} + \frac{1}{2}(x^2)^{-1/2}x + \frac{1}{2}\left(-\frac{1}{2}\right)\frac{1}{2!}(x^2)^{-3/2}x^2 + \cdots =$$

$$x+\frac{1}{2}-\frac{1}{8x}+\frac{1}{16x^2}-\cdots$$

The formula is valid for $$\sqrt{Y+a}$$ whenever $|a| < Y$ (which here, with $Y = x^2$ and $a = x$, means $x > 1$).
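A quick numerical check of the truncated expansion (my own script, not part of the original post):

```python
import math

def series(x):
    # First four terms of the expansion of sqrt(x^2 + x) for x > 1:
    # x + 1/2 - 1/(8x) + 1/(16 x^2) - ...
    return x + 0.5 - 1 / (8 * x) + 1 / (16 * x**2)

for x in [2.0, 10.0, 100.0]:
    exact = math.sqrt(x**2 + x)
    print(x, exact, series(x), abs(exact - series(x)))
# The error shrinks rapidly as x grows, consistent with the
# convergence condition |a| < Y (here a = x, Y = x^2, i.e. x > 1).
```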

I don't really understand what I'm trying to prove. I know f is continuous, therefore all its values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some x, we just have to choose a really large C. Is that somewhat correct, even if not rigorous?
You have to use the fact that $f(x)$ is differentiable as well. Take the function $f(x)=x^{1/3}$. All values of this function are finite, but $|f(x)|$ exceeds $C|x|$, for any $C$ whatsoever, when $x$ is sufficiently close to 0.
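A numerical sketch of that counterexample (again my own illustration): the ratio $|f(x)|/|x| = |x|^{-2/3}$ is unbounded near 0.

```python
def cbrt(x):
    # real cube root for x >= 0
    return x ** (1.0 / 3.0)

# For f(x) = x^(1/3), the ratio f(x) / x = x^(-2/3) blows up as
# x -> 0+, so no constant C can satisfy |f(x)| <= C |x| near 0.
for x in [1e-1, 1e-3, 1e-6]:
    print(x, cbrt(x) / x)
# ratios roughly 4.64, 100, 10000: unbounded as x -> 0
```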