Limit Problem

  • Thread starter Gib Z
  • #26
Gib Z
Homework Helper
1. [itex]f(x)= f(0) + E(x)[/itex]; setting [itex]x=0[/itex] gives [itex]f(0)=f(0) + E(0)[/itex], so [itex]E(0)=0[/itex], and the limit follows.
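(Spelling out the remaining half-step, assuming the limit in question is [itex]\lim_{x\to 0} E(x) = 0[/itex] and that f is continuous at 0:)

[tex]\lim_{x\to 0} E(x) = \lim_{x\to 0}\bigl(f(x)-f(0)\bigr) = f(0)-f(0) = 0.[/tex]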

2. It completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C with |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I'm correct.

I would also like to say that I really appreciate your help, Hurkyl; it has been a long time since I've actually learned any new mathematics. For the last year I've been picking up a new fact or a few tricks here and there, but now I'm actually doing something. Thanks a lot.
 
  • #27
mjsd
Homework Helper
mjsd - But look at Hurkyl's comment in that thread... Sure, he was not replacing the polynomials with the leading term right from the start, but he did on an equivalent expression! The fact that he couldn't do it at the start but can on an equivalent expression confuses me even more!
With all those subsequent posts after this, I don't think I needed to add anything more, except to say that I understood what your concern was; I just wasn't clear enough in my response. My point was that I believe what master Hurkyl actually did in that part was to divide top and bottom by x and then look at the limit as x → ∞ of each individual term... and the terms that disappeared did so because 1/x → 0, NOT because x^2 >> x and hence approximating (x^2 + x) by x^2, etc.
 
  • #28
Hurkyl
Staff Emeritus
Science Advisor
Gold Member
2. It completely loses me. All I can reduce it to is that proving that, for some C, |E(x)| < C|x| is equivalent to the condition that there exists a C with |E(x)| < CL. So I guess we can just choose a really large C, but I doubt I'm correct.
I thought this might give some trouble, but I figured I'd give you a shot at it before giving the big hints.

The key is this theorem:
Let f be continuous on the interval I = [a, b]. Then f has a maximum and a minimum value on I.​

I don't remember if it's taught in elementary calculus. It might be tough to prove, since it makes essential use of the completeness of the reals. It's false over the rationals: [itex]f(x) = 1 / (x^2 - 2)[/itex] is an example of a continuous function on the rationals that is unbounded on [1, 2] (since [itex]x^2 - 2[/itex] has no rational root, f is defined and continuous at every rational in [1, 2], yet rational x can come arbitrarily close to [itex]\sqrt{2}[/itex], making f arbitrarily large). So if you haven't seen this theorem, it might be worth simply accepting it for now, and worrying about proving it as a separate exercise.

The desired inequality then follows from the mean value theorem. Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.

(Drawing a picture might be interesting too)
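Spelled out, the mean-value-theorem route looks like this (a sketch, assuming f' is continuous, and hence bounded by the theorem above, on some interval [itex][-\delta, \delta][/itex] around 0):

[tex]E(x) = f(x) - f(0) = f'(\xi)\,x \quad \text{for some } \xi \text{ between } 0 \text{ and } x,[/tex]

[tex]|E(x)| \le \Bigl(\max_{|\xi| \le \delta} |f'(\xi)|\Bigr)\,|x| = C\,|x| \quad \text{for } |x| \le \delta.[/tex]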


I would also like to say that I really appreciate your help, Hurkyl; it has been a long time since I've actually learned any new mathematics. For the last year I've been picking up a new fact or a few tricks here and there, but now I'm actually doing something. Thanks a lot.
I had that problem once; I had no idea that there was mathematics beyond trigonometry, so all I could do was study that. Alas, I know of too many things to study now. :frown:


My point was that I believe what master Hurkyl actually did in that part was to divide top and bottom by x
If I were to write a formal proof, that's probably what I would do. But it's not how I think of it: my actual thought process is what I said in that other post. And one can write a proof along those lines too. Typically, this is all you need:

[tex]\frac{f + o(f)}{g + o(g)}
= \frac{f}{g}\left(1 + o(1) \right),[/tex]

though, if necessary, you can improve the error term on the r.h.s. if you have better bounds on the errors on the l.h.s.
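For instance (an illustrative example, not necessarily the limit this thread started from), with [itex]f = x^2[/itex], [itex]o(f) = x[/itex], [itex]g = 2x^2[/itex], [itex]o(g) = -x[/itex]:

[tex]\frac{x^2 + x}{2x^2 - x} = \frac{x^2}{2x^2}\bigl(1 + o(1)\bigr) = \frac{1}{2}\bigl(1 + o(1)\bigr) \to \frac{1}{2} \quad (x \to \infty).[/tex]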
 
  • #29
Gib Z
Homework Helper
Alternatively, you could prove |E(x) / x| < C from the definition of the derivative.
OK. From f(x) = f(0) + E(x), I get that proving |E(x) / x| < C is the same as proving [itex]\left|\frac{f(x)-f(0)}{x}\right| < C[/itex] for some C. Taking the limit as x approaches zero, it becomes [itex]|f'(0)| < \lim_{x\to 0} C[/itex].

I'm a tiny bit lost as to what that actually achieves... Does L mean anything in particular, or is it just any number?

I don't really understand what I'm trying to prove. I know f is continuous, therefore all its values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some C, we just have to choose a really large C; is that somewhat correct, even if not rigorous?
 
  • #30
I prefer to see this from the standpoint of Newton's generalized binomial formula:

[tex]\sqrt{x^2+x} = (x^2)^{1/2} + \frac{1}{2}(x^2)^{-1/2}x + \frac{\frac{1}{2}\left(-\frac{1}{2}\right)}{2!}(x^2)^{-3/2}x^2 + \cdots[/tex]

[tex]= x+\frac{1}{2}-\frac{1}{8x}+\frac{1}{16x^2}- \cdots[/tex]

The expansion of [tex]\sqrt{Y+a}[/tex] is valid whenever Y > |a|.
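If the limit being discussed is [itex]\lim_{x\to\infty}\left(\sqrt{x^2+x}-x\right)[/itex] (my guess at the thread's original problem), the first two terms give it at once:

[tex]\sqrt{x^2+x}-x = \frac{1}{2}-\frac{1}{8x}+\cdots \to \frac{1}{2} \quad (x\to\infty).[/tex]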
 
  • #31
I don't really understand what I'm trying to prove. I know f is continuous, therefore all its values are finite, as also shown by your theorem. Since f(x) is finite, and so is f(0), the error must also be finite. So to show that it is less than C|x| for some C, we just have to choose a really large C; is that somewhat correct, even if not rigorous?
You have to use the fact that [itex]f(x)[/itex] is differentiable as well. Take the function [itex]f(x)=x^{1/3}[/itex]. All values of this function are finite, but the function is not less than [itex]C|x|[/itex] for any [itex]C[/itex] whatsoever when [itex]x[/itex] is sufficiently close to 0.
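A one-line check of that counterexample:

[tex]\frac{|x^{1/3}|}{|x|} = |x|^{-2/3} \to \infty \quad \text{as } x \to 0,[/tex]

so no single constant C satisfies [itex]|x^{1/3}| \le C|x|[/itex] on any interval around 0. Differentiability at 0 is exactly what rules this behaviour out.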
 
