# O(x^2)? and Taylor series

1. Sep 22, 2007

### futurebird

1. The problem statement, all variables and given/known data

Use the real Taylor series formulae

$$e^{x} = 1 + x + O(x^{2})$$
$$\cos x = 1 + O(x^{2})$$
$$\sin x = x(1 + O(x^{2}))$$

where $$O(x^{2})$$ means we are omitting terms proportional to $$x^{2}$$ and higher powers (i.e., $$\lim_{x\rightarrow0} \frac{O(x^{2})}{x^{2}} = C$$ where C is a constant), to establish the following:

$$\lim_{z\rightarrow0}\left(e^{z} - (1+z)\right) = \lim_{r\rightarrow0}\left(e^{r\cos \theta}e^{ir\sin \theta} - (1 + r(\cos \theta + i\sin \theta))\right) = 0$$
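(A quick numerical sanity check of this limit, sketched in Python; the angle value is arbitrary and the names are just illustrative:)

```python
import cmath
import math

# Numerically verify that e^z - (1 + z) -> 0 as z -> 0,
# writing z = r*(cos(theta) + i*sin(theta)) in polar form.
theta = 0.7  # any fixed angle
for r in [1e-1, 1e-3, 1e-6]:
    z = r * complex(math.cos(theta), math.sin(theta))
    diff = cmath.exp(z) - (1 + z)
    print(r, abs(diff))  # shrinks roughly like r**2 / 2
```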

2. Relevant equations

I think most of them are in the problem.

3. The attempt at a solution

So, using the limit given in polar form above, I separated out the real and imaginary parts like this:

$$\lim_{r\rightarrow0}\left(e^{r\cos \theta}\cos(r\sin \theta) - 1 - r\cos \theta\right) \;+\; i\left(\lim_{r\rightarrow0} e^{r\cos \theta}\sin(r\sin \theta) - r\sin \theta\right)$$

I don't know how (or why) I should use this "$$O(x^{2})$$" thing to evaluate this limit. It seems to me that I ought to be able to find the limit now that I've moved i out of the way. Can you recommend any links to examples of using something like $$O(x^{2})$$ with a series to find a limit? What is grouping terms in a series in this way called?

Since I know I need to use $$O(x^{2})$$, I tried to work with it a little:

Since,

$$e^{x} = 1 + x + \frac{x^{2}}{2!} + \frac{x^{3}}{3!} + ...$$

and

$$e^{x} = 1 + x + O(x^{2})$$

then

$$O(x^{2}) = \frac{x^{2}}{2!} + \frac{x^{3}}{3!}+ ...$$

$$\frac{O(x^{2})}{x} = \frac{x}{2!} + \frac{x^{2}}{3!} + \frac{x^{3}}{4!} + ... = \displaystyle\sum_{j=0}^\infty \frac{x^{j+1}}{(j+2)!}$$

I don't know why that matters. $$O(x^{2})$$ can't possibly be the same thing for each of the Taylor series mentioned in the problem. What is $$O(x^{2})$$? Is it a function?

I also tried taking the limit mentioned in the problem:

$$\lim_{x\rightarrow0} \frac{O(x^{2})}{x^{2}} = C$$

I got a different constant for $$e^{x}$$, $$\cos x$$, and $$\sin x$$... but how do I relate all of this to evaluating the limit?
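One way to sanity-check those constants numerically (a rough Python sketch; the choice of x is arbitrary, and the sin line uses the factored form from the problem):

```python
import math

# Check lim O(x^2)/x^2 for each series as x -> 0:
# e^x   = 1 + x + O(x^2)   -> O(x^2)/x^2 -> 1/2
# cos x = 1 + O(x^2)       -> O(x^2)/x^2 -> -1/2
# sin x = x*(1 + O(x^2))   -> here O(x^2) = sin(x)/x - 1 -> -1/6
x = 1e-4
print((math.exp(x) - 1 - x) / x**2)   # close to 0.5
print((math.cos(x) - 1) / x**2)       # close to -0.5
print((math.sin(x)/x - 1) / x**2)     # close to -1/6
```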

I'd (much) rather have a hint than a full solution. I want to work this out on my own, but it just isn't making sense at the moment. What should I read about to get ideas on how to proceed?

Last edited: Sep 22, 2007
2. Sep 22, 2007

### dynamicsolo

They're not looking for anything all that clever, really. If you start from

$$\lim_{z\rightarrow0}e^{z}$$ ,

replace z in the exponent with its polar form. Since you now have a sum in the exponent, rewrite the exponential as an appropriate product. You can now replace the various functions with their truncated approximations. [The $$O(x^{2})$$ symbol here represents all the terms where x has powers of 2 or higher, but since x << 1, these are very tiny, so we are just going to *neglect* them.] What does your *very* approximate form of e^z become?

3. Sep 22, 2007

### futurebird

I'm still lost on how to do this. $$O(x^{2})$$ has a different value for each of the taylor series. Or are you saying I can just ignore that? I don't really see how I can do that.

Why can't I just take the limits now that I have them in terms of real variables? I don't know **why** I'm being asked to use $$O(x^{2})$$.

4. Sep 22, 2007

### futurebird

This is what happens when I try to do the replacement. It just seems to make matters worse.

$$\lim_{r\rightarrow0}\left(e^{r\cos \theta}\cos(r\sin \theta) - 1 - r\cos \theta\right) \;+\; i\left(\lim_{r\rightarrow0} e^{r\cos \theta}\sin(r\sin \theta) - r\sin \theta\right)$$

$$\lim_{r\rightarrow0}\left((1 + r\cos \theta + O(r^{2}\cos^{2}\theta))(1 + O(r^{2}\sin^{2}\theta)) - 1 - r\cos \theta\right) \;+\; i\left(\lim_{r\rightarrow0}(1 + r\cos \theta + O(r^{2}\cos^{2}\theta))\,r\sin \theta(1 + O(r^{2}\sin^{2}\theta)) - r\sin \theta\right)$$

5. Sep 22, 2007

### Gib Z

What dynamicsolo is saying is that O(x^2) is just an error term in your approximation. So if you keep the constant and x terms, the error is only Cx^2, where C is some constant. When x is small, x^2 is very small and becomes negligible.

This is all unnecessary anyway, because for extremely small x even the x term becomes negligible and 1 becomes the only term needed. Either way, the limit in non-polar form follows easily, so why bother changing to polar?

6. Sep 22, 2007

### futurebird

**I think** it had to be changed to polar form so that the real and imaginary parts of the limit could be written without i in them? Maybe?

I wish I could see an example of an "error term" in action in a similar problem. The whole idea makes me really, really queasy. It just seems wrong.

Thank you both for responding. I'll let you know if I ever work this out.

7. Sep 22, 2007

### dynamicsolo

You aren't being asked to use those terms; your instructor has chosen to treat the Taylor series as essentially linear approximations with an infinite set of terms added thereafter, all of which will be inconsequential as x -> 0. (There are perhaps better ways to have written the description of this problem -- some instructors, in trying to be helpful, succeed mostly in making matters seem more complicated than they are...)

If those error terms are bothering you, try using the actual Taylor series for exp(x), cos(x), and sin(x) with, say, x = 0.001 and carry out the arithmetic for the first few terms. Then imagine repeating that for, say x = 10^-6 . This will illustrate why you can neglect/ignore/drop all the terms beyond the linear ones. Also, take a look at the results you get from a calculator (which *does* use several terms of the Taylor series in its firmware) for these functions for small positive values of x (don't forget that your input *must* be in radians!).
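A little sketch along those lines (hypothetical Python, just to make the point concrete for exp(x)):

```python
import math

# Size of each Taylor term of e^x at a small x: the terms beyond
# the linear one are negligible compared to 1 + x.
x = 0.001
for n in range(6):
    term = x**n / math.factorial(n)
    print(n, term)
# The n=2 term is already ~5e-7, and the n=3 term ~1.7e-10,
# invisible next to 1 + x = 1.001.
```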

8. Sep 24, 2007

### futurebird

So, $$O(x^{2})$$ is just "the rest of the terms in the series" and nothing more than that?

Can I ask you if it's true that the limits of $$\frac{O(x^{2})}{x^{2}}$$ are $$\frac{1}{2}$$, $$-\frac{1}{2}$$ and $$-\frac{1}{6}$$ for $$e^{x}$$, cos x, and sin x respectively? Or am I still looking at $$O(x^{2})$$ in the wrong way?

Also, what is the significance of these limits?

9. Sep 24, 2007

### Gib Z

Yes. All you need to know is that O(x^2) means the rest of the terms, with the same or larger exponent. So as x goes to zero, as it does in the question, the error goes to 0, extremely fast as well!

The "Limits of..." are correct, but why are you diving by x^2? Its like saying, Ok the limit of x^2 as x goes to 0 is 0, but if we divide by x^2 it becomes 1, It doesn't matter! The error term goes to zero, thats what matters!

10. Sep 24, 2007

### futurebird

I took those limits because they were mentioned in the problem, where it said:

$$\lim_{x\rightarrow0} \frac{O(x^{2})}{x^{2}} = C$$, where C is a constant.

I don't know why they said this. If they had said:

$$\lim_{x\rightarrow0} O(x^{2}) = 0$$

Well, that makes sense. Is

$$\lim_{x\rightarrow0} \frac{O(x^{2})}{x^{2}} = C$$, where C is a constant.
an important step towards saying:

$$\lim_{x\rightarrow0} O(x^{2}) = 0$$?

11. Sep 24, 2007

### Gib Z

Well, yes, it is an important step, because the first limit says that as x goes to 0, the error is only a finite number times x^2. We can conclude from that statement that the error goes to 0 as x goes to 0, since x^2 is also going to 0. C times 0 is 0 =]

12. Sep 24, 2007

### dynamicsolo

Yes, the limit for exp(x) would be 1/2 and the limit for cos(x) would be -1/2, since the quadratic terms in the Taylor series for those functions are
(x^2)/2! and -(x^2)/2!. For sin(x), it depends on how you read the formula: written as sin x = x + O(x^2), the quadratic term in *its* Taylor series is zero (the next non-zero term is -(x^3)/3!), so the limit would be 0; but in the factored form given in the problem, sin x = x(1 + O(x^2)), the O(x^2) inside the parentheses comes from -(x^3)/3!, and the limit is -1/6, as you found.

As Gib Z says, this is really just a variant way of saying that the remaining (or residual) terms in the series have powers of x which are 2 or larger, so all of them will go to zero as x -> 0.