# I Taylor expansions, limits and domains

1. Sep 20, 2017

### haushofer

Dear all,

I have a question concerning calculating the following limit:

$$\lim_{x \rightarrow 0} f(x) = \lim_{x \rightarrow 0} \frac{\sin{(x)}}{x} = 1$$

Obviously, x=0 is not part of the domain of the function. One way to calculate the limit is using l'Hospital. Another way for these kinds of limits is using Taylor expansions (a comparable case is e.g. given in Stewart, section 11.10 example 12). The reasoning is then as follows:

(1) Write down the Taylor expansion of the numerator:

$$\sin(x) = x - \frac{x^3}{6} + \ldots$$

(2) Calculate

$$\frac{\sin(x)}{x} = 1 - \frac{x^2}{6} + \ldots \qquad (*)$$

(3) Observe that the limit $x\rightarrow 0$ of this expression equals 1.

Expanding the numerator of $f(x)$ and dividing by $x$ has added the point $x=0$ to the original domain of $f(x)$. So this way of calculating the limit feels a bit like cheating; we wouldn't be able to Taylor-expand $\frac{\sin(x)}{x}$ around $x=0$, because that point is not part of the domain. So the Taylor expansion (*) is really the Taylor expansion of our original $f(x)$ for $x \neq 0$ PLUS the definition $f(0)=1$. With this we have made the function continuous at $x=0$, and the limit equals the function value.

I guess this is basically the same as comparing a function $g(x)$ which is defined for all $x$ with the function $\frac{x-a}{x-a}\,g(x)$, which takes the same values but is not defined at $x=a$. Any comments?
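For what it's worth, a quick numerical sketch in Python (the helper name `taylor_quotient` is mine, just for illustration) shows how closely $\sin(x)/x$ tracks the truncated expansion $1 - x^2/6$ near the missing point:

```python
import math

# f(x) = sin(x)/x, undefined at x = 0 itself
def f(x):
    return math.sin(x) / x

# The truncated Taylor quotient from (*): 1 - x^2/6
def taylor_quotient(x):
    return 1 - x**2 / 6

# Approaching 0 from both sides, f(x) and the truncation agree
# ever more closely, and both tend to 1
for x in [0.1, 0.01, 0.001, -0.001]:
    print(x, f(x), taylor_quotient(x))
```

Of course this only illustrates the limit; it doesn't resolve the domain question.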

2. Sep 20, 2017

### Staff: Mentor

I think physicists are not always wrong when they apply mathematics, so why not have a look at their methods. I don't use the full Taylor expansion, but the linear approximation of a smooth function instead, i.e. $f(x_0 + v) = f(x_0) + D_{x_0}f \cdot v + r(v)$ with a remainder that goes to zero like $v^2$. Then at $x_0=0$ we have

$$\sin(v) = \sin(x_0+v) = \sin(x_0) + D_{x_0}\sin \cdot v + r(v) = \sin(0) + \cos(0)\cdot v + r(v) = v + r(v) = v\cdot (1+ \bar{r}(v))$$
where $\bar{r}(v) = r(v)/v$ still satisfies $\lim_{v \to 0} \bar{r}(v)= 0$ (it converges with order $1$ in $v$), and the result drops out.
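Numerically, the factored remainder $\bar{r}(v) = \sin(v)/v - 1$ indeed goes to zero (a small Python sketch; the name `r_bar` is mine):

```python
import math

# sin(v) = v * (1 + r_bar(v)), so r_bar(v) = sin(v)/v - 1
def r_bar(v):
    return math.sin(v) / v - 1

# r_bar(v) shrinks like v^2 for the sine; the last column tends to -1/6
for v in [0.1, 0.01, 0.001]:
    print(v, r_bar(v), r_bar(v) / v**2)
```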

Last edited: Sep 20, 2017
3. Sep 20, 2017

### haushofer

But that's the same reasoning, with the Taylor expansion of the numerator replaced by the linear approx. of the numerator.

4. Sep 20, 2017

### Staff: Mentor

Except that I don't need the entire series and avoid singularities at $0$, because I only consider $\sin$ and the fact that $\sin' = \cos$. So the division doesn't take place in the function, but in the result of the approximation of the sine function instead. In my opinion, these are major differences.

Well, we have to use something. The product expansion also immediately gives the result
$$\sin(x) = x \cdot \prod_{k=1}^\infty \left( 1 - \frac{x^2}{k^2 \pi^2} \right)$$
But if you do not allow analytical methods, then I'll have to ask you: What is $\sin(x)\,$? And if the answer is a geometric one, I'll ask, which linear distance is $x$ then, to compare its length with the sine value?
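Truncating that product at finitely many factors already reproduces $\sin x$ quite well near the origin (a Python sketch; the cutoff `n_factors` is an arbitrary choice of mine):

```python
import math

def sin_product(x, n_factors=1000):
    # Euler's product: sin(x) = x * prod_{k>=1} (1 - x^2 / (k^2 pi^2))
    result = x
    for k in range(1, n_factors + 1):
        result *= 1 - x**2 / (k**2 * math.pi**2)
    return result

# The leading factor x can be divided out exactly, so sin(x)/x -> 1
# is immediate from the product form
print(sin_product(0.5), math.sin(0.5))
```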

5. Sep 20, 2017

### mathman

Why is it obvious?

6. Sep 20, 2017

### I like Serena

Alternatively, we have:
$$\sin'(0)=\lim_{x\to 0} \frac{\sin(x) - \sin(0)}{x-0}=\lim_{x\to 0} \frac{\sin(x)}{x}$$
So:
$$\lim_{x\to 0} \frac{\sin(x)}{x} = \sin'(0) = \cos(0) = 1$$

Or if we want to stick to a Taylor expansion, with Lagrange's remainder theorem we have:
$$\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^3}{3!}\sin'''(\xi)$$
where $\xi$ is a value between $x$ and 0.

It follows that:
$$\sin(x)=x + \frac{x^3}{3!}\cdot -\cos(\xi)$$
And since the cosine is bounded between $-1$ and $1$, we get:
$$x-\left|\frac{x^3}{3!}\right| \le \sin(x) \le x+ \left|\frac{x^3}{3!}\right|$$
Now we can apply the squeeze theorem - and there is no cheating involved!
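The two bounds can be checked numerically (a Python sketch of the squeeze):

```python
import math

# Bounds from the Lagrange remainder: |sin(x) - x| <= |x|^3 / 3!
for x in [0.5, 0.1, -0.1, 0.01]:
    lower = x - abs(x**3) / math.factorial(3)
    upper = x + abs(x**3) / math.factorial(3)
    assert lower <= math.sin(x) <= upper
    # Dividing through by x squeezes sin(x)/x between 1 - x^2/6 and 1 + x^2/6
    print(x, math.sin(x) / x)
```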

Last edited: Sep 20, 2017
7. Sep 20, 2017

### Staff: Mentor

Don't you mean $\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^3}{3!}\sin'''(\xi)$?
Easier to read as $\sin(x)=x - \frac{x^3}{3!}\cos(\xi)$

8. Sep 20, 2017

### I like Serena

Yes. Thank you. I've edited the post accordingly.
I guess that would be a matter of preference.
Mine is to do the substitution only at the substitution step, since as I see it that is where most mistakes occur - typically with plus and minus signs, but also with calculations and reorderings. This was actually drilled into me in high school, after which I made fewer mistakes.

9. Sep 20, 2017

### Staff: Mentor

Regarding the second point, I think it's easier to read $x + \frac{x^3}{3!}(-\cos(\xi))$ or $x - \frac{x^3}{3!}\cos(\xi)$ than this: $x + \frac{x^3}{3!}\cdot -\cos(\xi)$. That "dot" for multiplication is easy to overlook, making it seem like you're subtracting $\cos(\xi)$ rather than multiplying by its negative.

10. Sep 20, 2017

### Staff: Mentor

There is also another argument for avoiding two operation symbols in a row: they are binary operations, and $\cdot\,-$ doesn't make sense, as the dot lacks its second argument and the minus its first, which is usually taken to be $0$ if there is none. I've been drilled to use parentheses in such cases, and I think that helps to avoid a lot of mistakes.

11. Sep 20, 2017

### I like Serena

Fair point - especially when writing.
I can certainly agree with a parenthesis solution.
Still, the ambiguity here is that the minus is a unary minus instead of the binary one.
Doesn't putting two operations in a row clarify this beyond all doubt?
The parenthesis solution might actually be confused with an expression that we optionally subtract. Or with a function application.
The real problem is that the centered dot (in writing) is just that - a small dot that is easily overlooked (e.g. fixable by writing it as a tiny ${}^{{}_\times}$ symbol and/or using sufficient spacing).
Then again, we can also use a combination of a centered dot and parentheses to remove all doubt.

12. Sep 20, 2017

### Staff: Mentor

I disagree, in part. The - that appeared is not the binary subtraction operator -- it's a unary minus, to phrase it in programming terms. Such an operator takes only one argument.

13. Sep 21, 2017

### haushofer

Because $0/0$ is not defined. But this kind of wording is dangerous, and the devil can be in the details, so if you disagree, please tell me :P

Last edited: Sep 21, 2017
14. Sep 21, 2017

### haushofer

That's a good question :P If you define sin(x) by its power series, then there is no "cheating" involved I guess.

Basically, my question is about the difference between a function $f(x)= x$ and a function $g(x)=\frac{x^2}{x}$; they take the same values wherever both are defined, but they differ in their domains ($x=0$ is not included for the second one).

It's nice to see all these different ways of calculating the limit by the way.
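The difference between those two functions shows up the moment you try to evaluate at the missing point (a Python sketch, using the same $f$ and $g$ as above):

```python
def f(x):
    return x  # defined for all x

def g(x):
    return x**2 / x  # same values as f where defined, but undefined at x = 0

print(f(0))  # fine: the point is in the domain
try:
    g(0)
except ZeroDivisionError:
    # g has a removable singularity at 0: the limit exists (and is 0),
    # but the point itself is not in the domain
    print("g is undefined at 0")
```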

15. Sep 21, 2017

### FactChecker

It's good to be cautious. But this is called a meromorphic function, which has been thoroughly studied. (see https://en.wikipedia.org/wiki/Meromorphic_function for a start.) Because both the numerator and denominator of a meromorphic function are so well behaved, their ratio is well behaved.

16. Sep 21, 2017

### Staff: Mentor

I guess you know the two graphs and that one is a line and the other one is the same line interrupted by a missing point at $x=0$. I suppose this was the starting point of the classification of singularities, here a removable one.
$$h(x) = x \cdot \frac{\prod_{\iota \in I}(x-\iota)}{\prod_{\iota \in I}(x-\iota)}\,, \quad I \subseteq \mathbb{R} \text{ a set of irrational numbers,}$$
would be fun for testing the various concepts.

I think in the case of $f(x)=\frac{\sin x}{x}$ even a geometric interpretation is possible: if we take $x$ as an arc length and $\sin x$ as the corresponding height, then the arc is always of length $\sin x + \text{ something }$, no matter how small $x$ is. So in the limit, $f(x) \to 1 + \text{ something }$, and the something clearly converges to zero.

Last edited: Sep 21, 2017