Taylor expansions, limits and domains

In summary, the conversation discusses different methods for calculating the limit of a function as x approaches 0, including using l'Hospital's rule and Taylor expansions. The use of Taylor expansions raises the question of whether x=0 is part of the domain of the function, and alternative methods such as linear approximations are suggested. The conversation also delves into the importance of careful notation and avoiding ambiguity in mathematical operations.
  • #1
haushofer
Dear all,

I have a question concerning calculating the following limit:

[tex]
\lim_{x \rightarrow 0} f(x) = \lim_{x \rightarrow 0} \frac{\sin{(x)}}{x} = 1
[/tex]

Obviously, x=0 is not part of the domain of the function. One way to calculate the limit is to use l'Hospital's rule. Another way for these kinds of limits is to use Taylor expansions (a comparable case is given e.g. in Stewart, section 11.10, example 12). The reasoning is then as follows:

(1) Write down the Taylor expansion of the numerator:

[tex]
\sin(x) = x - \frac{x^3}{6} + \ldots
[/tex]

(2) Calculate

[tex]
\frac{\sin(x)}{x} = 1 - \frac{x^2}{6} + \ldots \qquad (*)
[/tex]

(3) Observe that the limit ##x\rightarrow 0## of this expression equals 1.

Expanding the numerator of ##f(x)## and dividing it by ##x## has added the point ##x=0## to the original domain of ##f(x)##. So this way of calculating the limit feels a bit like cheating; we wouldn't be able to Taylor-expand ##\frac{\sin(x)}{x}## around ##x=0## because it is not part of the domain. So the Taylor expansion (*) is the Taylor expansion of our original ##f(x)## for ##x \neq 0## PLUS the value ##f(0)=1##. With this we have made the function continuous at ##x=0##, and the limit equals the function value.

I guess this is basically the same as comparing a function ##g(x)##, which is defined for all ##x##, with ##\frac{x-a}{x-a}\,g(x) = 1 \times g(x)##, which is not defined at ##x=a##. Any comments?
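A quick symbolic check of steps (1)-(3) (just a sketch; it assumes SymPy is available, and the expansion order 6 is an arbitrary choice):

[code=python]
# Symbolic check of the Taylor-expansion argument (sketch; assumes SymPy).
import sympy as sp

x = sp.symbols('x')

# (1) Taylor expansion of the numerator around 0 (order 6 chosen arbitrarily).
print(sp.series(sp.sin(x), x, 0, 6))       # x - x**3/6 + x**5/120 + O(x**6)

# (2) Divide by x: the constant term of the expansion is 1.
print(sp.series(sp.sin(x)/x, x, 0, 6))     # 1 - x**2/6 + x**4/120 + O(x**6)

# (3) The limit x -> 0 indeed equals 1.
print(sp.limit(sp.sin(x)/x, x, 0))         # 1
[/code]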
 
  • #2
I think physicists are not always wrong when they apply mathematics. :wink: So why not have a look at their methods. I don't use the Taylor expansion, but the linear approximation of a smooth function instead, i.e. ##f(x_0 + v) = f(x_0) + D_{x_0} \cdot v + r(v)## with a remainder that goes to zero like ##v^2##. Then we have at ##x_0=0##

$$
\sin(v) = \sin(x_0+v) = \sin(x_0) + D_{x_0}\cdot v + r(v) = \sin(0) + \cos(0)\cdot v + r(v) = v + r(v) = v\cdot (1+ \bar{r}(v))
$$
where ##\bar{r}(v) = r(v)/v## still satisfies ##\lim_{v \to 0} \bar{r}(v)= 0## (it converges with order ##1## in ##v##), and the result drops out.
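A small numerical illustration of this (a sketch only; the sample points are arbitrary): the remainder ##r(v) = \sin(v) - v## vanishes faster than ##v##, so ##\bar r(v) = r(v)/v \to 0## and ##\sin(v)/v \to 1##.

[code=python]
# Numerical sketch: r(v) = sin(v) - v vanishes faster than v, so r(v)/v -> 0.
import math

for v in [0.1, 0.01, 0.001]:
    r = math.sin(v) - v                  # remainder of the linear approximation
    print(v, r / v, math.sin(v) / v)     # r/v -> 0, sin(v)/v -> 1
[/code]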
 
Last edited:
  • #3
But that's the same reasoning, with the Taylor expansion of the numerator replaced by the linear approx. of the numerator.
 
  • #4
haushofer said:
But that's the same reasoning, with the Taylor expansion of the numerator replaced by the linear approx. of the numerator.
Except that I don't need the entire series and avoid singularities at ##0##, because I only consider ##\sin## and the fact that ##\sin' = \cos##. So the division doesn't take place in the function, but in the result of the approximation of the sine function instead. In my opinion, these are major differences.

Well, we have to use something. The product expansion also immediately gives the result
$$
\sin(x) = x \cdot \prod_{k=1}^\infty \left( 1 - \frac{x^2}{k^2 \pi^2} \right)
$$
But if you do not allow analytical methods, then I'll have to ask you: what is ##\sin(x)\,##? And if the answer is a geometric one, I'll ask: which linear distance does ##x## represent, so that its length can be compared with the sine value?
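For what it's worth, here is a rough numerical sketch of the product formula (assuming it is acceptable to truncate the infinite product at some finite ##K##; ##K=1000## below is an arbitrary choice):

[code=python]
# Sketch: truncated product x * prod_{k=1..K} (1 - x**2/(k**2*pi**2)) vs sin(x).
import math

def sin_product(x, K=1000):
    p = 1.0
    for k in range(1, K + 1):
        p *= 1.0 - x * x / (k * k * math.pi * math.pi)
    return x * p

for x in [1.0, 0.1, 0.001]:
    # last column: the factor multiplying x, which tends to 1 as x -> 0
    print(x, sin_product(x), math.sin(x), sin_product(x) / x)
[/code]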
 
  • #5
haushofer said:
Obviously, x=0 is not part of the domain of the function.
Why is it obvious?
 
  • #6
Alternatively, we have:
$$\sin'(0)=\lim_{x\to 0} \frac{\sin(x) - \sin(0)}{x-0}=\lim_{x\to 0} \frac{\sin(x)}{x}$$
So:
$$\lim_{x\to 0} \frac{\sin(x)}{x} = \sin'(0) = \cos(0) = 1$$
Or if we want to stick to a Taylor expansion, with Lagrange's remainder theorem we have:
$$\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^3}{3!}\sin'''(\xi)$$
where ##\xi## is a value between ##x## and 0.

It follows that:
$$\sin(x)=x + \frac{x^3}{3!}\cdot -\cos(\xi)$$
And since cosine is limited to [-1,1], we get:
$$x-\left|\frac{x^3}{3!}\right| \le \sin(x) \le x+ \left|\frac{x^3}{3!}\right|$$
Now we can apply the squeeze theorem - and there is no cheating involved! :oldwink:
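For completeness, a quick numerical check of these bounds (just a sketch; the sample points are arbitrary):

[code=python]
# Check the squeeze bounds x - |x**3/6| <= sin(x) <= x + |x**3/6| numerically.
import math

for x in [0.5, 0.1, 0.01, -0.1]:
    lower = x - abs(x**3) / 6
    upper = x + abs(x**3) / 6
    print(x, lower <= math.sin(x) <= upper)   # expected: True for each sample
[/code]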
 
Last edited:
  • #7
I like Serena said:
Alternatively, we have:
$$\sin'(0)=\lim_{x\to 0} \frac{\sin(x) - \sin(0)}{x-0}=\lim_{x\to 0} \frac{\sin(x)}{x}$$
So:
$$\lim_{x\to 0} \frac{\sin(x)}{x} = \sin'(0) = \cos(0) = 1$$
Or if we want to stick to a Taylor expansion, with Lagrange's remainder theorem we have:
$$\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^2}{2!}\sin''(\xi)$$
where ##\xi## is a value between ##x## and 0.
Don't you mean ##\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^3}{3!}\sin'''(\xi)##?
I like Serena said:
It follows that:
$$\sin(x)=x + \frac{x^3}{3!}\cdot -\cos(\xi)$$
Easier to read as ##\sin(x)=x - \frac{x^3}{3!} \cos(\xi)##
I like Serena said:
And since cosine is limited to [-1,1], we get:
$$x-\left|\frac{x^3}{3!}\right| \le \sin(x) \le x+ \left|\frac{x^3}{3!}\right|$$
Now we can apply the squeeze theorem - and there is no cheating involved! :oldwink:
 
  • #8
Mark44 said:
Don't you mean ##\sin(x) = \sin(0) + \frac{x}{1!}\sin'(0) + \frac{x^2}{2!}\sin''(0) + \frac{x^3}{3!}\sin'''(\xi)##?
Yes. Thank you. I've edited the post accordingly.
Mark44 said:
Easier to read as ##\sin(x)=x - \frac{x^3}{3!} \cos(\xi)##
I guess that would be a matter of preference.
Mine is to only perform the substitution at the substitution step, since as I see it this is where most mistakes occur: typically with plus and minus signs, but also with calculations and reorderings. This was actually drilled into me in high school, after which I made fewer mistakes.
 
  • #9
Regarding the second point, I think it's easier to read ##x + \frac{x^3}{3!}(-\cos(\xi))## or ##x - \frac{x^3}{3!}\cos(\xi)## than this: ##x + \frac{x^3}{3!}\cdot -\cos(\xi)##. That "dot" for multiplication is easy to overlook, making it seem like you're subtracting ##\cos(\xi)## rather than multiplying by its negative.
 
  • #10
Mark44 said:
Regarding the second point, I think it's easier to read ##x + \frac{x^3}{3!}(-\cos(\xi))## or ##x - \frac{x^3}{3!}\cos(\xi)## than this: ##x + \frac{x^3}{3!}\cdot -\cos(\xi)##. That "dot" for multiplication is easy to overlook, making it seem like you're subtracting ##\cos(\xi)## rather than multiplying by its negative.
There is also another argument to avoid two operations in a row: they are binary operations, and ##\cdot -## doesn't make sense, as the dot lacks its second argument and the minus its first, which is usually taken to be ##0## if there is none. I've been drilled to use parentheses in such cases, and I think that helps to avoid a lot of mistakes.
 
  • #11
Mark44 said:
Regarding the second point, I think it's easier to read ##x + \frac{x^3}{3!}(-\cos(\xi))## or ##x - \frac{x^3}{3!}\cos(\xi)## than this: ##x + \frac{x^3}{3!}\cdot -\cos(\xi)##. That "dot" for multiplication is easy to overlook, making it seem like you're subtracting ##\cos(\xi)## rather than multiplying by its negative.
Fair point - especially when writing.
fresh_42 said:
There is also another argument to avoid two operations in a row: they are binary operations, and ##\cdot -## doesn't make sense, as the dot lacks its second argument and the minus its first, which is usually taken to be ##0## if there is none. I've been drilled to use parentheses in such cases, and I think that helps to avoid a lot of mistakes.
I can certainly agree with a parenthesis solution.
Still, the ambiguity here is that the minus is a unary minus instead of the binary one.
Doesn't putting two operations in a row clarify this beyond all doubt?
The parenthesis solution might actually be confused with an expression that we optionally subtract. Or with a function application.
The real problem is that the centered dot (in writing) is just that - a small dot that is easily overlooked (e.g. fixable by writing it as a tiny ##{}^{{}_\times}## symbol and/or using sufficient spacing).
Then again, we can also use a combination of a centered dot and parentheses to remove all doubt.
 
  • #12
fresh_42 said:
There is also another argument to avoid two operations in a row: they are binary operations
I disagree, in part. The - that appeared is not the binary subtraction operator -- it's a unary minus, to phrase it in programming terms. Such an operator takes only one argument.
fresh_42 said:
and ##\cdot -## doesn't make sense, as the dot lacks its second argument and the minus its first, which is usually taken to be ##0## if there is none. I've been drilled to use parentheses in such cases, and I think that helps to avoid a lot of mistakes.
 
  • #13
mathman said:
Why is it obvious?
Because 0/0 is not defined. But this kind of wording is dangerous and the devil can be in the details, so if you disagree, please tell me :P
 
Last edited:
  • #14
fresh_42 said:
Except that I don't need the entire series and avoid singularities at ##0##, because I only consider ##\sin## and the fact that ##\sin' = \cos##. So the division doesn't take place in the function, but in the result of the approximation of the sine function instead. In my opinion, these are major differences.

Well, we have to use something. The product expansion also immediately gives the result
$$
\sin(x) = x \cdot \prod_{k=1}^\infty \left( 1 - \frac{x^2}{k^2 \pi^2} \right)
$$
But if you do not allow analytical methods, then I'll have to ask you: what is ##\sin(x)\,##? And if the answer is a geometric one, I'll ask: which linear distance does ##x## represent, so that its length can be compared with the sine value?
That's a good question :P If you define ##\sin(x)## by its power series, then there is no "cheating" involved, I guess.

Basically, my question is about the difference between a function ##f(x)= x## and a function ##g(x)=\frac{x^2}{x}##; they agree wherever both are defined, but they differ in their domains (##x=0## is not included for the second one).

It's nice to see all these different ways of calculating the limit by the way.
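The domain difference is also easy to see if one writes ##f## and ##g## as small programs (a minimal sketch; the sample point 0.001 is an arbitrary choice):

[code=python]
# Sketch: f(x) = x and g(x) = x**2/x agree away from 0, but g(0) is undefined.
def f(x):
    return x

def g(x):
    return x**2 / x          # 0/0 at x = 0

print(f(0.001), g(0.001))    # essentially identical away from 0
print(f(0))                  # 0
try:
    g(0)
except ZeroDivisionError as err:
    print("g(0) is undefined:", err)
[/code]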
 
  • #15
haushofer said:
Expanding the numerator of ##f(x)## and dividing it by ##x## has added the point ##x=0## to the original domain of ##f(x)##. So this way of calculating the limit feels a bit like cheating; we wouldn't be able to Taylor-expand ##\frac{\sin(x)}{x}## around ##x=0## because it is not part of the domain. So the Taylor expansion (*) is the Taylor expansion of our original ##f(x)## for ##x \neq 0## PLUS the value ##f(0)=1##. With this we have made the function continuous at ##x=0##, and the limit equals the function value.
It's good to be cautious. But this is called a meromorphic function, and such functions have been thoroughly studied (see https://en.wikipedia.org/wiki/Meromorphic_function for a start). Because both the numerator and the denominator of such a quotient are so well behaved (analytic), their ratio is well behaved.
 
  • #16
haushofer said:
Basically, my question is about the difference between a function ##f(x)= x## and a function ##g(x)=\frac{x^2}{x}##; they agree wherever both are defined, but they differ in their domains (##x=0## is not included for the second one).
I guess you know the two graphs, and that one is a line and the other one is the same line interrupted by a missing point at ##x=0##. I suppose this was the starting point of the classification of singularities, here a removable one.
$$
h(x) = x \cdot \frac{\prod_{\iota \in I}(x-\iota)}{\prod_{\iota \in I}(x-\iota)}\,, \quad I \subseteq \mathbb{R} \text{ a set of irrational numbers}
$$
would be a fun example on which to test the various concepts.

I think in the case of ##f(x)=\frac{\sin x}{x}## even a geometric interpretation is possible: If we take ##x## as an arc length and ##\sin x## as the corresponding height, then the arc is always of length ##x + \text{ something }## no matter how small ##x## is. So in the limit, ##f(x) \to 1 + \text{ something }## and the something is clearly converging to zero.
 
Last edited:

1. What is a Taylor expansion?

A Taylor expansion represents a function as an infinite series whose terms are built from the function's derivatives at a specific point. It is used to approximate the value of the function near that point, and it can be used to evaluate functions that are otherwise hard to compute directly.

2. How do you find the Taylor expansion for a function?

To find the Taylor expansion of a function about a point ##a##, compute the function's derivatives at ##a##; the coefficient of ##(x-a)^n## is ##\frac{f^{(n)}(a)}{n!}##. Plugging these coefficients into the Taylor series formula ##\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n## gives the expansion.
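As a concrete sketch of that recipe (it assumes SymPy; the function ##\sin##, the point ##a=0## and the truncation order ##N=5## are just example choices):

[code=python]
# Build the Taylor polynomial  sum_{n=0..N} f^(n)(a)/n! * (x - a)**n  (sketch).
import sympy as sp

x = sp.symbols('x')
f, a, N = sp.sin(x), 0, 5    # example choices

poly = sum(sp.diff(f, x, n).subs(x, a) / sp.factorial(n) * (x - a)**n
           for n in range(N + 1))
print(sp.expand(poly))       # x - x**3/6 + x**5/120
[/code]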

3. What is the purpose of finding the limit of a function?

The limit of a function describes the behavior of the function near a specific point, or as the variable approaches a certain value. It can help determine whether a function is continuous, and it can also assign a sensible value at a point where the function itself is undefined, as with ##\frac{\sin x}{x}## at ##x=0##.

4. How do you find the limit of a function?

To estimate the limit of a function numerically, evaluate the function at values that get closer and closer to the point of interest, from both sides. If the function values approach a single number, that number is the candidate for the limit; proving it rigorously requires the formal definition or tools such as limit laws, l'Hospital's rule, the squeeze theorem, or series expansions, as in the thread above.
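A minimal numerical sketch of that idea for the limit discussed in this thread (it only suggests the value of the limit, it does not prove it):

[code=python]
# Evaluate sin(x)/x at points approaching 0 from both sides (sketch only).
import math

for x in [0.1, 0.01, 0.001, -0.001, -0.01, -0.1]:
    print(x, math.sin(x) / x)    # the values approach 1 (from below)
[/code]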

5. What is the domain of a function?

The domain of a function is the set of all input values for which the function is defined and can be evaluated. The domain can be restricted by the function's properties (for example, division by zero must be excluded) or by constraints given in the problem.
