# Big O notation

## Homework Statement

Can someone explain big O notation to me in the context of Taylor series?

For instance, how do you know that

sin t = t - t^3/3! + O(t^5) as t -> 0?

Does that hold when t -> infinity as well?

Is there a generalization of this rule? Is it derived from the formula for the remainder of a Taylor series?

## The Attempt at a Solution

mjsd
Homework Helper
Loosely, $$O(t^n)$$ means terms of the order of $$t^n$$ or above; in other words, it is used to represent higher-order terms that are usually not so important compared to the other terms in the series when you are taking some limit. More precisely, it tells you how "fast" such terms approach the limit (whether that limit is zero or not). There are also symbols like $$\Theta(t^n)$$, which in the strict computing/information-theory sense differ from $$O(t^n)$$: $$\Theta$$ asserts a two-sided (upper and lower) bound, whereas $$O$$ is only an upper bound. For maths/physics usage we usually do not care about such distinctions or the exact asymptotic behaviour; to avoid confusion we often just state in words what the behaviour is like, rather than relying on the usual interpretation of the symbols (which may not be universal anyway).
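To make the "how fast" statement concrete, here is a small numerical sketch (an editorial addition, not part of the post above): for the truncation sin t ~ t - t^3/6, the error should shrink like t^5, with the ratio error/t^5 approaching the constant 1/5! = 1/120 as t -> 0.

```python
import math

# The error of the truncation sin t ~ t - t^3/6 should shrink like t^5:
# if error(t) = C*t^5 + (higher order), then error(t)/t^5 approaches
# C = 1/5! = 1/120 as t -> 0.
for t in [1.0, 0.1, 0.01]:
    err = abs(math.sin(t) - (t - t**3 / 6))
    print(f"t = {t:5}  error = {err:.3e}  error/t^5 = {err / t**5:.6f}")
```

The shrinking ratio is exactly what the O(t^5) notation is recording.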

I understand the definition of big O (g(x) = O(f(x)) if there is a constant C such that |g(x)| <= C|f(x)| for all x sufficiently small (or large)).

My question was: when you have a Taylor series or a polynomial, how do you know when you can replace a set of terms with a big O?

it is used to represent higher-order terms that are usually not so important compared to the other terms in the series when you are taking some limit.

So, you are saying it would not work if t approached infinity I think?

Hurkyl
Staff Emeritus
Gold Member
I understand the definition of big O (g(x) = O(f(x)) if there is a constant C such that |g(x)| <= C|f(x)| for all x sufficiently small (or large)).

My question was: when you have a Taylor series or a polynomial, how do you know when you can replace a set of terms with a big O?
Taylor's theorem gives you a remainder term.

The Lagrange form of the remainder holds when t -> 0 and t -> infinity, right?

So sin t = t - t^3/3! + O(t^5) when t -> 0 and t -> infinity, right?

Also is it true that if f = O(t^5), then (1/t)*f = (1/t)* O(t^5) = O(t^4)?

How would you show that?
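One way to show the scaling property, sketched directly from the definition of big O as t -> 0 (this derivation is an editorial addition, not part of the thread):

```latex
% Suppose f(t) = O(t^5) as t -> 0, i.e. |f(t)| <= C|t|^5 for 0 < |t| < \delta.
% Dividing both sides by |t| gives
\[
  \left|\frac{f(t)}{t}\right| \;=\; \frac{|f(t)|}{|t|}
  \;\le\; \frac{C\,|t|^5}{|t|} \;=\; C\,|t|^4
  \qquad (0 < |t| < \delta),
\]
% which is precisely the statement f(t)/t = O(t^4) as t -> 0,
% with the same constant C and the same interval.
```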

This notation is possibly the most confusing thing I have ever encountered. I have seen it defined in at least 5 different ways: some books define it for t sufficiently large, some for t sufficiently small, and some for t in a range. If someone else could agree that this is confusing, I would be very happy. If someone could explain in which contexts the respective definitions are used, I would be even happier.
For example, how do you define big O notation when you use it to truncate a Taylor series? And if someone could answer the questions in my last post, I would appreciate that.
This stuff is driving me crazy!
Could someone show me a complete proof using the definition of big O notation and Taylor's theorem of why we can use the big O notation in Taylor expansions? That would be amazingly helpful.

Hurkyl
Staff Emeritus
Gold Member
Could someone show me a complete proof using the definition of big O notation and Taylor's theorem of why we can use the big O notation in Taylor expansions? That would be amazingly helpful.
Suppose f is differentiable n + 1 times on a closed, bounded interval I containing a in its interior. Then,

$$\begin{equation*} \begin{split} \left| f(x) - \sum_{k = 0}^n \frac{d^k f}{dx^k}(a) \frac{(x - a)^k}{k!} \right| &= \left| \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - a)^{n + 1} \right| \\ &\leq \max_{t \in I} \left| \frac{f^{(n+1)}(t)}{(n+1)!} \right| |x - a|^{n + 1} \in O((x - a)^{n + 1}). \end{split}\end{equation*}$$
($\xi$ is an element between x and a that makes that first equality true)
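As an illustration of how this bound specializes (an editorial addition, not part of the quoted post): take f = sin, a = 0, n = 4. The degree-4 Taylor polynomial of sin is x - x^3/3! (the even-order terms vanish), and the fifth derivative of sin is cos, so

```latex
\[
  \left|\sin x - \left(x - \frac{x^3}{3!}\right)\right|
  \;\le\; \max_{t\in I}\left|\frac{\cos t}{5!}\right|\,|x|^5
  \;\le\; \frac{|x|^5}{120},
\]
% which exhibits the O(x^5) remainder in the original question explicitly,
% with the concrete constant C = 1/120 (since |cos t| <= 1 everywhere).
```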

OK...you used Taylor's theorem. Now can you use the definition of big O and Taylor's theorem to verify that last inclusion? With that parenthetical, one needs to be very careful about how you define big O to make that work. That is what confuses me so much.

NateTG
Homework Helper
Big-O notation is, in many senses inconsistent notation.
http://en.wikipedia.org/wiki/Big_O_notation

Basically, $g(x)=O(x)$ means
$$|g(x)| \leq K_g |x|$$
on some interval (which must be inferred from context).

Since Taylor series aren't necessarily convergent, it's not generally possible to have a bounded error term. However, in regions where the series converges, it should make sense that there would be an upper and lower bound on the trailing fraction of the sum.
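A quick numerical illustration of the convergence caveat (the example function and series here are an editorial choice, not from the thread): the geometric series for 1/(1-x) converges only for |x| < 1, and outside that region the partial sums do not track the function at all, so no error bound is possible there.

```python
# A truncated power series only controls the error where the series
# converges.  The geometric series 1/(1-x) = 1 + x + x^2 + ... is valid
# only for |x| < 1.
def partial_sum(x, n):
    """Sum of the first n terms of the geometric series for 1/(1-x)."""
    return sum(x**k for k in range(n))

for x in [0.5, 2.0]:
    exact = 1 / (1 - x)
    sums = [partial_sum(x, n) for n in (5, 10, 20)]
    print(f"x = {x}: exact = {exact}, partial sums = {sums}")
# For x = 0.5 the partial sums approach the true value 2; for x = 2 they
# blow up instead of approaching -1.
```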


Right. I understand why Taylor's theorem works, I am just still a little confused about how big O notation works with it. So, you are saying that in the Taylor series case the O notation only works when we are in the interval -a and a? So, every book that replaces a Taylor expansion with a big O should have technically said that their equation only works between -a and a, right?

Yes, I have seen little ohs, but in Taylor series truncations it is always big Oh that is used. Can someone please answer my questions in post #10?

Hurkyl
Staff Emeritus
Gold Member
Right. I understand why Taylor's theorem works, I am just still a little confused about how big O notation works with it. So, you are saying that in the Taylor series case the O notation only works when we are in the interval -a and a? So, every book that replaces a Taylor expansion with a big O should have technically said that their equation only works between -a and a, right?
My example was with an interval containing a in its interior -- not an interval about zero with radius a.

And no, one doesn't have to qualify the big O notation -- recall that big O does not require $|f(x)| < M |g(x)|$ to be true everywhere. It merely has to hold on some interval about a.
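A small numerical sketch of this locality (an editorial illustration): the ratio |x^15 / x| = |x|^14 decides which function dominates, and it behaves oppositely near 0 and near infinity.

```python
# Big O is a *local* statement about the target point.  The ratio
# |x^15 / x| = |x|^14 tells you which way the domination goes.
def ratio(x):
    return abs(x**15 / x)  # equals |x|^14 for x != 0

# Near 0 the ratio shrinks, so x^15 = O(x) there...
print([ratio(x) for x in (0.1, 0.01)])
# ...but near infinity it grows without bound, so x^15 is NOT O(x) there
# (and symmetrically, x = O(x^15) holds near infinity but not near 0).
print([ratio(x) for x in (10.0, 100.0)])
```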

HallsofIvy
Homework Helper
The whole point is that
$$\sin(x)= x- \frac{1}{3!}x^3+ \frac{1}{5!}x^5- \frac{1}{7!}x^7+ \cdots$$
so that
$$\sin(x)= x- \frac{1}{3!}x^3+ x^5\left(\frac{1}{5!}- \frac{1}{7!}x^2+ \cdots\right)$$
That just says that sin(x) can be written as x - (1/6)x^3 "plus other terms that involve at least the fifth power of x".

Your confusion with intervals may be because the only time that's important is when the higher powers can be ignored. That happens for small values of x, not large ones.

To elaborate on what HallsofIvy said, there is a relationship known as the 'small-angle approximation.' It is given as

$$\sin{x} \simeq x$$

for small x. For slightly larger values of x, you can use

$$\sin{x} \simeq x - \frac{x^{3}}{3!}$$
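A quick numerical comparison of the two approximations (a sketch, with the sample angles an editorial choice, assuming ordinary double-precision math):

```python
import math

# Errors of the two small-angle approximations at a few angles in radians.
for x in [0.5, 0.1, 0.01]:
    exact = math.sin(x)
    err_linear = abs(exact - x)               # sin x ~ x
    err_cubic = abs(exact - (x - x**3 / 6))   # sin x ~ x - x^3/3!
    print(f"x = {x:4}  linear error = {err_linear:.2e}  "
          f"cubic error = {err_cubic:.2e}")
```

At each angle the cubic correction gives a noticeably smaller error, and both errors shrink rapidly as x decreases, matching the O(x^3) and O(x^5) remainders.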

So, the definition of big Oh that is used to truncate Taylor series is:

f(x) = O(g(x)) when there exists an interval (a,b) and a constant K such that |f(x)| < K|g(x)| for all x in (a,b)?

It doesn't really seem like that is saying much, since it would then be true for any two functions that intersect, e.g. x = O(x^15) and x^15 = O(x), because you can just take a small interval around zero. Am I missing something?

NateTG
Homework Helper
It doesn't really seem like that is saying much, since it would then be true for any two functions that intersect, e.g. x = O(x^15) and x^15 = O(x), because you can just take a small interval around zero. Am I missing something?

Let's assume that $x=O(x^{15})$ near zero, so there is some positive constant $k$ such that
$$kx < x^{15}$$
on
$$(0,\epsilon)$$
(where $\epsilon$ is some small positive number).
Now consider
$$x=\min\left(\frac{\epsilon}{2}, \frac{\sqrt[14]{k}}{2}\right),$$
which clearly lies in the interval but violates the inequality, since $x^{14} < k$ implies $x^{15} < kx$.
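The same counterexample can be seen numerically (an editorial sketch): if x = O(x^15) held near 0, the ratio x / x^15 would have to stay bounded by some constant on (0, epsilon), but it blows up as x -> 0+.

```python
# Numeric check of the counterexample: x = O(x^15) near 0 would require
# x <= K * x^15 on some (0, eps), i.e. x / x^15 = x**(-14) bounded by K.
# The ratio grows without bound as x -> 0+, so no such constant K exists.
for x in [0.5, 0.1, 0.01]:
    print(f"x = {x:4}  x / x^15 = {x / x**15:.3e}")
```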

Hurkyl
Staff Emeritus
Gold Member
So, the definition of big Oh that is used to truncate Taylor series is:

f(x) = O(g(x)) when there exists an interval (a,b) and a constant K such that |f(x)| < K|g(x)| for all x in (a,b)?
(a, b) has to contain your target point. Maybe an additional decoration would help; the following are true:

$$x \in O_{+\infty}(x^{15})$$
$$x^{15} \in O_0(x)$$
$$x^{15} \notin O_{+\infty}(x)$$
$$x \notin O_0(x^{15})$$

The target point is usually left out because it's clear from context. e.g. if you're analyzing algorithms, you are always looking at behavior near $+\infty$.

I see. And you could also write e.g.

$$(x-5)^{15} \in O_5(x-5)$$

which is why it is possible to use big O notation whenever you have a finite Taylor series expansion, right?