# Definition of Asymptoticity to a Function

1. Aug 3, 2006

### hanson

Hi all!
Can anyone explain to me why "a power series is asymptotic to a function" is defined as shown in the picture?

Referring to the first definition, it means that the remainder after N terms should be much smaller than (x-xo)^N? Actually, I can't "sense" the meaning of such a definition.

Maybe, sometimes, a definition may not bear any meaning...

But "asymptotic", in my mind, means getting "closer and closer" to something, and I guess this definition should carry some implication of that. Can anyone explain this definition to me?

I am completely stuck with this. Please kindly help.

2. Aug 3, 2006

### BoTemp

I'm assuming those should be power series, of form $a_n(x - x_o)^n$ in the summation.

Let ~ mean =, and let M = N + 1 (it can't be guaranteed that $a_{N+1}$ is non-zero, but one can always replace N + 1 with the lowest integer b s.t. $a_b$ is non-zero). Substituting the bottom equation into the top gives

$$a_{N+1}(x - x_o)^1 \ll 1$$

Since this must be true for all x, the a's must become arbitrarily small.

I think what you're asking for is a more intuitive understanding. Remember that for a series to converge, the terms ($a_n(x - x_o)^n$ in this case) must approach 0 fast enough; in fact they become arbitrarily small. What the second equation is saying is that the difference between the function and the power series to which it's equal becomes arbitrarily small as more terms are included. The error between $f_N(x)$, the series with the first N terms, and $f(x)$, the actual function, is about $a_M(x - x_o)^M$, which goes to 0 as N goes to infinity. Does that help?

3. Aug 5, 2006

### Clausius2

A series is said to be asymptotic to a function $$f(\epsilon)$$ if

$$f(\epsilon)=\sum_{m=0}^{N-1} c_m \delta_m(\epsilon)+o(\delta_N)$$ as $$\epsilon\rightarrow 0$$

where $$\delta_m(\epsilon)$$ is called the asymptotic sequence. Thus, the Nth term is small compared with the (N-1)th one as $$\epsilon\rightarrow 0$$. To tell the truth, some asymptotic series converge and others diverge. Usually one stops at the term which gives the minimum error; the error typically has a local minimum at some finite number of terms, or a minimum at infinity. In fact, uniform convergence of an asymptotic series is not always needed in problems related to physics, because usually one only needs the first two terms of the series. Also, a function $$f$$ may have several asymptotic series, some of them divergent and others convergent.
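The point about stopping at the term with minimum error can be checked numerically. Below is a small sketch (the function names and the crude quadrature are my own choices, not from the thread) using Euler's classic example: $$f(x)=\int_0^\infty \frac{e^{-t}}{1+xt}\,dt$$ has the asymptotic series $$\sum_n (-1)^n n!\, x^n$$ as $$x\rightarrow 0^+$$, which diverges for every fixed x > 0, yet its truncations approximate f(x) well up to an optimal number of terms near 1/x.

```python
import math

def f(x, upper=60.0, steps=400_000):
    """Crude trapezoidal quadrature for ∫_0^∞ e^{-t}/(1+xt) dt.
    The e^{-t} factor makes the tail beyond t=60 negligible."""
    h = upper / steps
    total = 0.5 * (1.0 + math.exp(-upper) / (1.0 + x * upper))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-t) / (1.0 + x * t)
    return total * h

def partial_sum(x, N):
    """First N+1 terms of the divergent asymptotic series Σ (-1)^n n! x^n."""
    return sum((-1) ** n * math.factorial(n) * x ** n for n in range(N + 1))

x = 0.1
exact = f(x)
errors = [abs(partial_sum(x, N) - exact) for N in range(20)]
best_N = min(range(20), key=lambda N: errors[N])

# The error shrinks roughly until N ≈ 1/x = 10, then the n! growth
# takes over and the truncations get worse again.
print("optimal truncation:", best_N, "error:", errors[best_N])
```

Printing the whole `errors` list shows the characteristic U-shape: a divergent series can still be an excellent approximation, provided you know where to stop.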

Last edited: Aug 5, 2006
4. Aug 5, 2006

### Hurkyl

Staff Emeritus
Let N be a very large number.

N^2 is a very, very large number, right?
N^2 - N is a very, very large number, right?

But the difference between the two is only very large -- and that's insignificant when you're dealing with very, very large quantities!

So, there's a real intuitive sense that the functions:

f(x) = x^2

and

g(x) = x^2 - x

are asymptotically similar.

For suitably well-behaved functions, f(x) and g(x) are asymptotic iff:

$$\lim_{x \rightarrow +\infty} \frac{f(x)}{g(x)} = 1$$

Some contexts might play with the constant on the r.h.s. For example, the big-O notation you see in analytic number theory and computer science is given by:

$$f(x) \in O(g(x)) \Leftrightarrow \lim_{x \rightarrow +\infty} \frac{f(x)}{g(x)} < +\infty$$

(again, I'm assuming suitably well-behaved functions to simplify things)
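Hurkyl's ratio definition is easy to check numerically. This is just an illustrative sketch of the limit above, with f(x) = x^2 and g(x) = x^2 - x:

```python
def ratio(x):
    """f(x)/g(x) for f(x) = x^2 and g(x) = x^2 - x."""
    return (x * x) / (x * x - x)

# The ratio tends to 1 as x grows, even though the absolute
# difference f(x) - g(x) = x itself blows up -- it is negligible
# *relative* to either function, which is exactly the
# "very large vs. very, very large" point.
for x in [10.0, 1_000.0, 1_000_000.0]:
    print(x, ratio(x))
```

By contrast, f(x) = x^2 and h(x) = 2x^2 are *not* asymptotic under this definition (the ratio tends to 1/2), though each is big-O of the other.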