Definition of Asymptoticity to a Function

hanson
Hi all!
Can anyone explain to me why "a power series is asymptotic to a function" is defined as shown in the picture?

Referring to the first definition, it means that the remainder after N terms should be much smaller than (x - xo)^N? Honestly, I can't "sense" the meaning of such a definition.

Maybe, sometimes, a definition may not bear any meaning...

But "asymptotic", in my mind, means getting "closer and closer" to something, and I guess this definition should carry some implication of that. Can anyone explain this definition to me?

I am completely stuck on this. Please kindly help.
 

Attachments

  • asymptotic relation.JPG
I'm assuming those should be power series, of the form a_n(x - x_o)^n in the summation.

Let ~ mean =, and let M = N + 1 (it can't be guaranteed that a_{N+1} is non-zero, but you can always replace 1 with the lowest integer b such that a_{N+b} is non-zero). Substituting the bottom equation into the top one gives

a_{N+1}(x - x_o)^1 \ll 1

Since this must be true for all x, the a's must become arbitrarily small.

I think what you're asking for is more intuitive understanding. Remember that for a series to converge, the terms (a_n(x - x_o)^n in this case) must approach 0 fast enough; in fact they become arbitrarily small. What the second equation is saying is that the difference between the function and the power series to which it's equal becomes arbitrarily small as more terms are included.

The error between f_N(x), the series with the first N terms, and f(x), the actual function, is about a_M(x - x_o)^M, which goes to 0 as N goes to infinity.

Does that help?
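As a quick numerical sanity check of that remainder condition (my own sketch, not from the attachment), take f(x) = e^x expanded about x_o = 0 and watch the remainder after the terms up to x^N shrink faster than x^N as x approaches x_o:

```python
import math

# Sketch: for f(x) = exp(x) about x_o = 0, the remainder after the
# terms up to x^N should be << (x - x_o)^N as x -> x_o.
def partial_sum(x, N):
    # sum_{n=0}^{N} x^n / n!  (Taylor series of exp truncated at x^N)
    return sum(x**n / math.factorial(n) for n in range(N + 1))

N = 3
ratios = []
for x in [0.1, 0.01, 0.001]:
    remainder = math.exp(x) - partial_sum(x, N)
    ratios.append(remainder / x**N)

print(ratios)  # shrinks roughly like x / (N+1)! -- remainder is o(x^N)
```

The ratio remainder/x^N behaves like x/(N+1)!, so it goes to 0 with x, which is exactly what the "much smaller than (x - x_o)^N" condition demands.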
 
A series is said to be asymptotic to a function f(\epsilon) if

f(\epsilon)=\sum_{m=0}^{N-1} c_m \delta_m(\epsilon)+o(\delta_N) as \epsilon\rightarrow 0

where \delta_m(\epsilon) is called the asymptotic sequence. Thus, the Nth coefficient term is small compared with the (N-1)th one. In truth, some asymptotic series converge and others diverge. Usually one stops at the term which gives the minimum error; the error typically has a local minimum or a minimum at infinity. In fact, uniform convergence of an asymptotic series is not always needed in problems related to physics, because usually one only needs the first two terms of the series. Also, a function f may have several asymptotic series; some of them may be divergent and others convergent.
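To make the "stop at the term with minimum error" remark concrete, here is a small sketch (my own illustration, not from the thread) using the classic divergent asymptotic series \sum_m (-1)^m m! \epsilon^m for the Stieltjes integral \int_0^\infty e^{-t}/(1+\epsilon t)\,dt:

```python
import math

def stieltjes(eps, steps=200_000, tmax=40.0):
    # Crude trapezoidal estimate of the integral of exp(-t)/(1 + eps*t)
    # over [0, infinity); the tail beyond tmax is ~exp(-40), negligible.
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-t) / (1.0 + eps * t)
    return total * h

eps = 0.1
exact = stieltjes(eps)
partial, errors = 0.0, []
for m in range(20):
    partial += (-1)**m * math.factorial(m) * eps**m
    errors.append(abs(exact - partial))

best = min(range(len(errors)), key=errors.__getitem__)
print(best, errors[best])  # error is smallest near m ~ 1/eps, then grows
```

The series never converges (the m! eventually beats \epsilon^m), yet truncating near m ~ 1/\epsilon gives an excellent approximation, which is why divergent asymptotic series are still useful in physics.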
 
Last edited:
Let N be a very large number.

N^2 is a very, very large number, right?
N^2 - N is also a very, very large number, right?

But the difference between the two is only very large -- that's insignificant when you're dealing with very, very large quantities!

So, there's a real intuitive sense that the functions:

f(x) = x^2

and

f(x) = x^2 - x

are asymptotically similar.


For suitably well-behaved functions, f(x) and g(x) are asymptotic iff:

\lim_{x \rightarrow +\infty} \frac{f(x)}{g(x)} = 1
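A quick numerical check of that limit for the x^2 versus x^2 - x example above (my own sketch):

```python
# f(x) = x^2 and g(x) = x^2 - x: the ratio f/g tends to 1 as x grows.
ratios = [x**2 / (x**2 - x) for x in (10.0, 1e3, 1e6)]
print(ratios)  # each value is closer to 1 than the last
```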

Some contexts might play with the constant on the r.h.s. For example, the big-O notation you see in analytic number theory and computer science is given by:

f(x) \in O(g(x)) \Leftrightarrow \lim_{x \rightarrow +\infty} \frac{f(x)}{g(x)} < +\infty

(again, I'm assuming suitably well-behaved functions to simplify things)
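And a matching sketch for the big-O version (again my own example, assuming suitably nice functions): f(x) = 3x^2 + 5x is O(x^2) even though f/g does not tend to 1 — the ratio only needs to stay bounded.

```python
# f(x) = 3x^2 + 5x, g(x) = x^2: f/g -> 3, which is finite, so f is O(g).
ratios = [(3*x**2 + 5*x) / x**2 for x in (10.0, 1e3, 1e6)]
print(ratios)  # tends to 3: bounded, but not 1
```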
 