# Laurent series

Hi! There are a few things I'm confused about, and I hope some of you wouldn't mind helping me with them:

1) Why do I need these Laurent series? As I understood from Calculus 1, the Taylor series around ##x_0## will always approximate a function ##f(x)## gradually better as the order ##n## increases. First it takes into account ##f(x_0)##, then the first derivative, then the second, and so on, until it can perfectly predict how the function behaves everywhere around ##x_0##.

However, now in the complex plane, I see this stuff about "singularities" (points where ##f(z)## is not analytic)... I don't understand. Doesn't this mean that the Taylor series of ##f(z)## still applies everywhere EXCEPT at the singularities?

2) What exactly is the "radius of convergence" of a Taylor series? I thought a Taylor series around ##z_0## would always converge to its function ##f(z)## regardless of the values of ##z_0## or ##z##, provided ##f(z)## is analytic at those points.

3) What do the mathematicians mean when they use those annuli to explain Laurent series? I realize they mean that a function ##f(z)## is analytic for any ##z## inside the annulus defined by the two circles ##C_1## and ##C_2##, but beyond that I understand little more...

I need to understand step 2) before I can understand step 3), I guess.

All help is highly appreciated!

1. You cannot expand a function about its singularity into a Taylor series, because a Taylor series the value of the function at the point about which the expansion is made.

2. Consider 1/(1 - x). Clearly the function is analytic everywhere except x = 1. What is its Taylor series around x = 0? Does it converge for x > 1?
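A quick numeric sketch of this example (the helper name `geometric_partial_sum` is just for illustration): the Taylor series of 1/(1 - x) around 0 is the geometric series, and its partial sums behave very differently on the two sides of x = 1.

```python
# Partial sums of the Taylor series of 1/(1-x) around x = 0,
# i.e. the geometric series 1 + x + x^2 + ...
def geometric_partial_sum(x, n_terms):
    return sum(x**k for k in range(n_terms))

f = lambda x: 1.0 / (1.0 - x)

# Inside the radius of convergence (|x| < 1) the partial sums
# approach the function value...
approx = geometric_partial_sum(0.5, 50)

# ...but for x > 1 the terms grow without bound, so the partial sums
# blow up even though 1/(1-x) itself is perfectly finite at x = 2.
diverging = geometric_partial_sum(2.0, 50)
```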

3. I am not sure what is being asked. A Laurent series converges within an annulus, while a Taylor series converges within a disk. That is all there is to it.

1) I don't understand. "because a Taylor series the value of the function at the point about which the expansion is made." ?

3) But why does the Laurent series' addition of negative-power terms make it converge in an annulus?

1. You cannot expand a function about its singularity into a Taylor series, because a Taylor series the value of the function at the point about which the expansion is made.
Your first point is a little unclear :/

I would say that you cannot expand a function about its singularity into a Taylor series because a Taylor Series "computes" the value of the function at the point about which the expansion is made.

1) I don't understand. "because a Taylor series the value of the function at the point about which the expansion is made." ?
Assume ## f(x) ## has a singularity at ## x = x_0 ##. Its Taylor expansion about ## x_0 ## is, formally, ## f(x) = f(x_0) + f'(x_0)(x - x_0) + \ ... ##. But ## f(x_0) ## does not exist.

3) But why does the Laurent series' addition of negative-power terms make it converge in an annulus?
Consider ## \frac {e^z} { z } ##. It is singular at ## z = 0 ##. But ## e^z## is OK at zero, so it has a converging Taylor expansion there: ## e^z = 1 + z + \frac {z^2} 2 + \ ... ##. Thus ## \frac {e^z} { z } = \frac {1} { z } (1 + z + \frac {z^2} 2 + \ ...) = \frac 1 z + 1 + \frac z 2 + \ ... ##. So we "pull" the singularity out of a function, and that singularity is represented by negative powers.
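That "pull the singularity out" expansion can be checked numerically; this sketch just sums the first few Laurent terms ##z^{k-1}/k!## and compares them against ##e^z/z## (the helper name is made up):

```python
import math
import cmath

# Partial sum of the Laurent series of e^z / z around 0:
# 1/z + 1 + z/2 + z^2/6 + ... , i.e. the Taylor coefficients of e^z
# shifted down by one power of z.
def laurent_exp_over_z(z, n_terms):
    return sum(z**(k - 1) / math.factorial(k) for k in range(n_terms))

z = 0.1 + 0.2j            # any small nonzero point near the singularity
exact = cmath.exp(z) / z
approx = laurent_exp_over_z(z, 20)
```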

pwsnafu
As I understood from Calculus 1, the taylor series around ##x_0## will always approximate a function ##f(x)## gradually better as the order ##n## increases.
No, they wouldn't have said that. On the reals, there are infinitely differentiable functions whose Taylor series doesn't converge.

First it takes into consideration ##f(x_0)##, then the first order differentiated, then second and so on until it is able to perfectly predict how the function will behave over all the ##x## space around ##x_0##.
If the Taylor series converges then the series equals the function on a neighborhood around x0. The radius of convergence corresponds to the largest neighborhood around x0.

However, the set of points where the Taylor series converges can be smaller than the domain of f. Example: the real function ##f(x) = \frac{1}{1+x^2}##. If you expand around ##x_0=0##, the radius is 1.
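A small numeric sketch of this radius-1 behaviour, using the expansion ##\frac{1}{1+x^2} = 1 - x^2 + x^4 - \ldots## (the helper name is illustrative):

```python
# Taylor series of 1/(1+x^2) around 0: 1 - x^2 + x^4 - x^6 + ...
def partial_sum(x, n_terms):
    return sum((-1)**k * x**(2 * k) for k in range(n_terms))

# |x| < 1: the partial sums converge to 1/(1+x^2) = 0.8 at x = 0.5 ...
inside = partial_sum(0.5, 60)

# ... but |x| > 1: the terms grow, and the partial sums blow up,
# even though 1/(1+x^2) is perfectly well defined at x = 1.5.
outside = abs(partial_sum(1.5, 60))
```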

However, now in the complex plane, I see this stuff about "singularities" (points where ##f(z)## is not analytic)... I don't understand. Doesn't this mean that the Taylor series of ##f(z)## still applies everywhere EXCEPT at the singularities?
No. Visualize a disk slowly expanding from x0. The boundary of the disk hits the singularity, and then the disk stops.

2) What exactly is the "radius of convergence" of a Taylor series? I thought a Taylor series around ##z_0## would always converge to its function ##f(z)## regardless of the values of ##z_0## or ##z##, provided ##f(z)## is analytic at those points.
A function is "analytic on a set U" iff for each ##x_0 \in U## the Taylor expansion around x0 converges in an open neighborhood of x0. A function is said to be an "analytic function" if it is analytic on its domain.

The key point of the definition is the open neighborhood. Just because the series converges near x0, it does not mean it converges far away from x0.

Note that in complex analysis we are more interested in functions with singularities than in analytic functions.

3) What do the mathematicians mean when they use those annuli to explain Laurent series? I realize they mean that a function ##f(z)## is analytic for any ##z## inside the annulus defined by the two circles ##C_1## and ##C_2##, but beyond that I understand little more...
A Laurent series has two parts:
1. the normal Taylor series, which converges in a disk (e.g. the set of all z such that |z| < 2).
2. A power series in ##z^{-1}##, which converges in the complement of a disk (e.g. the set of all z such that |z| > 1).
The Laurent series is the sum of the two, so it can only converge in the intersection, which is an annulus.
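For the negative-power half, a minimal sketch with ##1/(1-z)##: for ##|z| > 1## it equals ##-(z^{-1} + z^{-2} + \ldots)##, a series in ##z^{-1}## that converges exactly outside the unit disk (the function name is made up):

```python
# For |z| > 1, write 1/(1-z) = -(1/z)/(1 - 1/z) and expand in powers
# of 1/z:  1/(1-z) = -(1/z + 1/z^2 + 1/z^3 + ...),
# valid precisely where |1/z| < 1, i.e. in the complement of the unit disk.
def negative_power_sum(z, n_terms):
    return -sum(z**(-n) for n in range(1, n_terms + 1))

z = 3.0 + 0.0j
exact = 1.0 / (1.0 - z)              # = -0.5
approx = negative_power_sum(z, 60)
```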

thanks guys! I think I almost got it now! :)

If the Taylor series converges then the series equals the function on a neighborhood around x0. The radius of convergence corresponds to the largest neighborhood around x0.

However, the set of points where the Taylor series converges can be smaller than the domain of f. Example: the real function ##f(x) = \frac{1}{1+x^2}##. If you expand around ##x_0=0##, the radius is 1.
why is that the case? Could you perhaps explain a bit about the "radius of convergence" for Taylor and Laurent series? For instance, what happens when there is more than one singularity? Do you then have one Laurent series about each singularity?

No. Visualize a disk slowly expanding from x0. The boundary of the disk hits the singularity, and then the disk stops.
Why does it stop in all directions just because of one point in the complex plane?

A Laurent series has two parts:
1. the normal Taylor series, which converges in a disk (e.g. the set of all z such that |z| < 2).
2. A power series in ##z^{-1}##, which converges in the complement of a disk (e.g. the set of all z such that |z| > 1).
The Laurent series is the sum of the two, so it can only converge in the intersection, which is an annulus.
aha, OK. I understand quite a bit more now, but there is one last thing:

Why exactly does the Taylor series converge in a disk, and the negative powers in an annulus? And why are the negative powers not convergent everywhere? I mean, the Taylor series does not take into account points where its function goes to infinity, but what about the negative-power series? When do they collapse?

Consider ## \frac {e^z} { z } ##. It is singular at ## z = 0 ##. But ## e^z## is OK at zero, so it has a converging Taylor expansion there: ## e^z = 1 + z + \frac {z^2} 2 + \ ... ##. Thus ## \frac {e^z} { z } = \frac {1} { z } (1 + z + \frac {z^2} 2 + \ ...) = \frac 1 z + 1 + \frac z 2 + \ ... ##. So we "pull" the singularity out of a function, and that singularity is represented by negative powers.
Hmm, I think I understand now. So the entire point of Laurent series is to get rid of "disconnects" between the Taylor expansion and its function? So when the function is undefined, the expansion is also undefined there?

My book ("Advanced Engineering Mathematics" by Erwin Kreyszig) explains everything in proofs, so it is nice to see some intuitive arguments for a change.

HallsofIvy
Homework Helper
No, they wouldn't have said that. On the reals, there are infinitely differentiable functions whose Taylor series doesn't converge.
Indeed, there are also infinitely differentiable functions whose Taylor series does converge, but not to the function.

An example is $$f(x)= e^{-1/x^2}$$ if x is not 0, f(0)= 0. That is infinitely differentiable and every derivative is 0 at x= 0. So the Taylor's series about x= 0 is identically 0. But f(x)= 0 only at x= 0.
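A quick numeric illustration of how "flat" this function is at 0, which is why every one of its Taylor coefficients there vanishes:

```python
import math

# HallsofIvy's example: f(x) = exp(-1/x^2) for x != 0, f(0) = 0.
def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# Near 0 the function is smaller than ANY power of x, so all its
# derivatives at 0 are 0 and the Taylor series there is identically 0,
# yet f(x) > 0 for every x != 0.
tiny = f(0.1)          # roughly e^(-100), astronomically small
compare = 0.1**20      # 1e-20, already far larger
```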

If the Taylor series converges then the series equals the function on a neighborhood around x0. The radius of convergence corresponds to the largest neighborhood around x0.
No. There exist Taylor series (the above example is one) that have infinite radius of convergence but do not converge to the function on any open neighborhood of the point. Functions with the property that there exists some neighborhood of the point on which the Taylor series converges to the function are called (real) "analytic". It is true that if a function defined on the complex numbers is infinitely differentiable (indeed, just "differentiable" is sufficient) on a neighborhood of a point, then it is "analytic" at that point. But that is not true of functions defined on the real numbers.

pwsnafu
thanks guys! I think I almost got it now! :)

why is that the case? Could you perhaps explain a bit about the "radius of convergence" for Taylor and Laurent series? For instance, what happens when there is more than one singularity? Do you then have one Laurent series about each singularity?
Take the example from before, ##f(z) = 1/(1+z^2)##: it has two singularities, ##z=i## and ##z=-i##.
If we expand a Taylor series around ##z=0##, the radius is 1. This is because both i and -i are 1 unit away from 0.
If we expand a Taylor series around ##z=i/2##, the radius is 1/2. -i is 3/2 units away, but i is only 1/2 away; we use the minimum of the two. The disk expands, hits i first, and stops.
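In other words, the radius is just the distance from the expansion point to the nearest singularity; a tiny sketch (the helper name is illustrative):

```python
# For f(z) = 1/(1+z^2) the singularities are at i and -i.
singularities = [1j, -1j]

# The Taylor radius around z0 is the distance to the nearest singularity.
def taylor_radius(z0):
    return min(abs(z0 - s) for s in singularities)

r0 = taylor_radius(0)        # 1.0: both singularities are 1 unit away
r1 = taylor_radius(0.5j)     # 0.5: i is closer than -i, so i wins
```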
If we expand a Laurent around ##z=i## we get
##-\frac{i}{2}\frac{1}{z-i} + \frac14 +\frac{i}{8}(z-i) - \frac{1}{16}(z-i)^2- \ldots##
and it converges when ##0 < |z-i| < 2##. 0 is the distance from i to itself; 2 is the distance from i to -i.
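The printed coefficients match the pattern ##-(i/2)^{m+2}## for the ##(z-i)^m## term (##m = -1## gives ##-i/2##, ##m = 0## gives ##1/4##, ##m = 1## gives ##i/8##, ...), which makes the series easy to check numerically inside the annulus; a sketch with a made-up helper name:

```python
# Partial sum of the Laurent series of 1/(1+z^2) around i:
# the (z-i)^m coefficient is -(i/2)^(m+2) for m >= -1.
def laurent_partial_sum(z, n_terms):
    w = z - 1j
    return sum(-(0.5j)**(m + 2) * w**m for m in range(-1, n_terms))

z = 1j + 0.5                  # 0 < |z - i| = 0.5 < 2, inside the annulus
exact = 1.0 / (1.0 + z**2)
approx = laurent_partial_sum(z, 40)
```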

Why does it stop in all directions just because of one point in the complex plane?
The region of convergence of a power series is always a disk, and its radius is the distance to the nearest singularity. So once the boundary touches that one point, the disk stops growing in every direction, even in directions where the function itself is fine.

aha, OK. I understand quite a bit more now, but there is one last thing:

Why exactly does the Taylor series converge in a disk,
This is a metric space question. Open balls in the complex plane are shaped like that.

and the negative powers in an annulus?
The negative powers converge in the complement of a disk. It's the intersection of the two regions that gives an annulus.
The negative powers amount to "expanding a power series around the point at infinity of the Riemann sphere". It's... complicated.

And why are the negative powers not convergent everywhere? I mean, the Taylor series does not take into account points where its function goes to infinity, but what about the negative-power series? When do they collapse?
Remember that your initial function has codomain ℂ. There is no point at infinity in the plane (it does exist on the Riemann sphere). From the above example, ##-\frac{i}{2}\frac{1}{z-i}## can't be defined at z=i, because z-i is in the denominator.

You are not yet at the level where you can work with codomains more general than ℂ. If you are interested, these are called Riemann surfaces (the Riemann sphere is the simplest example). But that is way above your level for now.

Hmm, I think I understand now. So the entire point of Laurent series is to get rid of "disconnects" between the Taylor expansion and its function? So when the function is undefined, the expansion is also undefined there?
No, there is a technique called analytic continuation, where you use power series to extend the domain. But that's above your level right now.

No. There exist Taylor series (the above example is one) that have infinite radius of convergence but do not converge to the function on any open neighborhood of the point. Functions with the property that there exists some neighborhood of the point on which the Taylor series converges to the function are called (real) "analytic". It is true that if a function defined on the complex numbers is infinitely differentiable (indeed, just "differentiable" is sufficient) on a neighborhood of a point, then it is "analytic" at that point. But that is not true of functions defined on the real numbers.
Bleh. I mixed up "converge" and "converge to the function in question".
Another thing about the complex case: "on an [open] neighborhood" is required. It's possible for a function to be differentiable on a line and nowhere else, in which case it is not analytic.

Hmm, I think I understand now. So the entire point of Laurent series is to get rid of "disconnects" between the Taylor expansion and its function?
Not exactly. If a function has a singularity, then there is simply NO Taylor expansion around that singularity, as remarked earlier.

If, however, we still want for whatever reason to expand a function around a singularity, we look for a pair of functions ## f(z) ## and ## g(z) ## that are well behaved at that point, such that ## f(z)/g(z) ## is the original function (which is not always possible). Then we divide the Taylor expansion of ## f(z) ## by the Taylor expansion of ## g(z) ## and get the Laurent expansion of the original function.
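A sketch of that division for ##1/\sin z## near 0 (taking ##f = 1## and ##g = \sin##): write ##\sin z = z\,s(z)## with ##s(z) = 1 - z^2/6 + z^4/120 - \ldots##, invert the Taylor series of ##s(z)## term by term, and the leftover factor ##1/z## supplies the negative power. The helper names here are made up:

```python
import math

def taylor_sin_over_z(n):
    """Coefficients of s(z) = sin(z)/z up to degree n-1."""
    coeffs = [0.0] * n
    for k in range(0, n, 2):
        # coefficient of z^k is (-1)^(k/2) / (k+1)!
        coeffs[k] = (-1) ** (k // 2) / math.factorial(k + 1)
    return coeffs

def reciprocal_series(s, n):
    """Coefficients b of 1/s(z), assuming s[0] != 0, via the recurrence
    sum_{k=0..m} s[k] * b[m-k] = 0 for m >= 1."""
    b = [0.0] * n
    b[0] = 1.0 / s[0]
    for m in range(1, n):
        b[m] = -sum(s[k] * b[m - k] for k in range(1, m + 1)) / s[0]
    return b

# b[m] is then the Laurent coefficient of z^(m-1) in 1/sin(z):
# 1/sin(z) = 1/z + z/6 + 7 z^3/360 + ...
b = reciprocal_series(taylor_sin_over_z(8), 8)
```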

OK, I understand it at an adequate level now. Thanks a lot for your time and patience!