Series representations of functions

zezima1
I have been reading about the Legendre polynomials and how their completeness allows you to write any function as a sum of them. I have seen that used in electrostatics for the multipole expansion, which I guess is pretty nice, but here's the deal:
It seems that I am expected to learn more and more series representations of functions. Originally I only knew the Taylor polynomials, and they were simple and easy to work with. Why is it interesting to know all these other series representations, when Taylor polynomials work for every function and are so easy to work with?
 
zezima1 said:
I have been reading about the Legendre polynomials and how their completeness allows you to write any function as a sum of them. I have seen that used in electrostatics for the multipole expansion, which I guess is pretty nice, but here's the deal:
It seems that I am expected to learn more and more series representations of functions. Originally I only knew the Taylor polynomials, and they were simple and easy to work with. Why is it interesting to know all these other series representations, when Taylor polynomials work for every function and are so easy to work with?

Taylor series don't work for "every" function. The function has to be infinitely differentiable at the expansion point, and even then the series can fail to converge back to the function: the classic example is the function equal to ##e^{-1/x^2}## for ##x \neq 0## and to 0 at ##x = 0##, whose Taylor series at 0 is identically zero. A particularly large and useful series theory is that of Fourier series for periodic functions. These expand periodic functions in terms of sines and cosines, not at all like Taylor series, and are very useful in fields like electrical engineering and partial differential equations, to give a couple of examples.
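To see this concretely, here is a minimal Python/numpy sketch (my own illustration, not from the post above). It sums the Fourier sine series of a square wave, a discontinuous periodic function that no single Taylor series can represent on the whole real line:

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Partial Fourier sum of the square wave f(x) = sign(sin x).

    The full series is (4/pi) * sum over odd n of sin(n x) / n.
    """
    total = np.zeros_like(x, dtype=float)
    for k in range(n_terms):
        n = 2 * k + 1                    # odd harmonics only
        total += np.sin(n * x) / n
    return 4.0 / np.pi * total

x = np.linspace(-2 * np.pi, 2 * np.pi, 1001)
approx = square_wave_partial_sum(x, 50)
exact = np.sign(np.sin(x))
mask = np.abs(np.sin(x)) > 0.1           # stay away from the jumps
print("max error away from the jumps:", np.abs(approx - exact)[mask].max())
```

Away from the jump points the partial sums converge to the square wave; near the jumps you see the well-known Gibbs overshoot.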
 
Power-series expansion works for 1-D problems; the problem, I think, is that the recurrence relations your differential equation implies can be inconvenient.

Many problems in physics take place in more than one dimension. Then, depending on the system in question, you want to pick the best coordinates.

For instance, in freshman physics you do linear motion and rotational motion. The equations come out a bit different: you work with angle instead of x and y, or torque instead of force.

In 3-D, depending on the physical setup, we may prefer rectangular, cylindrical, or spherical coordinates, and in each case we face certain differential equations that we want to solve.

In rectangular coordinates, we use Fourier series in each direction.

In cylindrical coordinates, we use a Fourier expansion in the angular direction, a Bessel expansion in the radial direction, and (depending on the boundary conditions) exponentials or hyperbolic functions in the z direction.

In spherical coordinates, we use a Fourier expansion in the azimuthal direction (the angle around the z-axis) and a Legendre expansion in the polar direction (the angle from the positive z-axis). The radial part depends on the equation: for Laplace's equation it is just powers of r, while for something like the hydrogen atom the radial solutions involve Laguerre polynomials.

Various expansions interact nicely with certain differential equations. Power series interact with the derivative in a straightforward way, but the polar-angle part of the Laplacian in spherical coordinates interacts nicely with the Legendre expansion, and in rectangular coordinates the x part of the Laplacian interacts nicely with the Fourier expansion.
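To make the "expansion in a given coordinate direction" idea concrete, here is a small Python/numpy sketch (my own illustration; the helper name is mine). It projects a function on ##[-1, 1]## onto the first few Legendre polynomials, the same projection that produces the multipole coefficients in electrostatics:

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_coeffs(f, l_max):
    """Coefficients c_l of f(x) ~ sum_l c_l P_l(x) on [-1, 1].

    Orthogonality, integral of P_l * P_m = 2/(2l+1) * delta_lm, gives
    c_l = (2l+1)/2 * integral of f(x) P_l(x) dx; the integral is
    evaluated with Gauss-Legendre quadrature.
    """
    nodes, weights = L.leggauss(l_max + 20)          # quadrature rule
    fx = f(nodes)
    cs = []
    for l in range(l_max + 1):
        Pl = L.legval(nodes, np.eye(l_max + 1)[l])   # values of P_l
        cs.append((2 * l + 1) / 2.0 * np.sum(weights * fx * Pl))
    return np.array(cs)

# Example: expand e^x and compare the truncated series with the function.
c = legendre_coeffs(np.exp, 10)
x = np.linspace(-1.0, 1.0, 5)
print(L.legval(x, c))   # should closely match np.exp(x)
print(np.exp(x))
```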
 
zezima1 said:
…Why is it interesting to know all these other series representations, when Taylor polynomials work for every function and are so easy to work with?
In linear algebra terms, the Legendre polynomials, for example, form an orthogonal basis (with respect to the inner product ##\int_{-1}^{1} f(x)g(x)\,dx##), whereas the monomials ##\{1, x, x^2, \dots\}## do not. If you want to take advantage of the machinery of linear algebra, the Legendre polynomials may be a more convenient choice. This sort of thing pops up all over the place.

Solutions to differential equations are often more naturally expressed in terms of these functions. Sure, you might be able to brute-force a Taylor-series solution to some differential equation, but it would obscure the algebraic structure that underlies the solutions.
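To illustrate the orthogonality point numerically, here is a short Python sketch (my own example): it compares the Gram matrix of the monomials with that of the Legendre polynomials on ##[-1, 1]##. One matrix has off-diagonal entries, the other is diagonal:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Gram matrix G[i, j] = integral over [-1, 1] of b_i(x) b_j(x) dx for
# two bases, with the integral evaluated by Gauss-Legendre quadrature.
nodes, weights = L.leggauss(20)

def gram(basis_values):
    return np.array([[np.sum(weights * bi * bj) for bj in basis_values]
                     for bi in basis_values])

monomials = [nodes**k for k in range(4)]                       # 1, x, x^2, x^3
legendres = [L.legval(nodes, np.eye(4)[k]) for k in range(4)]  # P_0 .. P_3

print(np.round(gram(monomials), 3))  # off-diagonal entries: not orthogonal
print(np.round(gram(legendres), 3))  # diagonal, with entries 2/(2l+1)
```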
 
Also, the "basis functions" for these various series satisfy various differential equations. The Legendre polynomials are solutions to the "Legendre differential equation", Bessel functions are solutions to "Bessel equations", etc.

Just as you can solve ##Ax = y##, with ##x## and ##y## in some vector space and ##A## a linear operator on that space, by finding the eigenvectors of ##A## and writing the vectors as linear combinations of those eigenvectors, so you can write general solutions of these differential equations as linear combinations of their "eigenfunctions". That is exactly what the Legendre, Bessel, and Laguerre functions are.

(Historically, Fourier series were the first example, but we don't call the eigenfunctions of ##y'' + \lambda y = 0##, namely ##\cos(\sqrt{\lambda}\,x)## and ##\sin(\sqrt{\lambda}\,x)##, "Fourier functions"!)
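To spell out the simplest case of that eigenfunction picture (a standard textbook example, not something from the posts above): for ##y'' + \lambda y = 0## on ##[0, \pi]## with ##y(0) = y(\pi) = 0##, the eigenfunctions are ##\sin(nx)## with eigenvalues ##\lambda_n = n^2##, and expanding a function in them,
$$f(x) = \sum_{n=1}^{\infty} c_n \sin(nx), \qquad c_n = \frac{2}{\pi} \int_0^{\pi} f(x)\sin(nx)\,dx,$$
is exactly the Fourier sine series. The Legendre, Bessel, and Laguerre expansions play the same role for their respective equations.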
 