# Understanding Jackson - calculation of coefficients

1. Oct 19, 2009

### Old Guy

1. The problem statement, all variables and given/known data
After demonstrating that a set of functions is orthogonal and complete, Jackson presents equations like the ones shown below. I've used the Legendre series representation as an example, but he does almost exactly the same thing with the Bessel functions. This isn't a homework problem; I'm just trying to understand what he means. The first expression defines a function f as a series of (in this case) Legendre polynomials with coefficients $$A_l$$. He then defines the coefficients, but the definition of the coefficients is based on an integral of f itself, which seems circular. I'd like to know what this means, and how something like this could be used.

2. Relevant equations
$$f\left( x \right) = \sum\limits_{l = 0}^\infty {A_l P_l \left( x \right)} {\rm{ where }}A_l = \frac{{2l + 1}}{2}\int\limits_{ - 1}^1 {f\left( x \right)P_l \left( x \right)dx}$$

3. The attempt at a solution
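To see concretely that the two expressions are not circular, here is a minimal numerical sketch (my own example, not from Jackson): start from a known f(x), compute each coefficient from the integral, and check that the partial sum rebuilds f(x). It uses NumPy's Legendre tools and Gauss-Legendre quadrature for the integral over [-1, 1].

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Gauss-Legendre quadrature nodes and weights for integrals over [-1, 1]
x, w = np.polynomial.legendre.leggauss(50)

def f(t):
    return t**2  # a known function to expand (arbitrary choice)

def coeff(l):
    """A_l = (2l+1)/2 * integral of f(x) P_l(x) dx over [-1, 1]."""
    return (2*l + 1) / 2 * np.sum(w * f(x) * Legendre.basis(l)(x))

A = [coeff(l) for l in range(6)]

# The partial sum reproduces f; here x^2 = (1/3) P_0 + (2/3) P_2 exactly
approx = sum(A[l] * Legendre.basis(l)(x) for l in range(6))
```

The point is the direction of the logic: given f, the integral *produces* the coefficients; given the coefficients, the sum *reproduces* f.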

2. Oct 19, 2009

### javierR

The second expression is not really a definition of the A's. You can define them by the first expression alone: they are defined to be the coefficients in the expansion of a function in terms of the basis of P functions. In some cases you already have a function and want to find its form in terms of a particular basis of functions like the P's. The second expression is the inversion of the first: it lets you determine the coefficients that must appear in front of each of the P's. This is completely analogous to Fourier expansions in terms of sines and cosines, which form a basis of functions just as the P's do.

3. Oct 19, 2009

### ordirules

I think he's saying that the Legendre polynomials form a sort of "function space": you can build any function out of the Legendre polynomials with that sum you wrote.

The integral you wrote is just a way of finding out "how much of f(x) is in Pl(x)," so that you can construct f(x) purely as a sum of Legendre polynomials.

Different people have their own ways of understanding this, but here is one way I like to think of it:
Think of the Legendre functions as something like vectors. When you do that integral, you can think of it as a dot product (the limits never change and are important). From this point of view, you are writing f(x) as a linear combination of "Legendre polynomial" vectors. The neat thing about orthogonal functions is that their "dot product" (the integral $$\int_{-1}^{+1} P_l(x) P_m(x) dx = \frac{2}{2m+1} \delta_{l,m}$$) is zero unless a function is paired with itself.
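The "dot product" analogy above is easy to check numerically. This sketch (my own, using NumPy) evaluates the orthogonality integral with Gauss-Legendre quadrature: off-diagonal pairs give zero, and each polynomial paired with itself gives 2/(2l+1).

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Quadrature nodes and weights for integrals over [-1, 1]
x, w = np.polynomial.legendre.leggauss(30)

def dot(l, m):
    """'Dot product' of P_l and P_m: integral of P_l(x) P_m(x) dx over [-1, 1]."""
    return np.sum(w * Legendre.basis(l)(x) * Legendre.basis(m)(x))

# Off-diagonal entries vanish; the "length squared" of P_l is 2/(2l+1)
cross = dot(2, 3)   # ~0
norm3 = dot(3, 3)   # ~2/7
```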

This way you get an orthogonal function space where you can do some neat mathematical tricks. I have not done much electromagnetism, but I know a good example is in quantum mechanics, where you need the eigenvalues of an operator. I don't know if you've seen this before, but when an operator acts on one of its eigenvectors, it gives back that eigenvector times an eigenvalue. So if you have an operator acting on an arbitrary vector, you can break the vector into a sum of the operator's eigenvectors (which by nature are orthogonal) to get the answer.

Typically, in quantum mechanics, the operators act on functions, so these "eigenvectors" are themselves functions, but usually sines and cosines (of the form cos(nπx/L), for example, on a space from 0 to L), not Legendre polynomials.

I don't want to go too deep into this if you haven't seen it, but most likely what is happening is that it is easier to say something about the system once you break it up into its Legendre polynomial "components."

Another example is with sines and cosines. By Maxwell's equations, you know that electric signals come in the form of sines and cosines. If you receive a signal that looks like some arbitrary f(x), you can build an orthogonal space of sines and cosines and find out "how much of f(x)" is in each sine or cosine component.

For example, consider
$$\sqrt{\frac{2}{L}}\sin\left(\frac{n \pi x}{L}\right)$$
on the interval 0 to L. Each value of n is a different sine function. The integral below finds the coefficient of each one:
$$C_n = \int_0^L F(x) \sqrt{\frac{2}{L}} \sin\left(\frac{n \pi x}{L}\right) dx$$

for
$$\sum_{n = 1}^{\infty} C_n \sqrt{\frac{2}{L}} \sin\left(\frac{n \pi x}{L}\right)$$
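As a quick check of the sine-series version, here is a small sketch (my own example) using the normalization √(2/L), which makes the basis orthonormal on [0, L]. Taking the "signal" to be the first harmonic itself, all of F lands in the n=1 coefficient and the rest vanish.

```python
import numpy as np

L = 1.0
N = 2000
x = np.linspace(0.0, L, N + 1)
dx = L / N

def phi(n):
    """Orthonormal sine basis on [0, L]: sqrt(2/L) sin(n*pi*x/L)."""
    return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

# Example "signal": the first harmonic itself (an arbitrary choice)
F = phi(1)

# C_n = integral of F(x) phi_n(x) dx, approximated by a Riemann sum
# (the integrand vanishes at both endpoints, so this equals the trapezoid rule)
C = [dx * np.sum(F * phi(n)) for n in range(1, 6)]
# C is ~[1, 0, 0, 0, 0]: all of F "is in" the n = 1 component
```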

I hope this helps

4. Oct 19, 2009

### Old Guy

So, practically speaking, f(x) is the general series expression; you would normally use f(a) in the expression for the coefficient, where a would come from (for example) a given boundary condition, correct?

5. Oct 19, 2009

### Old Guy

Ordirules, thanks for the clarification; the different perspective makes sense. I just wanted to say that I was responding to javierR's post before I saw yours. Anyway, I think I got it now - thanks to you both.

6. Oct 19, 2009

### ordirules

No problem, I'm glad to help :-)