Understanding Jackson - calculation of coefficients

Old Guy

Homework Statement

After demonstrating that a set of functions is orthogonal and complete, Jackson presents equations like the ones shown below. I've used the Legendre series representation as an example, but he does almost exactly the same thing with the Bessel functions. This isn't a homework problem; I'm just trying to understand what he means. The first expression defines a function f as a series of (in this case) Legendre polynomials with coefficients A_l. He then defines the coefficients, but the definition of the coefficients is based on an integral of f, which seems circular. I'd like to know what this means and how something like this could be used.



Homework Equations


f\left( x \right) = \sum\limits_{l = 0}^\infty A_l P_l \left( x \right) \quad {\rm{where}} \quad A_l = \frac{2l + 1}{2}\int\limits_{-1}^{1} f\left( x \right) P_l \left( x \right)\,dx


The Attempt at a Solution



 
The second expression is not really a definition of the A's. They are defined by the first expression alone: they are the coefficients in the expansion of a function in terms of the basis of P functions. In some cases you already have a function and want to find its form in terms of a particular basis of functions like the P's. The second expression is the inversion of the first; it lets you determine the coefficient that must appear in front of each of the P's. This is completely analogous to Fourier expansions in terms of sines and cosines, which form a basis of functions just like the P's do.
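To see that the second expression really does invert the first, you can try it numerically. Here is a minimal Python sketch (the test function exp(x) is an arbitrary choice, not anything from Jackson), using scipy's quad for the integral and numpy's Legendre class for the polynomials:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

# Arbitrary example function to expand; exp(x) is just an illustrative choice.
f = lambda x: np.exp(x)

def coeff(l):
    """A_l = (2l+1)/2 * integral_{-1}^{1} f(x) P_l(x) dx  (the inversion formula)."""
    P_l = Legendre.basis(l)          # the l-th Legendre polynomial
    val, _ = quad(lambda x: f(x) * P_l(x), -1, 1)
    return (2 * l + 1) / 2 * val

# Truncate the series at l = 10 and check it reproduces f at a test point.
A = [coeff(l) for l in range(11)]
series = Legendre(A)                 # represents sum_l A_l P_l(x)
print(abs(series(0.3) - f(0.3)))     # tiny: the truncated series reproduces f
```

The coefficients computed from the integral, when summed back up with the P_l's, reproduce the original function, so there is no circularity: the integral simply extracts the A_l that the first expression says must exist.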
 
I think he's saying that the Legendre polynomials form a sort of "function space": you can build any function out of Legendre polynomials with the sum you wrote.

The integral you wrote is just a way of finding out "how much of f(x) is in P_l(x)," so that you can construct f(x) purely as a sum of Legendre polynomials.

Different people have their own ways of understanding this, but here is one I like:
Think of the Legendre functions as vectors. The integral then acts like a dot product (the limits never change, and they are important). In this picture, you are writing f(x) as a linear combination of "Legendre polynomial" vectors. The neat thing about orthogonal functions is that their "dot product" (the integral \int_{-1}^{+1} P_l(x) P_m(x)\,dx = \frac{2}{2l+1}\delta_{l,m}) is zero unless a function is paired with itself.

This way you can build an orthogonal function space in which you can do some neat mathematical tricks. I have not done much electromagnetism, but I know a good example comes from quantum mechanics, where you need the eigenvalues of an operator. I don't know if you've seen this before, but when an operator acts on one of its eigenvectors, it returns that eigenvector times an eigenvalue. So if you have an operator acting on an arbitrary vector, you break the vector into a sum of the operator's eigenvectors (which, for a Hermitian operator, are orthogonal) to get the answer.

Typically, in quantum mechanics, the operators act on functions, so these "eigenvectors" are themselves functions, usually sines and cosines (of the form \cos(n\pi x/L), for example, on an interval from 0 to L), not Legendre polynomials.

I don't want to go too deep into this if you haven't seen it, but most likely the point is that it is easier to say something about the system once you break it up into its Legendre polynomial "components."

Another example is with sines and cosines. From Maxwell's equations, you know that electric signals take the form of sines and cosines. If you receive a signal that looks like some arbitrary f(x), you can build an orthogonal space of sines and cosines and find out how much of f(x) is in each sine or cosine component.

For example, consider
\sqrt{\frac{2}{L}}\sin\left(\frac{n \pi x}{L}\right)
on the interval 0 to L. Each value of n gives a different sine function (the factor \sqrt{2/L} makes each basis function have unit "length"). The integral
C_n = \int_0^L f(x) \sqrt{\frac{2}{L}} \sin\left(\frac{n \pi x}{L}\right) dx

gives the coefficients in the sum
f(x) = \sum_{n = 1}^{\infty} C_n \sqrt{\frac{2}{L}} \sin\left(\frac{n \pi x}{L}\right)

(the n = 0 term vanishes, since \sin 0 = 0).
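The sine-series recipe can be sketched numerically as well. A minimal Python example, assuming L = 1 and an arbitrary test signal f(x) = x(1 - x) (note the orthonormal factor \sqrt{2/L} in the basis):

```python
import numpy as np
from scipy.integrate import quad

L_len = 1.0                       # the interval is [0, L]; L = 1 is an arbitrary choice
f = lambda x: x * (L_len - x)     # hypothetical example signal, zero at both ends

def basis(n, x):
    """Orthonormal sine basis on [0, L]: sqrt(2/L) sin(n pi x / L)."""
    return np.sqrt(2 / L_len) * np.sin(n * np.pi * x / L_len)

def C(n):
    """How much of f is in the n-th sine component."""
    val, _ = quad(lambda x: f(x) * basis(n, x), 0, L_len, limit=100)
    return val

def series(x, N=25):
    """Partial sum of the sine series, truncated at n = N."""
    return sum(C(n) * basis(n, x) for n in range(1, N + 1))

print(abs(series(0.3) - f(0.3)))  # small: the sine series reconstructs f
```

Just as with the Legendre case, the integral extracts each coefficient, and summing the components back up recovers the original signal.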

I hope this helps
 
So, practically speaking, f(x) is the general series expression; in the expression for the coefficients you would normally use a known f, for example one given by a boundary condition, correct?
 
Ordirules, thanks for the clarification; the different perspective makes sense. I just wanted to say that I was responding to javierR's post before I saw yours. Anyway, I think I got it now - thanks to you both.
 
no problem, I am glad to help :-)
 