Legendre Polynomials -- Jackson Derivation

Reverend Shabazz
Hello all,

I'm reading through Jackson's Classical Electrodynamics and am working through the derivation of the Legendre polynomials. He introduces an ##\alpha## term that seems to complicate the derivation and is throwing me for a bit of a loop. Jackson assumes the solution is of the form:

$$P(x) = x^{\alpha}\sum_{j=0}^{\infty} a_j x^j$$


Inserting this into the differential equation we need to solve,
$$\frac{d}{dx}\left[(1-x^2)\,\frac{dP}{dx}\right] + l(l+1)P = 0,$$
we obtain:
$$\sum_{j=0}^{\infty}\left\{(\alpha+j)(\alpha+j-1)\,a_j\,x^{\alpha+j-2} - \left[(\alpha+j)(\alpha+j+1) - l(l+1)\right]a_j\,x^{\alpha+j}\right\} = 0$$


He then says:
if ##a_0 \neq 0##, then ##\alpha(\alpha-1) = 0##, and if ##a_1 \neq 0##, then ##\alpha(\alpha+1) = 0##, while for a general ##j##,
$$a_{j+2} = \left[\frac{(\alpha+j)(\alpha+j+1) - l(l+1)}{(\alpha+j+1)(\alpha+j+2)}\right] a_j,$$
which I agree with. The issue I'm having is with the next statement:

A moment's thought shows that the two relations are equivalent and that it is sufficient to choose either ##a_0## or ##a_1## different from zero, but not both.


First, in saying the two relations are equivalent, I assume he means that if ##a_0 \neq 0## then we have ##P_{a_0 \neq 0}(x) = P_{\alpha=0}(x) + P_{\alpha=1}(x)##
##= (a_0 + a_1 x + a_2 x^2 + \dots) + (a_0' x + a_1' x^2 + a_2' x^3 + \dots),##

whereas if ##a_1 \neq 0## then we have ##P_{a_1 \neq 0}(x) = P_{\alpha=0}(x) + P_{\alpha=-1}(x)##
##= (a_0 + a_1 x + a_2 x^2 + \dots) + (a_0'' x^{-1} + a_1'' + a_2'' x + \dots),##

and, setting ##a_0'' = 0## to avoid the solution blowing up at ##x=0##, ##P_{a_0 \neq 0}(x)## is basically the same as ##P_{a_1 \neq 0}(x)##, since each can be expressed as an infinite series of the form ##c_0 + c_1 x + c_2 x^2 + \dots##. Therefore, we can choose either ##a_0 \neq 0## or ##a_1 \neq 0##.

But then he says we can't have both ##a_0 \neq 0## and ##a_1 \neq 0##, which I don't quite follow. If they were both nonzero, we'd need ##\alpha = 0## (to satisfy his eq. 3.13) and the solution would be of the form ##a_0 + a_1 x + a_2 x^2 + \dots##. What's wrong with that at this point in the derivation?

I recognize that eventually we will see that either ##a_0## (and ##a_1'##) or ##a_1## (and ##a_0'##) must equal 0 for the solution to be finite, but that's at a later stage in the derivation. At this stage, what's wrong with still assuming ##a_0 \neq 0## and ##a_1 \neq 0##?

I also recognize that I may be mucking things up due to staring at this for far too long..

Thanks!
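
Edit: to make the equivalence I'm claiming concrete, here's a quick numerical sketch (my own check, not from the book) that iterates Jackson's recurrence for the two cases and compares coefficients term by term:

```python
# My own check: the series with a_0 != 0, alpha = 1 and the series with
# a_1 != 0, alpha = 0 have identical coefficients term by term. Both use
# a_{j+2} = [(A+j)(A+j+1) - l(l+1)] / [(A+j+1)(A+j+2)] * a_j, A = alpha.

def series_coeffs(alpha, j_start, l, n_terms):
    """Coefficients of x^(alpha+j_start), x^(alpha+j_start+2), ... with seed 1."""
    out, a, j = [], 1.0, j_start
    for _ in range(n_terms):
        out.append(a)
        a *= ((alpha + j) * (alpha + j + 1) - l * (l + 1)) \
             / ((alpha + j + 1) * (alpha + j + 2))
        j += 2
    return out

l = 3
print(series_coeffs(alpha=1, j_start=0, l=l, n_terms=5))  # a_0 != 0, alpha = 1
print(series_coeffs(alpha=0, j_start=1, l=l, n_terms=5))  # a_1 != 0, alpha = 0
# Both print [1.0, -1.666..., 0.0, 0.0, 0.0]: the same odd series
# (proportional to P_3), just with the index j shifted by one.
```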
 
I don't think it was meant to be over-analyzed. If ##a_1 \neq 0## then there is a possibility that ##\alpha=-1##, but if ##a_0 \neq 0##, this possibility does not exist.

Editing... I don't quite follow J.D. Jackson's logic either, because what about ##\alpha=0##? I'm going to need to study it further...
 
A further look at these formulas shows that when multiple terms are included in these polynomials, then for a given integer value of ##l## they contain only even or only odd powers of ##x##. That seems to be the purpose of assuming ##a_0 \neq 0## or ##a_1 \neq 0##, but not both at the same time. The actual solution can contain both even and odd terms, but there are no constraints between the even and odd terms of the series solutions.
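
To see the parity structure explicitly, here is a small numpy sketch (my own illustration) printing the power-basis coefficients of the first few ##P_l##; every other coefficient is zero:

```python
# Sketch: each Legendre polynomial P_l contains only even or only odd powers of x.
from numpy.polynomial import legendre

for l in range(6):
    # [0]*l + [1] selects P_l; leg2poly converts Legendre-series coefficients
    # to ordinary power-basis coefficients, lowest degree first.
    print(l, legendre.leg2poly([0] * l + [1]))
# Even l: only even powers survive; odd l: only odd powers survive.
```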
 
Reverend Shabazz said:
At this stage in the derivation, what's wrong with still assuming ##a_0 \neq 0## and ##a_1 \neq 0##?

Let's start by considering the case ##a_0 \neq 0##. In this case we have ##\alpha = 0## or ##\alpha = 1##. Let's consider what happens when ##\alpha = 0##.

In this case the expression for the general ##a_{j+2}## is
$$a_{j+2} = \frac{j\left(j+1\right)-l\left(l+1\right)}{(j+1)(j+2)}\,a_j,$$
where the denominator is never zero for ##j \ge 0##. We are looking for a polynomial, i.e. ##a_j = 0## for all ##j > j_{max}##. There are two ways that this can happen. The first is that the seed coefficient ##a_0## or ##a_1## is equal to zero. The second is that the numerator ##j\left(j+1\right)-l\left(l+1\right)## equals zero for some value of ##j##. The numerator vanishes only if ##j = l## or ##j = -l-1##. Note that these two roots have opposite signs, so there is only one non-negative ##j## for a given ##l## at which the numerator is zero. If ##l## is even, then the coefficients of the even series vanish (##a_j = 0##) for ##j > l##, but the coefficients of the odd series are never zero unless ##a_1 = 0##. Thus if ##a_0 \neq 0## and ##\alpha = 0##, then we need ##a_1 = 0##.

If you repeat this analysis for ##a_0 \neq 0## but ##\alpha = 1##, you will see that ##a_1 = 0## by the same logic. This gives us two linearly independent solutions: one even in ##x## (the ##\alpha = 0## case) and one odd in ##x## (the ##\alpha = 1## case).

If you consider the case ##a_1 \neq 0##, then by the same logic you will find that ##a_0 = 0##, and you will again get an even and an odd solution. These are the same solutions you found for ##a_0 \neq 0##, but with the indices shifted by one.

For Jackson the above explanation is a "moment's thought."
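
Here's a minimal numerical sketch of the above (my own illustration, taking ##l = 4## and ##\alpha = 0## as an example): the even chain of coefficients cuts off once ##j## reaches ##l##, while the odd chain never terminates:

```python
# Sketch of the termination argument for alpha = 0, l = 4: iterate
# a_{j+2} = [j(j+1) - l(l+1)] / [(j+1)(j+2)] * a_j along each chain.

def chain(j_start, l, n_terms):
    out, a, j = [], 1.0, j_start
    for _ in range(n_terms):
        out.append((j, a))
        a *= (j * (j + 1) - l * (l + 1)) / ((j + 1) * (j + 2))
        j += 2
    return out

l = 4
print(chain(0, l, 6))  # even chain: a_6 = a_8 = ... = 0 once j passes l
print(chain(1, l, 6))  # odd chain: coefficients never vanish, so set a_1 = 0
```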
 
Thank you both for your replies!
 
the_wolfman said:
we are looking for a polynomial where ##a_j = 0## for all ##j > j_{max}##
Why are we looking for a polynomial (not a power series)?
 
This is just the usual Frobenius method. I'd not do it this way, because it's complicated compared to the elegant algebraic method used in quantum theory (I'd teach electromagnetism after QM, but that's another subject).

So you make the ansatz
$$P(x)=\sum_{j=0}^{\infty} a_j x^{j+\alpha},$$
where ##\alpha## has to be determined from the differential equation, as do the ##a_j##. It's now clear that you can assume without loss of generality that ##a_0 \neq 0##, because if ##a_0=0## you just have to redefine ##\alpha## to get the same series: ##\sum_{j=1}^{\infty} a_j x^{j+\alpha} = \sum_{j=0}^{\infty} a_{j+1} x^{j+(\alpha+1)}##.

If you choose ##a_0 \neq 0##, then you get either ##\alpha=0## or ##\alpha=1##. You can then set ##a_1=0## without losing any solutions, because the case ##a_1 \neq 0## and ##a_0=0## with ##\alpha=0## leads to the same series as choosing ##a_0 \neq 0## with ##\alpha=1##.
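
As a sanity check, here is a sympy sketch of my own (not part of Jackson's text): build the terminated series for ##l=4## with ##\alpha=0##, ##a_0=1##, ##a_1=0##, and verify that it solves Legendre's equation and is proportional to ##P_4##:

```python
# Sympy sketch: the terminated Frobenius series (alpha = 0, a_0 = 1, a_1 = 0)
# solves Legendre's equation and reproduces P_4 up to normalization.
import sympy as sp

x = sp.symbols('x')
l = 4
a = {0: sp.Integer(1)}
for j in range(0, l, 2):  # the recurrence with alpha = 0
    a[j + 2] = a[j] * sp.Rational(j * (j + 1) - l * (l + 1),
                                  (j + 1) * (j + 2))

P = sum(c * x**j for j, c in a.items())
residual = sp.diff((1 - x**2) * sp.diff(P, x), x) + l * (l + 1) * P
print(sp.simplify(residual))             # -> 0: the series solves the ODE
print(sp.expand(sp.Rational(3, 8) * P))  # -> 35*x**4/8 - 15*x**2/4 + 3/8 = P_4
```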
 
