Power Series: Following an example problem

Saladsamurai

Homework Statement



I am following along in an example problem and I am getting hung up on a step. We are seeking a power series solution of the DE:

(x - 1)y'' + y' +2(x - 1)y = 0 \qquad(1)

With the initial values y(4) = 5 \text{ and }y'(4) = 0. We seek the solution in the form

y(x) = \sum_{n=0}^{\infty}a_n(x - x_o)^n\qquad(2)

Here it is convenient to let x_o = 4. So we seek

y(x) = \sum_{n=0}^{\infty}a_n(x - 4)^n\qquad(3)
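(For later use, differentiating (3) term by term gives

y'(x) = \sum_{n=1}^{\infty} n\,a_n(x - 4)^{n-1}, \qquad y''(x) = \sum_{n=2}^{\infty} n(n-1)\,a_n(x - 4)^{n-2}

both of which get substituted back into (1).)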

Here is the book text:

To put (1) in standard form we divide by x - 1 which yields

y'' + \frac{1}{x - 1}y' + 2y = 0 \qquad(4)

Essentially we put (1) in the standard form of (4) so that we can test for analyticity, and we find that p(x) and q(x) in y'' + p(x)y' + q(x)y = 0 are analytic at x_o = 4.

Ok that's all great. Here is where I lose them:
To proceed we can either use the form (1) or (4). Since we are expanding each term in the differential equation about x = 4, we need to expand (x - 1) and 2(x - 1) if we use (1), or the 1/(x - 1) factor if we use (4). The former is easier since x - 1 = 3 + (x - 4)\qquad(5) is merely a two-term Taylor series, whereas 1/(x - 1) is an infinite series ...
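(For reference, the infinite series the book alludes to is the geometric expansion of 1/(x - 1) about x = 4, valid for |x - 4| < 3:

\frac{1}{x - 1} = \frac{1}{3 + (x - 4)} = \frac{1}{3}\sum_{k=0}^{\infty}(-1)^k\left(\frac{x - 4}{3}\right)^k

so working from (4) would drag an entire series in as a coefficient, whereas (1) only needs the two-term rewrites x - 1 = 3 + (x - 4) and 2(x - 1) = 6 + 2(x - 4).)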
I am not sure what is happening here. Why are we expanding the coefficients of the DE (1)? When we had a simple constant-coefficient DE, the procedure was simple: (i) assume y(x) takes the form of (2); (ii) differentiate (2) as many times as necessary and plug back into the DE; (iii) tidy up.

Now that we have a DE with variable coefficients, the procedure changes a bit, but I am not seeing how.

Let's talk generally for a moment. If I have a DE of the form A_1(x)y''(x) + A_2(x)y'(x) + A_3(x)y = 0 \qquad(6), and I seek the solution in the form y(x) = \sum_{n=0}^{\infty}a_n(x - x_o)^n\qquad(2), how does the procedure that I outlined above for the constant-coefficient case change for the case of (6) with the A_i(x) coefficients? Do I need to expand my A_i(x)'s about x_o as well? If yes, why? (I have a feeling I know the answer, but would like verification. I feel like it only makes sense that my coefficients that depend on x should 'track' my assumed y(x). I know that is not very rigorous, but it's all I am coming up with.)

Thanks for reading :smile:
 
Hi Saladsamurai! :smile:

The power series method ends with collecting the coefficients of (x - 4)^n, and putting them equal to zero, for each n separately.

With constant coefficients, that's easy … a_n(x - 4)^n becomes n a_n(x - 4)^{n-1} and so on, only one term at a time.

But with linear coefficients, eg p + q(x - 4), you get p a_n and q a_{n-1} in the same equation (ie for the same power of (x - 4)).

With quadratic coefficients, you get three terms, and so on (and a Taylor expansion is of course even worse).

For a Taylor expansion, you'd get an infinite number of terms for each power.
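To see it concretely for (1) (a quick sketch of the bookkeeping, worth double-checking): with t = x - 4, so that x - 1 = 3 + t, substituting y = \sum_n a_n t^n and collecting the coefficient of t^k gives

3(k + 1)(k + 2)\,a_{k+2} + (k + 1)^2\,a_{k+1} + 6\,a_k + 2\,a_{k-1} = 0, \qquad k \ge 0,\ a_{-1} := 0

… each power of t ties together four consecutive coefficients, which is exactly the mixing described above.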

Saladsamurai said:
Let's talk generally for a moment. If I have a DE of the form A_1(x)y''(x) + A_2(x)y'(x) + A_3(x)y = 0 \qquad(6), and I seek the solution in the form y(x) = \sum_{n=0}^{\infty}a_n(x - x_o)^n\qquad(2), how does the procedure that I outlined above for the constant-coefficient case change for the case of (6) with the A_i(x) coefficients? Do I need to expand my A_i(x)'s about x_o as well? If yes, why? (I have a feeling I know the answer, but would like verification. I feel like it only makes sense that my coefficients that depend on x should 'track' my assumed y(x). I know that is not very rigorous, but it's all I am coming up with.)

(ooh, i like your use of "black"! :wink:)

It's not really a question of expanding the As about x0, rather of turning the As into a polynomial in (x - x0) … though of course it comes to the same thing. :wink:

If your y is a polynomial in (x - x0), then your As must be also. :smile:
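If it helps to check the bookkeeping by machine, here is a minimal sympy sketch (an illustration only, with made-up names; it substitutes a truncated ansatz into (1), rewrites everything in powers of x - 4, and collects like powers):

Code:
import sympy as sp

x, t = sp.symbols('x t')
N = 6
a = sp.symbols(f'a0:{N}')                     # a0 ... a5, the unknown series coefficients

yser = sum(a[n]*(x - 4)**n for n in range(N))                              # truncated ansatz (3)
lhs = (x - 1)*sp.diff(yser, x, 2) + sp.diff(yser, x) + 2*(x - 1)*yser      # left side of (1)

# re-centre on t = x - 4 (this is where x - 1 = 3 + t enters) and expand
expr = sp.expand(lhs.subs(x, t + 4))

# each coefficient of t^k must vanish
for k in range(4):
    print(f"t^{k}:", expr.coeff(t, k), "= 0")

Each printed line mixes the neighbouring coefficients a_{k+2}, a_{k+1}, a_k (and a_{k-1} once k ≥ 1), matching the recurrence sketched above.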
 
tiny-tim said:
Hi Saladsamurai! :smile:

The power series method ends with collecting the coefficients of (x - 4)^n, and putting them equal to zero, for each n separately.

With constant coefficients, that's easy … a_n(x - 4)^n becomes n a_n(x - 4)^{n-1} and so on, only one term at a time.

But with linear coefficients, eg p + q(x - 4), you get p a_n and q a_{n-1} in the same equation (ie for the same power of (x - 4)).

With quadratic coefficients, you get three terms, and so on (and a Taylor expansion is of course even worse).

For a Taylor expansion, you'd get an infinite number of terms for each power.



(ooh, i like your use of "black"! :wink:)

It's not really a question of expanding the As about x0, rather of turning the As into a polynomial in (x - x0) … though of course it comes to the same thing. :wink:

If your y is a polynomial in (x - x0), then your As must be also. :smile:


Ok. I think I follow you. Thanks tiny-tim! I think I will figure this out when I go to solve a problem. (I am actually working on one now that I need a hand with; so look out for my next post if you have time :smile: )
 
Hi tiny-tim again :smile:

I am working on another power series solution in which x_o = 0. I also have that the coefficient of y is a polynomial. I am thinking that because it is a polynomial, its Taylor series expansion about zero is just that polynomial. Is this true in general? If we take TS\left[f(x)\right]_{x_o} to mean the Taylor series of f(x) expanded about x_o, then if p(x) is some polynomial we have

TS\left[p(x)\right]_{x_o=0}=p(x)

Since by definition:

TS\left[f(x)\right]_{x_o} = f(x_o) + \frac{f'(x_o)}{1!}(x-x_o)+\frac{f''(x_o)}{2!}(x-x_o)^2+ \dots

Which will give:

TS\left[p(x)\right]_{x_o=0} = p(0) + \frac{p'(0)}{1!}x+\frac{p''(0)}{2!}x^2+ \dots

Which I am thinking is the same as p(x) ... but I am not sure what rule I can use to prove it ...
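A quick symbolic sanity check (not a proof) with a throwaway sympy snippet and an arbitrary example polynomial:

Code:
import sympy as sp

x = sp.symbols('x')
p = 1 - 3*x + 5*x**3                     # an arbitrary example polynomial

# Taylor sum about 0; terms beyond degree 3 vanish because the higher derivatives are 0
taylor = sum(p.diff(x, k).subs(x, 0)/sp.factorial(k) * x**k for k in range(10))
print(sp.expand(taylor - p))             # prints 0

The difference expands to 0, i.e. the Taylor sum about 0 returns p(x) exactly.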
Any thoughts?
 
Hi Saladsamurai! :smile:

Yes, that's fine … the Taylor series of a polynomial about zero is itself.

The proof is trivial, isn't it? … \frac{d^n}{dx^n}x^k\Big|_{x=0} = k! if n = k, and = 0 otherwise. :wink:
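Spelling that out: if p(x) = \sum_k c_k x^k, then each derivative at 0 picks out one coefficient, so

TS\left[p(x)\right]_{x_o=0} = \sum_{n} \frac{p^{(n)}(0)}{n!}\,x^n = \sum_{n} \frac{n!\,c_n}{n!}\,x^n = \sum_{n} c_n x^n = p(x)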

(btw, I vaguely remember that that's used in some important proof about differentiating Taylor series … it's considered obvious)
 
Okie dokie! Thanks again! :smile:
 