# Power series solution to differential equation

1. Aug 21, 2014

### V0ODO0CH1LD

1. The problem statement, all variables and given/known data

Find the power series solution of the differential equation
$$y''-\frac{2}{(1-x)^2}y=0$$
around the point $x=0$.

2. Relevant equations
$$y=\sum_{n=0}^\infty{}c_nx^n$$
$$y'=\sum_{n=0}^\infty{}c_{n+1}(n+1)x^n$$
$$y''=\sum_{n=0}^\infty{}c_{n+2}(n+2)(n+1)x^n$$
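These shifted-index series can be sanity-checked numerically. A small sketch (my own example, not from the thread): take $y = e^x$, so $c_n = 1/n!$ and $y'' = e^x$ as well; the truncated shifted series for $y''$ should then reproduce $e^x$.

```python
import math

# Coefficients of y = e^x: c_n = 1/n!
c = [1 / math.factorial(n) for n in range(40)]

x = 0.5
# y'' via the shifted series: sum over n of c_{n+2} (n+2)(n+1) x^n.
# Here c_{n+2} (n+2)(n+1) = 1/n!, so the partial sum approximates e^x.
ypp = sum(c[n + 2] * (n + 2) * (n + 1) * x**n for n in range(38))

print(ypp, math.exp(x))  # the two values agree to high accuracy
```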

3. The attempt at a solution

If I substitute the power series for $y$ in the differential equation (and mess around with it a bit) I get:
$$\sum_{n=0}^\infty{}\left[c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n\right]x^n=0.$$
Okay, so I replaced the problem of solving a differential equation with the problem of finding the coefficients of an infinite power series that satisfy the equation above, right? So what is the condition that the coefficients have to satisfy in order for the equation above to be true for (at least) every $x\in\mathbb{R}\setminus\{1\}$? Usually the next step here is to say that
$$c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n=0\,\Longrightarrow\, c_{n+2}=\frac{2}{(n+2)(n+1)(1-x)^2}c_n,$$
but why is this the "general" condition that the $c_i$'s have to satisfy? Are the terms inside the brackets always greater than or equal to zero?

Also, what is the meaning of "around the point $x=0$"? I assume I should take the limit as $x$ goes to zero at some point, but when?

2. Aug 21, 2014

### Ray Vickson

Try rewriting the DE as $(1-x)^2 \, y'' - 2 y = 0$, or $(1 - 2x + x^2) y'' - 2 y = 0$.

3. Aug 21, 2014

### V0ODO0CH1LD

Why?

4. Aug 21, 2014

### vela

Staff Emeritus
$x$ shouldn't appear in the recurrence relation.

5. Aug 21, 2014

### Ray Vickson

To avoid having to multiply out the two power series for $y(x)$ and $1/(1-x)^2$.

6. Aug 22, 2014

### Ray Vickson

To expand on my previous answer: if you keep the $y/(1-x)^2$ form you must evaluate the product of the two infinite series $y(x) =\sum_{n=0}^{\infty} c_n x^n$ and $1/(1-x)^2 = \sum_{n=0}^{\infty} (n+1) x^n$. You must do that in order to have $y'' - 2y/(1-x)^2$ expressed as an infinite series in $x$, whose coefficients would then all be equated to zero. The way you did it was not valid because your LHS was not an infinite series in $x.$

7. Aug 23, 2014

### V0ODO0CH1LD

I get that it wouldn't be an infinite power series in $x$, but why isn't it an infinite series in $x$?

Also, I know that the whole $1/(1-x)^2$ thing shouldn't be there, but it will disappear when I take the limit as $x$ approaches $0$ of it. I just don't know when I am supposed to do it.

My main question however is: even if the original equation didn't contain $1/(1-x)^2$ why is it that equating a power series to zero means that either $x=0$ (this I get) or that every coefficient of the series equals zero? I feel like if the coefficients are all zero the series is zero for all $x$ but the other way around doesn't cover all cases, does it?

8. Aug 23, 2014

### Xiuh

Because the powers $x^k$ are linearly independent.

9. Aug 23, 2014

### Ray Vickson

Your infinite series $\sum t_n$ has terms of the form
$$t_n = c_{n+2}(n+2)(n+1)-\frac{2}{(1-x)^2}c_n$$
Just having $\sum t_n \equiv 0$ does NOT mean you can say that $t_n = 0$ for all $x$, or even for most $x$. In fact, for any given values of the $c_k$ your $t_n$ will be non-zero except at (maybe) two values of $x$; these would be where
$$(1-x)^2 = \frac{2 c_n}{(n+1)(n+2) c_{n+2}}$$
If the right-hand-side here is $> 0$ there are two values of $x$ that make $t_n = 0$; if the right-hand-side = 0 there is 1 value of $x$ that works. If the right-hand-side is $< 0$ we cannot ever have $t_n = 0$.

No, the only way you can guarantee the truth of the statement "a sum of zero for all $x$ implies that all terms vanish" is to have linearly independent terms, so your terms must all be of the form $r_n x^n$ for constants $r_n$. Since you want this to equal $0$ for ALL $x$, it absolutely requires that you have $r_n = 0$ for all $n$.

Here is a little exercise for you: assuming that the (nicely convergent) infinite series $S(x) = \sum_n r_n x^n$ is zero for all $x$, PROVE that this implies $r_n = 0$ for all $n$. I'll show you how to start. First, $0 = S(0) = r_0,$ so we have $r_0 = 0$. Therefore, we have $S(x) = r_1 x + r_2 x^2 + \cdots$. Since this is supposed to be identically equal to 0 we must have $S'(x) = 0$ for all $x$. Therefore, we have $S'(0) = r_1 = 0$. And so it goes.
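The induction in that exercise can be mirrored mechanically: for a truncated series (a polynomial standing in for $S$), repeated differentiation at $x = 0$ recovers $r_n = S^{(n)}(0)/n!$, so $S \equiv 0$ would force every $r_n = 0$. A rough sketch (the sample coefficients below are arbitrary, chosen just for illustration):

```python
import math

def derivative(coeffs):
    # Coefficients of d/dx of sum_n coeffs[n] * x^n.
    return [n * coeffs[n] for n in range(1, len(coeffs))]

S = [3.0, -1.0, 0.0, 2.5]          # example coefficients r_0..r_3
cur, recovered = S, []
for n in range(len(S)):
    # After differentiating n times, the constant term is n! * r_n.
    recovered.append(cur[0] / math.factorial(n))
    cur = derivative(cur) or [0.0]

print(recovered)  # → [3.0, -1.0, 0.0, 2.5]
```

In particular, if $S$ were identically zero, every recovered coefficient would come out zero, which is exactly the claim.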

10. Aug 23, 2014

### vela

Staff Emeritus
I'm not sure what your point is here.

You generally don't take limits in this kind of problem where you're simply trying to find a power-series solution to a differential equation.

Right, and since the differential equation has to hold for all $x$, you want all of the coefficients to vanish.

11. Aug 24, 2014

### V0ODO0CH1LD

Okay, I got why I can't have $1/(1-x)^2$ in there, but how do I get rid of it so that what I'm left with is a power series with constant coefficients?

12. Aug 24, 2014

### ehild

Follow Ray's hint in post #2: rewrite the equation in the form $(1-x)^2 y'' - 2 y = 0$.

ehild

Last edited: Aug 24, 2014
13. Aug 24, 2014

### pasmith

Use the suggestion in post #2:

$$(1 - x)^2y'' -2y = y'' - 2xy'' + x^2y'' -2y \\ = \sum_{n=0}^{\infty} a_n n(n-1)x^{n-2} - 2x\sum_{n=0}^\infty a_nn(n-1)x^{n-2} + x^2 \sum_{n=0}^\infty a_n n(n-1)x^{n-2} - 2\sum_{n=0}^\infty a_nx^n \\ = \sum_{n=0}^{\infty} a_n n(n-1)x^{n-2} - 2\sum_{n=0}^\infty a_nn(n-1)x^{n-1} + \sum_{n=0}^\infty a_n n(n-1)x^{n} - 2\sum_{n=0}^\infty a_nx^n$$

Alternatively you can expand $(1 - x)^{-2}$ in binomial series, but you'll need to treat the cases $|x| < 1$ and $|x| > 1$ separately.
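For what it's worth, shifting indices in the last line above so that every sum is in powers of $x^n$ gives, for each $n \ge 0$, $(n+2)(n+1)a_{n+2} - 2(n+1)n\,a_{n+1} + [n(n-1)-2]a_n = 0$; since $n(n-1)-2 = (n+1)(n-2)$, the common factor $(n+1)$ cancels and this reduces to $a_{n+2} = \bigl(2n\,a_{n+1} - (n-2)a_n\bigr)/(n+2)$. Here is a quick numerical sketch of that recurrence (my addition; one can check by direct substitution that $(1-x)^2$ and $1/(1-x)$ solve the original ODE, so their coefficients should satisfy it):

```python
def series_coeffs(a0, a1, N):
    """First N+1 coefficients from the recurrence
    a_{n+2} = (2n a_{n+1} - (n-2) a_n) / (n+2),
    obtained by collecting the coefficient of x^n in
    (1 - x)^2 y'' - 2 y = 0."""
    a = [a0, a1]
    for n in range(N - 1):
        a.append((2 * n * a[n + 1] - (n - 2) * a[n]) / (n + 2))
    return a

# a0 = 1, a1 = -2 should reproduce the polynomial solution (1 - x)^2:
print(series_coeffs(1, -2, 6))   # higher coefficients vanish
# a0 = 1, a1 = 1 should reproduce 1/(1 - x) = sum x^n:
print(series_coeffs(1, 1, 6))    # all coefficients equal 1
```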

14. Aug 24, 2014

### Ray Vickson

I already told you that in posts #2 and #6.

15. Aug 24, 2014

### V0ODO0CH1LD

Sorry, it didn't seem obvious that the $x$'s could be "absorbed" into the summation that way.

But thanks!

16. Aug 24, 2014

### Ray Vickson

The product $\sum_n u_n x^n \, \times \, \sum_n v_n x^n$ can be expressed as $\sum_n w_n x^n$, where the $w$-sequence is the convolution of the $u$ and $v$ sequences; that is,
$$w_n = \sum_{k=0}^n u_k v_{n-k}$$
see, e.g., http://en.wikipedia.org/wiki/Power_series .

So the product of
$$y(x) = \sum_{n=0}^{\infty} c_n x^n$$
and
$$1/(1-x)^2 = (1-x)^{-2} = \sum_{n=0}^{\infty} (n+1) x^n$$
is
$$\frac{y(x)}{(1-x)^2} = \sum_{n=0}^{\infty} d_n x^n,\\ \text{where}\\ d_n = \sum_{k=0}^n (k+1) c_{n-k} = c_n + 2c_{n-1} + 3 c_{n-2}+ \cdots + (n+1) c_0$$
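As a sanity check on the convolution formula, here is a small sketch (my own example): taking $y = 1/(1-x)$, i.e. $c_n = 1$ for all $n$, the product is $1/(1-x)^3$, whose known binomial expansion is $\sum_n \binom{n+2}{2} x^n$, so $d_n$ should equal $(n+1)(n+2)/2$.

```python
def conv_coeffs(c, N):
    """d_n = sum_{k=0}^n (k+1) * c_{n-k}: the Cauchy product of
    sum c_n x^n with 1/(1-x)^2 = sum (n+1) x^n."""
    return [sum((k + 1) * c[n - k] for k in range(n + 1)) for n in range(N)]

c = [1] * 10                 # coefficients of y = 1/(1-x)
d = conv_coeffs(c, 10)
print(d)                     # triangular numbers (n+1)(n+2)/2
```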

17. Aug 26, 2014

### V0ODO0CH1LD

Thanks!

But what if $p(x)$ and $q(x)$ in
$$y''+py'+qy=0$$
can't be expanded around $x=0$? I know there's the Frobenius method, but couldn't I just expand $y$ around some other $c\in\mathbb{R}$? So that the assumed solution would look like
$$\sum_{n=0}^\infty{}a_n(x-c)^{n}$$
and then also expand $p(x)$ and $q(x)$ around $c$ so I can use the convolution formula and factor out the $(x-c)^n$ to get the recurrence relation?

I know that if $x=0$ is a regular singular point of the differential equation there's the Frobenius method (which I know how to use but I don't understand why it works).

Instead of assuming a solution in a "regular" power series we assume a solution of the form
$$\sum_{n=0}^\infty{}a_nx^{n+r}$$
but what is this $r$? What does it represent? Is it somehow compensating for the fact that a singularity happens for some $a_i$'s? Why does it work?
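One concrete way to see what $r$ does, for this very equation (a sketch of my own, not from the thread): $x=1$ is a regular singular point of $y'' - \frac{2}{(1-x)^2}y = 0$, and substituting the trial solution $y = (1-x)^r$ collapses the ODE to the indicial condition $r(r-1) - 2 = 0$, i.e. $r = 2$ or $r = -1$. So $r$ captures the leading power of the solution at the singular point, which need not be a non-negative integer (here $r = -1$ is a pole).

```python
# Try y = (1 - x)^r in the equivalent form (1 - x)^2 y'' - 2 y = 0.
# Since y'' = r (r - 1) (1 - x)^(r - 2), the left-hand side becomes
# (r (r - 1) - 2) * (1 - x)^r, which vanishes exactly when r(r-1) = 2.

def residual(r, x):
    ypp = r * (r - 1) * (1 - x) ** (r - 2)
    return (1 - x) ** 2 * ypp - 2 * (1 - x) ** r

for r in (2, -1, 3):
    print(r, residual(r, 0.25))   # vanishes only for r = 2 and r = -1
```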

EDIT: Also, is the Frobenius method a generalization of the power series solution method (i.e. does it only work if the point in question is a regular singular point, or does it also work if the point is ordinary)?

Last edited: Aug 26, 2014