# Assuming separability when solving for a Green's Function

1. Mar 23, 2015

### Exp HP

Edit: I have substantially edited this post from its original form, as I realize that it might have fallen under the label of "textbook-style questions".

Really, the heart of my issue here is that I can't seem to find a clear description anywhere of the limitations of the separation-of-variables approach to e.g. Laplace's equation. As in, when exactly are we allowed to assume that the solution to a differential equation in multiple variables has a form like, for instance,

$$\Phi(r,\theta,\phi)=\sum_m\sum_n\sum_kC_{mnk}R_m(r)P_n(\theta)Q_k(\phi)$$

To me, it seems that the validity of this approach depends on whether or not the set of all separable solutions is complete. What I mean is, given a set of elementary separable solutions $\psi_{nmk}=X_n(x)Y_m(y)Z_k(z)$ to a differential equation, it wouldn't be correct to write $\Phi=\sum_n\sum_m\sum_kC_{nmk}\psi_{nmk}$ unless we knew that the $\psi_{nmk}$ formed a complete basis for all possible solutions.

Unfortunately, the impression I get from most of my courses is that, since completeness proofs are so hard to come by, we generally just take for granted that certain well-known problems are known to work out.

The subject of my original post --- a worked example of solving for a Green's function in Jackson --- was just a specific instance that I used to illustrate my frustration. At some point, Jackson reaches a conclusion that only appears possible to me if we assume that a function of four variables ($r$, $r'$, $\theta'$, and $\phi'$) can be composed of solutions which are separable into radial $(r,r')$ and angular parts. As we're dealing with coordinates from two different position vectors, I would hardly think this is a trivial assumption, yet there seems to be no getting around it.

Right now, I'm going through Classical Electrodynamics (Jackson), and am bothered by a particular worked example (pages 120-122):

Context

We are solving for the Green's function for the region inside a sphere of radius $a$ with Dirichlet boundary conditions. This is to say that we seek a function $G(\mathbf{x},\mathbf{x}')$ such that

$$\nabla^2_xG(\mathbf{x},\mathbf{x}') = -4\pi\delta^3(\mathbf{x}-\mathbf{x}')$$
$$G(\mathbf{x},\mathbf{x}')|_{\mathbf{x'}\in S}=0$$
(Note: the phrase "with Dirichlet boundary conditions" simply refers to the second equation above, as this constraint allows one to construct the electric potential inside a region in terms of $G$, $\rho$, and the potential at the boundary.)

Jackson uses the completeness of the spherical harmonics $Y_{lm}$ to expand both the delta function and the Green's function:
$$\delta^3(\mathbf{x}-\mathbf{x}')=\frac{1}{r^2}\delta(r-r')\sum_{l=0}^{\infty}\sum_{m=-l}^l Y_{lm}^*(\theta',\phi')Y_{lm}(\theta,\phi)$$
$$G(\mathbf{x},\mathbf{x}')=\sum_{l=0}^{\infty}\sum_{m=-l}^lA_{lm}(r,r',\theta',\phi')Y_{lm}(\theta,\phi)$$
So far, so good.
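(As an aside of my own: the orthonormality underlying both expansions, $\int Y^*_{l'm'}Y_{lm}\,d\Omega=\delta_{ll'}\delta_{mm'}$, is easy to check numerically. The quadrature script below is just a sketch, building $Y_{lm}$ for $m\ge 0$ from SciPy's associated Legendre function `lpmv`.)

```python
# Sanity check (my own sketch): verify numerically that the spherical harmonics
# are orthonormal over the sphere, which is what justifies both expansions.
import numpy as np
from math import factorial
from scipy.special import lpmv  # associated Legendre P_l^m (Condon-Shortley phase included)

def Y(l, m, theta, phi):
    """Spherical harmonic Y_lm(theta, phi) for m >= 0 (enough for this check)."""
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - m) / factorial(l + m))
    return norm * lpmv(m, l, np.cos(theta)) * np.exp(1j * m * phi)

def overlap(l1, m1, l2, m2, n=400):
    """Midpoint-rule estimate of the inner product of Y_{l1 m1} and Y_{l2 m2}."""
    theta = (np.arange(n) + 0.5) * np.pi / n      # polar angle grid
    phi = (np.arange(n) + 0.5) * 2 * np.pi / n    # azimuthal angle grid
    T, P = np.meshgrid(theta, phi, indexing="ij")
    integrand = np.conj(Y(l1, m1, T, P)) * Y(l2, m2, T, P) * np.sin(T)
    return integrand.sum() * (np.pi / n) * (2 * np.pi / n)

print(abs(overlap(2, 1, 2, 1)))  # ≈ 1 (same l and m)
print(abs(overlap(2, 1, 3, 1)))  # ≈ 0 (different l)
```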

My problem

He then states that substitution of these into the first equation above (the one with $\nabla^2_x$) yields
$$A_{lm}(r,r',\theta',\phi')=g_l(r,r')Y^*_{lm}(\theta',\phi')$$
where $g_l(r,r')$ is a function which remains to be solved for.

I'm personally having trouble seeing how this works.
I find that the substitution produces something like
$$\sum_{l=0}^{\infty}\sum_{m=-l}^l\left(-\frac{l(l+1)}{r^2}+\frac{1}{r^2}\partial_r(r^2\partial_r)\right) A_{lm}(r,r',\theta',\phi')Y_{lm}(\theta,\phi)\\ =-4\pi\frac{1}{r^2}\delta(r-r')\sum_{l=0}^{\infty}\sum_{m=-l}^l Y_{lm}^*(\theta',\phi')Y_{lm}(\theta,\phi)$$
and from here, I'm not sure what we're allowed to do.

In particular, if we were allowed to assume that $A_{lm}$ is separable into radial and angular parts:
$$A_{lm}(r,r',\theta',\phi')\equiv g_l(r,r')h_{lm}(\theta',\phi')$$
then Jackson's result can be obtained by a simple application of the completeness and orthogonality of the $Y_{lm}$. But I am not convinced that this assumption is valid. Are we allowed to assume separability in this case?

Last edited: Mar 23, 2015
2. Mar 23, 2015

### Exp HP

Quick update: I noticed we can go one step further before making any assumptions about separability.

Take the second-to-last equation in my above post (the one with sums on each side), multiply both sides by $Y^*_{l'm'}(\theta,\phi)$, and integrate over the full solid angle. Orthonormality collapses the sums; relabeling $l'\to l$ and $m'\to m$ then yields:

$$\left(-\frac{l(l+1)}{r^2}+\frac{1}{r^2}\partial_r(r^2\partial_r)\right) A_{lm}(r,r',\theta',\phi') =-4\pi\frac{1}{r^2}\delta(r-r') Y_{lm}^*(\theta',\phi')$$
Unfortunately, it would appear that I still need to assume separability to get any further, as otherwise I cannot be certain that $A_{lm}$ and $\frac{1}{r^2}\partial_r(r^2\partial_r)A_{lm}$ have the same $\theta'$ and $\phi'$ dependence.
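(For concreteness: if one does grant the ansatz $A_{lm}(r,r',\theta',\phi')=g_l(r,r')Y^*_{lm}(\theta',\phi')$, then the factor $Y^*_{lm}(\theta',\phi')$ divides out of both sides of the last equation, leaving the purely radial equation that Jackson goes on to solve,
$$\left(\frac{1}{r^2}\partial_r\left(r^2\partial_r\right)-\frac{l(l+1)}{r^2}\right) g_l(r,r') =-\frac{4\pi}{r^2}\delta(r-r')$$
so everything really does hinge on justifying that ansatz.)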

Last edited: Mar 23, 2015
3. Mar 24, 2015

### the_wolfman

The key thing when using separation of variables is really uniqueness. If we can find a solution using separation of variables and if we know uniqueness, then we know that our separable solution is the only solution.

This implies that if you can find the Green's Function using separation of variables, then it is the Green's Function.

Completeness is a slightly different issue. We know that solutions to PDEs (even simple ones) are not always separable. This often occurs when you have irregular domains or non-separable boundary conditions.

4. Mar 24, 2015

### Exp HP

My issue is knowing that the solution you have is a proper solution in the first place (as in, one that satisfies the boundary conditions). When working with an incomplete basis, I think one could easily produce an invalid solution inadvertently and assume that it is correct.

Consider, for instance, solving for a potential with azimuthal symmetry. Frequently, one will write
$$\Phi(\mathbf{x})=\sum_{l=0}^{\infty}\left(A_lr^l+B_lr^{-l-1}\right)P_l(\cos\theta)$$
and then solve for the coefficients by invoking the boundary conditions and exploiting orthogonality. This will always result in a valid solution... but that's only due to completeness.

What I mean is, suppose I were to sneak up and erase the $l=0$ term from the original expansion:
$$\Phi(\mathbf{x})=\sum_{l=1}^{\infty}\left(A_lr^l+B_lr^{-l-1}\right)P_l(\cos\theta)$$
We can still compute all the coefficients which remain in this expression, but because the basis is no longer complete, there may be boundary conditions that it is no longer able to satisfy (such as, e.g., a nonzero potential at $r=0$).
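(To make this concrete, here is a quick numerical sketch of my own: projecting the constant boundary value $V(\theta)=1$ onto Legendre polynomials. With the full basis, the $l=0$ term carries everything; with $l=0$ erased, every remaining coefficient is zero and the expansion cannot match the boundary condition at all.)

```python
# My own sketch: project the boundary value V(theta) = 1 onto Legendre
# polynomials.  The full basis reproduces V exactly; erasing the l = 0 term
# leaves coefficients that are all zero, so the truncated "solution" fails.
import numpy as np
from scipy.special import eval_legendre

x, w = np.polynomial.legendre.leggauss(64)   # Gauss-Legendre nodes/weights, x = cos(theta)
V = np.ones_like(x)                          # boundary potential V(theta) = 1 at r = a

def coeff(l):
    """A_l = (2l + 1)/2 * integral of V(x) P_l(x) dx (interior solution, at r = a)."""
    return (2 * l + 1) / 2 * np.sum(w * V * eval_legendre(l, x))

full = sum(coeff(l) * eval_legendre(l, x) for l in range(10))
truncated = sum(coeff(l) * eval_legendre(l, x) for l in range(1, 10))

print(np.max(np.abs(full - V)))       # ≈ 0: the complete basis reproduces V
print(np.max(np.abs(truncated - V)))  # ≈ 1: with l = 0 erased, the fit fails entirely
```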

That said, I suppose there's no reason one couldn't manually verify that their solution satisfies the boundary conditions after finding all the coefficients. It's just not something I ever really see anybody do, especially in the face of infinite series and Bessel functions.

5. Mar 25, 2015

### the_wolfman

There are two related but separate things. There's the completeness of the original unmodified PDE, and then there is the completeness of the ODEs that result from applying separation of variables to the original PDE.

The ODEs that result from applying separation of variables to a PDE often take on a Sturm-Liouville form. You can then use S-L theory to derive uniqueness, orthogonality, and completeness theorems for these ODEs. There is a ton of literature (on-line and off-line) on S-L. If I recall correctly, Jackson does actually address these issues for many of the ODEs that arise, although it's not always the best book to learn from.
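For a concrete instance of what S-L theory buys you (a sketch; the particular problem is my choice of example): the eigenfunctions of $X''=-\lambda X$ with $X(0)=X(1)=0$ are $\sin(n\pi x)$, and S-L theory guarantees they are orthogonal and complete on $[0,1]$. Expanding a test function shows the convergence:

```python
# Sturm-Liouville example (my own sketch): expand f(x) = x(1 - x) in the
# eigenfunctions sin(n*pi*x) of X'' = -lambda*X, X(0) = X(1) = 0, and watch
# the partial sums converge -- the completeness that S-L theory guarantees.
import numpy as np

n_pts = 2000
x = (np.arange(n_pts) + 0.5) / n_pts   # midpoint grid on [0, 1]
dx = 1.0 / n_pts
f = x * (1 - x)                        # smooth test function with f(0) = f(1) = 0

def partial_sum(N):
    s = np.zeros_like(x)
    for n in range(1, N + 1):
        phi = np.sin(n * np.pi * x)
        c = 2 * np.sum(f * phi) * dx   # <f, phi_n> / ||phi_n||^2, with ||phi_n||^2 = 1/2
        s += c * phi
    return s

err3 = np.max(np.abs(partial_sum(3) - f))
err25 = np.max(np.abs(partial_sum(25) - f))
print(err3, err25)  # the error shrinks as more eigenfunctions are kept
```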

In your example, S-L theory gives you the tools to determine the correct basis for a separable solution of the potential.
But this only works when the solution is separable, and not all solutions are separable.

6. Mar 25, 2015

### Exp HP

Thanks for bringing up Sturm-Liouville theory; I'll be sure to take a look at it when I have time.

I suppose my question, then, is: is there any rule of thumb for determining when a solution is separable?

Oh, also... I have another bit of confusion that I should probably get squared away: when you say that "the solution is separable," do you mean that the final constructed solution itself is separable? As in:
$$\Phi(x,y,z)= \left(\sum_mA_mX_m(x)\right) \left(\sum_nB_nY_n(y)\right) \left(\sum_kC_kZ_k(z)\right)$$
Or are you referring to the weaker condition that it can be described as a linear combination of basis functions that are separable? As in,
$$\Phi(x,y,z)=\sum_m\sum_n\sum_kC_{mnk}X_m(x)Y_n(y)Z_k(z)$$
Lately, my impression has actually been the latter, i.e. that the full solution produced by separation of variables is not necessarily itself separable. I imagine this may sound like an odd interpretation, but it's a conclusion I've somehow come to from seeing the technique used over the years.
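(Here is a little numerical test of my own that I find helpful for keeping the two notions straight: sampling $f$ on a grid gives a matrix $M_{ij}=f(x_i,y_j)$, and $f(x,y)=X(x)Y(y)$ exactly when $M$ has rank 1. The function $\cos(x-y)=\cos x\cos y+\sin x\sin y$ is a sum of two separable terms but is not itself separable.)

```python
# My own sketch: a sampled function f(x, y) is separable X(x)Y(y) exactly when
# its sample matrix M_ij = f(x_i, y_j) has rank 1.  This distinguishes "is
# separable" from "lies in the span of separable functions".
import numpy as np

x = np.linspace(0, 2 * np.pi, 50)
y = np.linspace(0, 2 * np.pi, 60)
X, Y = np.meshgrid(x, y, indexing="ij")

product = np.sin(X) * np.sin(Y)   # a single separable term: rank-1 sample matrix
mixture = np.cos(X - Y)           # cos x cos y + sin x sin y: sum of two separable terms

print(np.linalg.matrix_rank(product))  # 1 -> separable
print(np.linalg.matrix_rank(mixture))  # 2 -> in the span of separable terms, yet not separable
```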

7. Mar 25, 2015

### the_wolfman

Separation of variables often fails in irregular domains or when the B.C.'s are not separable. However, there are many tricks that allow you to convert some PDEs into a form amenable to analysis using separation of variables.

When we solve ODEs with constant coefficients, we look for solutions of the form $e^{ax}$. We find that there are multiple solutions of this form. The general solution is then a sum over these solutions.

An analogous thing happens in separation of variables. We look for solutions that are the product of multiple functions, each depending on a different variable (or set of variables), for example $f\left(x,y\right)=X\left(x\right) Y\left(y\right)$. We find that there are multiple solutions of this form, and the general solution is then a sum of these functions.
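To make the analogy concrete (a sketch of my own, using the 2-D Laplace equation as the example): seeking product solutions $X(x)Y(y)$ of $\nabla^2 f=0$ gives e.g. $\sin(kx)\sinh(ky)$, and any sum of such products is again a solution:

```python
# My own sketch: verify symbolically that a product-form solution of the 2-D
# Laplace equation, and a sum of such products, both satisfy the equation.
import sympy as sp

x, y, k = sp.symbols("x y k", real=True)
f = sp.sin(k * x) * sp.sinh(k * y)          # one product-form (separable) solution
g = f + 3 * sp.sin(2 * x) * sp.sinh(2 * y)  # a sum of product-form solutions

for u in (f, g):
    laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
    print(sp.simplify(laplacian))  # 0 in both cases
```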