Assuming separability when solving for a Green's Function

In summary, the conversation discusses the limitations of the separation of variables approach to solving differential equations, specifically with regard to completeness and uniqueness. The validity of the approach depends on the completeness of the set of separable solutions. A worked example of solving for a Green's function in Jackson is used to highlight the issue of completeness and the possibility of producing invalid solutions. Sturm-Liouville theory is mentioned as a tool for determining the correct basis for a separable solution, but it only applies when the solution is separable, which is not always the case. The conversation also touches on when a solution is separable, and on whether it is the final constructed solution itself or only the basis functions that are separable.
  • #1
Exp HP
Edit: I have substantially edited this post from its original form, as I realize that it might have fallen under the label of "textbook-style questions".

Really, the heart of my issue here is that, anywhere I look, I can't seem to find a clear description of the limitations of the separation of variables approach to e.g. Laplace's equation. As in, when exactly are we allowed to assume that the solution to a differential equation in multiple variables has a form like, for instance,

[tex]\Phi(r,\theta,\phi)=\sum_m\sum_n\sum_kC_{mnk}R_m(r)P_n(\theta)Q_k(\phi)[/tex]

To me, it seems that the validity of this approach depends on whether or not the set of all separable solutions is complete. What I mean is, given a set of elementary separable solutions [itex]\psi_{nmk}=X_n(x)Y_m(y)Z_k(z)[/itex] to a differential equation, it wouldn't be correct to write [itex]\Phi=\sum_n\sum_m\sum_kC_{nmk}\psi_{nmk}[/itex] unless we knew that the [itex]\psi_{nmk}[/itex] formed a complete basis for all possible solutions.

Unfortunately, the impression I get from most of my courses is that, since completeness proofs are so hard to come by, we generally just take for granted that certain well-known problems are known to work out. The subject of my original post --- a worked example of solving for a Green's function in Jackson --- was just a specific instance that I used as an example of my frustration. At some point, Jackson draws a conclusion that only appears possible to me if we assume that a function of 4 variables: [itex]r, r' , \theta',[/itex] and [itex]\phi'[/itex] can be composed of solutions which are separable into radial (r,r') and angular parts. As we're dealing with coordinates from two different position vectors, I would hardly think this is a trivial assumption, yet there seems to be no getting around it.

Right now, I'm going through Classical Electrodynamics (Jackson), and am bothered by a particular worked example: (pages 120-122)

Context

We are solving for the Green's function for the region inside a sphere of radius [itex]a[/itex] with Dirichlet boundary conditions. This is to say that we seek a function [itex]G(\mathbf{x},\mathbf{x}')[/itex] such that

[tex]\nabla^2_xG(\mathbf{x},\mathbf{x}') = -4\pi\delta^3(\mathbf{x}-\mathbf{x}')[/tex]
[tex]G(\mathbf{x},\mathbf{x}')|_{\mathbf{x'}\in S}=0[/tex]
(Note: the phrase "with Dirichlet boundary conditions" simply refers to the second equation above, as this constraint allows one to construct the electric potential inside a region in terms of [itex]G[/itex], [itex]\rho[/itex], and the potential at the boundary.)
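For concreteness, the standard Dirichlet-boundary result being alluded to is (in SI units)
[tex]\Phi(\mathbf{x})=\frac{1}{4\pi\epsilon_0}\int_V\rho(\mathbf{x}')\,G(\mathbf{x},\mathbf{x}')\,d^3x'-\frac{1}{4\pi}\oint_S\Phi(\mathbf{x}')\frac{\partial G}{\partial n'}\,da'[/tex]
where [itex]\partial/\partial n'[/itex] is the outward normal derivative on the boundary surface [itex]S[/itex].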

Jackson uses the completeness of the spherical harmonics [itex]Y_{lm}[/itex] to expand both the delta function and the Green's function:
[tex]\delta^3(\mathbf{x}-\mathbf{x}')=\frac{1}{r^2}\delta(r-r')\sum_{l=0}^{\infty}\sum_{m=-l}^l
Y_{lm}^*(\theta',\phi')Y_{lm}(\theta,\phi)[/tex]
[tex]G(\mathbf{x},\mathbf{x}')=\sum_{l=0}^{\infty}\sum_{m=-l}^lA_{lm}(r,r',\theta',\phi')Y_{lm}(\theta,\phi)[/tex]
So far, so good.
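(For reference, the angular part of the delta-function expansion above is just the completeness relation for the spherical harmonics,
[tex]\sum_{l=0}^{\infty}\sum_{m=-l}^{l}Y_{lm}^*(\theta',\phi')Y_{lm}(\theta,\phi)=\delta(\phi-\phi')\,\delta(\cos\theta-\cos\theta')[/tex]
combined with [itex]\delta^3(\mathbf{x}-\mathbf{x}')=\frac{1}{r^2}\delta(r-r')\,\delta(\phi-\phi')\,\delta(\cos\theta-\cos\theta')[/itex] in spherical coordinates.)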

My problem

He then states that substitution of these into the first equation above (the one with [itex]\nabla^2_x[/itex]) yields
[tex]A_{lm}(r,r',\theta',\phi')=g_l(r,r')Y^*_{lm}(\theta',\phi')[/tex]
where [itex]g_l(r,r')[/itex] is a function which remains to be solved for.

I'm personally having trouble seeing how this works.
I find that the substitution produces something like
[tex]\sum_{l=0}^{\infty}\sum_{m=-l}^l\left(-\frac{l(l+1)}{r^2}+\frac{1}{r^2}\partial_r(r^2\partial_r)\right)
A_{lm}(r,r',\theta',\phi')Y_{lm}(\theta,\phi)\\
=-4\pi\frac{1}{r^2}\delta(r-r')\sum_{l=0}^{\infty}\sum_{m=-l}^l
Y_{lm}^*(\theta',\phi')Y_{lm}(\theta,\phi)
[/tex]
and from here, I'm not sure what we're allowed to do.

In particular, if we were allowed to assume that [itex]A_{lm}[/itex] is separable into radial and angular parts:
[tex]A_{lm}(r,r',\theta',\phi')\equiv g_l(r,r')h_{lm}(\theta',\phi')[/tex]
then Jackson's result can be obtained by a simple application of the completeness and orthogonality of [itex]Y_{lm}[/itex]. But I am not convinced that this assumption is valid. Are we allowed to assume separability in this case?
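(The orthogonality relation I have in mind here is
[tex]\int_0^{2\pi}\!d\phi\int_0^{\pi}\!\sin\theta\,d\theta\;Y^*_{l'm'}(\theta,\phi)\,Y_{lm}(\theta,\phi)=\delta_{ll'}\delta_{mm'}[/tex]
which, applied to both sides, would pick out a single [itex](l,m)[/itex] term.)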
 
Last edited:
  • #2
Quick update: I noticed we can go one step further before making any assumptions about separability.

Take the second-to-last equation in my above post (the one with sums on each side), multiply both sides by [itex]Y^*_{l'm'}(\theta,\phi)[/itex], and integrate over the solid angle. Orthogonality picks out a single term from each sum, and after relabeling [itex]l',m'\to l,m[/itex] this yields:

[tex]\left(-\frac{l(l+1)}{r^2}+\frac{1}{r^2}\partial_r(r^2\partial_r)\right)
A_{lm}(r,r',\theta',\phi')
=-4\pi\frac{1}{r^2}\delta(r-r')
Y_{lm}^*(\theta',\phi')
[/tex]
Unfortunately, it would appear that I still need to assume separability to get any further, as otherwise I cannot be certain that [itex]A_{lm}[/itex] and [itex]\frac{1}{r^2}\partial_r(r^2\partial_r)A_{lm}[/itex] have the same [itex]\theta'[/itex] and [itex]\phi'[/itex] dependence.
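As a quick sanity check on the orthogonality used in that projection (my own illustration, not anything from Jackson), a short sympy sketch along these lines confirms it for a few low-order harmonics:
[code]
from sympy import Ynm, conjugate, integrate, simplify, sin, symbols, pi

theta, phi = symbols('theta phi', real=True)

def overlap(l1, m1, l2, m2):
    """Integral of Y*_{l1 m1} Y_{l2 m2} over the full solid angle."""
    integrand = (conjugate(Ynm(l1, m1, theta, phi).expand(func=True))
                 * Ynm(l2, m2, theta, phi).expand(func=True) * sin(theta))
    return simplify(integrate(integrand, (theta, 0, pi), (phi, 0, 2*pi)))

print(overlap(1, 0, 1, 0))   # same (l, m): expect 1
print(overlap(1, 0, 2, 0))   # different l: expect 0
print(overlap(1, 1, 1, -1))  # different m: expect 0
[/code]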
 
Last edited:
  • #3
The key thing when using separation of variables is really uniqueness. If we can find a solution using separation of variables and if we know uniqueness, then we know that our separable solution is the only solution.

This implies that if you can find the Green's Function using separation of variables, then it is the Green's Function.

Completeness is a slightly different issue. We know that solutions to PDEs (even simple ones) are not always separable. This often occurs when you have irregular domains or non-separable boundary conditions.
 
  • #4
the_wolfman said:
This implies that if you can find the Green's Function using separation of variables, then it is the Green's Function.
My issue is knowing that the solution you have is a proper solution in the first place (as in, one that satisfies the boundary conditions). When working with an incomplete basis, I think one could easily end up inadvertently producing an invalid solution and assuming that it is correct.

Consider, for instance, solving for a potential with azimuthal symmetry. Frequently, one will write
[tex]\Phi(\mathbf{x})=\sum_{l=0}^{\infty}\left(A_lr^l+B_lr^{-l-1}\right)P_l(\cos\theta)[/tex]
and then solve for the coefficients by invoking the boundary conditions and exploiting orthogonality. This will always result in a valid solution... but that's only due to completeness.

What I mean is, suppose I were to sneak up and erase the [itex]l=0[/itex] term from the original expansion:
[tex]\Phi(\mathbf{x})=\sum_{l=1}^{\infty}\left(A_lr^l+B_lr^{-l-1}\right)P_l(\cos\theta)[/tex]
We can still compute all of the coefficients that remain in this expression, but because the basis is no longer complete, there may be boundary conditions that it is no longer able to satisfy (such as, e.g., a nonzero potential at [itex]r=0[/itex]).
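To make that concrete, here is a small numerical illustration (my own, not from any text): with a constant boundary potential [itex]V_0[/itex] on the sphere, only the [itex]l=0[/itex] coefficient is nonzero, so the truncated expansion above has nothing left to match the boundary value with.
[code]
from scipy.integrate import quad
from scipy.special import eval_legendre

V0 = 1.0  # constant potential on the boundary r = a

# Interior expansion Phi = sum_l A_l r^l P_l(cos theta) with Phi(a, theta) = V0
# gives A_l a^l = (2l+1)/2 * integral_{-1}^{1} V0 P_l(u) du, where u = cos(theta).
for l in range(5):
    integral, _ = quad(lambda u, l=l: V0 * eval_legendre(l, u), -1.0, 1.0)
    coeff = (2 * l + 1) / 2 * integral
    print(l, round(coeff, 12))

# Only l = 0 comes out nonzero, so dropping the l = 0 term makes this
# boundary condition impossible to satisfy.
[/code]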

That said, I suppose there's no reason one couldn't manually verify that their solution satisfies the boundary conditions after finding all the coefficients. It's just not something I ever really see anybody do, especially in the face of infinite series and Bessel functions.
 
  • #5
There are two related but separate things. There's completeness for the original unmodified PDE, and then there is completeness for the ODEs that result from applying separation of variables to the original PDE.

The ODEs that result from applying separation of variables to a PDE often take on a Sturm-Liouville form. You can then use S-L theory to derive uniqueness, orthogonality, and completeness theorems for these ODEs. There is a ton of literature (on-line and off-line) on S-L theory. If I recall correctly, Jackson does actually address these issues for many of the ODEs that arise, although it's not always the best book to learn from.

In your example, S-L theory gives you the tools to determine the correct basis for a separable solution of the potential.
But this only works when the solution is separable, and not all solutions are separable.
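For anyone reading along, the generic Sturm-Liouville form referred to here is
[tex]\frac{d}{dx}\left[p(x)\frac{dy}{dx}\right]-q(x)\,y+\lambda\,w(x)\,y=0[/tex]
and, for example, the [itex]\theta[/itex] equation that comes out of separating Laplace's equation in spherical coordinates is the associated Legendre equation,
[tex]\frac{d}{dx}\left[(1-x^2)\frac{dy}{dx}\right]+\left[l(l+1)-\frac{m^2}{1-x^2}\right]y=0,\qquad x=\cos\theta[/tex]
which is of that form with [itex]p=1-x^2[/itex], [itex]q=m^2/(1-x^2)[/itex], [itex]w=1[/itex], and [itex]\lambda=l(l+1)[/itex].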
 
  • #6
Thanks for bringing up Sturm-Liouville theory, I'll be sure to take a look at it when I have time.

the_wolfman said:
But this only works when the solution is separable, and not all solutions are separable.

I suppose my question is, then, is there any rule of thumb for determining when a solution is separable?

Oh, also... I have another bit of confusion that I probably should get squared away: When you say that "the solution is separable," do you mean that the final constructed solution itself is separable? As in:
[tex]\Phi(x,y,z)=
\left(\sum_mA_mX_m(x)\right)
\left(\sum_nB_nY_n(y)\right)
\left(\sum_kC_kZ_k(z)\right)[/tex]
Or are you referring to the weaker condition that it can be described as a linear combination of basis functions that are separable? As in,
[tex]\Phi(x,y,z)=\sum_m\sum_n\sum_kC_{mnk}X_m(x)Y_n(y)Z_k(z)[/tex]
Lately, my impression has actually been the latter, i.e. that the full solution produced by separation of variables is not necessarily itself separable. I imagine this may sound like an odd interpretation, but it's a conclusion I've somehow come to from seeing the technique used over the years.
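(For a concrete instance of the latter: [itex]\Phi(x,y)=\sin(x)\sinh(y)+\sin(2x)\sinh(2y)[/itex] solves Laplace's equation and is a sum of two separable terms, but it cannot itself be written as a single product [itex]X(x)Y(y)[/itex].)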
 
  • #7
Separation of variables often fails in irregular domains or when the boundary conditions are not separable. However, there are many tricks that allow you to convert some PDEs into a form amenable to analysis using separation of variables.

When we solve ODEs with constant coefficients, we look for solutions of the form [itex] e^{ax}[/itex]. We find that there are multiple solutions of this form, and the general solution is then a sum over these solutions.

An analogous thing happens in separation of variables. We look for solutions that are products of functions, each depending on a different variable (or set of variables), for example [itex]f\left(x,y\right)=X\left(x\right) Y\left(y\right)[/itex]. We find that there are multiple solutions of this form, and the general solution is then a sum of these products.
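A textbook two-dimensional illustration of this (my paraphrase, not a quote): assume [itex]f(x,y)=X(x)Y(y)[/itex] in Laplace's equation [itex]f_{xx}+f_{yy}=0[/itex]. Dividing by [itex]XY[/itex] gives
[tex]\frac{X''(x)}{X(x)}=-\frac{Y''(y)}{Y(y)}=-k^2[/tex]
so each [itex]k[/itex] yields product solutions such as [itex]\sin(kx)\,e^{\pm ky}[/itex], and the full solution is then a sum over [itex]k[/itex] of such products, just as the general solution of a constant-coefficient ODE is a sum of exponentials.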
 

1. What is separability in the context of solving for a Green's Function?

Separability refers to the ability to break a complex problem into simpler, independent parts. In the context of solving for a Green's function, it means that the solution can be built out of products of functions, each depending on only one variable (or one group of variables).

2. Why is it important to assume separability when solving for a Green's Function?

Assuming separability makes the problem easier to solve by reducing it to a series of simpler problems. It also allows for the application of established techniques and methods for solving ordinary differential equations, making the solution more accessible and efficient.

3. How do you determine if a problem is separable when solving for a Green's Function?

To determine whether a problem is separable, substitute a product ansatz into the differential equation. If, after dividing by the ansatz, the equation splits into pieces that each depend on only one variable (and the boundary conditions respect the same split), then the problem is separable.
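A minimal sketch of that check (assuming sympy, as an illustration only), for Laplace's equation in two Cartesian variables:
[code]
from sympy import Function, symbols, diff, expand

x, y = symbols('x y')
X = Function('X')
Y = Function('Y')

# Substitute the product ansatz u = X(x) Y(y) into Laplace's equation u_xx + u_yy = 0.
u = X(x) * Y(y)
pde = diff(u, x, 2) + diff(u, y, 2)

# Dividing by X(x) Y(y) splits the equation into an x-only piece plus a y-only piece,
# which is the signature of separability; each piece must then equal a constant.
print(expand(pde / u))
# -> Derivative(X(x), (x, 2))/X(x) + Derivative(Y(y), (y, 2))/Y(y)
[/code]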

4. Are there any limitations to assuming separability when solving for a Green's Function?

Yes. A separable solution may not exist for every problem, and assuming separability where it is not justified can produce an incomplete or incorrect solution. It is important to carefully consider the geometry and boundary conditions of the problem to determine whether the assumption is appropriate.

5. Can separability be applied to all types of Green's Functions?

No. Separability depends on the operator, the coordinate system, and the geometry of the domain and boundary conditions. For Green's functions on irregular domains or with non-separable boundary conditions, different techniques and assumptions are needed.
