Separation of variables technique - When is it valid? (and other questions)

falbani
Hi. This is my first post in PF. I'm an undergraduate student of Electronics Engineering with strong interest in math & physics (and weak understanding of them :P).

One of the things I always hate is when in some book (or some lectures) ODEs or PDEs are solved after the magic words "[...] and by virtue of separation of variables [...]", without any care to specify under which conditions such a procedure is valid and guarantees uniqueness of the solution.

To be unambiguous, I'm talking about the technique that implies searching only for solutions of the form F(x, y) = X(x) · Y(y).

For example, take the Wave Equation, a 2nd order PDE:

\nabla^2 u - \frac{1}{c^2} \frac{\partial^2 u }{\partial t^2} = 0

for which we know that a general solution is of the form:
(suppose 1D for simplicity)

u(x, t) = g(x - ct) + h(x + ct).

I can't see any way of expressing that as a product of a function exclusively of 'x' and a function exclusively of 't'. How would I ever arrive at it by 'separation of variables'?
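(Just to double-check myself, here is a quick sympy sketch of my own verifying that this form solves the 1D equation for arbitrary g and h:)

import sympy as sp

x, t, c = sp.symbols('x t c', real=True)
g, h = sp.Function('g'), sp.Function('h')

# u = g(x - c t) + h(x + c t), with g and h left completely arbitrary
u = g(x - c*t) + h(x + c*t)
residual = sp.diff(u, x, 2) - sp.diff(u, t, 2) / c**2
print(sp.simplify(residual))  # prints 0: the wave equation holds identically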

Other questions:

How is it that this technique doesn't restrict the solution? I'm always afraid of "losing" solutions along the way.

I have seen vague statements about the validity of this such as "time and space are independent" (maybe I am misquoting). Can anybody make sense of that? (Any connection with the fact that the joint density of independent random variables is the product of their densities?)

I searched in books and on the internet (this forum included) and have not been able to resolve all my doubts.

I'm after a deep understanding of the electromagnetic vector wave equation and I will be grateful if you can help me with this.
 
Saying "by separation of varables" they usually mean "by Fourier transform".
The fact is that the sample equation you gave is linear: given two solutions u_1(x,t) and u_2(x,t), u=c_1u_1+c_2u_2 is also a solution. Now, when you search for solutions of the form u(x,t)=X(x)T(t), of course you lose solutions, but it happens that you find a whole set of solutions u_{k,\omega}(x,t)=e^{ikx}e^{\pm i\omega t}, with \omega = kc, and ALL solutions of the equation can be expressed as a sum of these. Note that u_{k,\omega}(x,t)=e^{ik(x\pm ct)}, and this IS a function of x \pm ct alone.
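
To see this concretely, here is a small numerical sketch of my own (using numpy, whose FFT conventions differ in signs from the formulas here): superposing the separated modes, each evolved with \omega = kc, reproduces a traveling pulse g(x - ct).

import numpy as np

L, N, c, t = 20.0, 512, 1.0, 3.0
x = np.linspace(0, L, N, endpoint=False)
g0 = np.exp(-(x - 5.0)**2)                    # initial profile g(x)

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)          # wave numbers of the discrete modes
G = np.fft.fft(g0)                            # coefficients over the modes e^{ikx}
u = np.fft.ifft(G*np.exp(-1j*k*c*t)).real     # each mode multiplied by e^{-i k c t}

print(np.allclose(u, np.exp(-(x - 5.0 - c*t)**2), atol=1e-6))  # True: u = g(x - ct)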

Hope it's clear.
 
It was very clear and useful. Thanks.

But how do we know that linear combinations of the solutions found by setting u(x, t) = X(x)·T(t) span the entire set of solutions? Is it as simple as saying "all functions are expressible in terms of the Fourier basis"?

Although I'm very familiar with Fourier theory, I have always worked with one-dimensional situations, so I don't know how to extend its results to functions like u(x, t). Is the 2D Fourier basis formed by e^{ik(x\pm ct)}?

Evidently I wasn't thinking with complex numbers in mind. :P I completely overlooked the fact that with complex exponentials it is trivial to go from f(x)·g(t) to h(x-ct).
 
The Fourier transform in more variables is no different from the one-variable case. Put

x=(x_1,\dots,x_n)

and

k=(k_1,\dots,k_n)

and denote by

k\cdot x =k_1x_1+\cdots+k_nx_n

the usual scalar product in \mathbb{R}^n. Define the Fourier transform F(k) of a function f(x) by

F(k)=\int f(x)e^{ik\cdot x}d^nx

then the Fourier theorem says (among other things) that

f(x)=\int F(k)e^{-ik\cdot x}\frac{d^nk}{(2\pi)^n}
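
A quick numerical round-trip of this inversion in n = 2 (my own sketch; numpy places the signs and the 2\pi's differently, but the transform/inverse pairing is the same):

import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))    # an arbitrary sampled "f(x)" on a 2D grid

F = np.fft.fftn(f)                   # forward transform over both axes
f_back = np.fft.ifftn(F).real        # the inversion formula recovers f(x)

print(np.allclose(f, f_back))        # True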

I have to go right now; if you have patience I will continue later...:redface:
 
The Fourier transform has a number of properties; the one that we need now is that, if

g(x)=\frac{\partial f}{\partial x_i}

then

G(k)=-ik_iF(k)

So, when you have a linear differential equation for f(x), you can transform it into a linear algebraic equation for F(k). For example, your equation in n = 2, naming (x,ct) the position and (k,\omega/c) the wave vector (so that k\cdot x = kx+\omega t), becomes

(\omega^2-c^2k^2)U(k,\omega)=0

This implies (I assume that you know something about distributions) that

U(k,\omega)=H(k, \omega)\delta(\omega^2-c^2k^2)

where H is a generic function, which only has to satisfy the condition

H^{*}(k,\omega)=H(-k,-\omega)

which means that u is a real function. Now that you know U, you can inverse-transform to get u (I omit some steps), and you find

u(x,t)=f(x-ct)+g(x+ct)

with f and g generic real-valued functions, to be chosen by imposing boundary conditions.
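
If it helps, the derivative rule that drives all of this is easy to check numerically (my own sketch; numpy's FFT uses the kernel e^{-ik\cdot x}, the opposite sign of the convention above, so here the rule reads G(k) = ik F(k)):

import numpy as np

L, N = 20.0, 512
x = np.linspace(0, L, N, endpoint=False)
f = np.exp(-(x - 10.0)**2)
dfdx = -2*(x - 10.0)*f                  # exact derivative of f

k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
G_direct = np.fft.fft(dfdx)             # transform of the derivative
G_rule = 1j*k*np.fft.fft(f)             # differentiation becomes multiplication by ik

print(np.allclose(G_direct, G_rule))    # True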
 
Thanks for all that.

I have very little experience with distributions, but I think I got the essence of the step in which H and Dirac's delta appear.

But I still feel I can't state the conditions under which this method works, or how "general" the solution found is.

I'm having a lot of trouble finding a "general" solution for the vector counterparts of the wave equation and of the Helmholtz equation.

Another fear I have is erroneously and unconsciously expecting that PDEs have the same properties as ODEs. For example, is the solution of a PDE unique (up to constants here and there)? I'm afraid it is not, and maybe this misconception is the root of all my confusion.
 
falbani said:
Thanks for all that.

I have very little experience with distributions, but I think I got the essence of the step in which H and Dirac's delta appear.

But I still feel I can't state the conditions under which this method works, or how "general" the solution found is.

I'm having a lot of trouble finding a "general" solution for the vector counterparts of the wave equation and of the Helmholtz equation.

Another fear I have is erroneously and unconsciously expecting that PDEs have the same properties as ODEs. For example, is the solution of a PDE unique (up to constants here and there)? I'm afraid it is not, and maybe this misconception is the root of all my confusion.

I have more of a physicist's point of view, so I don't bother too much about things like analyticity, Lipschitz continuity (or whatever!), etc. For the wave equation above, the solution found is the most general, in the sense that f(...)+g(...) spans all solutions as f and g vary.

If you post the Helmholtz equation here, we'll have a look at it.

PDEs share a lot of common features with ODEs, although they are obviously more complicated. There are theorems that ensure existence and uniqueness under certain conditions (try googling "Poisson problem", "Laplace problem", "Liouville problem", etc.). The difference is that, since we now have more than one variable, the "initial conditions" are no longer given at a point, but rather on a surface (the so-called boundary conditions).
 
falbani said:
Hi. This is my first post in PF. I'm an undergraduate student of Electronics Engineering with strong interest in math & physics (and weak understanding of them :P).

One of the things I always hate is when in some book (or some lectures) ODEs or PDEs are solved after the magic words "[...] and by virtue of separation of variables [...]", without any care to specify under which conditions such a procedure is valid and guarantees uniqueness of the solution.

If you are really concerned about uniqueness, you usually prove that, for a given equation (and type of boundary conditions), the solution is unique. Then any solution you can find, by any method, is the unique solution!

In general, separation of variables requires at least that a) the equation is separable!, and b) the boundaries run along constant coordinate lines (e.g. squares for Cartesian coordinates, circles/wedges/annuli for polar, etc.). For example, you cannot use separation of variables to solve the wave equation in a star-shaped region! Other restrictions probably apply - perhaps boundary conditions cannot be too crazy, and in general I think the equation needs to be homogeneous. The particular solution is found by other means.

Verifying every step in the solution process is difficult and not worthwhile. If you are worried about the legitimacy of your solution, you should verify that the solution you construct really does solve the PDE and the boundary/initial conditions. This is usually not easy, but it will be easier than justifying every step. The same advice applies to integral transform techniques.
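
As a concrete (made-up) illustration in Python of both points: the string with fixed ends is the classic geometry where separation of variables applies, and the construction can be checked directly against the initial and boundary conditions.

import numpy as np

L, c, nmax = 1.0, 1.0, 200
x = np.linspace(0, L, 1001)
dx = x[1] - x[0]
f = x*(L - x)                        # initial displacement (zero initial velocity)

n = np.arange(1, nmax + 1)
S = np.sin(np.pi*np.outer(n, x)/L)   # separated spatial modes sin(n pi x / L)
b = (2/L)*dx*(S @ f)                 # sine coefficients (trapezoid rule; the
                                     # endpoint terms vanish since f(0) = f(L) = 0)

def u(t):
    # superposition of the separated solutions sin(n pi x/L) cos(n pi c t/L)
    return (b*np.cos(np.pi*n*c*t/L)) @ S

print(np.max(np.abs(u(0.0) - f)))    # small: the series reproduces f at t = 0
print(u(0.37)[0], u(0.37)[-1])       # ~0: the ends stay fixed at every t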

falbani said:
Other questions:

How is it that this technique doesn't restrict the solution? I'm always afraid of "losing" solutions along the way.

If you have found a valid solution and know that it is the unique solution, then you know that you didn't miss anything. That is why we care about uniqueness.

But I understand your concern. To me, separation of variables, at first glance, seems like it shouldn't work, but the amazing thing is that for simple geometries it works just fine! Of course, outside of those simple geometries you cannot use it, which is why many real-world (engineering-type) problems leave us stuck with numerical solutions instead of analytical ones.

good luck,

jason
 
Thanks Jason. You were very clear.

(I will use my own spooky language here because many years have passed since I took a course on ODEs and I forgot the correct terminology)

By separation of variables one usually gets the "structure" of an "atomic" solution, and then has to arrange a linear combination of them to fully span the solution space. So, strictly speaking, SoV does not find the most general one. This makes me think that it is not fair to judge the "quality" of the method by its capability of reaching the general solution; instead we have to evaluate it in terms of reaching the "atomic" solutions that are just a "linear combination away" from the general one. Am I right?

Will some knowledge of Green's functions make things clearer for me?

Can you recommend a definitive, authoritative book on this subject?
 
falbani said:
Thanks Jason. You were very clear.

(I will use my own spooky language here because many years have passed since I took a course on ODEs and I forgot the correct terminology)

By separation of variables one usually gets the "structure" of an "atomic" solution, and then has to arrange a linear combination of them to fully span the solution space. So, strictly speaking, SoV does not find the most general one. This makes me think that it is not fair to judge the "quality" of the method by its capability of reaching the general solution; instead we have to evaluate it in terms of reaching the "atomic" solutions that are just a "linear combination away" from the general one. Am I right?

For linear problems, once you have a basis for the space you can construct anything you need. That is really the best you could hope for in general. Now, proving that SoV gives you a basis (that is, that the functions you find form a "complete" set) for a certain problem isn't easy - that is the realm of functional analysis, and I have to refer you to the mathematicians around here.
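
A quick numerical illustration (my own sketch) of what completeness buys you: expand a function that is itself not of separated form in the sine modes of the fixed-end problem, and the truncation error shrinks as more modes are kept.

import numpy as np

L = 1.0
x = np.linspace(0, L, 2001)
dx = x[1] - x[0]
f = np.minimum(x, L - x)                 # a triangle wave, not a separated solution

for nmax in (4, 16, 64, 256):
    n = np.arange(1, nmax + 1)
    S = np.sin(np.pi*np.outer(n, x)/L)   # the SoV modes sin(n pi x / L)
    b = (2/L)*dx*(S @ f)                 # projection onto the first nmax modes
    print(nmax, np.max(np.abs(b @ S - f)))   # error decreases as nmax grows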

falbani said:
Will some knowledge of Green's functions make things clearer for me?

Can you recommend a definitive, authoritative book on this subject?

Knowledge of Green's functions won't hurt, although it may not help either. You would find that for "nice" equations like the wave equation it is often possible to construct a Green's function by using the eigenfunctions that you found from SoV.
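
As a sketch of that construction for the simplest case I can think of (my own example: the operator -d^2/dx^2 on [0, L] with fixed ends, whose SoV eigenfunctions are sin(n pi x / L)):

import numpy as np

L, nmax = 1.0, 20000
xa, xb = 0.3, 0.7                        # evaluate G at one pair of points
n = np.arange(1, nmax + 1)
lam = (n*np.pi/L)**2                     # eigenvalues of -d^2/dx^2

# eigenfunction expansion: G(x, x') = sum_n phi_n(x) phi_n(x') / lambda_n
G_series = np.sum((2/L)*np.sin(n*np.pi*xa/L)*np.sin(n*np.pi*xb/L)/lam)
G_exact = min(xa, xb)*(L - max(xa, xb))/L    # known closed form x_<(L - x_>)/L

print(G_series, G_exact)                 # the series converges to the exact value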

For books, I would recommend you go to your university library and browse the books on Partial Differential Equations, or the books on Boundary Value Problems. You will likely find one or two that are at the right level and cover the right material for you.


good luck,

jason
 