
For which PDEs is the solution in the form of F(x)*G(t)?

  1. Sep 26, 2013 #1
    So we just started finding general solutions for homogeneous, linear PDEs in two variables by using separation of variables in my engineering-math class. There the professor tells us to assume the solution of a PDE is of the form F(x)*G(t).

    But when is the solution of the form F(x)*G(t)? When does separation of variables work and when does it not (does it always work, at least in theory, on linear PDEs)? For linear PDEs in more variables, will the solution be of the form F(x)*G(y)*H(z)*J(t), etc.?

    I'm a bit confused currently... all help is appreciated.
     
  3. Sep 26, 2013 #2

    arildno


    A good question!

    Basically, you should think of the product solutions you find as single terms in a series solution like the following:
    [tex]U(x,t)=\sum_{i=0}^{\infty}F_{i}(x)G_{i}(t)\qquad(*)[/tex]
    where linearity and homogeneity of the differential equation show that we can solve the "same" problem at each "i" level.
    If we take a wave equation like u_tt = A*u_xx, we can find the building blocks of the general solution by solving:
    [tex]\frac{1}{F_{i}}\frac{d^{2}F_{i}}{dx^{2}}=\frac{1}{A\,G_{i}}\frac{d^{2}G_{i}}{dt^{2}}=k_{i}[/tex], where the k_i's are constants related to the wavenumbers/frequencies of the component waves.
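    To spell out where that comes from: substitute a single product term ##u_i = F_i(x)G_i(t)## into u_tt = A*u_xx and divide through by ##F_iG_i##:
    [tex]F_{i}(x)\frac{d^{2}G_{i}}{dt^{2}}=A\,\frac{d^{2}F_{i}}{dx^{2}}G_{i}(t)\quad\Longrightarrow\quad\frac{1}{F_{i}}\frac{d^{2}F_{i}}{dx^{2}}=\frac{1}{A\,G_{i}}\frac{d^{2}G_{i}}{dt^{2}}=k_{i}[/tex]
    One side depends only on x and the other only on t, so both must equal the same constant k_i; that is the whole point of the separation trick.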
    -----------------------------------------------------------
    Now, it is STILL a problem:
    How can we be sure that an infinite series in terms of product functions is ADEQUATE for specifying a particular solution?
    This requires that the expansion functions F_i and G_i represent what we call a COMPLETE set of functions for "x" and "t", respectively (essentially, that ALL functions of x must be representable by a sum of weighted F_i's, and similarly for all functions of t).

    Is this trivial to show in every single case?
    Not at all!
    -----------------------
    However, remember that you DO know of a similar series expansion of an ARBITRARY, nice function u(x,t) already, namely its two-variable Taylor series, in product powers of x and t.
    So:
    The product powers of "x" and "t" DO represent what we call a complete set of expansion functions; lots of other such sets exist, and for different types of differential equations, one set of expansion functions will be more natural to use, as dictated by the shape of the diff.eq. The diff.eq., effectively, tells you what such expansion functions must look like (i.e., they are solutions of the simplified, separated problems).
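    For concreteness (assuming u is nice enough to be analytic around the origin), the two-variable Taylor series mentioned above is itself an expansion in product functions, with F_m(x) = x^m and G_n(t) = t^n:
    [tex]u(x,t)=\sum_{m=0}^{\infty}\sum_{n=0}^{\infty}a_{mn}x^{m}t^{n},\qquad a_{mn}=\frac{1}{m!\,n!}\left.\frac{\partial^{m+n}u}{\partial x^{m}\partial t^{n}}\right|_{(0,0)}[/tex]
    The separated solutions of a particular PDE simply furnish a different complete set, chosen so that each term already satisfies the equation.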
     
  4. Sep 26, 2013 #3
    Thanks for the clarifications :)! I understand more now, but I am still a bit unsure about why the form F(x)*G(t) is the (only?) correct one for each partial solution.

    You were comparing the Fourier series one gets out to its Taylor series - so this means there are many ways to solve, say, the wave equation? Is it always an infinite-series expansion? Hell, assuming the initial conditions are satisfied and if v=1, then simply the polynomial [tex]Qx^2 + Qt^2 + ax + bt + cxt[/tex] should be a solution to the wave equation too... but we use the Fourier series because it's the most practical?
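    As a quick sanity check on that guess:
    [tex]u = Qx^{2}+Qt^{2}+ax+bt+cxt \;\Rightarrow\; u_{tt}=2Q=u_{xx},[/tex]
    so it does satisfy the v=1 wave equation; whether it is the solution of a given problem then depends on whether the initial/boundary conditions happen to match it.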
     
    Last edited: Sep 26, 2013
  5. Sep 26, 2013 #4

    WannabeNewton


    What series expansion you use depends entirely on the problem at hand. For example, vacuum electrostatics is essentially all about solving Laplace's equation ##\nabla^{2}\varphi = 0## for the electrostatic potential ##\varphi##. Now if I write this in Cartesian coordinates (which I would use if I have for example a system consisting of infinite charged rectangular plates) then I just have ##\{\frac{\partial^2 }{\partial x^2} + \frac{\partial^2 }{\partial y^2} + \frac{\partial^2 }{\partial z^2} \}\varphi = 0## and I can represent the solution, using separation of variables, as a series expansion in terms of sinusoidal functions and exponential functions. If I then specify the boundary conditions (for example the values of the potential on the plates; say two are grounded and another two are kept at some non-zero potential), I know this must be the only possible solution thanks to the uniqueness theorem for solutions to Laplace's equation.
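    For a concrete textbook-style configuration - say a rectangular box with five faces grounded and the potential specified on the top face, which is not necessarily the exact setup described above - the separated solution looks like
    [tex]\varphi(x,y,z)=\sum_{m,n}A_{mn}\sin\left(\frac{m\pi x}{a}\right)\sin\left(\frac{n\pi y}{b}\right)\sinh\left(\pi z\sqrt{\frac{m^{2}}{a^{2}}+\frac{n^{2}}{b^{2}}}\right)[/tex]
    with the coefficients A_mn fixed by the boundary condition on the remaining face.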

    If, on the other hand, I write ##\nabla^2 \varphi = 0## in spherical coordinates (which I would use for, say, a grounded sphere of charge), Fourier series would not be of help to me. Rather, I would want to write my solution as a series expansion of Legendre polynomials. In cylindrical coordinates (say an infinitely long cylinder of charge) I would want to use Bessel function expansions and so on. The uniqueness theorem still holds as long as I specify boundary conditions.
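    In the azimuthally symmetric spherical case, for example, the separated solution takes the familiar form
    [tex]\varphi(r,\theta)=\sum_{l=0}^{\infty}\left(A_{l}r^{l}+\frac{B_{l}}{r^{l+1}}\right)P_{l}(\cos\theta)[/tex]
    where the ##P_{l}## are the Legendre polynomials and the coefficients ##A_{l}##, ##B_{l}## are fixed by the boundary conditions.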

    The point is that once you have more or less "guessed" a solution e.g. one of the series expansions mentioned above using separation of variables, and you have provided sufficient boundary conditions, the uniqueness theorem will guarantee that this can be the only solution so you're done.
     
  6. Sep 27, 2013 #5

    arildno


    Nikitin:
    Note that even in the cases where the nature of the PDE sort of specifies the type of expansion functions we ought to use, this does NOT mean that the solution isn't representable in, say, power products or Fourier series.
    After all, these classes DO represent complete sets of functions!!

    The critical issues here would be:
    1. What the coefficient structure within the new class of expansion functions ought to be
    2. Extremely slow convergence of finite approximations to the exact solution.

    Thus, SOME expansion sets tend to be less stupid to use than others! :smile:
     
  7. Sep 27, 2013 #6

    arildno


    Also, just to clarify:
    The uniqueness theorem does not say that a particular REPRESENTATION of the solution is the only one possible.
    It "merely" says that if you have found a solution to a problem, then there are no others to be found, not that your solution cannot be warped into an equivalent formulation in another expansion set of functions.
     
  8. Sep 27, 2013 #7

    jasonRF


    There are a few conditions required for separation of variables to work on a linear PDE.

    1. The equation must be separable. That is, you can separate the variables and get ODEs for each variable.
    2. Boundaries must be constant coordinate surfaces (a boundary at infinity is also okay).
    3. Boundary conditions cannot be completely arbitrary. If I remember right, if the boundary is at a constant [itex]\eta[/itex] (to pick an arbitrary coordinate system that may or may not be Cartesian) then the boundary condition cannot depend upon partial derivatives with respect to the other coordinates. There may be other restrictions as well. (A simple worked example illustrating conditions 1 and 2 follows this list.)
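    As a concrete illustration of conditions 1 and 2 (a standard textbook setup, not tied to any particular problem above): take the one-dimensional heat equation on a rod ##0 < x < L## with both ends held at zero,
    [tex]u_{t}=\alpha u_{xx},\qquad u(0,t)=u(L,t)=0.[/tex]
    Substituting ##u = F(x)G(t)## separates it into ##F''/F = G'/(\alpha G) = -\lambda##, the boundaries sit on the constant coordinate surfaces x = 0 and x = L, and the result is the sine series
    [tex]u(x,t)=\sum_{n=1}^{\infty}b_{n}\sin\left(\frac{n\pi x}{L}\right)e^{-\alpha(n\pi/L)^{2}t}[/tex]
    with the coefficients b_n fixed by the initial condition.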

    Basic problems in heat transfer, electrodynamics, quantum mechanics, etc., can often be solved with this approach. However, it may be required to solve the problem in a non-Cartesian coordinate system in order for the approach to work (e.g. solving Laplace's equation inside a circle requires polar coordinates). Many real-world cases no longer fall into these categories, so other techniques are required. There are other exact approaches (e.g. conformal mapping), approximate approaches (e.g. perturbation theory), and of course for anything with really complicated geometry, numerical methods eventually win out as the most logical approach.
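    For problems that do separate, the resulting series is also easy to evaluate numerically once you have the coefficients. Here is a rough Python sketch (my own toy setup: the heat-equation sine series from the example above, with an assumed, purely illustrative initial condition u(x,0) = x(L - x), for which the sine coefficients are b_n = 8L^2/(n*pi)^3 for odd n and 0 for even n):

    [code]
    import numpy as np

    # Truncated separation-of-variables (Fourier sine) series for the 1-D heat equation
    #   u_t = alpha * u_xx on 0 < x < L,  u(0,t) = u(L,t) = 0,
    # with the assumed, illustrative initial condition u(x,0) = x*(L - x).
    # For that initial condition the sine coefficients are known in closed form:
    #   b_n = 8*L^2 / (n*pi)^3 for odd n, and b_n = 0 for even n.

    L = 1.0        # rod length
    alpha = 0.01   # thermal diffusivity
    N = 50         # number of series terms kept

    def u(x, t):
        """Evaluate the truncated separated-series solution at position(s) x and time t."""
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for n in range(1, N + 1):
            if n % 2 == 0:
                continue  # even-n terms vanish for this initial condition
            b_n = 8.0 * L**2 / (n * np.pi)**3
            total += b_n * np.sin(n * np.pi * x / L) * np.exp(-alpha * (n * np.pi / L)**2 * t)
        return total

    x = np.linspace(0.0, L, 101)
    print(u(x, 0.0)[50])   # ~0.25, the midpoint value of x*(L - x) at t = 0
    print(u(x, 5.0)[50])   # smaller: each product term decays exponentially in time
    [/code]

    Each term of the sum is exactly one of the F(x)*G(t) products this thread is about; the full solution is just their weighted superposition.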

    jason
     