Infinite sum PDE solution

  1. Jul 2, 2009 #1
    Hi.

    When solving a PDE by separation of variables, we obtain a collection of so-called normal modes. My book then tells me to make an "infinite linear combination" of these normal modes, and that this will be a solution to the PDE. But how do we know that this is in fact a solution? I have only seen a proof of the superposition principle for a finite number of functions.
     
  3. Jul 2, 2009 #2
    As you say yourself, we use the superposition principle, and the reason we use an infinite sum is that we want the most general solution: this way we are able to satisfy all the possible initial conditions.

    Remark: Superposing solutions only works for linear PDEs.
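    As a quick reminder of the finite case (writing the equation as [itex]\mathcal Lu=0[/itex] for a linear operator [itex]\mathcal L[/itex], the notation used later in this thread): if [itex]\mathcal Lu_k=0[/itex] for [itex]k=1,\dots,N[/itex], then
    [tex]
    \mathcal L\left(\sum_{k=1}^N a_k u_k\right)=\sum_{k=1}^N a_k\,\mathcal L u_k=0.
    [/tex]
    Passing to [itex]N\to\infty[/itex] is the step that needs extra justification.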
     
    Last edited: Jul 2, 2009
  4. Jul 3, 2009 #3
    But I have only seen a proof of the superposition principle for a finite linear combination, so how do we know that an infinite linear combination is in fact a solution?
     
  5. Jul 3, 2009 #4
    In exactly the same way: you apply your differential operator to each spectral term and obtain zero, so it works no matter how many spectral terms there are.
     
  6. Jul 3, 2009 #5
    Yes, that is right, but I guess for that to work you would have to assume that the solution is sufficiently smooth, because we are dealing with an infinite series. For example, if it is a second-order linear differential operator, the solution would have to be twice continuously differentiable. How do we know that this is true?
     
  7. Jul 3, 2009 #6
    It is the original equation itself that should be analysed. Indeed, there may be some problems in certain cases (see Appendix 3 in my article, for example).

    http://arxiv.org/abs/0906.3504
     
  8. Jul 3, 2009 #7

    HallsofIvy (Staff Emeritus, Science Advisor)

    Because we have actually learned enough calculus to know that a sum of smooth functions that converges uniformly, together with its termwise derivatives, is smooth.
     
  9. Jul 3, 2009 #8
    Yes, that is exactly my question: how do you know that the series and its derivatives are uniformly convergent?
     
  10. Jul 3, 2009 #9
    Look at the equation: it contains the derivatives. If there are no "singular" terms in the equation, then the derivatives are finite.
     
  11. Jul 3, 2009 #10
    Suppose we want to solve this problem
    [tex]
    \frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0
    [/tex]

    for some boundary conditions. How can I tell that the derivatives are continuous? Why couldn't there be a solution which is discontinuous?
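    For instance (my own choice of domain and boundary conditions, just to make the question concrete): on the strip [itex]0<x<L[/itex], [itex]y>0[/itex], with [itex]u(0,y)=u(L,y)=0[/itex] and [itex]u[/itex] bounded, separation of variables gives the normal modes [itex]u_n(x,y)=\sin(n\pi x/L)\,e^{-n\pi y/L}[/itex] and the candidate solution
    [tex]
    u(x,y)=\sum_{n=1}^\infty b_n \sin\frac{n\pi x}{L}\, e^{-n\pi y/L}.
    [/tex]
    Is this series really harmonic, given that I am differentiating it term by term?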
     
  12. Jul 3, 2009 #11
    If a solution is step-wise (discontinuous), the corresponding equation must contain a derivative of the delta function. If your equation does not contain such singular terms (so the second derivative is finite), then the first derivative is finite too and thus the function is smooth. It can be decomposed into a spectral sum, and this sum will converge to the exact function without fail.
     
  13. Jul 3, 2009 #12
    Here is what I am saying.

    We have a PDE given by a linear operator [itex]\mathcal Lu(x,t)=0[/itex]

    We then obtain a solution by separation of variables, [itex]u(x,t)=\sum_{k=0}^\infty a_k u_k(x,t)[/itex]

    Of course each [itex]u_k(x,t)[/itex] is a solution: [itex]\mathcal Lu_k(x,t)=0[/itex].

    Now I want to verify that the entire infinite series [itex]u(x,t)[/itex] is also a solution:
    [tex]
    \mathcal L\left(\sum_{k=0}^\infty a_k u_k(x,t)\right)=0
    [/tex]

    This is easy if I can move the operator inside the sum, but this is only allowed if the infinite series and a sufficient number of its termwise derivatives are uniformly convergent (i.e. u(x,t) and its derivatives must be continuous). Now how do we know whether this is true?
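    For reference, the one-variable theorem I have in mind (standard in analysis texts): if each [itex]f_k[/itex] is continuously differentiable on an interval, [itex]\sum_k f_k(x_0)[/itex] converges at some point [itex]x_0[/itex], and the differentiated series [itex]\sum_k f_k'[/itex] converges uniformly, then the sum is continuously differentiable and
    [tex]
    \frac{d}{dx}\sum_{k=0}^\infty f_k(x)=\sum_{k=0}^\infty f_k'(x).
    [/tex]
    Applying this once for every derivative appearing in [itex]\mathcal L[/itex] requires exactly the uniform convergence that I am asking how to verify.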
     
  14. Jul 3, 2009 #13
    Similarly. You look at the original equation. If there is no delta-function and/or its derivative, then the combination of derivatives is always finite. So there is no problem.

    If you know what the Green's function of the original equation is, you will understand. The Green's function equation is like LG = delta-function, so the Green's function picks up a step or a kink exactly where the delta sits, and nowhere else.
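    A one-dimensional example of what I mean (my own choice of operator and boundary conditions): the Green's function of [itex]-d^2/dx^2[/itex] on [itex][0,1][/itex] with zero boundary values satisfies [itex]-G''(x,\xi)=\delta(x-\xi)[/itex], [itex]G(0,\xi)=G(1,\xi)=0[/itex], which gives
    [tex]
    G(x,\xi)=\begin{cases} x(1-\xi), & 0\le x\le\xi,\\ \xi(1-x), & \xi\le x\le 1.\end{cases}
    [/tex]
    So [itex]G[/itex] is continuous, but [itex]\partial_x G[/itex] jumps by [itex]-1[/itex] at [itex]x=\xi[/itex]: the delta source costs the solution exactly one order of smoothness. A right-hand side with no delta-like singularities does not produce such jumps.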
     
  15. Jul 4, 2009 #14
    I guess you are right. I find it annoying that none of the books I have seen even comment on it...
     
  16. Jul 4, 2009 #15
    Do you know Lebesgue's Dominated Convergence Theorem? It is IMO the most basic tool to justify commuting integration and a limit, like this

    [tex]
    \lim_{n\to\infty} \int d\mu(x) f_n(x) = \int d\mu(x) \lim_{n\to\infty} f_n(x),
    [/tex]

    and says that this can be done if there exists a dominating function [itex]h[/itex] such that [itex]|f_n(x)|\leq h(x)[/itex] for all [itex]x[/itex] and [itex]n[/itex], and [itex]\int d\mu(x) h(x) < \infty[/itex].

    The question about commutation of a derivative and integral like this

    [tex]
    \partial_x \int d\mu(y) f(x,y) = \int d\mu(y) \partial_x f(x,y)
    [/tex]

    is a similar question. It is equivalent to this:

    [tex]
    \lim_{\Delta x\to 0} \int d\mu(y) \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x} = \int d\mu(y) \lim_{\Delta x\to 0} \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x}
    [/tex]

    According to the Mean Value Theorem we can write

    [tex]
    \frac{f(x+\Delta x, y) - f(x,y)}{\Delta x} = \partial_x f(\xi_{x,y,\Delta x}, y)
    [/tex]

    with some [itex]\xi_{x,y,\Delta x}[/itex]. Now the commutation of differentiation and integration can be justified by dominated convergence, if we find an integrable function [itex]h(y)[/itex] such that [itex]|\partial_x f(x,y)|\leq h(y)[/itex] for all [itex]x[/itex] and [itex]y[/itex]. There may be other results that justify commuting differentiation and integration too, but IMO this is the most general argument, and it can easily be used to derive some of the others.

    Suppose you have a sequence [itex]a_1,a_2,a_3,\ldots[/itex], and denote [itex]a(n)=a_n[/itex]. It should be noticed that the infinite series (if it converges absolutely) is the same thing as the integral of [itex]a[/itex] over the set [itex]\mathbb{N}[/itex] with the measure [itex]\mu(\{n\})=1[/itex] for all [itex]n=1,2,3,\ldots[/itex]. So

    [tex]
    \sum_{n=1}^{\infty} a_n = \int\limits_{\mathbb{N}} d\mu(n) a(n).
    [/tex]

    If you want to justify that

    [tex]
    \partial_x \sum_{n=1}^{\infty} f_n(x) = \sum_{n=1}^{\infty} \partial_x f_n(x)
    [/tex]

    for some functions [itex]f_1,f_2,f_3,\ldots[/itex], I would interpret the series as an integral, recall the definition of a derivative as a limit, and use dominated convergence to justify commutation of the limit and integration.
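    To connect this with the PDE question (my own example, with the assumptions spelled out): take the heat-equation series [itex]u(x,t)=\sum_{n=1}^\infty b_n e^{-n^2 t}\sin(nx)[/itex] with bounded coefficients, [itex]|b_n|\le C[/itex]. For [itex]t\geq t_0>0[/itex],
    [tex]
    \left|\partial_t\left(b_n e^{-n^2 t}\sin(nx)\right)\right| = |b_n|\, n^2 e^{-n^2 t}\,|\sin(nx)| \leq C n^2 e^{-n^2 t_0} =: h(n), \qquad \sum_{n=1}^{\infty} h(n)<\infty,
    [/tex]
    so [itex]h[/itex] is a dominating function on [itex]\mathbb{N}[/itex] and the [itex]t[/itex]-derivative can be moved inside the sum. The same kind of bound works for the [itex]x[/itex]-derivatives.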
     
    Last edited: Jul 5, 2009
  17. Jul 5, 2009 #16
    jostpuur: Sorry, I don't know any measure theory or Lebesgue integration. But I think what you are saying sounds a bit like the Weierstrass M-test applied to the solution of the PDE.
     
  18. Jul 5, 2009 #17

    I was talking about how to commute limit and summation of series. The Weierstrass M-test is about convergence of series, not about commuting limit and the summation of series.

    Details of measure theory are not important now. Measure theory should be considered as a tool which can be forgotten once the useful results have been obtained (IMO) :wink:

    Lebesgue's dominated convergence theorem holds for arbitrary measures. Both Riemann integrals and discrete series can be thought of as integrals with respect to suitable measures. So the abstract result about the equation

    [tex]
    \lim_{n\to\infty} \int d\mu(x) f_n(x) = \int d\mu(x) \lim_{n\to\infty} f_n(x)
    [/tex]

    immediately implies the same results for Riemann integrals

    [tex]
    \lim_{n\to\infty} \int\limits_a^b dx\; f_n(x) = \int\limits_a^b dx\; \lim_{n\to\infty} f_n(x)
    [/tex]

    and for the infinite series

    [tex]
    \lim_{n\to\infty} \sum_{k=1}^{\infty} a(k,n) = \sum_{k=1}^{\infty} \lim_{n\to\infty} a(k,n).
    [/tex]

    If you want to commute a limit and a Riemann integral, or a limit and the summation of an infinite series, think of the Riemann integral or infinite series as an abstract integral, and then use Lebesgue's dominated convergence.
     
  19. Jul 6, 2009 #18
    Yes! You can use the M-test to show that the series is uniformly convergent and then you can interchange limit and sum, or limit and integral. You are talking about a dominating function, and that sounds exactly like the criterion used in the M-test, i.e. [itex]\sum_n f_n(x)[/itex] converges uniformly if [itex]|f_n(x)|\le M_n[/itex] for all [itex]x[/itex] and [itex]\sum_n M_n<\infty[/itex].
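    For example, for the series I wrote down in post #10 (assuming bounded coefficients [itex]b_n[/itex]), on any region [itex]y\geq y_0>0[/itex] one has
    [tex]
    \left|\partial_x^2\left(b_n\sin\frac{n\pi x}{L}\,e^{-n\pi y/L}\right)\right| \leq |b_n|\left(\frac{n\pi}{L}\right)^2 e^{-n\pi y_0/L} =: M_n, \qquad \sum_{n=1}^{\infty} M_n<\infty,
    [/tex]
    and similarly for the other second derivatives, so the differentiated series converge uniformly on [itex]y\geq y_0[/itex] and the sum really is harmonic there.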
     
  20. Jul 6, 2009 #19
    A nice example of summation-integration non-commutativity is contained in my article, arXiv:0906.3504.
     
  21. Jul 6, 2009 #20
    I see this now; you were right. If a function [itex]\mathbb{R}\times\mathbb{N}\to\mathbb{C}[/itex], [itex](x,n)\mapsto f(x,n)[/itex], is integrated over [itex]\mathbb{N}[/itex], then functions of the type [itex]\mathbb{N}\to [0,\infty[[/itex], [itex]n\mapsto M_n[/itex], are the correct kind of dominating functions for the commutation

    [tex]
    \partial_x \int\limits_{\mathbb{N}} d\mu(n) f(x,n) = \int\limits_{\mathbb{N}} d\mu(n) \partial_x f(x,n).
    [/tex]

    provided [itex]|\partial_x f(x,n)|\leq M_n[/itex] is satisfied for all [itex]x[/itex] and [itex]n[/itex].

    But I don't think it is correct to talk about the Weierstrass M-test now, to be fully precise. This is more Lebesgue's dominated convergence than Weierstrass M-test, IMO.

    Are you sure that it is smart to take uniform convergence as an intermediate step in the proof? I'm not yet convinced that it can be used for the proof.
     
    Last edited: Jul 6, 2009