
Sturm-Liouville question

  1. Aug 13, 2014 #1

    joshmccraney

    Gold Member

    hi pf!

    ok, so my math text for PDEs states the following theorem: $$f(x) = \sum_n a_n \phi_n (x)$$ for "nice enough" functions. however, the next theorem states that ##\phi_n (x)## and ##\phi_m (x)## are orthogonal relative to a weight function ##\sigma(x)##; in other words, $$\int_\Omega \phi_n(x) \phi_m(x) \sigma(x)\, dx = 0 \quad \text{for } m \neq n.$$
    can someone explain why this would be zero?
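
    for concreteness, here's a minimal numerical check of one such weighted orthogonality relation (my own illustrative choice: Hermite polynomials with ##\sigma(x) = e^{-x^2}## on ##\Omega = (-\infty,\infty)##, not necessarily the family my text has in mind):

    [code]
    # Minimal check (illustrative choice): Hermite polynomials H_n are orthogonal
    # on (-inf, inf) with respect to the weight sigma(x) = exp(-x^2).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import eval_hermite

    def weighted_inner(n, m):
        integrand = lambda x: eval_hermite(n, x) * eval_hermite(m, x) * np.exp(-x**2)
        val, _ = quad(integrand, -np.inf, np.inf)
        return val

    print(weighted_inner(2, 3))  # ~0, since m != n
    print(weighted_inner(3, 3))  # 48*sqrt(pi), nonzero, since m == n
    [/code]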

    thanks!
     
  3. Aug 14, 2014 #2

    MathematicalPhysicist

    Gold Member

    If ##\{\phi_m\}## is an orthogonal basis with respect to an inner product of the form

    [tex]\langle f,g \rangle = \int fg\, w[/tex]

    for some weight function ##w##, then the integral above vanishes by definition: that is exactly what it means for the basis to be orthogonal.
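
    A small sketch of that definition (my own example, not from the text): with ##w(x) = e^{-x}## on ##(0,\infty)## the Laguerre polynomials form such an orthogonal family.

    [code]
    # Sketch of the weighted inner product <f, g> = integral of f(x) g(x) w(x) dx,
    # illustrated with Laguerre polynomials and the weight w(x) = exp(-x) on (0, inf).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import eval_laguerre

    def inner(f, g, w, a, b):
        val, _ = quad(lambda x: f(x) * g(x) * w(x), a, b)
        return val

    w = lambda x: np.exp(-x)
    L2 = lambda x: eval_laguerre(2, x)
    L3 = lambda x: eval_laguerre(3, x)

    print(inner(L2, L3, w, 0.0, np.inf))  # ~0: different members of the basis
    print(inner(L3, L3, w, 0.0, np.inf))  # 1: the Laguerre family is orthonormal here
    [/code]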
     
  4. Aug 14, 2014 #3

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    You should see this as a far-reaching generalization of Fourier series. Indeed, we have that

    $$f(x) = \sum_n a_n \sin(nx) + b_n \cos(nx)$$

    for "nice enough" functions, and the functions ##\sin(nx)## and ##\cos(nx)## are orthogonal in the sense that

    $$\int_{-\pi}^\pi \sin(nx)\sin(mx)dx = \int_{-\pi}^\pi \sin(nx)\cos(mx)dx = 0.$$
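
    A quick numerical sanity check of these relations (my own sketch):

    [code]
    # Check the classical Fourier orthogonality relations on (-pi, pi).
    import numpy as np
    from scipy.integrate import quad

    def integral(f):
        val, _ = quad(f, -np.pi, np.pi)
        return val

    n, m = 2, 5  # any pair with n != m
    print(integral(lambda x: np.sin(n * x) * np.sin(m * x)))  # ~0
    print(integral(lambda x: np.sin(n * x) * np.cos(m * x)))  # ~0 (holds for all n, m)
    print(integral(lambda x: np.sin(n * x) ** 2))             # pi: the n = m case is not zero
    [/code]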
     
  5. Aug 14, 2014 #4

    ShayanJ

    Gold Member

    Take a look at this.
     
  6. Aug 14, 2014 #5

    joshmccraney

    Gold Member

    This looks very similar to my text. I notice on pages 197-198 they use this orthogonality relation, but it's taken from the preceding theorem on page 189, part 3. But why is this the case? For sure it's true with sines and cosines. But then there's the ##\sigma##.

    thanks for your help
     
  7. Aug 14, 2014 #6

    joshmccraney

    Gold Member

    Yes, this is how I began to accept the theorem, but what about that ##\sigma(x)##? That's what threw me off.
     
  8. Aug 14, 2014 #7

    pasmith

    Homework Helper

    The general theory of linear operators on an inner product space says that if an operator is self-adjoint with respect to the inner product, then its eigenvalues (if any) are real, and eigenvectors corresponding to different eigenvalues are orthogonal with respect to the inner product. By definition, [itex]f[/itex] and [itex]g[/itex] are orthogonal with respect to an inner product [itex]\langle \cdot, \cdot \rangle[/itex] if and only if [itex]\langle f, g \rangle = 0[/itex], and the operator [itex]A[/itex] is self-adjoint if and only if [itex]\langle A(f), g \rangle = \langle f, A(g) \rangle[/itex] for every [itex]f[/itex] and [itex]g[/itex] in the space.

    Now with suitable boundary conditions it can be shown that the second order linear differential operator [itex]\mathcal{L}_1(y) = (p(x)y')' - q(x)y[/itex] is self-adjoint with respect to the inner product [tex]
    \langle f, g \rangle_1 = \int_a^b f(x)g(x)\,dx.
    [/tex]

    Consider now the most general linear second-order differential operator [tex]\mathcal{L}(y) = y'' + B(x)y' + C(x)y.[/tex] This can be put into self-adjoint form by taking [tex]
    y'' + B(x)y' + C(x)y = \frac{1}{w(x)}\left( \frac{d}{dx}(p(x)y') - q(x)y\right)
    [/tex] where [itex]w[/itex], [itex]p[/itex] and [itex]q[/itex] can be found in terms of [itex]B[/itex] and [itex]C[/itex] as [tex]
    w(x) = \exp\left( \int B(x)\,dx\right), \\
    p(x) = \exp\left( \int B(x)\,dx \right), \\
    q(x) = - C(x) \exp\left( \int B(x)\,dx \right).
    [/tex]
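    As a small symbolic check of those formulas (my own example: ##B(x) = -2x##, ##C(x) = 0##, i.e. the Hermite operator ##y'' - 2xy'##, whose weight should come out as ##e^{-x^2}##):

    [code]
    # Compute w, p, q for y'' + B(x) y' + C(x) y with the assumed example
    # B = -2x, C = 0 (the Hermite operator).
    import sympy as sp

    x = sp.symbols('x')
    B = -2 * x
    C = sp.Integer(0)

    antideriv = sp.integrate(B, x)   # integral of B dx, constant dropped
    w = sp.exp(antideriv)
    p = sp.exp(antideriv)
    q = -C * sp.exp(antideriv)
    print(w, p, q)                   # exp(-x**2) exp(-x**2) 0

    # Verify that (1/w) * ((p y')' - q y) reproduces y'' + B y' + C y.
    y = sp.Function('y')
    L = sp.simplify(((p * y(x).diff(x)).diff(x) - q * y(x)) / w)
    print(sp.expand(L))              # y'' - 2*x*y', as expected
    [/code]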
    It follows that [itex]\mathcal{L} = (1/w(x))\mathcal{L}_1[/itex] is self-adjoint with respect to the inner product [tex]
    \langle f, g \rangle_2 = \int_a^b w(x)f(x)g(x)\,dx.
    [/tex] (I suppose one should at some point prove that if [itex]w(x) > 0[/itex] on [itex](a,b)[/itex] then this is indeed an inner product.) Thus if you include a weight function, you can apply Sturm-Liouville theory to any second-order linear differential operator. The most frequent examples in practice (aside from Fourier series) are the Bessel functions, which arise in separable solutions of Laplace's equation in cylindrical polar coordinates and are the solutions of [tex]
    y'' + \frac{y'}{x} - \frac{\alpha^2}{x^2}y = \lambda y,
    [/tex] where [itex]\alpha[/itex] is a parameter and for which we find [itex]w(x) = x[/itex].
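
    A quick numerical check of the resulting weighted orthogonality in the Bessel case (my own sketch, with the assumptions ##\alpha = 0##, the interval ##(0,1)##, and eigenfunctions ##J_0(j_{0,n}x)## vanishing at ##x = 1##, where ##j_{0,n}## is the ##n##-th positive zero of ##J_0##):

    [code]
    # Assumed setup: eigenfunctions J_0(j_{0,n} x) on (0, 1) with a Dirichlet
    # condition at x = 1; they should be orthogonal with respect to w(x) = x.
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import jv, jn_zeros

    zeros = jn_zeros(0, 5)  # first five positive zeros of J_0

    def weighted_inner(n, m):
        integrand = lambda x: x * jv(0, zeros[n] * x) * jv(0, zeros[m] * x)
        val, _ = quad(integrand, 0.0, 1.0)
        return val

    print(weighted_inner(0, 1))  # ~0: different eigenvalues
    print(weighted_inner(2, 2))  # positive: same eigenfunction
    [/code]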
     
  9. Aug 14, 2014 #8

    AlephZero

    Science Advisor
    Homework Helper

    We don't know how your textbook motivated this, but in practice you might start with the function ##\sigma## and then derive some orthogonal functions that are consistent with it.

    For example if you want to work in cylindrical coordinates, translating some physics into a double integral over an area is likely to include ##\iint \dots\, r\,dr\,d\theta## as the element of area, so you would probably want to take ##\sigma(r) = r##. In that situation, the relevant orthogonal functions are Bessel functions, not sines and cosines.

    Most of the other "special functions" in mathematical physics (Legendre, Laguerre, and Chebyshev polynomials, among others) have similar orthogonality relations with different weight functions ##\sigma##. You can consider Sturm-Liouville theory as a generalization of all these "special cases".
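
    For instance, here's a quick check of the Chebyshev case (my own sketch), where the weight is ##\sigma(x) = 1/\sqrt{1-x^2}## on ##(-1,1)##:

    [code]
    # Chebyshev polynomials T_n are orthogonal on (-1, 1) with respect to the
    # weight sigma(x) = 1/sqrt(1 - x^2).
    import numpy as np
    from scipy.integrate import quad
    from scipy.special import eval_chebyt

    def weighted_inner(n, m):
        integrand = lambda x: eval_chebyt(n, x) * eval_chebyt(m, x) / np.sqrt(1.0 - x**2)
        val, _ = quad(integrand, -1.0, 1.0)
        return val

    print(weighted_inner(2, 5))  # ~0
    print(weighted_inner(4, 4))  # pi/2, not zero
    [/code]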
     
    Last edited: Aug 14, 2014
  10. Aug 14, 2014 #9

    ShayanJ

    User Avatar
    Gold Member

    Page 195, proof of theorem 6.9!!!
    The weight function appears because sometimes you have to multiply the operator by a function so that it becomes self-adjoint.
    Just read the paper from the beginning and you'll get your answer.
     