
Homework Help: Bessel Equation & Orthogonal Basis

  1. Feb 18, 2010 #1
    I remember some of my linear algebra from my studies but can't wrap my head around this one.

    1. The problem statement, all variables and given/known data

    Say my solution to a DE (it happens to be Bessel's equation) is "f(x)", and it contains a constant "d" in the argument of the Bessel functions (i.e., J(d*x) and Y(d*x)). So my solution is:

    f(x)= A*J(d*x) + B*Y(d*x)

    I can narrow this down by imposing two boundary conditions, f(x1) = f(x2) = 0. That gives me a family of solutions f_n(x).

    First question: The author calls these f_n(x) the "eigenfunctions" and says they form an "orthogonal basis". Why do they get these names? I don't see why these solutions form an orthogonal basis.

    Second question:

    The author then states that an arbitrary vector F(x) "can be expanded in this orthogonal basis" via:

    F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

    where

    a_n = [ (f_n(x) , F(x)) ] / [ (f_n(x) , f_n(x)) ]

    What in the world is this all about? Any guidance would be helpful!
     
  2. Feb 18, 2010 #2

    vela


    The differential equation you're solving can be written in the form

    [tex]L[f(x)]=\lambda w(x) f(x)[/tex]

    where L is a self-adjoint linear operator and w(x) is a weighting function. From linear algebra, you should recall that nonzero solutions to [itex]A\vec{x}=\lambda\vec{x}[/itex], where A is a matrix representing a linear transformation, are called eigenvectors with eigenvalue [itex]\lambda[/itex]. Here f(x) plays the role of [itex]\vec{x}[/itex], which is why f(x) is called an eigenfunction.
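
    To make that concrete: if (as I'm guessing from your J(dx) and Y(dx)) you're solving the order-[itex]\nu[/itex] Bessel equation, dividing through by x and rearranging puts it in exactly that form, with [itex]\lambda = d^2[/itex] and weight w(x) = x:

    [tex]x^2 f'' + x f' + (d^2 x^2 - \nu^2)f = 0 \quad\Longleftrightarrow\quad -\left(x f'\right)' + \frac{\nu^2}{x}\,f = d^2\, x\, f[/tex]

    So here [itex]L[f] = -(xf')' + (\nu^2/x)f[/itex], which is self-adjoint on [x1, x2] once your boundary conditions f(x1) = f(x2) = 0 kill the boundary terms.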

    A self-adjoint linear operator corresponds to a Hermitian matrix, which has real eigenvalues and orthogonal eigenvectors (setting aside the possibility of degeneracy for now) that form a basis of the vector space. Likewise, the [itex]f_n(x)[/itex]'s are orthogonal with respect to an inner product of the form

    [tex]\langle f, g\rangle=\int_a^b f^*(x)g(x)w(x) dx[/tex]

    and they too form a basis of a vector space of functions.
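
    If you want a sanity check, here's a quick numerical experiment in Python (my own sketch, assuming order zero and the interval [1, 2]; scipy's j0 and y0 are the Bessel functions):

    [code]
# Quick numerical check (my own sketch): nu = 0, interval [x1, x2] = [1, 2].
# Build solutions that already vanish at x1, find two eigenvalues d_n from
# the condition f(x2) = 0, then test orthogonality with weight w(x) = x.
import numpy as np
from scipy.special import j0, y0
from scipy.integrate import quad
from scipy.optimize import brentq

x1, x2 = 1.0, 2.0

def f(x, d):
    # A*J0(dx) + B*Y0(dx) with A, B chosen so that f(x1) = 0
    return j0(d * x) * y0(d * x1) - y0(d * x) * j0(d * x1)

# Eigenvalue condition f(x2, d) = 0; bracket the first two roots by scanning
ds = np.linspace(0.1, 10.0, 1000)
roots = [brentq(lambda d: f(x2, d), a, b)
         for a, b in zip(ds[:-1], ds[1:]) if f(x2, a) * f(x2, b) < 0]
d1, d2 = roots[0], roots[1]

# Weighted inner products: <u, v> = integral of u(x) v(x) x dx on [x1, x2]
ip12, _ = quad(lambda x: f(x, d1) * f(x, d2) * x, x1, x2)
ip11, _ = quad(lambda x: f(x, d1) ** 2 * x, x1, x2)
print(ip12, ip11)  # ip12 is ~0 (orthogonal); ip11 is positive
    [/code]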

    If you have a vector [itex]\vec{x}[/itex] and an orthogonal basis [itex]\{\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n\}[/itex], you can express [itex]\vec{x}[/itex] as a linear combination of the basis vectors:

    [tex]\vec{x} = a_1 \vec{v}_1 + \cdots + a_n \vec{v}_n[/tex]

    Taking the inner product of [itex]\vec{x}[/itex] with a basis vector [itex]\vec{v}_i[/itex], you get

    [tex]\langle \vec{x},\vec{v}_i \rangle = a_1 \langle \vec{v}_1, \vec{v}_i \rangle + \cdots + a_n \langle \vec{v}_n, \vec{v}_i \rangle[/tex]

    Because they're orthogonal, only the i-th term on the RHS survives, so you get

    [tex]a_i=\frac{\langle \vec{x},\vec{v}_i \rangle}{\langle \vec{v}_i, \vec{v}_i \rangle}[/tex]
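
    In plain numpy, that recipe is just a few lines (a toy example of mine, nothing from your text):

    [code]
# Toy example: expand a vector in an orthogonal (not orthonormal) basis of R^3
import numpy as np

x = np.array([3.0, -1.0, 2.0])
basis = [np.array([1.0, 1.0, 0.0]),   # pairwise dot products are all zero
         np.array([1.0, -1.0, 0.0]),
         np.array([0.0, 0.0, 2.0])]

# a_i = <x, v_i> / <v_i, v_i>
coeffs = [np.dot(x, v) / np.dot(v, v) for v in basis]

print(coeffs)                                     # [1.0, 2.0, 1.0]
print(sum(a * v for a, v in zip(coeffs, basis)))  # [ 3. -1.  2.] recovers x
    [/code]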

    What the author is saying is analogous to this. Now F(x) plays the role of [itex]\vec{x}[/itex] and your [itex]f_n(x)[/itex]'s are the orthogonal basis vectors, and you get

    [tex]a_i=\frac{\langle F(x),f_i(x)\rangle}{\langle f_i(x), f_i(x) \rangle}[/tex]

    If you've ever worked with Fourier series before, this is what you were doing with an orthogonal basis consisting of the sine and cosine functions.
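
    And here's the whole expansion machinery in one rough numerical sketch (again my own toy setup, not from your text: order zero, [x1, x2] = [1, 2], weight x, and a sample F that vanishes at the endpoints):

    [code]
# Rough sketch (toy setup): expand F(x) = (x - x1)(x2 - x) in the first
# dozen or so eigenfunctions and compare the partial sum to F.
import numpy as np
from scipy.special import j0, y0
from scipy.integrate import quad
from scipy.optimize import brentq

x1, x2 = 1.0, 2.0

def f(x, d):
    # Eigenfunction candidate, built to satisfy f(x1) = 0 automatically
    return j0(d * x) * y0(d * x1) - y0(d * x) * j0(d * x1)

def F(x):
    # Sample function to expand; vanishes at both endpoints
    return (x - x1) * (x2 - x)

# Eigenvalues d_n: roots of f(x2, d) = 0, found by scanning for sign changes
ds = np.linspace(0.1, 40.0, 4000)
d_n = [brentq(lambda d: f(x2, d), a, b)
       for a, b in zip(ds[:-1], ds[1:]) if f(x2, a) * f(x2, b) < 0]

# a_n = <f_n, F> / <f_n, f_n> with the weighted inner product
def coeff(d):
    num, _ = quad(lambda x: f(x, d) * F(x) * x, x1, x2)
    den, _ = quad(lambda x: f(x, d) ** 2 * x, x1, x2)
    return num / den

a_n = [coeff(d) for d in d_n]

# The partial sum should approach F; compare at the midpoint
xm = 1.5
print(sum(a * f(xm, d) for a, d in zip(a_n, d_n)), F(xm))
    [/code]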
     
  3. Feb 18, 2010 #3
    That's superb. Thank you.
     