# Homework Help: Bessel equation & Orthogonal Basis

1. Feb 18, 2010

### the_dialogue

I remember some of my linear algebra from my studies but can't wrap my head around this one.

1. The problem statement, all variables and given/known data

Say the solution to my DE (it happens to be Bessel's equation) is f(x), and it contains a constant parameter "d" in the argument of the Bessel functions (i.e., J(d*x) and Y(d*x)). So my solution is:

f(x)= A*J(d*x) + B*Y(d*x)

I then impose two boundary conditions, f(x1) = f(x2) = 0, which picks out discrete allowed values of d and gives a family of solutions f_n(x).

First question: The author calls these f_n(x) the "eigenfunctions" and says they form an "orthogonal basis". Why are they given these names? I don't see why these solutions should form an orthogonal basis.

Second question:

The author then states that an arbitrary function F(x) "can be expanded in this orthogonal basis" via:

F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

where

a_n = ( f_n(x) , F(x) ) / ( f_n(x) , f_n(x) )

What in the world is this on about? Any guidance would be helpful!

2. Feb 18, 2010

### vela

Staff Emeritus
The differential equation you're solving can be written in the form

$$L[f(x)]=\lambda w(x) f(x)$$

where L is a self-adjoint linear operator and w(x) is a weighting function. From linear algebra, you should recall that solutions to $A\vec{x}=\lambda\vec{x}$, where A was the matrix representing a linear transformation, were called eigenvectors with eigenvalue $\lambda$. Here f(x) plays the role of $\vec{x}$, which is why f(x) is called an eigenfunction.

A self-adjoint linear operator corresponds to a Hermitian matrix, which has real eigenvalues and orthogonal eigenvectors (neglecting the possibility of degeneracy for now), which form a basis of the vector space. Likewise, the $f_n(x)$'s are orthogonal with respect to an inner product of the form

$$\langle f, g\rangle=\int_a^b f^*(x)g(x)w(x) dx$$

and they too form a basis of a vector space of functions.
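As a concrete check (my own sketch, not from the thread), take the eigenfunctions f_n(x) = J_0(α_n x) on [0, 1], where α_n are the zeros of J_0, so that f_n(1) = 0. For Bessel's equation the weight function is w(x) = x, and distinct eigenfunctions should integrate to zero against each other:

```python
# Numerically verify orthogonality of f_n(x) = J_0(alpha_n x) on [0, 1]
# with weight w(x) = x, where alpha_n are zeros of J_0 (so f_n(1) = 0).
from scipy.special import j0, jn_zeros
from scipy.integrate import quad

alphas = jn_zeros(0, 3)  # first three zeros of J_0

def inner(m, n):
    # <f_m, f_n> = integral_0^1 J_0(alpha_m x) J_0(alpha_n x) x dx
    val, _ = quad(lambda x: j0(alphas[m] * x) * j0(alphas[n] * x) * x, 0, 1)
    return val

print(inner(0, 1))  # ~0: distinct eigenfunctions are orthogonal
print(inner(0, 0))  # nonzero: the squared norm of f_1
```

The cross terms vanish to numerical precision, while each diagonal term (the squared norm) stays finite, which is exactly the orthogonality being claimed.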

If you have a vector $\vec{x}$ and an orthogonal basis $\{\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n\}$, you can express $\vec{x}$ as a linear combination of the basis vectors:

$$\vec{x} = a_1 \vec{v}_1 + \cdots + a_n \vec{v}_n$$

Taking the inner product of $\vec{x}$ with a basis vector $\vec{v}_i$, you get

$$\langle \vec{x},\vec{v}_i \rangle = a_1 \langle \vec{v}_1, \vec{v}_i \rangle + \cdots + a_n \langle \vec{v}_n, \vec{v}_i \rangle$$

Because they're orthogonal, only the i-th term on the RHS survives, so you get

$$a_i=\frac{\langle \vec{x},\vec{v}_i \rangle}{\langle \vec{v}_i, \vec{v}_i \rangle}$$
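Here is that finite-dimensional projection recipe carried out numerically (an illustrative sketch with a basis I picked by hand; note the basis is orthogonal but not normalized, so the denominators matter):

```python
# Expand x in an orthogonal (not orthonormal) basis of R^3 using
# a_i = <x, v_i> / <v_i, v_i>, then verify the coefficients rebuild x.
import numpy as np

x = np.array([3.0, -1.0, 2.0])
vs = [np.array([1.0,  1.0,  1.0]),   # mutually orthogonal basis vectors
      np.array([1.0, -1.0,  0.0]),
      np.array([1.0,  1.0, -2.0])]

a = [np.dot(x, v) / np.dot(v, v) for v in vs]
recon = sum(ai * v for ai, v in zip(a, vs))
print(np.allclose(recon, x))  # True: the coefficients rebuild x exactly
```

Because the basis vectors are orthogonal, each coefficient can be computed independently with a single inner product; no linear system needs to be solved.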

What the author is saying is analogous to this. Now F(x) plays the role of $\vec{x}$ and your $f_n(x)$'s are the orthogonal basis vectors, and you get

$$a_i=\frac{\langle F(x),f_i(x)\rangle}{\langle f_i(x), f_i(x) \rangle}$$
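The same formula works numerically for the Bessel case. As a sketch (my own example, not the thread's problem), expand F(x) = 1 on [0, 1] in the basis f_n(x) = J_0(α_n x) with weight w(x) = x, computing each a_n by the inner-product ratio above:

```python
# Fourier-Bessel expansion of F(x) = 1 on [0, 1] in f_n(x) = J_0(alpha_n x),
# with a_n = <F, f_n> / <f_n, f_n> and weight w(x) = x.
from scipy.special import j0, jn_zeros
from scipy.integrate import quad

N = 20
alphas = jn_zeros(0, N)  # first N zeros of J_0

def coeff(n):
    num, _ = quad(lambda x: 1.0 * j0(alphas[n] * x) * x, 0, 1)  # <F, f_n>
    den, _ = quad(lambda x: j0(alphas[n] * x) ** 2 * x, 0, 1)   # <f_n, f_n>
    return num / den

x0 = 0.5
partial = sum(coeff(n) * j0(alphas[n] * x0) for n in range(N))
print(partial)  # close to F(0.5) = 1
```

Convergence is slow near x = 1 (every f_n vanishes there while F does not, so the series exhibits Gibbs-like behavior at the boundary), but at interior points the partial sums approach F(x).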

If you've ever worked with Fourier series before, this is what you were doing with an orthogonal basis consisting of the sine and cosine functions.
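The Fourier sine series is the same recipe with basis functions sin(nπx) on [0, 1] and weight w(x) = 1. A quick sketch, expanding F(x) = x(1 − x) (chosen so it matches the basis's zero boundary values and converges rapidly):

```python
# Fourier sine expansion of F(x) = x(1 - x) on [0, 1]:
# a_n = <F, sin(n pi x)> / <sin(n pi x), sin(n pi x)>.
import numpy as np
from scipy.integrate import quad

def coeff(n):
    num, _ = quad(lambda x: x * (1 - x) * np.sin(n * np.pi * x), 0, 1)
    den, _ = quad(lambda x: np.sin(n * np.pi * x) ** 2, 0, 1)
    return num / den

x0 = 0.3
F = x0 * (1 - x0)
partial = sum(coeff(n) * np.sin(n * np.pi * x0) for n in range(1, 10))
print(abs(partial - F))  # small: nine terms already reproduce F well
```

Swapping sin(nπx) for J_0(α_n x) and w(x) = 1 for w(x) = x is the only change between this and the Bessel expansion; the orthogonality machinery is identical.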

3. Feb 18, 2010

### the_dialogue

That's superb. Thank you.