Bessel equation & Orthogonal Basis

the_dialogue
I remember some of my linear algebra from my studies but can't wrap my head around this one.

Homework Statement



Say my solution to a DE (which happens to be Bessel's equation) is "f(x)", and it contains a constant "d" in the argument of the Bessel functions (i.e. J(d*x) and Y(d*x)). So my solution is:

f(x)= A*J(d*x) + B*Y(d*x)

I can then impose two boundary conditions, f(x1) = f(x2) = 0. That gives me a family of solutions f_n(x).
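For concreteness, this is what I have in mind numerically (a rough sketch only, assuming order-zero Bessel functions and example endpoints x1 = 1, x2 = 2): the two boundary conditions only have a nontrivial A, B when a determinant vanishes, and that picks out discrete values of d.

Code:
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq

x1, x2 = 1.0, 2.0   # example boundary points

def det(d):
    # determinant of [[J0(d*x1), Y0(d*x1)], [J0(d*x2), Y0(d*x2)]];
    # A*J0(d*x) + B*Y0(d*x) can vanish at both endpoints (with A, B not
    # both zero) only when this determinant is zero
    return j0(d * x1) * y0(d * x2) - j0(d * x2) * y0(d * x1)

# scan for sign changes and refine each bracket with brentq
ds = np.linspace(0.1, 30.0, 3000)
d_n = [brentq(det, a, b) for a, b in zip(ds, ds[1:]) if det(a) * det(b) < 0]
print(d_n[:5])   # the first few allowed values of d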

First question: The author calls these f_n(x) the "eigenfunctions" and the "orthogonal basis". Why are they given these names? I'm not sure why these solutions form an orthogonal basis.

Second question:

The author then states that an arbitrary vector F(x) "can be expanded in this orthogonal basis" via:

F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

where

a_n = [ (f_n(x) , F(x)) ] / [ (f_n(x) , f_n(x)) ]

What in the world is this on about? Any guidance would be helpful!
 
the_dialogue said:
First question: The author calls these f_n(x) the "eigenfunctions" and the "orthogonal basis". Why are they given these names? I'm not sure why these solutions form an orthogonal basis.
The differential equation you're solving can be written in the form

L[f(x)]=\lambda w(x) f(x)

where L is a self-adjoint linear operator and w(x) is a weighting function. From linear algebra, you should recall that solutions to A\vec{x}=\lambda\vec{x}, where A was the matrix representing a linear transformation, were called eigenvectors with eigenvalue \lambda. Here f(x) plays the role of \vec{x}, which is why f(x) is called an eigenfunction.
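In your case you can see this form explicitly. Bessel's equation x^2 f'' + x f' + (d^2 x^2 - \nu^2) f = 0 (with \nu the order of the Bessel functions) can be divided through by x and rearranged as

-\left(x f'\right)' + \frac{\nu^2}{x} f = d^2\, x\, f

so L[f] = -\left(x f'\right)' + \frac{\nu^2}{x} f is the self-adjoint operator, the eigenvalue is \lambda = d^2, and the weighting function is w(x) = x.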

A self-adjoint linear operator corresponds to a Hermitian matrix, which has real eigenvalues and orthogonal eigenvectors (neglecting the possibility of degeneracy for now), which form a basis of the vector space. Likewise, the f_n(x)'s are orthogonal with respect to an inner product of the form

\langle f, g\rangle=\int_a^b f^*(x)g(x)w(x) dx

and they too form a basis of a vector space of functions.
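If you want to see this concretely, here's a rough numerical sketch (my own illustration, assuming order-zero Bessel functions, endpoints x1 = 1 and x2 = 2, and the Bessel weight w(x) = x) that checks the orthogonality of the first two eigenfunctions:

Code:
from scipy.special import j0, y0
from scipy.optimize import brentq
from scipy.integrate import quad

x1, x2 = 1.0, 2.0

# nontrivial solutions of f(x1) = f(x2) = 0 require this determinant to vanish
det = lambda d: j0(d * x1) * y0(d * x2) - j0(d * x2) * y0(d * x1)
d1 = brentq(det, 3.0, 3.3)   # first eigenvalue (about 3.12)
d2 = brentq(det, 6.0, 6.5)   # second eigenvalue (about 6.27)

def f(x, d):
    # eigenfunction: chosen to vanish at x1, and at x2 whenever det(d) = 0
    return y0(d * x1) * j0(d * x) - j0(d * x1) * y0(d * x)

# weighted inner product <g, h> = integral of x*g(x)*h(x) over [x1, x2]
inner = lambda g, h: quad(lambda x: x * g(x) * h(x), x1, x2)[0]

print(inner(lambda x: f(x, d1), lambda x: f(x, d2)))   # approximately 0
print(inner(lambda x: f(x, d1), lambda x: f(x, d1)))   # strictly positive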

the_dialogue said:
Second question:

The author then states that an arbitrary vector F(x) "can be expanded in this orthogonal basis" via:

F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

where

a_n = [ (f_n(x) , F(x)) ] / [ (f_n(x) , f_n(x)) ]

What in the world is this on about? Any guidance would be helpful!
If you have a vector \vec{x} and an orthogonal basis \{\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n\}, you can express \vec{x} as a linear combination of the basis vectors:

\vec{x} = a_1 \vec{v}_1 + \cdots + a_n \vec{v}_n

Taking the inner product of \vec{x} with a basis vector \vec{v}_i, you get

\langle \vec{x},\vec{v}_i \rangle = a_1 \langle \vec{v}_1, \vec{v}_i \rangle + \cdots + a_n \langle \vec{v}_n, \vec{v}_i \rangle

Because they're orthogonal, only the i-th term on the RHS survives, so you get

a_i=\frac{\langle \vec{x},\vec{v}_i \rangle}{\langle \vec{v}_i, \vec{v}_i \rangle}

What the author is saying is analogous to this. Now F(x) plays the role of \vec{x} and your f_n(x)'s are the orthogonal basis vectors, and you get

a_i=\frac{\langle F(x),f_i(x)\rangle}{\langle f_i(x), f_i(x) \rangle}
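Putting the pieces together for your Bessel problem, here is a rough numerical sketch (my own illustration, with the same assumptions as above: order-zero Bessel functions on [1, 2], weight w(x) = x, and an arbitrarily chosen F(x) = (x-1)(2-x)) of computing the a_n and summing the series:

Code:
import numpy as np
from scipy.special import j0, y0
from scipy.optimize import brentq
from scipy.integrate import quad

x1, x2 = 1.0, 2.0
det = lambda d: j0(d * x1) * y0(d * x2) - j0(d * x2) * y0(d * x1)

# eigenvalues d_n from sign changes of the determinant
ds = np.linspace(0.1, 60.0, 6000)
d_n = [brentq(det, a, b) for a, b in zip(ds, ds[1:]) if det(a) * det(b) < 0]

def f(x, d):
    # eigenfunction vanishing at both endpoints when det(d) = 0
    return y0(d * x1) * j0(d * x) - j0(d * x1) * y0(d * x)

inner = lambda g, h: quad(lambda x: x * g(x) * h(x), x1, x2)[0]

F = lambda x: (x - 1.0) * (2.0 - x)   # arbitrary function to expand (example choice)

# a_n = <f_n, F> / <f_n, f_n>
a_n = [inner(lambda x: f(x, d), F) / inner(lambda x: f(x, d), lambda x: f(x, d))
       for d in d_n]

# a partial sum of the expansion should approximate F at any interior point
xt = 1.5
print(F(xt), sum(a * f(xt, d) for a, d in zip(a_n, d_n)))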

If you've ever worked with Fourier series before, this is what you were doing with an orthogonal basis consisting of the sine and cosine functions.
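For example, on 0 \le x \le L with w(x)=1, the functions \sin(n\pi x/L) are orthogonal and \langle \sin(n\pi x/L), \sin(n\pi x/L)\rangle = L/2, so the same coefficient formula reduces to the familiar Fourier sine coefficient

a_n=\frac{2}{L}\int_0^L F(x)\sin\left(\frac{n\pi x}{L}\right)dx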
 
That's superb. Thank you.
 