Bessel equation & Orthogonal Basis

SUMMARY

The discussion centers on the Bessel equation and its solutions, specifically the eigenfunctions denoted as f_n(x) which form an orthogonal basis. The solutions are derived from the differential equation L[f(x)]=λw(x)f(x), where L is a self-adjoint linear operator. The orthogonality of the eigenfunctions is established through an inner product defined as ⟨f, g⟩=∫_a^b f*(x)g(x)w(x) dx. Additionally, any arbitrary vector F(x) can be expressed as a linear combination of these eigenfunctions using the coefficients a_n calculated via the inner product.

PREREQUISITES
  • Understanding of Bessel functions, specifically J(d*x) and Y(d*x).
  • Familiarity with self-adjoint linear operators and their properties.
  • Knowledge of inner product spaces and orthogonality in function spaces.
  • Basic concepts of eigenvalues and eigenfunctions in linear algebra.
NEXT STEPS
  • Study the properties of Bessel functions and their applications in solving differential equations.
  • Learn about self-adjoint operators and their role in quantum mechanics and differential equations.
  • Explore the concept of orthogonal bases in function spaces, including Fourier series.
  • Investigate the derivation and applications of inner products in functional analysis.
USEFUL FOR

Mathematicians, physicists, and engineering students focusing on differential equations, linear algebra, and functional analysis will benefit from this discussion.

the_dialogue
I remember some of my linear algebra from my studies but can't wrap my head around this one.

Homework Statement



Say my solution to a DE (it happens to be Bessel's equation) is "f(x)", and it contains a constant parameter "d" in the argument of the Bessel functions (i.e., J(d*x) and Y(d*x)). So my solution is:

f(x)= A*J(d*x) + B*Y(d*x)

I can narrow down the general solution for f(x) by imposing two boundary conditions, f(x1) = f(x2) = 0. That gives me a discrete family of solutions f_n(x).

First question: The author calls these f_n(x) the "eigenfunctions" and says they form an "orthogonal basis". Why are they given these names? I'm not sure why these solutions form an orthogonal basis.

Second question:

The author then states that an arbitrary vector F(x) "can be expanded in this orthogonal basis" via:

F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

where

a_n = ( f_n(x), F(x) ) / ( f_n(x), f_n(x) )

What in the world is this on about? Any guidance would be helpful!
 
the_dialogue said:
First question: The author calls this f_n(x) the "eigenfunctions" and the "orthogonal basis". Why is this given these names? I'm not sure why these solutions form an orthogonal basis.
The differential equation you're solving can be written in the form

[tex]L[f(x)]=\lambda w(x) f(x)[/tex]

where L is a self-adjoint linear operator and w(x) is a weighting function. From linear algebra, you should recall that solutions to [itex]A\vec{x}=\lambda\vec{x}[/itex], where A was the matrix representing a linear transformation, were called eigenvectors with eigenvalue [itex]\lambda[/itex]. Here f(x) plays the role of [itex]\vec{x}[/itex], which is why f(x) is called an eigenfunction.

A self-adjoint linear operator corresponds to a Hermitian matrix, which has real eigenvalues and orthogonal eigenvectors (neglecting the possibility of degeneracy for now), which form a basis of the vector space. Likewise, the [itex]f_n(x)[/itex]'s are orthogonal with respect to an inner product of the form

[tex]\langle f, g\rangle=\int_a^b f^*(x)g(x)w(x) dx[/tex]

and they too form a basis of a vector space of functions.
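Not part of the original reply, but you can check this orthogonality numerically. The sketch below assumes a standard special case (not the poster's exact two-point problem): the interval [0, 1], weight w(x) = x, and eigenfunctions f_n(x) = J_0(d_n x) where the d_n are chosen as zeros of J_0 so that f_n(1) = 0.

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

# d_n chosen as zeros of J_0, so f_n(x) = J_0(d_n * x) vanishes at x = 1
zeros = jn_zeros(0, 3)

def inner(n, m):
    # Inner product <f_n, f_m> with weight w(x) = x on [0, 1]
    val, _ = quad(lambda x: jv(0, zeros[n] * x) * jv(0, zeros[m] * x) * x, 0, 1)
    return val

print(inner(0, 1))  # ~0: distinct eigenfunctions are orthogonal
print(inner(0, 0))  # > 0: the squared norm of f_0
```

The cross term vanishes to within quadrature error, while each diagonal term is a positive norm, exactly as the Hermitian-matrix analogy predicts.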

Second question:

The author then states that an arbitrary vector F(x) "can be expanded in this orthogonal basis" via:

F(x)= sum{from n=1 to inf} [ a_n*f_n(x) ]

where

a_n = ( f_n(x), F(x) ) / ( f_n(x), f_n(x) )

What in the world is this on about? Any guidance would be helpful!
If you have a vector [itex]\vec{x}[/itex] and an orthogonal basis [itex]\{\vec{v}_1, \vec{v}_2, \cdots, \vec{v}_n\}[/itex], you can express [itex]\vec{x}[/itex] as a linear combination of the basis vectors:

[tex]\vec{x} = a_1 \vec{v}_1 + \cdots + a_n \vec{v}_n[/tex]

Taking the inner product of [itex]\vec{x}[/itex] with a basis vector [itex]\vec{v}_i[/itex], you get

[tex]\langle \vec{x},\vec{v}_i \rangle = a_1 \langle \vec{v}_1, \vec{v}_i \rangle + \cdots + a_n \langle \vec{v}_n, \vec{v}_i \rangle[/tex]

Because they're orthogonal, only the i-th term on the RHS survives, so you get

[tex]a_i=\frac{\langle \vec{x},\vec{v}_i \rangle}{\langle \vec{v}_i, \vec{v}_i \rangle}[/tex]
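To make the finite-dimensional version concrete (this example is mine, not from the thread), here is the same projection formula applied to an orthogonal, non-normalized basis of R^3:

```python
import numpy as np

# An orthogonal (but not orthonormal) basis of R^3
v = [np.array([1., 1., 0.]), np.array([1., -1., 0.]), np.array([0., 0., 2.])]
x = np.array([3., 1., 4.])

# a_i = <x, v_i> / <v_i, v_i>  -- the projection coefficients
a = [np.dot(x, vi) / np.dot(vi, vi) for vi in v]

# Reassemble x from the basis; this recovers it exactly
x_rebuilt = sum(ai * vi for ai, vi in zip(a, v))
print(np.allclose(x, x_rebuilt))  # True
```

Dividing by ⟨v_i, v_i⟩ is what lets the basis be merely orthogonal rather than orthonormal; with unit vectors that denominator would be 1.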

What the author is saying is analogous to this. Now F(x) plays the role of [itex]\vec{x}[/itex] and your [itex]f_n(x)[/itex]'s are the orthogonal basis vectors, and you get

[tex]a_i=\frac{\langle F(x),f_i(x)\rangle}{\langle f_i(x), f_i(x) \rangle}[/tex]

If you've ever worked with Fourier series before, this is what you were doing with an orthogonal basis consisting of the sine and cosine functions.
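Putting the two pieces together (again my own numerical sketch, under the same assumed setup as before: interval [0, 1], weight w(x) = x, f_n(x) = J_0(d_n x) with d_n the zeros of J_0), one can expand a sample function F(x) = 1 - x^2 in the Bessel eigenfunctions and check that the truncated series reproduces it:

```python
import numpy as np
from scipy.special import jv, jn_zeros
from scipy.integrate import quad

zeros = jn_zeros(0, 20)       # d_n: first 20 zeros of J_0
F = lambda x: 1.0 - x**2      # a sample function satisfying F(1) = 0

def a(n):
    # a_n = <F, f_n> / <f_n, f_n>, with weight w(x) = x
    d = zeros[n]
    num, _ = quad(lambda x: F(x) * jv(0, d * x) * x, 0, 1)
    den, _ = quad(lambda x: jv(0, d * x)**2 * x, 0, 1)
    return num / den

coef = [a(n) for n in range(20)]
approx = lambda x: sum(c * jv(0, d * x) for c, d in zip(coef, zeros))
print(abs(approx(0.5) - F(0.5)))  # small truncation error
```

This is the Fourier-Bessel series: structurally identical to an ordinary Fourier series, just with J_0(d_n x) in place of sines and cosines and the weight x in the inner product.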
 
That's superb. Thank you.
 
