What Is the Meaning of Orthogonality in Fourier Analysis and Bessel Functions?

SUMMARY

The discussion centers on the concept of orthogonality in Fourier analysis and Bessel functions, examining the integral expressions that define it. The integral of the product of sine functions, \int_{0}^{2\pi}\sin mx\,\sin nx\,dx, equals zero unless m=n, in which case it evaluates to \pi. In contrast, the Bessel-function integral \int_{0}^{2L}xJ_{0}(\sqrt{\lambda_{n}}x)J_{0}(\sqrt{\lambda_{m}}x)\,dx carries an extra weight factor x and does not evaluate to the same simple constant when m=n. The discussion emphasizes the generalization of orthogonality from finite-dimensional vectors to functions treated as infinite-dimensional vectors, with the integral of a product playing the role of the dot product.

PREREQUISITES
  • Understanding of Fourier analysis and its applications.
  • Familiarity with Bessel functions and their properties.
  • Knowledge of integral calculus and inner product spaces.
  • Concept of eigenvalues in the context of differential equations.
NEXT STEPS
  • Study the properties of Bessel functions and their orthogonality relations.
  • Learn about the application of Fourier series in solving partial differential equations (PDEs).
  • Explore the concept of inner products in functional spaces and their implications.
  • Investigate the role of kernels in generalizing dot products for functions.
USEFUL FOR

Mathematicians, physicists, and engineers involved in signal processing, differential equations, and mathematical analysis will benefit from this discussion.

AStaunton
A question on orthogonality, relating to Fourier analysis and also to solutions of PDEs by separation of variables.

I've used the fact that the following expression (I chose sine; cosine also works):

[tex]\int_{0}^{2\pi}\sin mx\sin nxdx[/tex]

equals 0 unless m=n, in which case it equals \pi, both in Fourier analysis and in determining the coefficients of solutions of PDEs by the method of separation of variables.
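(A quick symbolic check of this claim, not part of the original post; the particular values m=2, n=3 and m=n=4 are arbitrary examples:)

```python
import sympy as sp

x = sp.symbols('x')

def sine_inner(m, n):
    """Integral of sin(mx)*sin(nx) over [0, 2*pi]."""
    return sp.integrate(sp.sin(m * x) * sp.sin(n * x), (x, 0, 2 * sp.pi))

print(sine_inner(2, 3))  # 0, since m != n
print(sine_inner(4, 4))  # pi, since m == n
```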

The word orthogonal means perpendicular. What I have never understood is in what sense sin(mx) is perpendicular to sin(nx)?

I have also used this orthogonality method when dealing with Bessel functions, to collapse a summation to one term, as in:

[tex]\int_{0}^{2L}xJ_{0}(\sqrt{\lambda_{n}}x)J_{0}(\sqrt{\lambda_{m}}x)dx[/tex]

where in this problem \sqrt{\lambda} is the eigenvalue. The difference is that here, when m=n, it doesn't evaluate to L as it would have if we were dealing with trig functions. I also had to multiply by an extra x, as you can see in the above expression...

Again my question is: in what sense are the Bessel functions perpendicular?
Why must I multiply the expression by an extra x when dealing with Bessel functions?
And, out of interest, does the Bessel integral evaluate to something simple when m=n, in the same way that the trig integrals evaluate to \pi, or more generally L?
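(A numeric sketch of this integral, not part of the original post. It assumes the eigenvalues come from the boundary condition J_0(\sqrt{\lambda_n}L)=0, so that \sqrt{\lambda_n} = \alpha_n/L with \alpha_n the n-th zero of J_0, and it uses the interval [0, L]; when m=n the weighted integral comes out as L^2 J_1(\alpha_n)^2/2, a standard orthogonality relation:)

```python
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

L = 1.0
alpha = jn_zeros(0, 5)  # first five zeros of J0, so J0(alpha[k]) = 0

def bessel_inner(m, n):
    """Weighted inner product: integral of x*J0(a_m x/L)*J0(a_n x/L) over [0, L]."""
    f = lambda t: t * j0(alpha[m] * t / L) * j0(alpha[n] * t / L)
    return quad(f, 0, L)[0]

print(bessel_inner(0, 1))  # ~0: distinct eigenfunctions are orthogonal
print(bessel_inner(2, 2))  # matches L**2 * j1(alpha[2])**2 / 2
```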

I'd be grateful for clarity on these points.

Andrew
 
To me, the notion of orthogonality for functions is a generalization of the notion of orthogonality of finite dimensional vectors based on the dot product ("inner product").

For example in 2 dimensions [tex]\mathbf{a} = (a_x,a_y)[/tex] is orthogonal to [tex]\mathbf{b} = (b_x,b_y)[/tex] iff [tex]\mathbf{a} \cdot \mathbf{b} = 0 = a_x b_x + a_y b_y[/tex]

If you think of functions as vectors with an infinite number of components, the natural way to generalize the dot product to functions is to take the integral of their product, since an integral is based on the idea of "an infinite sum".
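(A small numeric illustration of this idea, not part of the original reply: sample two functions at N points so each becomes an ordinary N-component vector, and the Riemann sum of their product is just the finite-dimensional dot product scaled by the step size.)

```python
import numpy as np

# Sample sin(x) and sin(2x) at N points on [0, 2*pi): two "vectors" of N components.
N = 100_000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N

f = np.sin(x)
g = np.sin(2 * x)

# The Riemann sum of f*g is the ordinary dot product times dx.
print(np.dot(f, g) * dx)  # ~0: sin(x) and sin(2x) are "perpendicular"
print(np.dot(f, f) * dx)  # ~pi: the m = n case
```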

Beyond 3 dimensions, I can't visualize what orthogonality "looks" like. So visualizing it for functions (as infinite-dimensional vectors) isn't any more of a problem! The thing that I can appreciate in more than 3 dimensions is that if you have a "basis" for the vector space and you want to represent a vector in that basis, you do so by projecting the vector onto each of the vectors in the basis. For finite-dimensional vectors, a handy way to do the projection is to use the dot product. If the basis vector [tex]\mathbf{u}[/tex] is a vector of unit length, you can find the projection of [tex]\mathbf{a}[/tex] onto [tex]\mathbf{u}[/tex] by computing [tex]\mathbf{a} \cdot \mathbf{u}[/tex]. If representing [tex]\mathbf{a}[/tex] as a sum of basis vectors assigns a zero coefficient to [tex]\mathbf{u}[/tex], then I can grasp (even in higher-dimensional spaces) the intuitive idea that somehow [tex]\mathbf{a}[/tex] is orthogonal to [tex]\mathbf{u}[/tex], since [tex]\mathbf{a}[/tex] has zero projection on it.
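(A sketch of this projection idea in action, not part of the original reply; the example function f(x) = 3\sin x - 2\sin 5x is an arbitrary choice. Projecting f onto each sin(nx), using the integral as the dot product and dividing by \pi = \langle\sin nx,\sin nx\rangle, recovers the coefficients.)

```python
import numpy as np

# Recover the coefficients of f(x) = 3*sin(x) - 2*sin(5x) by projection.
N = 200_000
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = 3 * np.sin(x) - 2 * np.sin(5 * x)

def coeff(n):
    # <f, sin(nx)> / <sin(nx), sin(nx)> = (1/pi) * integral of f*sin(nx)
    return np.dot(f, np.sin(n * x)) * dx / np.pi

print([coeff(n) for n in range(1, 6)])  # ~[3, 0, 0, 0, -2]
```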

The best kind of basis for a vector space is one where the basis vectors are mutually orthogonal. We can express this by saying that the dot product of any two distinct basis vectors is zero.

If you look at what's done with functions, the same sort of procedures are performed using the integral of a product as if it were a dot product. There are mathematical worries that arise: integrals of some functions don't exist, and if the integrals exist and you succeed in writing a function as an infinite sum of basis functions, does that infinite sum converge?

I don't know why an extra x is needed for Bessel functions, but there are other ways to generalize the dot product than taking a simple product of functions. For example, we can regard [tex]\mathbf{f} \cdot \mathbf{g} = \int f(x) g(x) K(x) dx[/tex]. The [tex]K(x)[/tex] is called a "kernel". Whether this relates to the use of the word "kernel" in linear algebra, I don't know.
 
