What Is the Meaning of Orthogonality in Fourier Analysis and Bessel Functions?

AStaunton
A question on orthogonality, relating to Fourier analysis and also to solving PDEs by separation of variables.

I've used the fact that the following expression (I chose sine; cosine works too):

\int_{0}^{2\pi}\sin(mx)\,\sin(nx)\,dx

equals 0 unless m = n, in which case (for positive integers m and n) it equals \pi, both in Fourier analysis and in determining the coefficients of solutions of PDEs by the method of separation of variables.
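
For reference, this follows from the product-to-sum identity:

\int_{0}^{2\pi}\sin(mx)\,\sin(nx)\,dx=\frac{1}{2}\int_{0}^{2\pi}\left[\cos\bigl((m-n)x\bigr)-\cos\bigl((m+n)x\bigr)\right]dx

and for positive integers m and n both cosines integrate to zero over the full period unless m = n, in which case the first cosine is identically 1 and contributes \frac{1}{2}\cdot 2\pi = \pi.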

The word orthogonal means perpendicular - what I have never understood is in what sense is sin(mx) perpendicular to sin(nx)?

I have also used this orthogonality method when dealing with Bessel functions, to collapse a summation to one term, as in:

\int_{0}^{2L}x\,J_{0}(\sqrt{\lambda_{n}}\,x)\,J_{0}(\sqrt{\lambda_{m}}\,x)\,dx

where in this problem \sqrt{\lambda} is the eigenvalue. The difference is that here, when m = n, the integral doesn't evaluate to L as it would have if we were dealing with trig functions. I also had to multiply by an extra x, as you can see in the above expression...

Again my question is: in what sense are the Bessel functions perpendicular?
Why must the expression be multiplied by an extra x when dealing with Bessel functions?
And, out of interest, does the Bessel integral evaluate to something simple when m = n, in the same way that the trig integrals evaluate to \pi or, more generally, L?

I'd be grateful for clarity on these points.

Andrew
 
To me, the notion of orthogonality for functions is a generalization of the notion of orthogonality of finite dimensional vectors based on the dot product ("inner product").

For example, in 2 dimensions \mathbf{a} = (a_x, a_y) is orthogonal to \mathbf{b} = (b_x, b_y) iff \mathbf{a} \cdot \mathbf{b} = a_x b_x + a_y b_y = 0.

If you think of functions as vectors with an infinite number of components, the natural way to generalize the dot product to functions is to take the integral of their product, since an integral is based on the idea of "an infinite sum".
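
To make that concrete, here is a rough numerical sketch (assuming Python with numpy and scipy, which are not part of the original discussion) of this integral-as-dot-product for the sines in the question:

import numpy as np
from scipy.integrate import quad

def inner(f, g):
    # "Dot product" of two functions: the integral of f(x)*g(x) over [0, 2*pi].
    val, _ = quad(lambda x: f(x) * g(x), 0.0, 2.0 * np.pi)
    return val

for m in (1, 2, 3):
    for n in (1, 2, 3):
        val = inner(lambda x: np.sin(m * x), lambda x: np.sin(n * x))
        print(m, n, round(val, 6))  # roughly 0 for m != n, roughly pi for m == n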

Beyond 3 dimensions, I can't visualize what orthogonality "looks" like, so visualizing it for functions (as infinite-dimensional vectors) isn't any more of a problem! The thing I can appreciate in more than 3 dimensions is that if you have a "basis" for the vector space and you want to represent a vector in that basis, you do so by projecting the vector onto each of the vectors in the basis. For finite-dimensional vectors, a handy way to do the projection is to use the dot product: if the basis vector \mathbf{u} has unit length, you can find the projection of \mathbf{a} onto \mathbf{u} by computing \mathbf{a} \cdot \mathbf{u}. If representing \mathbf{a} as a sum of basis vectors assigns a zero coefficient to \mathbf{u}, then I can grasp (even in higher-dimensional spaces) the intuitive idea that \mathbf{a} is somehow orthogonal to \mathbf{u}, since \mathbf{a} has zero projection onto it.
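
As a sketch of that projection idea (again assuming numpy/scipy; the function f below is just an example I made up), projecting f(x) = 3\sin(2x) - 0.5\sin(5x) onto each \sin(nx), and dividing by its squared length \pi, recovers the coefficients:

import numpy as np
from scipy.integrate import quad

# An example function whose sine coefficients we already know.
f = lambda x: 3.0 * np.sin(2.0 * x) - 0.5 * np.sin(5.0 * x)

for n in range(1, 7):
    # Project f onto sin(nx), then divide by ||sin(nx)||^2 = pi on [0, 2*pi].
    proj, _ = quad(lambda x: f(x) * np.sin(n * x), 0.0, 2.0 * np.pi)
    print(n, round(proj / np.pi, 6))  # roughly 3 at n = 2, -0.5 at n = 5, 0 otherwise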

The best kind of basis for a vector space is one where the basis vectors are mutually orthogonal. We can express this by saying that the dot product of any two distinct basis vectors is zero.

If you look at what's done with functions, the same sort of procedures are performed using the integral of a product as if it were a dot product. There are mathematical worries that arise: integrals of some functions don't exist, and even if the integrals exist and you succeed in writing a function as an infinite sum of basis functions, does this infinite sum converge?

I don't know why an extra x is needed for Bessel functions, but there are other ways to generalize the dot product than taking a simple product of functions. For example, we can define \mathbf{f} \cdot \mathbf{g} = \int f(x)\, g(x)\, K(x)\, dx. The K(x) is called a "kernel" (or, for inner products of this kind, a "weight function"). Whether this relates to the use of the word "kernel" in linear algebra, I don't know.
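
Here is a numerical sketch of that weighted dot product for the Bessel case. I'm assuming (these are my assumptions, not the original problem's) an interval [0, a] with a = 1, and that the eigenvalue condition makes \sqrt{\lambda_{n}}\,a the n-th positive zero \alpha_{n} of J_{0}; scipy.special supplies jv and jn_zeros.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv, jn_zeros

a = 1.0                   # hypothetical interval [0, a]
alphas = jn_zeros(0, 3)   # first three positive zeros of J0

for m, alpha_m in enumerate(alphas, start=1):
    for n, alpha_n in enumerate(alphas, start=1):
        # Weighted "dot product": integral of x * J0(alpha_m x/a) * J0(alpha_n x/a) over [0, a].
        val, _ = quad(lambda x: x * jv(0, alpha_m * x / a) * jv(0, alpha_n * x / a), 0.0, a)
        print(m, n, round(val, 6))
    # Standard Fourier-Bessel normalization for the diagonal entry.
    print("m =", m, "predicted diagonal:", round(0.5 * a**2 * jv(1, alpha_m)**2, 6))

With the weight x included, the off-diagonal entries come out numerically zero, which is the sense in which these particular Bessel functions are "perpendicular" under that weighted dot product, and the m = n entries match \frac{a^{2}}{2}J_{1}^{2}(\alpha_{m}).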
 