A question on orthogonality relating to Fourier analysis and also solutions of PDEs

In summary, orthogonality for functions is a generalization of orthogonality for finite-dimensional vectors, with the dot product replaced by an integral of a product. This inner product is used in Fourier analysis and in determining coefficients for solutions of PDEs by separation of variables. Bessel functions satisfy an analogous orthogonality relation, but with a different family of basis functions and an extra factor of x acting as a weight. Orthogonality in higher dimensions can still be pictured as a vector having zero projection onto a basis vector, though the calculations raise subtleties such as convergence. The dot product can be generalized further by inserting a weight (kernel) function into the integral.
  • #1
AStaunton
A question on orthogonality relating to Fourier analysis and also to solutions of PDEs by separation of variables.

I've used the fact that the following expression (I chose sine; cosine also works):

[tex]\int_{0}^{2\pi}\sin mx\,\sin nx\,dx[/tex]

equals 0 unless m = n, in which case it equals π, in Fourier analysis and also in determining the coefficients of solutions for PDEs by the method of separation of variables.

The word orthogonal means perpendicular. What I have never understood is: in what sense is sin(mx) perpendicular to sin(nx)?

Also, I have used this orthogonality method when dealing with Bessel functions, to collapse a summation to one term, as in:

[tex]\int_{0}^{2L}xJ_{0}(\sqrt{\lambda_{n}}x)J_{0}(\sqrt{\lambda_{m}}x)\,dx[/tex]

where in this problem [tex]\sqrt{\lambda}[/tex] is the eigenvalue. The difference is that here, when m = n, it doesn't evaluate to L as it would for trig functions. I also had to multiply by an extra x, as you can see in the above expression...

Again my question is: in what sense are the Bessel functions perpendicular?
Why must the expression be multiplied by an extra x when dealing with Bessel functions?
And, out of interest, does the Bessel integral evaluate to something simple when m = n, in the same way that the trig integrals evaluate to π or, more generally, L?

I'd be grateful for clarity on these points.

Andrew
 
  • #2


To me, the notion of orthogonality for functions is a generalization of the notion of orthogonality of finite dimensional vectors based on the dot product ("inner product").

For example, in 2 dimensions [tex] \mathbf{a} = (a_x,a_y) [/tex] is orthogonal to [tex] \mathbf{b} = (b_x,b_y) [/tex] iff [tex] \mathbf{a} \cdot \mathbf{b} = a_x b_x + a_y b_y = 0 [/tex]

If you think of functions as vectors with infinitely many components, the natural way to generalize the dot product to functions is to take the integral of their product, since an integral is based on the idea of an "infinite sum".
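
As a quick numerical check of this "integral as dot product" idea (my addition, not part of the original reply, assuming NumPy and SciPy are available), the orthogonality relation quoted in the question can be verified directly:

[code]
# Check that the integral of sin(mx)*sin(nx) over [0, 2*pi] behaves
# exactly like a dot product of two "infinite-dimensional vectors".
import numpy as np
from scipy.integrate import quad

def sin_inner(m, n):
    """The 'dot product' of sin(mx) and sin(nx) over [0, 2*pi]."""
    value, _ = quad(lambda x: np.sin(m * x) * np.sin(n * x), 0.0, 2.0 * np.pi)
    return value

print(sin_inner(3, 5))  # ~0: distinct modes are orthogonal
print(sin_inner(4, 4))  # ~pi: the "length squared" of a single mode
[/code]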

Beyond 3 dimensions, I can't visualize what orthogonality "looks" like. So visualizing it for functions (as infinite-dimensional vectors) isn't any more of a problem! The thing that I can appreciate in more than 3 dimensions is that if you have a "basis" for the vector space and you want to represent a vector in that basis, you do so by projecting the vector onto each of the vectors in the basis. For finite-dimensional vectors, a handy way to do the projection is to use the dot product. If the basis vector [tex] \mathbf{u} [/tex] is a vector of unit length, you can find the projection of [tex] \mathbf{a} [/tex] onto [tex] \mathbf{u} [/tex] by computing [tex] \mathbf{a} \cdot \mathbf{u} [/tex]. If representing [tex] \mathbf{a} [/tex] as a sum of basis vectors assigns a zero coefficient to [tex] \mathbf{u} [/tex], then I can grasp (even in higher-dimensional spaces) the intuitive idea that [tex] \mathbf{a} [/tex] is somehow orthogonal to [tex] \mathbf{u} [/tex], since [tex] \mathbf{a} [/tex] has zero projection on it.
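
To make the projection idea concrete, here is a small sketch (my own illustration, again assuming NumPy and SciPy): each Fourier sine coefficient of a square wave is the projection of the function onto sin(nx), and a partial sum of the series rebuilds the function from those projections.

[code]
import numpy as np
from scipy.integrate import quad

def f(x):
    """A square wave on [0, 2*pi]: +1 on (0, pi), -1 on (pi, 2*pi)."""
    return 1.0 if x < np.pi else -1.0

def b(n):
    """Projection of f onto sin(nx); dividing by pi normalizes the basis."""
    value, _ = quad(lambda x: f(x) * np.sin(n * x), 0.0, 2.0 * np.pi,
                    points=[np.pi])
    return value / np.pi

# Rebuild the function at one point from its projections.
x0 = np.pi / 2.0
approx = sum(b(n) * np.sin(n * x0) for n in range(1, 50))
print(approx)  # close to f(pi/2) = 1, and improves with more terms
[/code]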

The best kind of basis for a vector space is one where the basis vectors are mutually orthogonal. We can express this by saying that the dot product of any two distinct basis vectors is zero.

If you look at what's done with functions, the same sort of procedures are performed using the integral of a product as if it were a dot product. Mathematical worries do arise: integrals of some functions don't exist, and even when the integrals exist and you succeed in writing a function as an infinite sum of basis functions, you still have to ask whether that infinite sum converges.

I don't know offhand why an extra x is needed for Bessel functions, but there are other ways to generalize the dot product than taking a simple product of functions. For example, we can define [tex] \mathbf{f} \cdot \mathbf{g} = \int f(x) g(x) K(x)\, dx [/tex]. The [tex] K(x) [/tex] is usually called a "weight function" (some texts call it a kernel). Whether this relates to the use of the word "kernel" in linear algebra, I don't know.
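
For what it's worth, the extra x is exactly such a weight function: it appears when Bessel's equation is put into Sturm-Liouville form. And the m = n case does evaluate to something simple. A standard result (not derived in this thread) is that if the eigenvalues are the positive zeros [tex] \alpha_n [/tex] of [tex] J_0 [/tex] and the interval is normalized to [0, 1] (a simplification of the thread's [0, 2L]), then [tex] \int_0^1 x J_0(\alpha_m x) J_0(\alpha_n x)\, dx = \tfrac{1}{2} J_1(\alpha_n)^2 \delta_{mn} [/tex]. A numerical sketch checking this (assuming SciPy is available):

[code]
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, jn_zeros

alpha = jn_zeros(0, 5)  # first five positive zeros of J0

def bessel_inner(m, n):
    """Weighted 'dot product': int_0^1 x J0(alpha_m x) J0(alpha_n x) dx."""
    value, _ = quad(lambda x: x * j0(alpha[m] * x) * j0(alpha[n] * x),
                    0.0, 1.0)
    return value

print(bessel_inner(0, 2))       # ~0: orthogonal when m != n
print(bessel_inner(1, 1))       # the m = n "length squared"...
print(0.5 * j1(alpha[1]) ** 2)  # ...matches J1(alpha_n)^2 / 2
[/code]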
 

What is orthogonality in relation to Fourier analysis?

Orthogonality in Fourier analysis refers to two functions being perpendicular in the sense of an inner product: the integral of their product over the interval of interest is zero. It does not mean that their graphs intersect at right angles. In the context of Fourier analysis, this orthogonality is what makes it possible to decompose a function into a series of sine and cosine functions.

How is orthogonality used in Fourier analysis to solve PDEs?

Orthogonality plays a crucial role in solving partial differential equations (PDEs) using Fourier analysis. By expressing a function as a series of sine and cosine functions, a PDE can be reduced to ordinary differential equations for the individual modes (or, with the Fourier transform, to algebraic equations), which are easier to solve. The orthogonality of these functions then allows the coefficients in the series to be determined through integration, leading to a solution of the PDE.
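
Concretely (a standard computation, with [tex] f [/tex], [tex] L [/tex], and [tex] b_n [/tex] as generic placeholders rather than anything from this thread): if an initial condition is expanded as [tex] f(x) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L} [/tex], multiplying both sides by [tex] \sin\frac{m\pi x}{L} [/tex] and integrating collapses the sum to a single term,

[tex]\int_0^L f(x)\sin\frac{m\pi x}{L}\,dx = \sum_{n=1}^{\infty} b_n \int_0^L \sin\frac{n\pi x}{L}\sin\frac{m\pi x}{L}\,dx = b_m \frac{L}{2},[/tex]

so [tex] b_m = \frac{2}{L}\int_0^L f(x)\sin\frac{m\pi x}{L}\,dx [/tex].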

What is the Fourier series and how does it relate to orthogonality?

The Fourier series is a mathematical tool for representing a function as a sum of sine and cosine functions. The technique relies on the orthogonality of these functions: each coefficient in the series is extracted by integrating the function against the corresponding basis function. The Fourier series is widely used in Fourier analysis to solve PDEs and other mathematical problems.

What are some practical applications of Fourier analysis and orthogonality?

Fourier analysis and orthogonality have a wide range of practical applications, including signal processing, image analysis, and quantum mechanics. In signal processing, Fourier analysis is used to decompose signals into their frequency components, allowing for efficient filtering and compression. In image analysis, orthogonality is used to transform images for compression and enhancement. In quantum mechanics, Fourier analysis is used to describe the wave-like behavior of particles.

How does the concept of orthogonality extend beyond Fourier analysis?

Orthogonality is a fundamental concept in mathematics and has applications beyond Fourier analysis. In linear algebra, orthogonal vectors represent perpendicular directions in space. In abstract algebra, orthogonal idempotents are elements whose pairwise products are zero. In geometry, orthogonal lines intersect at right angles. In statistics, orthogonal variables are uncorrelated. The concept of orthogonality is a powerful tool in many areas of mathematics and science.
