A curious thing about orthogonal polynomials

In summary, for each of the classical orthogonal-polynomial weights listed below, the Fourier cosine transform of the weight turns out to have only real zeros; the discussion looks at how far the symmetry and analyticity of the transform, and the special properties of the weights themselves, go toward explaining this.
  • #1
zetafunction
Given a set of orthogonal polynomials [tex] p_{n}(x) [/tex] with respect to a positive measure [tex] \mu(x) > 0 [/tex] on an interval (a,b),

I have noticed in several cases that the function f(z) defined by the integral transform

[tex] \int_{a}^{b}dx\,\mu(x)\cos(xz)=f(z) [/tex]

ALWAYS has only real roots!!

* Laguerre: the measure is [tex] e^{-x} [/tex] on (0,oo), and [tex] \int_{0}^{\infty}dx\,e^{-x}\cos(xz)=(1+z^{2})^{-1} [/tex], which has no finite roots and only tends to zero as z tends to infinity (a short check of this evaluation follows the list)

* Chebyshev: the measure is [tex] (1-x^{2})^{-1/2} [/tex] on (-1,1), and [tex] \int_{-1}^{1}dx\,\cos(xz)(1-x^{2})^{-1/2}=\pi J_{0}(z) [/tex], and ALL of its roots are real

* Hermite: the measure is [tex] e^{-x^{2}} [/tex] on (-oo,oo), and the transform is [tex] \int_{-\infty}^{\infty}dx\,e^{-x^{2}}\cos(xz)=\sqrt{\pi}\,e^{-z^{2}/4} [/tex], which again has no finite roots and only tends to zero as z tends to infinity

* Legendre: the measure is 1 on (-1,1), and the transform is [tex] \int_{-1}^{1}dx\,\cos(xz)=2\sin(z)/z [/tex], and ALL of its roots are real
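
As a quick check of the first evaluation (for real z, writing the cosine as the real part of a complex exponential):

[tex] \int_{0}^{\infty}e^{-x}\cos(xz)\,dx=\mathrm{Re}\int_{0}^{\infty}e^{-(1-iz)x}\,dx=\mathrm{Re}\,\frac{1}{1-iz}=\frac{1}{1+z^{2}} [/tex]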


Why does this happen? Also, it seems that the Fourier cosine transform f(z) is always an ENTIRE function (at least when the weight has compact support or decays fast enough) whose roots are all real.
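
Not a proof, but a quick numerical sanity check is easy to set up. This is only a sketch (it assumes numpy and scipy are available), and it only samples the real axis, so it can locate real zeros by sign changes but cannot by itself rule out complex ones:

[code]
# Evaluate f(z) = \int mu(x) cos(xz) dx by quadrature on a grid of real z
# and bracket real zeros by sign changes (assumes numpy and scipy).
import numpy as np
from scipy.integrate import quad

transforms = {
    "Legendre,  weight 1 on (-1,1)":
        lambda z: quad(lambda x: np.cos(x * z), -1.0, 1.0)[0],
    # substitution x = sin(t) removes the endpoint singularity of the weight
    "Chebyshev, weight (1-x^2)^(-1/2) on (-1,1)":
        lambda z: quad(lambda t: np.cos(z * np.sin(t)), -np.pi / 2, np.pi / 2)[0],
    "Hermite,   weight exp(-x^2) on (-inf,inf)":
        lambda z: quad(lambda x: np.exp(-x * x) * np.cos(x * z), -np.inf, np.inf)[0],
}

zs = np.linspace(0.05, 12.0, 600)
for name, f in transforms.items():
    vals = np.array([f(z) for z in zs])
    sign_change = np.sign(vals[:-1]) != np.sign(vals[1:])
    # ignore sign flips where f is already numerically zero (the Hermite case decays to ~0)
    significant = np.maximum(np.abs(vals[:-1]), np.abs(vals[1:])) > 1e-7
    print(name, "-> real zeros below 12 near", np.round(zs[:-1][sign_change & significant], 2))
[/code]

The Legendre and Chebyshev cases report zeros near those of sin(z)/z and J_0(z), and the Hermite case reports none, consistent with the closed forms above.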
 
  • #2


First of all, orthogonal polynomials and their associated measures are a fundamental tool in mathematical analysis, with many applications in physics, engineering, and statistics. A family of orthogonal polynomials is a sequence of polynomials that are pairwise orthogonal with respect to a given measure on an interval: the weighted inner product of any two different polynomials in the sequence is zero.

Now, let's consider the integral transform given in the forum post:

[tex] \int_{a}^{b}dx\,\mu(x)\cos(xz)=f(z) [/tex]

This is the Fourier cosine transform of the weight, and it is closely related to the ordinary Fourier transform: for an even weight on a symmetric interval the two coincide, because the sine part of the exponential integrates to zero.

Two symmetries of f(z) are immediate. Because the cosine is even in z, f(-z) = f(z), so f is an even function. And because the weight is real-valued, f takes real values on the real axis and its zeros are symmetric under complex conjugation, so any non-real zeros would have to occur in quadruplets [tex] \pm z_{0}, \pm\overline{z_{0}} [/tex]. These symmetries constrain the zeros, but they do not by themselves force them onto the real axis.
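
Explicitly, since x and [tex]\mu(x)[/tex] are real, [tex] \cos(x\overline{z})=\overline{\cos(xz)} [/tex] under the integral sign, so

[tex] f(-z)=f(z),\qquad f(\overline{z})=\int_{a}^{b}\mu(x)\cos(x\overline{z})\,dx=\overline{\int_{a}^{b}\mu(x)\cos(xz)\,dx}=\overline{f(z)} [/tex]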

Moreover, when the weight has bounded support (Chebyshev, Legendre) or decays like a Gaussian (Hermite), f(z) is an entire function of z: expanding the cosine in its power series and integrating term by term gives an even power series with real coefficients that converges for every complex z. (For the Laguerre weight [tex]e^{-x}[/tex] the integral only converges in the strip |Im z| < 1, and its analytic continuation [tex](1+z^{2})^{-1}[/tex] is not entire; it has no zeros at all.) It is not true, however, that an entire function with real coefficients must have only real roots; real coefficients only guarantee that the roots come in complex-conjugate pairs. So analyticity and real coefficients alone cannot explain the observation.
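
Concretely, for a weight supported on a bounded interval the term-by-term integration gives (writing [tex]m_{2n}[/tex] for the 2n-th moment of the weight, a piece of notation introduced just for this post):

[tex] f(z)=\sum_{n=0}^{\infty}\frac{(-1)^{n}m_{2n}}{(2n)!}\,z^{2n},\qquad m_{2n}=\int_{a}^{b}x^{2n}\mu(x)\,dx [/tex]

All the coefficients are real (and the [tex]m_{2n}[/tex] are positive for a positive weight), which yields the conjugate-pair symmetry of the roots, but nothing stronger.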

Furthermore, the shape of the weight certainly matters, but not through symmetry about the origin: the Laguerre weight [tex]e^{-x}[/tex] lives on (0,oo), and in any case the cosine kernel only ever sees the even part of the weight. What the classical weights have in common is positivity together with rather special additional structure (compact support or very fast decay), and it is conditions of this kind that are relevant. There are classical sufficient conditions on the weight, going back to Pólya, under which a cosine transform has only real zeros; positivity alone is not enough in general. In fact, deciding whether the cosine transform of a given positive even function has only real zeros can be extremely hard: the Riemann Xi function admits exactly such a representation, and the Riemann Hypothesis is the statement that all of its zeros are real.
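
For a weight on a symmetric interval, the statement that only the even part matters is just

[tex] \int_{-b}^{b}\mu(x)\cos(xz)\,dx=\int_{-b}^{b}\frac{\mu(x)+\mu(-x)}{2}\,\cos(xz)\,dx [/tex]

and the Laguerre integral over (0,oo) is one half of the transform of the even extension [tex]e^{-|x|}[/tex].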

In conclusion, the symmetry and analyticity of the cosine transform explain why its zeros are symmetric about the real and imaginary axes, but the fact that the zeros actually lie on the real line for these classical weights is a special property of those weights rather than an automatic consequence. The general question of when the cosine transform of a positive weight has only real zeros is a classical one, and in its hardest instances it is extremely deep, so the observation in the original post is a genuinely interesting one.
 

1. What are orthogonal polynomials?

Orthogonal polynomials are families of polynomials that are pairwise orthogonal with respect to a given weight on an interval. They are widely used in physics, engineering, and statistics, for example in approximation theory, Gaussian quadrature, and the solution of differential equations.

2. How are orthogonal polynomials different from regular polynomials?

Orthogonal polynomials have the property that the weighted inner product of any two different members of the family is equal to zero, whereas a generic sequence of polynomials has no such property. This is what makes them useful as a basis for expanding functions and for building numerical methods.
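
As a small numerical illustration of that defining property (a sketch assuming numpy and scipy are available), the inner product of two different Hermite polynomials against the weight [tex]e^{-x^{2}}[/tex] vanishes:

[code]
# Orthogonality check: <H_2, H_3> with weight exp(-x^2) over the real line.
import numpy as np
from numpy.polynomial import Hermite
from scipy.integrate import quad

H2 = Hermite([0, 0, 1])       # H_2(x) = 4x^2 - 2   (physicists' convention)
H3 = Hermite([0, 0, 0, 1])    # H_3(x) = 8x^3 - 12x

inner, _ = quad(lambda x: np.exp(-x * x) * H2(x) * H3(x), -np.inf, np.inf)
print(inner)                  # ~ 0 up to quadrature error
[/code]

Any pair with different indices behaves the same way, while the "diagonal" integrals with two equal indices are strictly positive.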

3. What are some common applications of orthogonal polynomials?

Orthogonal polynomials are commonly used in physics, engineering, and statistics, for instance in expanding solutions of differential equations, in Gaussian quadrature, and in least-squares approximation. They also appear in numerical analysis, signal processing, and other areas of mathematics.

4. Can you give an example of an orthogonal polynomial?

The most well-known example of an orthogonal polynomial is the Legendre polynomial, which is commonly used to solve problems involving spherical harmonics and potential fields. Other examples include the Chebyshev polynomials, Hermite polynomials, and Jacobi polynomials.

5. How are orthogonal polynomials calculated?

Every family of orthogonal polynomials satisfies a three-term recurrence relation: once the first two polynomials are known, each subsequent one is a fixed combination of x times the previous polynomial and the one before that. In practice one either uses the known recurrence coefficients of the classical families or obtains them by Gram-Schmidt orthogonalization of 1, x, x^2, ... with respect to the given weight.
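
As a concrete sketch, here is the Legendre case, using Bonnet's recurrence (n+1)P_{n+1}(x) = (2n+1)x P_n(x) - n P_{n-1}(x) starting from P_0 = 1 and P_1 = x:

[code]
def legendre_values(x, n_max):
    """Evaluate P_0(x), ..., P_{n_max}(x) with the three-term recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    p = [1.0, x]                      # P_0 = 1, P_1 = x
    for n in range(1, n_max):
        p.append(((2 * n + 1) * x * p[n] - n * p[n - 1]) / (n + 1))
    return p[:n_max + 1]

print(legendre_values(0.5, 4))        # [1.0, 0.5, -0.125, -0.4375, -0.2890625]
[/code]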
