Understanding Eigenvalue Problems

We are doing Eigenvalue problems in my Differential Equations class and I just want to make sure I understand some of these concepts. If anyone could look through my current understanding and guide me in the right direction that would be great!

So when you have some equation

L[y]=\lambda y

The set of eigenfunctions associated with the operator L will always form an orthogonal set if the equation is a Sturm-Liouville differential equation. (And it is my understanding that any second-order linear differential equation can be put into Sturm-Liouville form.)

Here is one question: can a function be approximated by an infinite sum of ANY set of orthogonal functions? My book does this a lot and I want to understand why. For example, when solving the Sturm-Liouville problem..
L[y]+\mu r y =f
Through their analysis, they write f as..
f=\sum_{n=1}^{\infty}\gamma_n \phi_n
where the \gamma_n are constants and the \phi_n are the eigenfunctions. But the eigenfunctions of the differential equation are in no way related to f. So is it fair to assume you can write an approximation of any function as an infinite sum over an orthogonal set of functions?

Thanks a lot!
 
No, the set of orthogonal functions must also be complete. This is obvious, since a set like {cos(x), cos(10^10 x), cos(100^400 x), cos(1000^9000 x), ..., cos((10^n)^(n^2 10^n) x), ...} would not be a basis. However, Sturm-Liouville problems have eigenfunctions associated with them that have certain properties, among them orthogonality and completeness.
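A concrete standard example (not from this thread, just for illustration): on [0,\pi] the functions \sin(2nx), n=1,2,3,\dots are mutually orthogonal,
\int_0^{\pi}\sin(2nx)\sin(2mx)\,dx=0,\qquad n\neq m
but they are not complete: every one of them satisfies g(\pi-x)=-g(x), so even something as simple as \sin(x) cannot be expanded in them. Enlarging the set to all of \{\sin(nx)\} gives the complete Fourier sine basis on [0,\pi].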
 
What does it mean for a set to be complete?
 
Complete means every function can be written as an expansion in the set, i.e.
g=\sum_i a_i f_i
We need a certain number of functions for this to work. Often we also want the set small enough that the expansion can be written in only one way. The fact that the set is orthogonal only means the expansion is easily found if it exists; it does not mean that it does exist.
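To spell out the "easily found" part (a sketch, assuming the expansion exists, term-by-term integration is allowed, and the f_i are orthogonal with respect to a weight r on an interval (a,b)): multiply g=\sum_i a_i f_i by r f_j and integrate; orthogonality wipes out every term except i=j, leaving
a_j=\frac{\int_a^b r(x)\,g(x)\,f_j(x)\,dx}{\int_a^b r(x)\,f_j(x)^2\,dx}
This is exactly how the \gamma_n in the first post would be computed.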

In a regular Sturm-Liouville problem we are sure of several helpful facts:

-If we know the eigenfunctions, the eigenvalues are easily found (Rayleigh quotient).

-Each eigenvalue has exactly one eigenfunction (up to a constant multiple)

-The eigenfunctions are a complete set

-Each eigenfunction has exactly one more zero in the open interval than the previous one (the first has none; the n-th has n-1)

-There is a smallest eigenvalue, but they grow as large as desired

-The eigenvalues are real

-The eigenfunctions are orthogonal (with respect to the weight function r)
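The usual textbook example tying all of these together (a standard case, not taken from this thread): y''+\lambda y=0 on 0<x<L with y(0)=y(L)=0, which is already in Sturm-Liouville form with weight r=1. Then
\lambda_n=\left(\frac{n\pi}{L}\right)^2,\qquad \phi_n(x)=\sin\frac{n\pi x}{L},\qquad n=1,2,3,\dots
The eigenvalues are real and simple, there is a smallest one and they grow without bound, \phi_n has exactly n-1 zeros inside (0,L), the \phi_n satisfy \int_0^L\phi_n\phi_m\,dx=0 for n\neq m, and they are complete: the expansion f=\sum\gamma_n\phi_n is just the Fourier sine series of f.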
 
A set of eigenvectors is "complete" if it forms a basis for the vector space.
 
So basically, if the orthogonal set of eigenfunctions forms a basis of the vector space of functions, then any function can be written as a linear combination of these functions?

I am trying to make sense as to why "f" can be written the way it is. (In my original post.)
:\
 
Yes, that's the whole point! Any function can be written as a linear combination of a set of functions if and only if that set is "complete", and that is exactly because "complete" means any function can be written as a linear combination of those functions!

Of course, the problem is proving that a given set is complete. There are a variety of ways to do that, depending on the set.
 