Eigenfunction expansion method in PDE solutions

In summary, the method uses eigenfunctions to build solutions, but it seems to come down to finding the right formulas and applying them, which the original poster feels uncomfortable about.
  • #1
pivoxa15
How does this method work? What are the mathematical ideas behind this method? Unlike separation of variables, where everything can be worked out from first principles, this method of solving ODEs seems to come down to finding the right formulas and applying them, which I feel uncomfortable about.
 
  • #2
Essentially, you are saying that (for self-adjoint differential operators) the eigenvectors form a basis for the linear space of all solutions to the differential equation. Typically, the eigenfunctions are chosen so as to form an orthonormal (or at least orthogonal) basis.
 
  • #3
What are examples of eigenvectors that you talk about in a DE? Could you give an example?

Why does it only work for homogeneous BCs? Is it because otherwise you won't get orthogonality?
 
  • #4
pivoxa15 said:
What are examples of eigenvectors that you talk about in a DE? Could you give an example?

Consider the differential operator [tex]\frac{d^2}{dx^2}[/tex] acting on the vector space of (twice-differentiable, continuous) functions on [0, π] which satisfy f(0) = f(π) = 0. Then sin(nx) is an eigenfunction for positive integer n:
[tex]\frac{d^2}{dx^2} \sin (n x) = -n^2 \sin (n x)[/tex]

Note this is a vector space: linear combinations of functions satisfying f(0)=f(Pi)=0 themselves satisfy that equation.
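
To make this concrete, here is a quick symbolic check (a minimal Python/sympy sketch, not from the thread) that sin(nx) really is an eigenfunction and satisfies the boundary conditions:

[code]
import sympy as sp

x, n = sp.symbols('x n', positive=True)
f = sp.sin(n * x)

# Apply the operator d^2/dx^2 to sin(n x); the ratio Lf/f is the eigenvalue
Lf = sp.diff(f, x, 2)
print(sp.simplify(Lf / f))  # -n**2

# For integer n the boundary conditions f(0) = f(pi) = 0 both hold
for k in range(1, 4):
    print(f.subs({n: k, x: 0}), f.subs({n: k, x: sp.pi}))  # 0 0
[/code]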

Why does it only work for homogeneous BCs? Is it because otherwise you won't get orthogonality?

You don't get a vector space, because the sum of two solutions to an inhomogeneous equation is not itself a solution.

e.g., for solutions x1, x2 to the equation Lx = f,
L(x1 + x2) = Lx1 + Lx2 = 2f ≠ f
 
  • #5
[tex]\frac{d^2}{dx^2} \sin (n x) = -n^2 \sin (n x)[/tex]

In case you ask: the linear differential operator d^2/dx^2 acts on the eigenfunction sin(nx) and yields the same eigenfunction scaled by its eigenvalue, -n^2.

The reason we're looking at integer n in sin(nx) is our boundary condition f(π) = 0. For non-integer a, sin(ax) does not satisfy it and is therefore not (by definition) a member of the vector space.
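
A direct evaluation shows this (again a small sympy sketch; the specific values are just for illustration):

[code]
import sympy as sp

# Integer n satisfies the boundary condition at pi: sin(2*pi) = 0
print(sp.sin(2 * sp.pi))                   # 0

# A non-integer a does not: sin((3/2)*pi) = -1 != 0
print(sp.sin(sp.Rational(3, 2) * sp.pi))   # -1
[/code]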
 
  • #6
pivoxa15 said:
How does this method work? What are the mathematical ideas behind this method?

What have you learned so far, and what in particular don't you get?

Also, have you studied much linear algebra?
 
  • #7
I have done a first course in linear algebra, so I can follow your language. I notice you haven't mentioned eigenvectors. In this case, are the eigenvectors not ordinary vectors but eigenfunctions, because the space is a function space? Functions can still form a vector space, since a vector space is the most general such structure in linear algebra.

But since we are dealing with functions rather than column vectors, should only the word "eigenfunctions" be used?
 
  • #8
HallsofIvy said:
Essentially, you are saying that (for self-adjoint differential operators) the eigenvectors form a basis for the linear space of all solutions to the differential equation. Typically, the eigenfunctions are chosen so as to form an orthonormal (or at least orthogonal) basis.

What do you mean by self-adjoint differential operators? How can you tell when an operator is one? I see the definition here:
http://en.wikipedia.org/wiki/Differential_operator
especially,
<u,Tv>=<T*u,v>
T is a self-adjoint operator if and only if T = T*
so <u,Tv>=<Tu,v>

If T is the second-derivative operator, then wouldn't it matter what u and v are in deciding whether T is self-adjoint? If u and v were sine and cosine functions, then T would be self-adjoint, but for other functions such as u = x, v = x^3, T (the second derivative) would not be. It seems strange that T should depend on u and v. Why? So is the second-derivative operator self-adjoint or not?

Are self-adjoint differential operators related to self-adjoint matrices, whose eigenvectors are orthogonal to each other? Can a similar analogy be drawn with functions?
 
  • #9
Rach3 said:
e.g., for solutions x1, x2 to the equation Lx = f,
L(x1 + x2) = Lx1 + Lx2 = 2f ≠ f
I think you should have put in a boundary value to make it clearer.

I.e., say x(L) = 1 is a boundary condition. If x1 and x2 are solutions, then x3 = x1 + x2 is not, because x3(L) = x1(L) + x2(L) = 1 + 1 = 2 ≠ 1, so x3 fails the boundary condition and is not a solution of the differential equation. Therefore, in general, the sum of two solutions is not a solution of the original DE with nonhomogeneous boundary conditions.

Another reason nonhomogeneous BCs won't work is that the solutions won't be orthogonal functions. That is problematic: first, we can't use a Fourier series to construct the infinite-series solution, since the coefficient for each n in that series cannot be determined; second, it may relate to what HallsofIvy was saying about self-adjoint differential operators.
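
For reference, the orthogonality that those Fourier coefficients rely on can be checked directly (a sympy sketch over the interval [0, π], an assumption carried over from the earlier example):

[code]
import sympy as sp

x = sp.symbols('x')

# Inner products <sin(m x), sin(n x)> on [0, pi]:
# pi/2 on the diagonal (m = n), 0 off the diagonal (m != n)
for m in range(1, 4):
    for n in range(1, 4):
        val = sp.integrate(sp.sin(m * x) * sp.sin(n * x), (x, 0, sp.pi))
        print(m, n, val)
[/code]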
 
  • #10
pivoxa15 said:
What do you mean by self-adjoint differential operators? How can you tell when an operator is one? I see the definition here:
http://en.wikipedia.org/wiki/Differential_operator
especially,
<u,Tv>=<T*u,v>
T is a self-adjoint operator if and only if T = T*
so <u,Tv>=<Tu,v>
In a function space the "usual" inner product is
[tex]\int f(x)g(x)dx[/tex]
where the integral is over whatever set the functions are defined on,
so saying a differential operator, L, is self-adjoint means
[tex]\int L(f(x))g(x)dx= \int f(x)L(g(x))dx[/tex]
In particular, if L is a "Sturm-Liouville" operator:
[tex]\frac{d }{dx}\left(p(x)\frac{dy}{dx}\right)+ q(x)y[/tex]
with boundary conditions y(a)= y(b)= 0, then
[tex]\int_a^b \left(\frac{d}{dx}\left(p(x)\frac{df}{dx}\right)+ q(x)f\right)g(x)dx= \int_a^b \frac{d}{dx}\left(p(x)\frac{df}{dx}\right)g(x)dx+ \int_a^b q(x)f(x)g(x)dx[/tex]
That second integral is clearly "symmetric" in f and g. Do the first integral by parts, letting u= g(x),
[tex]dv= \frac{d}{dx}\left(p(x)\frac{df}{dx}\right)dx[/tex]
so that
[tex]du= \frac{dg}{dx}dx[/tex]
and
[tex]v= p(x)\frac{df}{dx}[/tex]
Since f and g are 0 at both ends, the boundary term uv vanishes and that integral is
[tex]-\int_a^b p(x)\frac{dg}{dx}\frac{df}{dx}dx[/tex]
Now "reverse" it. Do another integration by parts, letting
[tex]u= p(x)\frac{dg}{dx}[/tex]
and
[tex]dv= \frac{df}{dx}dx[/tex]
and it's easy to see that we have just swapped f and g.

If T is the second-derivative operator, then wouldn't it matter what u and v are in deciding whether T is self-adjoint? If u and v were sine and cosine functions, then T would be self-adjoint, but for other functions such as u = x, v = x^3, T (the second derivative) would not be. It seems strange that T should depend on u and v. Why? So is the second-derivative operator self-adjoint or not?
No, that is not correct. For an operator to be self-adjoint, <Tu, v> = <u, Tv> must be true for all u and v in the space. If you mean here that T = d^2/dx^2, then it is self-adjoint (it is the Sturm-Liouville operator above with p(x) = 1, q(x) = 0). However, note that the "vector space" on which we are working must be the set of infinitely differentiable functions that are equal to 0 at two specified points. That rules out your x and x^3 examples.
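
One can test this numerically. The sketch below (sympy again, assuming the interval [0, π] and L = d^2/dx^2) checks <Lf, g> = <f, Lg> for functions vanishing at both endpoints, and shows the symmetry failing for the x and x^3 examples, which don't vanish there:

[code]
import sympy as sp

x = sp.symbols('x')
L = lambda h: sp.diff(h, x, 2)                    # L = d^2/dx^2
inner = lambda f, g: sp.integrate(f * g, (x, 0, sp.pi))

# f and g vanish at 0 and pi, so the boundary terms from
# integration by parts drop out and <Lf, g> = <f, Lg>
f, g = x * (sp.pi - x), sp.sin(x)
print(inner(L(f), g), inner(f, L(g)))             # -4 and -4

# u = x and v = x**3 are nonzero at pi, so they are outside
# the space, and the symmetry fails
u, v = x, x**3
print(inner(L(u), v), inner(u, L(v)))             # 0 and 2*pi**3
[/code]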

Are self-adjoint differential operators related to self-adjoint matrices, whose eigenvectors are orthogonal to each other? Can a similar analogy be drawn with functions?
Yes, both matrices and linear differential operators are linear transformations on vector spaces, and all properties of self-adjoint linear operators apply to both. (Linear differential operators, however, are necessarily operators on infinite-dimensional vector spaces.)
 
  • #11
HallsofIvy said:
However, note that the "vector space" on which we are working must be the set of infinitely differentiable functions that are equal to 0 at two specified points.

The first condition is imposed because of the self-adjoint differential operator (which arises naturally when solving PDEs): by definition it can only act on sufficiently differentiable functions (a set that does include x and x^3).

The second condition is imposed because solutions of the differential equation must satisfy the two BCs, and we only want such functions in our vector space (these functions do form a vector space, whereas functions that fail the homogeneous BCs do not belong to it).

However, it is the second condition that disallows x and x^3.

Would you say that d^2/dx^2 is a self-adjoint operator if and only if it is a Sturm-Liouville operator?

Is all of that correct?

I don't follow how this line came about:
[tex]\int L(f(x))g(x)dx= \int f(x)L(g(x))dx[/tex]
Is L here the operator, applied to f in the first integral and to g in the second?
 
  • #12
Yes, exactly. With the inner product
[tex]\int f(x)g(x)dx[/tex]
that line is just <Lf, g> = <f, Lg> written out explicitly; it is the statement that L is self-adjoint.
 
  • #13
pivoxa15 said:
Would you say that d^2/dx^2 is a self-adjoint operator if and only if it is a Sturm-Liouville operator?
?? [itex]\frac{d^2}{dx^2}[/itex] is both self-adjoint and a Sturm-Liouville operator; there is no "if and only if" about it. If a general second-order differential operator is Sturm-Liouville, then it is self-adjoint. I think the other way is also true, but I'm not sure. Of course, self-adjoint is defined for operators on any inner-product space, while "Sturm-Liouville" applies only to second-order differential operators.
 
  • #14
HallsofIvy said:
?? [itex]\frac{d^2}{dx^2}[/itex] is both self-adjoint and a Sturm-Liouville operator; there is no "if and only if" about it. If a general second-order differential operator is Sturm-Liouville, then it is self-adjoint. I think the other way is also true, but I'm not sure. Of course, self-adjoint is defined for operators on any inner-product space, while "Sturm-Liouville" applies only to second-order differential operators.

Actually, I think there are other self-adjoint differential operators, e.g. the Hamiltonian operator. Or is the Hamiltonian a special case of a Sturm-Liouville operator? I wonder whether the Sturm-Liouville operator is the most general second-order differential operator; if so, then "the other way" is also true.

Returning to the more general question: how do we go from a PDE to the method of eigenfunction expansion? This process is a bit unclear to me. For example, say we have a 2D Poisson equation (with a constant right-hand side C). It cannot be worked out by separation of variables, so we have to use eigenfunction expansion. But how do we justify which eigenfunctions to use? Is it based only on the BCs, i.e., sine eigenfunctions if they are Dirichlet?
 
  • #15
No, the Sturm-Liouville operators are not the "most general second-order differential operators". For example,
[tex]\frac{d^2y}{dx^2}+ x\frac{dy}{dx}+ y[/tex] is a second-order differential operator but is not a Sturm-Liouville operator. But, of course, that example is not self-adjoint.
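
A quick check in the same style as before (sympy, with zero boundary values at 0 and π assumed) confirms the asymmetry: even for functions vanishing at both endpoints, <Lf, g> and <f, Lg> differ for this operator:

[code]
import sympy as sp

x = sp.symbols('x')
L = lambda y: sp.diff(y, x, 2) + x * sp.diff(y, x) + y   # not Sturm-Liouville
inner = lambda f, g: sp.integrate(f * g, (x, 0, sp.pi))

# Both f and g vanish at 0 and pi, yet <Lf, g> != <f, Lg>
f, g = sp.sin(x), sp.sin(2 * x)
print(sp.simplify(inner(L(f), g) - inner(f, L(g))))      # 4*pi/3, not 0
[/code]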
 
  • #16
Yes. I made a mistake: I meant to ask whether the Sturm-Liouville operators are the most general self-adjoint differential operators.
 

What is the Eigenfunction expansion method?

The eigenfunction expansion method is a mathematical technique used to solve partial differential equations (PDEs). It involves expressing the solution of a PDE as a sum of eigenfunctions, which are the solutions of an associated, simpler eigenvalue problem.

How does the Eigenfunction expansion method work?

The eigenfunction expansion method works by breaking a complex PDE down into simpler components. The eigenfunctions used in the expansion are chosen to satisfy the boundary conditions of the original PDE, and the coefficients in the expansion are determined using the orthogonality of the eigenfunctions, exactly as in a Fourier series.
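
As a concrete illustration (a minimal sympy sketch, with the problem and names chosen for this example rather than taken from the thread), here is the method applied to u'' = -1 on [0, π] with u(0) = u(π) = 0, expanding in the sin(nx) eigenfunctions discussed above:

[code]
import sympy as sp

x = sp.symbols('x')
f = sp.Integer(1)   # source term: solve u'' = -f with u(0) = u(pi) = 0
N = 25              # truncation order of the expansion

# Fourier sine coefficients b_n = (2/pi) <f, sin(n x)>, each divided by n^2
# because d^2/dx^2 sin(nx) = -n^2 sin(nx)
u = sp.Integer(0)
for n in range(1, N + 1):
    b_n = (2 / sp.pi) * sp.integrate(f * sp.sin(n * x), (x, 0, sp.pi))
    u += (b_n / n**2) * sp.sin(n * x)

# Compare with the exact solution u = x*(pi - x)/2 at an interior point
exact = x * (sp.pi - x) / 2
print(float(u.subs(x, 1)), float(exact.subs(x, 1)))  # close agreement
[/code]

The truncated series matches the closed-form solution to several decimal places; the 1/n^3 decay of the terms is why the expansion converges quickly in this example.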

What types of PDEs can be solved using the Eigenfunction expansion method?

The eigenfunction expansion method can be used to solve a wide range of linear PDEs. It is particularly useful when the boundary conditions are homogeneous, so that the solutions form a vector space spanned by the eigenfunctions.

What are the advantages of using the Eigenfunction expansion method?

One of the main advantages of the Eigenfunction expansion method is that it can provide an exact solution to a PDE, rather than an approximation. It also allows for complex PDEs to be broken down into simpler components, making them easier to solve. Additionally, the method is versatile and can be applied to various types of PDEs.

Are there any limitations to the Eigenfunction expansion method?

The eigenfunction expansion method may not always be applicable, as it relies on finding a suitable set of eigenfunctions and coefficients. It is of limited use for non-linear PDEs, since superposition of eigenfunctions no longer produces solutions. In some cases the method also leads to cumbersome calculations and may not give a practical solution.
