General solution to linear homogeneous 2nd order ODEs

SUMMARY

The general solution to a linear homogeneous second-order ordinary differential equation (ODE) of the form $$y''(x) + p(x)y'(x) + q(x)y(x) = 0$$ is $$y(x) = c_{1}y_{1}(x) + c_{2}y_{2}(x)$$, where ##c_{1}## and ##c_{2}## are arbitrary constants and ##y_{1}(x)## and ##y_{2}(x)## are linearly independent solutions. The proof of this relies on the linearity of the ODE and on properties of vector spaces, specifically that the solution space of a second-order equation is two-dimensional. If two initial conditions are provided, the constants can be uniquely determined. Theorems 3.2 and 3.3 from "An Introduction to Linear Analysis" by Kreider et al. provide foundational support for this conclusion.

PREREQUISITES
  • Understanding of linear homogeneous second-order ordinary differential equations (ODEs)
  • Familiarity with concepts of linear independence and basis functions
  • Basic knowledge of vector spaces and their dimensions
  • Proficiency in substitution methods for solving differential equations
NEXT STEPS
  • Study the proof of Theorems 3.2 and 3.3 in "An Introduction to Linear Analysis" by Kreider, Kuller, Ostberg, and Perkins
  • Learn about the Wronskian determinant and its role in determining linear independence of solutions
  • Explore the concept of dimension in vector spaces and its application to differential equations
  • Investigate methods for solving higher-order linear homogeneous ODEs and their general solutions
USEFUL FOR

Mathematicians, students of differential equations, and anyone interested in the theoretical foundations of linear algebra as applied to ordinary differential equations.

Frank Castle
Given a linear homogeneous 2nd order ODE of the form $$y''(x)+p(x)y'(x)+q(x)y(x)=0$$ the general solution is of the form $$y(x)=c_{1}y_{1}(x)+c_{2}y_{2}(x)$$ where ##c_{1},c_{2}## are arbitrary constants and ##y_{1}(x), y_{2}(x)## are linearly independent basis solutions.

How does one prove that the general solution is given by the above?
 
Last edited by a moderator:
Substitute the proposed general solution into the DE and see.

[edit] is there a typo in the general solution equation?
Mod note: Now fixed in orig. post...
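This substitution check can be carried out symbolically. As a concrete example (the specific ODE ##y''+y=0## with solutions ##\sin x## and ##\cos x## is an assumption for the demo, not from the thread), SymPy can verify that every linear combination of two solutions is again a solution:

```python
import sympy as sp

# Example ODE: y'' + y = 0, with independent solutions sin(x) and cos(x).
x, c1, c2 = sp.symbols('x c1 c2')
y1 = sp.sin(x)
y2 = sp.cos(x)

# Substitute the proposed general solution y = c1*y1 + c2*y2 into the DE.
y = c1*y1 + c2*y2
residual = sp.simplify(sp.diff(y, x, 2) + y)
print(residual)  # 0 for all c1, c2: every linear combination is a solution
```

Note that this only shows linear combinations are solutions; it does not by itself show that *every* solution has this form, which is the harder half of the question.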
 
Frank Castle said:
Given a linear homogeneous 2nd order ODE of the form $$y''(x)+p(x)y'(x)+q(x)y(x)=0$$ the general solution is of the form $$y(x)=c_{1}y_{1}(x)+c_{1}y_{1}(x)$$ where ##c_{1},c_{2}## are arbitrary constants and ##y_{1}(x), y_{2}(x)## are linearly independent basis solutions.

How does one prove that the general solution is given by the above?
One explanation appeals to concepts from linear algebra. The solution space for a first-order, homogeneous differential equation has dimension one, so if a nontrivial solution (i.e., not identically zero) can be found, every solution will be a constant multiple of this solution.

The solution space for a second-order, homogeneous differential equation has dimension two. If you have two linearly independent solutions, then the solution space is spanned by these two functions. That is, the general solution consists of all linear combinations of the two basis functions, exactly as you show in the second equation. If two initial conditions are given, then the constants ##c_1## and ##c_2## can be determined to give a unique solution.

For a third-order, homogeneous differential equation, there need to be three linearly independent basis functions, and so on, for higher-order equations.
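The determination of ##c_1## and ##c_2## from two initial conditions amounts to solving a 2x2 linear system whose determinant is the Wronskian at the initial point. A minimal numerical sketch (the ODE ##y''+y=0##, the basis ##\sin x,\cos x##, and the initial values are assumptions chosen for illustration):

```python
import numpy as np

# Example: y'' + y = 0, basis y1 = sin(x), y2 = cos(x),
# initial conditions y(0) = 2, y'(0) = 3 (chosen for illustration).
x0 = 0.0
# Matrix [[y1(x0), y2(x0)], [y1'(x0), y2'(x0)]]; its determinant is
# the Wronskian W(y1, y2; x0), nonzero here, so (c1, c2) is unique.
M = np.array([[np.sin(x0), np.cos(x0)],
              [np.cos(x0), -np.sin(x0)]])
rhs = np.array([2.0, 3.0])  # [y(x0), y'(x0)]

c1, c2 = np.linalg.solve(M, rhs)
print(c1, c2)  # c1 = 3.0, c2 = 2.0, i.e. y = 3*sin(x) + 2*cos(x)
```

The same setup generalizes to order ##n##: an ##n \times n## system whose determinant is the ##n##-function Wronskian.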
 
Simon Bridge said:
[edit] is there a typo in the general solution equation?

Yes, sorry. It should read ##y(x)=c_{1}y_{1}(x)+c_{2}y_{2}(x)##.

Mark44 said:
One explanation appeals to concepts from linear algebra. The solution space for a first-order, homogeneous differential equation has dimension one, so if a nontrivial solution (i.e., not identically zero) can be found, every solution will be a constant multiple of this solution.

The solution space for a second-order, homogeneous differential equation has dimension two. If you have two linearly independent solutions, then the solution space is spanned by these two functions. That is, the general solution consists of all linear combinations of the two basis functions, exactly as you show in the second equation. If two initial conditions are given, then the constants ##c_1## and ##c_2## can be determined to give a unique solution.

For a third-order, homogeneous differential equation, there need to be three linearly independent basis functions, and so on, for higher-order equations.

That's kind of how I intuitively see it, but I was wondering how one proves it (or is such a proof fiendishly hard)?
 
Frank Castle said:
That's kind of how I intuitively see it, but I was wondering how one proves it (or is such a proof fiendishly hard)?
I have several DE textbooks, but I don't remember any of them proving that a linear combination of basis functions is the general solution, but then, I haven't looked at these books for quite a while. However, a proof of this wouldn't be "fiendishly hard," I don't believe, and possibly would use a proof by contradiction. If I get a chance later today, I'll see what I can find.
 
Mark44 said:
If I get a chance later today, I'll see what I can find.

Ok great, I'd much appreciate that.
 
Frank Castle said:
Given a linear homogeneous 2nd order ODE of the form $$y''(x)+p(x)y'(x)+q(x)y(x)=0$$ the general solution is of the form $$y(x)=c_{1}y_{1}(x)+c_{2}y_{2}(x)$$ where ##c_{1},c_{2}## are arbitrary constants and ##y_{1}(x), y_{2}(x)## are linearly independent basis solutions.

How does one prove that the general solution is given by the above?
If you can get hold of a copy of "An Introduction to Linear Analysis" by Kreider, Kuller, Ostberg, and Perkins, you can see their partial proof of this in their Theorems 3.2 and 3.3.

Theorem 3.2 states an existence and uniqueness theorem for the solutions of a normal n-th order linear DE given a set of initial conditions, but refers the reader elsewhere for the proof. Theorem 3.3 relies on it to show that said DE has n linearly independent solutions which span the solution space of the DE.

They prove Theorem 3.3 by choosing a set of n initial conditions for which it is easy to show that the corresponding solutions (which exist and are unique by Theorem 3.2) are linearly independent.
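That construction can be illustrated numerically (the particular coefficients ##p(x)=x##, ##q(x)=1## and the interval are assumptions chosen for the demo): integrate the DE with the initial-condition vectors ##(1,0)## and ##(0,1)## at ##x_0##, and check that the Wronskian of the two resulting solutions is 1 at ##x_0## and stays nonzero:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Demo DE: y'' + x*y' + y = 0, written as a first-order system u = [y, y'].
def rhs(x, u):
    y, yp = u
    return [yp, -x * yp - y]

xs = np.linspace(0.0, 2.0, 50)
# Two solutions built from the initial-condition vectors (1, 0) and (0, 1),
# as in the Theorem 3.3 construction; these exist and are unique.
sol1 = solve_ivp(rhs, (0.0, 2.0), [1.0, 0.0], t_eval=xs, rtol=1e-10, atol=1e-12)
sol2 = solve_ivp(rhs, (0.0, 2.0), [0.0, 1.0], t_eval=xs, rtol=1e-10, atol=1e-12)

# Wronskian W = y1*y2' - y2*y1': equals 1 at x0 by construction and
# remains nonzero on the interval, so y1 and y2 are linearly independent.
W = sol1.y[0] * sol2.y[1] - sol2.y[0] * sol1.y[1]
print(W[0], W.min())
```

(By Abel's formula, ##W(x) = W(x_0)\,e^{-\int_{x_0}^{x} p(t)\,dt}##, so a Wronskian that is nonzero at one point is nonzero everywhere on the interval.)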
 
I see I misunderstood:

Given the DE: y'' + py' + qy = 0.

Knowing y1 and y2 are independent solutions, y = Ay1 + By2 is also a solution for any arbitrary A, B; prove by substitution.

So what you want to prove is that any solution can be written as a linear combination of two independent solutions, not just that any linear combination is a solution. That is: is there a solution y that cannot be written as a linear combination of y1 and y2?

If y1 and y2 are orthogonal, and the solutions to the DE form a vector space of dimension 2, then doesn't it follow that any other solution must be a linear sum of y1 and y2?

If we want to prove that it does follow - then look up the corresponding proof for a general vector space.
What did I miss?
 
Simon Bridge said:
So what you want to prove is that any solution can be written as a linear combination of two independent solutions, not just that any linear combination is a solution. That is: is there a solution y that cannot be written as a linear combination of y1 and y2?

Yes, this is basically what I want to prove.

Simon Bridge said:
If y1 and y2 are orthogonal, and the solutions to the DE form a vector space of dimension 2, then doesn't it follow that any other solution must be a linear sum of y1 and y2?

This follows if their corresponding Wronskian is non-zero, right? I can see how it must be the case if you consider ##y_{1}## and ##y_{2}## as a basis for a two dimensional vector space, but isn't there some sort of proof without using linear algebra?

I think what I find hard to justify is why the solution space of an ##n##-th order differential equation must be ##n##-dimensional. I can kind of see that if this is the case, and if we can find ##n## linearly independent solutions ##y_{1},\cdots ,y_{n}## (where linear independence corresponds to ##W(y_{1},\cdots ,y_{n};x)\neq 0##), then every solution must be a linear combination of these "basis" solutions, but I'm unsure how to prove that this is true.
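For reference, here is a sketch of the standard argument for the second-order case (it leans on the existence-uniqueness theorem, e.g. the Theorem 3.2 cited above, and assumes ##W(y_1,y_2;x_0)\neq 0## at some point ##x_0##):

Let ##y## be any solution and set ##a=y(x_0)##, ##b=y'(x_0)##. Consider the linear system $$c_{1}y_{1}(x_{0})+c_{2}y_{2}(x_{0})=a,\qquad c_{1}y_{1}'(x_{0})+c_{2}y_{2}'(x_{0})=b,$$ which has a unique solution ##(c_1,c_2)## because its determinant is the Wronskian ##W(y_1,y_2;x_0)\neq 0##. Now ##c_{1}y_{1}+c_{2}y_{2}## and ##y## are both solutions of the DE with the same initial data at ##x_0##, so by the uniqueness theorem they coincide everywhere. Hence every solution is a linear combination of ##y_1## and ##y_2##, which is exactly the statement that the solution space is two-dimensional with basis ##\{y_1,y_2\}##. The ##n##-th order case is identical, with an ##n\times n## Wronskian system.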
 
