Solving Second Order Linear Diff Eqns: Proving Theorems & More


Discussion Overview

The discussion revolves around second order linear differential equations, focusing on the nature of their solutions, theorems related to these solutions, and specific cases involving constant coefficients. Participants explore the proofs of various theorems, the uniqueness of solutions, and the implications of different forms of the equations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants question the assertion that the solution to the differential equation is always of the form e^(rx), seeking proof that no other function can serve this role.
  • There is a discussion about the theorem stating that if y1 and y2 are linearly independent solutions, the general solution can be expressed as a linear combination of these solutions, with requests for proof of this theorem.
  • Participants examine the case where P(x) = a, Q(x) = b, R(x) = c, and discuss the implications of the condition b² = 4ac, leading to the assumption that y = f(x)e^(rx) is a solution and the resulting equation f''(x) = 0.
  • Questions arise regarding the choice of y = f(x)e^(rx) as a trial solution and whether f(x) = cx + c' is the only solution to f''(x) = 0.
  • One participant introduces the concept of fundamental solutions and their role in forming a vector space of solutions for nth order linear homogeneous ordinary differential equations.
  • Another participant discusses the operator approach to solving constant coefficient equations, highlighting the factorization of the differential operator and the uniqueness of exponential solutions.

Areas of Agreement / Disagreement

Participants express varying degrees of agreement on the nature of solutions and theorems, but no consensus is reached on the proofs or the uniqueness of certain solutions. Multiple competing views and questions remain unresolved.

Contextual Notes

Participants note that the trial solution y = e^(rx) is primarily applicable when P, Q, and R are constants, and that different approaches may be necessary for non-constant coefficients. The discussion also highlights the need for proofs of theorems presented without justification.

Who May Find This Useful

This discussion may be of interest to students and educators in mathematics and engineering, particularly those studying differential equations and their applications in various fields.

anachin6000
So, I had studied oscillatory motion for a while and I found it unpleasant to have to remember the various different solutions for the equations of motion. I began to learn about second order linear differential equations and now I know how to solve this kind of stuff. But there is a problem:

For the next considerations, this is the general form of the equation:
P(x)y'' + Q(x)y' + R(x)y = 0, where P, Q, R are functions of x

Firstly, I watched some videos, and the presenter said there is a theorem that the solution is always of the form e^(rx), where r is a constant (but he gave no proof). By substituting this function into the equation, you can find r and then form the general solution. My first problem is: how do you show that there is no function other than e^(rx) that does this?

Then I found a book where there was this theorem:
" If y1 and y2 are linearly independent solutions of equation, and P(x) is never 0, then the general solution is given by
y = c1y1 + c2y2
where c1 and c2 are arbitrary constants."
But the authors give no proof. So my second problem is how do you actually prove this theorem.

The next questions are related to the case P(x)=a, Q(x)=b, R(x)=c, where a,b,c are real constants.

If you have b² = 4ac, you may assume that y = f(x)e^(rx) is a solution, and you will get that f''(x) = 0. Now it is obvious that f(x) = cx + c' satisfies f''(x) = 0, where c and c' are some constants.
So here are two questions:
1. Why do you take y = f(x)e^(rx) and not some other function?
2. Is f(x) = cx + c' the only solution of f''(x) = 0?

I'd also be very grateful if someone would suggest a book on differential equations that really studies them deeply.
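As an aside on the b² = 4ac case above: the claim that substituting y = f(x)e^(rx), with r the repeated root −b/(2a), reduces the equation to f''(x) = 0 can be checked symbolically. This is an illustrative sketch (not from the thread), using SymPy:

```python
import sympy as sp

# Check that for a*y'' + b*y' + c*y = 0 with b^2 = 4ac, substituting
# y = f(x)*e^(r x) with the repeated root r = -b/(2a) leaves a*f''(x) = 0.
x, a, b = sp.symbols('x a b', positive=True)
f = sp.Function('f')
c = b**2 / (4 * a)        # impose b^2 = 4ac
r = -b / (2 * a)          # the repeated root of a*r^2 + b*r + c

y = f(x) * sp.exp(r * x)
expr = a * y.diff(x, 2) + b * y.diff(x) + c * y

# Dividing out the nonzero factor e^(r x), everything but a*f'' cancels.
assert sp.simplify(expr / sp.exp(r * x) - a * f(x).diff(x, 2)) == 0
```

So the equation forces a·f''(x) = 0, and since a ≠ 0 that is exactly f''(x) = 0.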
 
anachin6000 said:
So, I had studied oscillatory motion for a while and I found it unpleasant to have to remember the various different solutions for the equations of motion. I began to learn about second order linear differential equations and now I know how to solve this kind of stuff. But there is a problem:

For the next considerations, this is the general form of the equation:
P(x)y'' + Q(x)y' + R(x)y = 0, where P, Q, R are functions of x

Firstly, I watched some videos, and the presenter said there is a theorem that the solution is always of the form e^(rx), where r is a constant (but he gave no proof). By substituting this function into the equation, you can find r and then form the general solution. My first problem is: how do you show that there is no function other than e^(rx) that does this?

The trial solution y = e^(rx) works only when P, Q, and R are constants. When P, Q, or R are not constant, a different solution approach must be used, sometimes assuming y is an infinite series in x.

The trial solution y = e^(rx) is often the easiest to use because y' = r·e^(rx), y'' = r²·e^(rx), etc. Notice the common factor e^(rx). Since the original ODE is homogeneous, e^(rx) can easily be eliminated to find the characteristic equation of the original ODE.

This trial solution also has the advantage that e^(rx) is never zero, so you don't run into any messy situations about dividing zero by zero when eliminating e^(rx) to find the characteristic equation of the original ODE.

For certain types of equations, different trial solutions may work, but it can be shown that these other solutions can be transformed into exponential functions of some sort.
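As a numerical sanity check of the characteristic-equation idea (the specific equation y'' − 3y' + 2y = 0 is my own example, not from the thread): if r is a root of a·r² + b·r + c = 0, then y = e^(rx) satisfies the ODE identically.

```python
import math

# y'' - 3y' + 2y = 0 has characteristic equation r^2 - 3r + 2 = 0,
# with roots r = 1 and r = 2.
a, b, c = 1.0, -3.0, 2.0

def residual(r, x):
    # For y = e^(r x): y' = r*e^(r x), y'' = r^2*e^(r x),
    # so a*y'' + b*y' + c*y = (a*r^2 + b*r + c)*e^(r x).
    y = math.exp(r * x)
    return a * r**2 * y + b * r * y + c * y

for r in (1.0, 2.0):               # the two roots
    for x in (0.0, 0.5, 1.0, 2.0):
        assert abs(residual(r, x)) < 1e-9
print("e^x and e^(2x) both satisfy y'' - 3y' + 2y = 0")
```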

http://tutorial.math.lamar.edu/Classes/DE/SecondOrderConcepts.aspx
Then I found a book where there was this theorem:
" If y1 and y2 are linearly independent solutions of equation, and P(x) is never 0, then the general solution is given by
y = c1y1 + c2y2
where c1 and c2 are arbitrary constants."
But the authors give no proof. So my second problem is how do you actually prove this theorem.

The next questions are related to the case P(x)=a, Q(x)=b, R(x)=c, where a,b,c are real constants.

If you have b² = 4ac, you may assume that y = f(x)e^(rx) is a solution, and you will get that f''(x) = 0. Now it is obvious that f(x) = cx + c' satisfies f''(x) = 0, where c and c' are some constants.
So here are two questions:
1. Why do you take y = f(x)e^(rx) and not some other function?
2. Is f(x) = cx + c' the only solution of f''(x) = 0?
When b² = 4ac, the characteristic equation has two identical roots.

This article shows what happens in this case:

http://tutorial.math.lamar.edu/Classes/DE/RepeatedRoots.aspx
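As a concrete instance of the repeated-root case (my example, not from the linked article): y'' − 2y' + y = 0 has b² = 4ac and double root r = 1, and the second solution predicted by f(x) = cx + c' is x·e^x.

```python
import math

# Verify numerically that y = x*e^x solves y'' - 2y' + y = 0,
# the repeated-root case with r = 1 (i.e. f(x) = x in y = f(x)*e^(r x)).
def residual(x):
    y   = x * math.exp(x)
    yp  = (1 + x) * math.exp(x)    # y'  by the product rule
    ypp = (2 + x) * math.exp(x)    # y''
    return ypp - 2 * yp + y

for x in (0.0, 0.3, 1.0, 2.5):
    assert abs(residual(x)) < 1e-9
print("x*e^x solves the repeated-root equation")
```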

I'd also be very grateful if someone would suggest a book on differential equations that really studies them deeply.
Just about any DE text should explain the material above.

Boyce & DiPrima has been used as a college text for many years (an earlier edition of it was the text I used waaay back when):

http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP002451.html
 
This is really helpful, thanks.
 
The crucial point about linear ordinary homogeneous differential equations in general is this: the set of all solutions to an nth order linear ordinary homogeneous differential equation forms a vector space of dimension n. That can be proven by looking at the "fundamental solutions": the solutions of the differential equation satisfying the initial conditions y(0) = 1, y'(0) = 0, y''(0) = 0, ...; y(0) = 0, y'(0) = 1, y''(0) = 0, ...; y(0) = 0, y'(0) = 0, y''(0) = 1, ...; and showing that any solution of that differential equation can be written as a linear combination of those n fundamental solutions.

In particular, for a second order linear homogeneous differential equation, we define the two fundamental solutions to be the solutions y_1(x) and y_2(x) satisfying y_1(0) = 1, y_1'(0) = 0 and y_2(0) = 0, y_2'(0) = 1. Then it is easy to show that if y(x) is a solution of that differential equation satisfying y(0) = A, y'(0) = B, then y(x) = A·y_1(x) + B·y_2(x).
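The fundamental-solution argument can be illustrated on y'' + y = 0, whose fundamental solutions are cos(x) (value 1, slope 0 at x = 0) and sin(x) (value 0, slope 1 at x = 0). This sketch (my example, not from the post) checks that a solution with initial data A, B really equals A·y_1 + B·y_2:

```python
import math

# Take the known solution y(x) = 3*cos(x - 0.5) of y'' + y = 0.
# Its initial data are A = y(0) = 3*cos(0.5), B = y'(0) = 3*sin(0.5).
A = 3 * math.cos(0.5)
B = 3 * math.sin(0.5)

# The claim: y(x) = A*y_1(x) + B*y_2(x) with y_1 = cos, y_2 = sin.
for x in (0.0, 0.7, 1.4, 2.1, 3.0):
    combo = A * math.cos(x) + B * math.sin(x)
    assert abs(combo - 3 * math.cos(x - 0.5)) < 1e-9
print("solution recovered as A*y1 + B*y2")
```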
 
The constant coefficient case is quite nice and simple. The point is that if we regard differentiation as an operator D, then solving, say, ay'' + by' + cy = 0 is the same as asking which y satisfy (aD^2 + bD + c)y = 0. (We might as well divide through by a and have leading coefficient 1.) So let's solve y'' + by' + cy = (D^2 + bD + c)y = 0.

Now the nice part about constant coefficients is that the quadratic operator on the left factors just as in high school algebra into a composition ("product") of two linear operators, namely by the quadratic formula there are (complex) constants p and q, such that D^2+bD+c = (D-p)(D-q), and these linear operators commute! So if we solve them separately we get two solutions, and that's all we need.

I.e. if (D-p)y1 = 0, then also (D-q)(D-p)y1 = (D-q)(0) = 0. Similarly, if (D-q)y2 = 0, then also (D-p)(D-q)y2 = (D-p)(0) = 0. So we have two basic solutions, y1 and y2. And since the operator is linear, any linear combination of y1 and y2 will also be a solution. So everything boils down to solving (D-p)y = 0.

But that is easy! I.e. (D-p)y = 0 if and only if Dy = py, i.e. iff the derivative of y is a constant times y, and we know the exponential function is the only such function. I.e. we know D(e^pt) = p·e^pt, so y = e^pt is a solution. Moreover, if y is any solution of Dy = py, then dividing y by e^pt and differentiating gives D(y/e^pt) = [Dy·e^pt − y·D(e^pt)]/e^(2pt) = [py·e^pt − y·p·e^pt]/e^(2pt) = 0, so y/e^pt is a constant, y = constant·e^pt, and those are the only solutions.

Now composing two such operators, i.e. applying (D-p)(D-q), the space of solutions can only go up by one dimension, so it goes from a one-dimensional space of solutions to a two-dimensional space. Since we already have the solutions e^pt and e^qt, hence all combinations r·e^pt + s·e^qt for all numbers r, s, we have them all.

There is one special case, where the operator is squared, i.e. solving (D-p)(D-p)y = 0. In this case you have to look for a function y such that (D-p)y = r·e^pt; i.e. D-p does not kill this function, but it sets it up to be killed by another application of D-p. It is easy to check, by differentiating, that one such solution is t·e^pt. Voila!
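The operator factorization can be checked symbolically on a concrete case (the equation y'' − 3y' + 2y = 0 with p = 1, q = 2 is my choice, not from the post):

```python
import sympy as sp

# D^2 - 3D + 2 factors as (D - 1)(D - 2); both e^t and e^(2t)
# should be annihilated by the full operator and by the composition.
t = sp.symbols('t')
p, q = 1, 2

def L(y):                          # the operator D^2 - 3D + 2 applied to y
    return y.diff(t, 2) - 3 * y.diff(t) + 2 * y

def factored(y):                   # (D - p)(D - q) applied as a composition
    inner = y.diff(t) - q * y      # first apply (D - q)
    return inner.diff(t) - p * inner

for y in (sp.exp(p * t), sp.exp(q * t)):
    assert sp.simplify(L(y)) == 0
    assert sp.simplify(factored(y)) == 0
print("(D-1)(D-2) annihilates both e^t and e^(2t)")
```

Note that applying the factors in the other order, (D − 2)(D − 1), kills the same two exponentials, which is the commutativity mentioned above.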

Oh, and my favorite DE text by far is by Martin Braun. I never liked Boyce and DiPrima in spite of its widespread popularity and long use record. So if you have the same reaction, you might try Braun.
 
Ross is also great.
 
