
Problem in understanding some theorems about second order linear differential equations

  1. Dec 28, 2015 #1
    So, I had studied oscillatory motion for a while and I found it unpleasant to have to remember the various different solutions for the equations of motion. I began to learn about second order linear differential equations and now I know how to solve this kind of stuff. But there is a problem:

    For the next considerations, this is the general form of the equation:
    P(x)y'' + Q(x)y' + R(x)y = 0, where P, Q, R are functions of x

    Firstly, I watched some videos and the guy said there is a theorem that the solutions are always of the form e^(rx), where r is a constant (but he gave no proof). By substituting this function into the equation you can find r and then form the general solution. My first problem is: how do you show that there is no function other than e^(rx) that does this?

    Then I found a book where there was this theorem:
    " If y1 and y2 are linearly independent solutions of the equation, and P(x) is never 0, then the general solution is given by
    y = c1y1 + c2y2
    where c1 and c2 are arbitrary constants."
    But the authors give no proof. So my second problem is how do you actually prove this theorem.

    The next questions are related to the case P(x)=a, Q(x)=b, R(x)=c, where a,b,c are real constants.

    If you have b^2 = 4ac, you may assume that y = f(x)e^(rx) is a solution and you will get that f''(x) = 0. Now it is obvious that f(x) = cx + c' satisfies f''(x) = 0, where c and c' are some constants.
    So here are two questions:
    1. Why do you take y = f(x)e^(rx) and not some other function?
    2. Is f(x) = cx + c' the only solution of f''(x) = 0?
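    As a quick numerical sanity check of the claim that y = (cx + c')e^(rx) solves the equation when b^2 = 4ac (a minimal Python sketch; the sample values a = 1, b = 2, c = 1, r = -b/(2a) = -1 and the constants 3, 5 are assumptions, not from the thread):

```python
import math

# assumed sample case with b^2 = 4ac: a=1, b=2, c=1, so r = -b/(2a) = -1,
# and the claimed solution is y(x) = (c1 x + c2) e^(r x)
def y(x, c1=3.0, c2=5.0, r=-1.0):
    return (c1 * x + c2) * math.exp(r * x)

def ode_residual(f, x, a=1.0, b=2.0, c=1.0, h=1e-4):
    # central finite differences for y' and y''
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return a * d2 + b * d1 + c * f(x)

# the residual a y'' + b y' + c y should vanish (up to discretization error)
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(ode_residual(y, x)) < 1e-5
```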

    I'd also be very grateful if someone would suggest a book on differential equations that really studies them deeply.
  3. Dec 28, 2015 #2


    The trial solution y = e^(rx) works only when P, Q, and R are constants. When P, Q, or R is not constant, a different solution approach must be used, sometimes involving a trial solution in the form of an infinite series in x.

    The trial solution y = e^(rx) is often the easiest to use because y' = r⋅e^(rx), y'' = r^2⋅e^(rx), etc. Notice the common factor e^(rx). Since the original ODE is homogeneous, e^(rx) can easily be eliminated to find the characteristic equation of the original ODE.

    This trial solution also has the advantage that e^(rx) is never zero, so you don't run into any messy division-by-zero situations when eliminating e^(rx) to find the characteristic equation of the original ODE.
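    As a concrete illustration of eliminating e^(rx) to get the characteristic equation (a minimal Python sketch; the sample ODE y'' + 3y' + 2y = 0 is an assumption, not from the thread):

```python
import math

b, c = 3.0, 2.0  # assumed example ODE: y'' + 3y' + 2y = 0

# substituting y = e^(r x) and dividing out e^(r x) leaves the
# characteristic equation r^2 + b r + c = 0; solve it by the quadratic formula
disc = b**2 - 4 * c
r1 = (-b + math.sqrt(disc)) / 2  # r1 = -1.0
r2 = (-b - math.sqrt(disc)) / 2  # r2 = -2.0

# for y = e^(r x): y'' + b y' + c y = (r^2 + b r + c) e^(r x),
# so the factor r^2 + b r + c must vanish at each root
for r in (r1, r2):
    assert abs(r**2 + b * r + c) < 1e-12
```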

    For certain types of equations, different trial solutions may work, but it can be shown that these other solutions can be transformed into exponential functions of some sort.

    When b^2 = 4ac, the characteristic equation has two identical roots.

    This article shows what happens in this case:


    Just about any DE text should explain the material above.

    Boyce & DiPrima has been used as a college text for many years (an earlier edition of it was the text I used waaay back when):

  4. Dec 28, 2015 #3
    This is really helpful, thanks.
  5. Dec 29, 2015 #4


    The crucial point about linear homogeneous ordinary differential equations in general is this: "The set of all solutions to an nth order linear homogeneous ordinary differential equation forms a vector space of dimension n". That can be proven by looking at the "fundamental solutions", the solutions to the differential equation satisfying the initial conditions y(0)= 1, y'(0)= 0, y''(0)= 0, ...; y(0)= 0, y'(0)= 1, y''(0)= 0, ...; y(0)= 0, y'(0)= 0, y''(0)= 1, ...; and showing that any solution to that differential equation can be written as a linear combination of those n "fundamental solutions".

    In particular, if we have a second order linear homogeneous differential equation, we define the two "fundamental solutions" to be the solutions [itex]y_1(x)[/itex] and [itex]y_2(x)[/itex] satisfying [itex]y_1(0)= 1[/itex], [itex]y_1'(0)= 0[/itex] and [itex]y_2(0)= 0[/itex], [itex]y_2'(0)= 1[/itex]. Then it is easy to show that if y(x) is a solution to that differential equation satisfying y(0)= A, y'(0)= B, then [itex]y(x)= Ay_1(x)+ By_2(x)[/itex].
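    A quick numerical illustration of the fundamental-solutions idea for the sample equation y'' + y = 0, whose fundamental solutions are y1 = cos (y1(0)=1, y1'(0)=0) and y2 = sin (y2(0)=0, y2'(0)=1); the initial data A, B below are assumed sample values:

```python
import math

A, B = 2.0, -3.0  # assumed initial data y(0) = A, y'(0) = B

def y(t):
    # the claimed general solution: A*y1 + B*y2 with y1 = cos, y2 = sin
    return A * math.cos(t) + B * math.sin(t)

# check the initial conditions (derivative via a central difference)
h = 1e-6
assert abs(y(0.0) - A) < 1e-12
assert abs((y(h) - y(-h)) / (2 * h) - B) < 1e-6

# and check that the combination still solves y'' + y = 0
for t in (0.3, 1.0, 2.5):
    d2 = (y(t + 1e-4) - 2 * y(t) + y(t - 1e-4)) / 1e-8
    assert abs(d2 + y(t)) < 1e-5
```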
  6. Dec 30, 2015 #5


    The constant-coefficient case is quite nice and simple. The point is that if we consider differentiation as an operator D, then solving, say, ay'' + by' + cy = 0 is the same as asking which y satisfy (aD^2 + bD + c)y = 0. (We might as well divide out by a and have leading coefficient 1.) So let's solve y'' + by' + cy = (D^2 + bD + c)y = 0.

    Now the nice part about constant coefficients is that the quadratic operator on the left factors just as in high school algebra into a composition ("product") of two linear operators, namely by the quadratic formula there are (complex) constants p and q, such that D^2+bD+c = (D-p)(D-q), and these linear operators commute! So if we solve them separately we get two solutions, and that's all we need.
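    The factorization claim can be verified on an assumed concrete example, y'' + 3y' + 2y = (D+1)(D+2)y, i.e. p = -1, q = -2 (a minimal Python sketch; the test function and its hand-computed derivatives are assumptions):

```python
import math

# assumed example: y'' + 3y' + 2y = (D+1)(D+2)y, i.e. p = -1, q = -2
p, q = -1.0, -2.0
b, c = -(p + q), p * q          # b = 3, c = 2 from expanding (D-p)(D-q)

# arbitrary smooth test function (not a solution), with its exact derivatives
def y(t):   return math.sin(t) + t * t
def dy(t):  return math.cos(t) + 2 * t
def d2y(t): return -math.sin(t) + 2.0

def apply_full(t):
    # the full operator D^2 + bD + c applied to y
    return d2y(t) + b * dy(t) + c * y(t)

def apply_factored(t):
    # first g = (D - q)y = y' - qy, then apply (D - p) to g
    g, dg = dy(t) - q * y(t), d2y(t) - q * dy(t)
    return dg - p * g

# the composition agrees with the full operator at every test point
for t in (-1.0, 0.0, 0.5, 3.0):
    assert abs(apply_full(t) - apply_factored(t)) < 1e-12
```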

    I.e. if (D-p)y1 = 0, then also (D-q)(D-p)y1 = (D-q)(0) = 0. Similarly if (D-q)y2 = 0, then also (D-p)(D-q)y2 = (D-p)(0) = 0. So we have two basic solutions y1 and y2. And since the operator is linear, any linear combination of y1 and y2 will also be a solution. So everything boils down to solving (D-p)y = 0.

    But that is easy! I.e. (D-p)y = 0 if and only if Dy = py, i.e. iff the derivative of y is a constant times y, and we know the exponential function is the only such function. I.e. we know D(e^pt) = p.e^pt, so y = e^pt is a solution. Moreover, if y is any solution of Dy = py, then dividing y by e^pt and differentiating gives D(y/e^pt) = [Dy.e^pt - y.D(e^pt)]/e^2pt = [py.e^pt - y.p.e^pt]/e^2pt = 0. So y/e^pt is a constant, y = constant.e^pt, and those are the only solutions.
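    The quotient argument can be checked numerically (a minimal Python sketch; the value of p and the two test functions are assumed sample choices):

```python
import math

p = 0.7  # assumed sample value

# the key identity: d/dt [ y(t) / e^(p t) ] = (y'(t) - p y(t)) e^(-p t),
# so the quotient has zero derivative exactly when Dy = p y
def quotient_deriv_fd(y, t, h=1e-5):
    g = lambda s: y(s) / math.exp(p * s)
    return (g(t + h) - g(t - h)) / (2 * h)  # central difference of y/e^(pt)

# a genuine solution of Dy = p y: its quotient derivative is (numerically) zero
sol = lambda t: 4.2 * math.exp(p * t)
assert abs(quotient_deriv_fd(sol, 1.3)) < 1e-7

# a non-solution: the identity predicts the value (y' - p y) e^(-p t)
t = 1.3
predicted = (math.cos(t) - p * math.sin(t)) * math.exp(-p * t)
assert abs(quotient_deriv_fd(math.sin, t) - predicted) < 1e-6
```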

    Now composing two such operators, i.e. applying (D-p)(D-q), the space of solutions can only go up by one dimension, so it goes up from a one-dimensional space of solutions to a two-dimensional one. Since we already have the solutions e^pt and e^qt, hence all combinations r.e^pt + s.e^qt for all numbers r, s, we have them all.

    There is one special case, where the operator factor is squared, i.e. solving (D-p)(D-p)y = 0. In this case you have to look for a function y such that (D-p)y = r.e^pt. I.e. D-p does not kill this function, but it sets it up to be killed by another application of D-p. It is easy to check, by differentiating, that one such function is t.e^pt. Voila!
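    That last claim is easy to verify directly (a minimal Python sketch; the double root p = -1.5 is an assumed sample value):

```python
import math

p = -1.5  # assumed double root, i.e. we are solving (D - p)^2 y = 0

# candidate second solution y = t e^(p t); its exact derivative is
# y' = e^(pt) + p t e^(pt), so (D - p)y = e^(pt): one application of
# (D - p) does not kill it, but leaves exactly e^(pt)...
def y(t):  return t * math.exp(p * t)
def dy(t): return math.exp(p * t) + p * t * math.exp(p * t)

for t in (0.0, 1.0, 2.0):
    assert abs((dy(t) - p * y(t)) - math.exp(p * t)) < 1e-12

# ...and e^(pt) is killed by the next application, since
# (D - p)e^(pt) = p e^(pt) - p e^(pt) = 0, so (D - p)^2 y = 0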

    Oh, and my favorite DE text by far is by Martin Braun. I never liked Boyce and DiPrima in spite of its widespread popularity and long use record. So if you have the same reaction you might try Braun.
    Last edited: Dec 31, 2015
  7. Dec 31, 2015 #6
    Ross is also great.