
Assumptions of solution form

  1. Jan 15, 2007 #1
My question can be put most simply as: how can we guarantee that the only solutions of a homogeneous linear differential equation are of the form ce^at?

or, for that matter, that a particular solution is given by u(x)f(x)
    in the variation of parameters method.

How can we guarantee that there aren't other functions that also satisfy the differential equation?
  3. Jan 15, 2007 #2


    Science Advisor

We don't know that. For example, the very simple homogeneous linear differential equation with constant coefficients y'' + y = 0 has general solution y = C sin(t) + D cos(t). Even more simply, the general solution to y'' = 0 is y = Ct + D. Further, you did not include "with constant coefficients" in your description. Homogeneous linear equations with non-constant coefficients can have solutions very different from exponentials.

Perhaps the best way to think about it is this: since the differential equation is homogeneous, the left hand side, involving only derivatives of y and y itself, must add to 0: the various derivatives must cancel. Since the coefficients are constant, the derivatives of y must be the same kind of function in order to cancel. The only functions whose derivatives are the same kind of function are exponentials, sines and cosines, and polynomials. Any solution to a homogeneous linear differential equation with constant coefficients must be a combination of those.
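As a quick sanity check of the two examples above, here is a short SymPy sketch (the use of SymPy is my choice, not something from the thread); it recovers exactly the sine/cosine and polynomial general solutions:

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# Constant-coefficient homogeneous equation y'' + y = 0:
sol1 = sp.dsolve(sp.Eq(y(t).diff(t, 2) + y(t), 0), y(t))
print(sol1)  # Eq(y(t), C1*sin(t) + C2*cos(t))

# And y'' = 0, whose general solution is the polynomial Ct + D:
sol2 = sp.dsolve(sp.Eq(y(t).diff(t, 2), 0), y(t))
print(sol2)  # Eq(y(t), C1 + C2*t)
```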

I can only suggest that you go back and reread the derivation of "variation of parameters" in your text. Recall that there is no such thing as the particular solution. Variation of parameters gives one of an infinite number of possible particular solutions.

Are you still talking about linear differential equations? Then use the fundamental theorem: the set of all solutions of an nth order linear homogeneous differential equation forms an n-dimensional vector space. Once you have found n independent solutions, you have a basis for that vector space.
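One standard way to confirm that n solutions really are independent, and so form a basis, is to compute their Wronskian; a nonzero Wronskian settles it. A small SymPy sketch (my own illustration, using e^t and e^{-t}, which both solve y'' - y = 0):

```python
import sympy as sp

t = sp.symbols('t')

# e^t and e^{-t} both solve y'' - y = 0. A nonzero Wronskian shows
# they are independent, hence a basis for the 2-dimensional
# solution space of that equation.
W = sp.wronskian([sp.exp(t), sp.exp(-t)], t)
print(sp.simplify(W))  # -2
```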

If you are not talking about linear differential equations, then it is not necessarily true. For example, the simple differential equation [itex]y'= y^{\frac{1}{2}}[/itex] has, by separating variables and integrating, [itex]2y^{\frac{1}{2}}= t+ C[/itex], i.e. [itex]y= \left(\frac{t+C}{2}\right)^{2}[/itex], as a solution for every C, but also y = 0, which is not of that form. In fact, there exist an infinite number of solutions all satisfying, say, y(1) = 0.
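The non-uniqueness is easy to verify directly: both the separated-variables solution and the trivial solution pass through the same initial point. A SymPy check (my own sketch; I take C = 0, so both solutions satisfy y(0) = 0, and t > 0 to keep the square root simple):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Two different solutions of y' = sqrt(y), both with y(0) = 0:
y1 = (t / 2)**2      # from separating variables, with C = 0
y2 = sp.Integer(0)   # the trivial solution y = 0

# Both residuals y' - sqrt(y) vanish identically:
r1 = sp.simplify(y1.diff(t) - sp.sqrt(y1))
r2 = sp.simplify(y2.diff(t) - sp.sqrt(y2))
print(r1, r2)  # 0 0
```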
  4. Jan 15, 2007 #3
Actually, all but one of those expressions for y are of the form Ae^at by Euler's identity, and my mistake for not mentioning constant coefficients.

As for the derivation of "variation of parameters", I just checked my book to make sure I didn't have any horrible misconceptions about it, and my book agrees with me that variation of parameters yields one unique particular solution for any non-homogeneous system. Perhaps your definition of unique is different from mine; personally I would define it as being unique with respect to any given set of initial conditions.

Either that, or you're referring to the way the simplifying assumptions were constructed.

Alright, I remember the fundamental theorem now, and I can see its limitations; the course I took on differential equations didn't spend much time on it.
    Last edited: Jan 15, 2007
  5. Jan 15, 2007 #4


    Science Advisor

    How can you do that when you don't apply the initial conditions to a "specific solution"? The "undetermined constants" appear in the general solution to the homogeneous equation. Yes, it is true that the value of those coefficients will depend on the "particular solution". But given any initial conditions, you can find values for the coefficients no matter what particular solution you happened to use. There exist an infinite number of solutions to the equation. "Variation of parameters" gives one of them.

    For example: Solve y"- y= f(t) for some given function f(t).

The characteristic equation is r^2 - 1 = 0, which gives r = 1 and r = -1. That tells us that the general solution to the equation y'' - y = 0 must be of the form y = Ce^t + De^{-t}.

"Variation of parameters" suggests that we find a solution of the form
    y = u(t)e^t + v(t)e^{-t}. (In other words, the "parameters" or constants in the solution are allowed to vary.) How do we know that we can do that? Well, ANY function can be written in that form for SOME u and v! In fact, there are an infinite number of ways that can be done: choose any function you want for u(t) and then solve the algebraic equation y = u(t)e^t + ve^{-t} for v.

From y = ue^t + ve^{-t} we get y' = u'e^t + ue^t + v'e^{-t} - ve^{-t}. Now, we require that
    u'e^t + v'e^{-t} = 0. How do we know that it is 0? We don't; we require that it be equal to 0! There are an infinite number of functions u and v which will work. We are reducing our search to only those that satisfy u'e^t + v'e^{-t} = 0 in order to simplify the calculations.

Because of that requirement, we have y' = ue^t - ve^{-t} and so y'' = u'e^t + ue^t - v'e^{-t} + ve^{-t}. Now put that into our differential equation: y'' - y = u'e^t + ue^t - v'e^{-t} + ve^{-t} - ue^t - ve^{-t} = f(t). That has no u'' or v'' because of our "restriction" above. And since e^t and e^{-t} satisfy the homogeneous equation, the terms involving only u and v cancel as well. This reduces to u'e^t - v'e^{-t} = f(t). This is a single algebraic equation for the two unknowns u' and v'. We also have the "requirement" above: u'e^t + v'e^{-t} = 0. Solve those two equations for u' and v' and integrate to find u and v. The fact that u(t)e^t + v(t)e^{-t} satisfies the equation follows from the fact that u' and v' satisfy the equations above.
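The steps above can be carried out concretely in SymPy (my own sketch; I pick f(t) = t as an example right-hand side, since any f works). Solving the pair u'e^t + v'e^{-t} = 0, u'e^t - v'e^{-t} = f gives u' = f e^{-t}/2 and v' = -f e^t/2, and integrating produces a particular solution:

```python
import sympy as sp

t = sp.symbols('t')
f = t  # example right-hand side of y'' - y = f(t); any f(t) would do

# Solving the two algebraic equations for u' and v':
#   u'*e^t + v'*e^{-t} = 0      (the imposed restriction)
#   u'*e^t - v'*e^{-t} = f(t)   (from substituting into y'' - y = f)
up = f * sp.exp(-t) / 2
vp = -f * sp.exp(t) / 2

# Integrate to recover u and v, then assemble the particular solution:
u = sp.integrate(up, t)
v = sp.integrate(vp, t)
yp = sp.simplify(u * sp.exp(t) + v * sp.exp(-t))
print(yp)  # -t

# Verify that it really satisfies y'' - y = f(t):
assert sp.simplify(yp.diff(t, 2) - yp - f) == 0
```

For f(t) = t the method yields the particular solution y = -t, which you can also confirm by hand: y'' - y = 0 - (-t) = t.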
