
A few queries on the variation of parameters method

  1. Jan 28, 2015 #1
    I've been reviewing my knowledge on the technique of variation of parameters to solve differential equations and have a couple of queries that I'd like to clear up (particularly for 2nd order inhomogeneous ODEs), if possible.

    The first is that, given the complementary solution, [itex]y_{c}(x)=c_{1}y_{1}(x)+c_{2}y_{2}(x)[/itex], to some 2nd-order inhomogeneous ODE: $$a_{2}(x)y''(x)+a_{1}(x)y'(x)+a_{0}(x)y(x)=f(x)$$ we assume that the particular solution [itex]y_{p}(x)[/itex] has the form $$y_{p}(x)=u_{1}(x)y_{1}(x)+u_{2}(x)y_{2}(x)$$ where [itex]u_{1}(x)[/itex] and [itex]u_{2}(x)[/itex] are arbitrary functions.

    Is the motivation for this ansatz that the homogeneous equation can be viewed as a special case of the inhomogeneous one (i.e. with [itex]f(x)=0[/itex]), and so it is reasonable to expect the particular solution to the inhomogeneous equation to have a form similar to the complementary solution?

    The second is that, starting from this ansatz, we require that [itex]y_{p}[/itex] is a solution to the inhomogeneous equation. Inserting it into the ODE (and doing a little algebra) leaves us with the equation $$a_{2}\frac{d}{dx}\left[u'_{1}y_{1}+u'_{2}y_{2}\right]+a_{1}\left[u'_{1}y_{1}+u'_{2}y_{2}\right]+ a_{2}\left[u'_{1}y'_{1}+u'_{2}y'_{2}\right]= f(x)$$ which imposes a single constraint on the forms of [itex]u_{1}(x)[/itex] and [itex]u_{2}(x)[/itex]. However, to determine both [itex]u_{1}(x)[/itex] and [itex]u_{2}(x)[/itex] we need two equations (a single equation would only let us express one function in terms of the other, which itself remains arbitrary, so a further equation is needed to fix its form; I'm a bit unsure whether my argument is correct here). As such, we have one constraint (that the LHS equals the RHS, which has the fixed form [itex]f(x)[/itex]), leaving one degree of freedom that we are free to fix. Thus, we choose $$u'_{1}(x)y_{1}(x)+u'_{2}(x)y_{2}(x)=0$$ so that $$\left[u'_{1}(x)y'_{1}(x)+u'_{2}(x)y'_{2}(x)\right]= \frac{f(x)}{a_{2}(x)}$$ Is this the correct reasoning?
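
    For reference, the step that follows (not spelled out above) is standard: the two constraints form a linear system for [itex]u'_{1}[/itex] and [itex]u'_{2}[/itex], which Cramer's rule solves as $$u'_{1}(x)=-\frac{y_{2}(x)f(x)}{a_{2}(x)W(x)},\qquad u'_{2}(x)=\frac{y_{1}(x)f(x)}{a_{2}(x)W(x)},\qquad W=y_{1}y'_{2}-y'_{1}y_{2}$$ where [itex]W[/itex] is the Wronskian of [itex]y_{1}[/itex] and [itex]y_{2}[/itex]; integrating then gives [itex]u_{1}[/itex] and [itex]u_{2}[/itex].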
     
  3. Jan 28, 2015 #2

    HallsofIvy

    Staff Emeritus
    Science Advisor

    Given two functions [itex]y_1[/itex] and [itex]y_2[/itex], any function can be written in the form "[itex]u_1y_1+ u_2y_2[/itex]" for some functions [itex]u_1[/itex] and [itex]u_2[/itex], so, no, we do not "assume that the particular solution to the inhomogeneous equation will be of a similar form to the complementary solution". The point is simply that, by the product rule, some parts of a derivative of [itex]u_1y_1+ u_2y_2[/itex] will have only the "y"s differentiated, so that the "u"s are treated like constants, and that part will be 0 because the "y"s satisfy the homogeneous equation. So we never have an equation involving the "u"s without any differentiation. Further, because there are infinitely many ways to write a solution to the entire equation in that form, we can add additional requirements, such as [tex]u_1'y_1+ u_2'y_2= 0[/tex] so that we never have higher derivatives: we always end up with equations involving only the first derivatives of the "u"s. For an nth-order equation, this will always result in n equations for the first derivatives. We can solve for [itex]u_1'[/itex] and [itex]u_2'[/itex] and then integrate to find [itex]u_1[/itex] and [itex]u_2[/itex].
     
  4. Jan 28, 2015 #3
    So is there any particular motivation behind this ansatz then, beyond the fact that, when differentiated, the terms in which the "u"s act as constants vanish (since the "y"s solve the homogeneous equation), leaving terms that can be used to solve the inhomogeneous part?

    So, in general, for an nth-order equation we seek a solution containing n arbitrary functions, subject to the constraint that it satisfies the original ODE. In principle this would provide an infinite number of particular solutions, but we only need to find one. Since n-1 of the functions remain arbitrary (the nth being determined by the condition that the overall function solves the ODE), we may apply n-1 further constraints to pin down their forms. We are free to choose whatever constraints we like, and the easiest are those that keep the equations first order in the derivatives, giving the constraint [itex]u'_{1}y_{1}+u'_{2}y_{2}=0[/itex] in the 2nd-order case. This then enables us to solve for the n functions explicitly and thus specify a particular solution to the original ODE. Would this be correct?
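
    As a concrete check of the recipe discussed above, here is a small sympy sketch; the example ODE [itex]y''+y=\sec x[/itex] and the variable names are my own illustrative choices, not from the thread:

    ```python
    import sympy as sp

    x = sp.symbols('x')

    # ODE: y'' + y = sec(x), i.e. a2 = 1, a1 = 0, a0 = 1, f = sec(x)
    y1, y2 = sp.cos(x), sp.sin(x)   # solutions of the homogeneous equation
    f, a2 = sp.sec(x), 1

    # Wronskian of y1 and y2
    W = sp.simplify(y1*sp.diff(y2, x) - sp.diff(y1, x)*y2)

    # The two constraint equations,
    #   u1' y1  + u2' y2  = 0
    #   u1' y1' + u2' y2' = f/a2
    # solved for u1', u2' by Cramer's rule:
    u1p = -y2*f/(a2*W)
    u2p = y1*f/(a2*W)

    # integrate to recover u1, u2, then assemble the particular solution
    u1 = sp.integrate(sp.simplify(u1p), x)
    u2 = sp.integrate(sp.simplify(u2p), x)
    yp = sp.simplify(u1*y1 + u2*y2)

    # substitute yp back into the ODE; the residual should simplify to 0
    residual = sp.simplify(sp.diff(yp, x, 2) + yp - f)
    print(yp)
    print(residual)
    ```

    For this choice the Wronskian is [itex]\cos^{2}x+\sin^{2}x=1[/itex], so the integrations are easy and the residual confirms that [itex]y_{p}[/itex] really does solve the inhomogeneous equation.
    
    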
     
    Last edited: Jan 28, 2015