Question about the def. of solving 2nd order ODEs through Var. of Parameters.

Je m'appelle
Ok, so I've been studying the method of variation of parameters in order to solve 2nd order ODEs, and I have a question regarding a supposition that is made in the definition of the method.

Say,

y'' + p(t)y' + q(t)y = g(t)

Then the general solution to the above equation is

y = c_1y_1(t) + c_2y_2(t) + y_p. Now, replacing c_1 and c_2 by u_1(t) and u_2(t) in the complementary solution c_1y_1(t) + c_2y_2(t) in order to find the particular solution y_p:

y_p = u_1(t)y_1(t) + u_2(t)y_2(t)

y'_p = u'_1(t)y_1(t) + u_1(t)y'_1(t) + u'_2(t)y_2(t) + u_2(t)y'_2(t)

And then the method states that

u'_1(t)y_1(t) + u'_2(t)y_2(t) = 0

And this is where I'm curious and confused at the same time. I would like to know why this can be assumed: is there some kind of proof to back it up? I mean, the person who developed this method certainly had something in mind when stating it, so what is the reasoning behind it?

Then we differentiate y'_p once again and substitute the values of y''_p, y'_p and y_p into the original equation to get the following system:

u'_1(t)y_1(t) + u'_2(t)y_2(t) = 0 (again, this is the equation I want to know why it is set to zero)

u'_1(t)y'_1(t) + u'_2(t)y'_2(t) = g(t)

So we can solve for u'_1(t) and u'_2(t), integrate to find u_1(t) and u_2(t), and obtain the particular solution of the ODE.
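(This step is not spelled out above, but it is the standard one.) Solving this 2×2 linear system for u'_1(t) and u'_2(t), for example by Cramer's rule, gives, with W(y_1,y_2)(t) = y_1(t)y'_2(t) - y_2(t)y'_1(t) the Wronskian:

u'_1(t) = -\frac{y_2(t)g(t)}{W(y_1,y_2)(t)}, \qquad u'_2(t) = \frac{y_1(t)g(t)}{W(y_1,y_2)(t)}

Integrating each and substituting into y_p = u_1(t)y_1(t) + u_2(t)y_2(t) produces the two integrals in the general solution.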

Leaving us with the general solution for the ODE

y = c_1y_1(t) + c_2y_2(t) - y_1(t)\int \frac{y_2(t)g(t)}{W(y_1,y_2)(t)}\,dt + y_2(t)\int \frac{y_1(t)g(t)}{W(y_1,y_2)(t)}\,dt
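As a quick sanity check (a SymPy sketch, not part of the original post), we can apply this formula to the concrete equation y'' + y = sec(t), whose homogeneous solutions are y_1 = cos(t) and y_2 = sin(t) with Wronskian W = 1:

```python
import sympy as sp

t = sp.symbols('t')

# Homogeneous solutions of y'' + y = 0
y1, y2 = sp.cos(t), sp.sin(t)
g = sp.sec(t)  # forcing term: y'' + y = sec(t)

# Wronskian W(y1, y2) = y1*y2' - y2*y1'  (equals 1 here)
W = sp.simplify(y1 * sp.diff(y2, t) - y2 * sp.diff(y1, t))

# u1' = -y2*g/W and u2' = y1*g/W, then integrate
u1 = -sp.integrate(y2 * g / W, t)
u2 = sp.integrate(y1 * g / W, t)
yp = u1 * y1 + u2 * y2

# Check: yp should satisfy yp'' + yp = g, i.e. the residual simplifies to 0
residual = sp.simplify(sp.diff(yp, t, 2) + yp - g)
print(sp.simplify(yp))
print(residual)
```

The residual simplifies to zero, confirming the formula; the particular solution comes out equivalent to cos(t)·ln|cos(t)| + t·sin(t), the textbook answer for this equation.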

Thanks.
 
You have defined two functions u_i, and you only have one constraint on them, which is your differential equation. You are free to choose your second constraint however you wish, and the constraint you're asking about happens to be convenient.
 
Je m'appelle said:
I would like to know why the assumption u'_1(t)y_1(t) + u'_2(t)y_2(t) = 0 can be made. Is there some kind of proof to back it up? The person who developed this method certainly had something in mind when stating it, so what is the reasoning behind it?

It's hard to guess what the original developer of the method was thinking. It might have been as simple as: "since c_1y_1 + c_2y_2 doesn't have enough flexibility to work, maybe if I let the c's be variable, I can make it work."

Keep in mind that the goal is to see whether functions u_1(t) and u_2(t) can somehow be found that will work. Since these are two unknown functions you hope to solve for, solving for them must not be as difficult as, or more difficult than, the original DE. The author cleverly noted that by setting that equation equal to zero, he avoids getting any second derivatives of the u's when y_p is substituted into the second-order DE. It isn't that he noted the above equation must be zero. Rather, it was an assumption made to keep solving for the u's as simple as possible. It then turns out that the resulting system of equations for the u derivatives can be solved, and, in principle, so can the u functions themselves, and hence y_p.
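To spell out the computation behind that (not in the reply above, but it is the standard derivation): impose the constraint u'_1y_1 + u'_2y_2 = 0, so that

y'_p = u_1(t)y'_1(t) + u_2(t)y'_2(t)

y''_p = u'_1(t)y'_1(t) + u_1(t)y''_1(t) + u'_2(t)y'_2(t) + u_2(t)y''_2(t)

Substituting into y'' + p(t)y' + q(t)y = g(t) and grouping terms gives

u_1(y''_1 + py'_1 + qy_1) + u_2(y''_2 + py'_2 + qy_2) + u'_1y'_1 + u'_2y'_2 = g(t)

The two parenthesized groups vanish because y_1 and y_2 solve the homogeneous equation. So no u'' terms ever appear, and all that remains is u'_1y'_1 + u'_2y'_2 = g(t), a first-order condition on the u's.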

Surely he must have shouted "Yes!" and celebrated with a glass of beer!
 