No. of solutions of 1st and 2nd order ODE

  • Context: Graduate 
  • Thread starter: Trave11er
  • Tags: 2nd order ODE
SUMMARY

This discussion confirms that a linear homogeneous first-order ordinary differential equation (ODE) has a one-dimensional solution space (a single linearly independent solution), while a second-order linear homogeneous ODE has a two-dimensional solution space (two linearly independent solutions). The proof relies on the coefficients being continuous and the leading coefficient being non-zero. The existence and uniqueness theorem is applied to show that every solution can be expressed as a linear combination of the independent solutions. The discussion also highlights the reduction of a second-order equation to a first-order system in vector form.

PREREQUISITES
  • Understanding of linear ordinary differential equations (ODEs)
  • Familiarity with the existence and uniqueness theorem for ODEs
  • Knowledge of vector spaces and linear combinations
  • Concept of Lipschitz continuity in the context of differential equations
NEXT STEPS
  • Study the proof of the existence and uniqueness theorem for first-order ODEs
  • Learn about the properties of linear homogeneous second-order ODEs
  • Explore the application of vector forms in solving differential equations
  • Investigate the implications of Lipschitz continuity on solution existence
USEFUL FOR

Mathematicians, students of differential equations, and anyone involved in the analysis of linear ODEs seeking to deepen their understanding of solution uniqueness and independence.

Trave11er
Well, surely there is one unique solution to a linear 1st order ODE and two linearly independent ones for a 2nd order linear ODE, but can someone share the proof of this?
 
It seems to me I just did this on a different forum! Did you ask there also?

Strictly speaking, it is not necessarily true - you have to assume nice properties of the coefficients, which may be functions of x. In particular, we need the coefficients to be continuous and the leading coefficient non-zero. You also need the equation to be "homogeneous" - that is, there are no functions of the independent variable, x, that are not multiplying the dependent variable, y, or its derivatives.

Assuming that, a first order linear equation can be written dy/dx= f(x, y). As long as f is continuous in x and "Lipschitz" in y, the existence and uniqueness theorem guarantees exactly one solution through any given initial value ("Lipschitz" is midway between "continuous" and "differentiable", so many texts take "differentiable" as a sufficient condition). Requiring the coefficients to be continuous and the leading coefficient non-zero guarantees that.
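For example, one way to see this: for the homogeneous first order equation p(x)y'+ q(x)y= 0, with p and q continuous and p never 0 on a closed interval, solving for y' gives
[tex]\frac{dy}{dx}= f(x, y)= -\frac{q(x)}{p(x)}\,y, \qquad |f(x, y_1)- f(x, y_2)|= \left|\frac{q(x)}{p(x)}\right|\,|y_1- y_2|\le L\,|y_1- y_2|[/tex]
where L is a bound for |q/p| on that interval, so f is Lipschitz in y there and the theorem applies.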

Now, it is easy to show that the set of all solutions to a linear homogeneous differential equation forms a vector space. To do that, assume f and g are solutions, put y(x)= af(x)+ bg(x), where a and b are numbers, into the equation, and then, using (af+ bg)'= af'+ bg', etc., show that you can separate everything into a( ... terms in f...)+ b(... terms in g...)= a(0)+ b(0)= 0.
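Writing this out for the second order equation py''+ qy'+ ry= 0, for instance:
[tex]p(af+ bg)''+ q(af+ bg)'+ r(af+ bg)= a\left(pf''+ qf'+ rf\right)+ b\left(pg''+ qg'+ rg\right)= a(0)+ b(0)= 0[/tex]
so af+ bg is again a solution, which is exactly the superposition (vector space) property.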

Now, as long as the coefficients are as I have said, we can use the "existence and uniqueness" theorem to show that there exists a unique solution y1 to the equation satisfying y1(a)= 1 for any specific a. Now, let y be any solution to the first order equation and let Y= y(a). It is easy to see that the function Yy1(x) satisfies the same differential equation: p(Yy1)'+ q(Yy1)= Y(py1'+ qy1)= Y(0)= 0. Since y and Yy1 both satisfy the equation and both take the value Y at x= a, the "uniqueness" theorem then requires that y(x)= Yy1(x).
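As a concrete illustration, take y'+ y= 0 with a= 0. The solution with y1(0)= 1 is
[tex]y_1(x)= e^{-x}, \qquad y(x)= y(0)\,e^{-x}= Y\,y_1(x)[/tex]
so every solution is a multiple of the single independent solution y1 - the solution space is one dimensional.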

Similarly with second order differential equations, with some variations. Given the second order linear, homogeneous d.e. py''+ qy'+ ry= 0, with p, q, and r continuous functions and p never 0, we can let Y1= y', Y2= y and write this as a pair of first order equations. Since Y1= y', y''= Y1', so we have pY1'+ qY1+ rY2= 0 and Y2'= Y1. We can then write that as a "vector" equation, taking our vector as Y= (Y1, Y2), so that our two equations become the single vector equation [tex]\frac{dY}{dx}= \frac{d}{dx}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}= \begin{pmatrix}-q(x)/p(x) & -r(x)/p(x) \\ 1 & 0\end{pmatrix}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}[/tex]
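For instance, y''+ y= 0 (p= 1, q= 0, r= 1), with Y1= y' and Y2= y, becomes
[tex]\frac{d}{dx}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}= \begin{pmatrix}0 & -1 \\ 1 & 0\end{pmatrix}\begin{pmatrix}Y_1 \\ Y_2\end{pmatrix}[/tex]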

One can, without too much trouble, cast the "existence and uniqueness" theorem for first order problems into "vector" form - the proof follows the same logic. The main difference is that the "additional condition" you need for uniqueness becomes
[tex]Y(a)= \begin{pmatrix}Y_1(a) \\ Y_2(a)\end{pmatrix}= \begin{pmatrix}y'(a) \\ y(a)\end{pmatrix}= \begin{pmatrix}B \\ A\end{pmatrix}[/tex]
the standard "initial value" condition for such a problem, with A= y(a) and B= y'(a).

Now, one can show that there exists a unique solution y1 to the differential equation satisfying y1(a)= 1, y1'(a)= 0, and another solution y2 satisfying y2(a)= 0, y2'(a)= 1. These two solutions are automatically linearly independent, as shown below.
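Indeed, if c1y1+ c2y2= 0 identically, then evaluating the combination and its derivative at x= a gives
[tex]c_1 y_1(a)+ c_2 y_2(a)= c_1= 0, \qquad c_1 y_1'(a)+ c_2 y_2'(a)= c_2= 0[/tex]
so the only linear relation between y1 and y2 is the trivial one.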

Finally, given any solution Y(x) to the differential equation, let A= Y(a) and B= Y'(a) and show that y(x)= Ay1(x)+ By2(x) satisfies the differential equation with y(a)= A, y'(a)= B, and so, by "uniqueness", Y(x)= Ay1(x)+ By2(x). A general extension of that to n dimensions shows that the set of all solutions to an nth order linear, homogeneous d.e. forms an n-dimensional vector space, and so every solution can be written as a linear combination of n linearly independent solutions.
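To tie it back to a familiar example: for y''+ y= 0 with a= 0, the two solutions constructed this way are
[tex]y_1(x)= \cos x, \quad y_2(x)= \sin x, \qquad Y(x)= Y(0)\cos x+ Y'(0)\sin x[/tex]
exactly two linearly independent solutions, with every other solution a linear combination of them.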
 