Picard's existence theorem for DE

In summary, the conversation discusses a maths lecturer's explanation of why certain special differential equations have a unique solution. The superposition principle is mentioned, but the lecturer refers to Picard's existence theorem to explain why, once initial values are fixed, only one solution exists. The person asking the question struggles to understand the theorem and asks whether it can be grasped without advanced calculus proofs. Another poster explains that, intuitively, the theorem can be understood by imagining integrating the ODE numerically. That explanation is challenged with a counterexample, dy/dx = y^(1/2) with y(0) = 0, which has infinitely many solutions because it fails the theorem's hypotheses. The original questioner clarifies their question, and the conversation concludes with a proof of the Banach fixed point theorem, which underlies the existence theorem.
  • #1
Defennder
Homework Helper

Homework Statement



I was talking with my maths lecturer about how he knew that certain special differential equations, such as y'' = y, have only y = Ae^x + Be^(-x) as possible solutions. I understand the superposition principle, but not why only combinations of e^x and e^(-x) satisfy the DE. Why can't there be some other functions which also satisfy it, even if we can't think of any at the moment?

He simply referred me to Picard's existence theorem, which I found here:

http://en.wikipedia.org/wiki/Picard–Lindelöf_theorem

but I don't get it at all. Is this something that can be understood without having to read through advanced calculus proofs?

Homework Equations



Picard's existence theorem

The Attempt at a Solution

 
  • #2
Intuitively, you can understand it by imagining integrating the ODE numerically. If you know the value of the function and its derivative at, say, x=0, that's all you need to determine the rest of the function uniquely. Now by combining multiples of e^x and e^(-x) you can produce a function that has those same initial values. So all solutions can be written that way.
 
  • #3
Actually I don't get what you mean. The "function" which you refers is the anti-derivative of y? What has initial values got to do with the GS? y''=y doesn't specify any initial values.
 
  • #4
The function to which I 'refers' is the solution to the ODE. A*e^x+B*e^(-x) saturates all possible solutions. There aren't any others.
 
  • #5
But the whole question is to PROVE that! You can't simply assert that Ae^x+ Be^(-x) "saturates" all possible solutions and then say "therefore there are no others".

And talking about integrating the function numerically doesn't help. Yes, integrating the function numerically will give a solution. It does not follow that there cannot be a solution that the numerical method does not give.

For example, the differential equation dy/dx= y^(1/2), with initial condition y(0)= 0, can be integrated as y^(-1/2)dy= dx, so 2y^(1/2)= x+ C. So y(x)= (x/2+ C)^2. Taking y= 0 when x= 0 gives y(0)= C^2= 0, so C= 0, and we have y(x)= x^2/4. But it is obvious that y(x)= 0 for all x also satisfies the differential equation, and if you were to do, say, an Euler's method numerical solution, it is y(x)= 0 that you would get. Actually, there are an infinite number of solutions to this problem: for any nonnegative number a, take y= 0 for x at or below a and y= ((x- a)/2)^2 for x above a; each choice of a gives a different solution.

Notice that this is of the form dy/dx= f(x,y) with f(x,y)= y^(1/2). The partial derivative of f with respect to y, f_y= (1/2)y^(-1/2), does not exist at y= 0, so this does not satisfy the hypotheses of Picard's existence and uniqueness theorem.
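The failure of uniqueness here is easy to see numerically. Below is a minimal sketch (my own illustration; the function names are made up) of forward Euler applied to y' = y^(1/2), y(0) = 0: the method reproduces only the trivial solution y = 0, while y = x^2/4 is an equally valid solution that it never finds.

```python
# Sketch: Euler's method on y' = sqrt(y), y(0) = 0, stays on the
# trivial solution y = 0 forever, even though y = x^2/4 also
# satisfies the equation and the same initial condition.
import math

def euler(f, y0, x_end, steps):
    """Simple forward-Euler integration of y' = f(x, y) from x = 0."""
    x, y = 0.0, y0
    h = x_end / steps
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: math.sqrt(y)

y_euler = euler(f, 0.0, 2.0, 10000)   # numerical solution at x = 2
y_other = 2.0**2 / 4                  # the non-trivial solution x^2/4 at x = 2

print(y_euler)  # 0.0 -- Euler never leaves the trivial solution
print(y_other)  # 1.0 -- yet x^2/4 is also a valid solution
```

Since f(0, 0) = 0, every Euler step adds zero, so the iteration can never "discover" the branch that leaves y = 0.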
 
  • #6
The question doesn't ask for a proof. The OP was simply looking for a way to 'understand' why there were only two linearly independent solution WITHOUT going through the formal proof.
 
  • #7
I just reread my first reply and realized I was probably half-asleep when I wrote part of it. My question was why knowing that a particular function satisfies the ODE and its "initial values", whatever that means, implies that we have found all possible solutions to the DE. If explaining this part requires the technical advanced calculus proof, then never mind.

What does it mean to "imagine integrating the function numerically"? I mean how does one integrate y''=y numerically? It's a second order DE and I can't just do the simple separation of variables twice, can I?
 
  • #8
Defennder said:
I just reread my first reply and realized I was probably half-asleep when I wrote part of it. My question was why knowing that a particular function satisfies the ODE and its "initial values", whatever that means, implies that we have found all possible solutions to the DE. If explaining this part requires the technical advanced calculus proof, then never mind.
?? Knowing that a particular function satisfies the ODE and its "initial values" DOESN'T mean "we have found all possible solutions"! That was my point, and the point of the "Existence and Uniqueness" theorem. IF an initial value problem satisfies the conditions of the "Existence and Uniqueness" theorem, then knowing that a given function satisfies the ODE and the initial conditions tells you that you have found the only possible solution: that's what "unique" means, after all. I don't think there is any simple way of "understanding" that theorem without knowing the "advanced calculus" required in its proof. Of course, it is relatively easy to use it without understanding its proof.

What does it mean to "imagine integrating the function numerically"? I mean how does one integrate y''=y numerically? It's a second order DE and I can't just do the simple separation of variables twice, can I?
No, "integrating the function numerically" does not refer to separation of variables. In a second order equation like that, you define u= y' so that u'= y''= y, and you have TWO first order equations: y'= u and u'= y. Now, if you know y(0) and y'(0)= u(0), you can hold u at u(0) and y at y(0) over a short interval and "integrate": from y'= u(0), y(x)= u(0)x+ y(0), and from u'= y(0), u(x)= y(0)x+ u(0). Use those to find the values of y(a), u(a) for a some short distance from 0, and repeat.
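The stepping procedure described above can be sketched in a few lines (a rough illustration, not code from the thread): reduce y'' = y to the system y' = u, u' = y and apply forward Euler. With y(0) = 2, y'(0) = 0 the exact solution is e^x + e^(-x) = 2 cosh(x), and the numerical solution tracks it.

```python
# Sketch: y'' = y rewritten as the first-order system y' = u, u' = y,
# integrated by forward Euler from the initial values y(0)=2, y'(0)=0.
import math

def euler_system(y0, u0, x_end, steps):
    """Forward Euler for the system y' = u, u' = y, starting at x = 0."""
    y, u = y0, u0
    h = x_end / steps
    for _ in range(steps):
        # tuple assignment: both updates use the OLD values of y and u
        y, u = y + h * u, u + h * y
    return y

y_num = euler_system(2.0, 0.0, 1.0, 200000)   # y(0) = 2, y'(0) = 0
y_exact = 2 * math.cosh(1.0)                  # e^1 + e^(-1)

print(abs(y_num - y_exact))   # small (about 1e-5): the numerics track 2*cosh(x)
```

The point of the sketch: once y(0) and y'(0) are fixed, every step is forced, which is the intuition behind uniqueness when the hypotheses of the theorem hold.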
 
  • #9
Ok then thanks, guess I'll have to plough through the maths if I'm interested to know why.
 
  • #10
How does one prove this theorem using the Banach Fixed Point Theorem?
Do you know?
I need it...
 
  • #11
Banach fixed point theorem: If X is a complete metric space, S is a non-empty subset of X, and f(x) is a contraction map of S into itself, then f has a unique fixed point.

Some preliminary definitions:
A metric on a set, X, is a "distance" function: a function, d, from [itex]X\times X[/itex] to the real numbers such that:

1) [itex]d(x,y)\ge 0[/itex].
(The distance between two points is never negative)

2) d(x, y)= 0 if and only if x= y.
(The distance between a point and itself is 0 but the distance between two distinct points is non-zero)

3) d(x,y)= d(y, x)
(The distance from x to y is the same as the distance from y to x)

4) [itex]d(x,y)\le d(x,z)+ d(y,z)[/itex]
(The "triangle inequality": going around two sides of a triangle, from x to z and then to y, is not less than going directly from x to y: the shortest distance between two points is a straight line.)

A "metric space" is a set of points with a metric assigned.

A "Cauchy sequence" is a sequence of points such that distances between the points goes to 0 as you go out along the sequence: [itex]d(p_n, p_m)[/itex] goes to 0 as n and m go to infinity independently.

A complete space is one in which all Cauchy sequences converge. For example, the set of real numbers is complete, the set of rational numbers is not. That is, in a sense, the "defining" difference between them.

A function f(x) is a "contraction" map if and only if there is a number 0< c< 1 such that [itex]d(f(x), f(y))\le cd(x,y)[/itex]- it "contracts" in the sense that f(x) and f(y) are closer together than x and y were. Notice this is NOT just "d(f(x),f(y))< d(x,y)". It turns out that is not enough for this theorem. It's not difficult to prove that any contraction map is continuous.

A "fixed point" of a function, f, is a point x, such that f(x)= x.

Basic idea of the proof. Suppose S is some set of points and f is a contraction map that maps S to itself. If we apply f to every point of S, the result, f(S), is smaller than S and, since f maps S into itself, is a subset of S. If we do it again, [itex]f^2(S)[/itex] is a yet smaller subset of both S and f(S). If we continue doing this, in the limit the set contracts to a single point, x, which was in all the sets in the sequence. Applying f to x is the same, now, as applying f to "the entire set" and must land "inside" the single point itself: f(x)= x.
How do we know that S contracts to a single point? For example, let S be the disk of radius 2R. Isn't it possible that f(S) is the disk of radius (3/2)R, f(f(S)) is the disk of radius (5/4)R, and, in general, f applied n times to S gives the disk of radius ((n+1)/n)R? That contracts to the disk of radius R, not a single point! That would be a map that satisfies d(f(x),f(y))< d(x,y)- the "c" we had before is not constant but goes to 1 as we repeat f. That's why we need 'c< 1'.
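For a concrete toy example (my own, chosen for illustration, not from the discussion above): f(x) = cos(x) is a contraction on S = [0, 1], since |f'(x)| = |sin(x)| is at most sin(1) < 1 there and f maps [0, 1] into [cos(1), 1], a subset of [0, 1]. Iterating f from any starting point in S converges to the unique fixed point, just as the theorem predicts.

```python
# Sketch: fixed-point iteration for the contraction f(x) = cos(x) on [0, 1].
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=10000):
    """Repeatedly apply f until successive iterates differ by less than tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

fp = iterate_to_fixed_point(math.cos, 0.5)
print(fp)                      # ~0.7390851332, the unique fixed point of cos
print(abs(math.cos(fp) - fp))  # essentially 0: f(fp) = fp
```

Because the contraction constant here is about sin(1) = 0.84, each iteration shrinks the distance to the fixed point by a fixed factor, so convergence takes only a few hundred steps even for tight tolerances.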


Finally: the proof

With S a non-empty subset of the complete metric space X, and f a contraction map that maps S to itself, let [itex]x_0[/itex] be any point of S. Let [itex]x_1= f(x_0)[/itex], [itex]x_2= f(x_1)[/itex], and, in general, [itex]x_n= f(x_{n-1})[/itex]. We will prove that [itex]\{x_n\}[/itex] is a Cauchy sequence and so converges.

Lemma: [itex]d(x_n, x_{n+1})\le c^n d(x_0, x_1)[/itex]
Proof by induction:
If n= 0, this says [itex]d(x_0,x_1)\le d(x_0,x_1)[/itex] which is certainly true.

Suppose [itex]d(x_n, x_{n+1})\le c^n d(x_0, x_1)[/itex]. Then [itex]d(x_{n+1},x_{n+2})= d(f(x_n),f(x_{n+1}))\le cd(x_n, x_{n+1})\le c(c^n d(x_0,x_1))= c^{n+1}d(x_0,x_1)[/itex], so we are done.


Of course, since c< 1, [itex]c^n[/itex] goes to 0 as n goes to infinity, so we have proved that [itex]d(x_n,x_{n+1})[/itex] goes to 0 as n goes to infinity.

But proving that the distance between consecutive terms goes to 0 is not enough. For example, if [itex]a_n= \sum_{i=1}^n 1/i[/itex], then the distance between [itex]a_n[/itex] and [itex]a_{n+1}[/itex] is 1/(n+1), which goes to 0, but [itex]\{a_n\}[/itex] is the sequence of partial sums of the harmonic series, which does not converge. To show that [itex]\{x_n\}[/itex] is a Cauchy sequence we must show that [itex]d(x_n,x_m)[/itex] goes to 0 as n and m go to infinity independently- that is, without any relation such as m= n+1.
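A quick numerical check of that caveat (a sketch, with a made-up helper name): consecutive partial sums of the harmonic series get arbitrarily close, yet the sums themselves keep growing, roughly like ln(n), so "consecutive distances go to 0" alone does not give convergence.

```python
# Sketch: consecutive partial sums of the harmonic series are close,
# but the sequence of sums still diverges.
def harmonic(n):
    """n-th partial sum a_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / i for i in range(1, n + 1))

n = 10**5
gap = harmonic(n + 1) - harmonic(n)   # exactly 1/(n+1), about 1e-5

print(gap)           # tiny: consecutive sums are very close
print(harmonic(n))   # ~12.09, and it keeps growing like ln(n) + 0.577
```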

Because the metric is symmetric (d(x,y)= d(y,x)), we can, without loss of generality, assume m> n. For any such n, m, by the "triangle inequality", [itex]d(x_n,x_m)\le d(x_n, x_{n+1})+ d(x_{n+1},x_m)[/itex]. By the "triangle inequality" again, [itex]d(x_{n+1},x_m)\le d(x_{n+1},x_{n+2})+ d(x_{n+2},x_m)[/itex], so that [itex]d(x_n,x_m)\le d(x_n,x_{n+1})+ d(x_{n+1},x_{n+2})+ d(x_{n+2},x_m)[/itex]. Repeating that m-n times, we have
[tex]d(x_n,x_m)\le d(x_n,x_{n+1})+ d(x_{n+1},x_{n+2})+ \cdot\cdot\cdot+ d(x_{m-1},x_m)[/tex]

Now these are all consecutive points so we can apply [itex]d(x_i,x_{i+1})\le c^id(x_0,x_1)[/itex]:
[tex]d(x_n,x_m)\le c^nd(x_0,x_1)+ c^{n+1}d(x_0,x_1)+ \cdot\cdot\cdot+ c^{m-1}d(x_0,x_1)[/tex]
[tex]d(x_n,x_m)\le c^n d(x_0,x_1)(1+ c+ \cdot\cdot\cdot+ c^{m-n-1})[/tex]
Since c> 0, adding higher power terms to that last sum can only make it larger, so we also have
[tex]d(x_n,x_m)\le c^n d(x_0,x_1)(\sum_{i=0}^\infty c^i)[/tex]

But now that sum is a geometric series. Since c< 1, it converges and its sum is [itex]\frac{1}{1- c}[/itex]. So for any n, m, we have
[tex]d(x_n,x_m)\le \frac{d(x_0,x_1)}{1- c} c^n[/tex]
Notice that m has disappeared from the formula. But since m> n, as n goes to infinity, so must m. The bound is a constant times [itex]c^n[/itex], so as m and n go to infinity, the distance between [itex]x_n[/itex] and [itex]x_m[/itex] goes to 0. This is a Cauchy sequence and, because our space is complete, it converges.

Now, let x be the limit of the sequence. Since f is a contraction map on S it is continuous on S, and [itex]f(x)= f(\lim x_n)= \lim f(x_n)= \lim x_{n+1}[/itex]. But that is the same sequence as before- its limit is x. So f(x)= x.

That proves that f has a fixed point but not that it is unique. Had we started with some other point [itex]x_0[/itex], we would get a different sequence and possibly a different limit. However, it is easy to prove that a contraction map cannot have two distinct fixed points.

Suppose f(x)= x and f(y)= y. Then, since f is a contraction, [itex]d(x,y)= d(f(x), f(y))\le c d(x,y)[/itex], so [itex](1- c)d(x,y)\le 0[/itex]. Since c< 1, the factor 1- c is positive, and d(x,y) is never negative, so we must have d(x,y)= 0 and therefore x= y.

End of proof.
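To connect this back to the original ODE question: for y' = y, y(0) = 1, the contraction in question is the Picard map (T phi)(x) = 1 + integral from 0 to x of phi(t) dt, acting on a suitable space of functions. The sketch below (my own illustration; the names are made up) represents phi as a list of polynomial coefficients; each application of T produces one more Taylor term of the unique fixed point, e^x.

```python
# Sketch: Picard iteration for y' = y, y(0) = 1, done exactly on
# polynomial coefficients [c0, c1, ...] with rational arithmetic.
from fractions import Fraction

def picard_step(coeffs):
    """Apply T: integrate the polynomial term-by-term, then add the '1'
    from the initial condition as the new constant term."""
    integrated = [Fraction(0)] + [c / (k + 1) for k, c in enumerate(coeffs)]
    integrated[0] = Fraction(1)
    return integrated

phi = [Fraction(1)]           # phi_0(x) = 1
for _ in range(5):
    phi = picard_step(phi)

print(phi)  # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120 -- the Taylor series of e^x
```

Each iterate agrees with e^x to one more order, which is the Banach fixed point convergence made visible: the iterates form a Cauchy sequence whose limit is the unique solution.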
 
  • #12
Thank you a lot, HallsofIvy
I tried to prove it... so I already had a proof... and now that I have compared it with yours, I can see that I did it quite well :)
Thanks...
 

1. What is Picard's existence theorem for DE?

Picard's existence theorem for DE (differential equations) is a mathematical theorem that guarantees the existence and uniqueness of solutions to certain differential equations. It states that if the right-hand side of a first-order equation y' = f(x, y) is continuous and Lipschitz continuous in y, then the initial value problem has exactly one solution near the initial point.

2. What types of differential equations does Picard's theorem apply to?

Picard's theorem applies to first-order ordinary differential equations (ODEs) of the form y' = f(x, y), and, by rewriting higher-order equations as systems of first-order equations, to higher-order ODEs as well, provided f is continuous and Lipschitz continuous in y.

3. How does Picard's theorem differ from other existence theorems for DE?

Picard's theorem is essentially the same result as the Cauchy-Lipschitz theorem; the two names are used interchangeably. It differs from Peano's existence theorem, which assumes only continuity of f: Peano guarantees existence but not uniqueness, while Picard's stronger Lipschitz hypothesis guarantees uniqueness as well.

4. Can Picard's theorem be used to find solutions to DE?

Not directly. Picard's theorem guarantees that a solution exists, and its proof is constructive in principle (Picard iteration produces successive approximations that converge to the solution), but in practice one usually finds explicit solutions with techniques such as separation of variables, or approximates them with numerical methods.

5. Are there any limitations to Picard's theorem?

Yes. Picard's theorem requires the right-hand side to be continuous and Lipschitz continuous; when those hypotheses fail - for a discontinuous or singular equation, or one like dy/dx = y^(1/2) at y = 0 - the theorem does not apply, and uniqueness can genuinely fail, as the example earlier in the thread shows. The conclusion is also local: it guarantees a solution only on some interval around the initial point.
