# Picard's existence theorem for DE

1. Jan 23, 2008

### Defennder

1. The problem statement, all variables and given/known data

I was talking with my maths lecturer about how he knew that certain special differential equations, such as y'' = y, have only y = Ae^x + Be^-x as possible solutions. I understand the superposition principle, but not why only e^x and e^-x satisfy the DE. Why can't there be some other functions which also satisfy it, even if we can't think of any at the moment?

He simply referred me to Picard's existence theorem, which I found here:

http://en.wikipedia.org/wiki/Picard–Lindelöf_theorem

but I don't get it at all. Is this something that can be understood without having to read through advanced calculus proofs?

2. Relevant equations

Picard's existence theorem

3. The attempt at a solution

2. Jan 23, 2008

### Dick

Intuitively, you can understand it by imagining integrating the ODE numerically. If you know the value of the function and its derivative at, say, x=0, that's all you need to determine the rest of the function uniquely. Now, by combining multiples of e^x and e^(-x), you can produce a function that has those same initial values. So all solutions can be written that way.
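The "match any initial data" step can be checked concretely (my own sketch, not from the thread): y = A e^x + B e^-x gives y(0) = A + B and y'(0) = A - B, a 2x2 linear system that is solvable for any prescribed pair of initial values.

```python
def coefficients(y0, yp0):
    """Solve A + B = y(0), A - B = y'(0) for the combination A*e^x + B*e^-x."""
    A = (y0 + yp0) / 2
    B = (y0 - yp0) / 2
    return A, B

# Any initial data is matched by exactly one (A, B) pair:
A, B = coefficients(3.0, 1.0)
print(A, B)          # 2.0 1.0
print(A + B, A - B)  # 3.0 1.0 -- recovers the prescribed initial data
```

Since the system always has exactly one solution, every set of initial values is hit by exactly one combination, which is the heart of Dick's argument.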

3. Jan 24, 2008

### Defennder

Actually, I don't get what you mean. The "function" to which you refer is the anti-derivative of y? What have initial values got to do with the general solution? y''=y doesn't specify any initial values.

4. Jan 24, 2008

### Dick

The function to which I refer is the solution to the ODE. A*e^x+B*e^(-x) saturates all possible solutions. There aren't any others.

5. Jan 25, 2008

### HallsofIvy

But the whole question is to PROVE that! You can't simply assert that Ae^x+ Be^(-x) "saturates" all possible solutions and then say "therefore there are no others".

And talking about integrating the function numerically doesn't help. Yes, integrating the function numerically will give a solution. It does not follow that there cannot be a solution that the numerical method does not give.

For example, the differential equation $dy/dx = y^{1/2}$, with initial condition y(0) = 0, can be integrated by separating variables: $y^{-1/2}\,dy = dx$, so $2y^{1/2} = x + C$ and $y(x) = (x/2 + C)^2$. Taking y = 0 when x = 0 gives $y(0) = C^2 = 0$, so C = 0 and we have $y(x) = x^2/4$. But it is obvious that y(x) = 0 for all x also satisfies the differential equation, and if you were to do, say, an Euler's method numerical solution, it is y(x) = 0 that you would get. Actually, there are an infinite number of solutions to this problem: for any $a\ge 0$, take y(x) = 0 for $x\le a$ and $y(x) = ((x-a)/2)^2$ for $x > a$; the pieces patch together smoothly at x = a.

Notice that this is of the form dy/dx = f(x,y) with $f(x,y) = y^{1/2}$. The partial derivative of f with respect to y, $f_y = (1/2)y^{-1/2}$, does not exist at y = 0, so this equation does not satisfy the hypotheses of Picard's existence and uniqueness theorem.
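That failure mode is easy to see numerically (a sketch of my own, not from the thread): Euler's method started at y(0) = 0 never leaves 0, even though y = x^2/4 is an equally valid solution.

```python
def euler_sqrt(y0, x_end=2.0, n=1000):
    """Euler's method for dy/dx = y**0.5 starting from y(0) = y0."""
    h = x_end / n
    y = y0
    for _ in range(n):
        y = y + h * y**0.5  # each step adds h * sqrt(y); sqrt(0) = 0
    return y

print(euler_sqrt(0.0))  # 0.0 -- the method only ever finds the zero solution
print(2.0**2 / 4)       # 1.0 -- but y = x^2/4 also solves the IVP at x = 2
```

The numerical method silently picks one solution of a non-unique problem, which is exactly why "integrate it numerically" is not a proof of uniqueness.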

6. Jan 25, 2008

### Dick

The question doesn't ask for a proof. The OP was simply looking for a way to 'understand' why there are only two linearly independent solutions WITHOUT going through the formal proof.

7. Jan 25, 2008

### Defennder

I just reread my first reply and realised I was probably half-asleep when I wrote part of it. My question was: why does knowing that a particular function satisfies the ODE and its "initial values", whatever that means, imply that we have found all possible solutions to the DE? If explaining this part requires the technical advanced calculus proof, then never mind.

What does it mean to "imagine integrating the function numerically"? I mean, how does one integrate y'' = y numerically? It's a second-order DE, and I can't just do simple separation of variables twice, can I?

8. Jan 25, 2008

### HallsofIvy

?? Knowing that a particular function satisfies the ODE and its "initial values" DOESN'T mean "we have found all possible solutions"! That was my point, and the point of the "Existence and Uniqueness" theorem. IF you know that an initial value problem satisfies the conditions of the "Existence and Uniqueness" theorem, then knowing that a given function satisfies the equation and the initial conditions tells you that you have found all possible solutions: that's what "unique" means, after all. I don't think there is any simple way of "understanding" that theorem without knowing the "advanced calculus" required in its proof. Of course, it is relatively easy to use it without understanding its proof.

No, "integrating the function numerically" does not refer to separation of variables. In a second-order equation like that, you define u = y' so that u' = y'' = y, and you have TWO first-order equations: y' = u and u' = y. Now, if you know y(0) and y'(0) = u(0), you can take one step: treating u as the constant u(0), y' = u(0) gives y(x) ≈ u(0)x + y(0), and treating y as the constant y(0), u' = y(0) gives u(x) ≈ y(0)x + u(0). Use those to find the values y(a), u(a) for a some short distance from 0, and repeat from there.
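The step-and-repeat procedure described above can be written out directly (my own minimal sketch). With y(0) = 1, y'(0) = 0 the scheme should reproduce cosh x, since cosh x = (e^x + e^-x)/2 solves y'' = y with exactly those initial values.

```python
import math

def integrate_second_order(y0, u0, x_end, steps):
    """Repeatedly advance the system y' = u, u' = y over short intervals
    of length h, as described: y(a+h) = y(a) + u(a)*h, u(a+h) = u(a) + y(a)*h."""
    h = x_end / steps
    y, u = y0, u0
    for _ in range(steps):
        # both updates use the values from the start of the interval
        y, u = y + u * h, u + y * h
    return y

print(integrate_second_order(1.0, 0.0, 1.0, 100_000))  # ≈ cosh(1) ≈ 1.5431
print(math.cosh(1.0))
```

The approximation converges to cosh(1) as the step size shrinks, illustrating Dick's point: the initial data (y(0), y'(0)) pin down the whole solution.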

9. Jan 27, 2008

### Defennder

Ok then thanks, guess I'll have to plough through the maths if I'm interested to know why.

10. Feb 16, 2009

### elenamk

How do you prove this theorem using the Banach Fixed Point Theorem?
Do you know?
I need it...

11. Feb 17, 2009

### HallsofIvy

Banach fixed point theorem: If X is a complete metric space, S is a non-empty closed subset of X, and f is a contraction map of S into itself, then f has a unique fixed point in S.

Some preliminary definitions:
A metric on a set, X, is a "distance" function: a function, d, from $X\times X$ to the real numbers such that:

1) $d(x,y)\ge 0$.
(The distance between two points is never negative)

2) d(x, y)= 0 if and only if x= y.
(The distance between a point and itself is 0 but the distance between two distinct points is non-zero)

3) d(x,y)= d(y, x)
(The distance from x to y is the same as the distance from y to x)

4) $d(x,y)\le d(x,z)+ d(z,y)$
(The "triangle inequality": going around two sides of a triangle, from x to z and then to y, is never shorter than going directly from x to y: the shortest distance between two points is a straight line.)

A "metric space" is a set of points with a metric assigned.

A "Cauchy sequence" is a sequence of points such that distances between the points goes to 0 as you go out along the sequence: $d(p_n, p_m)$ goes to 0 as n and m go to infinity independently.

A complete space is one in which all Cauchy sequences converge. For example, the set of real numbers is complete, the set of rational numbers is not. That is, in a sense, the "defining" difference between them.

A function f(x) is a "contraction" map if and only if there is a number 0< c< 1 such that $d(f(x), f(y))\le cd(x,y)$- it "contracts" in the sense that f(x) and f(y) are closer together than x and y were. Notice this is NOT just "d(f(x),f(y))< d(x,y)". It turns out that is not enough for this theorem. It's not difficult to prove that any contraction map is continuous.

A "fixed point" of a function, f, is a point x, such that f(x)= x.

Basic idea of the proof: suppose S is some set of points and f is a contraction map that maps S to itself. If we apply f to every point of S, the result, f(S), is smaller than S and, since f maps S into itself, is a subset of S. If we do it again, $f^2(S)$ is a yet smaller subset of both S and f(S). If we continue doing this, in the limit the sets contract to a single point x, which was in all the sets of the sequence. Applying f to x is then the same as applying f to that limiting set, so the image must land inside the single point itself: f(x) = x.
How do we know that S contracts to a single point? For example, let S be the disk of radius 2R. Isn't it possible that f(S) is the disk of radius (3/2)R, f(f(S)) is the disk of radius (5/4)R, and, in general, $f^n(S)$ is the disk of radius $(1+ 2^{-n})R$? That contracts to the disk of radius R, not a single point! That would be a map satisfying d(f(x),f(y)) < d(x,y): the "c" we had before is not a fixed constant but approaches 1 as we repeat f. That's why we need c < 1.
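A concrete instance of this iteration (my own example, not part of the proof): f(x) = cos x is a contraction on [0, 1], since |f'(x)| = sin x ≤ sin 1 ≈ 0.84 < 1 there, and iterating it from any starting point converges to its unique fixed point x = cos x ≈ 0.7391.

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x_{n+1} = f(x_n) until consecutive terms are within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

fp = iterate_to_fixed_point(math.cos, 0.0)
print(fp)                 # ≈ 0.7390851332
print(math.cos(fp) - fp)  # ≈ 0 -- it really is a fixed point
```

Note that the stopping rule (consecutive terms close together) is justified precisely by the Cauchy-sequence argument in the proof below: for a contraction, small consecutive distances force the whole tail of the sequence to cluster.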

Finally: the proof

With S a non-empty subset of the complete metric space X and f a contraction map from S to itself, let $x_0$ be any point of S. Let $x_1= f(x_0)$, $x_2= f(x_1)$, and, in general, $x_n= f(x_{n-1})$. We will prove that $\{x_n\}$ is a Cauchy sequence and so converges.

Lemma: $d(x_n, x_{n+1})\le c^n d(x_0, x_1)$
Proof by induction:
If n= 0, this says $d(x_0,x_1)\le d(x_0,x_1)$ which is certainly true.

Suppose $d(x_n, x_{n+1})\le c^n d(x_0, x_1)$. Then $d(x_{n+1},x_{n+2})= d(f(x_n),f(x_{n+1}))\le c\,d(x_n, x_{n+1})\le c(c^n d(x_0,x_1))= c^{n+1}d(x_0,x_1)$, so we are done.

Of course, since c < 1, $c^n$ goes to 0 as n goes to infinity, so we have proved that $d(x_n,x_{n+1})$ goes to 0 as n goes to infinity.

But proving that the distance between consecutive terms goes to 0 is not enough. For example, if $a_n= \sum_{i=1}^n 1/i$, then the distance between $a_n$ and $a_{n+1}$ is $1/(n+1)$, which goes to 0, but $\{a_n\}$ is the sequence of partial sums of the harmonic series, which does not converge. To show that $\{x_n\}$ is a Cauchy sequence we must show that $d(x_n,x_m)$ goes to 0 as n and m go to infinity independently - that is, without any relation such as m = n+1.

Because the metric is symmetric (d(x,y)= d(y,x)), we can, without loss of generality, assume m > n. For any such n, m, by the triangle inequality, $d(x_n,x_m)\le d(x_n, x_{n+1})+ d(x_{n+1},x_m)$. By the triangle inequality again, $d(x_{n+1},x_m)\le d(x_{n+1},x_{n+2})+ d(x_{n+2},x_m)$, so that $d(x_n,x_m)\le d(x_n,x_{n+1})+ d(x_{n+1},x_{n+2})+ d(x_{n+2},x_m)$. Repeating that m-n times, we have
$$d(x_n,x_m)\le d(x_n,x_{n+1})+ d(x_{n+1},x_{n+2})+ \cdots+ d(x_{m-1},x_m)$$
Now these are all consecutive points, so we can apply $d(x_i,x_{i+1})\le c^i d(x_0,x_1)$:
$$d(x_n,x_m)\le c^n d(x_0,x_1)+ c^{n+1}d(x_0,x_1)+ \cdots+ c^{m-1}d(x_0,x_1)$$
$$d(x_n,x_m)\le c^n d(x_0,x_1)\left(1+ c+ \cdots+ c^{m-n-1}\right)$$
Since c > 0, adding higher power terms to that last sum can only make it larger, so we also have
$$d(x_n,x_m)\le c^n d(x_0,x_1)\sum_{i=0}^\infty c^i$$

But now that sum is a geometric series. Since c < 1, it converges and its sum is $\frac{1}{1- c}$. So for any n, m, we have
$$d(x_n,x_m)\le \frac{d(x_0,x_1)}{1- c}\, c^n$$
Notice that m has disappeared from the formula; since m > n, as n goes to infinity, so must m. The bound is a constant times $c^n$, so as m and n go to infinity, the distance between $x_n$ and $x_m$ goes to 0. This is a Cauchy sequence and, because our space is complete, it converges.

Now, let x be the limit of the sequence. Since f is a contraction map on S it is continuous on S, so $f(x)= f(\lim x_n)= \lim f(x_n)= \lim x_{n+1}$. But that is the same sequence as before, so its limit is x: f(x) = x.

That proves that f has a fixed point but not that it is unique. Had we started with some other point $x_0$, we would get a different sequence and possibly a different limit. However, it is easy to prove that a contraction map cannot have two distinct fixed points.

Suppose f(x)= x and f(y)= y. Then $d(x,y)= d(f(x), f(y))\le c\,d(x,y)$, so $(1- c)d(x,y)\le 0$. Since c < 1 and d(x,y) is never negative, we must have d(x,y)= 0 and so x = y.

End of proof.
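To connect this back to Picard's theorem (my own sketch, assuming the standard construction): the Picard proof applies the fixed point theorem to the map $T(y)(x)= y_0+ \int_0^x f(t, y(t))\,dt$ on a space of functions, where T is a contraction if the interval is short enough. Iterating T numerically for y' = y, y(0) = 1 shows the iterates converging to the unique solution $e^x$:

```python
import numpy as np

# Grid on [0, 0.5]; on a short enough interval T is a contraction.
x = np.linspace(0.0, 0.5, 501)
h = x[1] - x[0]

def picard_step(y):
    """T(y)(x) = 1 + integral_0^x y(t) dt for the IVP y' = y, y(0) = 1,
    with the integral approximated by a cumulative trapezoid rule."""
    cumulative = np.concatenate(
        ([0.0], np.cumsum((y[1:] + y[:-1]) * h / 2))
    )
    return 1.0 + cumulative

y = np.ones_like(x)  # start from the constant function y_0(x) = 1
for _ in range(30):
    y = picard_step(y)

print(np.max(np.abs(y - np.exp(x))))  # tiny: the iterates converge to e^x
```

Each application of T reproduces one more Taylor term of $e^x$, which is the fixed-point iteration $x_{n+1}= f(x_n)$ from the proof, played out in a function space.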

12. Feb 21, 2009

### elenamk

Thank you a lot, HallsofIvy
I tried to prove it myself, so I already had a proof, and now that I have compared it with yours I can see that I did it quite well :)
Thanks...