# Existence and uniqueness theorem for solutions of an ODE

1. Sep 14, 2007

### O.J.

Could you please explain the theorem intuitively and provide a proof of it? I understand how to apply it, but I want to understand the logic behind it.

2. Sep 14, 2007

### HallsofIvy

Staff Emeritus
The standard proof is based on two things: the Banach "fixed point" theorem, and thinking of a set of functions as a "complete metric space".

A "contraction map" is, roughly, a function f from A to itself such that, for any pair of points x and y in A, the distance from f(x) to f(y) is less than the distance from x to y. In precise terms, using d(a,b) as the distance between a and b, $d(f(x),f(y))\le c\, d(x,y)$ where c is a number strictly less than 1. Applying the function f to points "contracts" the distance between them and so "contracts" all of A. One way of thinking about it is this: apply the function to every point in A and the result is a slightly smaller subset of A. Apply again and you get a still smaller set. Keep doing that and, in the limit, A is reduced to a single point. You can then show that, for that point x, f(x)= x: the "fixed point".
Now think of picking any single point x in A. Applying f repeatedly gives f(x), f(f(x)), etc., lying inside the decreasing sets f(A), f(f(A)), etc. Since those sets shrink down to a single point, the sequence x, f(x), f(f(x)), etc. must converge to that point. That's the idea of the standard proof of Banach's fixed point theorem. Choose any point x in A and form the sequence x, f(x), f(f(x)), f(f(f(x))), etc. Use the "contraction" property of f to show that this is a Cauchy sequence. Since we are in a "complete" space, all Cauchy sequences converge, so this sequence converges to some a. Since you have already proved that contraction maps are continuous, applying f to a is the same as applying f to each point in the sequence and taking the limit. But that just gives f(x), f(f(x)), f(f(f(x))), f(f(f(f(x)))), ... — the same sequence apart from its first term, so it still converges to the same limit: f(a)= a.
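The iteration described above can be watched numerically. A minimal sketch (my own choice of example, not from the thread): f(x) = cos(x) is a contraction on [0, 1], since |f'(x)| = |sin(x)| ≤ sin(1) < 1 there, so repeated application converges to the unique fixed point.

```python
import math

def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Form the sequence x, f(x), f(f(x)), ... from the proof of
    Banach's theorem and stop when consecutive terms agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge")

# cos is a contraction on [0, 1], so the iteration converges to the
# unique point a with cos(a) = a, regardless of the starting point.
a = iterate_to_fixed_point(math.cos, 0.5)
```

Starting from any other point in [0, 1] gives the same limit, which is exactly the uniqueness half of the fixed point theorem.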

That's the Banach fixed point theorem.

Now suppose we have the differential equation problem dy/dx= f(x,y), y(x0)= y0. If we KNEW y, we could integrate both sides and get
$$y(x)= y_0+ \int_{x_0}^x f(t,y(t))\,dt.$$
Of course we don't know y, but the point is that any y(x) that satisfies one equation must satisfy the other. The solutions to the differential equation are exactly the same as the solutions to the integral equation. We can prove that the solution to the differential equation problem exists and is unique by showing that the solution to the integral equation exists and is unique. That's important because integrals are "better behaved" than derivatives. If I take a differentiable function and differentiate it, the result may not be differentiable. (Example: y= x^2 if $x\ge 0$, y= -x^2 if x< 0. The derivative of that exists for all x and is y'= 2|x|. But of course, |x| is not differentiable at x=0.) On the other hand, if f is an integrable function, its integral is also integrable (in fact, it's "smoother": f may not be continuous, but its integral is). That means we can start with a set of functions and apply the integral over and over again.

Lipschitz property: a function f is said to satisfy a Lipschitz property on a set A if and only if $|f(x)- f(y)|\le C|x-y|$ for some positive number C (not necessarily less than 1). It is easy to show that if f is "Lipschitz" on a set, it is continuous at each point of the set. Also, if f is differentiable with bounded derivative on the set, you can use the mean value theorem to show it is "Lipschitz" on that set. However, there exist Lipschitz functions that are not differentiable, and continuous functions that are not Lipschitz.

Now suppose f(x,y) in the differential equation above is continuous and "Lipschitz in y" on some neighborhood of (x0,y0). We convert from the differential equation dy/dx= f(x,y) to the corresponding integral equation
$$y(x)= y_0+ \int_{x_0}^x f(t,y(t))\,dt$$
and use that to define the "operator"
$$F(y)= y_0+ \int_{x_0}^x f(t,y(t))dt$$
For each function y, F(y) gives a function. We reduce from "neighborhood of (x0,y0)" to a rectangle containing that point (we can always do that). Use the continuity of f to show that F maps some set of functions on the rectangle to itself (we may need to reduce the size of the rectangle to do that). Use the "Lipschitz" property to show that F is a "contraction map". Again, we may need to reduce the rectangle to do that, but still we have F a contraction map from that set of functions to itself.
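The Lipschitz condition discussed above can be probed numerically by computing difference quotients over sample pairs; a small sketch (the sample points and the two test functions are my own choices). |x| is Lipschitz with constant 1 but not differentiable at 0, while sqrt(x) is continuous but not Lipschitz near 0, so its quotients blow up:

```python
def max_quotient(f, xs):
    """Largest |f(x)-f(y)|/|x-y| over all sampled pairs: a lower
    bound for any Lipschitz constant of f on the sampled set."""
    q = 0.0
    for i, x in enumerate(xs):
        for y in xs[i + 1:]:
            q = max(q, abs(f(x) - f(y)) / abs(x - y))
    return q

xs = [k / 1000 for k in range(1, 1001)]      # sample points in (0, 1]
q_abs = max_quotient(abs, xs)                # |x|: quotients never exceed 1
q_sqrt = max_quotient(lambda x: x ** 0.5, xs)  # sqrt: quotients grow near 0
```

Refining the sample toward 0 makes q_sqrt grow without bound, which is exactly the failure of the Lipschitz property for sqrt at 0.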

Now, it is known that the set of continuous functions on a closed rectangle, with the "sup" (uniform) metric, forms a complete metric space. We apply the Banach fixed point theorem to show that there exists a unique function y such that F(y)= y. That is, there exists a unique y such that
$$y(x)= y_0+ \int_{x_0}^x f(t,y(t))\,dt$$
Since the integral equation has a unique solution (in some, perhaps small, interval about x0), it follows that the differential equation has a unique solution.
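The operator F can also be iterated numerically on a grid, with the integral approximated by the trapezoid rule. A sketch, using dy/dx = y, y(0) = 1 on [0, 1] as the example (my own choice of grid size and iteration count; the true fixed point is y = e^x):

```python
def picard_step(f, y0, xs, ys):
    """One application of F(y)(x) = y0 + integral from xs[0] to x of
    f(t, y(t)) dt, with the integral done by the trapezoid rule."""
    out = [y0]
    total = 0.0
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        total += 0.5 * h * (f(xs[i - 1], ys[i - 1]) + f(xs[i], ys[i]))
        out.append(y0 + total)
    return out

n = 200
xs = [i / n for i in range(n + 1)]   # grid on [0, 1]
ys = [1.0] * len(xs)                 # initial guess: the constant y = y0
for _ in range(30):                  # iterate F; iterates converge uniformly
    ys = picard_step(lambda x, y: y, 1.0, xs, ys)
```

After the iterations, ys approximates e^x on the grid, so ys[-1] is close to e, up to the trapezoid-rule discretization error.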

3. Sep 15, 2007

### O.J.

It seems the proof relies on some concepts I haven't covered yet and that sound vague to me. Can you give me an intuitive explanation? Or perhaps a simpler proof?

4. Sep 15, 2007

### Chris Hillman

Disentangling Several Major Themes in Analysis

Actually, Halls gave a wonderful and clear summary, but he might have been going a bit fast for you. There are really several big ideas here:

1. Function spaces and metrics on same are useful for many many things, so read about them in a good book.

2. Fixed point theorems for mappings on function spaces are extremely powerful and can be used to do all kinds of things, so read about them in a good book.

3. An important trick which is useful in many contexts: we can reformulate differential equations as integral equations. (Note that integrals against a kernel tend to smooth out small errors in the integrand, whereas derivatives tend to amplify errors.)

4. A typical combination of these ideas is the proof outlined by Halls. (If you think this proof is hard, well, other proofs are likely to be much harder and more complex--- fixed point theorems are virtually black magic for existence and uniqueness proofs.)

5. Sep 15, 2007

### HallsofIvy

Staff Emeritus
A very good exercise in the ideas used above is to solve an equation like dy/dx= y, y(0)= 1, by "Picard's method" of successive approximations. You start by approximating y by a constant. Since the only value we know for y is y(0)= 1, that's a good choice! Setting y= 1 on the right-hand side, the equation becomes dy/dx= 1, which we can integrate to get y= x+ C. Of course, to satisfy y(0)= 1, we must have C= 1: now we have y= x+ 1. Repeat: solve dy/dx= x+ 1 to get y= (1/2)x^2+ x+ C and, clearly, y(0)= 1 requires that C= 1. We have y= (1/2)x^2+ x+ 1. Repeat: solve dy/dx= (1/2)x^2+ x+ 1. Integrating gives y= (1/6)x^3+ (1/2)x^2+ x+ C. Again, we must have C= 1 to satisfy y(0)= 1: y= (1/6)x^3+ (1/2)x^2+ x+ 1. Do you see what is happening? We are getting more and more terms of the Taylor series for e^x, which is, of course, the solution to the problem!
That is precisely the sequence x, f(x), f(f(x)), ... that I referred to before.
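The successive approximations in the example above can be carried out mechanically by storing each polynomial as a list of coefficients (a small sketch; the coefficient-list representation is my own choice):

```python
def integrate_poly(coeffs):
    """Antiderivative of sum(coeffs[k] * x**k), with zero constant term."""
    return [0.0] + [c / (k + 1) for k, c in enumerate(coeffs)]

# dy/dx = y, y(0) = 1: each step integrates the current polynomial
# and resets the constant term to 1 to satisfy the initial condition.
y = [1.0]                  # start with the constant approximation y = 1
for _ in range(5):
    y = integrate_poly(y)
    y[0] = 1.0             # enforce y(0) = 1
```

After n steps the coefficients are 1, 1, 1/2!, 1/3!, ..., 1/n!, i.e. the degree-n Taylor polynomial of e^x, matching the hand computation in the post.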

Yes, you are right that this contains things that may be too advanced for you. That's why introductory, first courses in differential equations typically do NOT give the proof of the "existence and uniqueness" theorem but just state it (and typically require that f(x,y) be continuous and differentiable with respect to y; that's simpler and stronger than "Lipschitz": it is a sufficient condition for uniqueness but not a necessary one).

Last edited: Sep 16, 2007