Start with the basics first: [tex]\frac{dy}{dx}=f(x,y)[/tex] A unique solution of this equation about the point [itex](x_0,y_0)[/itex] exists if f(x,y) and its partial derivative with respect to y are continuous about the point. So: [tex]\frac{dy}{dx}=\frac{1}{x}[/tex] would fail this criterion along the y-axis, since f is discontinuous at x=0. That is, any initial condition (0,y) would not be guaranteed to lead to a unique solution. How about: [tex]\frac{dy}{dx}=\frac{1}{1+y}[/tex] Well, taking the partial with respect to y gives [itex]\frac{-1}{(1+y)^2}[/itex], which is not continuous at y=-1, so again any initial condition of the form (x,-1) would not be guaranteed to lead to a unique solution; in fact, the IVP with y(0)=-1 leads to 2 solutions. The same goes for higher-order ODEs: just break one up into a simultaneous first-order system and apply this rule to the individual parts. What about: [tex]ty^{''}+(t-1)y^{'}+y=t^2;\quad y(0)=0,\quad y^{'}(0)=0[/tex] Come up with any ODE with a form that fails this criterion and there will be a problem with existence or uniqueness. Or are you having trouble with the specific details of the proof? Rainville and Bedient, "Elementary Differential Equations", is the reference I've used to study the proof and one I think is nice to follow.
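To see those 2 solutions concretely: separating variables in dy/dx = 1/(1+y) gives (1+y)^2/2 = x + C, and y(0)=-1 forces C = 0, so both y = -1 + √(2x) and y = -1 - √(2x) pass through (0,-1). Here is a quick numerical check (a sketch in Python; the function names are my own):

```python
import math

# The IVP dy/dx = 1/(1+y), y(0) = -1 has two solutions,
# found by separating variables: (1+y)^2/2 = x, so y = -1 ± sqrt(2x).
def y_plus(x):  return -1 + math.sqrt(2 * x)
def y_minus(x): return -1 - math.sqrt(2 * x)

def rhs(y):
    """Right-hand side f(x, y) = 1/(1+y)."""
    return 1.0 / (1.0 + y)

def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Both branches satisfy the ODE away from x = 0 ...
for x in [0.5, 1.0, 2.0]:
    assert abs(deriv(y_plus, x)  - rhs(y_plus(x)))  < 1e-4
    assert abs(deriv(y_minus, x) - rhs(y_minus(x))) < 1e-4

# ... and both satisfy the same initial condition y(0) = -1,
# so uniqueness genuinely fails at (0, -1).
assert y_plus(0) == -1 and y_minus(0) == -1
```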
Thanks for the amazing clarification, Salty, but my doubt is this: I want to know why the continuity of F(x,y) and Fy(x,y) ensures uniqueness!
Well, I'll tell you what, Heman. It's best to carefully go through the proof. But in general, continuity on a closed, bounded region implies that the function is bounded there. And as you go through the proof, an expression is constructed which is bounded by a particular series that is shown to converge. Later in the proof, a sequence of those expressions converges to the solution of the ODE, with convergence guaranteed by the convergence of the series which bounds it.
Salty, can you give me an online link on differential equations which stresses proofs? Actually, my textbook, Kreyzig, sucks!
Actually, continuity of F_{y}(x,y) is not necessary. What is necessary for uniqueness is that F be "Lipschitz" in y. A function of one variable is Lipschitz (on a set) if and only if [itex]|f(x)-f(y)|\leq C|x-y|[/itex] for some constant C and all x, y in the set. If a function has a continuous derivative on a closed, bounded set, then you can use the mean value theorem to show that it must be Lipschitz on that set, so "continuous derivative" is a sufficient condition. The basic idea of the "existence and uniqueness" theorem is this: Show that the initial value problem [tex]\frac{dy}{dx}= F(x, y),\quad y(x_0)= y_0[/tex] has a unique solution if and only if the integral equation [tex]y(x)= \int_{x_0}^x F(t,y(t))dt+ y_0[/tex] has a unique solution. Define the operator [tex]A(y)= y_0+ \int_{x_0}^x F(t,y(t))dt[/tex] from the set of functions continuous on some interval containing x_{0} to itself. Clearly, the integral equation has a unique solution if and only if that operator has a unique "fixed point". You then use the continuity and Lipschitz property of F to show that, if necessary limiting the domain in R^{2}, the operator is a "contraction operator", and so the Banach fixed point theorem can be applied.
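You can actually watch the fixed-point idea work numerically: repeatedly applying the operator A (Picard iteration) drives a starting guess toward the unique solution. A sketch in Python, using the illustrative IVP dy/dx = y, y(0) = 1 (my own example, not one from this thread), whose unique solution is e^x:

```python
import math

# Picard iteration: repeatedly apply A(y)(x) = y0 + integral from x0 to x
# of F(t, y(t)) dt.  Here F(x, y) = y, x0 = 0, y0 = 1, solution exp(x).
N = 200
xs = [i * 1.0 / N for i in range(N + 1)]   # grid on [0, 1]

def apply_A(y_vals):
    """One application of the integral operator, via the trapezoid rule."""
    out = [1.0]                            # y0 = 1
    for i in range(1, len(xs)):
        h = xs[i] - xs[i - 1]
        out.append(out[-1] + 0.5 * h * (y_vals[i - 1] + y_vals[i]))
    return out

def sup_dist(u, v):
    """The metric d(u, v) = max |u(x) - v(x)| over the grid."""
    return max(abs(a - b) for a, b in zip(u, v))

y = [1.0] * len(xs)                        # y_0: constant initial guess
exact = [math.exp(x) for x in xs]
errors = []
for _ in range(8):
    y = apply_A(y)                         # y_{n+1} = A(y_n)
    errors.append(sup_dist(y, exact))

# The sup-norm distance to the true solution shrinks at every step:
# the iterates converge to the unique fixed point of A.
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))
```

(Each iterate here is a partial sum of the Taylor series for e^x: 1, 1+x, 1+x+x²/2, and so on, which is exactly the bounding series Salty mentioned.)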
Hall, thanks. Is that a typo? Should it read: [tex]|f(x)-f(y)|\leq K|x-y|[/tex] for some value K, thus ensuring that the change in f is bounded by a multiple of the change in x?
Heman, really, I think "textbook in hand" is the only way to learn the proof. Go through each line carefully and make sure you understand every detail. Draw pictures, go back and review all the theorems that are quoted, and do examples. Also, I have Kreyzig. It's a good book for engineers but not for mathematicians. God, I hope I don't get in trouble with the engineers in here for saying that. :surprised
You can also think about other existence theorems like Lax-Milgram, Hille-Yosida, Lumer-Phillips, ...
YOU ARE IN TROUBLE!!! I LOVE Kreyzig... lol. I still use that book as a reference for O.D.E's and Linear Algebra stuff. Then again, I am an engineer. :rofl:
Can anyone explain the Lipschitz condition, with an example, and how it is related to existence and uniqueness of solutions? I am stuck... help me.
I too require further explanation of the Lipschitz condition. Is it not that we prove uniqueness by assuming there are two solutions y_{1}(x) and y_{2}(x) and then showing that y_{1}=y_{2}? But I do not know how they go about proving the existence of a solution.
Can someone give a link to the proof of this existence and uniqueness theorem, and also a link on differential equations which covers singular points, i.e., covering a PG syllabus?
I also need a proof of this uniqueness and existence theorem!!! I have to use the Banach Fixed point theorem....
The existence and uniqueness theorem says: If f(x,y) is continuous in both variables in some neighborhood of (x_{0},y_{0}) and satisfies the Lipschitz property in the variable y in that neighborhood, then there exists some interval about x_{0} such that there is a unique solution to the differential equation dy/dx= f(x,y) on that interval satisfying y(x_{0})= y_{0}. I explained what "Lipschitz" means earlier in this thread. Yes, you should use the Banach fixed point theorem. You would need to know (but probably will not be required to prove) that the set of all continuous functions on a given interval, with the metric (distance function) d(u, v)= max|u(x)- v(x)|, where the max is taken over all points in the interval, is a "complete metric space" in order to apply the Banach fixed point theorem. Since f(x,y) satisfies those properties on some neighborhood of (x_{0}, y_{0}), there exists some rectangle [itex]\{(x, y):\ |x- x_0|< \delta,\ |y- y_0|< \epsilon\}[/itex] on which f satisfies those properties. Now consider the set of functions continuous on the interval |x- x_{0}|< δ, with the "metric" defined above. In order to use the Banach fixed point theorem you would need to prove that the operator [tex]A(y)= y_0+ \int_{x_0}^x f(t, y(t))dt[/tex] maps that set of functions into itself. In fact, it doesn't! But you can reduce [itex]\delta[/itex], reducing the possible values for x, so that it does. Then, since the Banach fixed point theorem requires a "contraction map", you need to prove that this operator is a contraction. That is, if y_{1} and y_{2} are two functions continuous on the (smaller) interval, then, for some c< 1, [tex]\max\left|\left(y_0+ \int_{x_0}^x f(t,y_1(t))dt\right)- \left(y_0+ \int_{x_0}^x f(t,y_2(t))dt\right)\right|[/tex] [tex]= \max\left|\int_{x_0}^x \left(f(t,y_1(t))- f(t,y_2(t))\right)dt\right|\le c\,\max\left|y_1(x)- y_2(x)\right|[/tex] where the maximum is taken over all x values in the interval. Again, this is NOT generally true!
You will have to again reduce the [itex]\delta[/itex] value so that it is. Once you have done that, you can appeal to the Banach fixed point theorem to assert that there is a unique function y(x), on that interval, such that [tex]y(x)= \int_{x_0}^x f(t, y(t))dt+ y_0[/tex] and so dy/dx= f(x,y), y(x_{0})= y_{0}.
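The role of shrinking δ can also be seen numerically: if f is Lipschitz in y with constant L, the operator changes the sup-distance between two functions by at most a factor of Lδ, so any δ < 1/L gives a contraction. A rough numerical illustration (my own choice of f(x,y) = sin(y), which has L = 1; the test functions are arbitrary, and this is a sketch, not the proof):

```python
import math

# For f(x, y) = sin(y), Lipschitz in y with constant L = 1, the operator
# A(y)(x) = y0 + integral from 0 to x of sin(y(t)) dt satisfies
#     max|A(y1) - A(y2)|  <=  L*delta * max|y1 - y2|   on |x| <= delta,
# so any delta < 1/L makes A a contraction.
def A_vals(y_fn, delta, n=400):
    """Evaluate A(y) on a grid over [0, delta] by the trapezoid rule (y0 = 0)."""
    xs = [i * delta / n for i in range(n + 1)]
    out = [0.0]
    for i in range(1, n + 1):
        h = xs[i] - xs[i - 1]
        out.append(out[-1] + 0.5 * h * (math.sin(y_fn(xs[i - 1]))
                                        + math.sin(y_fn(xs[i]))))
    return xs, out

y1 = math.cos                 # two arbitrary continuous test functions
y2 = lambda x: x * x

for delta in [0.9, 0.5, 0.1]:
    xs, a1 = A_vals(y1, delta)
    _,  a2 = A_vals(y2, delta)
    lhs = max(abs(u - v) for u, v in zip(a1, a2))
    rhs = delta * max(abs(y1(x) - y2(x)) for x in xs)
    # Contraction with constant L*delta = delta < 1:
    assert lhs <= rhs + 1e-9
```

Shrinking δ shrinks the contraction constant, which is exactly why the proof keeps restricting the interval.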