Intro. to Differential Equations

In summary, this thread was created for people interested in differential equations; the original poster is a student of the subject and draws excerpts and problems from "Elementary Differential Equations and Boundary Value Problems," Seventh Edition, by William E. Boyce and Richard C. DiPrima. The discussion covers the classification of differential equations into ordinary and partial, the distinction between linear and nonlinear equations, how to typeset math symbols on the forum, and how to solve first-order linear equations with variable coefficients using integrating factors, with worked examples.
  • #71
mathwonk said:
Boy, am I ticked. I just lost a post that I had been working on for over an hour, about o.d.e.'s from the big picture, various books and their different characteristics, the essential ingredients of a good d.e. course, etc. When I tried to post it the computer said I was not logged in, but when I logged in my post was gone.
This is not the first time this has happened to me.
Well, good luck for you, bad luck for me.

When you type up a big post, copy* it before you submit it. I've been burned too many times before to ever let a huge post of mine disappear because of some bad message board voodoo!

*Select your whole post and press CTRL+C, in case you didn't know.
 
  • #72
It happened again, but I SAVED! Here is today's post:
I am starting to teach o.d.e. and the first thing I am going to do tomorrow (the 2nd day of class) is try to explain why the sort of thing appearing in post #3 here, apparently taken from Boyce and DiPrima (and not to be blamed on the student trying to learn it; my apologies to that student), is completely meaningless nonsense.
I.e. solving the d.e. "dy/dt = g(t)" may or may not be possible, depending on the nature of g.
In particular it makes no sense at all to simply write
"indefinite integral of g = y", and claim to have solved the problem, since the notation "indefinite integral of g" stands for any function whose derivative is g, provided one exists.
So one has made no progress at all in solving the problem in writing this, as one is merely restating it in different notation. Books which say this sort of thing drive me nuts, as they give young students entirely the wrong idea as to what a function is, and what it means to "solve" a d.e.
Assuming "solving" an equation means "finding" a function that satisfies it, what does it mean to "find" such a function? To call out its name? If so, then I have solved dy/dx = 1/x by saying the magic words "natural log". Unfortunately I do not know the value of this function even at the number t = 3.
So I really do not know much about this solution except that it exists.
Actually I don't even know that, I am only claiming it exists, since most students probably cannot really prove the natural log function does exist, and satisfies this equation. But I have been told it does, so I say "natural log" solves this equation.
But wouldn't it make more sense to say I have "found" a function, if I can actually tell you some of its values, or even an arbitrary value, at least to any desired degree of accuracy?
The way to do this with ln(t) is actually to approximate the area under the curve of y = 1/t. Defining ln as that area function at least shows it exists, but I still ought to prove that area function is both differentiable and solves the equation.
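For instance, here is a quick way to actually "find" ln(3) to any desired accuracy by approximating that area numerically (a sketch of my own, plain Python, trapezoid rule; not taken from any book):

[code]
# Approximate ln(3) as the area under y = 1/t from t = 1 to t = 3,
# using the trapezoid rule on n subintervals.
import math

def area_of_reciprocal(b, n=100000):
    a = 1.0
    h = (b - a) / n
    total = 0.5 * (1.0 / a + 1.0 / b)     # endpoint contributions
    for k in range(1, n):
        total += 1.0 / (a + k * h)        # interior sample points
    return total * h

print(area_of_reciprocal(3.0))   # approximately 1.0986122886...
print(math.log(3.0))             # 1.0986122886681098, for comparison
[/code]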
So for some g an antiderivative function exists and for others it does not. For example it does exist for g(t) = 1/t, or g(t) = e^(t^2), but does not exist for g(t) = 0 for t < 0 and g(t) = 1 for t >= 0.
The usual sufficient criterion for such an antiderivative to exist is that g be continuous. But these hypotheses are nowhere mentioned, before writing "indefinite integral of g".
Indeed it is not that the integral of 1/t exists because ln(t) exists; rather it is the other way around: ln(t) exists for all t > 0 because the area function of 1/t exists, because 1/t is continuous.
So the area function (definite integral) is a machine for making differentiable functions out of continuous ones. Every now and then we will learn that some area function equals some other function we have met in a different situation, but so what?
The ones we have met before are no better solutions than the ones we have not. We just understand those better. So books that say "well, we can solve the d.e. dy/dt = 1/t, but not dy/dt = e^(t^2), because e^(t^2) cannot be integrated" are lying, and promulgating a false understanding of d.e.'s, functions, and their solutions.
More correct would be to say: every continuous integrand leads us to a differentiable area function, and to a potentially new and interesting function. A few of these we have met before and given names to.
But in general the integral (area function of) a function is more complicated than the original function, and many of them we have not yet had time to name.
E.g. the integrand g(t) = 1 has area function t + c, and the integrand g(t) = t^r has area function (1/(r+1)) t^(r+1), except when this makes no sense, namely when r = -1.
In that case the area function (starting at t=1) of 1/t is a function which we shall call ln(t). It happens to be inverse to an exponential function e^t with a base "e" which we would never have met if we were not solving this d.e. But otherwise it resembles exponential functions like 2^t which of course we previously only defined for rational t.
Now this lets us also name the area functions for all fractions, since by partial fractions (using complex numbers) all fractions have integrals which reduce to sums of ones like 1/(t-a), and these are also all natural logs. So spending hours and weeks integrating more and more examples of fractions is just repeating the same thing over and over, or if you like, it is practicing algebra. But once you have integrated 1/(t-c) you are not learning any more about integration by integrating 1/(any polynomial in t).
Now let's try some really new integrands (studied by the Bernoullis in the 1600's?). Let's put a square root in the denominator.
E.g. let g(t) = 1/sqrt(1-t^2). This is continuous within the interval -1 < t < 1, so has a differentiable area function there, which turns out to be the arclength function for a circle, and it already has a name, "arcsin", because arclength for circles came up long ago (Euclid, ca. 300 BC?).
Moving on we try perhaps 1/sqrt(1-t^3), also continuous on (-1,1), so it too has a nice differentiable area function, but most of us do not know any name for it. But Weierstrass studied it in the 1800's, and it is called an elliptic integral because it comes up in trying to measure arclength on a lemniscate? Wait a minute, for this story to be any good, it should be arclength on an ellipse. Well, maybe it comes up there too.
Now shall we say this integral does not exist? Or that we have not solved the d.e. dy/dt = sqrt(1-y^3), just because we have not yet chosen a name for this function?
That would be absurd. So we call it maybe "frank", as one of my friends says.
Anyway, it should be obvious that all the integrands 1/sqrt(1-t^n) do have differentiable area functions, and all deserve names, but we only name the ones we need in our own problems.
Frank turns out to have an inverse function which is periodic like sin and cos, but even better, it is not just singly periodic but doubly periodic in the complex plane. These functions are really wonderful, and not only do they exist, they played the main role in Wiles's solution of Fermat's last theorem.
So one should never give students the idea that those d.e.'s whose solutions are functions we have not named yet, or whose names we have not heard yet, somehow are not solved, or even not solvable, and yet this is precisely the impression I get from some books.
Solving an equation means producing a function that solves it. Producing a function means defining it, not just hollering out its name (Oh yeah, I know the solution of that equation, it's Harold, or is it Maude? Is that any worse than a student saying the solution of dy/dt = 1/(1+t^2) is arctan, but the student does not know one single value of arctan?).

Defining it can be done by a lot of different processes, the usual ones being "taking the area function of a given function", i.e. really integrating it, or inverting a given function.
Now here is the first lesson of o.d.e.:
If g(t) is any continuous function on an interval I, then the area function of g (Riemann's definite integral of g from a to t) is a differentiable function on I which solves the d.e. dy/dt = g(t).
Second lesson of o.d.e.:
If g(t) is any continuous function that is never zero on an interval I, then the area function of
1/g(t) is an invertible differentiable function on I, whose inverse function solves the o.d.e. dy/dt = g(y). (That's right the letter in g is y not t, and the g moved from the denominator to the numerator, because that's what happens to the derivative when you take an inverse function.)
This method is usually called "separation of variables", or just "integration".
I.e. the "monkey see monkey do" way of solving the d.e. dy/dt = g(y)
is to multiply by dt and divide by g(y) and get dy/g(y) = dt, thus "separating the variables".
Then, step one, "integrate" both sides, to get G(y) = t+c. (*)
Then step 2: invert the function G (which has an inverse because its derivative 1/g was assumed to be nowhere zero), getting a function H,
and
step 3): apply H to both sides of (*) getting: y = H(t+c).
Now none of this makes any sense, unless you understand how the two processes, taking area function, and inverting a function, do in fact transform an appropriate differentiable function, i.e. one satisfying certain hypotheses, into another differentiable function, namely the solution.
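Here is a bare-bones numerical version of those three steps (my own sketch, assuming NumPy; the example g(y) = 1 + y^2 with y(0) = 0 is chosen because the inverse already has a name, tan):

[code]
# Solve dy/dt = g(y) with g(y) = 1 + y^2, y(0) = 0, by "separation of variables":
# Step 1: build the area function G(y) = integral from 0 to y of ds / g(s).
# Steps 2/3: invert G numerically; the inverse H solves the o.d.e., and here it
# should agree with the named solution tan(t).
import numpy as np

y_grid = np.linspace(0.0, 10.0, 20001)
integrand = 1.0 / (1.0 + y_grid**2)
# cumulative trapezoid sum: G[i] is the area of 1/g from 0 up to y_grid[i]
G = np.concatenate(([0.0],
                    np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y_grid))))

t = 1.0                              # a sample time inside the domain
y_at_t = np.interp(t, G, y_grid)     # invert G: find the y with G(y) = t
print(y_at_t, np.tan(t))             # both approximately 1.5574
[/code]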
OK, I reiterate: the student learning this material, or trying to, from the usual books is not to blame for the confusion the books spew everywhere, but is actually the victim.
My point is: one has NOT solved an o.d.e. simply by saying "the name of the solution is frank", but by knowing why the solution exists, what its domain is, what properties it has there, and how to approximate its values there, and sketch the graph.
E.g. one can do all of these things for the solution of dy/dt = e^(t^2), so it is quite false for books like the one I was reading today to say "one cannot integrate this d.e. directly".
OK, end of rant. I am just finding out why I never understood this subject, as the books on introductory d.e. are probably the worst in all of elementary mathematics. It is very hard to find a decent, correct explanation of d.e. The best I have found is the book of V.I. Arnol'd, and that is pretty dense going. I admit also to having learned something from Blanchard, Devaney and Hall, but it is very wordy and there is no theory at all.
Most of the rest are cookbooks of the worst sort, teaching you to spout out the usual names of the solutions to the simplest possible equations,
i.e. ones like y''' + 3y'' + 3y' + y = 0, which are just chosen because their solutions are all (complex) exponentials (i.e. cosines and sines and exponentials).
A d.e. is a racecourse, with speed signs all over, and a solution is a driver in a fast, well handling car, navigating the course at the right speed at every instant, and in the right direction.
 
Last edited:
  • #73
By the way, there is essentially nothing to solving all d.e.'s of the form:

dy/dt = P(y) where P is any polynomial, since these are all solved by first integrating 1/P(t), and then inverting the answer. (i.e. separating variables).
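For a concrete instance (my own example, assuming SciPy is available), take P(y) = y(1 - y): integrating 1/(y(1 - y)) gives ln(y/(1 - y)) = t + c, and inverting gives the logistic function, which a numerical solver confirms:

[code]
# dy/dt = P(y) with P(y) = y(1 - y) and y(0) = 1/2.
# Separating variables: ln(y/(1-y)) = t + c with c = 0, so y(t) = 1/(1 + e^(-t)).
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, y: y * (1 - y), (0.0, 5.0), [0.5],
                t_eval=np.linspace(0.0, 5.0, 6), rtol=1e-9, atol=1e-12)
closed_form = 1.0 / (1.0 + np.exp(-sol.t))
print(np.max(np.abs(sol.y[0] - closed_form)))   # essentially zero
[/code]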

Similarly, all equations of the form a y^(n) + b y^(n-1) + c y^(n-2) + ... + e y = 0 are compositions of the equation dy/dt = ay, so they are all solved by the same idea as for solving that one. In fact they can be considered as that same equation in the form dY/dt = AY, where Y is a vector and A is a matrix, and then the solution is actually just Y(t) = e^(At) Y(0), where e^(At) is defined by the same power series as e^(at).

I guess I could call this the 3rd lesson of o.d.e.

In spite of the fact that these are almost all the same equation, namely the world's easiest one: dy/dt = a(y-c),

one can spend a whole semester on them, and many books spend hundreds of pages on them.
 
  • #74
OK, I just read the whole thread and it looks really nice; my compliments to ExtravagantDreams. You clearly know more about d.e.'s than I do.

Still, I am bold enough to offer my views.

I liked post #20, where the existence theorem is discussed. This is really important, since it shows how to solve ALL such d.e.'s: namely, you find a way of approximating a solution, then you find a process of making the approximations better, then you prove that in the limit they converge to an exact solution.


This is actually useful when needing only an approximate solution, i.e. in real life.


Then post #39 interested me, since it shows the limitations of this book's approach, namely not explaining what is going on and just telling the student to try this and that.


The point is that: 1) the non-homogeneous equation can always be solved by varying the solution of the homogeneous equation.

Then they are limiting their study to those homogeneous equations which are basically just variations on dy/dt = y, so the solutions are always exponentials and sines and cosines, which we know are the same as (complex) exponentials.


But then the theory of even these equations is missing:

Look at this equation: y'' - 3y' - 4y = sin(t).


The point is to look at the left-hand side as a linear operator on functions, just like in linear algebra, i.e. L(y) = y'' - 3y' - 4y. Now we want a y such that L(y) = sin(t).


First of all, we should solve the equation L(y) = 0, to find the kernel of the linear operator.

Now, to do this correctly, instead of just memorizing the characteristic-equation approach, one should factor the linear operator L as

L = MoN where M,N are simpler differential operators.

I.e. let D be the simplest differential operator of all, D(y) = y'.

Then let M = D+1, so M(y) = (D+1)y = y' + y.

And let N = (D-4) so N(y) = (D-4)y = y' -4y.

Then L(y) = (D+1)(D-4)y = (D^2 -3D -4)y = y'' - 3y' - 4y.

Thus if we compute the kernel of M, then the kernel of L = NoM is the inverse image of the kernel of M under the operator N.

I.e. we know that M(y) = (D+1)y = y' + y = 0 iff y' = -y, iff y = ce^(-t).


So the kernel of M is {ce^(-t)} for all c.

Then we need to know: which y's are such that Ny = ce^(-t)?


So solving the homogeneous equation Ly = 0 is the same as solving the non-homogeneous equation Ny = ce^(-t).


But N kills de^(4t), and it is not too hard to guess that N acts on e^(-t) to give something of the form e^(-t). Of course this is also guessing, but it is easier guessing.

I.e. we get N(e^(-t)) = (D-4)e^(-t) = -e^(-t) - 4e^(-t) = -5e^(-t),

so that can easily be scaled to give something that maps straight to e^(-t): namely N(-1/5 e^(-t)) = e^(-t).


So now, since the kernel of N is {de^(4t)} and N maps -1/5 e^(-t) to e^(-t), we are nearly done; in fact it is even easier than I made it:

N maps things of the form ae^(4t) + be^(-t) to things of the form ce^(-t).

Thus L = NoM kills things of the form ae^(4t) + be^(-t).


(maybe I got the order backwards, but it does not matter as NoM = MoN, i.e. these operators commute.)


Now this decomposition idea also makes the non homogeneous problem easier and more motivated.


I.e. now let's attack y'' - 3y' - 4y = sin(t).

Well, first we should look at the composition NoM(y) = N(M(y)) = sin(t): we need M(y) to be something that N takes to sin(t), so we need to solve

N(y) = sin(t). Now it becomes really clear why sin(t) does not work, because

N(sin(t)) = -4sin(t) + cos(t). So clearly you are going to need some sines and cosines to do this. This seems a more natural way to see why the solution should look like a sin(t) + b cos(t).

Of course, once we have an f such that (D-4)f = sin(t), we still need to find a g such that (D+1)g = f. This is the same process.

Now it looks as if the solving process is indeed shorter when done as in post #39. But thinking of it this way seems more natural to me anyway.

I.e. bring in some linear algebra.
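To see the factorization idea in code, here is a small SymPy check (my own sketch, assuming SymPy is installed) that (D+1)(D-4) kills ae^(4t) + be^(-t), and that dsolve reproduces the sine/cosine particular solution discussed above:

[code]
# Check the factorization L = (D+1)(D-4): it kills a*e^(4t) + b*e^(-t),
# and dsolve gives the particular solution of y'' - 3y' - 4y = sin(t).
import sympy as sp

t, a, b = sp.symbols('t a b')
y = sp.Function('y')
L = lambda f: sp.diff(f, t, 2) - 3*sp.diff(f, t) - 4*f

yh = a*sp.exp(4*t) + b*sp.exp(-t)
print(sp.simplify(L(yh)))                     # prints 0

sol = sp.dsolve(sp.Eq(L(y(t)), sp.sin(t)), y(t))
print(sol)   # C1*exp(-t) + C2*exp(4*t) + 3*cos(t)/34 - 5*sin(t)/34, up to arrangement
[/code]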
 
  • #75
By the way, Sharipov's post is to me a nice remark that this book being studied is just going over and over the same d.e., namely variations on dy/dt = y, so one is not getting too far, and it might be useful to branch out a bit.
I.e. what has been done here is to remark that linear equations can be solved by variation of parameters, provided the homogeneous one can be solved. Then attention is restricted to the constant-coefficient case, where the homogeneous one is always solved by exponentials.
One extension of that theory is a nice excursion into power series, by showing that with non-constant coefficients one can at least solve recursively for the power series solution of other linear equations.
The other idea is that in dealing with linear equations one can always solve for the sum of any functions one can already solve for. This makes sense also for infinite sums; hence one can solve for infinite sums of sines and cosines, i.e. exponentials. These are called Fourier series methods.
So basically all linear (constant-coefficient) d.e.'s are solved by exponentials.
The higher-order ones are no different if one uses matrix exponentials.
I.e. to solve y'' - 3y' - 4y = 0, look at it as a 2x2 system for a vector function of t, namely (x(t), y(t)) where x = y' and x' = y'' = 3y' + 4y = 3x + 4y.
So the linear system is (x', y') = (3x + 4y, x).
Thus our matrix A has first row [3, 4] and second row [1, 0]. The characteristic polynomial of A is (surprise, surprise) x^2 - 3x - 4, with eigenvalues x = -1, 4.
So we can diagonalize the matrix as A' = the diagonal matrix with -1, 4 on the main diagonal.
Then the exponential of the matrix tA' is just the diagonal matrix with
e^(-t) and e^(4t) on the diagonal. The eigenvectors are (1,-1) and (4,1), and the solution vectors of the system are linear combinations of (ce^(-t), -ce^(-t)) and (4de^(4t), de^(4t)).
In particular the solutions of the original equation, i.e. the y entry of such a linear combination, are of the form ae^(-t) + be^(4t).
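Here is that same computation done numerically (my own sketch, assuming NumPy and SciPy): build A, exponentiate tA, and check that the y-component of e^(tA)Y(0) really is the predicted combination of e^(-t) and e^(4t):

[code]
# The system (x, y)' = (3x + 4y, x) coming from y'' - 3y' - 4y = 0, with x = y'.
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 4.0],
              [1.0, 0.0]])
print(np.linalg.eigvals(A))            # eigenvalues -1 and 4, as expected

t = 0.7
Y0 = np.array([2.0, 1.0])              # initial data (x(0), y(0)) = (y'(0), y(0))
Y = expm(t * A) @ Y0                   # Y(t) = e^(tA) Y(0)

# Solve y(t) = a e^(-t) + b e^(4t) from y(0) = a + b = 1, y'(0) = -a + 4b = 2
a, b = np.linalg.solve(np.array([[1.0, 1.0], [-1.0, 4.0]]), np.array([1.0, 2.0]))
print(Y[1], a * np.exp(-t) + b * np.exp(4 * t))   # the two values agree
[/code]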
But what if we want to understand a pendulum, i.e. find y satisfying the equation
d^2y/dt^2 + sin(y) = 0?
We need some more tools, so we should teach phase-plane analysis, i.e. how to draw pictures of vector fields and pictures of the solution flows.
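A first pass at that (my own sketch, assuming NumPy and Matplotlib, and writing the pendulum as the system y' = v, v' = -sin(y)) is just a quiver plot of the vector field in the (y, y') plane:

[code]
# Phase-plane picture for the pendulum y'' + sin(y) = 0.
import numpy as np
import matplotlib.pyplot as plt

y, v = np.meshgrid(np.linspace(-2 * np.pi, 2 * np.pi, 25),
                   np.linspace(-3.0, 3.0, 25))
dy, dv = v, -np.sin(y)                 # the vector field (y', v')

plt.quiver(y, v, dy, dv, angles='xy')
plt.xlabel('y (angle)')
plt.ylabel("y' (angular velocity)")
plt.title('Pendulum phase plane')
plt.show()
[/code]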
 
Last edited:
  • #76
Another topic that is often omitted, but that Arnol'd clarifies, is that of the domain of the solution. I.e., if the manifold where the vector field is defined is compact, then the domain of the solution is all of R, so one gets a one-parameter flow on the whole manifold, i.e. any point can be flowed along a solution curve for time t, whatever t is. This adds much to the geometry of the subject.
 
  • #77
Guys, I was just reading Differential Equations by Ross.
Uniqueness for a first-order differential equation of the form y' = f(x,y) is said to be guaranteed when:
(i) the partial derivative of f(x,y) with respect to y is a continuous function of x and y over the same domain D on which f(x,y) is defined and continuous.
Can somebody explain how this condition guarantees uniqueness of the solution?
 
Last edited:
  • #78
total chaos said:
Guys, I was just reading Differential Equations by Ross.
Uniqueness for a first-order differential equation of the form y' = f(x,y) is said to be guaranteed when:
(i) the partial derivative of f(x,y) with respect to y is a continuous function of x and y over the same domain D on which f(x,y) is defined and continuous.
Can somebody explain how this condition guarantees uniqueness of the solution?

In order that the differential equation dy/dx = f(x,y) have a unique solution (in some region around x0) satisfying y(x0) = y0, it is sufficient but not necessary that f be differentiable with respect to y in some region around (x0, y0). It is sufficient that f(x,y) be "Lipschitz" in y: if (x, y1) and (x, y2) are two points in that region, then |f(x,y1) - f(x,y2)| <= C|y1 - y2| for some constant C. If a function has a continuous partial derivative with respect to y on a closed, bounded region, then you can use the mean value theorem to show that it is Lipschitz there, but there exist functions that are Lipschitz in a region but not differentiable there. Most elementary texts use the simpler "continuous partial derivative" condition as a sufficient condition and may or may not point out that it is not necessary.

Picard's proof of the existence and uniqueness of such a solution is long and deep. I won't post it here, but this is a link to a detailed explanation:

http://academic.gallaudet.edu/courses/MAT/MAT000Ivew.nsf/ID/918f9bc4dda7eb1c8525688700561c74/$file/Picard.PDF

Essentially, you replace the differential equation by the corresponding integral equation (dy/dx = f(x,y) with y(x0) = y0 if and only if [itex]y(x)= y_0 + \int_{x_0}^x f(s, y(s))\, ds[/itex]), then show that the integral operator defined by that equation satisfies the hypotheses of Banach's fixed point theorem.
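As a small illustration of that fixed-point idea (my own sketch, not taken from the linked notes), here are a few Picard iterates for dy/dx = y with y(0) = 1; they are visibly the Taylor partial sums of e^x:

[code]
# Picard iteration for dy/dx = y, y(0) = 1:
#   y_{n+1}(x) = 1 + integral from 0 to x of y_n(s) ds
import sympy as sp

x, s = sp.symbols('x s')
y = sp.Integer(1)                     # y_0(x) = 1
for n in range(4):
    y = 1 + sp.integrate(y.subs(x, s), (s, 0, x))
    print(y)                          # 1 + x, then 1 + x + x**2/2, and so on
[/code]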
 
Last edited by a moderator:
  • #79
Sorry for responding after such a long time; I was sick and hence away from my institute.
Thanks HallsofIvy, but I did not ask how to prove that continuity of the partial derivative of f with respect to y is a sufficient condition; rather, I wanted to know whether it is possible to EXPLAIN this requirement in simpler terms!
Let me explain: existence of a solution of a problem of the type given above is guaranteed if f(x,y) is a continuous function of x and y in some domain D. This can be explained thus: if f(x,y) were not continuous, it would not have been possible to integrate f(x,y) over the domain D (which is, after all, the method by which we find the solution).
What I'm seeking is some explanation of this nature!
Thanks in advance!
 
  • #80
total chaos said:
but I did not ask how to prove that continuity of the partial derivative of f with respect to y is a sufficient condition; rather, I wanted to know whether it is possible to EXPLAIN this requirement in simpler terms!
No!

Let me explain: existence of a solution of a problem of the type given above is guaranteed if f(x,y) is a continuous function of x and y in some domain D. This can be explained thus: if f(x,y) were not continuous, it would not have been possible to integrate f(x,y) over the domain D (which is, after all, the method by which we find the solution).
Why not? The fact that a function is not continuous does not mean it is not integrable. And even if that were true, it would show that continuity is a necessary condition, not a sufficient condition, which is what "solution is guaranteed" requires. If that's the kind of "explanation" you are looking for, I surely can't help you!
 
  • #81
Just Wondering How...

Hey Guys,

I have learned that the closest encounter of Mars and Earth so far happened on August 27, and it is estimated to occur again around the year 2287, if I am not mistaken. I am just wondering how the calculations were made employing the principles of differential equations. I am really curious about the accuracy or exactness of the date, or even just of the year.

Hoping for your prompt response.

DANDYBOY
 
  • #82
Hi to everyone.
I am a new user and happy to find this forum. I'm looking for a way to solve these equations using the Delphi programming language; I am a physics student and have a "use of computers in physics" course.
Thank you
 
  • #83
I just found this useful topic to use in my DE class. Does anyone know where I could find the solutions (the ones located on the first page of this topic)? The links are all too old or no longer available.
 
  • #84
hawaiidude said:
Hey, thanks... nice examples, clear and easy to understand... but here's a problem: when x = 0, 3x^2y'' - xy' + y = 0

I think what hawaiidude meant was to find the solution of the diff. eqn [tex]3x^2 \frac{d^2y}{dx^2} - x \frac{dy}{dx} + y = 0 [/tex] about the point x0 = 0. After looking through "Series Solution of 2nd Order Linear Equations" in post #47, written by ExtravagantDreams, I realize that the above diff. eqn can be solved easily. The solution will be of the form [tex]y = \sum_{n=0}^\infty a_n(x - x_0)^n [/tex].
Since xo = 0, y = [tex]\sum_{n=0}^\infty {a_n x^n} [/tex].
y' = [tex]\sum_{n=0}^\infty {a_n nx^{n-1}} [/tex].
y'' = [tex]\sum_{n=0}^\infty {a_n n(n-1)x^{n-2}} [/tex].
Now, substitute these expressions into the original diff. eqn:
[tex]3x^2 \sum_{n=0}^\infty {a_n n(n-1)x^{n-2}} - x \sum_{n=0}^\infty {a_n nx^{n-1}} + \sum_{n=0}^\infty {a_n x^n} = 0 [/tex]
Factor in the external x terms:
[tex]\sum_{n=0}^\infty {3a_n n(n-1)x^n} - \sum_{n=0}^\infty {a_n nx^n} + \sum_{n=0}^\infty {a_n x^n} = 0 [/tex]
Combining all the terms, since they already have the same degree and the same starting point:
[tex]\sum_{n=0}^\infty {[3n(n-1) - n + 1]a_n x^n} = 0 [/tex]
So finally we arrive at [3n(n-1) - n + 1] a_n = 0 for every n, i.e. a_n = 0 unless 3n² - 4n + 1 = 0. The latter gives two values of n, n = 1 and n = 1/3. How do I proceed from here then?

Any good help would be appreciated :)
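Not an answer, but here is a quick SymPy check (my own sketch, assuming SymPy is installed) that x^r satisfies the equation exactly when 3r^2 - 4r + 1 = 0, i.e. for the two exponents r = 1 and r = 1/3 found above:

[code]
# Verify that x^r solves 3x^2 y'' - x y' + y = 0 exactly when 3r^2 - 4r + 1 = 0.
import sympy as sp

x, r = sp.symbols('x r', positive=True)
y = x**r
expr = 3*x**2*sp.diff(y, x, 2) - x*sp.diff(y, x) + y
print(sp.simplify(expr / x**r))                   # prints 3*r**2 - 4*r + 1
for root in (sp.Integer(1), sp.Rational(1, 3)):
    print(root, sp.simplify(expr.subs(r, root)))  # both substitutions give 0
[/code]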
 
Last edited:
  • #85
hawaiidude said:
OK, here's a question... solve the differential equation d^3x/dt^3 - 2 d^2x/dt^2 + dx/dt = 0, and find the Fourier series for f(x) = 0 for -pi < x < 0, f(x) = pi for 0 < x < pi.

This diff. eqn is of the form [tex] a \frac{d^3x}{dt^3} + b \frac{d^2x}{dt^2} + c \frac{dx}{dt} + d\,x = f(t) [/tex] where a, b, c and d are numeric constants. With f(t) = 0, it is solved by the same method as 2nd-order linear diff. eqns with constant coefficients.
So we have [tex]\frac{d^3x}{dt^3} - 2\frac{d^2x}{dt^2} + \frac{dx}{dt} = 0 [/tex].
The characteristic equation is in fact:
r³ - 2r² + r = 0
r(r²-2r+1) = 0
r(r-1)² = 0
r= 0, 1 (repeated)
The general solution will be [tex]x = (c_1t + c_2)e^t + c_3 [/tex] where c1, c2 and c3 are constants of integration.

Correct me if I'm wrong as I'm still new to differential equations :)
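A quick check of that answer with SymPy (my own sketch, assuming SymPy is installed):

[code]
# Verify that x(t) = (c1*t + c2)*e^t + c3 solves x''' - 2x'' + x' = 0,
# and let dsolve reproduce the same general solution.
import sympy as sp

t, c1, c2, c3 = sp.symbols('t c1 c2 c3')
x = (c1*t + c2)*sp.exp(t) + c3
print(sp.simplify(sp.diff(x, t, 3) - 2*sp.diff(x, t, 2) + sp.diff(x, t)))  # prints 0

X = sp.Function('x')
ode = sp.Eq(X(t).diff(t, 3) - 2*X(t).diff(t, 2) + X(t).diff(t), 0)
print(sp.dsolve(ode, X(t)))   # x(t) = C1 + (C2 + C3*t)*exp(t), up to naming
[/code]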
 
  • #87
Hello Sir,
I am a student of high-energy particle physics. I need the solution manual of Differential Equations by S. Balachandra Rao, who is a professor at a college in Bangalore, India. Please, if you can do me a favor, give me the solution manual of this book. I shall be very thankful to you. You can email me at lost_somewhere@live.com.

Thank you...
 
  • #88
Integral said:
Where you presented

[tex] \frac {d} {dt}[\mu (t)y] = \mu (t) g(t)[/tex]
I was thrown for a bit.
What does

[tex] \frac {d} {dt}[\mu (t)y] = \mu (t) g(t)[/tex]

mean? You see, I don't have a copy of B&D.
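For context, here is the standard integrating-factor computation that the quoted line comes from (an editorial sketch, not a quote from B&D): for a first-order linear equation y' + p(t)y = g(t), one multiplies through by [itex]\mu(t) = e^{\int p(t)\,dt}[/itex], so that [itex]\mu' = p\mu[/itex] and the left-hand side collapses by the product rule:

[tex]\mu(t)y' + \mu(t)p(t)y = \frac{d}{dt}\left[\mu(t)y\right] = \mu(t)g(t),[/tex]

and integrating both sides then gives [itex]\mu(t)y = \int \mu(t)g(t)\,dt + C[/itex], which can be solved for y.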
 
Last edited:
  • #89
Hello, I'm also taking a class on ODEs, but I have a problem (I use an introductory course in diff. eq.'s by Zill): I get a nonsensical result. Here is the equation:

sin(3x) + 2y(cos(3x))^3 = 0 (here ^ means raising to a power; how are you writing powers?)

The last result I get, which is of course nonsense, is y^2 = -1/6 (cos3x)^2; another result includes tan(3x) but is still negative.

So y^2 is negative, which is impossible. Is the result right? I think there's a problem with the D.E. as given.

Hope you can help. Thanks.
 
  • #90
I had to solve a first-order nonlinear ODE which led me to this equation; how can I find a solution for y?
y e^y = f(x)
 
  • #91
Erfan said:
I had to solve a first-order nonlinear ODE which led me to this equation; how can I find a solution for y?
y e^y = f(x)
Where are the derivatives, e.g., y', or differentials?
 
  • #92
Erfan said:
I had to solve a first-order nonlinear ODE which led me to this equation; how can I find a solution for y?
y e^y = f(x)
"Lambert's W function", W(x), is defined as the inverse function to f(x)= xex. Taking the W function of both sides gives y= W(f(x)).
 
  • #93
So the question has to be handled numerically using the Lambert W function? I mean, can't we then write the solution explicitly as a function of x? Or can we go no further than the Lambert W function?
 
  • #94
Greg Bernhardt said:
Sounds great! Tutorials like this have been very successful here.

How to make math symbols:
https://www.physicsforums.com/announcement.php?forumid=73

You can make superscripts and subscripts by using these tags

[ sup ] content [ /sup ]
[ sub ] content [ /sub ]

* no spaces

Mathematicians,
I need insight into and an understanding of asymptotic behaviour as applied to the singular Cauchy problem... can anyone comment?
Ken Chwala, BSc Maths, MSc Applied Maths finalist
 
Last edited by a moderator:
  • #95
good job
 
  • #96
Good night,


Last week I began to study differential equations on my own and first saw ODEs with separable variables. I've learned what they are and how to find constant and non-constant solutions. But something extremely trivial is bothering me: I can't figure out why a given ODE is or is not separable. For example, I know that a separable ODE is an ODE of the type

[tex] \frac{dx}{dt} = g(t)h(x) [/tex]

but I simply cannot say why

[tex] \frac{dy}{dx}=\frac{y}{x} [/tex]

is a separable ODE and why

[tex] \frac{dy}{dx}=\frac{x+y}{x^2 +1} [/tex]

is not.

I know this is very trivial and I am missing something, but I don't know what. Can you help me, please? :-)


[]'s!

P.S.: Sorry for my lousy English.
 
