Intro. to Differential Equations

ExtravagantDreams
My intent is to create a thread for people interested in Differential Equations. However, I will explicitly state that I am only a student of this class myself, and that many things could end up being incorrect or presented improperly.

I will merely be going along with the class, mostly using excerpts and questions from the book, "Elementary Differential Equations and Boundary Value Problems: Seventh Edition," by William E. Boyce and Richard C. DiPrima. So truthfully, this is more for myself. Looking things up and explaining it to others seems to be the best way to learn.

If people have any questions or comments, feel free to share. Also, I know there are many knowledgeable people on this board, so be sure to correct me or make suggestions.

This will require knowledge of calculus, but don't be shy to ask if there is something you are unsure of.

First, a little background;

What is a Differential Equation?
A Differential Equation is simply an equation containing a derivative.

Classifications:

Ordinary Differential Equations (ODE) - Equations that involve ordinary derivatives (a single independent variable; there could be multiple dependent variables).
Examples:

\frac {dy} {dt} = ay - b

a \frac {dy_1} {dx} + b \frac {dy_2} {dx} + c y_1 + d y_2 = e

Partial Differential Equations (PDE) - Equations that involve partial derivatives (multiple independent variables).
Examples:

\alpha^2 \left[ \frac {\partial^2 u(x,t)} {\partial x^2} \right] = \frac {\partial u(x,t)} {\partial t}

\frac {\partial^2 V(x,y)} {\partial x^2} + \frac {\partial^2 V(x,y)} {\partial y^2} = 0

Don't let any of this frighten you. Math always looks scary at a glance when it is full of undefined variables.

Linear and Nonlinear
The ordinary differential equation:
F(t, y, y', ..., y^{(n)}) = 0

is said to be linear if F is a linear function of the variables y, y', ..., y^(n) (the dependent variable and its derivatives appear only to the first power). Thus the general linear ordinary differential equation of order n is:

a_0 (t) y^{(n)} + a_1 (t) y^{(n-1)} + ... + a_n (t) y = g(t)

where the superscript (n) denotes the nth derivative, not a power.

An example of a simple Nonlinear ODE would simply be:
y \frac {dy} {dx} = x^4
This concludes the introduction. I may or may not write the next chapter tonight. However, a question: does anyone know an easier, less confusing way to write math on the computer? I know I will have difficulty finding some things, especially subscripts and superscripts. Anyone know a better way to denote these?
 
Last edited by a moderator:
Sounds great! Tutorials like this have been very successful here.

How to make math symbols:
https://www.physicsforums.com/announcement.php?forumid=73

You can make superscripts and subscripts by using these tags

[ sup ] content [ /sup ]
[ sub ] content [ /sub ]

* no spaces
 
Last edited:
First Order Differential Equations

"This chapter deals with differential equations of the first order

\frac {dy} {dt} = f(t,y)

where f is a given function of two variables. Any differentiable function y = Φ(t) that satisfies this equation for all t in some interval is called a solution."

Linear Equations with Variable Coefficients

Using the previous example for an ODE (dy/dt = ay - b) and replacing the constants, we write the more general form:

\frac {dy} {dt} + p(t) y = g(t)
or
y' + p(t) y - g(t) = 0

where p(t) and g(t) are given functions of the independent variable t.

Special cases:
If p(t) = 0 then,

y' = g(t)

and the integral is easily taken;

\frac {dy} {dt} = g(t)

\int \left( \frac {dy} {dt} \right) dt = \int g(t) dt

y = \int g(t) dt + C

If g(t) = 0, then,

y' = -p(t) y

and the integral is once more relatively easy to take;

\frac {dy} {dt} = -p(t) y

\int \frac {dy} {y} = -\int p(t) dt

ln|y| = -\int p(t) dt + C

e^{ln|y|} = e^{-\int p(t) dt + C}

y = Ke^{-\int p(t) dt}, K = \pm e^{C}

However, if neither p(t) nor g(t) is zero in the general equation, a function µ(t) (the integrating factor) is used to solve the equation;

\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \mu (t) g(t)

where µ(t) is chosen so that the left hand side is a recognizable derivative,

\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \frac {d} {dt}[\mu (t)y]

so that, in theory, you end up with;

\frac {d} {dt}[\mu (t)y] = \mu (t) g(t)

Since µ(t) must be carefully chosen to make the previous statement true, let us find it.

\frac {d} {dt}[\mu (t)y] = \mu (t) \frac {dy} {dt} + \mu (t)p(t)y

\frac {d} {dt}[\mu (t)y] = \mu (t) \frac {dy} {dt} + \frac {d \mu (t)} {dt} y

where the latter is simply the product rule applied to µ(t)y. Setting the two right hand sides equal,

\mu (t) \frac {dy} {dt} + \mu (t) p(t) y = \mu (t) \frac {dy} {dt} + \frac {d\mu (t)} {dt} y

Subtracting µ(t)(dy/dt) from both sides

\frac {d \mu (t)} {dt} y = \mu (t) p(t) y

Divide both sides by y

\frac {d \mu (t)} {dt} = \mu (t) p(t)

\frac {d \mu (t)} {\mu (t)} = p(t) dt

\int \frac {d \mu (t)} {\mu (t)} = \int p(t) dt

ln|\mu (t)| = \int p(t) dt + C

The constant C is arbitrary and can be dropped to form the equation,

µ(t) = e^(∫p(t)dt)

So the integrating factor µ(t) can always be found by the last equation.

Let's try some problems together.

Ex1.
y' + 2y = 3

In this equation p(t) = 2 and g(t) = 3. Since neither of them is zero, use the integrating factor µ(t) to create a differentiable equation,

µ(t)y' + µ(t)2y = µ(t)3

Solve µ(t) to be,

µ(t) = e^(∫p(t)dt)
µ(t) = e^(∫2dt)
µ(t) = e^(2t)

Plug the value of µ(t) back into the equation to obtain,

e^(2t)y' + e^(2t)2y = e^(2t)3

Recognize that the left hand side of the equation is merely [ye^(2t)]',

[ye^(2t)]' = d/dt[ye^(2t)] = 3e^(2t)
∫d/dt[ye^(2t)]dt = ∫3e^(2t)dt
ye^(2t) = (3/2)e^(2t) + C
y = (3/2) + Ce^(-2t)
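An aside of my own, not from the book: the final answer can be sanity-checked numerically, for an arbitrary constant C, by comparing a central-difference derivative of y against what the equation requires:

```python
import math

# My own check (not from the book): y = 3/2 + C e^(-2t)
# should satisfy y' + 2y = 3 for any constant C.
C = 5.0  # arbitrary constant of integration
y = lambda t: 1.5 + C * math.exp(-2.0 * t)

def dydt(t, h=1e-6):
    # central finite-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

for t in (0.0, 0.5, 1.0, 2.0):
    residual = dydt(t) + 2.0 * y(t) - 3.0
    assert abs(residual) < 1e-4
print("check passed: y' + 2y = 3")
```

The same finite-difference trick works for any of the first order examples below.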

Ex2.
y' + (1/2)y = 2 + t
µ(t)y' + (1/2)µ(t)y = (2 + t)µ(t)

µ(t) = e^(∫p(t)dt) = e^(t/2)

e^(t/2)y' + (1/2)e^(t/2)y = 2e^(t/2) + te^(t/2)
d/dt[e^(t/2)y] = 2e^(t/2) + te^(t/2)
e^(t/2)y = ∫[2e^(t/2) + te^(t/2)]dt
e^(t/2)y = 4e^(t/2) + ∫[te^(t/2)]dt

Using integration by parts,

u = t, du = dt
dv = e^(t/2)dt, v = 2e^(t/2)

∫[te^(t/2)]dt = 2te^(t/2) - ∫[2e^(t/2)]dt
∫[te^(t/2)]dt = 2te^(t/2) - 4e^(t/2) + C

e^(t/2)y = 4e^(t/2) + 2te^(t/2) - 4e^(t/2) + C = 2te^(t/2) + C
y = 2t + Ce^(-t/2)

For initial value problems it is easy to solve for C. Taking the last problem, solve if
y(0) = 2

2 = 2(0) + Ce^(-(0)/2) = C
C = 2

Therefore,
y = 2t + 2e^(-t/2)
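Again as my own aside, the initial value solution can be verified numerically (both the initial condition and the equation itself):

```python
import math

# My own numeric check: y = 2t + 2 e^(-t/2) should satisfy
# y' + y/2 = 2 + t with y(0) = 2.
y = lambda t: 2.0 * t + 2.0 * math.exp(-t / 2.0)

def dydt(t, h=1e-6):
    # central finite-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

assert abs(y(0.0) - 2.0) < 1e-12  # initial condition
for t in (0.0, 1.0, 3.0):
    residual = dydt(t) + 0.5 * y(t) - (2.0 + t)
    assert abs(residual) < 1e-4
print("check passed")
```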


That is enough for now. Here are some problems to practice on if you so wish.

1.) y' + 3y = t + e^(-2t)
2.) 2y' + y = 3t, hint: rewrite to fit general equation y' + p(t)y = g(t)
3.) t^3(dy/dt) + 4t^2y = e^(-t), y(-1) = 0
 
Thanks Greg, I will run through it tomorrow and change it to make it more readable.
 
Well, I had to dig out my copy of Boyce and DiPrima (2nd Edition!) to follow your development; it all works out as you have presented.

I will follow along with you, relearning what I have not seen for a number of years, and perhaps be able to help you out if you hit some rough spots.
 
http://home.comcast.net/~Integral50/Math/diffeq1.PDF to the first exercise.
 
Thank you for your participation. Your solution is in fact correct, except that the constant is missing. No biggy, I always forget those too. Did you find it hard to follow without the book, and should I have presented this more clearly somehow?

I'm glad to know that someone else knows this stuff. I have some trouble understanding how to find the interval on which a solution exists for nonlinear equations, so if I haven't figured it out by the time I do the write up, perhaps you can help.
 
It was very sloppy of me to leave off the constant, sorry about that.

I was a bit confused by your presentation as it is light on connective text. Where you presented

\frac {d} {dt}[\mu (t)y] = \mu (t) g(t)
I was thrown for a bit. My copy of B&D helped out. The fact is everything you wrote is absolutely correct.

I have taken grad level ODE & PDE courses in the dim and distant past ('86-'88 time frame) So should be able to dredge up some long buried knowledge to help out.

I have always found Differential Equations to be interesting; you might say they are where math and reality meet. With a good background in Diff Eqs and some numerical methods you can do dang near anything.

edit: corrected symbols
 
http://home.comcast.net/~Integral50/Math/diffeq3.PDF to the 3rd exercise.
 
  • #10
Yes, your answer is correct. It actually took me a while to get it. Out of curiosity, why did you change to using the variable s? Are you just used to using it and forgot that it was in terms of t, or can this be done?
 
  • #11
In the integral

\int \mu (s) g(s)ds
The variable, s, is what is called a dummy variable; it can be anything. You will see this frequently.
 
  • #12
Integral, how come your solutions don't show up?
 
  • #13
They are PDFs; do you have Acrobat Reader installed?
 
  • #14
Yeah I do. I didn't realize your text was a link.
 
  • #15
Integral, for the last part of your solution to the third exercise, I get
y(t) = t^(-4) ∫te^(-t)dt = t^(-4)[-te^(-t) - e^(-t) + C]
So the t^(-4) is distributed through the C.
y(t) = -t^(-4)e^(-t)(t + 1) + t^(-4)C
Applying the initial condition, I get C = 0, so
y(t) = -t^(-4)e^(-t)(t + 1). We get the same result, but through different ways.
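Editorial sketch of mine, not from the thread: both forms of the answer can be checked against the original problem t^3 y' + 4t^2 y = e^(-t), y(-1) = 0, with a finite difference:

```python
import math

# My addition: check that y(t) = -t^(-4) e^(-t) (t + 1)
# satisfies t^3 y' + 4 t^2 y = e^(-t) and y(-1) = 0.
def y(t):
    return -(t ** -4) * math.exp(-t) * (t + 1.0)

def dydt(t, h=1e-6):
    # central finite-difference approximation of y'(t)
    return (y(t + h) - y(t - h)) / (2.0 * h)

assert abs(y(-1.0)) < 1e-12  # initial condition
for t in (-2.0, -1.5, -0.5, 1.0, 2.0):
    residual = t ** 3 * dydt(t) + 4.0 * t ** 2 * y(t) - math.exp(-t)
    assert abs(residual) < 1e-3
print("check passed")
```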
 
  • #16
Separable Equations

I'm terribly sorry, I have been entirely too busy recently but I'm back, for the moment.

So now that we know how to solve a first order linear equation with variable coefficients, let us move on to separable equations.

In the last section we used the form dy/dt = ay + b, where the more general form of a first order equation is;

dy/dx = f(x,y)

If this function can be rewritten in the form
M(x) + N(y)dy/dx = 0

where M is a function of x only and N is a function of y only, then the equation is said to be separable. It can also be written in the form of
M(x)dx + N(y)dy = 0

It is as simple as that.

Let's try a couple examples.

Ex. 1
dy/dx = x^2/(1 - y^2)

-x^2 + (1 - y^2)dy/dx = 0
∫-x^2dx + ∫[(1 - y^2)(dy/dx)]dx = ∫0dx
-x^3/3 + (y - y^3/3) = C

Where the answers can be left in a few different forms;
-x^3/3 + (y - y^3/3) = C
-x^3 + 3y - y^3 = 3C = C_1
3y - y^3 = x^3 + C_1


The "cheating" way would be to cross multiply the initial equation. Although this is not correct, it will end in the same answer.

dy/dx = x^2/(1 - y^2)
∫(1 - y^2)dy = ∫x^2dx
y - y^3/3 = x^3/3 + C
3y - y^3 = x^3 + C_1


Let's try one with an initial condition

Ex. 2
dy/dt = ycos(t)/(1 + 2y^2), y(0) = 1

(1 + 2y^2)dy/dt = ycos(t)
-cos(t) + ((1 + 2y^2)/y)dy/dt = 0
∫-cos(t)dt + ∫[((1 + 2y^2)/y)dy/dt]dt = ∫0dt
-sin(t) + ∫[y^(-1) + 2y]dy = C
-sin(t) + ln|y| + y^2 = C
ln|y| + y^2 = sin(t) + C

ln|1| + 1^2 = sin(0) + C
C = 1

ln|y| + y^2 = sin(t) + 1

Again, this differential equation can be solved the "cheating" way by cross multiplying,

dy/dt = ycos(t)/(1 + 2y^2), y(0) = 1

[(1 + 2y^2)/y]dy = cos(t)dt
∫(y^(-1) + 2y)dy = ∫cos(t)dt
ln|y| + y^2 = sin(t) + C
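A check of my own, not from the book: since the answer is implicit, one way to test it is to integrate the ODE numerically from the initial condition and confirm that ln|y| + y^2 - sin(t) stays at its initial value, 1:

```python
import math

# My own check of the separable example: integrate
# dy/dt = y cos(t) / (1 + 2 y^2), y(0) = 1, with small Euler steps,
# then confirm the implicit relation ln|y| + y^2 - sin(t) = 1 still holds.
f = lambda t, y: y * math.cos(t) / (1.0 + 2.0 * y * y)

t, y, dt = 0.0, 1.0, 1e-4
for _ in range(10000):  # march out to t = 1 with explicit Euler steps
    y += dt * f(t, y)
    t += dt

conserved = math.log(abs(y)) + y * y - math.sin(t)
assert abs(conserved - 1.0) < 1e-2
print("implicit solution holds along the trajectory")
```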


If you so desire, here are some problems

1. y' = x^2/y
2. xdx + ye^(-x)dy = 0, y(0) = 1
3. y^2(1 + x^2)^(1/2) + arcsin(x) dx
 
  • #17
The "cheating" way would be to cross multiply the initial equation. Although this is not correct, it will end in the same answer.
____________
I've often wondered why this is the case. dy and dx are variables right? So why can't they be treated as such?
 
  • #18
Here's my solution to the first one:
dy/dx = x^2/y
y(dy/dx) = x^2
∫y(dy/dx)dx = ∫x^2dx
(1/2)y^2 = (1/3)x^3 + C_1
y^2 = (2/3)x^3 + C_2
 
  • #19
"I've often wondered why this is the case. dy and dx are variables right? So why can't they be treated as such?"

x is a variable, y is a function of x. By cross multiplying you would be treating y like a variable.

And your answer is correct.
 
  • #20
Interval of Solution for Nonlinear First Order Equations

I will mention right now, I have some difficulty understanding this part so I will have even more difficulty explaining it and will probably need a little help. The good news, this isn't the most important topic in my opinion.

The reason this is applicable is that it is often easier to establish the existence and uniqueness of a solution than to work out the actual problem.

Theorem:
Let the function f and ∂f/∂y be continuous in some rectangle α < t < β, γ < y < δ containing the point (t_0, y_0). Then, in some interval t_0 - h < t < t_0 + h contained in α < t < β, there is a unique solution y = φ(t) of the initial value problem
y' = f(t,y), y(t_0) = y_0


Assuming that both the function and its partial derivative with respect to y are continuous in a rectangle R: |t - t_0| < A, |y - y_0| < B;

Let:
M = max |f(t,y)|, (t,y) in R
C = min (A, B/M)

Then:
There is one and only one solution y(t) valid for
t_0 - C < t < t_0 + C

http://www.angelfire.com/tx5/extravagantdreams/Nonlinear_Interval.jpg

Ex.
y' = y^2, y(0) = 1
f(t,y) = y^2
∂f/∂y = 2y

M = max |f(t,y)|, (t,y) in R
C = min (A, B/M)

Since A can be made infinitely large, find the max for B

M = max|(1 + B)^2|
C = B/M

C = B/(1 + B)^2
C = 1/4

If you have no idea where the answer came from, you are in the same boat as I. If you do know, please share.
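One possible reading (mine, not from B&D): since A may be taken as large as we like, the binding constraint is C = B/(1 + B)^2, and we are free to pick whichever B makes that largest. A quick numeric scan supports this: the maximum is at B = 1, giving C = 1/4.

```python
# My interpretation, not from the book: scan C(B) = B/(1+B)^2
# and find the B that maximizes the guaranteed half-width C.
Bs = [i * 0.001 for i in range(1, 5001)]  # B in (0, 5]
best_B = max(Bs, key=lambda B: B / (1.0 + B) ** 2)
best_C = best_B / (1.0 + best_B) ** 2
assert abs(best_B - 1.0) < 2e-3
assert abs(best_C - 0.25) < 1e-6
print("C is maximized at B = 1, giving C = 1/4")
```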

Perhaps another example

Ex. 2
y' = (t^2 + y^2)^(3/2), y(t_0) = y_0
∂f/∂y = 3y(t^2 + y^2)^(1/2)

M = max|(t^2 + y^2)^(3/2)|
M = [(t_0 + A)^2 + (y_0 + B)^2]^(3/2)

C = min(A, B/M)
C = min(A, B/[(t_0 + A)^2 + (y_0 + B)^2]^(3/2))


Also, does anyone know how to display determinants or matrices on the computer?
 
  • #21
Keep an eye on what you are doing in the above examples. You are finding a specific region upon which you can guarantee that a unique solution exists. This means you must provide some specific numbers to the bounds of the region.

One thing that might help you a little, redraw your picture with the A and B intervals CENTERED on (t0,y0)

In your worked example the interval for y is:
y_0 - B <= y <= y_0 + B

Since y_0 = 1 we have:
1 - B <= y <= B + 1

so if M = Max|f(t,y)| and f(t,y) = y^2, the maximum value of f(t,y) on our interval is (B + 1)^2
Since A can be made infinitely large find the max for B

I believe that this is saying you are free to pick any value on the t axis for t_0; once that value is defined, the value of f(t_0, y) is fixed and you can guarantee the existence of a max value for f(t,y) on an interval surrounding it.
C = B/(1 + B)^2
This follows from the definitions.
C = 1/4
To get this, a value of B must be given or defined; since you are free to choose B to be anything, it looks like someone picked B = 1.

Does this help?
 
  • #22
I guess I am getting confused because the book does not use
M = max |f(t,y)|
C = min (A,B/M)

notation. From what I understand, all this really says is that you are trying to find the maximum interval of f(t,y), where the limiting factor is either A or B/M (which is denoted C). But why B/M, why not just B? This is what gets me. Is C a value in the horizontal direction? Does dividing B by M make it a horizontal value? And how do you choose B?

I think I am making this too difficult. I fear it is one of those things where you just have to look at it.
 
  • #23
Exact Equations and Integrating Factors

Theorem:
Let the functions M, N, M_y, N_x, where subscripts denote partial derivatives, be continuous in the rectangular region R: α < x < β, γ < y < δ. Then;

M(x,y) + N(x,y)y' = 0

is an exact differential equation in R if and only if

M_y(x,y) = N_x(x,y)

at each point of R. That is, there exists a function ψ satisfying

∂ψ/∂x(x,y) = M(x,y), ∂ψ/∂y(x,y) = N(x,y)
or
ψ_x(x,y) = M(x,y), ψ_y(x,y) = N(x,y)

if and only if M and N satisfy

M_y(x,y) = N_x(x,y)


This means that there is a solution ψ(x,y) of the general equation M + Ny' = 0

Proof of M_y(x,y) = N_x(x,y):
We already defined

ψ_x(x,y) = M(x,y), ψ_y(x,y) = N(x,y)

we can compute the partial derivative of each to be

ψ_xy(x,y) = M_y(x,y), ψ_yx(x,y) = N_x(x,y)

Since M_y and N_x are continuous, it follows that ψ_xy and ψ_yx are continuous also, which guarantees their equality.

Finding ψ(x,y)
when M_y(x,y) = N_x(x,y):
Starting with the equations

ψ_x(x,y) = M(x,y), ψ_y(x,y) = N(x,y)

start with the first and integrate with respect to x

ψ(x,y) = ∫M(x,y)dx + h(y)

Where h is some function of y playing the role of an arbitrary constant. Now we must prove that h(y) can always be chosen so that ψ_y = N

ψ_y(x,y) = ∂/∂y[∫M(x,y)dx + h(y)]
ψ_y(x,y) = ∫M_y(x,y)dx + h'(y)

Setting ψ_y = N we obtain

N(x,y) = ∫M_y(x,y)dx + h'(y)

Where we can then solve for h'(y)

h'(y) = N(x,y) - ∫M_y(x,y)dx
h(y) = ∫[N(x,y) - ∫M_y(x,y)dx]dy

Then the general solution

ψ(x,y) = ∫M(x,y)dx + ∫[N(x,y) - ∫M_y(x,y)dx]dy


Ex. 1

2xy^3 + 3x^2y^2y' = 0

Where M = 2xy^3, N = 3x^2y^2
Then M_y = 6xy^2, N_x = 6xy^2

Since M_y = N_x,
this equation is exact and can be solved by the previous method.

Start with

ψ_x = M = 2xy^3
ψ = ∫Mdx + h(y) = ∫2xy^3dx + h(y)
ψ = x^2y^3 + h(y)

Find h;
Since we know that N = ∂ψ/∂y, differentiate both sides and substitute N

N = ∂/∂y[∫Mdx + h(y)]
N = 3x^2y^2 + h'(y)

h'(y) = 0, h(y) = C

Plug h(y) back into the original ψ equation
ψ = x^2y^3 + C

and the solution is given implicitly by
x^2y^3 = K
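My own aside: the exactness condition M_y = N_x is easy to confirm numerically with finite differences, which is a handy habit before grinding through the integration:

```python
# My finite-difference check of exactness for the example
# M = 2 x y^3, N = 3 x^2 y^2: we need dM/dy == dN/dx.
M = lambda x, y: 2.0 * x * y ** 3
N = lambda x, y: 3.0 * x ** 2 * y ** 2

def partial(f, x, y, wrt, h=1e-6):
    # central difference in x or in y
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

for (x, y) in [(1.0, 2.0), (-0.5, 1.5), (2.0, -1.0)]:
    assert abs(partial(M, x, y, "y") - partial(N, x, y, "x")) < 1e-4
print("M_y = N_x: the equation is exact")
```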

Ex. 2
Find the function ψ(x,y) of
y' = -(ax + by)/(bx - cy)

(bx - cy)y' = -(ax + by)
(ax + by) + (bx - cy)y' = 0

So
M = ax + by, N = bx - cy
M_y = b, N_x = b

Since M_y = N_x, the equation is exact

ψ_x = M = ax + by
∫ψ_xdx = ∫(ax + by)dx
ψ = (1/2)ax^2 + bxy + h(y)
ψ_y = ∂/∂y[(1/2)ax^2 + bxy + h(y)]
ψ_y = bx + h'(y)

ψ_y = N

N = bx + h'(y)

(bx - cy) = bx + h'(y)

h'(y) = -cy
h(y) = -(1/2)cy^2

ψ = (1/2)ax^2 + bxy - (1/2)cy^2 = K


Ex. 3
(ycosx + 2xey) + (sinx + x2ey - 1)y' = 0

So,
M = ycosx + 2xey, N = sinx + x2ey - 1
My = cosx + 2xey, Nx = cosx + 2xey

My = Nx, so can be solved using the exact method

Remember that
M = [psi]x
N = [psi]y

&int;[psi]xdx = &int;(ycosx + 2xey)dx
[psi] = ysinx + eyx2 + h(y)

Taking a partial derivative of both sides with respect to y,

[psi]y = sinx + x2ey + h'(y)

[psi]y = N = sinx + x2ey - 1

sinx + x2ey - 1 = sinx + x2ey + h'(y)
h'(y) = -1
&int;h'(y)dy = h(y) = -y

[psi](x,y) = ysinx + eyx2 - y = k


Integrating Factor
If an equation is not exact, it can be multiplied by an integrating factor μ so that it becomes exact.

Starting with the general form
M(x,y) + N(x,y)y' = 0

and
M_y ≠ N_x

There is an integrating factor μ such that
μM(x,y) + μN(x,y)y' = 0

and
(μM)_y = (μN)_x

or

μ_yM + μM_y = μ_xN + μN_x
Mμ_y - Nμ_x + (M_y - N_x)μ = 0

We will not discuss finding a general μ(x,y), since this is entirely too difficult for this course, so we shall stick to μ as a function of x or y only.

If μ is a function of x;
(μM)_y = μM_y, (μN)_x = μN_x + N(dμ/dx)

Thus, if (μM)_y is to equal (μN)_x;
dμ/dx = (M_y - N_x)μ/N

If the function (M_y - N_x)/N depends on x only, then μ can also be a function of x only and an integrating factor has been found.

The procedure for finding μ(y) is similar, and the equation
dμ/dy = -(M_y - N_x)μ/M
is derived.

Ex. 4
(3xy + y2) + (x2 + xy)y' = 0
M = 3xy + y2, N = x2 + xy
My = 3x + 2y, Nx = 2x + y

Since,
My [x=] Nx

the equation is not seperable and an integrating factor must be found

[mu](x)
(My - Nx)/N dx = d[mu](x)/[mu](x)
ln|[mu](x)| = &int;(3x + 2y - 2x - y)/(x2 + xy)dx
ln|[mu](x)| = &int;(x + y)/x(x + y)dx
ln|[mu](x)| = &int;(1/x)dx
ln|[mu](x)| + ln|x|
[mu](x) = x

Since it is a function of x only, this can be used as the integrating factor. Now to solve the equations

x(3xy + y2) + x(x2 + xy)y' = 0
M = 3x2y + xy2, N = x3 + x2y
My = 3x2 + 2xy, Nx = 3x2 + 2yx

My = Nx
So it is now exact.

&part;/&part;y[&int;(3x2y + xy2)dx] = x3 + x2y
&part;/&part;y[yx3 + (1/2)y2x2 + h(y)] = x3 + x2y
x3 + x2y + h'(y) = x3 + x2y

h'(y) = 0, h(y) = C

yx3 + (1/2)y2x2 = K
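As a sanity check of my own: after multiplying through by μ(x) = x, the new M and N really do satisfy M_y = N_x, which a finite-difference test confirms:

```python
# My check that mu(x) = x makes (3xy + y^2) + (x^2 + xy) y' = 0 exact:
# with M, N below being mu*M and mu*N, we need M_y == N_x.
M = lambda x, y: x * (3.0 * x * y + y ** 2)  # mu * M = 3x^2 y + x y^2
N = lambda x, y: x * (x ** 2 + x * y)        # mu * N = x^3 + x^2 y

def partial(f, x, y, wrt, h=1e-6):
    # central difference in x or in y
    if wrt == "x":
        return (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    return (f(x, y + h) - f(x, y - h)) / (2.0 * h)

for (x, y) in [(1.0, 1.0), (2.0, -1.0), (0.5, 3.0)]:
    assert abs(partial(M, x, y, "y") - partial(N, x, y, "x")) < 1e-3
print("exact after multiplying by mu(x) = x")
```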

Problems:

1. (2x + 3) + (2y - 2)y' = 0
2. (9x^2 + y - 1)dx - (4y - x)dy = 0, y(1) = 0
3. (3x^2y + 2xy + y^3)dx + (x^2 + y^2)dy = 0
 
  • #24
Originally posted by ExtravagantDreams
I guess I am getting confused because the book does not use
M = max |f(t,y)|
C = min (A,B/M)

notation. From what I understand, all this really says is that you are trying to find the maximum interval of f(t,y), where the limiting factor is either A or B/M (which is denoted C). But why B/M, why not just B? This is what gets me. Is C a value in the horizontal direction? Does dividing B by M make it a horizontal value? And how do you choose B?

I think I am making this too difficult. I fear it is one of those things where you just have to look at it.

This is covered pretty well in my copy of B&D.

He is using Picard iterations to prove the existence of solutions. I'll cheat on this because he says it better than I. Here is a link to a scan of Boyce & DiPrima, 2nd Ed.: http://home.comcast.net/~rossgr1/Math/B_D1.jpg
Hope this helps, is this not in your book?
 
  • #25
Second Order Linear Differential Equations

A second order differential equation is linear if it is in the following form;
y'' + p(t)y' + q(t)y = g(t)
or
P(t)y'' + Q(t)y' + R(t)y = g(t)

Where p, q, and g are functions of t

If the problem has initial conditions, they will be in the form of
y(t_0) = y_0, y'(t_0) = y'_0.

Let's start with the easiest:
Homogeneous with Constant Coefficients

Homogeneous in this case means g(t) = 0,
with constant coefficients meaning P(t) = a, Q(t) = b, and R(t) = c.
This makes the general equation;

ay'' + by' + cy = 0

Solving the homogeneous equation will later always provide a way to solve the corresponding nonhomogeneous problem.

I'm not going to prove all this, but you can take the characteristic equation of this function as

ar^2 + br + c = 0

and, so to speak, find the roots of this equation.

r_1,2 = (-b ± √(b^2 - 4ac))/2a

r_1 = (-b + √(b^2 - 4ac))/2a
r_2 = (-b - √(b^2 - 4ac))/2a

Assuming that these roots are real and different, then;
y_1(t) = e^(r_1 t)
y_2(t) = e^(r_2 t)

and y = C_1y_1(t) + C_2y_2(t)

Which comes from the initial derivation. If someone really really wants to know, I will show you.

y = C_1e^(r_1 t) + C_2e^(r_2 t)

This is your general solution.
C_1 and C_2 can be solved for if initial conditions y(t_0) and y'(t_0) are given, in the following manner;

y = C_1e^(r_1 t) + C_2e^(r_2 t)

y(t_0) = y_0

y_0 = C_1e^(r_1 t_0) + C_2e^(r_2 t_0)

and

y' = r_1C_1e^(r_1 t) + r_2C_2e^(r_2 t)

y'(t_0) = y'_0

y'_0 = r_1C_1e^(r_1 t_0) + r_2C_2e^(r_2 t_0)


It is also possible to verify your solution by using
y = C_1e^(r_1 t) + C_2e^(r_2 t)
y' = r_1C_1e^(r_1 t) + r_2C_2e^(r_2 t)
y'' = r_1^2C_1e^(r_1 t) + r_2^2C_2e^(r_2 t)

and plugging them back into the equation
ay'' + by' + cy = 0

Ex. 1
y'' + 5y' + 6y = 0, y(0) = 2, y'(0) = 3

r^2 + 5r + 6 = 0

(r + 3)(r + 2) = 0

r_1 = -3
r_2 = -2

y = C_1e^(-3t) + C_2e^(-2t)

2 = C_1e^(-3(0)) + C_2e^(-2(0))
2 = C_1 + C_2


y' = -3C_1e^(-3t) - 2C_2e^(-2t)

3 = -3C_1e^(-3(0)) - 2C_2e^(-2(0))
3 = -3C_1 - 2C_2

Row reducing the augmented matrix,

[ 1  1 |  2]
[-3 -2 |  3] ~

[ 1  1 |  2]
[ 0  1 |  9] ~

[ 1  0 | -7]
[ 0  1 |  9]

C_1 = -7
C_2 = 9

y = -7e^(-3t) + 9e^(-2t)
y' = 21e^(-3t) - 18e^(-2t)
y'' = -63e^(-3t) + 36e^(-2t)

y'' + 5y' + 6y = 0
(-63e^(-3t) + 36e^(-2t)) + 5(21e^(-3t) - 18e^(-2t)) + 6(-7e^(-3t) + 9e^(-2t))

= e^(-3t)(-63 + 105 - 42) + e^(-2t)(36 - 90 + 54) = 0
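My own addition: that final verification is easy to automate, since the derivatives of the answer are given explicitly in the post:

```python
import math

# My own check: y = -7 e^(-3t) + 9 e^(-2t) should satisfy
# y'' + 5y' + 6y = 0 with y(0) = 2 and y'(0) = 3.
y = lambda t: -7.0 * math.exp(-3.0 * t) + 9.0 * math.exp(-2.0 * t)
yp = lambda t: 21.0 * math.exp(-3.0 * t) - 18.0 * math.exp(-2.0 * t)
ypp = lambda t: -63.0 * math.exp(-3.0 * t) + 36.0 * math.exp(-2.0 * t)

assert abs(y(0.0) - 2.0) < 1e-12   # first initial condition
assert abs(yp(0.0) - 3.0) < 1e-12  # second initial condition
for t in (0.0, 0.7, 2.0):
    assert abs(ypp(t) + 5.0 * yp(t) + 6.0 * y(t)) < 1e-10
print("solution verified")
```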
 
  • #26
ok here's a question... solve the differential equation d^3x/dt^3 - 2(d^2x/dt^2) + dx/dt = 0



and find the Fourier series for f(x) = 0 for -pi < x < 0, and pi for 0 < x < pi
 
  • #27
I'm actually not entirely sure how to solve this. This is not something we covered (higher order linear equations)

Skimming through the section, it looks like it is done in the same way as regular constant coefficient homogeneous equations.

r^3 - 2r^2 + r = 0

where the r's are the roots of the characteristic equation

r(r^2 - 2r + 1) = 0
r(r - 1)^2 = 0

r_1 = 0
r_2 = r_3 = 1 (a repeated root)


x = C_1 + (C_2 + C_3t)e^t

and I'm not sure what a Fourier series is.
 
  • #28
Complex Roots; 2nd Order Homogeneous Constant Coeff. Diff. Eqs.

I have fallen way behind in doing this, about 3 + 3 fairly complicated sections, so I will try to cover as much as I can during my time off tonight and tomorrow morning.

So, we already saw what happens to 2nd order linear homogeneous differential equations with constant coefficients (wow, that is a mouth full) in the form;

ay'' + by' + cy = 0

where one finds the roots by evaluating;

ar^2 + br + c = 0

and the answer is in the form;

y = C_1e^(r_1 t) + C_2e^(r_2 t)

However, when the expression

b^2 - 4ac

is negative, the roots can be written in the form;

r_1,2 = λ ± iμ

Where;
λ = -b/2a
μ = √(4ac - b^2)/2a
when b^2 < 4ac

and

y_1(t) = e^((λ + iμ)t)
y_2(t) = e^((λ - iμ)t)

Let us look at some properties first:

Using Taylor series it can be proven that
e^(it) = cos(t) + isin(t)
e^(-it) = cos(t) - isin(t)
e^(iμt) = cos(μt) + isin(μt)

Where then
e^((λ + iμ)t) = e^(λt)e^(iμt) = e^(λt)[cos(μt) + isin(μt)]

Looking for real solutions

Addition of functions 1 and 2
y_1(t) + y_2(t) =
(e^(λt)[cos(μt) + isin(μt)]) + (e^(λt)[cos(μt) - isin(μt)]) = 2e^(λt)cos(μt)

Subtraction of functions 1 and 2
y_1(t) - y_2(t) =
(e^(λt)[cos(μt) + isin(μt)]) - (e^(λt)[cos(μt) - isin(μt)]) = 2e^(λt)isin(μt)

Since the equation is linear, sums and constant multiples of solutions are also solutions. Therefore, by simply neglecting the constant multipliers, we obtain a pair of real-valued solutions;


u(t) = e^(λt)cos(μt)
v(t) = e^(λt)sin(μt)

where u and v are just the real and imaginary parts of the solution respectively. These parts are linearly independent, so a combination of them is also a solution;

W(u,v)(t) ≠ 0

Since the Wronskian of these two functions is equal to μe^(2λt), it is always nonzero as long as μ ≠ 0. Since μ is greater than zero when b^2 < 4ac, the general solution is;

y = C_1e^(λt)cos(μt) + C_2e^(λt)sin(μt)

Ex.
y'' + y' + y = 0

r^2 + r + 1 = 0

r_1,2 = (-1 ± √[(1)^2 - 4(1)(1)])/2(1) =
-1/2 ± i√(3)/2

Then;
λ = -1/2
μ = √(3)/2

y = C_1e^(-t/2)cos(√(3)t/2) + C_2e^(-t/2)sin(√(3)t/2)
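My own aside: the general solution can be spot-checked numerically with finite differences for the first and second derivatives, for arbitrary C_1 and C_2:

```python
import math

# My spot check: y = e^(-t/2)[C1 cos(sqrt(3) t/2) + C2 sin(sqrt(3) t/2)]
# should satisfy y'' + y' + y = 0 for any C1, C2.
C1, C2 = 2.0, -1.0  # arbitrary constants
w = math.sqrt(3.0) / 2.0
y = lambda t: math.exp(-t / 2.0) * (C1 * math.cos(w * t) + C2 * math.sin(w * t))

def d(f, t, h=1e-4):
    # central difference for f'(t)
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    # central difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

for t in (0.0, 1.0, 3.0):
    assert abs(d2(y, t) + d(y, t) + y(t)) < 1e-4
print("complex-root solution verified")
```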
 
  • #29
ok here's another one solve ...
:: .
x -6x+9x=y

how would u solve this
 
  • #30
Originally posted by hawaiidude

x - 6x + 9x = y

LOL, are you trying to make fun of me?

What is the solution to the previous one?
 
  • #31
lol no it is correct
 
  • #32
Originally posted by hawaiidude
ok here's another one solve ...
:: .
x -6x+9x=y

how would u solve this

Where are the differentials? You need to make this a differential equation. As it stands I get

y=4x

Not real exciting.
 
  • #33
Originally posted by Integral
Where are the differentials? You need to make this a differential equation. As it stands I get

y=4x

Not real exciting.

Yeah, that's what I was thinking.
 
  • #34
Repeated Roots; Reduction of Order

This is a method for finding a second solution to a 2nd order linear homogeneous differential equation with constant coefficients, assuming you already have the first solution. It is needed when the roots of the characteristic equation are the same (when b^2 - 4ac = 0).

Let's remember that, given the equation;

ay'' + by' + cy = 0,

where a, b, and c are constant coefficients,

There can be a solution found by first finding the roots of;

ar2 + br + c = 0

Then it is a simple matter of remembering the equations;

C_1e^(r_1 t) + C_2e^(r_2 t)

for ordinary roots, and;

C_1e^(λt)cos(μt) + C_2e^(λt)sin(μt)

for complex roots in the form;

r = λ ± iμ

The general idea is to find a non-constant multiple of the first solution. Given the first solution, y_1 = e^(rt), the general solution is C_1e^(rt), and another solution y_2 = v(t)y_1, where v(t) is some function of t, can be found.

Using;
y_2 = v(t)y_1

Find;
y_2' = v'(t)y_1 + v(t)y_1'
y_2'' = v''(t)y_1 + 2v'(t)y_1' + v(t)y_1''

And plug these into the original equation;

ay'' + by' + cy = 0

a[v''(t)y_1 + 2v'(t)y_1' + v(t)y_1''] + b[v'(t)y_1 + v(t)y_1'] + c[v(t)y_1] = 0

Now group the equation such that all v(t), v'(t), and v''(t) terms are together; if it was done correctly, all the v(t) terms will cancel.

v''(t)[ay_1] + v'(t)[2ay_1' + by_1] + v(t)[ay_1'' + by_1' + cy_1] = 0

Since y_1 is a solution of the homogeneous equation,

ay_1'' + by_1' + cy_1 = 0

So you are left with;

v''(t)[ay_1] + v'(t)[2ay_1' + by_1] = 0

Solve for v(t) by integrating and plug back into;
y_2 = v(t)y_1

Let's try an example:

y'' + 4y' + 4y = 0

Find the roots;
r_1,2 = [-4 ± √(4^2 - 4(1)(4))]/2(1)
r_1,2 = -2

y_1 = e^(-2t)

y_2 = v(t)e^(-2t)
y_2' = v'(t)e^(-2t) - 2v(t)e^(-2t)
y_2'' = v''(t)e^(-2t) - 4v'(t)e^(-2t) + 4v(t)e^(-2t)

Plug into the original equation;
v''(t)e^(-2t) - 4v'(t)e^(-2t) + 4v(t)e^(-2t) + 4[v'(t)e^(-2t) - 2v(t)e^(-2t)] + 4[v(t)e^(-2t)] = 0

Combine terms;
v''(t)[e^(-2t)] + v'(t)[-4e^(-2t) + 4e^(-2t)] + v(t)[4e^(-2t) - 8e^(-2t) + 4e^(-2t)] = 0

v''(t)[e^(-2t)] + v'(t)[0] + v(t)[0] = 0

In this case the v'(t) terms also cancelled. So,

v''(t)[e^(-2t)] = 0

Since,
e^(-2t) ≠ 0,
v''(t) = 0

Integrating;
v'(t) = C_1,
v(t) = C_1t + C_2,

Remember the original solution;
y_1 = e^(-2t)

and;
y_2 = v(t)e^(-2t)

y_2 = [C_1t + C_2]e^(-2t)
y_2 = C_1te^(-2t) + C_2e^(-2t)

Where the second term is just a constant multiple of the first solution, so the general solution is;
y = C_1te^(-2t) + C_2e^(-2t)


It actually turns out that y_2 = ty_1, always, in the repeated root case.
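A quick numeric confirmation of my own that the repeated-root answer works, for arbitrary constants:

```python
import math

# My check of the repeated-root answer: y = C1 t e^(-2t) + C2 e^(-2t)
# should satisfy y'' + 4y' + 4y = 0.
C1, C2 = 3.0, -2.0  # arbitrary constants
y = lambda t: (C1 * t + C2) * math.exp(-2.0 * t)

def d(f, t, h=1e-4):
    # central difference for f'(t)
    return (f(t + h) - f(t - h)) / (2.0 * h)

def d2(f, t, h=1e-4):
    # central difference for f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / (h * h)

for t in (0.0, 0.5, 2.0):
    assert abs(d2(y, t) + 4.0 * d(y, t) + 4.0 * y(t)) < 1e-4
print("repeated-root solution verified")
```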

Give these problems a try:

1. y'' - 2y' + y = 0
2. 4y'' + 12y' + 9y = 0
 
  • #35
hey thanks... nice examples... clear and easy to understand... but here's a problem... when x = 0, 3x^2y'' - xy' + y = 0
 
  • #36
I'm not sure what you are asking;

3x^2y'' - xy' + y = 0,

When x = 0

3(0)^2y'' - (0)y' + y = 0

y = 0 ?
 
  • #37
not really, but how do you find recurrence formulas? they're very complicated and i can't understand it... like the recurrence for (x^2 + 4)y'' + xy = x + 2

i thought you find the second derivative and the first and the original

y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + ... + a_nx^n + a_{n+1}x^{n+1} + a_{n+2}x^{n+2} + ...

y' = a_1 + 2a_2x + 3a_3x^2 ... and so on..

how would you compute this? i am very confused... all i got is you get the combined terms, in this case 8a_2 = 2, 24a_3 + a_0 = 1, 2a_2 + 48a_4 + a_1 = 0 ...

then it's like n(n-1)a_n + 4(n+2)(n+1)a_{n+2} + a_{n-1} = 0 (n = 0, 1, 2, 3, 4 ...)

i know how they got the combined terms but how did they get the n's?
 
  • #38
Are you talking about the sums in series solutions of 2nd order linear equations? If it is beyond that, sorry, I can't help ya. I'm only taking this course just now.
 
  • #39
Nonhomogeneous Equations; Method of Undetermined Coefficients

Here we will look at Second Order, nonhomogeneous Linear Equations of the form;

L[y] = y'' + p(t)y' + q(t)y = g(t)

where p(t), q(t), and g(t) are continuous functions on an open interval I. We can use the corresponding homogeneous equation, where g(t) = 0, to help solve the nonhomogeneous one.

If Y_1 and Y_2 are two solutions of the nonhomogeneous equation, then their difference (Y_1 - Y_2) is a solution of the corresponding homogeneous equation. If, in addition, y_1 and y_2 are a fundamental set of solutions of the homogeneous equation, then;

Y_1 - Y_2 = c_1y_1(t) + c_2y_2(t)

where c_1 and c_2 are certain constants.


The general solution of the nonhomogeneous equation can then be written as;

y = φ(t) = c_1y_1(t) + c_2y_2(t) + Y(t)

where y_1 and y_2 are a fundamental set of solutions of the corresponding homogeneous equation, c_1 and c_2 are arbitrary constants, and Y(t) is some specific solution of the nonhomogeneous equation.

We will attempt here to find the function Y(t) of the nonhomogeneous equation. There are two methods of doing this, namely the method of undetermined coefficients, which is discussed in this section, and the method of variation of parameters, which will be discussed next time.

The idea is to assume a solution for Y(t) with an undetermined coefficient, then plug this answer back into the original equation and try to find the coefficient. If this is successful, a solution Y(t) has been found; if not, there is no solution of the form that was assumed, and a different assumption should be made.

Clearly, this has a drawback: the form must be fairly easy to guess. Yet once such an assumption has been made, the solution is not difficult to obtain.

Let's look at some examples:

y'' - 3y' - 4y = 3e2t

Here we seek a function such that the combination of Y''(t) - 3Y'(t) - 4Y(t) = 3e2t

Since the exponential function reproduces itself through differentiation, it is the most plausible guess. Let's assume Y(t) = Ae2t, where A is the undetermined coefficient.

Y(t) = Ae2t
Y'(t) = 2Ae2t
Y''(t) = 4Ae2t

Plug these values into the combination equation;

1[4Ae2t] - 3[2Ae2t] - 4[Ae2t] = 3e2t

Attempt to solve for A;

[4 - 6 - 4]A = 3
A = -1/2

So;

Y(t) = (-1/2)e2t
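Just to double-check myself (this is my own quick sketch in Python, not from the book): plugging Y, Y', and Y'' back in, the residual of the equation should vanish for every t;

```python
import math

# Verify Y(t) = -(1/2)e^{2t} against y'' - 3y' - 4y = 3e^{2t}
Y   = lambda t: -0.5 * math.exp(2 * t)
Yp  = lambda t: -1.0 * math.exp(2 * t)   # Y'(t)  = 2A e^{2t}
Ypp = lambda t: -2.0 * math.exp(2 * t)   # Y''(t) = 4A e^{2t}

residual = lambda t: Ypp(t) - 3 * Yp(t) - 4 * Y(t) - 3 * math.exp(2 * t)

# residual should be zero (up to floating-point noise) at every sample point
print(max(abs(residual(k / 10)) for k in range(-20, 21)) < 1e-9)  # True
```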

Let us try another one, where we first assume the incorrect solution;

y'' - 3y' - 4y = 2sin(t)

Let us assume;

Y(t) = Asin(t)
Y'(t) = Acos(t)
Y''(t) = -Asin(t)

1[-Asin(t)] - 3[Acos(t)] - 4[Asin(t)] = 2sin(t)

A[-5 - 3cot(t)] = 2

Clearly, this cannot be solved for a constant A. Let's assume a different solution, namely Y(t) = Asin(t) + Bcos(t), where B is just another undetermined coefficient.

Y(t) = Asin(t) + Bcos(t)
Y'(t) = Acos(t) - Bsin(t)
Y''(t) = -Asin(t) - Bcos(t)

1[-Asin(t) - Bcos(t)] - 3[Acos(t) - Bsin(t)] - 4[Asin(t) + Bcos(t)] = 2sin(t)

[-A + 3B - 4A]sin(t) + [-B - 3A - 4B]cos(t) = 2sin(t)

Since the right side has 2sin(t) and no cos(t), the coefficient of sin(t) on the left must be 2 and the coefficient of cos(t) must be zero.

Hence the coefficients of sin(t) and cos(t) must be;

-5A + 3B = 2,
-3A - 5B = 0

Row reducing the augmented matrix;

[1  -3/5  | -2/5]
[0  -34/5 | -6/5]

[1  0 | -5/17]
[0  1 |  3/17]

A = -5/17
B = 3/17

So, Y(t) = (-5/17)sin(t) + (3/17)cos(t)
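If you'd rather not row reduce by hand, a 2x2 system like this one can be solved directly with Cramer's rule. Here is a small sketch (my own, using Python's Fraction type for exact arithmetic);

```python
from fractions import Fraction as F

# Coefficient-matching system from above:
#   -5A + 3B = 2
#   -3A - 5B = 0
# written as [a b; c d][A; B] = [e; f] and solved by Cramer's rule.
a, b, c, d, e, f = F(-5), F(3), F(-3), F(-5), F(2), F(0)
det = a * d - b * c          # determinant of the coefficient matrix
A = (e * d - b * f) / det
B = (a * f - c * e) / det
print(A, B)  # -5/17 3/17
```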

Let's try one more;

y'' - 3y' - 4y = 4t2 - 1

Since g(t) is a polynomial with terms t2, t, 1, with coefficients 4, 0, -1 respectively, we can assume the solution;

Y(t) = At2 + Bt + C
Y'(t) = 2At + B
Y''(t) = 2A

[2A] - 3[2At + B] - 4[At2 + Bt + C] = 4t2 - 1

[-4A]t2 + [-6A - 4B]t + [2A - 3B - 4C] = 4t2 - 1

-4At2 = 4t2
A = -1

-6At - 4Bt = 0t
-6(-1) = 4B
B = 3/2

2A - 3B - 4C = -1
2(-1) - 3(3/2) - 4C = -1
-4C = 11/2
C = -11/8

So, Y(t) = (-1)t2 + (3/2)t + (-11/8)
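Sign errors are easy to make in these coefficient matchings, so here is a quick exact-arithmetic check (again my own sketch, not from the book) that this Y(t) really satisfies the equation;

```python
from fractions import Fraction as F

# Y(t) = At^2 + Bt + C with the coefficients found above
A, B, C = F(-1), F(3, 2), F(-11, 8)

def residual(t):
    Y   = A * t * t + B * t + C
    Yp  = 2 * A * t + B          # Y'(t)
    Ypp = 2 * A                  # Y''(t)
    return Ypp - 3 * Yp - 4 * Y - (4 * t * t - 1)

# the residual is the zero polynomial, so it vanishes at every sample point
print(all(residual(F(t)) == 0 for t in range(-3, 4)))  # True
```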

Problems:

Find the general solution of;
1. y'' - 2y' - 3y = 3e2t
2. 2y'' + 3y' + y = t2 + 3sin(t)

3. y'' + 2y' + 5y = 4e-tcos(2t), y(0) = 1, y'(0) = 0
 
  • #40
Yeah, thanks... By the way, how do you compute PDEs? The advanced types?
 
  • #41
I believe that is Fourier series, something I will not learn until Monday.
 
  • #42
I thought PDEs were partial differential equations?
 
  • #43
He meant that's one way to solve them, I think. Yes, PDE means partial differential equation. Some of them can be reduced to ordinary differential equations, and there are just tons of methods for particular cases. This has been one of the most active branches of math research for hundreds of years, and no end in sight.
 
  • #44
Oh, by the way, here are some things that I wish to know. What are the recurrence formulas for the following?

1) (x^2 + 4)y'' + xy = x + 2
2) y'' + y = 0
3) 8x^2 y'' + 10xy' + (x - 1)y = 0
 
  • #45
Nonhomogeneous Equations; Variation of Parameters

We have already seen how to find a particular solution of a nonhomogeneous equation using the Method of Undetermined Coefficients; now we will use variation of parameters to accomplish the same thing.

Let's jump straight to an example;

y'' + 4y = 3csc(t)

Noting that the corresponding homogeneous equation is;

y'' + 4y = 0

We first solve this equation.

r1,2 = &plusmn;&radic;(-4(1)(4))/2(1) = &plusmn;2i

Remember that the solution will be of the form;

e&lambda;t[c1cos(&mu;t) + c2sin(&mu;t)]

where in this case &lambda; = 0 and &mu; = 2, so

yc(t) = c1cos(2t) + c2sin(2t)
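The characteristic roots can be spot-checked with the quadratic formula; a minimal sketch (mine, not part of the derivation) for r2 + 4 = 0;

```python
import cmath

# roots of the characteristic equation r^2 + 4 = 0, i.e. a = 1, b = 0, c = 4
a, b, c = 1, 0, 4
disc = cmath.sqrt(b * b - 4 * a * c)   # complex square root of the discriminant
r1 = (-b + disc) / (2 * a)
r2 = (-b - disc) / (2 * a)
print(r1, r2)  # purely imaginary roots +/- 2i
```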

The basic idea is to replace the constants c1 and c2 with functions u1(t) and u2(t) and solve for these functions.

Starting with the equation;

y = u1(t)cos(2t) + u2(t)sin(2t)

we can differentiate to obtain;

y' = u'1(t)cos(2t) + u'2(t)sin(2t) - 2u1(t)sin(2t) + 2u2(t)cos(2t)

Since we have only one equation so far but two unknown functions, there would be infinitely many solutions. Let us impose a second condition so that we get a single one. It is not important here why we are free to do this;

We require that;

u'1(t)cos(2t) + u'2(t)sin(2t) = 0, so;

y' = 2u2(t)cos(2t) - 2u1(t)sin(2t)

y'' = 2u'2(t)cos(2t) - 2u'1(t)sin(2t) - 4u2(t)sin(2t) - 4u1(t)cos(2t)

Substitute these expressions back into the original equation;

[2u'2(t)cos(2t) - 2u'1(t)sin(2t) - 4u2(t)sin(2t) - 4u1(t)cos(2t)] + 4[u1(t)cos(2t) + u2(t)sin(2t)] = 3csc(t)

2u'2(t)cos(2t) - 2u'1(t)sin(2t) = 3csc(t)


From the second condition we imposed;
u'1(t)cos(2t) + u'2(t)sin(2t) = 0

u'2(t) = -u'1(t)cos(2t)/sin(2t)


Substitute;
2[-u'1(t)cos(2t)/sin(2t)]cos(2t) - 2u'1(t)sin(2t) = 3csc(t)

Simplify;

u'1(t) = -(3csc(t)sin(2t))/2 = -3cos(t)

Substituting once more;

u'2(t) = -u'1(t)cos(2t)/sin(2t)
u'2(t) = [-3cos(t)]cos(2t)/sin(2t)
u'2(t) = (3/2)csc(t) -3sin(t)

Now that we have obtained u'1(t) and u'2(t), integrate;

u1(t) = -3sin(t) + c1
u2(t) = (3/2)ln|csc(t) - cot(t)| + 3cos(t) + c2

Finally, substitute u1(t) and u2(t) into the expression for y;

y = [-3sin(t) + c1]cos(2t) + [(3/2)ln|csc(t) - cot(t)| + 3cos(t) + c2]sin(2t)
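To make sure no sign was lost along the way, here is a quick numerical check (my own sketch, valid on 0 &lt; t &lt; &pi; where csc(t) - cot(t) &gt; 0) that the particular part of y solves y'' + 4y = 3csc(t), using a central-difference approximation for y'';

```python
import math

def Yp(t):
    # particular solution from above: u1(t)cos(2t) + u2(t)sin(2t)
    u1 = -3 * math.sin(t)
    u2 = 1.5 * math.log(1 / math.sin(t) - math.cos(t) / math.sin(t)) + 3 * math.cos(t)
    return u1 * math.cos(2 * t) + u2 * math.sin(2 * t)

h = 1e-4
for t in (0.4, 0.9, 1.4, 2.0, 2.6):
    # central-difference approximation of Y''(t)
    ypp = (Yp(t + h) - 2 * Yp(t) + Yp(t - h)) / h ** 2
    print(round(abs(ypp + 4 * Yp(t) - 3 / math.sin(t)), 3))  # 0.0
```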


That probably looked more confusing than it needs to be, so let's look at an arbitrary function to see a step-by-step method and show that this can be used for any Second Order Linear Nonhomogeneous Equation.

Let us start with the general equation;
y'' + p(t)y' + q(t)y = g(t)

The general solution to the corresponding homogeneous equation will be;
yc(t) = c1y1(t) + c2y2(t)

Here we assume y1 and y2 are a known fundamental set of solutions of the corresponding homogeneous equation (when the equation has constant coefficients, they can always be found). Now in the general solution, replace the constants with the functions u.

y = u1(t)y1(t) + u2(t)y2(t)

Take the derivative;
y' = u'1(t)y1(t) + u'2(t)y2(t) + u1(t)y'1(t) + u2(t)y'2(t)

For a second condition set terms with u' equal to zero;
u'1(t)y1(t) + u'2(t)y2(t) = 0

This gives;
y' = u1(t)y'1(t) + u2(t)y'2(t)

Differentiate again and plug y, y', and y'' into the original equation;

y'' = u1(t)y''1(t) + u2(t)y''2(t) + u'1(t)y'1(t) + u'2(t)y'2(t)

[u1(t)y''1(t) + u2(t)y''2(t) + u'1(t)y'1(t) + u'2(t)y'2(t)] + p(t)[u1(t)y'1(t) + u2(t)y'2(t)] + q(t)[u1(t)y1(t) + u2(t)y2(t)] = g(t)

Rearranging;
u1(t)[y''1(t) + p(t)y'1(t) + q(t)y1(t)] + u2(t)[y''2(t) + p(t)y'2(t) + q(t)y2(t)] + u'1(t)y'1(t) + u'2(t)y'2(t) = g(t)

Since both y1 and y2 are solutions to the corresponding homogeneous equation, the expressions in brackets equal zero, leaving;

u'1(t)y'1(t) + u'2(t)y'2(t) = g(t)

Using this equation and the earlier condition;
u'1(t)y1(t) + u'2(t)y2(t) = 0

substitution and integration can be used to find u1 and u2, where W(y1,y2)(t) = y1(t)y'2(t) - y'1(t)y2(t) is the Wronskian.

u'1(t) = -y2(t)g(t)/W(y1,y2)(t)

u'2(t) = y1(t)g(t)/W(y1,y2)(t)

u1(t) = -&int;(y2(t)g(t)/W(y1,y2)(t))dt + c1

u2(t) = &int;(y1(t)g(t)/W(y1,y2)(t))dt + c2

Where then;
Y(t) = -y1(t)&int;(y2(t)g(t)/W(y1,y2)(t))dt + y2(t)&int;(y1(t)g(t)/W(y1,y2)(t))dt

and the general solution is;
y = c1y1(t) + c2y2(t) + Y(t)

I realize this is a little confusing to follow, so let me sum up what you really need to know without deriving everything every time.

First;
You must find the solutions y1 and y2 of the homogeneous equation.

Then use these two formulas:

u'1y1 + u'2y2 = 0
u'1y'1 + u'2y'2 = g(t)

If there is a coefficient in front of the y'' term, it must be divided out to give the correct g(t).

From this system of equations, where y and y' are known, u'1 and u'2 can be found, then integrated.
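The whole recipe can be exercised end to end. The sketch below (my own, reusing the worked example y'' + 4y = 3csc(t), where y1 = cos(2t), y2 = sin(2t), and W(y1,y2)(t) = 2) computes u1 and u2 by numerical quadrature straight from the general formulas, then checks the result against the hand-integrated antiderivatives;

```python
import math

# fundamental homogeneous solutions, forcing term, and Wronskian
y1 = lambda t: math.cos(2 * t)
y2 = lambda t: math.sin(2 * t)
g  = lambda t: 3 / math.sin(t)
W  = 2.0

def simpson(f, a, b, n=2000):
    # composite Simpson's rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

t0, t = 0.5, 1.2   # base point and evaluation point, both inside (0, pi)
u1 = simpson(lambda s: -y2(s) * g(s) / W, t0, t)
u2 = simpson(lambda s:  y1(s) * g(s) / W, t0, t)
Y_quad = u1 * y1(t) + u2 * y2(t)

# hand-integrated antiderivatives from the earlier example
U1 = lambda s: -3 * math.sin(s)
U2 = lambda s: 1.5 * math.log(1 / math.sin(s) - math.cos(s) / math.sin(s)) + 3 * math.cos(s)
Y_hand = (U1(t) - U1(t0)) * y1(t) + (U2(t) - U2(t0)) * y2(t)

print(abs(Y_quad - Y_hand) < 1e-8)  # True
```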
 
  • #46
DDy + Dy + y = 0 for all values of y

x2 + px + q cannot be zero for all values of x.

Although it's possible if y = aekx.

y = aekx
Dy = akekx.
DDy = ak2ekx.

DDy + Dy + y = ak2ekx + akekx + aekx = a(k2 + k + 1)ekx

If (k2 + k + 1) = 0 then DDy + Dy + y = 0
 
  • #47
Series solution of 2nd Order Linear Equations: Ordinary Point

Awesome, I feel special now that this was made a sticky.

I haven't been around in a while; I was really busy during finals and then kinda crawled into a hole for a month during break. But for now I am back. We will see how long it lasts this time.

I changed my format of doing this: instead of writing everything out on here, I have decided to use a word processor and Equation Editor to create a document that you can then download. I hope no one will have any problems this way.

If you dare...
 
  • #48
ExtravagantDreams said:
However, a question, does anyone know an easier way for writing math on the computer and one that looks less confusing. I know I will have difficulty finding some things, especially subscripts and superscripts. Anyone know a better way to denote these?

My understanding is that PF postings will support LaTeX formatting. I haven't used it yet, but go to: https://www.physicsforums.com/showthread.php?t=8997
for instructions.
 
  • #49
Does anyone know of any good Intro to Diffy Q books? Or just Diffy Q books in general? Thanks...
 
  • #50
Ebolamonk3y said:
Does anyone know of any good Intro to Diffy Q books? Or just Diffy Q books in general? Thanks...


I'm using the book Differential Equations: 2nd Edition by Blanchard, Devaney, Hall.
 
