# The auxiliary equation. Where did it come from?

#### dionysian

In my DE book Zill 7th ed it says that a solution to the equation

$$ay^{''} + by^{'} + cy = 0$$

can be found by "trying" a solution $$y = e^{mx}$$

I then see how you take the first and second derivatives of $$y = e^{mx}$$, plug them into the equation, and get the auxiliary equation $$am^2 + bm + c = 0$$. But why in the world do we "try" the solution $$y = e^{mx}$$ in the first place? I can see that choosing $$y = e^{mx}$$ conveniently lets us solve for $$m$$, but how do we know that the solution is of the form $$y = e^{mx}$$ in the first place?
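In symbols, the step I can follow is the substitution itself:

```latex
\begin{aligned}
y = e^{mx}, \qquad y' &= m\,e^{mx}, \qquad y'' = m^2 e^{mx},\\
ay'' + by' + cy &= \left(am^2 + bm + c\right) e^{mx} = 0 .
\end{aligned}
```

Since $e^{mx}$ is never zero, it divides out, leaving $am^2 + bm + c = 0$; it's the choice of $e^{mx}$ to begin with that puzzles me.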

P.S. This is my first post and I don't know if my LaTeX is showing up properly (it doesn't seem to be in the preview of my post). If it is not showing up properly, can someone please tell me what I am doing wrong? Here is a sample of what I am entering for one of my expressions: y = e^{mx}.


#### Chris Hillman

Why try?

> In my DE book Zill 7th ed it says that a solution to the equation
>
> $$ay^{''} + by^{'} + cy = 0$$
>
> can be found by "trying" a solution $$y = e^{mx}$$
What section? Is it called something like "Homogeneous Linear Equations with Constant Coefficients"? (This is where auxiliary equations appear in introductory ODE courses.)

If so, this is an inspired bolt from the blue. Sure, few students would think of assuming $y = e^{mx}$ just to see what happens; that step takes creativity and insight. After that, though, it is smooth sailing. So you aren't expected to see why someone thought of the first step, only to understand the steps that come after it; the first step is by far the hardest.

> dont know if my latex is showing up properly (it doesnt seem to be in the preview of my post).
That's right, it won't show up in preview. You can use "itex" and "/itex" (inside square brackets) instead of "tex" and "/tex", when you want to render an in-line expression.

#### dionysian

Yes, it is from the section in my book entitled "Homogeneous Linear Equations with Constant Coefficients".

So is it a correct solution just because $$y = e^{mx}$$ satisfies the equation? And if so, couldn't I in "theory" find some other function that I could plug into the equation, factor out, and solve for the function's coefficient, getting another solution? I can see how $$y = e^{mx}$$ may be the only function that satisfies these conditions, but if I could find such a function, wouldn't it also be a solution to the equation?


#### HallsofIvy

Homework Helper
Not entirely "inspired". If you have a linear d.e. with constant coefficients, say Ay'' + By' + Cy = 0, then y and its derivatives have to cancel out. In order to do that, they must be the same kind of function. That is, we know y can't be a logarithm, for example, because the derivatives of logarithms aren't logarithms and so can't cancel y. The obvious thing to try as a function "all of whose derivatives are the same kind of function" is an exponential (although you quickly learn that sine and cosine, as well as polynomials, may also work).
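To make this concrete, here is a quick numerical sketch (the coefficients A = 1, B = 3, C = 2 are chosen only for illustration, not taken from Zill): a finite-difference check shows that an exponential makes the left-hand side cancel, while a logarithm cannot.

```python
import math

def residual(f, x, h=1e-5):
    """Finite-difference estimate of y'' + 3y' + 2y at the point x."""
    d1 = (f(x + h) - f(x - h)) / (2 * h)            # central first derivative
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2    # central second derivative
    return d2 + 3 * d1 + 2 * f(x)

# e^{-x} comes from the root m = -1 of m^2 + 3m + 2 = 0
print(residual(lambda x: math.exp(-x), 1.0))   # ~0: the terms cancel
# a logarithm cannot work: its derivatives are not logarithms
print(residual(math.log, 1.0))                 # clearly nonzero
```

The exponential's residual is zero up to finite-difference error; the logarithm's is not, which is exactly the "same kind of function" point.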

#### Chris Hillman

Hi, HallsofIvy,

Agreed, but you confirmed my point: the first step requires insight. I was trying to reassure dionysian that Zill doesn't expect his readers to consider it stunningly obvious that assuming $y=\exp(mx)$ will help!

And hi again, dionysian,

I think your first question is: "how can I check whether a specific function $y(x)$ satisfies a specific differential equation?". The answer is: plug it into both sides and see whether it violates the required equality.

I think your second question is: "suppose I have found some family of solutions to a differential equation; how can I be sure I have found all the solutions?" In general, this can be tricky, but in the case of homogeneous linear ODEs with constant coefficients, your book probably discusses the appropriate picture. Namely, the solution space has a basis (in the vector-space sense): a finite list of functions, each a solution of the ODE, which together are "independent" and "complete", so that every solution can be expressed as a linear combination of these basic functions (usually called "fundamental solutions" in this context).
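As an illustrative sketch (the equation $y'' + 3y' + 2y = 0$ and its fundamental solutions $e^{-x}$, $e^{-2x}$ are my own example, not from Zill): the initial conditions determine the coefficients of the linear combination, and the resulting closed form agrees with a direct numerical integration.

```python
import math

def combo_solution(A, B):
    """Fit y = c1*e^{-x} + c2*e^{-2x} to y(0) = A, y'(0) = B:
    solving c1 + c2 = A and -c1 - 2*c2 = B gives the pair below."""
    c1, c2 = 2 * A + B, -(A + B)
    return lambda x: c1 * math.exp(-x) + c2 * math.exp(-2 * x)

def rk4(A, B, x_end, n=2000):
    """Integrate the first-order system y' = u, u' = -3u - 2y with classic RK4."""
    h = x_end / n
    y, u = A, B
    for _ in range(n):
        def f(y, u):  # right-hand side of the system
            return u, -3 * u - 2 * y
        k1 = f(y, u)
        k2 = f(y + h/2 * k1[0], u + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], u + h/2 * k2[1])
        k4 = f(y + h * k3[0], u + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        u += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

A, B = 1.5, -0.25
print(combo_solution(A, B)(2.0), rk4(A, B, 2.0))  # the two values agree closely
```

Whatever initial data you pick, the combination of the two fundamental solutions reproduces the numerically integrated solution, which is the "spanning" claim in miniature.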

Hope this helps!

#### dextercioby

Homework Helper
> In my DE book Zill 7th ed it says that a solution to the equation
>
> $$ay^{''} + by^{'} + cy = 0$$
>
> can be found by "trying" a solution $$y = e^{mx}$$
>
> I then see how you take the first and second derivative of $$y = e^{mx}$$ and plug it into the equation and get the auxilary equation $$am^2 + bm + c = 0$$. But why in the world do we "try" the solution $$y = e^{mx}$$ in the first place? I can see that choosing $$y = e^{mx}$$ conveintly lets us solve for $$m$$ but how do we know that the solution is of the form $$y = e^{mx}$$ in the first place.
Here are some steps:

1. Can you show that the ODE you wrote admits the zero polynomial P(x) = 0 as a solution, and that ALL polynomial solutions of your ODE must be identically 0?
2. Consider a = 0 and solve the resulting first-order ODE by separating variables. See if you obtain a contradiction with 1.
3. Take for simplicity a = 1, b = 0 and c = -1. Write the 2nd-order ODE

y'' = y as a system of 1st-order ODEs by setting u = y':

y' = u and u' = y. Solve the system and then get the solution of the simplified ODE y'' = y.
4. Do 3. for a = 1, b = 0 and c = 1.

5. Now take b = 0 and $a\neq 0$. The resulting ODE is

$$y''=-\frac{c}{a}y$$

Write down the simple system of first-order ODEs which results and find its solutions. Then you'll have solutions for the b = 0 case.

6. Now assume b is different from 0. Can you find a simple system of ODEs that will lead you to the solution?
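For step 3, a minimal numerical sketch (my own check, not part of the exercise): with $u = y'$ the system $y' = u$, $u' = y$ can be marched forward with Euler's method, and starting from $y(0)=1$, $y'(0)=0$ it reproduces $\cosh x = \tfrac{1}{2}(e^x + e^{-x})$, a combination of the two exponentials.

```python
import math

def euler(x_end, n=100000):
    """March the first-order system y' = u, u' = y (i.e. y'' = y)
    forward from y(0) = 1, y'(0) = 0 with explicit Euler steps."""
    h = x_end / n
    y, u = 1.0, 0.0
    for _ in range(n):
        y, u = y + h * u, u + h * y  # simultaneous update from old values
    return y

print(euler(1.0), math.cosh(1.0))  # the numeric march approaches cosh(1)
```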

Daniel.

#### dionysian

> Hi, HallsofIvy,
>
> Agreed, but you confirmed my point: the first step requires insight. I was trying to reassure dionysian that Zill doesn't expect his readers to consider it stunningly obvious that assuming $y=\exp(mx)$ will help!
>
> And hi again, dionysian,
>
> I think your first question is: "how can I check whether a specific function $y(x)$ satisfies a specific differential equation?". The answer is: plug it into both sides and see whether it violates the required equality.
>
> I think your second question is: "suppose I have found some family of solutions to a differential equation; how can I be sure I have found all the solutions?" In general, this can be tricky, but in the case of "homogeneous linear ODES with constant coefficients", your book probably discusses the appropriate picture.
>
> Hope this helps!
Chris Hillman,

Thank you for restating my question; I now see what I was really asking. Now, my book does give a theorem about a fundamental set of solutions, and the gist of it goes something like this: "a set of n linearly independent solutions of the homogeneous nth-order differential equation is said to be a fundamental set of solutions". I understand the part about linear independence of the functions (the Wronskian must never be zero), but I don't see an explicit theorem or proof that the solution set is "complete".

Now, you mentioned the solution space; I like this. Does this "kind of mean" that the solution set is a basis, or is it a full-fledged basis? I would think it would be a basis of the solution space. And if it is, that would mean the dimension of the solution space for second-order linear homogeneous equations with constant coefficients is 2. Is that correct?

And if it is correct, is there a theorem relating the order of the equation to the dimension of the solution space? For example, a third-order equation will have a 3-dimensional solution space, etc.? It seems to me that if there is a theorem about the dimension of the solution space of the DE, and we are able to find $$n$$ linearly independent vectors in an $$n$$-dimensional solution space, we could say that the fundamental set is complete. Am I on the right track here, or is this a bunch of crap?

Thank you again for the help, I very much appreciate it. I am reviewing and trying to make sense of all the theory and methods for solving DEs in my DE book. I finished my DE class last year and was able to memorize all the methods I needed to solve the equations and pass my tests, but most of it still seems mysterious to me and that bugs me. Is there any book out there anyone could recommend that would help shed light on the methods such as “undetermined coefficients”, “variation of parameters” ,”exact equations”, “series solution to DE” and where they came from?


#### HallsofIvy

Homework Helper
> Chris Hillman,
>
> Thank you for restating my question I know see what I was really asking. Now my book does give a theorem about a fundamental set of solutions and the jist of it goes something like this "a set of n linearly independent solutions of the homogenous nth order diff equation is said to be a fundamental set of solutions". I understand the part about linearly independence of the functions "the wronksian has to never equal zero", but I don't see an explicit theorem or proof that the solution set is "complete".
>
> Now you mentioned the solution space, I like this, does this "kind of mean" that the solution set is a basis or is it a full fledge basis? I would think that it would be a basis of the solution space. And if it is that would mean that the dimension of the solution space for the "second order linear homogenous equations with const coeff" is 2, is that correct?
Yes. It is not terribly difficult to show that the set of all solutions to a linear, homogeneous, nth-order differential equation forms an n-dimensional vector space. If you have n independent solutions then they must form a basis and so also "span" the space of solutions: every solution can be written as a linear combination of them. By the way, while it is true that a set of n solutions is independent if and only if the Wronskian is not 0, the definition of independence (which you should remember from linear algebra) is that no vector in the set can be written as a linear combination of the others.
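For instance (a hypothetical pair, $e^{-x}$ and $e^{-2x}$ from the example equation $y'' + 3y' + 2y = 0$, not from the book): their Wronskian works out to $-e^{-3x}$, which is never zero, so the pair is independent.

```python
import math

def wronskian(x):
    """Wronskian W = y1*y2' - y1'*y2 of y1 = e^{-x}, y2 = e^{-2x}."""
    y1, y1p = math.exp(-x), -math.exp(-x)
    y2, y2p = math.exp(-2 * x), -2 * math.exp(-2 * x)
    return y1 * y2p - y1p * y2

for x in (0.0, 1.0, 5.0):
    print(x, wronskian(x))  # equals -e^{-3x}: negative, never zero
```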

> And if it is correct is there a theorem about the order of the equation and the dimensions of the solution space? For example, a third order equation will have a 3 dimensional solution space... etc? Because it seems to me that if there is theorem about the dimensions of a solution space of the DE and we able to find $$n$$ linear independent vectors of a $$n$$ dimensional solution space we could say that the fundamental set is complete. I am I on the right track here or is this a bunch of crap?
Yes, that is exactly right. As I said before, the set of all solutions to a linear homogeneous nth-order differential equation forms an n-dimensional vector space.

Here's an outline of the proof (which, in detail, is very deep). First rewrite the equation as a system of n first-order equations, by assigning a new variable name to each derivative. For example, if your differential equation is $y^{(n)} = f(x, y, y', y'', \dots)$, let $u = y'$, $v = y''$, ..., $w = y^{(n-1)}$, so the equation becomes $w' = f(x, y, u, v, \dots, w)$ while we also have $y' = u$, $u' = v$, etc. Now rewrite that as a single vector equation $\frac{dY}{dx}= F(x, Y)$ where Y is the vector having y, u, v, etc. as components. In particular, if the original equation is a homogeneous linear nth-order differential equation, that final equation can be written $\frac{dY}{dx}= AY$, where A is an n by n matrix whose components depend on x only and Y is an n-dimensional column vector. The "deep" part is recognizing that the set of all such vectors has much the same properties as the real numbers: it is a complete metric space. That means the Banach fixed-point principle, which is used in Picard's existence and uniqueness theorem for initial value problems, is valid.
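A small sketch of the rewriting step for the constant-coefficient case (the coefficients a = 1, b = 3, c = 2 are chosen for illustration): the matrix A of the first-order system, often called the companion matrix, has the roots of the auxiliary equation as its eigenvalues, which ties the two pictures together.

```python
import cmath

# For a y'' + b y' + c y = 0, setting u = y' gives Y' = A Y with
# A = [[0, 1], [-c/a, -b/a]]; its eigenvalues are exactly the roots
# of the auxiliary equation a m^2 + b m + c = 0.
a, b, c = 1.0, 3.0, 2.0

# Roots of the auxiliary equation via the quadratic formula.
disc = cmath.sqrt(b * b - 4 * a * c)
aux_roots = sorted([(-b + disc) / (2 * a), (-b - disc) / (2 * a)],
                   key=lambda z: z.real)

# Eigenvalues of the 2x2 companion matrix from its trace and determinant.
tr, det = -b / a, c / a
d2 = cmath.sqrt(tr * tr - 4 * det)
eigs = sorted([(tr + d2) / 2, (tr - d2) / 2], key=lambda z: z.real)

print(aux_roots, eigs)  # the two lists agree
```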

Picard's existence and uniqueness theorem essentially says that as long as f(x,y) is continuous in both variables and "Lipschitz" in y on some region around $(x_0, y_0)$, there exists a unique solution to the differential equation y' = f(x,y) satisfying $y(x_0) = y_0$. Use that to show that there exists a unique function $y_1(x)$ satisfying the differential equation and the initial conditions $y(x_0)=1$, $y'(x_0)=0$, $y''(x_0)=0$, ..., $y^{(n-1)}(x_0)=0$. Then there exists another function $y_2(x)$ satisfying the differential equation and $y(x_0)=0$, $y'(x_0)=1$, $y''(x_0)=0$, ..., $y^{(n-1)}(x_0)=0$. There exists yet a third function satisfying the differential equation and $y(x_0)=0$, $y'(x_0)=0$, $y''(x_0)=1$, ..., $y^{(n-1)}(x_0)=0$. We can continue until we have an nth function satisfying the differential equation and $y(x_0)=0$, $y'(x_0)=0$, $y''(x_0)=0$, ..., $y^{(n-1)}(x_0)=1$. Since their values differ at $x_0$, it is easy to show that these functions are independent. Also, given any function y(x) satisfying the differential equation with $y(x_0)=A$, $y'(x_0)=B$, $y''(x_0)=C$, ..., $y^{(n-1)}(x_0)=Z$, it is easy to show that
$y(x) = A y_1(x) + B y_2(x) + C y_3(x) + \dots + Z y_n(x)$: that is, this set of "fundamental solutions" spans the set of all solutions. Since we have n functions (vectors) that are independent and span the space of solutions, the space is n-dimensional.
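For the simple equation $y'' = y$ with $x_0 = 0$, this construction gives a concrete instance (my own example, not from the book):

```latex
\begin{aligned}
y_1(x) &= \cosh x, & y_1(0) &= 1, & y_1'(0) &= 0,\\
y_2(x) &= \sinh x, & y_2(0) &= 0, & y_2'(0) &= 1,
\end{aligned}
\qquad\text{so}\quad
y(0)=A,\ y'(0)=B \;\Longrightarrow\; y(x) = A\cosh x + B\sinh x .
```

Every solution of $y''=y$ is such a combination, which is the spanning argument in its smallest case.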

> Thank you again for the help, I very much appreciate it. I am reviewing and trying to make sense of all the theory and methods for solving DEs in my DE book. I finished my DE class last year and was able to memorize all the methods I needed to solve the equations and pass my tests, but most of it still seems mysterious to me and that bugs me. Is there any book out there anyone could recommend that would help shed light on the methods such as "undetermined coefficients", "variation of parameters", "exact equations", "series solution to DE" and where they came from?