The auxiliary equation. Where did it come from?

  • Thread starter dionysian
In summary, the conversation discusses the process of solving a homogeneous linear differential equation with constant coefficients. The solution y = e^{mx} is found by "trying" it and plugging it into the equation to obtain the auxiliary equation. The reason for "trying" this form is not immediately obvious, but it is a convenient choice that reduces the differential equation to an algebraic equation in m. The conversation also touches on the possibility of finding other solutions, and ends with a discussion of how to check that all solutions have been found.
  • #1
dionysian
In my DE book Zill 7th ed it says that a solution to the equation

[tex] ay^{''} + by^{'} + cy = 0
[/tex]

can be found by "trying" a solution [tex] y = e^{mx} [/tex]

I then see how you take the first and second derivatives of [tex] y = e^{mx} [/tex] and plug them into the equation to get the auxiliary equation [tex] am^2 + bm + c = 0 [/tex]. But why in the world do we "try" the solution [tex] y = e^{mx} [/tex] in the first place? I can see that choosing [tex] y = e^{mx} [/tex] conveniently lets us solve for [tex] m [/tex], but how do we know that the solution is of the form [tex] y = e^{mx} [/tex] in the first place?
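As a quick sanity check on that substitution (an illustrative SymPy sketch of my own, not from Zill), plugging y = e^{mx} into the equation and dividing out the nonzero factor e^{mx} leaves exactly the auxiliary equation:

```python
import sympy as sp

# Symbols for the variable, the exponent, and the coefficients
x, m, a, b, c = sp.symbols('x m a b c')

y = sp.exp(m * x)  # the trial solution y = e^{mx}

# Substitute y into a*y'' + b*y' + c*y
expr = a * y.diff(x, 2) + b * y.diff(x) + c * y

# e^{mx} is never zero, so we may divide it out;
# what remains is the auxiliary polynomial a*m**2 + b*m + c
auxiliary = sp.expand(expr / y)
print(auxiliary)
```

Requiring that polynomial to vanish is exactly the condition for e^{mx} to be a solution.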

ps. This is my first post and I don't know if my LaTeX is showing up properly (it doesn't seem to be in the preview of my post). If my LaTeX is not showing up properly, can someone please tell me what I am doing wrong? Here is a sample of what I am entering for one of my expressions: y = e^{mx}.
 
  • #2
Why try?

dionysian said:
In my DE book Zill 7th ed it says that a solution to the equation

[tex] ay^{''} + by^{'} + cy = 0
[/tex]

can be found by "trying" a solution [tex] y = e^{mx} [/tex]

What section? Is it called something like "Homogeneous Linear Equations with Constant Coefficients"? (This is where auxiliary equations appear in introductory ODE courses.)

If so, this is an inspired bolt from the blue. Sure enough, few students would think of assuming [itex] y = e^{mx} [/itex] just to see what happens; that step takes creativity and insight. After that, though, it is smooth sailing. So you aren't expected to see how anyone thought of the first step, which is by far the hardest; you only need to understand the steps that come after it.

dionysian said:
don't know if my latex is showing up properly (it doesn't seem to be in the preview of my post).

That's right, it won't show up in preview. You can use "itex" and "/itex" (inside square brackets) instead of "tex" and "/tex", when you want to render an in-line expression.
 
  • #3
Yes, it is from the section in my book entitled "Homogeneous Linear Equations with Constant Coefficients".

So is it a correct solution just because [tex] y = e^{mx} [/tex] satisfies the equation? And if so, couldn't I in theory find some other function that I could plug into the equation, factor out [tex] x [/tex], solve for the function's coefficient, and get another solution? I can see how [tex] y = e^{mx} [/tex] may be the only function that satisfies these conditions, but if I could find such a function, wouldn't it also be a solution to the equation?

BTW
Thank you for the reply.
 
  • #4
Not entirely "inspired". If you have a linear d.e. with constant coefficients, say ay'' + by' + cy = 0, then y and its derivatives have to cancel out. In order to do that, they must be the same kind of function. That is, we know y can't be a logarithm, for example, because the derivatives of logarithms aren't logarithms and so can't cancel y. The obvious thing to try as a function "all of whose derivatives are the same kind of function" is an exponential (although you quickly learn that sine and cosine, as well as polynomials, may also work).
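That point can be seen directly by differentiating (a quick SymPy check, purely illustrative): every derivative of e^{mx} is a scalar multiple of e^{mx}, so the terms of the ODE can cancel, while the derivative of log x is a different kind of function entirely.

```python
import sympy as sp

x, m = sp.symbols('x m')

# Derivatives of e^{mx} are again multiples of e^{mx} ...
print(sp.exp(m * x).diff(x))      # m*exp(m*x)
print(sp.exp(m * x).diff(x, 2))   # m**2*exp(m*x)

# ... whereas the derivative of log(x) is not a logarithm,
# so log terms could never cancel in a*y'' + b*y' + c*y = 0
print(sp.log(x).diff(x))          # 1/x
```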
 
  • #5
Hi, HallsofIvy,

Agreed, but you confirmed my point: the first step requires insight. I was trying to reassure dionysian that Zill doesn't expect his readers to consider it stunningly obvious that assuming [itex]y=\exp(mx)[/itex] will help!

And hi again, dionysian,

I think your first question is: "how can I check whether a specific function [itex]y(x)[/itex] satisfies a specific differential equation?". The answer is: plug it into both sides and see whether it violates the required equality.

I think your second question is: "suppose I have found some family of solutions to a differential equation; how can I be sure I have found all the solutions?" In general, this can be tricky, but in the case of homogeneous linear ODEs with constant coefficients, your book probably discusses the appropriate picture. Namely, the solution space has a "basis" in the sense of linear algebra: a finite list of functions, each a solution of the ODE, which is "independent" and "complete", so that every solution can be expressed as a linear combination of these basic functions (usually called "fundamental solutions" in this context).
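For a concrete second-order case (an illustrative SymPy sketch of my own; the particular equation is not from the thread), dsolve returns exactly such a basis: every solution of y'' + 3y' + 2y = 0 is a linear combination of the two fundamental solutions e^{-x} and e^{-2x}:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# A homogeneous linear ODE with constant coefficients
ode = sp.Eq(y(x).diff(x, 2) + 3 * y(x).diff(x) + 2 * y(x), 0)

# The general solution is spanned by two fundamental solutions,
# e.g. y(x) = C1*exp(-2*x) + C2*exp(-x) (constant labels may vary)
sol = sp.dsolve(ode, y(x))
print(sol)
```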

Hope this helps!
 
  • #6
dionysian said:
In my DE book Zill 7th ed it says that a solution to the equation

[tex] ay^{''} + by^{'} + cy = 0
[/tex]

can be found by "trying" a solution [tex] y = e^{mx} [/tex]

I then see how you take the first and second derivatives of [tex] y = e^{mx} [/tex] and plug them into the equation to get the auxiliary equation [tex] am^2 + bm + c = 0 [/tex]. But why in the world do we "try" the solution [tex] y = e^{mx} [/tex] in the first place? I can see that choosing [tex] y = e^{mx} [/tex] conveniently lets us solve for [tex] m [/tex], but how do we know that the solution is of the form [tex] y = e^{mx} [/tex] in the first place?

ps. This is my first post and I don't know if my LaTeX is showing up properly (it doesn't seem to be in the preview of my post). If my LaTeX is not showing up properly, can someone please tell me what I am doing wrong? Here is a sample of what I am entering for one of my expressions: y = e^{mx}.

Here are some steps:

1. Can you show that the ODE you wrote admits the zero polynomial P(x) = 0 as a solution, and that ALL polynomial solutions of your ODE must be identically 0?
2. Consider a=0 and solve the ODE by separating variables. See if you obtain a contradiction with 1.
3. Take for simplicity a = 1, b = 0 and c = -1. Write the 2nd-order ODE

y'' = y as a system of 1st-order ODEs by setting v = y':

y' = v and v' = y. Solve the system and then get the solution of the simplified ODE y'' = y.
4. Do 3. for c=1, a=1 and b=0.

5. Take now b=0 and [itex] a\neq 0 [/itex]. The ODE resulting is

[tex] y''=-\frac{c}{a}y [/tex]

Compute the simple system of ODEs which results and find its solutions. Then you'll have solutions for the b = 0 case.

6. Assume now that b is different from 0. Can you find a simple system of ODEs that will lead you to your solution?

Daniel.
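The steps above can also be carried out numerically (a sketch of my own using NumPy/SciPy; the matrices are illustrative, not from the post). Writing y'' = y as the first-order system Y' = AY with Y = (y, y') and solving it with the matrix exponential recovers the familiar cosh/sinh combination of e^x and e^{-x}:

```python
import numpy as np
from scipy.linalg import expm

# y'' = y with Y = (y, y') becomes Y' = A Y
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

t = 0.7
Y0 = np.array([1.0, 0.0])   # initial data y(0) = 1, y'(0) = 0

# Solution of the linear system: Y(t) = expm(A t) Y(0)
Yt = expm(A * t) @ Y0

# The y-component is cosh(t) = (e^t + e^{-t}) / 2
print(Yt[0], np.cosh(t))
```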
 
  • #7
Chris Hillman said:
Hi, HallsofIvy,

Agreed, but you confirmed my point: the first step requires insight. I was trying to reassure dionysian that Zill doesn't expect his readers to consider it stunningly obvious that assuming [itex]y=\exp(mx)[/itex] will help!

And hi again, dionysian,

I think your first question is: "how can I check whether a specific function [itex]y(x)[/itex] satisfies a specific differential equation?". The answer is: plug it into both sides and see whether it violates the required equality.

I think your second question is: "suppose I have found some family of solutions to a differential equation; how can I be sure I have found all the solutions?" In general, this can be tricky, but in the case of "homogeneous linear ODES with constant coefficients", your book probably discusses the appropriate picture. Namely, there exists a kind of "basis" for the solution space (as in, a basis for some vector space) which consists of a finite list of functions which are each solutions of the ODE and which together form a basis, i.e. a finite set of functions which is "independent" and "complete", so that every solution can be expressed as a linear combination of these basic functions (usually called "fundamental solutions" in this context).

Hope this helps!

Chris Hillman,

Thank you for restating my question; I now see what I was really asking. Now, my book does give a theorem about a fundamental set of solutions, and the gist of it goes something like this: "a set of n linearly independent solutions of the homogeneous nth-order differential equation is said to be a fundamental set of solutions". I understand the part about linear independence of the functions (the Wronskian can never equal zero), but I don't see an explicit theorem or proof that the solution set is "complete".
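The Wronskian criterion mentioned here can be computed explicitly (a SymPy sketch; the example is my own). For y1 = e^{m1 x} and y2 = e^{m2 x}, the Wronskian is (m2 - m1) e^{(m1+m2)x}, which vanishes identically exactly when m1 = m2; in other words, the two exponentials are independent precisely when the auxiliary roots are distinct:

```python
import sympy as sp

x, m1, m2 = sp.symbols('x m1 m2')

y1 = sp.exp(m1 * x)
y2 = sp.exp(m2 * x)

# Wronskian W = y1*y2' - y2*y1'
W = sp.simplify(sp.wronskian([y1, y2], x))
print(W)  # equivalent to (m2 - m1)*exp((m1 + m2)*x)
```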

Now, you mentioned the solution space, and I like this: does this "kind of mean" that the solution set is a basis, or is it a full-fledged basis? I would think that it would be a basis of the solution space. And if it is, that would mean that the dimension of the solution space for second-order linear homogeneous equations with constant coefficients is 2. Is that correct?

And if that is correct, is there a theorem relating the order of the equation to the dimension of the solution space? For example, a third-order equation will have a 3-dimensional solution space, etc.? It seems to me that if there is a theorem about the dimension of the solution space of the DE, and we are able to find [tex] n [/tex] linearly independent vectors of an [tex] n [/tex]-dimensional solution space, we could say that the fundamental set is complete. Am I on the right track here, or is this a bunch of crap?


Thank you again for the help; I very much appreciate it. I am reviewing and trying to make sense of all the theory and methods for solving DEs in my DE book. I finished my DE class last year and was able to memorize all the methods I needed to solve the equations and pass my tests, but most of it still seems mysterious to me, and that bugs me. Is there any book out there anyone could recommend that would help shed light on methods such as "undetermined coefficients", "variation of parameters", "exact equations", and "series solutions to DEs", and where they came from?
 
  • #8
dionysian said:
Chris Hillman,

Thank you for restating my question; I now see what I was really asking. Now, my book does give a theorem about a fundamental set of solutions, and the gist of it goes something like this: "a set of n linearly independent solutions of the homogeneous nth-order differential equation is said to be a fundamental set of solutions". I understand the part about linear independence of the functions (the Wronskian can never equal zero), but I don't see an explicit theorem or proof that the solution set is "complete".

Now, you mentioned the solution space, and I like this: does this "kind of mean" that the solution set is a basis, or is it a full-fledged basis? I would think that it would be a basis of the solution space. And if it is, that would mean that the dimension of the solution space for second-order linear homogeneous equations with constant coefficients is 2. Is that correct?
Yes. It is not terribly difficult to show that the set of all solutions of a linear, homogeneous, nth-order differential equation forms an n-dimensional vector space. If you have n independent solutions, then they must form a basis and so also "span" the space of solutions: every solution can be written as a linear combination of them. By the way, while it is true that a set of n solutions is independent if and only if the Wronskian is not 0, the definition of independence (which you should remember from linear algebra) is that no vector in the set can be written as a linear combination of the others.

And if that is correct, is there a theorem relating the order of the equation to the dimension of the solution space? For example, a third-order equation will have a 3-dimensional solution space, etc.? It seems to me that if there is a theorem about the dimension of the solution space of the DE, and we are able to find [tex] n [/tex] linearly independent vectors of an [tex] n [/tex]-dimensional solution space, we could say that the fundamental set is complete. Am I on the right track here, or is this a bunch of crap?
Yes, that is exactly right. As I said before, the set of all solutions of a linear homogeneous nth-order differential equation forms an n-dimensional vector space.

Here's an outline of the proof (which, in detail, is very deep). First rewrite the equation as a system of n first-order equations. You can do that by assigning a new variable name to each derivative. For example, if your differential equation is y^(n) = f(x, y, y', y'', ...), let u = y', v = y'', ..., w = y^(n-1), so the equation becomes w' = f(x, y, u, v, ..., w) while we also have y' = u, u' = v, etc. Now rewrite that as a single vector equation [itex]\frac{dY}{dx}= F(x, Y)[/itex] where Y is the vector having y, u, v, etc. as components. In particular, if the original equation is a homogeneous, linear, nth-order differential equation, that final equation can be written [itex]\frac{dY}{dx}= AY[/itex] where A is an n by n matrix whose components depend on x only and Y can be written as an n-dimensional column vector. The "deep" part is recognizing that the set of all such vectors has much the same properties as the real numbers: it is a complete metric space. That means the "Banach fixed point principle", which is used in Picard's existence and uniqueness theorem for initial value problems, is valid.

Picard's existence and uniqueness theorem essentially says that as long as f(x, y) is continuous in both variables and "Lipschitz" in y in some region around (x0, y0), there exists a unique solution of the differential equation y' = f(x, y) satisfying y(x0) = y0. Use that to show that there exists a unique function y1(x) satisfying the differential equation and the initial conditions y(x0) = 1, y'(x0) = 0, y''(x0) = 0, ..., y^(n-1)(x0) = 0. Then there exists another function y2(x) satisfying the differential equation and y(x0) = 0, y'(x0) = 1, y''(x0) = 0, ..., y^(n-1)(x0) = 0. There exists yet a third function satisfying the differential equation and y(x0) = 0, y'(x0) = 0, y''(x0) = 1, ..., y^(n-1)(x0) = 0. We can continue until we have an nth function satisfying the differential equation and y(x0) = 0, y'(x0) = 0, y''(x0) = 0, ..., y^(n-1)(x0) = 1. Since their values at x0 differ, it is easy to show that these functions are independent. Also, given any function y(x) satisfying the differential equation with y(x0) = A, y'(x0) = B, y''(x0) = C, ..., y^(n-1)(x0) = Z, it is easy to show that
y(x) = A y1(x) + B y2(x) + C y3(x) + ... + Z yn(x): that is, this set of "fundamental solutions" spans the set of all solutions. Since we have n functions (vectors) that are independent and span the space of solutions, the space is n-dimensional.
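For y'' = y this construction can be written down exactly (an illustrative SymPy check of my own, not part of the original proof): y1 = cosh x satisfies y(0) = 1, y'(0) = 0, y2 = sinh x satisfies y(0) = 0, y'(0) = 1, and any solution with y(0) = A, y'(0) = B is A y1 + B y2:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

y1 = sp.cosh(x)   # fundamental solution with y(0) = 1, y'(0) = 0
y2 = sp.sinh(x)   # fundamental solution with y(0) = 0, y'(0) = 1

# A generic linear combination of the fundamental solutions
y = A * y1 + B * y2

# y satisfies y'' = y, and its initial data are exactly (A, B)
print(sp.simplify(y.diff(x, 2) - y))       # 0
print(y.subs(x, 0), y.diff(x).subs(x, 0))  # A  B
```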


Thank you again for the help; I very much appreciate it. I am reviewing and trying to make sense of all the theory and methods for solving DEs in my DE book. I finished my DE class last year and was able to memorize all the methods I needed to solve the equations and pass my tests, but most of it still seems mysterious to me, and that bugs me. Is there any book out there anyone could recommend that would help shed light on methods such as "undetermined coefficients", "variation of parameters", "exact equations", and "series solutions to DEs", and where they came from?
 

1. What is an auxiliary equation and what is its purpose?

An auxiliary (or characteristic) equation is the polynomial equation obtained by substituting y = e^{mx} into a homogeneous linear differential equation with constant coefficients. Its purpose is to reduce the problem of solving the differential equation to the algebraic problem of finding the roots of a polynomial.

2. Where did the auxiliary equation come from?

The method of reducing a constant-coefficient linear differential equation to an algebraic equation by trying an exponential solution is usually credited to Leonhard Euler in the 18th century. The algebraic techniques for solving the resulting polynomial equations were developed earlier by mathematicians such as François Viète, René Descartes, and Isaac Newton.

3. How is the auxiliary equation used in solving polynomial equations?

The roots of the auxiliary equation are found using the quadratic formula (for second-order equations) or other root-finding methods. Each distinct root m contributes a solution e^{mx}, and the general solution of the differential equation is a linear combination of these.
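As a tiny worked example (my own, using NumPy): the auxiliary equation of y'' + 3y' + 2y = 0 is m^2 + 3m + 2 = 0, whose roots -1 and -2 give the solutions e^{-x} and e^{-2x}:

```python
import numpy as np

# Auxiliary equation m^2 + 3m + 2 = 0 for y'' + 3y' + 2y = 0
coeffs = [1, 3, 2]          # a, b, c
roots = np.roots(coeffs)

print(np.sort(roots))       # roots -2 and -1
```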

4. Can the auxiliary equation be used for all types of polynomial equations?

An nth-order homogeneous linear equation with constant coefficients has an auxiliary equation of degree n, so the method works at any order. For higher-degree auxiliary equations, however, finding the roots may itself require factoring tricks or numerical methods.

5. Are there any limitations or drawbacks to using the auxiliary equation?

The method applies only to linear equations with constant coefficients, and repeated or complex roots require extra care (solutions of the form x e^{mx}, or sines and cosines multiplied by exponentials). For other types of equations, different techniques must be used.
