How did Euler know this?

  • #1

Main Question or Discussion Point

In the general DE form, y"+y=0 and where the initial conditions are y(0)=2 and y'(0)=0, Euler realized that y(x)=e^ix+e^ix. How did he know that it's a cosine graph when there's no indication in the equation that any solution is possible? y= c1+c2=2 but y" gives 0.
 

Answers and Replies

  • #2
arildno
Science Advisor
Homework Helper
Gold Member
Dearly Missed
Are you asking how he understood that the complex exponential is related to the trigonometric functions?

Look at the power series of the exponential, and see that, with rearrangement, we must have:
[tex]e^{ix}=\cos(x)+i\sin(x)[/tex]
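For instance, a minimal sketch of that rearrangement, grouping the even and odd powers of ix and using i^2 = -1:

[tex]e^{ix}=\sum_{n=0}^{\infty}\frac{(ix)^n}{n!}=\left(1-\frac{x^2}{2!}+\frac{x^4}{4!}-\cdots\right)+i\left(x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots\right)=\cos(x)+i\sin(x)[/tex]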
 
  • #3
The power series for that identity does make sense, but that really wasn't what I was asking. I'm wondering how he knew that y = e^ix + e^-ix would be of use for y"+y=0.
 
  • #4
Well, presumably he first noticed separately that e^ix and e^-ix work. He may have found this simply by trying things, but more likely, he realized that the equation says that y''=-y, which says that the second derivative is proportional to the original function. One obvious thing to try would be a function whose first derivative is also proportional to itself (since this case is completely understood). The only function whose derivative is proportional to itself is e^ax, so we can see that an obvious guess is e^ax where a^2=-1, which gives us the two functions he found.
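A short sketch of that guess, spelled out: substituting y = e^(ax) into the equation gives

[tex]y''+y=a^2e^{ax}+e^{ax}=(a^2+1)e^{ax}=0\quad\Rightarrow\quad a^2=-1\quad\Rightarrow\quad a=\pm i[/tex]

which forces exactly the two exponentials e^(ix) and e^(-ix).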
 
  • #5
mathwonk
Science Advisor
Homework Helper
assume that y'' + y = 0 has a solution which is a power series. then use the equation, plus perhaps the assumption that either y(0)=1 and y'(0)=0, or vice versa, to deduce the form of the power series.
 
  • #6
Assuming we already know that the solution to

y' = a y

is C e^{a t}

Then we can take:

y'' + y = 0

And rewrite it as a set of two equations:

y' = v
v' = -y

The reason we want to do this is that now we can define the vector:

F = {y,v}

so that

F' = {y',v'}

and we can define the matrix:

A = {{0,1},{-1,0}}

So that our second order equation becomes a first order vector equation:

F' = A F

Which has the solution:

F = e^{A t} C

where C is a vector, and the exponential of the matrix is defined by its power series:

e^{A t} = 1 + A t + (A t)^2 / 2! + (A t)^3 / 3! + ...

From here it is a formal exercise to arrive at the solution, which uses diagonalizability (notice that the two square roots of negative one, ±i, are the eigenvalues of the matrix A).
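As a sketch of that exercise (using, instead of diagonalization, the fact that A^2 = -I for the matrix above, so the power series splits into even and odd parts):

[tex]e^{At}=I\cos(t)+A\sin(t)=\begin{pmatrix}\cos t & \sin t\\ -\sin t & \cos t\end{pmatrix}[/tex]

so the first component of F = e^{A t} C is y(t) = y(0) cos(t) + y'(0) sin(t).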
 
  • #7
HallsofIvy
Science Advisor
Homework Helper
In the general DE form, y"+y=0 and where the initial conditions are y(0)=2 and y'(0)=0, Euler realized that y(x)=e^ix+e^ix.
You mean e^(ix)+ e^(-ix)

How did he know that it's a cosine graph when there's no indication in the equation that any solution is possible? y= c1+c2=2 but y" gives 0.
No, it doesn't. I have no idea what you mean by "y" gives 0".
 
  • #8
Defennder
Homework Helper
Are you asking how Euler deduced in his time that y = e^ix + e^-ix satisfied the equation, when it did not appear obvious that any solution existed? That would be math history. I don't know for sure, but for one thing note that 'e' itself is also known as Euler's number. Presumably that means he would have been familiar enough with it to realise that it satisfied the DE.
 
  • #9
Well, presumably he first noticed separately that e^ix and e^-ix work. He may have found this simply by trying things, but more likely, he realized that the equation says that y''=-y, which says that the second derivative is proportional to the original function. One obvious thing to try would be a function whose first derivative is also proportional to itself (since this case is completely understood). The only function whose derivative is proportional to itself is e^ax, so we can see that an obvious guess is e^ax where a^2=-1, which gives us the two functions he found.
Thanks, I think I got it now. Setting y'' = -y allows everything to cancel out and form a cosine series when the initial conditions y(0)=1 and y'(0)=0 are satisfied. The -x on the right side would just equal one when added to one. This was a lot more than I expected.
 
  • #10
And rewrite it as a set of two equations:

y' = v
v' = -y

... and we can define the matrix:

A = {{0,1},{-1,0}}
I don't understand how you got A = {{0,1},{-1,0}}?
 
  • #11
I think I understand the series now!!! e^ix + e^-ix is really 2cos(x), since cos(x) = 1/2 (e^ix + e^-ix); differentiating, substituting into the series, and keeping the even powers of ix gives the solution (!) Thanks people :-)
 
  • #12
mathwonk
Science Advisor
Homework Helper
look, the equation itself says that y'' = -y, so if you differentiate the power series twice, you must get minus the original series.

comparing coefficients of the original series and the differentiated one, this says the coefficient of x^2 must be the constant coefficient divided by -2, and the coefficient of x^4 must be the constant coefficient divided by 4!, and the coefficient of x^6 is the constant coefficient divided by -6!,....
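written out with a power series ansatz, that comparison is (a sketch):

[tex]y=\sum_{n=0}^{\infty}a_nx^n,\qquad y''=\sum_{n=0}^{\infty}(n+2)(n+1)a_{n+2}x^n=-\sum_{n=0}^{\infty}a_nx^n\quad\Rightarrow\quad a_{n+2}=\frac{-a_n}{(n+2)(n+1)}[/tex]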


thus if you assume also that y(0) = 1 and y'(0) = 0, you get precisely the power series of cos(x).

assuming as you did that y(0)= 2, y'(0) = 0, you get 2cos(x).

similarly, if y(0) = 0 and y'(0) = 1, you get sin(x).
 
  • #13
rbj
look, the equation itself says that y'' = -y, so if you differentiate the power series twice, you must get minus the original series.
another simple way to look at it (even though it isn't how Euler wrote it) is that the diff eq.:

[tex] \frac{d^2 y}{dx^2} = -y [/tex]

has, in real analysis, the general solution

[tex] y = A \cos(x) + B \sin(x) [/tex]

where A and B can be any numbers. the two terms are the two linearly independent solutions to the 2nd-order linear diff eq and there is some easy theorem that says that the sum of any two solutions is also a solution (and from that you can get that any constant-scaled solution is also a solution). there's another theorem that says the number of linearly independent solutions to an Nth-order diff eq is N.

without initial conditions, A and B can be any numbers, but with 2 initial conditions (or the same number of boundary conditions), both A and B can be determined to unique values.
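spelled out with the general solution above (a sketch): since

[tex]y(x)=A\cos(x)+B\sin(x),\qquad y'(x)=-A\sin(x)+B\cos(x)[/tex]

the initial conditions give y(0) = A and y'(0) = B directly.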

thus if you assume also that y(0) = 1 and y'(0) = 0,
with those initial conditions, A=1 and B=0 . if it were y(0) = 1 and y'(0) = i, which you get for

[tex] y(x) = e^{\mathrm{i} x} [/tex]

then A = 1 and B = i .

i know this isn't how Euler's formula is first presented, but i liked this diff eq presentation (and proof, such as it is) the best.
 
  • #14
mathwonk
Science Advisor
Homework Helper
i am proving what you are taking for granted.
 
  • #15
rbj
i am proving what you are taking for granted.
what, specifically, is taken for granted?

are mathematicians who are post-Newton and post-Leibniz, but who have never dealt with the concept of imaginary numbers, not capable of doing ordinary, homogeneous diff eqs such as

[tex] \frac{d^2 y}{dx^2} + y = 0 [/tex]

and getting the general solution

[tex] y = A \cos(x) + B \sin(x) [/tex]

for undetermined A and B?

can they not apply 2 independent initial conditions to impose constraints on A and B so that they must take on particular values in order for the 2 initial conditions to be satisfied?

you don't need to go and use power series to do that. and, IMO, the power series method is a little uglier, and i don't see it as more rigorous. it's just another way to do it, given the "assumptions" we learn from calculus.

now, when you apply it to solving Euler's formula

[tex] y(x) = e^{\mathrm{i} x} \ = \ ?? [/tex]

where you want explicit real and imaginary parts to y(x), then we are taking some things "for granted", like i^2 = -1, and otherwise we treat i just as we treat the real numbers, where axioms like the commutative, associative, and distributive properties apply. does your proof prove those basic axioms? i don't think so; like mine, you are assuming the same axioms.

and, because we're treating i as some other general number (but with the specific property that i^2 = -1), and using the results learned from calculus (like what the derivative of e^ax is), we are able to set up that diff eq and the initial conditions to derive Euler's formula.

wonk, you be the math prof, and i am just a Neanderthal electrical engineer (who does signal processing for a living and has some university teaching experience, too), but like the Dirac delta thing, here is another place where mathematicians and electrical engineers just might have different valid ways of looking at it. i don't think, given what we learn in calculus and previous math courses, that the power series method of solving that particular diff eq is any more rigorous than using the known properties of the trig functions cos(.) and sin(.).
 
  • #16
Sorry about my simplistic view of things, but I think about it like this:
We want to solve y"+y=0

So, if we add the second derivative of the function to itself, it must equal zero. We know the second derivative of sine or cosine is minus itself, therefore they both satisfy y"+y=0. It is a simple matter to go from this to complex exponentials.
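Spelled out, that observation is just

[tex]\frac{d^2}{dx^2}\cos(x)=-\cos(x),\qquad \frac{d^2}{dx^2}\sin(x)=-\sin(x)[/tex]

so any combination C1 cos(x) + C2 sin(x) satisfies y"+y=0.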

You can go on to look at uniqueness and all that other jazz, but I do not know what level you're at.
 
  • #17
rbj
Sorry about my simplistic view of things, but I think about it like this:
We want to solve y"+y=0

So, if we add the second derivative of the function to itself, it must equal zero. We know the second derivative of sine or cosine is minus itself, therefore they both satisfy y"+y=0. It is a simple matter to go from this to complex exponentials.
that is, if you have already established a relationship between complex exponentials and the sin(.) and cos(.) functions. if you haven't established that relationship, there are a few different ways to do it. it seems the most common is to look at the power series of e^ix, cos(x), and sin(x), and when i is the imaginary unit (so that i^2 = -1), you can see that the series for cos(x) and i sin(x) add up to the series for e^ix (and then conclude that they are the same functions).

but, it's not necessary to use power series to do that. the Wikipedia article on Euler's formula shows two other valid proofs besides the power series method. one is noting what you, qspeechc, did above for the solution to y"+y=0. but it also notes that e^ix is a solution, and since

y(x) = e^ix

and

y(x) = cos(x) + i sin(x)

are both solutions to y"+y=0 and both satisfy these two initial conditions:

y(0) = 1
y'(0) = i

that is enough to say that they are the same. it's an alternative proof of Euler's formula, which is why i took a little exception to wonk's implication that the power series method proves something that is taken for granted in the diff eq method here.
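a quick check of that claim (a sketch, using only the derivatives already assumed): for y(x) = e^ix,

[tex]y''+y=i^2e^{ix}+e^{ix}=0,\qquad y(0)=1,\qquad y'(0)=i[/tex]

and for y(x) = cos(x) + i sin(x),

[tex]y''+y=-\cos(x)-i\sin(x)+\cos(x)+i\sin(x)=0,\qquad y(0)=1,\qquad y'(0)=i[/tex]

so by the uniqueness argument above, the two functions must agree.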
 
