Proof of Euler's Formula with No Angle Addition

  • Context: MHB
  • Thread starter: mathbalarka
  • Tags: Formula

Discussion Overview

The discussion revolves around deducing a proof of Euler's formula, \( e^{i \theta} = \cos(\theta) + i\sin(\theta) \), without relying on the angle addition formulas for sine and cosine. Participants explore various approaches and the implications of their definitions and derivations.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that Euler's formula should be derived without using the angle-sum identity, questioning the validity of steps that involve sine and cosine derivatives.
  • One participant emphasizes the importance of defining \( e^{i\theta} \), \( \cos\theta \), and \( \sin\theta \) through their power series, suggesting that this approach makes Euler's relation trivial.
  • Another participant challenges the notion that the Taylor expansion of these functions relies on trigonometric identities, arguing that it can be defined for arbitrary differentiable functions.
  • Concerns are raised about the ambiguity of defining trigonometric functions in terms of right-angled triangles without a rigorous foundation, particularly regarding the definition of angles and arc lengths.
  • Some participants express that deriving sine and cosine without the angle-sum identity is a difficult task and question whether it can be done without circular reasoning.
  • A participant proposes a method involving the logarithmic derivative of a complex function to derive the derivatives of sine and cosine, attempting to avoid the angle-sum identity.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best approach to derive Euler's formula without using angle addition. Multiple competing views remain regarding the definitions and derivations of trigonometric functions and their relation to Euler's formula.

Contextual Notes

Participants highlight the limitations of their definitions and the potential circular reasoning involved in defining trigonometric functions without a rigorous foundation. The discussion reflects a range of assumptions regarding the relationships between the functions involved.

  • #31
chisigma said:
In fact Euler's identity is one of the 'golden keys' of Math and I'm not surprised that your post is the starting point of very interesting discussions. In this topic two different 'right definitions' of the function $\sin x$ have been presented, and both of these definitions disregard the 'geometrical meaning' of the function. One definition is...

$\displaystyle \sin x = \sum_{n=0}^{\infty} (-1)^{n}\ \frac{x^{2 n + 1}}{(2 n + 1)!}\ (1)$

Another definition represents the function $\sin x$ as the solution of the ODE...

$\displaystyle y'' = - y,\quad y(0)=0,\quad y'(0) = 1\ (2)$

Very well!... but from my point of view it is difficult to imagine a comfortable way, starting from definition (1) or (2), to demonstrate the basic properties of the function... for example that for any real $x$ we have $\sin x = \sin (x + 2\pi)$... I do hope for some help from somebody...

Kind regards

$\chi$ $\sigma$
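As a quick numerical aside (an illustration, not a proof): the partial sums of the series definition (1) can at least be checked against the familiar sine, including the $2\pi$-periodicity chisigma asks about. A minimal Python sketch, using only the standard library:

```python
import math

def sin_series(x, terms=30):
    """Partial sum of the power series sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!."""
    total = 0.0
    for n in range(terms):
        total += (-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
    return total

x = 1.3
# The truncated series reproduces sin, and numerically exhibits the
# 2*pi periodicity -- of course this checks nothing, it only illustrates.
print(sin_series(x))                  # close to math.sin(1.3)
print(sin_series(x + 2 * math.pi))    # close to the same value
```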

Suppose we decide to define $\sin$ as the unique non-zero (twice-differentiable, of course) function $f$ such that:

$f + f'' = 0$ (1)
$f(0) = 0$
$f'(0) = 1$.

It becomes obvious that:

$f'$ also satisfies (1) (by differentiating both sides). We shall henceforth call $f' = \cos$.

Now suppose a function $f$ satisfies:

$f + f'' = 0$
$f(0) = 0$
$f'(0) = 0$.

I claim that then $f = 0$. Proof:

Multiplying (1) by $f'$, we get:

$(f')(f + f'') = 0$
$2(ff' + f'f'') = 0$
$((f')^2 + f^2)' = 0$

Thus $(f')^2 + f^2$ is a constant, and $f(0) = 0,f'(0) = 0$ imply this constant is 0. This in turn means:

$[f'(x)]^2 + [f(x)]^2 = 0$ for all $x$, which means $f(x) = 0$ for all $x$.

Now, suppose:

$f + f'' = 0$
$f(0) = a$
$f'(0) = b$

I claim $f = a\cos + b\sin$

Proof: Let $g = f - a\cos - b\sin$

Then $g' = f' - a(\cos)' - b(\sin)' = f' - a((\sin)')' - b\cos$

$= f' - a(\sin)'' - b\cos = f' + a\sin - b\cos$ (by (1)).

Now $g'' = f'' + a(\sin)' - b(\cos)' = f'' + a\cos + b\sin$, so:

$g + g'' = f + f'' = 0$,
$g(0) = f(0) - a\cos(0) - b\sin(0) = a - a + 0 = 0$
$g'(0) = f'(0) + a\sin(0) - b\cos(0) = b + 0 - b = 0$

Hence $g = 0$, by our earlier result.
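The uniqueness-and-superposition argument above can be illustrated numerically: integrating $f'' = -f$ from the stated initial data tracks $a\cos + b\sin$. A sketch using classical RK4 (the step count is an arbitrary choice, and the library sine and cosine serve only as reference values):

```python
import math

def solve_ivp(a, b, t_end, steps=10000):
    """Integrate f'' = -f with f(0)=a, f'(0)=b by classical RK4,
    treating it as the first-order system (f, f')' = (f', -f)."""
    h = t_end / steps
    f, fp = a, b
    for _ in range(steps):
        k1f, k1p = fp, -f
        k2f, k2p = fp + h / 2 * k1p, -(f + h / 2 * k1f)
        k3f, k3p = fp + h / 2 * k2p, -(f + h / 2 * k2f)
        k4f, k4p = fp + h * k3p, -(f + h * k3f)
        f += h / 6 * (k1f + 2 * k2f + 2 * k3f + k4f)
        fp += h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    return f

# a=0, b=1 tracks sin; a=3, b=-2 tracks 3*cos - 2*sin,
# matching the superposition result f = a*cos + b*sin.
t = 2.0
print(solve_ivp(0.0, 1.0, t))     # ~ sin(2)
print(solve_ivp(3.0, -2.0, t))    # ~ 3*cos(2) - 2*sin(2)
```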

Now we are in a position to PROVE the angle-sum formula, establishing that these "really are" our usual trig functions:

Let $y$ be any real number. For each such number, we can define a function:

$f(x) = \sin(x + y)$ and:

$f + f'' = 0$
$f(0) = \sin y$
$f'(0) = \cos y$

So we have that $f = \sin y\cos + \cos y\sin$ that is:

$\sin(x + y) = \sin y \cos x + \cos y \sin x$

which holds for all $x$ and any real number $y$ (thus all of them).

The angle-sum identity for cosine can now be found in a similar fashion.

As Opalg mentioned earlier, producing $\pi$ requires the most work: the easiest way is probably to show that $\cos x$ cannot be positive for all $x > 0$, so there is a smallest positive zero of $\cos$ (the properties of the first and second derivatives of cosine show it is concave downwards at 0).

Clever use of the angle-sum formulae can then be used to establish the periodicity.
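That recipe for producing $\pi$ can be sketched numerically: bisect for the smallest positive zero of $\cos$ and double it. Illustration only, with the library cosine standing in for the ODE-defined one:

```python
import math

def first_positive_zero(g, hi=2.0, tol=1e-12):
    """Bisect for a zero of g in (0, hi), assuming g(0) > 0 > g(hi)
    and that g stays positive until its first zero on that interval."""
    lo = 0.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# cos(0) = 1 > 0 and cos(2) < 0, so the first zero lies in (0, 2);
# pi is then *defined* as twice that zero.
half_pi = first_positive_zero(math.cos)
print(2 * half_pi)    # ~ 3.141592653589793
```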
 
  • #32
I didn't write that there is no way, using a 'non-geometrical' definition of $\sin x$, to demonstrate that $\sin x = \sin (x + 2\pi)$... I wrote that in my opinion there isn't a comfortable way to do it... Kind regards $\chi$ $\sigma$
 
  • #33
I agree that sometimes the "rigorous" way isn't the "intuitive" way. We have to, in some of these discussions, take the "long way 'round" to prove something immediately evident from a diagram.

One could view this state of affairs as a defect of analysis, or: as a tribute to the subtlety of geometry. :)
 
  • #34
Okay, this was something new. I seem to have found out a very elegant proof of the challenge problem, please review my proof for errors :

Using the limit definition of the transcendental constant $e$, we can derive

$$e^{iz} = \lim_{n \to \infty} \left ( 1 + i \frac{z}{n} \right )^n$$

The convergence of the limit follows from the binomial theorem, which essentially uses the factorial function. Our goal is to prove that $e^{iz}$ lies on the circumference of the unit circle in $\mathbb{C}$. This is done by computing

$$ \left | \left ( 1 + i \frac{z}{n} \right )^n \right | = \left ( \left | 1 + i \frac{z}{n} \right | \right )^n = \left ( 1 + \frac{z^2}{n^2} \right )^{n/2} $$

and showing that this limit goes to $1$ as $n$ tends to $\infty$, which is quite straightforward from the definition of the limit. Hence we have $\left |e^{iz}\right | = 1$, as desired.

Now, as every complex number on the unit circle is of the form $\cos(\theta) + i\sin(\theta)$, which follows from the geometric definition of $\sin$ and $\cos$, the problem is proved.
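For what it's worth, both the modulus claim and the limit itself can be checked numerically for a sample real $z$ (a sanity check on the argument, not part of it):

```python
import cmath
import math

def euler_limit(z, n=10**6):
    """Evaluate (1 + i z / n)^n for a single large n."""
    return (1 + 1j * z / n) ** n

z = 1.3
w = euler_limit(z)
print(abs(w))                            # modulus tends to 1, as argued above
print(w)                                 # tends to cos(1.3) + i*sin(1.3)
print(complex(math.cos(z), math.sin(z)))
```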
 
  • #35
mathbalarka said:
Okay, this was something new. I seem to have found out a very elegant proof of the challenge problem, please review my proof for errors :

Using the limit definition of the transcendental constant $e$, we can derive

$$e^{iz} = \lim_{n \to \infty} \left ( 1 + i \frac{z}{n} \right )^n$$

The convergence of the limit follows from the binomial theorem, which essentially uses the factorial function. Our goal is to prove that $e^{iz}$ lies on the circumference of the unit circle in $\mathbb{C}$. This is done by computing

$$ \left | \left ( 1 + i \frac{z}{n} \right )^n \right | = \left ( \left | 1 + i \frac{z}{n} \right | \right )^n = \left ( 1 + \frac{z^2}{n^2} \right )^{n/2} $$

and showing that this limit goes to $1$ as $n$ tends to $\infty$, which is quite straightforward from the definition of the limit. Hence we have $\left |e^{iz}\right | = 1$, as desired.

Now, as every complex number on the unit circle is of the form $\cos(\theta) + i\sin(\theta)$, which follows from the geometric definition of $\sin$ and $\cos$, the problem is proved.

Proving that $|e^{i z}|=1$ for all $z$ does not prove that $e^{i z} = \cos z + i \sin z$... for example it could be that $e^{i z}= 1$...

Kind regards

$\chi$ $\sigma$
 
  • #36
As I see it, you've proved:

$e^{iz} = \cos\theta + i \sin\theta$ for some $\theta \in [0,2\pi)$

I see no problem with assuming $z \in \Bbb R$ but I think you still have to show:

$z - \theta = 2k\pi,\ k \in \Bbb Z$
 
  • #37
Ah, you're right, we are at the point of differentiating sin and cos again, true. :p
 
  • #38
Recall that differentiating geometrically is the limit of a certain ratio (of two differences), and that trig functions are themselves ratios. Can you show geometrically that:

$(\sin)'(x) = \cos x$?

(I believe you CAN if you have the angle-sum formulas already established...but...)
 
  • #39
Yes, I had a proof for it in my mind and was trying to prove it the other way around. Nevermind, here it is :

My proof begins similarly to chisigma's: let $\vec{r}$ be a point moving along the unit circle with uniform speed, with coordinates $r(t) = (\cos(t), \sin(t))$. We have $|\vec{r}| = 1$, and hence $\vec{r} \cdot \vec{r} = 1$.

Differentiating, we obtain

$$\frac{d\vec{r}}{dt} \cdot \vec{r} + \vec{r} \cdot \frac{d\vec{r}}{dt} = 0$$

i.e., $\vec{r} \cdot \frac{d\vec{r}}{dt} = 0$. This implies the vectors $\vec{r}$ and $\frac{d\vec{r}}{dt}$ are orthogonal. Then all the co-ordinates of $\vec{r}$ are shifted by $\pi/2$. Hence,

$$ r'(t) = (\cos(t + \pi/2), \sin(t + \pi/2)) = (-\sin(t), \cos(t))$$

Note that these are obtained by noting the symmetries of the circle, so this has nothing to do with the angle-sum formula. Note further that the equality still doesn't follow, as we have no knowledge about the scalar $\left | \frac{d\vec{r}}{dt} \right |$. This can be shown to be 1 by noting that it is the speed of the point, which in turn equals $\frac{d}{T}$, where $d$ is the distance travelled in one revolution, i.e. $2\pi$, and $T$ is the period of the motion, i.e. $2\pi$. This proves that

$$\frac{d}{dt} \left ( \sin(t) \right ) = \cos(t)$$
$$\frac{d}{dt} \left ( \cos(t) \right ) = -\sin(t)$$

Does this look good?
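A difference-quotient check of the two derivative formulas (numerical illustration only, with the library functions standing in for the geometrically defined ones):

```python
import math

def derivative(g, t, h=1e-6):
    """Symmetric difference quotient, a numerical stand-in for g'(t)."""
    return (g(t + h) - g(t - h)) / (2 * h)

t = 0.7
# (sin)' should match cos, and (cos)' should match -sin.
print(derivative(math.sin, t), math.cos(t))
print(derivative(math.cos, t), -math.sin(t))
```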
 
  • #40
mathbalarka said:
Yes, I had a proof for it in my mind and was trying to prove it the other way around. Nevermind, here it is :

My proof begins similarly to chisigma's: let $\vec{r}$ be a point moving along the unit circle with uniform angular speed $1$, with coordinates $r(t) = (\cos(t), \sin(t))$. We have $|\vec{r}| = 1$, and hence $\vec{r} \cdot \vec{r} = 1$.

Differentiating, we obtain

$$\frac{d\vec{r}}{dt} \cdot \vec{r} + \vec{r} \cdot \frac{d\vec{r}}{dt} = 0$$

i.e., $\vec{r} \cdot \frac{d\vec{r}}{dt} = 0$. This implies the vectors $\vec{r}$ and $\frac{d\vec{r}}{dt}$ are orthogonal. Then all the co-ordinates of $\vec{r}$ are shifted by $\pi/2$. Hence,

$$ r'(t) = (\cos(t + \pi/2), \sin(t + \pi/2)) = (-\sin(t), \cos(t))$$

Note that these are obtained by noting the symmetries of the circle, so this has nothing to do with the angle-sum formula. Note further that the equality still doesn't follow, as we have no knowledge about the scalar $\left | \frac{d\vec{r}}{dt} \right |$. This can be shown to be 1 by noting that it is the speed of the point, which in turn equals $\frac{d}{T}$, where $d$ is the distance travelled in one revolution, i.e. $2\pi$, and $T$ is the period of the motion, i.e. $2\pi$. This proves that

$$\frac{d}{dt} \left ( \sin(t) \right ) = \cos(t)$$
$$\frac{d}{dt} \left ( \cos(t) \right ) = -\sin(t)$$

Does this look good?

A couple of minor notes:

It is the arguments of the coordinates that are shifted by $\dfrac{\pi}{2}$.

We can't say "which way" the shift occurs from your argument, it could be plus or minus, yes? Also, orthogonality only gives $r'(t)$ up to a scalar, so your FIRST equation involving it is not quite correct (which is why you need to find $|r'(t)|$). So I think you still need to show whether we have:

$r'(t) = (\cos t, -\sin t)$ or:
$r'(t) = (-\cos t, \sin t)$
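As a numerical aside, a componentwise difference quotient of $r(t)$ singles out the sign (an illustration, not a geometric argument):

```python
import math

def r(t):
    """Position on the unit circle."""
    return (math.cos(t), math.sin(t))

def r_prime(t, h=1e-6):
    """Componentwise symmetric difference quotient of r."""
    (x1, y1), (x0, y0) = r(t + h), r(t - h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

t = 0.7
vx, vy = r_prime(t)
x, y = r(t)
print(vx * x + vy * y)    # ~ 0: orthogonality, as derived above
print((vx, vy))           # numerically matches (-sin t, cos t)
```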
 
  • #41
Deveno said:
So I think you still need to show whether we have:

$r'(t) = (\cos t, -\sin t)$ or:
$r'(t) = (-\cos t, \sin t)$

As $\frac{d\vec{r}}{dt}$ is the velocity vector of the point, it points in the direction of the motion along the circle. Hence, no sign change is possible.
 
