mathbalarka
Deduce a proof of Euler's formula $e^{i \theta} = \cos(\theta) + i\sin(\theta)$ without using angle addition formula for sine and cosine.
mathbalarka said:First things first, thanks for participating, chisigma.
Your initial start was good, but the solution isn't complete. Can you verify step (2) and show that you are not using the angle-sum identity while computing $z'(t)$?
mathbalarka said:I'd ask you the same as chisigma: can you justify that your steps don't rely on the angle-sum identity in the background?
ZaidAlyafey said:That is a difficult task unless you are claiming that I am using it in a specific step.
mathbalarka said:The angle-sum identity is usually not derived from Euler's formula, as it makes things ambiguous.
Opalg said:I was not saying that the angle-sum identity is derived from Euler's formula. It is proved by multiplying the defining power series, using the theorem that the Cauchy product of two absolutely convergent series converges absolutely to the product of their sums. From that, one can deduce that $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$. The angle-sum identities then follow by taking the real and imaginary parts.
I don't understand what you mean by this ambiguity. If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series, then the real part of the power series for $e^{i\theta}$ is the power series for $\cos\theta$, and the imaginary part is the series for $\sin\theta$. So Euler's formula $e^{i\theta} = \cos\theta + i\sin\theta$ is practically a tautology.
mathbalarka said:
That depends on how you derive Euler's formula. If you find a way to derive it without the angle-sum formula, then yes, the above can be applied. Otherwise ambiguity occurs, as I have indicated before.
Opalg said:If you define the functions $e^{i\theta}$, $\cos\theta$ and $\sin\theta$ by their power series ...
Opalg said:I don't understand what you mean by this ambiguity.
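As a quick numerical aside: the multiplicative property $e^{i(\theta+\phi)} = e^{i\theta}e^{i\phi}$ that Opalg derives from the Cauchy product of the defining power series is easy to check in floating point. This is only a sanity check, not a proof; the helper name `exp_series` and the truncation depth `terms=40` are my own choices.

```python
import cmath

def exp_series(z, terms=40):
    """Partial sum of the defining power series for exp(z)."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)  # next term z^(n+1)/(n+1)!
    return total

theta, phi = 0.7, 1.1
lhs = exp_series(1j * (theta + phi))
rhs = exp_series(1j * theta) * exp_series(1j * phi)

# The multiplicative property holds to floating-point accuracy,
# and the truncated series matches the library exponential.
assert abs(lhs - rhs) < 1e-12
assert abs(lhs - cmath.exp(1j * (theta + phi))) < 1e-12
```

For arguments of modulus below 2, forty terms already put the truncation error far below double precision, so the assertions are comfortably satisfied.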
mathbalarka said:I am not defining them by their power series. They are defined as usual by the properties of the right-angled triangle throughout the context.
Opalg said:Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral $$\int\sqrt{1+f'(x)^2}\,dx.$$ If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.
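Opalg's point that the arc-length route leads straight back to $\arcsin$ can be illustrated numerically: for $f(x) = \sqrt{1-x^2}$ the integrand $\sqrt{1+f'(x)^2}$ simplifies to $1/\sqrt{1-x^2}$, and integrating it reproduces $\arcsin x$. A minimal sketch; the helper name `arc_length_to`, the midpoint rule, and the step count are all assumptions of mine.

```python
import math

def arc_length_to(x, steps=200000):
    """Arc length along the unit circle y = sqrt(1 - t^2) from t = 0 to t = x,
    i.e. the integral of 1/sqrt(1 - t^2), via the midpoint rule."""
    h = x / steps
    return sum(h / math.sqrt(1 - ((k + 0.5) * h) ** 2) for k in range(steps))

# The arc-length integral reproduces arcsin: defining angle via arc length
# already invokes the inverse sine, which is Opalg's circularity objection.
for x in (0.3, 0.5, 0.9):
    assert abs(arc_length_to(x) - math.asin(x)) < 1e-6
```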
mathbalarka said:Yes, but note that you are deriving sine and cosine while deriving $z(t)$; can you do that without the angle-sum identity?
mathbalarka said:@chisigma : How do you derive (3)?
mathbalarka said:$(1)$ is true if and only if $\sin(t) > 0$.
Opalg said:Okay, if you want to define trig functions in terms of triangles, then you must first define what you mean by an angle. And if you want to define angles (as radians), you have to know what is meant by arc length. But arc length, on a curve $y=f(x)$, is defined in terms of an integral $$\int\sqrt{1+f'(x)^2}dx$$. If you try to do that for the unit circle, you will find yourself having to integrate $\dfrac1{\sqrt{1-x^2}}$, which leads you to $\arcsin x$. This becomes a circular argument (if you'll forgive the pun), and you end up trying to use the inverse sine function in order to define the sine function. That is why a rigorous treatment of trig functions has to start from the power series definition.
Very ingenious! But I can't help thinking that power series are neater and more useful.
Deveno said:I almost agree with this.
It is actually possible to define the cosine function by first defining:
$\displaystyle A: [-1,1] \to \Bbb R,\ A(x) = \frac{x\sqrt{1-x^2}}{2} + \int_x^1 \sqrt{1 - t^2}\ dt$
and then defining cosine on the interval:
$\displaystyle [0, 2\int_{-1}^1 \sqrt{1 - u^2}\ du]$
as the unique value $\cos(x) \in [-1,1]$ such that:
$(A \circ \cos)(x) = \dfrac{x}{2}$
and then finally, defining:
$\displaystyle \pi = 2\int_{-1}^1 \sqrt{1 - u^2}\ du$
and on $[0,\pi]$ defining:
$\sin(x) = \sqrt{1 - (\cos x)^2}$.
At this point we extend the domain of these two functions by defining, for $x \in (\pi,2\pi]$:
$\cos(x) = \cos(2\pi - x)$
$\sin(x) = -\sin(2\pi - x)$,
and finally extend by periodicity.
Now, granted, this construction bears little resemblance to the usual geometric definitions of the trigonometric functions, and why we would do such a thing is a bit unmotivated, but it IS rigorous, and it DOESN'T use power series.
One advantage of this method is that the angle-sum identities are no longer required to derive the derivatives of these two functions; it follows (more or less straightforwardly) that:
$\cos'(x) = -\sin x$
$\sin'(x) = \cos x$
by applying the inverse function theorem to $B = 2A$ (first on the interval $[0,\pi]$ and then using the "extended" definitions to compute the derivatives on all of $\Bbb R$...care has to be taken to apply the correct one-sided limits at the "stitch points", of course).
A clear disadvantage of this, is that it is NOT clear how to extend these definitions analytically to $\Bbb C$, but now the Taylor series in Zaid's posts can be computed without fear of reference to the angle-sum formulae, and THOSE series can easily be shown to be convergent on $\Bbb C$.
Just sayin'
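Deveno's construction is concrete enough to sanity-check numerically: evaluate $A$ by quadrature, invert $A(c) = x/2$ by bisection (legitimate, since $A'(c) = -\tfrac{1}{2\sqrt{1-c^2}} < 0$, so $A$ is strictly decreasing), and compare against the library cosine. The helper names, the midpoint quadrature, the step count, and the tolerances below are my own choices, not part of Deveno's post.

```python
import math

def A(x, steps=20000):
    """Deveno's area function A(x) = x*sqrt(1-x^2)/2 + integral_x^1 sqrt(1-t^2) dt,
    with the integral evaluated by the midpoint rule."""
    h = (1.0 - x) / steps
    integral = sum(h * math.sqrt(1 - (x + (k + 0.5) * h) ** 2) for k in range(steps))
    return x * math.sqrt(1 - x * x) / 2 + integral

def cos_from_A(x, tol=1e-6):
    """cos(x) on [0, pi], defined as the unique c in [-1, 1] with A(c) = x/2.
    A is strictly decreasing, so bisection converges to that c."""
    lo, hi = -1.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if A(mid) > x / 2:
            lo = mid  # A(mid) too large: the root lies to the right of mid
        else:
            hi = mid
    return (lo + hi) / 2

# pi, per the construction, is 2 * integral_{-1}^{1} sqrt(1 - u^2) du = 2 * A(-1)
pi_def = 2 * A(-1)
assert abs(pi_def - math.pi) < 1e-4
for x in (0.5, 1.0, 2.0):
    assert abs(cos_from_A(x) - math.cos(x)) < 1e-4
```

Nothing here touches power series or angle-sum identities, which is exactly the point of the construction; the quadrature and bisection are doing the work that arc length and triangles do in the geometric picture.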
Deveno said:Euler's theorem is really a very beautiful theorem, right up there with a (similar in some ways) theorem of Pythagoras. It is a GOOD thing we can prove it, and the difficulties involved in doing so without self-reference are (in my humble opinion) something of a testament to how deep it cuts. The numbers $e$ and $\pi$ turn out to be brothers, after all...there is something very satisfying about this.
mathbalarka said:... okay, this seems to be pulling all the people here away from the original challenge. Apologies if so... :p
chisigma said:In fact Euler's identity is one of the 'golden keys' of Math and I'm not surprised that your post is the starting point of very interesting discussions. In this topic two different 'right definitions' of the function $\sin x$ have been presented, and both of these definitions disregard the 'geometrical meaning' of the function. One definition is...
$\displaystyle \sin x = \sum_{n=0}^{\infty} (-1)^{n}\ \frac{x^{2 n + 1}}{(2 n + 1)!}\ (1)$
Another definition represents the function $\sin x$ as the solution of the ODE...
$\displaystyle y^{\ ''} = - y,\ y(0)=0,\ y^{\ '} (0) = 1\ (2)$
Very well!... but from my 'point of view of things' it is difficult to imagine a comfortable way, starting from definition (1) or (2), to demonstrate the basic properties of the function... for example that for any real $x$ we have $\sin x = \sin (x + 2\pi)$... I do hope for some help from somebody...
Kind regards
$\chi$ $\sigma$
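chisigma's definition (2) can at least be explored numerically, which is an illustration rather than the proof he asks for: integrate $y'' = -y$, $y(0)=0$, $y'(0)=1$ with classical RK4 and observe $y(x + 2\pi) \approx y(x)$. The helper name `sin_from_ode` and the step density are my own choices.

```python
import math

def sin_from_ode(x, steps_per_unit=2000):
    """Integrate y'' = -y, y(0) = 0, y'(0) = 1 (definition (2)) with
    classical RK4 on the first-order system y' = v, v' = -y; return y(x)."""
    n = max(1, int(abs(x) * steps_per_unit))
    h = x / n
    y, v = 0.0, 1.0
    for _ in range(n):
        k1y, k1v = v, -y
        k2y, k2v = v + h * k1v / 2, -(y + h * k1y / 2)
        k3y, k3v = v + h * k2v / 2, -(y + h * k2y / 2)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

# Periodicity emerges from the ODE numerically: y(x + 2*pi) is y(x).
for x in (0.0, 1.0, 2.5):
    assert abs(sin_from_ode(x + 2 * math.pi) - sin_from_ode(x)) < 1e-8
    assert abs(sin_from_ode(x) - math.sin(x)) < 1e-8
```

Of course this only shows the periodicity to machine precision over a few test points; the rigorous route chisigma asks about (e.g. deriving the Pythagorean identity and a period from the ODE alone) is a separate exercise.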
mathbalarka said:Okay, this was something new. I seem to have found a very elegant proof of the challenge problem; please review my proof for errors:
Using the limit definition of the transcendental constant $e$, we can derive
$$e^{iz} = \lim_{n \to \infty} \left ( 1 + i \frac{z}{n} \right )^n$$
The convergence of the limit follows from the binomial theorem, which essentially uses the factorial function. Our goal is to prove that $e^{iz}$ lies on the circumference of the unit circle in $\mathbb{C}$. This is proved by computing
$$ \left | \left ( 1 + i \frac{z}{n} \right )^n \right | = \left | 1 + i \frac{z}{n} \right |^n = \left ( 1 + \frac{z^2}{n^2} \right )^{n/2} $$
and showing that this tends to $1$ as $n$ tends to $\infty$, which is quite straightforward from the definition of the limit. Hence we have $\left |e^{iz}\right | = 1$, as desired.
Now, as every complex number on the circumference of the unit circle is of the form $\cos(\theta) + i\sin(\theta)$, which follows from the geometric definition of $\sin$ and $\cos$, the problem is proved.
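mathbalarka's limit is easy to probe at a large finite $n$: the modulus of $(1 + iz/n)^n$ is already close to $1$, and the point is close to $\cos z + i\sin z$. A sketch; the helper name `euler_limit` and the choice $n = 10^6$ are mine, and this is numerical evidence rather than the proof itself.

```python
import math

def euler_limit(z, n=10**6):
    """One finite stage of the limit definition: (1 + iz/n)^n."""
    return (1 + 1j * z / n) ** n

for z in (0.5, 1.0, 3.0):
    w = euler_limit(z)
    # |1 + iz/n|^n = (1 + z^2/n^2)^(n/2), which approaches 1...
    assert abs(abs(w) - 1) < 1e-5
    # ...and the point itself approaches cos z + i sin z
    assert abs(w - complex(math.cos(z), math.sin(z))) < 1e-4
```

Note that the modulus computation only places $e^{iz}$ on the unit circle; identifying the angle of that point with $z$ itself is the gap Deveno presses on below.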
mathbalarka said:Yes, I had a proof for it in my mind and was trying to prove it the other way around. Never mind, here it is:
The beginning of my proof starts similarly to chisigma's: Let $\vec{r}$ be a point moving along the unit circle with uniform angular speed $1$ (i.e., with frequency $\frac{1}{2\pi}$ revolutions per unit time) and with co-ordinates $r(t) = (\cos(t), \sin(t))$. We have $|\vec{r}| = 1$ as well as $\vec{r} \cdot \vec{r} = 1$.
Differentiating, we obtain
$$\frac{d\vec{r}}{dt} \cdot \vec{r} + \vec{r} \cdot \frac{d\vec{r}}{dt} = 0$$
i.e., $\vec{r} \cdot \dfrac{d\vec{r}}{dt} = 0$. This implies the vectors $\vec{r}$ and $\dfrac{d\vec{r}}{dt}$ are orthogonal, so the angular co-ordinate of $\dfrac{d\vec{r}}{dt}$ is that of $\vec{r}$ shifted by $\pi/2$. Hence,
$$ r'(t) = (\cos(t + \pi/2), \sin(t + \pi/2)) = (-\sin(t), \cos(t))$$
Note that these are obtained by noting the symmetries of the circle, so this has nothing to do with the angle-sum formula. Note further that the equality still doesn't follow, as we have no knowledge about the scalar $\left | \dfrac{d\vec{r}}{dt} \right |$. This can be proved to be $1$ by noting that it is the speed of the point $\vec{r}$, which in turn equals $\frac{d}{T}$ where $d$ is the distance travelled, i.e., $2\pi$, and $T$ is the period of motion, i.e., $2\pi$. This proves that
$$\frac{d}{dt} \left ( \sin(t) \right ) = \cos(t)$$
$$\frac{d}{dt} \left ( \cos(t) \right ) = -\sin(t)$$
Does this look good?
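The claimed derivatives can be sanity-checked with a central difference, which is of course only numerical evidence for the analysis above; the helper name `central_diff` and the step $h = 10^{-6}$ are my own choices.

```python
import math

def central_diff(f, t, h=1e-6):
    """Central-difference approximation to f'(t), with O(h^2) truncation error."""
    return (f(t + h) - f(t - h)) / (2 * h)

# Numerical check of sin' = cos and cos' = -sin at a few points.
for t in (0.1, 1.0, 2.7):
    assert abs(central_diff(math.sin, t) - math.cos(t)) < 1e-8
    assert abs(central_diff(math.cos, t) + math.sin(t)) < 1e-8
```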
Deveno said:So I think you still need to show whether we have:
$r'(t) = (-\sin t, \cos t)$ or:
$r'(t) = (\sin t, -\cos t)$