# e and Euler's formula?

Ben Niehoff
Gold Member
Returning to the Cauchy-Riemann equations, we can consider the complex exponential to be a function

$$f(x,y) = u(x,y) + i v(x,y)$$

such that u, v, x, and y are real, and f satisfies

$$\frac{df}{dz} = f$$

where $z = x+iy$.

I don't have time to work this out right now, but this will lead to two coupled partial differential equations for u and v. Combined with the equations

$$\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$$

$$\frac{\partial u}{\partial y} = - \frac{\partial v}{\partial x}$$

we should be able to show that

$$u(x,y) = C \exp x \cos y$$

$$v(x,y) = C \exp x \sin y$$

for arbitrary real constant C. Then we simply choose the solution that is consistent with $\exp x$ on the real line. Hmm... no, I think C should turn out to be complex... but there is still a unique solution that is consistent with the real exponential.

Last edited:
rbj
The structure of the debate was pretty much this:

rbj: There's the proof for what exp is.
jostpuur: No, you cannot prove it without some definition first.
rbj: You just look at the properties of exp, and prove what it is. I don't see a problem.
jostpuur: No, you cannot prove it without the definition.
rbj: You just look at the properties, and then you just define it according to them, and then prove what it is. I still don't see what the problem is.
Okay, there's no problem...

This seems to respect the principles of the natural sciences very well, but that's not really how mainstream mathematics works. Here are some counterexamples showing what happens when the instructions for the complex exponential are too unclear.

----
Too few properties from the real exponential:

We want $\exp(z_1+z_2)=\exp(z_1)\exp(z_2)$. For any fixed $\alpha\in\mathbb{R}$, define

$$\exp_{\alpha}(x+iy) = \exp(x)(\cos(\alpha y) + i\sin(\alpha y))$$

Now the desired property is satisfied with arbitrary alpha.
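The non-uniqueness is easy to see numerically. A minimal sketch (the name `exp_alpha` is mine, following the post's $\exp_\alpha$ notation): every member of the family satisfies the addition rule, but only $\alpha = 1$ matches the standard complex exponential.

```python
import math
import cmath

def exp_alpha(z, alpha):
    """The family exp_alpha(x+iy) = e^x (cos(alpha*y) + i sin(alpha*y))."""
    x, y = z.real, z.imag
    return math.exp(x) * complex(math.cos(alpha * y), math.sin(alpha * y))

z1, z2 = 0.3 + 0.7j, -1.1 + 2.0j
for alpha in (0.0, 1.0, 2.5):
    # the addition rule exp(z1+z2) = exp(z1)exp(z2) holds for every alpha...
    lhs = exp_alpha(z1 + z2, alpha)
    rhs = exp_alpha(z1, alpha) * exp_alpha(z2, alpha)
    assert abs(lhs - rhs) < 1e-12

# ...but only alpha = 1 reproduces the standard complex exponential
assert abs(exp_alpha(z1, 1.0) - cmath.exp(z1)) < 1e-12
assert abs(exp_alpha(z1, 2.5) - cmath.exp(z1)) > 0.1
```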
----
does it satisfy

$$e^{\beta z} = \big( e^{\beta} \big)^{z}$$

or

$$\frac{d}{dz} e^{\beta z} = \beta \ e^{\beta z}$$

for arbitrary but constant $\beta$ ?

----
Too many properties from the real exponential:

We want $\exp(z)>0$, because the real exponential is also positive.

We get this by choosing $\alpha = 0$, and we have $\exp_0(\mathbb{C})\subset\mathbb{R}$.
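The $\alpha = 0$ member illustrates what goes wrong: it is real-valued and satisfies the addition rule, but it is not complex-differentiable, since the difference quotient gives different limits along the real and imaginary directions. A sketch (the name `exp0` is mine):

```python
import math

def exp0(z):
    # the alpha = 0 member: exp0(x+iy) = e^x, entirely real and positive
    return complex(math.exp(z.real), 0.0)

z = 0.5 + 1.2j
h = 1e-6
# difference quotient along the real axis vs the imaginary axis
d_real = (exp0(z + h) - exp0(z)) / h
d_imag = (exp0(z + 1j * h) - exp0(z)) / (1j * h)
# d_real is near e^0.5 while d_imag is near 0: the two limits
# disagree, so exp0 has no complex derivative at z
assert abs(d_real - d_imag) > 1
```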
----
so you make that assumption about other operations with complex numbers? we say that $i^2 < 0$ and we don't allow for that in the land of real numbers.

These counterexamples prove that a vague instruction like "extend the properties of the real exponential function to the complex field" cannot be used to define a unique complex exponential function. The complex exponential function exists only after it has been defined with a clear definition, not before.
i'm just saying that since complex numbers are the topic, the argument of the exp function being a complex number, you do what you do for other extensions of functions of a real variable to a possibly complex variable.

1. we require that the extension retain the "operational properties" of the function. for exp it's things like

$$e^{z_1+z_2} = e^{z_1}e^{z_2}$$

$$\big( e^{z_1} \big)^{z_2} = e^{z_1 z_2}$$

and 2. we require that when the imaginary part of the argument is zero, it is consistent with the default definition that already exists for the natural exponential.

$$e^z = e^{\mathrm{Re}(z)}, \quad \mathrm{Im}\big\{ e^z \big\} = 0, \quad \mathrm{if} \ \mathrm{Im}\big\{ z \big\} = 0$$

i know that sometimes one can get surprised in that the species of animal you get back isn't always what one habitually assumes, like thinking that what one gets back from $\log(z)$ is a single complex number when it's really a whole set of complex numbers, unless for example you make the further assumption that it returns a principal value.

you stretched what i did say to what i didn't.

the guy wanted to know why

$$e^{i \theta} = \cos(\theta) + i \sin(\theta)$$

for real theta.

the Wikipedia article says that if we expect those two rules of extension above to hold, there is only one complex number that $e^{i \theta}$ can be equal to. for the exponential to retain properties like its Maclaurin series, or for it to appear in the denominator of a fraction with $\cos(\theta) + i \sin(\theta)$ in the numerator, apply the quotient rule (remembering that $i^2 = -1$), respect the boundary condition $e^{i 0} = 1$, and there is no other complex number that will fit the bill. to answer the guy's question, you don't have to make him/her worry about the possibility that $e^{i \theta}$ is a matrix, or an element of a Hilbert space.
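A quick numerical sanity check of that claim: $\cos\theta + i\sin\theta$ agrees with the standard complex exponential, and it satisfies the same first-order ODE $g'(\theta) = i\,g(\theta)$ with the same initial value $g(0) = 1$, which is what forces uniqueness. A sketch (the helper `g` is mine):

```python
import math
import cmath

# cos(theta) + i sin(theta) matches cmath.exp(1j*theta) pointwise
for theta in (0.0, 1.0, math.pi / 3, math.pi):
    lhs = complex(math.cos(theta), math.sin(theta))
    assert abs(lhs - cmath.exp(1j * theta)) < 1e-12

# and g(theta) = cos(theta) + i sin(theta) satisfies g' = i*g, g(0) = 1,
# checked here with a centered finite difference
g = lambda t: complex(math.cos(t), math.sin(t))
theta, h = 0.8, 1e-6
dg = (g(theta + h) - g(theta - h)) / (2 * h)
assert abs(dg - 1j * g(theta)) < 1e-6
assert g(0.0) == 1
```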

Last edited:
Ben Niehoff
Gold Member
rbj, see my post above about the Cauchy-Riemann conditions. It is sufficient to demand that

$$e^{\alpha+\beta} = e^{\alpha}e^{\beta} \qquad \forall \alpha, \beta \in \mathbb{C}$$

$$e^z = e^{\Re(z)} \qquad \text{when } \Im(z) = 0$$

$$\frac{d}{dz}e^z \qquad \text{exists}$$

One can then prove that

$$\frac{d}{dz}e^z = e^z$$

and therefore $e^z$ is $C^{\infty}$.

Yet another angle of attack is to define $\ln z$ as the contour integral

$$\ln z = \int_{\gamma} \frac{d\zeta}{\zeta}$$

where $\gamma$ is some contour running from $1 + 0i$ to z. One can then define $e^z$ as the inverse of this function.
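This contour-integral definition can be spot-checked numerically. A minimal sketch (the function name `log_via_contour` is mine), assuming a straight-line contour from 1 to z that avoids the origin, as it does for z in the right half-plane:

```python
import cmath

def log_via_contour(z, n=100000):
    """Midpoint-rule approximation of the integral of 1/zeta along the
    straight segment from 1 to z (assumes the segment avoids the origin)."""
    total = 0j
    step = (z - 1) / n
    for k in range(n):
        mid = 1 + (k + 0.5) * step  # midpoint of the k-th sub-segment
        total += step / mid
    return total

z = 2 + 1j
# the quadrature agrees closely with the principal branch cmath.log(z)
assert abs(log_via_contour(z) - cmath.log(z)) < 1e-6
```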

Ben Niehoff
Gold Member
So, here's an elaboration on the differential equation method I mentioned earlier. First, suppose we have a function

$$f(z) = f(z(x,y)) \qquad z(x,y) = x + iy$$

then

$$\frac{\partial f}{\partial x} = \frac{df}{dz}\frac{\partial z}{\partial x} = \frac{df}{dz} \qquad (1)$$

and

$$\frac{\partial f}{\partial y} = \frac{df}{dz}\frac{\partial z}{\partial y} = i \frac{df}{dz} \qquad (2)$$

Therefore, we see that

$$\frac{\partial f}{\partial x} + i \frac{\partial f}{\partial y} = 0 \qquad (3)$$

This follows directly from the algebraic properties of $i$. Now, writing

$$f(x,y) = u(x,y) + i v(x,y)$$

we can re-write (3) as

$$\left( \frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} \right) + i \left( \frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y} \right) = 0$$

$$\left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) + i \left( \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} \right) = 0 \qquad (4)$$

and these are the Cauchy-Riemann equations!
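As a numerical check of (4), the eventual solution $u = e^x\cos y$, $v = e^x\sin y$ does satisfy both Cauchy-Riemann equations; a sketch using centered finite differences (the variable names are mine):

```python
import math

# u, v for f = exp(z): u = e^x cos y, v = e^x sin y
u = lambda x, y: math.exp(x) * math.cos(y)
v = lambda x, y: math.exp(x) * math.sin(y)

h = 1e-6
x, y = 0.4, 1.3
ux = (u(x + h, y) - u(x - h, y)) / (2 * h)
uy = (u(x, y + h) - u(x, y - h)) / (2 * h)
vx = (v(x + h, y) - v(x - h, y)) / (2 * h)
vy = (v(x, y + h) - v(x, y - h)) / (2 * h)
assert abs(ux - vy) < 1e-6   # du/dx = dv/dy
assert abs(uy + vx) < 1e-6   # du/dy = -dv/dx
```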

Now, what we want is to find some f(z) such that

$$\frac{df}{dz} = f$$

Using (1) and (2), we can therefore write:

$$\frac{\partial f}{\partial x} = f \qquad (5)$$

and

$$\frac{\partial f}{\partial y} = if \qquad (6)$$

So, again noting that $f(x,y) = u(x,y) + i v(x,y)$, we write (5) and (6) as

$$\frac{\partial u}{\partial x} + i \frac{\partial v}{\partial x} = u + iv$$

$$\frac{\partial u}{\partial y} + i \frac{\partial v}{\partial y} = iu - v$$

These, then, yield four coupled first-order partial differential equations for u and v:

$$\frac{\partial u}{\partial x} = u \qquad (7)$$

$$\frac{\partial v}{\partial x} = v \qquad (8)$$

$$\frac{\partial u}{\partial y} = -v \qquad (9)$$

$$\frac{\partial v}{\partial y} = u \qquad (10)$$

(As an aside, the second two equations look like Hamilton's equations of motion, no?). Anyway, we can immediately integrate (7) and (8) as:

$$u = \eta_1(y) e^x \qquad \qquad v = \eta_2(y) e^x$$

where $\eta_1$ and $\eta_2$ are arbitrary functions of y alone. Plugging these into the second two equations, we get

$$\frac{\partial}{\partial y}(\eta_1(y) e^x) = -\eta_2(y) e^x$$

$$\frac{\partial}{\partial y}(\eta_2(y) e^x) = \eta_1(y) e^x$$

which simplify to

$$\frac{d\eta_1}{dy} = -\eta_2 \qquad (11)$$

$$\frac{d\eta_2}{dy} = \eta_1 \qquad (12)$$

Taking derivatives again with respect to y, we get

$$\frac{d^2\eta_1}{dy^2} = -\frac{d\eta_2}{dy}$$

$$\frac{d^2\eta_2}{dy^2} = \frac{d\eta_1}{dy}$$

Substituting back into (11) and (12), we decouple the equations as

$$\frac{d^2\eta_1}{dy^2} = -\eta_1$$

$$\frac{d^2\eta_2}{dy^2} = -\eta_2$$

These, then, can be integrated to yield

$$\eta_1(y) = A \cos y + B \sin y \qquad (13)$$

$$\eta_2(y) = C \cos y + D \sin y \qquad (14)$$

for arbitrary constants A, B, C, and D. Taking derivatives of (13) and (14) and plugging back into (11) and (12) yields

$$-A \sin y + B \cos y = -C \cos y - D \sin y$$

$$-C \sin y + D \cos y = A \cos y + B \sin y$$

And so,

$$C = -B \qquad D = A$$

Therefore, we have

$$u(x,y) = \eta_1 e^x = e^x(A \cos y + B \sin y)$$

$$v(x,y) = \eta_2 e^x = e^x(-B \cos y + A \sin y)$$

And so

$$f(x,y) = u(x,y) + i v(x,y) = e^x(A \cos y + B \sin y) + i e^x(-B \cos y + A \sin y) \qquad (15)$$

for arbitrary constants A and B.

To choose a unique f(x,y), we require that

$$f(x,0) = e^x$$

Putting y=0 into (15), this yields

$$A e^x - iB e^x = e^x$$

and so, A=1 and B=0. Therefore, the unique extension of $\exp z$ over the complex numbers is

$$\exp z = \exp (x + iy) = e^x \cos y + i\, e^x \sin y$$

This is the only function over the complex numbers which satisfies both

$$\frac{d}{dz} \exp z = \exp z$$

and

$$\exp (x + i0) = e^x$$

Q.E.D.
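The derived extension can be compared directly against a library implementation. A minimal sketch (the function name `my_exp` is mine):

```python
import math
import cmath

def my_exp(z):
    """The unique extension derived above: e^x cos y + i e^x sin y."""
    x, y = z.real, z.imag
    return math.exp(x) * complex(math.cos(y), math.sin(y))

# agrees with cmath.exp at several points, on and off the real line
for z in (0j, 1 + 0j, 1j * math.pi, 0.5 - 2.3j):
    assert abs(my_exp(z) - cmath.exp(z)) < 1e-12

assert abs(my_exp(1j * math.pi) + 1) < 1e-12  # Euler's identity e^{i pi} = -1
```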

rbj
rbj, see my post above about the Cauchy-Riemann conditions. It is sufficient to demand that

$$e^{\alpha+\beta} = e^{\alpha}e^{\beta} \qquad \forall \alpha, \beta \in \mathbb{C}$$

$$e^z = e^{\Re(z)} \qquad \text{when } \Im(z) = 0$$

$$\frac{d}{dz}e^z \qquad \text{exists}$$
i agree with you, Ben. i don't think we just define the complex exponential function as

$$e^z = e^{x+iy} = e^x \cos(y) + i e^x \sin(y)$$

i think we (as did Euler) set out to find a "definition" (explicit functions for the real and imaginary parts of $e^z$) so that the behavior of the exponential function is carried over to the world of complex numbers, and so that when the imaginary part of the argument is zero, the exponential behaves as it previously had with real input.

Ben Niehoff
Gold Member
Right. So, after some investigation, I have found that there are two looser definitions one might use:

Definition 1:
$\exp(z)$ is the unique function over $\mathbb{C}$ such that:

1. $\frac{d}{dz}(\exp(z))$ exists, and

2. $\exp(x+i0) = e^x$ for all real numbers x.
Definition 2:
$\exp(z)$ is the unique function over $\mathbb{C}$ such that:

1. $\frac{d}{dz}(\exp(z)) = \exp(z)$, and

2. $\exp(0) = 1$.
Either definition is sufficient, and they are equivalent.
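Definition 2 can be tested numerically: integrating the initial-value problem $f'(z) = f(z)$, $f(0) = 1$ along the imaginary axis should reproduce $e^{it} = \cos t + i\sin t$. A sketch using a classical Runge-Kutta step (the function name `f_at` is mine):

```python
import math
import cmath

def f_at(T, n=1000):
    """Integrate f'(z) = f(z), f(0) = 1 along z = it, 0 <= t <= T.
    Since dz/dt = i, the ODE in t is df/dt = i*f; use classical RK4."""
    h = T / n
    f = 1 + 0j
    for _ in range(n):
        k1 = 1j * f
        k2 = 1j * (f + h / 2 * k1)
        k3 = 1j * (f + h / 2 * k2)
        k4 = 1j * (f + h * k3)
        f += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return f

# at T = pi/2 the solution should be e^{i pi/2} = i
assert abs(f_at(math.pi / 2) - 1j) < 1e-8
assert abs(f_at(math.pi) - cmath.exp(1j * math.pi)) < 1e-8
```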

I was bored in class so I did this, don't really know if it's any use here, but I think so and found it very interesting:

$$\frac{d}{dx}\exp(x) = \lim_{h\to 0} \frac{\exp(x+h)-\exp(x)}{h} = \exp(x) \lim_{h\to 0} \frac{\exp(h)-1}{h} = \exp(x) \lim_{h\to 0} \frac{\sum_{k=0}^{\infty}\frac{h^k}{k!}-1}{h}$$

$$= \exp(x) \lim_{h\to 0} \frac{\sum_{k=1}^{\infty}\frac{h^k}{k!}}{h}$$

$$= \exp(x) \lim_{h\to 0} \sum_{k=1}^{\infty}\frac{h^{k-1}}{k!}$$

and all terms in the sum go to zero except the first (k = 1), which equals 1... so:

$$= \exp(x)$$

so, when you define $\exp(x)$ by $\sum_{k=0}^{\infty}\frac{x^k}{k!}$, the derivative of $\exp(x)$ is $\exp(x)$.
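The series definition and its self-derivative property are easy to confirm numerically; a sketch using a truncated Maclaurin series and a centered difference (the function name `exp_series` is mine):

```python
import math

def exp_series(x, terms=30):
    """exp via its truncated Maclaurin series sum_{k=0}^{terms-1} x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

# the truncated series matches math.exp to machine precision for small x
x = 1.7
assert abs(exp_series(x) - math.exp(x)) < 1e-10

# and its numerical derivative equals the function itself
h = 1e-6
deriv = (exp_series(x + h) - exp_series(x - h)) / (2 * h)
assert abs(deriv - exp_series(x)) < 1e-8
```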

Last edited:
Gib Z
Homework Helper
Yes, and that result would have been much easier if you had differentiated the series directly, term by term. Or, even using the limit definition, at the point $$\lim_{h\to 0} \frac{e^h -1}{h}$$ you could have substituted the series definition: subtract one from it, divide by h, and you have the same series again.

well, that's what I did >_>, at least the second thing.

Gib Z
Homework Helper
My bad, I didn't pay attention to the intervals of summation.

mathwonk
Homework Helper
have you studied linear algebra? an "eigenvector" for a linear operator T is a vector v such that Tv is a scalar multiple of v. These vectors provide the most natural coordinate system appropriate to the operator T. If one wants to solve an equation like TX = Y, for X, it is easy to do if Y is expanded in terms of eigenvectors of T.

The functions $e^{ax}$ provide the eigenvectors for the linear operator D (differentiation). Using them, one gets the most natural expansion of a smooth function, its Fourier series. This makes it easy to solve differential equations like Df = g, if one can expand g in a Fourier series.
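The eigenfunction property itself is a one-line check: applying D (approximated by a centered difference) to $e^{ax}$ returns $a\,e^{ax}$, i.e. the same function scaled by the eigenvalue $a$. A sketch (variable names mine):

```python
import math

a, x, h = 2.5, 0.3, 1e-6
f = lambda t: math.exp(a * t)

# centered difference approximates the derivative operator D
Df = (f(x + h) - f(x - h)) / (2 * h)

# D f = a * f: f is an eigenfunction of D with eigenvalue a
assert abs(Df - a * f(x)) < 1e-5
```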

Last edited:
HallsofIvy
Homework Helper
I will point out, again, that there is nothing all that magical about e. Any exponential function $a^{kx}$ is an eigenfunction of the derivative operator.

mathwonk