What Makes the Symbol e So Essential in Mathematics?

arron
e = (1 + 1/n)^n as n -> ∞
Though I have used this symbol so many times, I still cannot understand its true meaning. Why does it have such a strange characteristic: d(e^x)/dx = e^x, and e^x ≈ 1 + x as x -> 0? And why can it be used everywhere?
Who can explain this symbol to me?
 
arron said:
e = (1 + 1/n)^n as n -> ∞
Though I have used this symbol so many times, I still cannot understand its true meaning. Why does it have such a strange characteristic: d(e^x)/dx = e^x, and e^x ≈ 1 + x as x -> 0? And why can it be used everywhere?
Who can explain this symbol to me?

Not a dumb question at all. This is the kind of stuff we learn in school before we know that things can be proven, and once we learn that things can be proven, we have already learned to accept all the properties of e.

There are lots of ways to prove these simple results. Unfortunately, most of them use circular logic :wink:

I could probably fight my way to these results through series.
 
arron: e = (1 + 1/n)^n as n -> ∞

As you have defined it, we can arrive at the form below by allowing n to increase beyond bound and taking limits:

e(x)=\sum_{i=0}^\infty\frac{x^i}{i!}

Then differentiating term by term shows that e(x) is its own derivative.

Of course, you might want to study the conditions which allow us to do such things. But the above offers an intuitive reason. And to continue...

If we then use the value ix, we can split the series into real and imaginary parts, resulting in cos(x) + i sin(x).

Using Y = e^{ix}, this results in the harmonic equation Y + Y'' = 0, which allows us to find things like
(e^{ix} + e^{-ix})/2 = cos(x). We can also integrate, substituting x = e^z, dx = e^z dz:

\int\frac{dx}{x}=\ln x
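
To see these claims numerically, here is a minimal Python sketch (my own illustration; the helper name exp_series is arbitrary). Truncating the series approximates e^x, and a purely imaginary argument splits into cos(x) + i sin(x) as described:

```python
import math

def exp_series(x, terms=30):
    """Partial sum of the series: sum of x^k / k! for k = 0 .. terms-1."""
    return sum(x**k / math.factorial(k) for k in range(terms))

# The truncated series matches the exponential function...
print(exp_series(1.5), math.exp(1.5))         # both ~4.481689

# ...and a purely imaginary argument gives cos(x) + i*sin(x).
print(exp_series(1.5j))                       # ~(0.0707 + 0.9975j)
print(complex(math.cos(1.5), math.sin(1.5)))  # same value
```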
 
Arron: what you have is that e = \lim_{n\to \infty} \left( 1 + \frac{1}{n}\right)^n. This is the x = 1 case of a more general result:

e^x = \lim_{n\to \infty} \left( 1 + \frac{x}{n} \right)^n.

Well, let's DEFINE e^x to be the amazing function which is its own derivative. From that we have a whole family of functions that are their own derivative, ae^x where a is some constant! Let the one we are working on be the member of that family with a = 1.

From the above equation, let's differentiate the right-hand side with respect to x. Since the limit does not affect the x, we can ignore that limit in computing the derivative. After applying the power and chain rules appropriately, we get the derivative to be (1 + x/n)^(n-1). Then, taking the limit again, can you see how the two expressions are in fact the same? Hence it is its own derivative, so it's e^x.

We could have approached this from the other way, in which we define e^x = \lim_{n\to \infty} \left( 1 + \frac{x}{n} \right)^n and show in the same fashion that it's its own derivative, and hence we have the property (e^x)' = e^x.
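
A quick numerical sanity check of this limit definition (just an illustration, not a proof):

```python
import math

# (1 + x/n)^n approaches e^x as n grows.
x = 2.0
for n in (10, 1000, 100000):
    print(n, (1 + x / n) ** n)    # 6.19..., 7.374..., 7.3889...
print(math.exp(x))                # 7.389056...
```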

I used one definition to get to the other result, and then I used the other definition to get to the leftover result. That's sort of what jostpuur meant when he said circular logic: I used a definition to get to a result, then used that result as a definition to 'prove' the original definition.
 
Gib Z, there are a lot of problems with your post.

Gib Z said:
Well let's DEFINE e^x to be the amazing function which is its own derivative.

It is not obvious that such a function exists at all, unless its existence is proven. The notation is also problematic, because a^x already has a previous meaning for all constants a. It should not be defined again.

From that we have a whole family of functions that are their own derivative, ae^x where a is some constant! Let the one we are working on be the function from the family in which a=1.

As you noted, if a function f exists so that Df = f, then there are more such functions too. Hence your earlier definition of f does not specify f uniquely. Noting that the other functions have the form af, where a is a constant, and then demanding a = 1, still does not give the function uniquely, because f wasn't unique in the first place.

From the above equation, let's differentiate the Right hand side, with respect to x. Since the
limit does not affect the x, we can ignore that limit in computing the derivative.

D \lim_{n\to\infty} f_n = \lim_{n\to\infty} Df_n
is never trivial. It is not even always true.
 
"It is not obvious, that such function exists at all, unless its existence is proven. The notation is also problematic, because a^x already has a previous meaning for all constants a. It should not be defined again."

Sorry, I should have noted the reason that such a function exists and why it would be an exponential function if there was one.
For a general exponential function f(x)=a^x,
f'(x) = \lim_{h\to 0} \frac{a^{x+h} - a^x}{h}, which comes directly from the definition of the derivative. I guess I made the mistake about the differential operator, but I'm quite sure constants can be taken out of limits.

Some simplifying gets f'(x) = a^x \lim_{h\to 0} \frac{a^h - 1}{h}.
We can see that this function will be its own derivative if the limit evaluates to 1.
The following is the furthest thing from a proof possible, but it can give us a hunch: substitute in values of h that are small, like 0.0001, and try a = 3. We get 1.0986..., quite close to one. How about we try a = 2.7? Then it's 0.9933. We know exponential functions are strictly increasing, so the value of a we seek lies somewhere between 2.7 and 3. If we used some root-finding method (not Newton's method, because that requires finding the derivative), we would eventually land somewhere around 2.718281828. That gives me evidence there is some value where the limit evaluates to 1.
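
For what it's worth, here is that root search as a Python sketch (bisection over [2.7, 3] with a small fixed h; both choices are mine, and the accuracy is limited by h):

```python
def limit_quotient(a, h=1e-8):
    """Approximate lim_{h -> 0} (a^h - 1) / h using a small fixed h."""
    return (a ** h - 1) / h

# The quotient is ~0.9933 at a = 2.7 and ~1.0986 at a = 3, so the
# value of a where it equals 1 lies in between; bisect on that.
lo, hi = 2.7, 3.0
for _ in range(50):
    mid = (lo + hi) / 2
    if limit_quotient(mid) < 1:
        lo = mid
    else:
        hi = mid
print(lo)   # ~2.7182818..., the familiar digits of e
```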
 
Here are some calculations. This is the way I would start doing this. Using Newton's binomial formula we get

e := \lim_{n\to\infty}\big(1 + \frac{1}{n}\big)^n = \lim_{n\to\infty}\sum_{k=0}^n \frac{n!}{k!(n-k)!}\frac{1}{n^k}

=\lim_{n\to\infty}\big(1\; +\; \frac{n}{1}\frac{1}{n}\; +\; \frac{n(n-1)}{2}\frac{1}{n^2}\; +\; \frac{n(n-1)(n-2)}{2\cdot 3}\frac{1}{n^3}\; +\; \frac{n(n-1)(n-2)(n-3)}{2\cdot 3\cdot 4}\frac{1}{n^4}\;+\cdots+\;\frac{n!}{n!}\frac{1}{n^n}\big)

=\lim_{n\to\infty}\big(1\; +\; 1\; +\; \frac{n-1}{2n}\; +\; \frac{(n-1)(n-2)}{2\cdot 3 n^2}\; +\; \frac{(n-1)(n-2)(n-3)}{2\cdot 3\cdot 4 n^3}\; + \cdots+\;\frac{(n-1)!}{n!}\frac{1}{n^{n-1}}\big)

Let us study the individual terms of this sum. It is quite easy to see that for each k there exists a polynomial p of order k-2 such that we get the following limit

\frac{(n-1)(n-2)\cdots(n-k+1)}{k!\; n^{k-1}} = \frac{n^{k-1} + p(n)}{k!\; n^{k-1}} \underset{n\to\infty}{\to} \frac{1}{k!}

Thus it is very easy to believe that the limit of the sum gives

\lim_{n\to\infty}\big(1 + \frac{1}{n}\big)^n = 1 + 1 + \frac{1}{2} + \frac{1}{3!} + \cdots = \sum_{k=0}^{\infty} \frac{1}{k!}
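
Numerically the two sides do agree, though the limit form converges far more slowly than the series (a quick illustration):

```python
import math

n = 10**6
limit_form = (1 + 1 / n) ** n                               # (1 + 1/n)^n
series_form = sum(1 / math.factorial(k) for k in range(20))
print(limit_form)    # 2.7182804... (error roughly e / (2n))
print(series_form)   # 2.718281828459045
print(math.e)        # 2.718281828459045
```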

I did this calculation in high school, but later understood that it is in fact not yet a correct proof of this series representation of e. I'll give the following problem as an exercise. Suppose we have numbers

b_{11}
b_{21},\; b_{22}
b_{31},\; b_{32},\; b_{33}
...
so that each limit

\lim_{n\to\infty} b_{ni} =: c_i

exists. Is the equation

\lim_{n\to\infty}\sum_{i=1}^{n} b_{ni} = \sum_{i=1}^{\infty} c_i
true whenever both sides exist? (Hint: the correct answer is no.)

Anyway, the previous series representation of e is true, although I personally have never gone through the rigorous proof. I have an idea of how it can be done, though.

How to proceed from this with the exponential function? We can define a function

\textrm{exp}(x) := \sum_{k=0}^{\infty} \frac{x^k}{k!}

You can immediately see that \textrm{exp}(1)=e^1. After proving that exp is continuous and satisfies the identity

\textrm{exp}(x+y) = \textrm{exp}(x)\; \textrm{exp}(y)

we should already be quite close to \textrm{exp}(x)=e^x. Once the series representation for e^x is available, the rest of the properties become easier to prove.
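
For reference, that identity comes from multiplying the two series and regrouping by total degree (the Cauchy product, justified by absolute convergence), using the binomial theorem in the middle step:

\textrm{exp}(x)\,\textrm{exp}(y) = \sum_{m=0}^{\infty}\frac{x^m}{m!}\sum_{j=0}^{\infty}\frac{y^j}{j!} = \sum_{k=0}^{\infty}\frac{1}{k!}\sum_{m=0}^{k}\binom{k}{m}x^m y^{k-m} = \sum_{k=0}^{\infty}\frac{(x+y)^k}{k!} = \textrm{exp}(x+y)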
 
Here's another point of view. Suppose we want to find a function

f:\mathbb{R}\to\mathbb{R}

that satisfies the equation Df = f, but we don't have a clue what it should be. We can first consider an easier problem. Fix some constant \Delta x > 0. Is there a function

f:\{0,\;\Delta x,\; 2\Delta x,\; 3\Delta x,\ldots\}\to\mathbb{R}

that would satisfy the condition f(0)=1 and the equation

f(x) = \frac{f(x+\Delta x) - f(x)}{\Delta x}?

The equation can be turned into a recursion relation

f(x+\Delta x) = (1 + \Delta x)f(x)

With induction one can prove that the solution is

f(x)=(1 + \Delta x)^{x / \Delta x}

Now one might guess that if a function f that is its own derivative exists, it cannot be anything else but what we get with the limit \Delta x\to 0 of this analogous discrete problem. So

f(x) = \lim_{\Delta x\to 0}(1 + \Delta x)^{x / \Delta x} = \lim_{n\to\infty} \big(1 + \frac{1}{n}\big)^{nx}
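
This guess can be tested directly: iterating the recursion above is exactly Euler's method for Df = f, and the result approaches e^x as \Delta x shrinks. A rough Python sketch (helper name and step size are my own choices):

```python
import math

def discrete_exp(x, dx=1e-4):
    """Iterate f(t + dx) = (1 + dx) f(t) from f(0) = 1 up to t = x."""
    f = 1.0
    for _ in range(int(round(x / dx))):
        f *= 1 + dx
    return f

print(discrete_exp(1.0))   # ~2.71815, approaching e as dx -> 0
print(math.exp(1.0))       # 2.718281828459045
```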

I would very much like to know how e was found in the first place. Was it done by Euler, btw? I wouldn't be surprised if it was something like this.
 
It was by Napier, even before the invention of calculus. Napier, who discovered logarithms, used it as a base with special properties. Finally something I know that's correct =]
 
Thanks

Thanks everyone, I appreciate it so much. I am trying to understand you and make it all clear.
There is still a question: I found that e can be used everywhere, even in the Maxwell distribution equations, which belong to thermodynamics. Can you explain why it has so many uses?
 
In Lang's book, e^x was defined by the property that it is its own derivative... and starting from there, uniqueness and existence are proven...
 
I find this simple relationship intriguing.

\int_1^e \frac{dx}{x} = 1

That is, the area under the curve 1/x between 1 and e is exactly 1. Somehow I think nature uses this fact.
 
e^x occurs in so many things because a^x does. And all exponentials are "interchangeable":
a^x = e^{\ln(a^x)} = e^{x\ln(a)}
so that any exponential can be written as "e to a constant times x". It's not a matter of "Nature" using anything, but of e being particularly easy for us to use.

One thing that is done more often now in calculus books than used to be done is to define ln(x) by \int_1^x (1/t)\,dt. All of the properties of the log, including ln(x^y) = y\,ln(x), can be proven from that.

You can then define Exp(x) to be the inverse function to ln(x). Then if y = Exp(x), x = ln(y). For x nonzero, 1 = (1/x)ln(y) = ln(y^{1/x}). Going back to Exp, y^{1/x} = Exp(1), so that y = Exp(1)^x. If we define e to be Exp(1), then the inverse function to ln(x) is y = e^x.
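
Here is that construction sketched in code: ln built as a crude midpoint-rule quadrature of \int_1^x dt/t, and Exp recovered as its inverse by bisection (the step count and bracket are arbitrary choices of mine):

```python
def ln(x, n=20000):
    """Midpoint-rule approximation of the integral of 1/t from 1 to x."""
    dt = (x - 1) / n
    return sum(dt / (1 + (i + 0.5) * dt) for i in range(n))

def Exp(x):
    """Invert ln by bisection: find y with ln(y) = x."""
    lo, hi = 0.5, 16.0   # bracket assumed wide enough for the x we use
    for _ in range(50):
        mid = (lo + hi) / 2
        if ln(mid) < x:
            lo = mid
        else:
            hi = mid
    return lo

print(Exp(1.0))   # ~2.7182818..., recovering e = Exp(1)
```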
 