Can a Periodic Function and Gamma Function Solve a Functional Equation?

In summary, the limit \lim_{x \rightarrow \infty} \left( 1 + \frac{1}{x} \right) ^ x = e can be proven using the squeeze theorem and the definition of e as \lim_{n \rightarrow \infty} \left( 1 + \frac{1}{n} \right) ^ n. The properties of e^x can then be derived from its definition and the chain rule.
  • #1
Orion1

Is this identity true?

Identity:
[tex]\frac{d}{dx} x^n = \lim_{h \rightarrow 0} \frac{(x + h)^n - x^n}{h} = nx^{n-1}[/tex]
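A quick numerical sanity check of the difference quotient (a minimal Python sketch; the sample values of x, n, and h are illustrative):

[code]
def diff_quotient(f, x, h):
    """Forward difference quotient (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x, n = 2.0, 5
exact = n * x ** (n - 1)  # n*x^(n-1) = 80.0
for h in (1e-2, 1e-4, 1e-6):
    approx = diff_quotient(lambda t: t ** n, x, h)
    print(h, approx, abs(approx - exact))
# the error shrinks with h, consistent with the identity
[/code]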

 
  • #2
Orion1 said:

Is this identity true?
Identity:
[tex]\frac{d}{dx} x^n = \lim_{h \rightarrow 0} \frac{(x + h)^n - x^n}{h} = nx^{n-1}[/tex]

Sure it's true, for non-negative integer n. Even for all [itex]n\neq -1[/itex].
 
  • #3
benorin said:
Sure it's true, for non-negative integer n. Even for all [itex]n\neq -1[/itex].

?? Even for n= -1 (as long as x is not 0) and, indeed, for n any complex number (again, as long as x is in the domain). That's why that formula is typically the first derivative formula one learns in calculus!
 
  • #4


Is it possible to solve the limit part of this equation in a classical way to produce the solution?

[tex]\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h}[/tex]
 
  • #5
Orion1 said:

Is it possible to solve the limit part of this equation in a classical way to produce the solution?
[tex]\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h}[/tex]
Yes, it is.
Lemma:
[tex]\lim_{x \rightarrow 0} \frac{\ln(1 + x)}{x} = 1[/tex]
Proof:
[tex]\lim_{x \rightarrow 0} \frac{\ln(1 + x)}{x} = \lim_{x \rightarrow 0} \frac{1}{x} \ \ln(1 + x) = \lim_{x \rightarrow 0} \ln \left[ (1 + x) ^ \frac{1}{x} \right] = \ln \lim_{x \rightarrow 0} \left[ (1 + x) ^ \frac{1}{x} \right] = \ln \lim_{y \rightarrow \infty} \left[ \left( 1 + \frac{1}{y} \right) ^ y \right] = \ln e = 1[/tex].
(y = 1 / x)
----------------------
[tex]\lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h} = \lim_{h \rightarrow 0} \frac{\ln \left( \frac{x + h}{x} \right)}{h} = \lim_{h \rightarrow 0} \frac{\ln \left( 1 + \frac{h}{x} \right)}{h} = \lim_{h \rightarrow 0} \frac{1}{x} \ \frac{\ln \left( 1 + \frac{h}{x} \right)}{ \left( \frac{h}{x} \right)} = \lim_{k \rightarrow 0} \frac{1}{x} \ \frac{\ln \left( 1 + k \right)}{k} = \frac{1}{x}[/tex] (Q.E.D)
k = h / x
----------------------
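A numerical check of the result (a minimal Python sketch; the sample points and step size are illustrative):

[code]
import math

# the difference quotient of ln at several x, compared with 1/x
h = 1e-8
for x in (0.5, 1.0, math.e, 10.0):
    approx = (math.log(x + h) - math.log(x)) / h
    print(x, approx, 1.0 / x)
[/code]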
I think you can find this in your calculus book... :rolleyes:
 
  • #6
benorin said:
Sure it's true, for non-negative integer n. Even for all [itex]n\neq -1[/itex].

Sounds like the constraint for the integral analog of that formula, doesn't it? :uhh: I should sleep more...
 
  • #7
this is a nice argument vietdao29. however one can prove anything if you assume enough background. In my opinion the things you have used in your proof are more difficult to prove than the result you have proved using them.

For instance the existence alone of the log function and its continuity and various properties are all usually proved using the derivative formula. Of course they can be proved otherwise, but it is more difficult than the argument you are using them for.

I would ask how do you prove all those difficult properties you are using without knowing the derivative in advance?

One particular example, the limit calculation of e, is itself proved in my calculus book using the derivative formula for ln(x). Thus using my calculus book's argument to complete yours, would render it circular.

Dieudonne has a nice development of the log function and its continuity and multiplicativity properties without derivatives, defining the exponential as its inverse. It follows that the exponential is continuous, hence integrable.

He then shows that a^x can be written as a constant times the difference of two values of its own integral, and since the integral of a continuous function is differentiable, the differentiability and derivative formula for the exponential follows, and hence also that for the log.

still he does not derive the limit (1 + (1/x))^x --> e, as x --> infinity, directly.

this is usually derived by l'hopital's rule, i.e. exactly the reverse of your argument. thus i ask how you prove that limit without knowing any derivative formulas for ln or exp?

i.e. the limit you are assuming is essentially equivalent to the one you are proving, so you have not made any progress unless you show how to prove one of them without using the other.
 
  • #8
mathwonk said:
how do you prove that limit without knowing any derivative formulas for ln or exp?

[tex]\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h}[/tex]
 
  • #9
huh?

i am asking for a proof of lim x-->infinity [1 + 1/x]^x = e, which

does not use the limit:

lim h-->0 [ln(x+h)-ln(x)]/h = 1/x.
 
  • #10
Uhmm, my book defines e to be:
[tex]e = \lim_{n \rightarrow \infty} \left( 1 + \frac{1}{n} \right) ^ n, \ n \in \mathbb{N}[/tex]
It can be shown that this limit exists using the binomial theorem.
Letting
[tex]u_n = \left( 1 + \frac{1}{n} \right) ^ n[/tex] we can show that [itex]\{u_n\}[/itex] is increasing and bounded: below by 2 and above by 3.
From there, it can be shown that:
[tex]e = \lim_{x \rightarrow \infty} \left( 1 + \frac{1}{x} \right) ^ x, \ x \in \mathbb{R}[/tex] by the Squeeze theorem.
Since we have:
If [tex]\lim_{x \rightarrow \alpha} f(x) = L[/tex] then [tex]\lim_{x \rightarrow \alpha} f(x) ^ n = L ^ n[/tex] (n is some constant).
We then can show that:
[tex]e ^ x = \lim_{k \rightarrow \infty} \left( 1 + \frac{x}{k} \right) ^ k, \ k \in \mathbb{R}[/tex]
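A quick numerical illustration of that last limit (a Python sketch; x = 2 and the values of k are illustrative):

[code]
import math

x = 2.0
for k in (10, 1_000, 100_000):
    print(k, (1 + x / k) ** k)  # approaches e^x from below for x > 0
print(math.exp(x))              # 7.389056...
[/code]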
-------------------------
From here, my book assumes that e^x is continuous and increasing (since e > 1). (?)
They state that based on the fact that if a > 1 then a^x is increasing. However, they don't prove that fact. They say that it's generally accepted! (?)
I think I need to consult my maths teacher about this.
-------------------------
So I think about some other way:
We can prove that:
(x^a)' = ax^{a - 1}, for all a in the reals.
We can also prove the chain rule using limits.
That means:
[tex](e ^ x)' = \lim_{k \rightarrow \infty} \left( \left( 1 + \frac{x}{k} \right) ^ k \right)' = \lim_{k \rightarrow \infty} \left( 1 + \frac{x}{k} \right) ^ {k - 1} = e ^ x, \ k \in \mathbb{R}[/tex]
Using the fact that e^0 = 1, and (e^x)' = e^x, we can show that:
[tex]e ^ x = 1 + \sum_{n = 1} ^ \infty \frac{x ^ n}{n!} = \sum_{n = 0} ^ \infty \frac{x ^ n}{n!}[/tex]
From here, we can say that e^x is continuous and increasing, and we can prove some of its properties, like:
e^{a + b} = e^a e^b, ...
Since e^x is increasing, it must have an inverse function, known as ln(x) (whose graph is the reflection of the graph of e^x across the line y = x). So ln(x) is continuous (since e^x is continuous).

From here, I think I can show that (ln(x))' = 1 / x using the derivative of an inverse function. But since my book generally accepts things without proof, their way is much longer!
Using: ln(x) = a <=> e^a = x, we can show that e^{ln a} = a. From here, we can prove all properties of the log function, like:
[tex]\ln(x) - \ln(a) = \ln \left( \frac{x}{a} \right)[/tex]
[tex]\ln(x) + \ln(a) = \ln (xa)[/tex]
[tex]\ln(x ^ a) = a \ln (x)[/tex], ...
Is there any flaw in my reasoning?
Am I using good terminology? (English is not my mother tongue :tongue2:)
-------------------------
It would be nice if you could show me your book's definition of e, and how it proves some of the log and exp properties...
And may I know the name of the book?
 
  • #11
VietDao29 said:
Since we have:
If [tex]\lim_{x \rightarrow \alpha} f(x) = L[/tex] then [tex]\lim_{x \rightarrow \alpha} f(x) ^ n = L ^ n[/tex] (n is some constant).

agreed for integer n

VietDao29 said:
We then can show that:
[tex]e ^ x = \lim_{k \rightarrow \infty} \left( 1 + \frac{x}{k} \right) ^ k, \ k \in \mathbb{R}[/tex]

no, you can't say that. how did you go from integer n to k in R?

VietDao29 said:
From here, my book assumes that e^x is continuous and increasing (since e > 1). (?)
They state that based on the fact that if a > 1 then a^x is increasing. However, they don't prove that fact. They say that it's generally accepted! (?)
I think I need to consult my maths teacher about this.
-------------------------
So I think about some other way:
We can prove that:
(x^a)' = ax^{a - 1}, for all a in the reals.

how can you even prove that? what is x^a if a is not an integer or a rational?

VietDao29 said:
It would be nice if you could show me your book's definition of e, and how it proves some of the log and exp properties...
And may I know the name of the book?

exp(x) is the unique solution to f' = f, f(0) = 1; it exists and is well defined, and it has the power series we know and love.

In any case, e = 1 + 1 + 1/2! + 1/3! + ...

and you haven't proved that is equal to the limit of (1+1/n)^n.
 
  • #12
Let us be rigorous! Tell us what, then, is known at the point of the question, so that we avoid circular reasoning and knowledge of theorems more advanced than that which is to be proved: how is [itex]\ln x[/itex] defined? If it is as the inverse of [itex]e^x[/itex], how was that defined? Do we admit such theorems as the product, quotient, and chain rules? The binomial theorem?

Are you looking for an [tex]\epsilon -\delta[/tex] proof? Be specific, please.
 
  • #13

mathwonk said:
huh?
i am asking for a proof of lim x-->infinity [1 + 1/x]^x = e, which
does not use the limit:
lim h-->0 [ln(x+h)-ln(x)]/h = 1/x.

It's right here.
 
  • #14
Oh, and log(x) is (equivalent to being defined as) the integral from 1 to x of 1/t dt. You can prove everything you want to about logs from that definition, by the way, i.e. that log(xy) = log(x) + log(y) and that log(x^r) = r log(x), in particular that log(1/x) = -log(x).
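A numerical illustration of that definition (a minimal Python sketch using a trapezoid rule; the subdivision count is illustrative):

[code]
import math

def log_via_integral(x, n=100_000):
    """Trapezoid-rule approximation of the integral of 1/t dt from 1 to x."""
    h = (x - 1.0) / n
    s = 0.5 * (1.0 + 1.0 / x)  # endpoint terms f(1)/2 + f(x)/2
    for i in range(1, n):
        s += 1.0 / (1.0 + i * h)
    return s * h

x, y = 2.0, 3.0
print(log_via_integral(x * y))                    # ~ log(6)
print(log_via_integral(x) + log_via_integral(y))  # log(xy) = log(x) + log(y)
print(math.log(x * y))
[/code]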
 
  • #15
matt grime said:
Oh, and log(x) is (equivalent to being defined as) the integral from 1 to x of 1/t dt. You can prove everything you want to about logs from that definition, by the way, ie that log(xy)=log(x)+log(y) and that log(x^r)=rlog(x), in particular that log(1/x)=-log(x)

And, perhaps most importantly, that its inverse is an exponential.
 
  • #16

In order to evaluate the limit definition of the natural logarithm's derivative, we must first prove the limit definition of the base e.

[tex]f(x) = \ln x[/tex]

[tex]f'(1) = \lim_{h \rightarrow 0} \frac{f(1 + h) - f(1)}{h} = \lim_{x \rightarrow 0} \frac{f(1 + x) - f(1)}{x}[/tex]

[tex]= \lim_{x \rightarrow 0} \frac{\ln(1 + x) - \ln 1}{x} = \lim_{x \rightarrow 0} \frac{1}{x} \ln(1 + x)[/tex]

[tex]= \lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}][/tex]

[tex]= \lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}] = \ln [\lim_{x \rightarrow 0} (1 + x)^{\frac{1}{x}}] = \ln e = 1[/tex]

[tex]f'(1) = 1[/tex]

[tex]\text{Theorem:}[/tex]
[tex]\text{If} \; f \; \text{is continuous at b and} \; \lim_{x \rightarrow a} g(x) = b \; \text{then} \; \lim_{x \rightarrow a} f(g(x)) = f(b)[/tex]

[tex]\lim_{x \rightarrow a} f(g(x)) = f (\lim_{x \rightarrow a} g(x))[/tex]

[tex]e = e^1 = e^{\lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}]} = \lim_{x \rightarrow 0} e^{\ln (1 + x)^{\frac{1}{x}}} = \lim_{x \rightarrow 0} (1 + x)^{\frac{1}{x}}[/tex]

[tex]\boxed{e = \lim_{x \rightarrow 0} (1 + x)^{\frac{1}{x}}}[/tex]

[tex]n = \frac{1}{x} \; \; \; n \rightarrow \infty \; \; \; x \rightarrow 0^+[/tex]

[tex]\boxed{e = \lim_{n \rightarrow \infty} (1 + \frac{1}{n})^n}[/tex]
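A numerical check of the boxed limit, from both sides of 0 (a Python sketch; the step sizes are illustrative):

[code]
import math

for x in (1e-2, 1e-5, -1e-5):
    print(x, (1 + x) ** (1.0 / x))  # approaches e as x -> 0
print(math.e)                       # 2.718281828...
[/code]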

Is this solution correct?

[tex]\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h} = \lim_{h \rightarrow 0} \ln [(x + h)^{\frac{1}{h}}][/tex]

[tex]\boxed{\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \ln [(x + h)^{\frac{1}{h}}]}[/tex]

[tex]\lim_{h \rightarrow 0} \frac{1}{x} \ln(x + h) = \frac{\ln x}{x}[/tex]

[tex]x = e[/tex]

[tex]\frac{\ln x}{x} = \frac{\ln e}{e} = \frac{1}{e}[/tex]

[tex]\boxed{\frac{d}{dx} (\ln x) = \lim_{h \rightarrow 0} \frac{\ln(x + h) - \ln x}{h} = \frac{1}{x}}[/tex]

 
  • #17
In all my proofs, I just use the fact that x^a (a an integer) is continuous.
Lemma 1:
If [itex]\lim_{x \rightarrow \alpha} f(x) = L[/itex] then [itex]\lim_{x \rightarrow \alpha} f(x) ^ n = L ^ n[/itex] (n is some real number).
Assume that f(x) > 0 for x in a neighbourhood of [itex]\alpha[/itex].
I will adopt the fact that f^a(x) (a an integer, positive or negative) is continuous, which can be shown by limits.
---------------
Given an irrational number b, I'm going to prove that g^b(x) is also continuous.
We define a sequence of functions {un(x)}:
[tex]u_n(x) = g ^ {j(n)} (x)[/tex], where j(n) is a function that will return a rational number in the range [itex]b \pm 10 ^ {-n}[/itex].
So we can define [tex]g ^ b(x) = \lim_{n \rightarrow \infty} u_n(x), \ n \in \mathbb{N}[/tex].
Since u_n(x) is continuous for every natural number n, g^b(x) must also be continuous.
---------------
From there, we can say that:
If [itex]\lim_{x \rightarrow \alpha} f(x) = L[/itex] then [itex]\lim_{x \rightarrow \alpha} f(x) ^ n = L ^ n[/itex]. (Q.E.D.)

Lemma 2:
(x^a)' = ax^{a - 1} (for all a).
If a is rational then we can show that (x^a)' = ax^{a - 1} (using lemma 1).
If a is irrational, then define xa as above:
[tex]x ^ a = \lim_{n \rightarrow \infty} v_n(x), \ n \in \mathbb{N}[/tex], where v_n(x) can be defined as:
[tex]v_n(x) = x ^ {j(n)}[/tex], where j(n) is a function that will return a rational number in the range [itex]a \pm 10 ^ {-n}[/itex].
[tex](v_n(x))' = \left( x ^ {j(n)} \right)' = j(n) x ^ {j(n) - 1}[/tex].
As j(n) will converge to a as n tends to infinity, we can say that:
[tex](x ^ a)' = \lim_{n \rightarrow \infty} (v_n(x))' = ax ^ {a - 1}[/tex]

Lemma 3:
We define {o_n} to be:
[tex]o_n = \left( 1 + \frac{1}{n} \right) ^ n[/tex]
By using the binomial theorem, we can say that 2 ≤ o_n < 3 and that {o_n} is increasing. That means {o_n} will converge to some value as n tends to infinity, and we denote that value by e.
Proof of the statement 2 ≤ o_n < 3:
[tex]o_n = 1 + n \frac{1}{n} + \frac{n(n - 1)}{2!} \frac{1}{n ^ 2} + ... + \frac{n!}{n!} \frac{1}{n ^ n}[/tex]
[tex]= 1 + 1 + \frac{1}{2!} \left( 1 - \frac{1}{n} \right) + \frac{1}{3!} \left( 1 - \frac{1}{n} \right) \left( 1 - \frac{2}{n} \right) + ... + \frac{1}{n!} \left( 1 - \frac{1}{n} \right) \left( 1 - \frac{2}{n} \right) ... \left( 1 - \frac{n - 1}{n} \right)[/tex] (***)
[tex]< 2 + \frac{1}{2!} + \frac{1}{3!} + ... + \frac{1}{n!} \leq 2 + \frac{1}{2} + \frac{1}{2 ^ 2} + \frac{1}{2 ^ 3} + ... + \frac{1}{2 ^ n} < 3[/tex].
From (***) we can show that {o_n} is increasing, and hence is bounded below by o_1 = 2.
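Lemma 3 can also be checked numerically (a minimal Python sketch; the range of n is illustrative):

[code]
prev = 0.0
for n in range(1, 1001):
    o_n = (1 + 1.0 / n) ** n
    assert 2 <= o_n < 3 and o_n > prev  # increasing, bounded between 2 and 3
    prev = o_n
print(prev)  # o_1000 = 2.71692...
[/code]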
Lemma 4:
[tex]\lim_{x \rightarrow \infty} \left( 1 + \frac{1}{x} \right) ^ x = e[/tex].
Taking the derivative of that (which we can do due to lemma 2), we have:
[tex]\left( \left( 1 + \frac{1}{x} \right) ^ x \right)' = \left( 1 + \frac{1}{x} \right) ^ {x - 1} > 0[/tex], hence:
[tex]\left( 1 + \frac{1}{x} \right) ^ x[/tex] is increasing.
For any x ≥ 1 there exists a natural number n such that n ≤ x ≤ n + 1. That means:
[tex]\left( 1 + \frac{1}{n} \right) ^ n \leq \left( 1 + \frac{1}{x} \right) ^ x \leq \left( 1 + \frac{1}{n + 1} \right) ^ {n + 1}[/tex]. Using the Squeeze theorem we can show that:
[tex]\lim_{x \rightarrow \infty} \left( 1 + \frac{1}{x} \right) ^ x = e[/tex] (Q.E.D)
Lemma 5:
[tex]e ^ x = \sum_{n = 0} ^ {\infty} \frac{x ^ n}{n!}[/tex]
From lemma 1, we can show that:
[tex]e ^ x = \lim_{k \rightarrow \infty} \left( 1 + \frac{x}{k} \right) ^ k[/tex]. Taking derivatives of that gives:
[tex](e ^ x)' = \lim_{k \rightarrow \infty} \left( \left( 1 + \frac{x}{k} \right) ^ k \right)' = \lim_{k \rightarrow \infty} \left( 1 + \frac{x}{k} \right) ^ {k - 1} = e ^ x[/tex].
Using e^0 = 1, and (e^x)' = e^x, we can show that:
[tex]e ^ x = \sum_{n = 0} ^ {\infty} \frac{x ^ n}{n!}[/tex] (Q.E.D).
From here, we can show another definition for e:
[tex]e = \sum_{n = 0} ^ {\infty} \frac{1}{n!}[/tex].
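Numerically, the series converges to e far faster than the defining sequence does (a Python sketch; the cutoff at n = 14 is illustrative):

[code]
import math

s, term = 1.0, 1.0
for n in range(1, 15):
    term /= n  # term = 1/n!
    s += term
print(s)                     # partial sum of 1/n!: 2.718281828...
print((1 + 1.0 / 15) ** 15)  # o_15 is still only 2.632878...
print(math.e)
[/code]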
From lemma 5, we can say that e^x is continuous and increasing. Hence it has an inverse function, denoted ln(x), whose graph is a reflection of the graph of e^x across the line y = x. Hence ln(x) is also continuous. Some of the proofs of the properties of exp and ln are shown in my earlier post.

Also from my earlier post, we can say that (ln(x))' = 1 / x, from which we can show your definition for ln(x):
[tex]\ln (x) = \int \limits_{1} ^ x \frac{dt}{t}[/tex]
Yes, I know Vietnamese books suck :yuck:. So please give me some advice and opinions, so I can expand my knowledge. And maybe I'll write to the publisher asking them to write clearer, better structured, and more accurate books. Is there any flaw in my reasoning? (Lemma 3 is taken out of my abstract algebra book.)
Yeah, reading back through my abstract algebra book, I find lots of things that are unclear, or are claimed to be generally accepted. So I would like to ask you guys for books in English that teach Calculus (all courses), Abstract Algebra, ... with good structure and reasonable, clear explanations, so that I can read them to consolidate and expand my knowledge.
Thanks,
 
  • #18
Orion1, I like what you have: very good. But I cannot figure out how you got this:

Orion1 said:
[tex]f'(1) = \lim_{x \rightarrow 0} \ln(1 + x)^{\frac{1}{x}} = 1[/tex]
[tex]f'(1) = 1[/tex]
 
  • #19
VietDao29 said:
So I would like to ask you guys for books in English that teach Calculus


I can supply you with a reference for the Calculus book that my college is currently using. My college is very selective regarding its mathematics books, and this is a recent publication:


Calculus - James Stewart 5e (5th edition)
ISBN: 0-534-39339-X

Available for Purchase:
http://websites.swlearning.com/cgi-wadsworth/course_products_wp.pl?fid=M2b&product_isbn_issn=053439339X&discipline_number=436

benorin said:
Orion1, I like what you have: very good. But I cannot figure how you got this:
[tex]f'(1) = \lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}] = 1[/tex]
[tex]f'(1) = 1[/tex]


This is the circular argument:

[tex]f(x) = \ln x[/tex]
[tex]f'(x) = \frac{1}{x}[/tex]
[tex]f'(1) = 1[/tex]

The equation should actually read:
[tex]\lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}] = 1[/tex]
[tex]f'(1) = 1[/tex]

[tex]\lim_{x \rightarrow 0} \ln [(1 + x)^{\frac{1}{x}}] = \ln [\lim_{x \rightarrow 0} (1 + x)^{\frac{1}{x}}] = \ln e = 1[/tex]

 
  • #20
matt grime said:
exp(x) is the unique solution to f' = f, f(0) = 1; it exists and is well defined, and it has the power series we know and love.

In any case, e = 1 + 1 + 1/2! + 1/3! + ...

Alternatively, [tex]\exp (kx),[/tex] k fixed, is the unique continuous non-trivial solution to the functional equation [tex]f(x+y)=f(x)f(y),\forall x,y\in\mathbb{R}[/tex]
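A spot check of the functional equation (a Python sketch; the value of k and the sample points are illustrative):

[code]
import math, random

k = 1.7
f = lambda x: math.exp(k * x)
for _ in range(5):
    x, y = random.uniform(-3.0, 3.0), random.uniform(-3.0, 3.0)
    assert math.isclose(f(x + y), f(x) * f(y), rel_tol=1e-12)
print("f(x + y) = f(x) f(y) holds at the sampled points")
[/code]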
 
  • #21
[tex]\lim_{x \rightarrow 0} \ln (1 + x)^{\frac{1}{x}} =\ln\left[ \lim_{x \rightarrow 0} (1 + x)^{\frac{1}{x}} \right] = \ln\left[ \lim_{y \rightarrow \infty} \left(1 + \frac{1}{y}\right) ^{y} \right] = \ln\left( e \right) =1[/tex]

and Maple says the same.
 
  • #22
re: post 13, thanks benorin, but that proof just makes my point as to how circular and ridiculous this whole approach is.

I.e. in your proof you are assuming the complete theory of the exponential function, including its power series.

If you know that much, then you know the exponential function is its own derivative, from which it follows immediately (by the chain rule) that the ln function has derivative 1/x.

so it is very disingenuous to pretend you are giving a direct proof of the derivative limit for log if you use that you know the derivative of e^x.

get my point? i.e. if we start with a knowledge of the exponential function as you are doing, then a much shorter argument for the limit of ln'(x) is just

ln'(x) = 1/exp'(ln(x)) = 1/x.

the book i recommend is Foundations of Modern Analysis, by Jean Dieudonne.
 
  • #23
the development of the exponential and log functions is quite challenging to do rigorously and completely. Then there is the challenge of being clear and intuitive.

Most books I have seen opt for the rigorous but unintuitive approach, and define ln(x) as the integral of 1/t, from t=1 to t=x.

then one proves easily this is differentiable with derivative 1/x, hence increasing and invertible, and the inverse is called exp(x).

it is also easy to prove that ln(cx) = ln(c) + ln(x) for all c, x, by taking the derivative of both sides and using the MVT, and the fact that ln(1) = 0.
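spelled out, the standard argument runs (a sketch):

[tex]\frac{d}{dx} \ln(cx) = \frac{c}{cx} = \frac{1}{x} = \frac{d}{dx} \ln(x)[/tex]

so ln(cx) - ln(x) has derivative zero, hence is constant by the MVT; setting x = 1 and using ln(1) = 0 shows the constant is ln(c).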

then the inverse function satisfies exp(x+y) = exp(x)exp(y) and is its own derivative.

this is a clean, completely thorough and rigorous approach, but does not answer the question: why in the world was that crazy definition chosen for ln(x)?

so it is more intuitive to look first at a definition for e^x, and not the power series either, since that one is not motivated until you know what the derivative of e^x should be.

so first you try to define say 2^x, because there is no reason to even suspect the existence of the number e until you have gone quite far in the discussion.

eventually by using the property that 2^(x+y) should equal (2^x)(2^y), which is easily proved for integer exponents, you can define 2^x for all rational x.

then you can try to prove there is a unique extension to all reals.

so this approach uses as a start the idea that exponential functions should satisfy the homomorphism law f(x+y) = f(x)f(y).

equivalently one can start from the law f(xy) = f(x) + f(y), for logs. either way you have to prove there is a non-zero continuous solution of these functional equations.

that is what dieudonne does.

there is no direct proof possible that 1/x satisfies the limit definition of ln'(x), it seems to me, without some unnatural definition of ln, or without taking for granted a lot of theory which is actually more sophisticated and difficult than the result one wants to derive.

i.e. by the only natural definition, ln is inverse to an exponential function, and not only that but one with a very esoteric base. so the hard parts are even defining 2^x and then e and then e^x adequately. once that is done, the derivative of ln(x) is easy.

of course by taking as a starting point something which essentially contains the end result you want, you can deduce anything.
 
  • #24
of course power series do offer an alternative approach to defining e^x, and it can be motivated as follows: after discovering the law f(x+y) = f(x)f(y) for exponentials, one assumes there is such a continuous function and tries to take its derivative. the usual limit definition then implies that IF the limit exists, then the derivative is a constant times the original function.

hence one might try to construct such a function as a power series, and of course the easiest way would be to assume the constant is one.
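spelled out (a sketch, writing c for the value of the last limit):

[tex]f'(x) = \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} = \lim_{h \rightarrow 0} \frac{f(x)f(h) - f(x)}{h} = f(x) \lim_{h \rightarrow 0} \frac{f(h) - 1}{h} = c \, f(x)[/tex]

here f(0) = 1 is used, which follows from the functional equation for any non-zero f.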

then one is led to the familiar power series for e^x, which one then must prove converges and is differentiable, etc etc.

i.e. to use this approach one must treat power series before one introduces either logs or exponentials, a rare approach in beginning calculus courses.

John Tate did it this way however in 1960 at Harvard in my own first calculus course. Later when I wound up in a less challenging course, I astonished my teacher by not being aware of any difficulty in treating logs, exponentials, and sines and cosines.

i.e. we did power series first, then exponentials and logs, including for complex numbers, and then defined sine and cosine as complex linear combinations of e^(ix) and e^(-ix).

this is ok if your class is strong enough to stomach power series before basic derivatives and integrals.

few books use this approach. stewart is a standard, above average, book that i believe uses the first rigorous approach i gave above.

dieudonne is a very strong book that proves by hand there is a non-zero continuous solution of the functional equation f(xy) = f(x) + f(y), and goes on from there.

he uses a trick, which i sketched above, to show the inverse function is differentiable: after proving it has the homomorphism law, he is able to write e^x in terms of its own integral, hence uses the FTC to deduce differentiability.
 
  • #25
so theoretically there are essentially 4 ways to do this:

1) to do exponentials first, prove for each a>0, there is a unique continuous function f such that f(x+y) = f(x) f(y) and f(1) = a. then deduce (as in dieudonne) that it is differentiable with derivative equal to a constant times itself.

2) using the discussion above as hindsight, go back and try to produce a differentiable function which equals its own derivative, e.g. using power series. then deduce the functional equation and hence the fact that it is an exponential.

or do logs first
3) given a > 0, produce by hand a continuous function f satisfying the law f(xy) = f(x) + f(y), and f(a) = 1. then show this function is differentiable somehow, e.g. by inverting it and using the second half of the derivation above.

4) produce by integration a function whose derivative is 1/x, then deduce the homomorphism law, invert the function to obtain the exponential.



this last approach is also nice for complex variables since it makes clear that the value of ln depends on the choice of path of integration. Of course the power series approach also works for exponentials in approach 2 above. For complex numbers the direct approaches 1 and 3 seem less feasible.
 
  • #26
benorin, your post #20 is wrong as stated, do you see why? hint: consider the word "unique".
 
  • #27
Thanks mathwonk, I found this interesting to read.
In my analysis course, we also did it by first defining ln(x) as that integral, showing that it satisfies the necessary properties, and then defining the inverse function exp(x). After reading this, I'm interested in seeing Dieudonné's approach completely :smile:
 
  • #28
mathwonk said:
benorin, your post #20 is wrong as stated, do you see why? hint: consider the word "unique".

Yeah, I left out the constant in the first type-up, hence "the unique", and "fixed" it later... How about "the unique family of solutions to..."?
 
  • #29
ok, so you would say that every solution has the form e^(cx) for some c. yes that seems to work.

Also one might say for each a>0, that a^x is the unique continuous solution such that f(1) = a.
 
  • #30
The unfavored approach to this exponential business, namely beginning with complex power series developments, is taken by Rudin in the Prologue of Real and Complex Analysis, 3rd ed., as may be viewed https://www.amazon.com/gp/product/0070542341/?tag=pfamazon01-20. Mind you this is my grad real analysis text, but it was, say :rolleyes: , 'stimulating'.
 
  • #31
TD: that approach in your book is the easiest of the three, at least if one proves the integrability of 1/x and the fundamental theorem of calculus.

now integrability is often not actually proved in most books, so to be honest that also is a gap in this so-called easy approach.

what I sometimes do in my course, instead of assuming integrability of continuous functions, which requires the concept of uniform continuity, is to prove integrability for monotone functions, which is much easier and covers the case of 1/x.

i.e. it is very easy to prove that the upper and lower Riemann sums for a monotone function on [a,b] converge to the same limit, as they differ by the product |f(b)-f(a)| times Δx, which obviously goes to zero as Δx does.

this is Newton's proof. then after knowing your monotone function is integrable, if it is also continuous it is very easy to prove the integral is differentiable, since the difference quotient [F(x+h)-F(x)]/h differs from f(x) by at most |f(x+h)-f(x)|, which approaches zero as h does by the definition of continuity.
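the convergence of the upper and lower sums is easy to see numerically as well (a Python sketch for f(t) = 1/t on [1, 2]; the subdivision counts are illustrative):

[code]
a, b = 1.0, 2.0
f = lambda t: 1.0 / t  # monotone decreasing
for n in (10, 100, 1000):
    dx = (b - a) / n
    upper = sum(f(a + i * dx) for i in range(n)) * dx        # left endpoints
    lower = sum(f(a + (i + 1) * dx) for i in range(n)) * dx  # right endpoints
    print(n, upper - lower, abs(f(b) - f(a)) * dx)           # the two agree
[/code]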


If you look in stewart you will probably find that he assumes integrability of continuous functions or maybe proves it in an appendix that most courses skip.
 
  • #32
Well we didn't use one of the standard textbooks (such as Stewart etc), but one which was written by the professor himself (just for our course, non-commercial). We showed integrability of (piecewise) continuous functions, using upper and lower sums and, as you said, by relying on uniform continuity.

We also introduced (and showed it was possible to define) trigonometric functions in the same way: by defining their inverse functions as integrals, restricting them so they become bijections, and then taking the inverses. It was interesting to see, but as you remarked, not a very intuitive approach (although I don't think that was the point; it just let us see that it's possible and how you could rigorously introduce these things).
 
  • #33
well the series approach is more advanced but it is nice, especially if motivated as i tried to do above. i overstated it when i said it should not be done that way; what i should have said was that it should be motivated first by convincing the reader that one is looking for a function which equals its own derivative. after that it is natural to use that definition.
as i said, it was done that way in my first freshman level calc course and i really liked it, as my appetite for rigor had never been met in high school trig courses.
Rudin of course cares nothing for motivation, and only for rigor and elegance, and although i respect his expertise, i do not enjoy his book for learning.
here is a nice application of that approach, which was something like problem 2 or 3 on one of our first homework assignments freshman year:
prove e defined as 1 + 1 + 1/2! + 1/3! + ... is irrational as follows:
assume e = n/m for some integers n, m > 0 and get a contradiction as follows:
if it were true then e·m! would be an integer, but prove that for all m > 0,
e·m! is never an integer, by direct estimation.
i.e. multiply m! by the series for e, and look at the terms which are obviously integers, and estimate the sum of the rest of the terms by comparing with a geometric series.
you will see that m!·e equals an integer plus a term which is strictly between 0 and 1, hence not an integer.
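the estimate is easy to see numerically (a Python sketch; the values of m are illustrative and kept small to stay within floating-point precision):

[code]
import math

for m in (2, 5, 10):
    me = math.factorial(m) * math.e
    frac = me - math.floor(me)
    print(m, frac)  # fractional part of m!*e is strictly between 0 and 1
[/code]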
 
  • #34
TD: I do like the approach via inverting integrals, as when generalized it leads to the beautiful elliptic functions, as inverses of the integral of 1/sqrt(1-x^4), which are so important in algebraic geometry and number theory, e.g. in the proof of fermat's last theorem.
this approach also explains the trick of "separating variables" in solving d.e.'s.
i.e. by the inverse function theorem, if f' = 1/P(x), then g'(x) = P(g(x)), where g is the inverse of f.
this is the whole basis for the so-called separable-variables technique, but i have never seen it so simply explained in any book. i noticed it myself this fall while teaching integral calculus and discussing exactly the "inverse of integrals" ideas we have been discussing.


i.e. to solve dg/dx = P(g) or dy/dx = P(y), you separate variables, getting

dy/P(y) = dx and integrate both sides to get G(y) = x, and then

y = H(x) where H = inverse of G. i.e. if G'(x) = 1/P(x), then H' = P(H), where H is the inverse of G. this is of course just the inverse function theorem, but is usually presented as magic.

(i hope i didn't screw this up too badly, but i have already corrected several typos and mental errors. by the way, notice that my posts are almost free of calculations and many others here above are almost entirely calculations. math is not really about calculation in my view. of course i am wrong, and merely compensating for my weak calculating ability.)
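the observation is easy to check numerically (a minimal Python sketch with P(y) = y, so that G = ln and H = exp; the sample point and step are illustrative):

[code]
import math

P = lambda y: y
H = math.exp                     # inverse of G(x) = ln(x), where G' = 1/P
x, h = 0.7, 1e-8
H_prime = (H(x + h) - H(x)) / h  # numerical derivative of H
print(H_prime, P(H(x)))          # both ~ e^0.7 = 2.01375..., i.e. H' = P(H)
[/code]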
 
  • #35
mathwonk said:
Rudin of course cares nothing for motivation, and only for rigor and elegance, and although i respect his expertise, i do not enjoy his book for learning.

Indeed! As a student presently learning from said text, I promptly cheered upon reading that. Thank you.

mathwonk said:
here is a nice application of that approach, which was something like problem 2 or 3 on one of our first homework assignments freshman year:
prove e defined as 1 + 1 + 1/2! + 1/3! + ... is irrational as follows:
assume e = n/m for some integers n, m > 0 and get a contradiction as follows:
if it were true then e·m! would be an integer, but prove that for all m > 0,
e·m! is never an integer, by direct estimation.
i.e. multiply m! by the series for e, and look at the terms which are obviously integers, and estimate the sum of the rest of the terms by comparing with a geometric series.
you will see that m!·e equals an integer plus a term which is strictly between 0 and 1, hence not an integer.

Nice.
 
