Robert1986 said:
That was a WONDERFUL explanation.
This should be included in EVERY calculus book, though sadly it isn't. It wasn't until analysis that I really understood what was going on.
Thank you. I'm glad you liked it. I will provide the details in this post.

Suppose that f(x+y)=f(x)f(y) for all x,y. Setting y=0 shows that f(x)=f(x)f(0) for all x. If f≠0, there's a y such that f(y)≠0, and then f(y)=f(y)f(0) implies f(0)=1. (It might seem that f(0)=0 is also possible, but that would imply f(x)=f(x)f(0)=0 for all x, contradicting f≠0.)
Suppose further that f≠0. The above implies that f(0)=1. If there were a y such that f(y)=0, then f(x+y)=f(x)f(y)=0 for all x; since x+y ranges over all real numbers as x does, f would be identically zero, contradicting f≠0. So there is no y such that f(y)=0. In other words, f(x)≠0 for all x.
Suppose further that f is differentiable. Our assumptions imply that f'(x)=f(x)f'(0) for all x. (Differentiate f(x+y)=f(x)f(y) with respect to y, and then set y=0).
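As a quick numerical sanity check (not part of the argument), we can verify the identity f'(x)=f(x)f'(0) for a candidate solution. Here I use the hypothetical example f(x)=e^{kx} with an arbitrarily chosen k=2, and a finite-difference helper that is just for illustration:

```python
import math

# Hypothetical candidate solution f(x) = e^{kx}; k = 2 is an arbitrary choice.
k = 2.0
def f(x):
    return math.exp(k * x)

def deriv(g, x, h=1e-6):
    # Central finite-difference approximation of g'(x).
    return (g(x + h) - g(x - h)) / (2 * h)

x = 0.7
lhs = deriv(f, x)           # f'(x)
rhs = f(x) * deriv(f, 0.0)  # f(x) * f'(0)
assert abs(lhs - rhs) < 1e-4  # the two sides agree to numerical precision
```

Any other value of k (or of x) gives the same agreement, which matches the fact that the identity was derived for every function satisfying our assumptions.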
Let t be an arbitrary real number and define a function g by g(x)=f(tx) for all x. Clearly, g≠0. By the chain rule, g is differentiable, and g'(x)=f'(tx)t for all x. For all x,y, g(x+y)=f(t(x+y))=f(tx+ty)=f(tx)f(ty)=g(x)g(y). So g satisfies all the conditions we have imposed on f. Since t is an arbitrary real number, this means that if there's one function that satisfies those conditions, there are infinitely many. However, if t≠1, then f'(0)≠g'(0). This means that we can narrow our search by only considering functions with a specified value of the first derivative at 0.
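The rescaling argument can also be checked numerically. This sketch uses Python's math.exp as a stand-in for the function whose existence we are still discussing (it has f'(0)=1), and an arbitrary t=3:

```python
import math

f = math.exp  # stand-in candidate with f'(0) = 1
t = 3.0       # arbitrary real scaling factor
def g(x):
    return f(t * x)  # g(x) = f(tx)

def deriv(fn, x, h=1e-6):
    # Central finite-difference approximation of fn'(x).
    return (fn(x + h) - fn(x - h)) / (2 * h)

x, y = 0.4, -1.1
# g satisfies the same functional equation as f...
assert abs(g(x + y) - g(x) * g(y)) < 1e-12
# ...but its derivative at 0 is t * f'(0), not f'(0),
# so specifying the derivative at 0 singles out one function.
assert abs(deriv(g, 0.0) - t * deriv(f, 0.0)) < 1e-4
```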
Suppose further that f'(0)=1. Then f'(x)=f(x)f'(0)=f(x) for all x. This implies that for all x,
$$\lim_{h\to 0}\frac{f'(x+h)-f'(x)}{h}=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}.$$ The limit on the right exists and equals f'(x), so f' is differentiable, and f''=f'. The latter result implies that f''(0)=f'(0)=1. Similarly, for all positive integers n, if the nth derivative of f exists at x, then so does the (n+1)th, and ##f^{(n+1)}=f^{(n)}##. By induction, this implies that f is smooth (i.e. that the derivatives of f of arbitrary order all exist).
Since ##f^{(n)}=f## for all n, and f is bounded on every bounded interval, the Taylor remainder at any fixed x goes to 0 as the order grows, so f can be expressed as a Taylor series around 0: For all x,
$$f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n =\sum_{n=0}^\infty \frac{x^n}{n!}.$$ We can also obtain this result without using Taylor's formula. We just need to know that f can be expressed as a power series, and that the coefficients of a power series are unique. For all x,
$$
\begin{align}
f(x) &=\sum_{n=0}^\infty a_n x^n\\
f'(x) &=\sum_{n=1}^\infty a_n n x^{n-1} =\sum_{n=0}^\infty a_{n+1} (n+1)x^n.
\end{align}
$$ Since f'=f, the two series must have the same coefficients, so for all non-negative integers n, we have ##a_{n+1}=a_n/(n+1)##. Since ##a_0=f(0)=1##, this formula gives us ##a_1=a_0/1=1## (consistent with ##f'(0)=a_1=1##), ##a_2=a_1/2=1/2!##, and so on; in general, ##a_n=1/n!##.
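The recurrence for the coefficients is easy to check mechanically. This sketch generates the first few coefficients from the recurrence and compares them to 1/n!:

```python
import math

# Coefficients from the recurrence a_{n+1} = a_n / (n+1), starting from a_0 = f(0) = 1.
a = [1.0]
for n in range(10):
    a.append(a[n] / (n + 1))

# Each a_n agrees with 1/n!, the coefficients of the exponential series.
assert all(abs(a[n] - 1 / math.factorial(n)) < 1e-15 for n in range(11))
```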
Note that we have only proved that
if there's a non-zero differentiable f such that f(x+y)=f(x)f(y) for all x,y, and f'(0)=1, then that f is given by the power series above. This result implies that there's
at most one f with those properties. We still need to prove that such an f exists. A good way to do that is to first prove that ##\sum_{n=0}^\infty\frac{x^n}{n!}## is convergent for all x. Then we define f by ##f(x)=\sum_{n=0}^\infty\frac{x^n}{n!}## for all x, and prove that the f defined this way is non-zero, differentiable, and satisfies f(x+y)=f(x)f(y) for all x,y.
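This is not a proof, but a numerical sketch of the existence step: partial sums of ##\sum_{n=0}^\infty\frac{x^n}{n!}## settle down quickly (the terms shrink factorially), and the resulting function behaves as required. The truncation at 30 terms is an arbitrary choice:

```python
import math

def exp_series(x, terms=30):
    # Partial sum of sum_{n=0}^{terms-1} x^n / n!, built term by term.
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term
        term *= x / (n + 1)  # x^{n+1}/(n+1)! from x^n/n!
    return total

x, y = 0.9, 1.3
# The partial sums already agree closely with the known exponential...
assert abs(exp_series(x) - math.exp(x)) < 1e-12
# ...and the function they define satisfies f(x+y) = f(x)f(y).
assert abs(exp_series(x + y) - exp_series(x) * exp_series(y)) < 1e-10
```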
I won't do that right now, but I might return to it later if I'm not too busy.
Edit: What HallsofIvy suggested is another good way to prove existence. It seems to avoid having to deal with convergence of series, but it requires that you understand Riemann integrals.