if you assume you have a continuous function f satisfying f(x+y) = f(x).f(y) for all x, y, with f(1) = a > 0, then f(x) does equal a^x, and it has derivative k.a^x for some constant k. but just showing f is differentiable at all is not easy, in my opinion.
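to see where that constant k comes from (granting for the moment the hard part, differentiability at 0), the functional equation pushes differentiability at one point out to every point:

```latex
f(0) = f(0)^2 \implies f(0) = 1 \quad (\text{since } f(0)=0 \text{ would force } f \equiv 0),
```

```latex
f'(x) = \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}
      = f(x)\,\lim_{h\to 0}\frac{f(h)-f(0)}{h}
      = f(x)\,f'(0),
```

so k = f'(0) (and once the whole theory is in place, k turns out to be ln a).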
proving that functions given by convergent power series are differentiable is not trivial either. the easiest way of dealing with exponentials is the usual one: use the fundamental theorem of calculus to prove that the function defined by the integral of 1/x is differentiable, then prove it is also invertible, use the chain rule (and the inverse function theorem) to show that the derivative of the inverse equals the inverse itself, and finally prove that the inverse satisfies the properties listed earlier that characterize an exponential function.
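here is a numerical sketch of that log-first route (illustration only, nothing here is a proof): approximate L(x) = integral from 1 to x of dt/t with the midpoint rule, invert it by bisection, and check that the inverse obeys the exponential law. the names L and E are mine, chosen for the sketch.

```python
def L(x, n=20_000):
    """Midpoint-rule approximation of integral_1^x dt/t."""
    h = (x - 1.0) / n
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(n))

def E(y, lo=1.0, hi=10.0, tol=1e-9):
    """Invert L by bisection: find x in [lo, hi] with L(x) = y (y >= 0 here)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if L(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a, b = 0.7, 1.1
print(E(a) * E(b))   # ≈ 6.0496
print(E(a + b))      # ≈ 6.0496 as well: E(a+b) = E(a).E(b)
```

the point of the numerics is just to see the functional equation emerge from the integral definition; the actual argument runs through the FTC and the inverse function theorem as described above.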
Maybe the series approach would be cleaner after all, but you have to deal with uniform convergence, which is usually considered more advanced than inverse functions and the FTC. It would be worth learning about, though.
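as a quick sketch of the series route: define the function by its power series and compute partial sums term by term. the ratio of successive terms is x/(k+1), which tends to 0 for every fixed x, and that is what the convergence (and, on bounded sets, uniform convergence) arguments exploit.

```python
def exp_series(x, n_terms=50):
    """Partial sum of sum_{k=0}^{n_terms-1} x^k / k!, built term by term."""
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # next term: x^(k+1) / (k+1)!
    return total

print(exp_series(1.0))   # ≈ 2.718281828..., the base e
```

of course the code only illustrates convergence; the real work in this approach is justifying term-by-term differentiation of the series.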
maybe the differential equations approach is worth trying too. In that approach you look at the d.e. y' = y, y(0) = 1, and you prove that any such function must also satisfy y(a+b) = y(a).y(b). Then you deduce that it is an exponential function of the form a^x for some a, and you simply define e to be that value of a. Of course in this approach you have to prove the d.e. has a solution.
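just to see that multiplicative law come out of the d.e., here is a sketch that integrates y' = y, y(0) = 1 numerically with classical fourth-order Runge-Kutta and checks y(a+b) = y(a).y(b). this is only an illustration; the proof needs uniqueness of solutions, not numerics.

```python
def solve(t_end, h=1e-3):
    """Classical RK4 for y' = y, y(0) = 1; returns the value y(t_end)."""
    y = 1.0
    for _ in range(int(round(t_end / h))):
        k1 = y                      # slope at start of step (f(y) = y here)
        k2 = y + 0.5 * h * k1
        k3 = y + 0.5 * h * k2
        k4 = y + h * k3
        y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return y

a, b = 0.6, 0.9
print(solve(a) * solve(b))   # ≈ 4.4817
print(solve(a + b))          # ≈ 4.4817 as well
```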
there is a nice proof of existence of solutions of first order d.e.'s using sequences of approximations, which should generate something like the series expansion of e^x.
so this approach works backwards: first you prove there is some function which equals its own derivative, and afterwards you prove that function is an exponential function. the method of proof may then give you a series expansion, and you can compute the base e via that expansion.
i think that proof method is called picard's method. here is a link that may help you.
http://www.math.msu.edu/~seal/teaching/f09/picard_iteration.pdf
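for what it's worth, here is that iteration run on y' = y, y(0) = 1, writing each iterate y_{n+1}(x) = 1 + integral from 0 to x of y_n(t) dt as a list of polynomial coefficients (the name picard_step is mine). each iterate comes out as exactly a partial sum of the series for e^x, which is how the base e can be computed from this approach.

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iterate for y' = y: map y_n, given as coefficients
    [c0, c1, ...], to 1 + integral_0^x y_n(t) dt."""
    # integrating shifts each coefficient up a degree and divides by the new power
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

y = [Fraction(1)]          # y_0(x) = 1
for _ in range(5):
    y = picard_step(y)

print(y)   # coefficients 1, 1, 1/2, 1/6, 1/24, 1/120 (printed as Fraction objects)
```

the coefficients are 1/k!, so the iterates converge to the exponential series, matching the existence proof sketched in the link.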