Derivatives of Standard Functions

In summary: I don't remember where most of the maths I use comes from, whether because I didn't take my studies seriously until the end of undergrad or because it was simply never taught to me. I don't know how to complete the proof of the derivative of an elementary function without specialised knowledge, and I suspect this is common: we typically learn mathematics procedurally and never really learn why the things we do are actually valid.
  • #1
madness
I can't help feeling these days that I don't actually understand where most of the maths I use comes from. Unfortunately, I can't remember whether this is because I didn't take my studies seriously until the end of undergrad, or because these things were never actually taught to me.

One example is the derivatives of the elementary functions I use regularly (trig functions, exponentials, etc.). Let's take the exponential function. It's easy to show from the definition of a derivative that $$\frac{d a^x}{dx} = a^x \lim_{h\rightarrow 0}\frac{a^h -1}{h}$$ (at least for ##a\neq 0##). However, I don't know how to evaluate that limit to complete the proof. So my concrete question is: can one actually complete this proof in a straightforward way, that is, without other specialised knowledge of the function (e.g. its power series definition, its relationship to the logarithm, etc.)?

I would be in the same situation with all the common functions, and the situation extends beyond calculus to other fields such as linear algebra. For example, it struck me from reading another thread here that to prove that every complex square matrix has at least one eigenvalue, one must use the fundamental theorem of algebra, whose proof I didn't see until I took algebraic topology. So my more general question is: do we (most of us?) typically learn mathematics procedurally/operationally, without ever really knowing why the things we do are actually valid? Or is it just me?
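For completeness, the first step above is just a factoring of the difference quotient: $$\frac{d\, a^x}{dx} = \lim_{h\rightarrow 0}\frac{a^{x+h}-a^x}{h} = \lim_{h\rightarrow 0}\frac{a^x\left(a^h -1\right)}{h} = a^x \lim_{h\rightarrow 0}\frac{a^h -1}{h},$$ since ##a^x## does not depend on ##h## (provided the remaining limit exists).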

PS: I'm aware that I've interspersed a number of different questions and points here; I hope no one takes umbrage!
 
  • #2
You listed a lot of things which cannot be used. The result, however, is ##\log a##, so how are we allowed to use the logarithm?
 
  • #3
fresh_42 said:
You listed a lot of things which cannot be used. The result, however, is ##\log a##, so how are we allowed to use the logarithm?

Ok, so we might admit (or prove?) that the exponential function has an inverse and choose to call that inverse the logarithm. What would we do next to complete the proof?
 
  • #4
The usual definition of ##e## is ##e=\lim_{t\to\infty}\left(1+\frac{1}{t}\right)^t.## So the limit you want to compute with ##a=e## is

##\lim_{h\to 0}\lim_{t\to\infty}\frac{\left(1+\frac{1}{t}\right)^{th}-1}{h}.## You can expand with the binomial theorem and take limits to get ##1.##

And once you have that ##\frac{d}{dx}e^x=e^x##, then in general ##\frac{d}{dx}a^x=\frac{d}{dx}e^{x\ln(a)}=\ln(a)a^x.##
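As a quick numerical sanity check (not a proof, and only a sketch), one can watch the difference quotient ##\frac{a^h-1}{h}## approach ##\ln(a)## as ##h## shrinks; a minimal Python snippet:

import math

# Difference quotient (a^h - 1)/h for shrinking h, compared with ln(a).
for a in (2.0, math.e, 10.0):
    for h in (1e-2, 1e-4, 1e-6):
        dq = (a**h - 1.0) / h
        print(f"a={a:.3f}  h={h:.0e}  (a^h-1)/h={dq:.8f}  ln(a)={math.log(a):.8f}")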
 
  • #5
I would first prove ##\dfrac{d}{dx}e^x = e^x## and from that ##\dfrac{d}{dx}\log x =\dfrac{1}{x}## by the chain rule. Finally, setting ##y=a^x## so that ##\log y = x\log a##, we get
$$
\dfrac{d}{dx} \log y = \dfrac{d}{dx}(x\log a)=\log a= \dfrac{\dfrac{d}{dx} y}{y} \Longrightarrow y'= y\log a = a^x\log a
$$
This uses the arithmetic rules of the logarithm and that the exponential function solves ##y'=y\, , \,y(0)=1.##
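The chain-rule step, written out: differentiating both sides of ##e^{\log x} = x## gives $$e^{\log x}\cdot \dfrac{d}{dx}\log x = 1 \quad\Longrightarrow\quad \dfrac{d}{dx}\log x = \dfrac{1}{e^{\log x}} = \dfrac{1}{x}.$$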
 
  • #6
Infrared said:
The usual definition of ##e## is ##e=\lim_{t\to\infty}\left(1+\frac{1}{t}\right)^t.## So the limit you want to compute with ##a=e## is

##\lim_{h\to 0}\lim_{t\to\infty}\frac{\left(1+\frac{1}{t}\right)^{th}-1}{h}.## You can expand with the binomial theorem and take limits to get ##1.##

And once you have that ##\frac{d}{dx}e^x=e^x##, then in general ##\frac{d}{dx}a^x=\frac{d}{dx}e^{x\ln(a)}=\ln(a)a^x.##

And where does that definition of e come from? If I wanted to do the proof from first principles, I would first need to prove that ##\lim_{t\to\infty}\left(1+\frac{1}{t}\right)^t## exists; then I would be happy to call it "e". Following that, I'm not sure what you mean by expanding with the binomial theorem. Do you expand for integer or non-integer exponents ##th##?
 
  • #7
fresh_42 said:
I would first prove ##\dfrac{d}{dx}e^x = e^x##

I don't know how to prove that without invoking theorems I don't know the proof of. See my above reply to Infrared.
 
  • #8
Mathematics is a discipline where results are built on earlier results. If you do not allow earlier results, you will have a long way to go from the Zermelo-Fraenkel axioms and the axioms of arithmetic to any kind of differentiation. Otherwise, we will have to take something substantial as given.
 
  • #9
There is a binomial theorem valid for non-integer exponents: see https://proofwiki.org/wiki/Binomial_Theorem/General_Binomial_Theorem

It takes a little work to prove it, but it certainly doesn't rely on properties of the exponential function. Alternatively, you could pick the integer ##n## with ##n\leq th\leq n+1##, apply the standard binomial theorem there, and then bound. I'm sure this would work, but I guess it would be a little tedious.
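For reference, the general binomial theorem linked above states that for any real ##\alpha## and ##|x|<1##, $$(1+x)^{\alpha} = \sum_{k=0}^{\infty}\binom{\alpha}{k}x^k, \qquad \binom{\alpha}{k} = \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}.$$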

To show that the limit exists, you want to show that ##(1+1/t)^t## is increasing in ##t## and bounded; this is a standard exercise.

Also, if you would be happy using a different definition of ##e##, choosing ##e:=\sum_{n=0}^\infty 1/n!## would probably make your life a bit easier (although it's not too hard to show these definitions are equivalent).
 
  • #10
As far as I can tell from these responses, I was probably never actually taught, via mathematical proof, where these things come from. I imagine it would get even more difficult if I asked how to prove the derivatives of sine and cosine. But it's reassuring to see that, at least for the exponential function, the derivative can be derived using only high-school-level algebra and the definition of a derivative (the proof that the limit defining e exists probably requires somewhat more advanced analysis, but that's ok).
 
  • #11
Same problem: what are ##\cos## and ##\sin##? There are so many ways to define them. Some are easier to differentiate than others.
 
  • #12
Infrared said:
To show that the limit exists, you want to show that ##(1+1/t)^t## is increasing in ##t## and bounded; this is a standard exercise.

Out of curiosity, how would you show that it is bounded? Would you use the generalised binomial theorem again?
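(One standard way to get boundedness for integer ##t##, sketched here as an aside: by the ordinary binomial theorem, $$\left(1+\frac{1}{t}\right)^t = \sum_{k=0}^{t}\binom{t}{k}\frac{1}{t^k} = \sum_{k=0}^{t}\frac{1}{k!}\prod_{j=0}^{k-1}\left(1-\frac{j}{t}\right) \le \sum_{k=0}^{t}\frac{1}{k!} \le 1+\sum_{k=1}^{t}\frac{1}{2^{k-1}} < 3,$$ using ##k!\ge 2^{k-1}## for ##k\ge 1##.)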
 
  • #13
fresh_42 said:
Same problem: What are ##\cos## and ##\sin##? There are so many ways to define them. Some are easier to differentiate and some are less.

Just to be difficult, I'd define them as ratios of two sides of a right triangle. That's certainly how they're taught before we learn calculus.
 
  • #14
madness said:
Just to be difficult, I'd define them as ratios of two sides of a right triangle. That's certainly how they're taught before we learn calculus.
The shortest way in this case is probably to use the unit circle in the complex plane and write them by Euler's formula in terms of the exponential function.
 
  • #15
fresh_42 said:
The shortest way in this case is probably to use the unit circle in the complex plane and write them by Euler's formula in terms of the exponential function.

Sure, although most proofs of Euler's formula use either the power series definition of the exponential function or something similar: https://en.wikipedia.org/wiki/Euler's_formula#Proofs. The proof using polar coordinates (https://en.wikipedia.org/wiki/Euler's_formula#Using_polar_coordinates) could be used if we allow knowledge of how to differentiate the exponential function in the complex plane, which according to the above posts would require extending the binomial theorem to the complex plane. Otherwise, we could perhaps derive the power series definition of the exponential function as a corollary of the proof sketches above in this thread.
 
  • #16
madness said:
I imagine it would get even more difficult if I asked how to prove the derivatives of sine and cosine.
This isn't too bad, actually. You can prove using the squeeze theorem that
$$\lim_{h \to 0}\frac{\sin h}{h} = 1.$$ Then it's straightforward to show that
$$\lim_{h \to 0}\frac{\cos h-1}{h} = 0.$$ Then it's just a matter of using the angle-addition formulas for sine and cosine and the definition of the derivative.
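Putting those pieces together for sine, using ##\sin(x+h)=\sin x\cos h+\cos x\sin h##:
$$\frac{d}{dx}\sin x = \lim_{h \to 0}\frac{\sin(x+h)-\sin x}{h} = \lim_{h \to 0}\left(\sin x\cdot\frac{\cos h - 1}{h} + \cos x\cdot\frac{\sin h}{h}\right) = \cos x.$$ The computation for cosine is analogous and gives ##-\sin x##.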
 
  • #17
madness said:
Let's take the exponential function. It's easy to show from the definition of a derivative that $$\frac{d a^x}{dx} = a^x \lim_{h\rightarrow 0}\frac{a^h -1}{h}$$ (at least for ##a\neq 0##). However, I don't know how to evaluate that limit to complete the proof.
My old calculus book defined
$$\log x = \int_1^x \frac{du}{u}.$$ Starting with this definition, you can prove ##\log ab = \log a + \log b## and ##\log a^b = b \log a## where ##b## is rational. From the fundamental theorem of calculus, it follows that ##(\log x)'= \frac 1x##.
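For example, the product rule for the logarithm follows from the substitution ##u = av## in the integral: $$\log(ab) = \int_1^{ab}\frac{du}{u} = \int_1^{a}\frac{du}{u} + \int_a^{ab}\frac{du}{u} = \log a + \int_1^{b}\frac{dv}{v} = \log a + \log b.$$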

Because the log function is one-to-one, there exists an inverse function ##\exp##, and it satisfies ##\exp \log x = x## and ##\log \exp x = x##. By differentiating the latter, it follows that ##(\exp x)' = \exp x##.

Then for ##a>0## and rational ##b##, we have ##a^b = \exp(\log a^b) = \exp(b \log a)##. Up to this point, the book had defined ##a^b## only for rational values of ##b##. Since the right-hand side is defined for all values of ##b##, this relationship gives us an obvious way to define ##a^b## for all values of ##b##. The derivative of ##a^x## then follows from the established properties of ##\exp## and the chain rule.

Finally, defining ##e## to be the value such that ##\log e = 1##, it follows that ##\exp x = e^x##.
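Written out, the final steps are: differentiating ##\log \exp x = x## gives ##\frac{(\exp x)'}{\exp x} = 1##, i.e. ##(\exp x)' = \exp x##, and then $$\frac{d}{dx}a^x = \frac{d}{dx}\exp(x\log a) = \exp(x\log a)\cdot\log a = a^x\log a.$$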
 
  • #18
madness said:
As far as I can tell from these responses, I was probably never actually taught via mathematical proof where these things come from.

That's quite possible, even if you were a diligent student. When I was reading introductory calculus texts (years ago), many of them did not prove the existence of ##\lim_{h \rightarrow 0} \frac{a^h-1}{h}##. As I recall, Johnson and Kiokemeister's Calculus with Analytic Geometry (3rd and 4th editions) did.
 

1. What are derivatives of standard functions?

Derivatives of standard functions describe the rate of change of a function at a specific point. Geometrically, the derivative is the slope of the tangent line to the graph at that point, and it can be used to determine the direction and rate of change of the function.
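Formally, the derivative of ##f## at ##x## is defined by the limit $$f'(x) = \lim_{h \to 0}\frac{f(x+h)-f(x)}{h},$$ whenever this limit exists.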

2. How do you find the derivative of a standard function?

To find the derivative of a standard function, you can use the rules of differentiation, such as the power rule, product rule, quotient rule, and chain rule. These rules allow you to calculate the derivative of a function based on its algebraic form.
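As an illustration (a minimal sketch, assuming the SymPy library is available), a computer algebra system applies these rules automatically:

import sympy as sp

x = sp.symbols('x')
f = x**3 * sp.exp(2*x)  # a product of a power and an exponential
# Differentiate using the power, product, and chain rules;
# the result is equivalent to 3*x**2*exp(2*x) + 2*x**3*exp(2*x).
print(sp.diff(f, x))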

3. What is the purpose of using derivatives of standard functions?

Derivatives of standard functions have many applications in science and engineering. They can be used to analyze the behavior of a system, optimize functions, and solve real-world problems involving rates of change, such as velocity, acceleration, and growth rates.

4. Can derivatives of standard functions be negative?

Yes, derivatives of standard functions can be negative. A negative derivative indicates that the function is decreasing at that point, while a positive derivative indicates that the function is increasing. The magnitude of the derivative represents the steepness of the function at that point.

5. How do derivatives of standard functions relate to integrals?

Derivatives and integrals are inverse operations: differentiating the integral of a function recovers the function, and integrating a derivative recovers the original function up to a constant. The integral of a function represents the area under its curve, while the derivative represents its rate of change. This relationship is made precise by the Fundamental Theorem of Calculus.
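In symbols, for a continuous function ##f##, $$\frac{d}{dx}\int_a^x f(t)\,dt = f(x),$$ so differentiation undoes integration.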
