Why Study Differentials?

ShawnD
One application of derivatives from first-year calculus is something called differentials. The idea is to estimate the change in a quantity using the derivative of a function and a small change in some variable, like time or distance.
Let's say you have this formula:
y = x^2
now here is the derivative:
dy/dx = 2x
now if you bring the dx over, it looks like this
dy = 2x dx
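As a quick sanity check (my own sketch, not part of the original post), you can approximate dy/dx with a small finite difference and compare it to 2x:

```python
# Finite-difference check that dy/dx = 2x for y = x^2.
x, h = 3.0, 1e-6
slope = ((x + h)**2 - x**2) / h   # (y(x+h) - y(x)) / h
print(slope, 2 * x)               # ~6.000001 vs 6.0
```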

In math class, these are meant to find changes in things. Let's say you wanted to find the change in y when x changes from 5 to 10. You would just fill in the equation like this:
dy = 2(5)(5)
dy = 50

The dy is your change in y, the first 5 is your original x value, and the second 5 is your change in x.

The differential said the change is 50. Now let's see what the original equation says the actual difference is:
final - original
= (new x)^2 - (old x)^2
= 10^2 - 5^2
= 100 - 25
= 75
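Here's the same comparison as a small Python sketch (mine, not from the thread), so you can see both numbers side by side:

```python
x, dx = 5, 5
estimate = 2 * x * dx            # differential estimate: dy = 2x dx = 50
exact = (x + dx)**2 - x**2       # actual change: 10^2 - 5^2 = 75
print(estimate, exact)           # 50 75
```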

The two different equations give VERY different answers. They're not even close. Knowing this, why do we still learn these?
 
dy/dx is the instantaneous rate of change of y with respect to x. Since y(x) = x^2 is nonlinear, you can't expect dy/dx to equal Δy/Δx.
 
Exactly. If they don't work, then why the hell do we learn them?
 
Because differentials work well when the quantities involved are small.

Let's stick to the f(x) = x^2 example, but with a smaller differential... how about x = 10 and δx = 1.

In this case, we have f(11) = 121, and the differential approximation gives f(11) ≈ f(10) + 1 · f'(10) = 100 + 20 = 120, which is pretty darn close.
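A quick numeric sweep (my own sketch, still assuming f(x) = x^2) shows how the estimate improves as δx shrinks; for this particular f, the error happens to be exactly δx^2:

```python
def f(x):
    return x**2

def estimate(x, dx):
    # First-order approximation: f(x + dx) ≈ f(x) + f'(x) dx, with f'(x) = 2x.
    return f(x) + 2 * x * dx

for dx in (5.0, 1.0, 0.1, 0.01):
    print(dx, f(10 + dx), estimate(10, dx), f(10 + dx) - estimate(10, dx))
# dx = 5:    225.0    vs 200.0   (error 25.0)
# dx = 1:    121.0    vs 120.0   (error 1.0)
# dx = 0.1:  102.01   vs 102.0   (error 0.01)
# dx = 0.01: 100.2001 vs 100.2   (error 0.0001)
```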

The relevant theorem is:

f(x + δx) = f(x) + f'(x) δx + ε(δx) δx, where ε(δx) → 0 as δx → 0

In other words, the error term in the approximation of f(x + δx) shrinks "quickly" as δx approaches 0. In fact, if f(x) is twice differentiable, you can prove that there exists a constant c such that |ε(δx)| < c |δx|, so the full error term ε(δx) δx is quadratic in δx. (This is the Taylor remainder theorem; a differential approximation is just a first-degree Taylor polynomial!)
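To see the theorem in action, here's a short Python sketch (mine, not from the thread) that solves for ε(δx) numerically, using sin as an example of a twice-differentiable function:

```python
import math

def eps(f, fprime, x, dx):
    # Solve f(x + dx) = f(x) + f'(x) dx + eps * dx for eps.
    return (f(x + dx) - f(x) - fprime(x) * dx) / dx

for dx in (1.0, 0.1, 0.01, 0.001):
    print(dx, eps(math.sin, math.cos, 1.0, dx))
# Each eps is roughly ten times smaller than the last,
# consistent with |eps(dx)| < c |dx|.
```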


edit: finally got the LaTeX right. :smile:
 
oh ok, that makes sense.
 