# Why do we learn differentials?

1. Nov 29, 2003

### ShawnD

One application of derivatives from first year calculus is something called differentials. The intent is to estimate the change in some quantity based on the derivative of a function and the change in some variable like time or distance.
Let's say you have this formula:
y = x^2
now here is the derivative:
dy/dx = 2x
now if you bring the dx over, it looks like this
dy = 2x dx

In math class, these are meant to find changes in things. Let's say you wanted to find the change in y when x changes from 5 to 10. You would just fill in the equation like this:
dy = 2(5)(5)
dy = 50

The dy is your change in y. The first 5 is your original x value; the second 5 is your change in x (dx = 10 - 5 = 5).

The differential said the change is 50. Now let's see what the original equation says the difference is:
final - original
= 10^2 - 5^2
= 100 - 25
= 75
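A quick Python check (my own sketch, not from the post) of the two numbers being compared, using the same y = x^2 with x going from 5 to 10:

```python
def f(x):
    return x ** 2

x, dx = 5, 5
dy_estimate = 2 * x * dx         # differential: dy = f'(x) dx = 2x dx
dy_exact = f(x + dx) - f(x)      # actual change: f(10) - f(5)

print(dy_estimate)  # 50
print(dy_exact)     # 75
```

The gap between 50 and 75 is exactly the discrepancy ShawnD is asking about.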

The two different equations give VERY different answers. They're not even close. Knowing this, why do we still learn these?

2. Nov 29, 2003

### jamesrc

dy/dx is the instantaneous rate of change of y with respect to x. Since $y(x) = x^2$ is nonlinear, you can't expect $dy/dx$ to equal $\Delta y / \Delta x$.

3. Nov 29, 2003

### ShawnD

Exactly. If they don't work then why the hell do we learn them?

4. Nov 29, 2003

### Hurkyl

Staff Emeritus
Because differentials work well when the quantities involved are small.

Let's stick to the $f(x)=x^2$ example, but with a smaller differential... how about $x=10$ and $\delta x=1$.

In this case, we have $f(11)=121$ and the differential approximation gives $f(11) \approx f(10) + 1 * f'(10) = 120$ which is pretty darn close.
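A small Python loop (my own illustration, with the same $f(x) = x^2$ at $x = 10$) shows the approximation tightening as $\delta x$ shrinks:

```python
def f(x):
    return x ** 2

def fprime(x):
    return 2 * x

x = 10
for dx in (1, 0.1, 0.01):
    approx = f(x) + fprime(x) * dx   # differential approximation
    exact = f(x + dx)                # true value
    print(dx, approx, exact, exact - approx)
# dx = 1    -> approx 120,   exact 121,      error 1
# dx = 0.1  -> approx 102,   exact ~102.01,  error ~0.01
# dx = 0.01 -> approx 100.2, exact ~100.2001, error ~0.0001
```

Notice the error drops by a factor of 100 each time $\delta x$ drops by a factor of 10.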

The relevant theorem is:

$$f(x + \delta x) = f(x) + f'(x)\, \delta x + \varepsilon(\delta x)\, \delta x \quad \text{where} \quad \lim_{\delta x \rightarrow 0} \varepsilon(\delta x) = 0$$

In other words, the error term in the approximation of $f(x+\delta x)$ shrinks "quickly" as $\delta x$ approaches 0. In fact, if $f(x)$ is twice differentiable, you can prove that there exists a constant $c$ such that $|\varepsilon (\delta x)| < c |\delta x|$, so the error term is quadratic in $\delta x$. (This is the Taylor remainder theorem; a differential approximation is just a first degree Taylor polynomial!)
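For $f(x) = x^2$ specifically, the quadratic error is exact, since $(x + \delta x)^2 = x^2 + 2x\,\delta x + \delta x^2$. A short check (my own, not Hurkyl's):

```python
# For f(x) = x^2 the leftover term in the differential approximation
# is exactly dx^2, consistent with the Taylor remainder bound.
x = 10
for dx in (1, 0.5, 0.25):
    error = (x + dx) ** 2 - (x ** 2 + 2 * x * dx)
    print(dx, error, dx ** 2)  # error equals dx^2
```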

edit: finally got the LaTeX right.

Last edited: Nov 29, 2003
5. Nov 29, 2003

### ShawnD

oh ok, that makes sense.