# Approximation with derivative

1. Jun 4, 2008

### yoran

Hi,

In my textbook they derive that a solution to the law of Faraday and the law of Ampère-Maxwell is an electromagnetic wave.

In one of the steps they have to calculate $$E(x+dx,t)$$ where E is the magnitude of the electric field of the wave. They say
$$E(x+dx,t) \approx E(x,t)+\frac{dE}{dx} \Big|_{t=constant}dx$$
On what theorem is this step based?

Thank you.

2. Jun 4, 2008

### Gib Z

The fact that a function can be well approximated near a point $a$ by its tangent line at $a$.

3. Jun 4, 2008

### yoran

So basically you do a linear approximation? In my textbook they write that it's an approximation, but later they use it as if it were an exact equality. Is this wrong, or can you do it because dx is infinitesimally small?

4. Jun 4, 2008

### kanato

That's generally known as a first order Taylor expansion. But it can be obtained from the definition of the derivative:

$$\frac{dE}{dx} = \lim_{dx \rightarrow 0}\frac{E(x+dx)-E(x)}{dx}$$

Forget about the limit and solve for E(x+dx). As an approximation, the result works as long as dx is small. The reason is that you can expand any sufficiently smooth function in a Taylor series, which is a power series in dx. Terms of order dx^2 and higher are dropped in an expansion like that, so the error for small, nonzero dx is roughly proportional to dx^2.
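As a quick numerical check of that scaling (a sketch in Python, using E(x) = sin(x) as an illustrative function not taken from the thread): halving dx should roughly quarter the error of the linear approximation, so error/dx^2 stays roughly constant.

```python
import math

# Illustrative example function and its exact derivative:
# E(x) = sin(x), E'(x) = cos(x).
def E(x):
    return math.sin(x)

def dE_dx(x):
    return math.cos(x)

x = 1.0
for dx in (0.1, 0.05, 0.025):
    exact = E(x + dx)
    linear = E(x) + dE_dx(x) * dx  # first-order Taylor approximation
    error = abs(exact - linear)
    # error / dx^2 should be roughly constant (about sin(1)/2 ~ 0.42,
    # the coefficient of the dropped second-order Taylor term)
    print(f"dx={dx:<6} error={error:.2e}  error/dx^2={error / dx**2:.3f}")
```

Each halving of dx cuts the error by about a factor of four, which is exactly the dx^2 behaviour kanato describes.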

5. Jun 4, 2008

### tiny-tim

Hi yoran!

Yes … if dx is only very small, then it's only an approximation.

But if dx is infinitesimally small, then it's exact!

6. Jun 4, 2008

### Hurkyl

Staff Emeritus
Except when the error is a nonzero infinitesimal. :tongue:

(Tiny-tim and I are talking about other arithmetic systems -- in the real numbers, 0 is the only infinitesimal number.)

7. Jun 4, 2008

### yoran

Thanks a lot guys. I've seen Taylor and Maclaurin series in a Calculus course but I never applied it in a "real" context. Now I can see why it's so useful. It's much clearer now.