Differentiation: How does it overcome the 0/0 problem?

cscott
How does differentiation overcome the fact that as we let \Delta x \rightarrow 0 until it is truly 0, our two coordinates to take the slope with will be the same and thus give a slope of \frac{0}{0}?
 
\lim_{\Delta x \rightarrow 0} \Delta x = 0 \neq dx

we are considering the function \frac{dy}{dx}

and the limit of the whole thing, \lim_{\Delta x \rightarrow 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}

is different from the limit of its parts.
 
Try f(x)=x^2 in quetzalcoatl9's formula (the definition of the derivative).
Expand out the numerator. Do some algebra.
Lastly, take the limit.
See what you get.

Then try x^3, sin(x), exp(x), and so on.

In some sense what you're doing is measuring the relative rate between \Delta f (which depends on \Delta x) and \Delta x itself as \Delta x gets small.
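Worked out for that first suggestion (just filling in the algebra hinted at above): with f(x)=x^2,

\frac{f(x+\Delta x)-f(x)}{\Delta x} = \frac{(x+\Delta x)^2 - x^2}{\Delta x} = \frac{2x\Delta x + (\Delta x)^2}{\Delta x} = 2x + \Delta x,

and cancelling the \Delta x is legitimate precisely because \Delta x is never 0 while the limit is being taken. Only after the cancellation do we let \Delta x \rightarrow 0, which gives f'(x) = 2x. The 0/0 form never has to be evaluated.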
 
cscott said:
How does differentiation overcome the fact that as we let \Delta x \rightarrow 0 until it is truly 0, our two coordinates to take the slope with will be the same and thus give a slope of \frac{0}{0}?

They aren't the exact same coordinate. Hence the x+\Delta x.

I'm kind of confused on your question. If you could explain your thoughts more, perhaps we could help you more.
 
The question you need to be asking yourself is "what is a limit?" Once you have that down, it will be (more) clear how the definition of a derivative in terms of a limit works.
 
cscott said:
How does differentiation overcome the fact that as we let \Delta x \rightarrow 0 until it is truly 0, our two coordinates to take the slope with will be the same and thus give a slope of \frac{0}{0}?

To expand on Jameson's comment...

Jameson said:
They aren't the exact same coordinate. Hence the x + \Delta x

cscott, go back to the definition of a limit, which says:

Let (a,b) be an open interval containing c. Let f be a function defined on (a,b), except possibly at c.

\lim_{x \rightarrow c}f(x)=L iff for every \epsilon > 0, there exists \delta > 0 such that 0<|x-c|<\delta implies that |f(x)-L|<\epsilon.

Note that the second inequality really implies that 0 \leq |f(x)-L| < \epsilon, which means that f(x)-L can indeed equal zero. But in the first inequality, |x-c| is forever restricted to be above zero.

So in the definition of the derivative:

f'(x)=\lim_{\Delta x \rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x},

we can see that it is |\Delta x - 0| that is forever restricted to be above zero.
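To make that restriction concrete (a sketch reusing the f(x)=x^2 example from earlier in the thread, where the difference quotient simplifies to 2x + \Delta x):

\left|\frac{f(x+\Delta x)-f(x)}{\Delta x} - 2x\right| = |\Delta x| < \epsilon whenever 0 < |\Delta x| < \delta = \epsilon,

so the definition is satisfied with L = 2x, and nowhere is the quotient evaluated at \Delta x = 0.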

edit:

Hmmm...Why didn't LaTeX format this right?

\lim_{x \rightarrow c}f(x)=L
 
I think "itex" is limited to one line, so the x \rightarrow c can't fit under the lim.
 
Thanks for the new insight. I'm going to read more on limits and such.
 
or you could take the classical way out, of descartes etc.

i.e. when f is a polynomial, then f(x)-f(a) = [x-a][g(x)] for some factor g.

then you are asking essentially for the value of the quotient [f(x)-f(a)]/(x-a)

= [(x-a)g(x)]/(x-a) at x=a.

obviously you should factor out the x-a first; then the answer is g(a), which has nothing to do with the fact that you get 0/0 before you factor out.
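for instance (a concrete case of the same factoring argument, using f(x)=x^2 again): f(x)-f(a) = x^2 - a^2 = (x-a)(x+a), so g(x) = x+a, and the value of the quotient at x=a is g(a) = 2a.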


now in cases where you do not know how to factor, you plug in a sequence of numbers for x that are merely close to a, i.e. you let x = a+e for small e.

then you get the sequence of approximations
[(e) g(a + e)]/e = g(a+e). as e gets smaller and smaller, hopefully you are eventually able to guess what g(a) should be from all these approximations of form g(a+e).

if so you have "taken the limit" as x goes to a.
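here is a small numerical sketch of that idea (hypothetical python, assuming f(x) = x^2 and a = 3, so g(a) = 2a = 6):

def f(x):
    return x * x

a = 3.0
for e in [0.1, 0.01, 0.001, 0.0001]:
    # [f(a + e) - f(a)] / e equals g(a + e) = 2a + e for this particular f
    print(e, (f(a + e) - f(a)) / e)

the printed quotients come out close to 6.1, 6.01, 6.001, 6.0001, which is how one "guesses" that the limit g(a) should be 6.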
 