
Can it be argued that derivatives should be undefined?

  1. Jun 4, 2013 #1
I understand derivatives and I am not trying to be a stickler or anything, but before manipulating the expression to arrive at a form where we can find a real answer for a derivative, we are left with [f(x+h)-f(x)]/h (where h is delta x, I guess, as most people write it). Before evaluating it further, if we are taking the limit as h goes to zero, then wouldn't the expression, which once manipulated gives us a reasonable answer, be equal to (0-0)/0 for all derivatives? Why is it that we are allowed to ignore this form and use the manipulated form?

    I hope my question was clear, thanks in advance.
     
  3. Jun 4, 2013 #2

    Borek


    Staff: Mentor

    Just because the division is undefined doesn't mean the limit is undefined. These are two different things.
     
  4. Jun 4, 2013 #3

    HallsofIvy

    Science Advisor

    Have you considered looking at the graph of such a "difference quotient"? For example, if [itex]f(x)= x^2[/itex] then [itex]\frac{f(2+ h)- f(2)}{h}= \frac{(2+h)^2- 4}{h}= \frac{h^2+ 4h+ 4- 4}{h}= \frac{h^2+ 4h}{h}[/itex].

Now, as long as h is NOT 0, we can write that as [tex]\frac{h(h+ 4)}{h}= h+4[/tex] and its graph would be a straight line, with slope 1. The actual graph is that straight line with the point (0, 4) removed. That is, the value of the difference quotient is not defined at h= 0 but the limit, as h goes to 0, is 4. That is why we have to define the derivative in terms of the limit.
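
A quick numerical illustration of that hole at h = 0 (a Python sketch added for illustration, not part of the original post):

```python
# Evaluate the difference quotient (f(2+h) - f(2))/h for f(x) = x^2
# at shrinking nonzero h. The quotient is undefined at h = 0 itself,
# but its values approach the limit 4.

def f(x):
    return x ** 2

for h in [1.0, 0.1, 0.01, 0.001]:
    q = (f(2 + h) - f(2)) / h
    print(h, q)  # equals 4 + h exactly (up to rounding)
```

Plotting q against h would reproduce the straight line h + 4 with the single point at h = 0 missing.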

    (I have no doubt that you are good at manipulating the formulas of Calculus, but until you understand exactly why the limit is so important in Calculus, you do not really "understand Calculus". Actually, most students don't really "understand Calculus" until they have taken "Mathematical Analysis".)
     
  5. Jun 4, 2013 #4
I know, but in the power rule, we assume h goes to zero when we have 2x + h as our answer. So if we are assuming it goes to zero there, why not assume it goes to zero when it is in the denominator?
     
  6. Jun 4, 2013 #5
    Yeah, I get it but doesn't that mean that the derivative must have an infinitesimally small error at all times?
     
  7. Jun 4, 2013 #6

    pwsnafu

    Science Advisor

If you look at the numerator separately from the denominator, then you are right that each goes to zero. The point is that the numerator goes to zero at a rate proportional to the denominator, so the ratio goes to a finite number.

    The derivative has no error.
    Edit: Maybe I'm misunderstanding. Error respect to what?
     
    Last edited: Jun 4, 2013
  8. Jun 4, 2013 #7

    WannabeNewton

    Science Advisor

    Look at the ##\epsilon - \delta## definition of a limit in ##\mathbb{R}##. This will surely get rid of any misconceptions you have about the limit, especially the part about the "infinitesimally small error".
     
  9. Jun 4, 2013 #8
    No. Especially not if you use a more complicated definition of a derivative, but that's a bit advanced for right now. :wink:

The idea of a real limit is this: we say that ##\displaystyle \lim_{x\rightarrow\alpha}f(x) = \mathfrak{L}## if and only if, for all ##\epsilon>0##, there exists a number ##\delta>0## such that, for all x, the inequality ##0<\left|x-\alpha\right|<\delta## implies the inequality ##\left|f(x)-\mathfrak{L}\right|<\epsilon##.

What the heck does this mean, you ask? It means the condition has to hold no matter how small ##\epsilon## is: we can make ##\epsilon## ARBITRARILY small, and a suitable ##\delta## must still exist. In simpler terms, this basically means that f(x) gets as close as we like to ##\mathfrak{L}## as x approaches ##\alpha##. This does NOT mean that ##\displaystyle f(\alpha)=\lim_{x\rightarrow\alpha}f(x)##.
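
To make the quantifiers concrete, here is a small Python sketch (added for illustration) checking the ##\epsilon##-##\delta## condition for the simple limit ##\lim_{x\rightarrow 0}(2+x) = 2##, where the choice ##\delta = \epsilon## works:

```python
# For f(x) = 2 + x, claimed limit L = 2 as x -> 0.
# For any epsilon > 0, choosing delta = epsilon guarantees that
# 0 < |x - 0| < delta implies |f(x) - L| < epsilon,
# since |f(x) - 2| = |x| < delta = epsilon.

def f(x):
    return 2 + x

for eps in [0.5, 0.1, 0.001]:
    delta = eps  # the witness delta for this particular limit
    samples = [delta * t for t in (-0.9, -0.5, 0.5, 0.9)]  # 0 < |x| < delta
    assert all(abs(f(x) - 2) < eps for x in samples)
print("epsilon-delta condition holds at all sampled points")
```

For harder limits the witness ##\delta## depends on ##\epsilon## in a less trivial way, but the shape of the check is the same.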

    For a rather inflated and sugar-driven example, consider the function

    $$f:\mathbb{R}\rightarrow\mathbb{R}\cup\left\{TOOTSIEPOP\right\} \\ f(x) = \left\{\begin{matrix} 2, & x\neq 2 \\ TOOTSIEPOP, & x=2 \end{matrix}\right. .$$

For all values of x other than 2, f(x)=2. In fact, no matter how close x gets to 2, f(x) stays at 2, so we are approaching a value of 2. However, as stated, f(2)=TOOTSIEPOP. Clearly, we aren't going to approach TOOTSIEPOP in the real numbers. The limit as x approaches 2 is, thus, 2.
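
The same function can be sketched in Python (an illustration added here, with a string standing in for TOOTSIEPOP):

```python
# f(x) = 2 everywhere except f(2) = "TOOTSIEPOP".
# The value AT 2 is irrelevant to the limit: every sample near 2 gives 2.

def f(x):
    if x == 2:
        return "TOOTSIEPOP"
    return 2

print(f(2))  # -> TOOTSIEPOP: the defined value at x = 2
for x in [1.9, 1.99, 2.001, 2.0001]:
    print(x, f(x))  # every nearby value is 2, so the limit is 2
```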

    Edit:
WannabeNewton's post above basically suggests the same. For more info, you might start here.
     
    Last edited: Jun 4, 2013
  10. Jun 4, 2013 #9
    Yeah, I am familiar with epsilon-delta. I see now...I was just confused how we could assume delta x = 0, but when it was in fraction form we could not (otherwise we would have it undefined). I get it though. Thanks for the examples, all
     
  11. Jun 5, 2013 #10

    lavinia

    Science Advisor
    Gold Member
    2017 Award

Think about this example: the ratio 2x/x is 2 for any x except 0. What is its limit as x approaches zero?
     
  12. Jun 5, 2013 #11

    lurflurf

    Homework Helper

    In a derivative we have [f(x+h)-f(x)]/h, but h is not zero so there is no problem. Assume f(x) is a polynomial. We may define the derivatives of f by

    $$\mathop{f}(x+h) = \sum_{k=0}^\infty \frac{h^k}{k!} {\mathop{f}}^{(k)}(x)$$

    We do not worry as this does not remind us of dividing by zero.

$$\lim_{h \rightarrow0} \frac{\mathop{f}(x+h)-\mathop{f}(x)}{h}$$

reminds us of dividing by zero, but does not actually involve it. The derivative is what it is. For example, to find the derivative of x^2 we have

$$\lim_{h \rightarrow 0} \frac{2 \, x \, h + h^2}{h}$$

Since h is not 0 we might just as well have
$$\lim_{h \rightarrow 0} (2x + h) $$

because for h not zero
$$\frac{2 \, x \, h + h^2}{h}=2x + h$$

and 2x + h does not remind us of dividing by zero. The limit as h goes to 0 is then 2x.
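
A numerical check (a Python sketch added for illustration) of the same cancellation for f(x) = x^2: the difference quotient equals 2x + h exactly for every nonzero h, so shrinking h gives 2x.

```python
# Difference quotient for f(x) = x^2 at x = 3:
# ((x+h)^2 - x^2)/h = (2xh + h^2)/h = 2x + h for h != 0.
# As h shrinks, the quotient approaches 2x = 6.

x = 3.0
for h in [1.0, 0.25, 1e-3, 1e-6]:
    quotient = ((x + h) ** 2 - x ** 2) / h
    print(h, quotient)  # 2x + h, up to floating-point rounding
```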
     
  13. Jun 5, 2013 #12
Ah, I understand now. One thing, though: if you have 2 + h, shouldn't the derivative also be 2 + h? The reason it is always shown to be 2 is that as h goes to zero, 2 + 0 = 2. But if h is not zero, should it not be 2 + dx? If h is never zero, how come 2 + h = 2?
     
  14. Jun 5, 2013 #13

    HallsofIvy

    Science Advisor

    The problem appears to be that you still do not understand the concept of a limit.
     
  15. Jun 5, 2013 #14

    WannabeNewton

    Science Advisor

The whole point of the limit is that you take ever-smaller open balls around the limit, when it exists, so that the values converging to the limit become arbitrarily close to it via containment in those open balls.
     
  16. Jun 5, 2013 #15
    I see now...I don't know what I was missing before
     
  17. Jun 5, 2013 #16
    Thanks all
     
  18. Jun 5, 2013 #17
    Why is ##\lim_{h\rightarrow 0} 2+h## evaluated by simply substituting ##h=0## into the function, and why is ##\lim_{h\rightarrow 0} \frac{h}{h}## not evaluated like that?

Well, the answer is the notion of a continuous function. Basically, if ##f:D\rightarrow \mathbb{R}## is a continuous function that is defined at a point ##c## (this means that ##c\in D##), then [tex]\lim_{h\rightarrow c} f(h) = f(c)[/tex] is true. This is a theorem that one can prove (or you can accept it as the definition of continuity). For example, consider ##f:\mathbb{R}\rightarrow \mathbb{R}:h\rightarrow 2+h##. This function is certainly defined at ##0## and is continuous there. So we can write [tex]\lim_{h\rightarrow 0} 2+h = \lim_{h\rightarrow 0} f(h) = f(0) = 2+0 = 2[/tex]

Now, why does this not work for [tex]\lim_{h\rightarrow 0} \frac{h}{h}?[/tex] Well, the simple reason is that the function ##f(h) = h/h## is not defined at ##0##. So the above does not apply. In other words, we can write ##f:\mathbb{R}\setminus \{0\}\rightarrow \mathbb{R}:h\rightarrow h/h## and ##0## is not in the domain.

    How do we handle this situation then? Well, we have the following theorem:

Let ##D## be a set with ##c\notin D##, and let ##f:D\rightarrow \mathbb{R}## and ##g:D\cup \{c\}\rightarrow \mathbb{R}## be two functions. If ##f(x) = g(x)## holds for all ##x\in D##, then ##\lim_{h\rightarrow c} f(h)=\lim_{h\rightarrow c} g(h)##.

    So this theorem tells us that two functions have the same limit if they agree everywhere except possibly at ##c##.

Now, let ##f:\mathbb{R}\setminus \{0\}\rightarrow \mathbb{R}:h\rightarrow h/h## and ##g:\mathbb{R}\rightarrow \mathbb{R}:h\rightarrow 1##. Then ##f(h)= g(h)## for all ##h\in \mathbb{R}\setminus \{0\}##, so the theorem implies that the limits agree. So [tex]\lim_{h\rightarrow 0} f(h) = \lim_{h\rightarrow 0} g(h)[/tex] Thus [tex]\lim_{h\rightarrow 0}\frac{h}{h} = \lim_{h\rightarrow 0}1[/tex] Now, ##g## is a continuous function and it is defined at ##0##, so we can apply the previous result: [tex]\lim_{h\rightarrow 0} 1 = \lim_{h\rightarrow 0} g(h) = g(0) = 1[/tex] This is what actually is going on when evaluating limits.
     
  19. Jun 6, 2013 #18
    I have another unrelated question but don't want to start a new thread and clog up the forums, so answer it if you wish and I will be happy.

For a definite integral I have read that ##\int_a^b f(x)\,dx = F(b)-F(a)##. Easy enough to plug numbers in and compute, but there was no explanation as to why that is the formula. Why is the area under a curve equal to the antiderivative at the end point minus the antiderivative at the start point? What about all the points in between?
     
  20. Jun 6, 2013 #19
Also, how long should it take me to teach myself calc 1-3 (not analysis) if I am studying 15 hours a week? I know everybody is different, but so far I have put in about 30 hours and I am freaking out that I don't fully get it yet. So far I have learned limits, the derivative chain rule and power rule, antiderivatives, continuous functions, and derivatives of common functions. It seems like by now I should fully get integrals, but maybe I am just trying to rush things.
     
  21. Jun 6, 2013 #20

    pwsnafu

    Science Advisor

    It's called the Fundamental Theorem of Calculus.
     
  22. Jun 6, 2013 #21

    lurflurf

    Homework Helper

Well, 15 hours a week is about the time a person in a class would spend, and it would take a year for calculus 1-3. Self study might be slightly faster or slower depending on the person. Knowing calculus 1-3 means different things in different places, depending on differences in difficulty, amount of material, number of sessions, rigor, topics covered, applications, computational skill, and other things that vary from class to class.

    To put the pacing in perspective a class would spend 1-3 weeks on many of these topics
    Calculus 1
    01-Introduction
    02-Limits
    03-Elementary functions
    04-Derivatives
05-Applications of derivatives
    06-Integrals
    07-Applications of Integrals
    08-More about Elementary functions
    09-Techniques of integration
    10-Assorted loose ends
11-Sequences and series
    12-Convergence
    13-Coordinate systems
    14-Analytic geometry
    15-Differential equations
    16-Vector, matrices, determinants, and complex Numbers
    17-Derivatives in several variables
    18-Applications of derivatives in several variables
    19-Integrals in several variables
    20-Application of integrals in several variables
21-Introduction to vector calculus
    22-Differential vector calculus
    23-Integral vector calculus
    24-More on Coordinate systems
    25-Vector Calculus in Space
    26-Vector Calculus on Surfaces
    27-Transport
    28-Differential forms/Stokes theorem
    29-Applications of vector Calculus
    30-Topics in vector Calculus

A danger in self study is that, in going much faster than a class (which is good), you might not learn as thoroughly as the class (which is bad). Watch out for this.
     
    Last edited: Jun 6, 2013
  23. Jun 6, 2013 #22

    lurflurf

    Homework Helper

The Fundamental Theorem of Calculus can be understood a few ways; here is one.

    On a small interval, an integrable function f can be approximated by a constant, and that constant can be given by a value of the function:
    ∫f(x)dx ~ f(c) h over an interval of width h near c, with h~0

    A difference of a differentiable function F can be approximated using a value of the derivative:
    F(x+h)-F(x) ~ h F'(x) if h~0

    If F' = f we can combine the above:
    ∫f(x)dx ~ F(x+h)-F(x)

    This holds only for small intervals, but many small intervals can be combined:
    ∫f(x)dx ~ Ʃ(F(x+h)-F(x))

    The sum collapses (it telescopes) and we get the theorem:
    ∫f(x)dx ~ F(b)-F(a)
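
The telescoping step can be checked numerically. Here is a Python sketch (added for illustration) using f(x) = 2x, F(x) = x^2 on [0, 3]:

```python
# Split [a, b] into n subintervals of width h. The sum of the differences
# F(x+h) - F(x) telescopes exactly to F(b) - F(a), while each term is
# approximately f(x) * h, so the Riemann sum approaches the same value.

def f(x):
    return 2 * x

def F(x):
    return x ** 2

a, b, n = 0.0, 3.0, 1000
h = (b - a) / n
xs = [a + i * h for i in range(n)]

telescoped = sum(F(x + h) - F(x) for x in xs)  # collapses to F(b) - F(a) = 9
riemann = sum(f(x) * h for x in xs)            # left Riemann sum, close to 9
print(telescoped, F(b) - F(a), riemann)
```

Increasing n shrinks the gap between the Riemann sum and F(b) - F(a), which is the content of the theorem.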
     
  24. Jun 6, 2013 #23
    It may seem like a weird idea, but read Wikipedia. The math pages are generally reliable, and bouncing around different areas to understand all the concepts in depth can really help you learn and make connections before you start fully applying the material. Start with something math related that you enjoy, and then open somewhere in the neighborhood of 15 to 20 new tabs.

    To give you an idea, the first integral I truly tried to evaluate on my own was ##\displaystyle \int \frac{e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}}{\sigma\sqrt{2\pi}}\, dx##, an integral that has notable use in statistics. After a somewhat pathetic effort, I finally got to my answer of ##\displaystyle \frac{1}{\sqrt{\pi}}\int_{0}^{\frac{x-\mu}{\sigma\sqrt{2}}}e^{-\xi^2} \, d\xi - C##. At that point, I was fairly certain that I understood most of the formulas.

    So, anecdotes and stories aside, my suggestion is to read Wikipedia pages and, every once in a while, do stupid stuff with what you learn. That's how I learned math, so hopefully it works for you too.
     
  25. Jun 7, 2013 #24
    Thanks for all the tips and help!
     