B How does the delta ε definition prove derivatives?

  1. Aug 9, 2016 #1
    The exercises in my imaginary textbook give me an ε, say .001, & make me find a δ such that all values of f(x) fall within that ε range of .001 around the limit. The section I'm working on is called "proving limits." Well, that is not proving a limit. All that's doing is finding values of what f(x) could be for whatever ε was given, in our case .001. Any reasonable person could understand that by making ε smaller, the values of what f(x) could be are going to be closer to the limit, L. But that still isn't proving the limit. So if I can't make ε out to be any one small discrete value, how am I supposed to prove a limit?

    Well, if you could show that δ can be given as a function of ε, then this would work. Why? Because you won't have to put yourself in that position → you don't actually have to pick an ε, so you won't be stuck with just a range of possible f(x)'s. Take, for example, f(x) = 5x. Given ε > 0, δ = ε/5 will satisfy the definition. δ = ε/5 says that for ANY ε given (smaller than any number you can think of), δ will always be the given ε divided by 5. The particular fraction is irrelevant. It's the fact that δ can be written as a function of ε that proves the limit.
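    Spelling that out for f(x) = 5x (writing a for the point x is headed toward, which is notation I'm adding here), the whole proof is just:

    ##\text{Given } \epsilon > 0, \text{ choose } \delta = \epsilon/5.##
    ##\text{If } 0 < |x - a| < \delta, \text{ then } |5x - 5a| = 5|x - a| < 5\delta = \epsilon.##
    ##\text{Hence } \lim_{x \to a} 5x = 5a.##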


    Here's a new one, & a relevant one to what differential calculus is supposed to be about. Suppose f(x) = x². The difference quotient works out to 2x + h, and its limit as h → 0 is 2x. I understand the whole 0/0 thing & the need for a limit, but I don't understand how the δ-ε definition works here. First of all, how am I supposed to graph this? In the previous example, where f(x) = 5x, it was graphed as such, & ε was the range of f(x)'s around the limit point L, & the δ's were simply some unknown range around x. Pretty straightforward. Well, how does that work in our new example? I think I'm going to stop here. δ is no longer just an arbitrary range around x. It's an arbitrary range around x + h. And ε is, of course, the range of f(x)'s around the limit point L, but in this case the output is the slope over the interval from x to x + h. Am I supposed to be graphing h as a function of 2x + h? That doesn't even make sense...
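    The best setup I can come up with (not sure it's right): treat x as fixed, so the difference quotient becomes a function of h alone,

    ##Q(h) = \frac{(x+h)^2 - x^2}{h} = 2x + h \quad (h \neq 0),##

    and graph Q against h: a line with a hole at h = 0, with the limit L = 2x sitting in the hole.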

    :/
     
  3. Aug 9, 2016 #2

    Mark44

    Staff: Mentor

    Finding the ##\delta## that works for a given ##\epsilon## is easier if you're working with linear functions. It's quite a bit harder if you're working with nonlinear functions, such as ##f(x) = x^2##.
    Same as before. The derivative is the limit of the difference quotient: ##\lim_{h \to 0}\frac{f(x + h) - f(x)}{h} = \lim_{h \to 0}\frac{(x + h)^2 - x^2}{h} = \lim_{h \to 0}\frac{2xh + h^2}{h}##.
    You could use the definition of a limit to evaluate this limit, or you could make life easier by using some properties of limits that are derived from the limit definition, and determine that the last limit works out to 2x. If you choose to follow the more rigorous path, you should realize that the variable here is h, and that x is assumed to be fixed. You want to show that ##|\frac{2xh + h^2}{h} - 2x|## can be made arbitrarily close to 0 (i.e., smaller than ##\epsilon##) when h is close enough to 0 (that's where the ##\delta## comes in).
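    In symbols, the goal is: for every ##\epsilon > 0## find a ##\delta > 0## such that

    ##0 < |h - 0| < \delta \implies \left|\frac{2xh + h^2}{h} - 2x\right| < \epsilon.##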
     
  4. Aug 11, 2016 #3
    How come you write (2xh + h²)/h instead of 2x + h? Aren't they equivalent? If so, then the proof should be:

    For any ε > 0, there exists a δ > 0 such that |(2x + h) - 2x| < ε whenever 0 < |h - 0| < δ.

    So, δ = ε?

    And do I really have to say "for any ε > 0, there exists a δ > 0 such that," or can I just put |f(x) - L| < ε whenever 0 < |x - a| < δ as my proof?
     
  5. Aug 11, 2016 #4

    Mark44

    Staff: Mentor

    The two expressions are equal (for h ≠ 0); I just didn't carry the algebra that far. Also, since the limit is as ##h \to 0##, you need to convince yourself that ##\frac{2xh + h^2}{h}## can be simplified to 2x + h only when ##h \neq 0##. If h = 0, ##\frac{2xh + h^2}{h}## is undefined.
    Yes, in this case, since the function involved is linear. I.e., 2x + h is a linear function of h. Things would be more difficult if you were calculating the derivative of sin(x) or some other nonlinear function.
    You don't need to say "for any ε > 0, there exists..." In your proof you are showing that, given an ε > 0, you can exhibit a δ that works. However, you should start your proof off with "Given ε > 0, ..." and it should end with a δ that works.
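    Written out with the ##\delta## you found, a minimal version of the proof reads:

    ##\text{Given } \epsilon > 0, \text{ let } \delta = \epsilon. \text{ If } 0 < |h - 0| < \delta, \text{ then}##
    ##\left|\frac{2xh + h^2}{h} - 2x\right| = |(2x + h) - 2x| = |h| < \delta = \epsilon,##
    ##\text{so } \lim_{h \to 0}\frac{(x+h)^2 - x^2}{h} = 2x.##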
     
  6. Aug 11, 2016 #5
    I remember trying to calculate the derivative of sin(x) using the method included in the link below (see Post #1). I never really figured it out. It kept changing on me. The only reason I got the answer is that I just so happened to have a table of cos(x) sitting beside me. I was wondering when this was going to surface again.

    https://www.physicsforums.com/threa...ial-equation-using-logic.880255/#post-5532720

    A few things:

    It doesn't seem at all obvious that 2x, the limit, would be preserved the smaller you made h. No wonder it took so long for someone to figure this out. I have mixed feelings about when the δ-ε definition should be introduced in a Calculus course/textbook. I think most people could agree on guessing what the derivative may be, but understanding just exactly how the algebra works out is a completely different story, & this is where I got hung up: you can make h smaller indefinitely & still end up with 2x + h, for there is an infinite set of numbers between 0 & 0 + h. If you wanted the slope at the instant x itself, you'd get 0/0. The δ-ε definition cleared all this up, & I never would have understood Calculus without it.

    Lastly, I have my own reservations as to whether or not my "elementary" method for computing derivatives is legitimate [see the link again]. I can understand why it fails from a purely mathematical standpoint, but I don't think some of the functions I'm seeing in my Calculus textbook are applicable to the real world, at least not when you define the variables to be such & such. Maybe there exists a unit of time so small that it wouldn't make sense to make h any smaller. Maybe there exists a limit to just exactly how much a body can be accelerated over some given time interval. Unfortunately, I don't know enough about physics to support either of these arguments, but if I ever find anything, I will be taking another look at my old method.
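    Going back to the "shrinking h" picture for a second, here is a rough numerical sketch in Python (not a proof; the point x = 1 and the h values are just ones I picked):

    import math

    x = 1.0  # fixed point where the slope is taken (arbitrary choice)
    for h in [0.1, 0.01, 0.001, 0.0001]:
        dq_square = ((x + h)**2 - x**2) / h            # difference quotient of x^2
        dq_sine = (math.sin(x + h) - math.sin(x)) / h  # difference quotient of sin(x)
        print(h, dq_square, dq_sine)

    # dq_square prints roughly 2.1, 2.01, 2.001, ... heading toward 2x = 2,
    # while dq_sine heads toward cos(1) ≈ 0.5403, which is what the cos(x)
    # table was telling me.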



    I have a new question.

    f(x) = sin(Bx). Is it possible to make B infinitely large, so that when you go to graph the function, it'd be a solid rectangle from y = -1 to y = 1 & from x = -∞ to x = ∞? If so, how would you express that?
     
  7. Aug 12, 2016 #6

    Mark44

    Staff: Mentor

    Something like this, I guess...
    ##g(x) = \lim_{B \to \infty}\sin(Bx)##
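    If you want to see the "solid rectangle" picture, here is a quick plotting sketch (the B values are arbitrary; it's a visual effect of very dense oscillation, so take the expression above informally):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-2, 2, 20000)
    for B in [5, 50, 500]:  # arbitrary, increasingly large values of B
        plt.plot(x, np.sin(B * x), linewidth=0.3, label=f"B = {B}")
    plt.legend()
    plt.show()
    # As B grows, the oscillations of sin(Bx) become so dense that the
    # graph visually fills the strip between y = -1 and y = 1.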
     
  8. Aug 12, 2016 #7

    Are Fourier Transforms limited to expressing functions of one variable, or can they be used to express three-dimensional bodies described by multivariable calculus?
     
  9. Aug 12, 2016 #8

    Mark44

    Staff: Mentor

    I think you're really asking about Fourier Series, rather than Fourier Transforms. An ordinary Fourier Series (see https://en.wikipedia.org/wiki/Fourier_series) expresses a function (of one variable) in terms of an infinite sum of sine and cosine terms. These series can also be extended to functions of two or more variables. See https://en.wikipedia.org/wiki/Fourier_series#Extensions.
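    For reference, the standard one-variable form (for a function of period ##2\pi##; other periods just rescale the argument) is

    ##f(x) \sim \frac{a_0}{2} + \sum_{n=1}^{\infty}\big(a_n\cos(nx) + b_n\sin(nx)\big),##
    ##a_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\cos(nx)\,dx, \qquad b_n = \frac{1}{\pi}\int_{-\pi}^{\pi} f(x)\sin(nx)\,dx.##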
     
  10. Aug 13, 2016 #9
    Suppose we were modeling a piece of paper through time. Now suppose I added the constraint that B was not infinite at, say, t = 30 seconds. What would happen at t = 30 seconds? Would the piece of paper cease to exist at that instant of time? And just ignore the fact that I'm allowing a body to exist without any breadth. I'm sure another function could be used to describe it, but I think you get the point.



    See 0:37 of the embedded video.

    The resultant waveform is only two-dimensional. I'm having a difficult time trying to visualize how it'd be set up if we were trying to model a sphere over time, not just another curve. All of the axes are already used up when modeling in 2D.
     
  11. Aug 13, 2016 #10

    chiro

    Science Advisor

    Hey INTP_ty.

    Understanding delta-epsilon "methods" comes down to continuity.

    Derivatives assume continuity, and analytic behaviour (which involves continuity holding in particular ways, corresponding to a "constraint") uses limits, which are based on two things approaching the same point as you "shrink" the region you are investigating.

    In calculus you want the limit to exist at each point, and the mappings to be both continuous (i.e., the limit equals the function value everywhere) and differentiable (assume continuity, then make sure the limit defining the derivative exists and that the derivative is itself essentially continuous). If the limits for the function and its derivative both exist, and both the function and the derivative are continuous, then you have a function that can be differentiated.

    In fact, a main theorem of complex analysis says that if a function meets a specific condition on differentiation and limits, then it is analytic in the entire plane.

    Geometrically, I would advise you to shrink the region far enough that you can see that, as you keep shrinking, you get ever closer to the limit (without ever reaching it, of course).

    The multi-variable approach does the same thing with a hyper-sphere: you look at a spherical region in n dimensions instead of a distance in one dimension.
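    In symbols, the continuity statement underlying all of this, together with the n-dimensional version the "hyper-sphere" comment refers to:

    ##f \text{ is continuous at } a \iff \text{for every } \epsilon > 0 \text{ there is a } \delta > 0 \text{ such that } |x - a| < \delta \implies |f(x) - f(a)| < \epsilon.##
    ##\text{In } \mathbb{R}^n, \text{ the condition } |x - a| < \delta \text{ becomes } \lVert \mathbf{x} - \mathbf{a} \rVert < \delta, \text{ i.e. } \mathbf{x} \text{ lies in a ball of radius } \delta \text{ about } \mathbf{a}.##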
     