Precise (Or Epsilon-Delta) Definition of a Limit

  1. Jul 9, 2013 #1
    Hello guys!

I am trying to get a solid grasp of the Precise Definition of a Limit. I am having a particularly hard time linking the intuition of the limit I developed a while ago to the Epsilon-Delta definition.

    I understand the basics: a limit exists/equals L if and only if for every value of ε > 0 there is a δ > 0 that "encloses" a range of x values whose outputs satisfy the inequality |f(x) - L| < ε.

    Now, I simply can't understand how on Earth that attests that the value of a function, f(x), approaches L as x gets infinitely close to some number, say c...

    Here is my take on it (I hope it is at least mildly correct!):

    Delta is a function of epsilon. Namely, if epsilon decreases (if we close in on L from both sides), delta decreases (meaning the x values approach c from both sides).

    If the limit is true/exists, we can make epsilon as small as we want (get as close as we wish to L from both sides), thereby making delta increasingly small (making the x values get closer and closer to c). This shows that as f(x) approaches L, x approaches c from both sides: the limit is correct/true.

    Am I on the right track?

    Thank you very much in advance for any help whatsoever (this thing is really bothering me)!
     
  3. Jul 9, 2013 #2

    WannabeNewton

    Science Advisor

    Do you know what an open ball is? Try to formulate it conceptually in terms of open balls; I find the ##\epsilon-\delta## definition of a limit to be very intuitive if I think of it in terms of open balls (motivated by topology).
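    For reference, here is one standard way to phrase it with open balls (writing ##B(p,r)## for the open ball of radius ##r## around ##p##, which on the real line is just the interval ##(p-r, p+r)##):

    [tex]\lim_{x\to c} f(x) = L \iff \forall \epsilon > 0 \ \exists \delta > 0 : f\big(B(c,\delta)\setminus\{c\}\big) \subseteq B(L,\epsilon).[/tex]

    In words: however small a ball you draw around ##L##, some punctured ball around ##c## maps entirely into it under ##f##.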
     
  4. Jul 9, 2013 #3

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor

    You're on the right track.

    Epsilon-delta definitions actually have a lot to do with basic physics. It's the same idea of an approximation.

    Let's say that you're in a shower. You like your shower to have a temperature of 40°C, or thereabouts. You can regulate the temperature by turning some knob. Of course, you can never get exactly 40°C by turning the knob (you don't have that precision), but you can get close. In fact, you can get arbitrarily close.

    For example, let's say that you like 40°C, but ±5°C is ok too. Then there is a certain set of knob positions that are ok; in fact, an entire range of positions. The 5°C is called the ε, while the range of positions has to do with the δ.
    However, suppose you're more sensitive: you like 40°C, but only ±0.5°C is ok. Then there are still some positions of the knob that are ok, but significantly fewer of them.

    In general, let's say that you like 40°C with a tolerance of ε°C. Then there is a certain range of knob positions that are ok. This is the δ-range. The smaller you take ε, the smaller the δ-range will be. But whatever ε we choose, there will always be a δ-range.

    So choosing your temperature in the shower is continuous.

    An example of a discontinuous process would be the following. Consider the following graph and imagine that it is a landscape (the graph itself IS continuous, but it is not the function I'm talking about):

    [Image no longer available: a graph of the landscape described below - flat between 1 and 3, decreasing outside that interval]

    So your landscape is flat between 1 and 3, and decreases outside that. Let's say that you have a ball that you want to place on the landscape. Your goal is to place the ball exactly on 3. Of course, this is impossible to do, so we will allow some degree of tolerance. Let's say we want to place the ball within distance 1 of the spot 3. We can always do this by placing it to the left of 3. But we can't do it by placing it to the right: if we place it on the right, then the ball will just roll away, outside of the allowable range. So we see that there is no allowable interval of tolerance around 3; the space to the left of 3 is ok, but the space to the right of 3 is not. This means that the process is not continuous.

    Abstractly, you work with functions ##f:\mathbb{R}\rightarrow \mathbb{R}##. You should see the domain as some kind of knob in the shower: we can put the knob on any value we please, but only approximately. The function ##f## gives the temperature. So if we put the knob on ##0##, we will feel temperature ##f(0)##. The entire point of epsilon-delta definitions is to get as close as we want to some specific temperature by turning the knob, and thus staying within a certain allowable range of the knob. If, no matter how close we want to get to the temperature, there is always an allowable interval of the knob to the left and to the right, then the function is continuous.
     
    Last edited by a moderator: May 6, 2017
  5. Jul 9, 2013 #4

    Zondrina

    Homework Helper

    In symbols :

    $$\forall \epsilon > 0, \exists \delta > 0 \space | \space 0 < |x-a| < \delta \Rightarrow |f(x) - L| < \epsilon$$

    Considering ##0 < |x-a| < \delta##: the ##0 < |x-a|## part says ##|x-a|## must be positive, so we never actually consider what happens at ##x=a##, only what happens as we approach it. The ##|x-a| < \delta## part measures how closely we approach.

    Expanding, we get ##-\delta < x-a < \delta##, which is the same as ##a - \delta < x < a + \delta##.

    So there exists a ##\delta > 0## such that ##x## is bounded between ##a - \delta## and ##a + \delta##, and ##x## can get arbitrarily close to ##a## without ever equaling it.

    Using this, what does it say about ##|f(x) - L| < \epsilon##?

    Well first, let's consider ##-\epsilon < f(x) - L < \epsilon##, which is the same as ##L - \epsilon < f(x) < L + \epsilon##. So ##\forall \epsilon > 0## we can make ##f(x)## as close to ##L## as we like. How close do we need to be, you might ask? Sufficiently close.

    What defines sufficiently close? Well, ##f(x)## varies according to the values of ##x##, and how far ##x## may stray from ##a## is controlled by ##\delta##. So to keep ##f(x)## within ##\epsilon## of ##L##, we must pick ##\delta## accordingly. Hence the ##\delta## we choose will change according to the ##\epsilon## we are given.

    So in conclusion we know we can choose a ##\delta(\epsilon)## as to make ##f(x)## as close to ##L## as we like.
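    As a concrete sketch (the particular function, point, and δ-rule below are just one illustrative choice, not part of the definition): for ##f(x) = 2x## with ##a = 1## and ##L = 2##, the rule ##\delta(\epsilon) = \epsilon/2## works, since ##|f(x) - L| = 2|x-1| < 2\delta = \epsilon##. A quick numerical spot-check of that claim:

    [code]
    import random

    # Illustrative choices: check delta(eps) = eps/2 for f(x) = 2x, a = 1, L = 2.
    def f(x):
        return 2 * x

    a, L = 1.0, 2.0

    for eps in [1.0, 0.1, 0.001]:
        delta = eps / 2  # candidate delta, chosen as a function of epsilon
        for _ in range(10000):
            x = a + random.uniform(-delta, delta)
            if not 0 < abs(x - a) < delta:
                continue  # stay strictly inside the punctured delta-interval
            assert abs(f(x) - L) < eps  # the epsilon-condition holds
        print("eps = %s: delta = %s passed" % (eps, delta))
    [/code]

    Of course, sampling finitely many points proves nothing by itself; the inequality ##2|x-1| < 2\delta = \epsilon## is the actual argument. The check just makes the quantifiers tangible.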
     
  6. Jul 9, 2013 #5
    This seems backwards: we generally say "as x approaches c, f(x) approaches L."

    I think of a limit as stating that, for x values "close" to c, we can make f(x) as close to L as we want. [itex]0 < |x - c| < \delta[/itex] denotes a deleted neighborhood of c - basically an interval centered on c, but with c removed. Then the epsilon-delta definition of a limit simply states that [itex]\lim_{x\to c}f(x) = L[/itex] whenever, for every epsilon > 0, there exists a (nonempty) deleted neighborhood of c (call it N) such that for every x in N, f(x) is within epsilon of L.

    To better understand this definition, let's consider the rational indicator function, [itex]I_Q[/itex]: [itex]I_Q(x)[/itex] is defined as 1 if x is rational, and 0 otherwise. We can see that the limit [itex]\lim_{x\to 1}I_Q(x)[/itex] does not exist (and the same argument works at any point): whenever [itex]\epsilon \le 1/2[/itex], every deleted neighborhood of 1 contains both rationals (where [itex]I_Q = 1[/itex]) and irrationals (where [itex]I_Q = 0[/itex]), so [itex]I_Q[/itex] cannot stay within [itex]\epsilon[/itex] of any single number. No matter how "close" our x values are to c, there is a limit to how close f(x) can be to any given number; thus, the limit fails to exist.
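    Spelled out as a proof by contradiction: suppose [itex]\lim_{x\to 1}I_Q(x) = L[/itex] for some L, and take [itex]\epsilon = 1/2[/itex]. Any deleted neighborhood of 1 contains a rational p and an irrational q, so we would need both [itex]|I_Q(p) - L| = |1 - L| < 1/2[/itex] and [itex]|I_Q(q) - L| = |0 - L| < 1/2[/itex]. But then

    [tex]1 = |1 - 0| \le |1 - L| + |L - 0| < \tfrac{1}{2} + \tfrac{1}{2} = 1,[/tex]

    a contradiction.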

    I'd also like to point out that, technically, there is no requirement that delta decrease with epsilon. Consider, for example, the constant function f(x) = 0: [itex]|f(x) - 0| < \epsilon[/itex] for ANY choice of delta.
     
  7. Jul 10, 2013 #6

    Stephen Tashi

    Science Advisor

    I agree with micromass that you're on the right track, and I agree with Strants that it's more in the spirit of the formal definition to think of delta being small as "causing" (i.e. implying) that |f(x) - L| is small, rather than thinking of epsilon as making delta small.

    When a proof about a limit is written, the person writing it usually provides a way to state a suitable delta by making it a function of epsilon. But this is a symptom of the fact that people accept mathematical proofs in which the reasoning is presented in a backward manner. (For example, in "proving" trig identities, most teachers accept writing the identity to be proven and then performing steps until we reach an identity already known to be true. This looks like assuming the very thing to be proven as a first step! A proper proof would consist of writing the steps in reverse order, so that you begin with an identity known to be true and derive the identity to be proven.)

    The forward order for a limit proof could begin something like "Given epsilon > 0, pick delta = epsilon/3". However, it would seem that "delta = epsilon/3" was pulled out of thin air, so these proofs are often written as if we are working backwards, trying to "solve for delta" as a function of epsilon. But the formal reasoning goes in the forward order: it says that making delta small forces |f(x) - L| to be smaller than epsilon. The fact that making epsilon small forces us to search for smaller deltas is generally true, but it isn't the fact that makes the proof work.
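    To make that concrete (the function below is my own guess at the kind of example a "delta = epsilon/3" recipe would come from, not something fixed by the discussion): to prove [itex]\lim_{x\to c} 3x = 3c[/itex], the forward-order proof reads

    [tex]\text{Given } \epsilon > 0, \text{ pick } \delta = \epsilon/3. \text{ Then } 0 < |x - c| < \delta \implies |3x - 3c| = 3|x - c| < 3\delta = \epsilon.[/tex]

    The backwards scratch work - wanting [itex]3|x-c| < \epsilon[/itex] and solving for [itex]|x-c| < \epsilon/3[/itex] - is how you discover delta; the proof itself runs in the order shown.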
     
  8. Jul 10, 2013 #7
    Thank you very much for all the replies! They even helped me understand parts of the definition I thought I had already comprehended!

    But I think I need to give myself some more time to digest the content of the responses; I am still having a hard time convincing myself of some things.

    I think what confused me was that I thought the purpose of the precise definition was to find the limit - to show that, as x approaches some number c, f(x) approaches L.

    I think I get what the function of the definition is now:

    The true "role" of the precise definition is to prove/confirm a limit is in fact L. So, if someone claims that the limit of f (x) as x approaches, e.g. c, is L, then the precise definition can be used to prove that right or wrong.

    How does it do that?

    By testing whether for every ε > 0 there is a δ > 0 that "houses" a range of x values whose outputs satisfy |f(x) - L| < ε. Meaning that we can get as close as we want to the limit, L.

    How does that sound?

    I would like to thank all of you guys one more time! I will keep reading your posts one by one!
     
  9. Jul 10, 2013 #8

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor

    Yes, that sounds good.

    So indeed, the purpose of the epsilon-delta definition is to prove that certain limits are true. In order to prove things like ##\lim_{x\rightarrow a} f(x) = L##, you need to take an arbitrary ##\varepsilon>0## and then find a ##\delta## that works for that ##\varepsilon##. Here "works" means that for all ##x## in the ##\delta##-range, ##f(x)## is in the ##\varepsilon##-range.
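    In symbols, "works" means:

    [tex]0 < |x-a| < \delta \implies L - \varepsilon < f(x) < L + \varepsilon.[/tex]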

    It might be good to talk a bit of history now, because when limits and continuity were invented, they looked totally different and didn't use epsilon-delta at all. Let's say we want to calculate

    [tex]\lim_{h\rightarrow 0} \frac{(x+h)^2 - x^2}{h}[/tex]

    Historically, this was solved with infinitesimals. These are not real numbers. Infinitesimals are "things" that lie extremely close to 0 (closer than any nonzero real number), but aren't 0. So let ##e## be an infinitesimal; then we calculate

    [tex]\frac{(x+e)^2 - x^2}{e} = \frac{x^2 + 2xe + e^2 - x^2}{e} = \frac{2xe + e^2}{e} = 2x + e[/tex]

    But since ##e## is infinitely close to ##0##, we can set it to ##0##. So we get that the limit is ##2x##.

    This is how limits were done in the past. And everything worked fine. If you want to calculate limits, then doing things like this will give you the right answer.

    But there are problems. For example, why can we set ##e=0##? What justification is there? And what is an infinitesimal anyway?
    Furthermore, we have a real function whose limit we want to calculate, and the answer is a real number, but the relation between them requires things that aren't real numbers at all. It would be much more elegant if we could find limits using only real numbers and their properties.

    These issues plagued calculus for over 100 years. No answer was found until the epsilon-delta definition was invented. It was a satisfactory answer, and much more rigorous than the infinitesimal approach.

    So if you want to calculate limits, then you don't need epsilon-delta at all. You will rarely ever need it for calculating specific limits. But you need it to put calculus on solid ground.
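    For instance, here is how the difference-quotient limit above goes through with epsilon-delta (a short sketch; the algebra is the same as in the infinitesimal computation). For ##h \neq 0## the quotient equals ##2x + h## exactly, so given ##\epsilon > 0## we may pick ##\delta = \epsilon##:

    [tex]0 < |h| < \delta \implies \left|\frac{(x+h)^2 - x^2}{h} - 2x\right| = |h| < \epsilon.[/tex]

    The dubious "now set ##e = 0##" step becomes an honest inequality.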

    That said, the approach with infinitesimals was made rigorous too, but only quite recently (in the 1960s). This is the approach of the hyperreal numbers, also known as non-standard analysis; the surreal numbers are a related number system that also contains infinitesimals.
     
  10. Jul 10, 2013 #9

    WannabeNewton

    Science Advisor

    Here's how I like to think of it. Say we have a function ##f## continuous at some ##x_0##, and let's say for starters that you give me any ##\epsilon > 0## whatsoever; what you have really done is given me an open interval around ##f(x_0)## with radius ##\epsilon##. I can then guarantee you an open interval of radius ##\delta## around ##x_0## whose image under ##f## will fit into the open interval that you prescribed me.

    Now let's say you pinch the open interval around ##f(x_0)## to make it even smaller (i.e. choose a smaller ##\epsilon##); then I can counter you and sufficiently pinch the open interval around ##x_0## so that its image once again fits into your newly pinched open interval around ##f(x_0)##. We can keep doing this indefinitely: you can keep pinching the open interval around ##f(x_0)## to arbitrarily small sizes, and I will always be able to pinch the open interval around ##x_0## to sufficiently small sizes so that under ##f## it fits into your open interval around ##f(x_0)##. This tells me that no matter how small an open interval you make around ##f(x_0)##, I can always find an open interval around ##x_0## which fits into your interval under ##f##.

    Consider the function ##f(x) = \begin{cases}
    0 \text{ if } x\leq 0 \\
    1 \text{ if } x> 0
    \end{cases}## and let's say we want to evaluate continuity at ##x = 0##. So say you give me an open interval of radius ##\epsilon = \frac{1}{2}## about ##f(0) = 0##; if you imagine this function as a graph in ##\mathbb{R}^{2}##, then said open interval can be pictured as being centered on the origin and lying along the ##y##-axis. Now can I manage to find you an open interval of some radius ##\delta## such that under ##f## this interval fits into yours (my interval can be pictured as being centered on the origin and lying along the ##x##-axis)? Well, note that no matter how small an open interval I take around ##x = 0##, it will always contain some ##x < 0## and some ##x > 0##, so its image will always be ##\{0,1\}##. There is no way this can fit inside your original open interval, so this function can't be continuous in the above sense, as we would expect. More explicitly, if we assume ##f## is continuous at ##x = 0##, then for ##\epsilon = \frac{1}{2}## there exists a ##\delta > 0## such that for all ##x\in (-\delta,\delta)##, ##f(x)\in (-\frac{1}{2},\frac{1}{2})##, which is a contradiction since ##f(x) = 1## for ##0 < x < \delta##. I hope that helps!
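    If it helps to see numbers, here is a small numerical sketch of that failure (an illustration of the argument above, not a proof): no matter how small ##\delta## gets, the interval ##(-\delta,\delta)## contains the point ##x = \delta/2 > 0##, where ##f(x) = 1## sits at distance ##1 \geq \epsilon = \frac{1}{2}## from ##f(0) = 0##.

    [code]
    # Step function: 0 for x <= 0, 1 for x > 0.
    def f(x):
        return 0 if x <= 0 else 1

    eps = 0.5  # the tolerance around f(0) = 0 that can never be met

    # However small delta is, the point delta/2 lies inside (-delta, delta)
    # but its image is 1, which is outside (-eps, eps).
    for delta in [1.0, 0.1, 1e-6, 1e-12]:
        x = delta / 2
        print("delta = %g: |f(%g) - f(0)| = %g (needs < %g)"
              % (delta, x, abs(f(x) - f(0)), eps))
    [/code]

    Every line of output reports a distance of 1, so no ##\delta## works for this ##\epsilon##.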

    EDIT: Here's an animation that depicts what I was talking about above: http://www2.seminolestate.edu/lvosbury/calculusI_folder/EpsilonDelta.htm
     
    Last edited: Jul 10, 2013