I already know the definition: the limit of f(x) as x approaches p is L if, for every ε > 0, there exists a δ > 0 such that |f(x) − L| < ε whenever 0 < |x − p| < δ.

However, if this definition of a limit describes the behavior of a function around a point, why is the inequality |f(x) − L| < ε and not 0 < |f(x) − L| < ε? I understand why 0 < |x − p| < δ is a triple inequality: it specifies the behavior around x = p but not at x = p itself. By the same reasoning, shouldn't |f(x) − L| < ε also be 0 < |f(x) − L| < ε? Why drop the 0 < restriction on that side?

I also already know what the definition means: no matter how small ε is made, δ can be chosen small enough. In other words, f(x) gets as close as we want to L as x gets as close as we want to p. But would the limit still exist if, as ε gets smaller, δ increases? In other words, would the limit exist if f(x) gets closer to L while x simultaneously gets further away from p?
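To make the quantifier order in the definition concrete, here is a small numerical sanity check of "for every ε there exists a δ." The function f(x) = x², the point p = 2, and the choice δ = min(1, ε/5) are my own illustration, not part of the question:

```python
def check_epsilon_delta(f, p, L, eps, delta, samples=10_000):
    """Check |f(x) - L| < eps for sampled x with 0 < |x - p| < delta."""
    for i in range(1, samples + 1):
        offset = delta * i / (samples + 1)  # 0 < offset < delta
        for x in (p - offset, p + offset):  # approach p from both sides
            if not abs(f(x) - L) < eps:
                return False
    return True

# f(x) = x^2 near p = 2, so L = 4.  A standard delta choice is
# delta = min(1, eps / 5): if |x - 2| < 1 then |x + 2| < 5, hence
# |x^2 - 4| = |x - 2| * |x + 2| < 5 * delta <= eps.
f = lambda x: x * x
for eps in (1.0, 0.1, 0.001):
    delta = min(1.0, eps / 5)
    assert check_epsilon_delta(f, 2.0, 4.0, eps, delta)
```

Note that δ here shrinks as ε shrinks, which is the usual situation the definition is built around; the check fails if a too-large δ is kept while ε is made small.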