The limit definition looks rather convoluted when stated in terms of epsilons and deltas. One good way of thinking about it is this:
Given any allowable magnitude of error (formally, epsilon) from a value (the limit), there exists a range of inputs near c (the value x is approaching) on which the function's outputs f(x) deviate from the limit by no more than that allowed error (epsilon).
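Written symbolically, this informal statement is the standard epsilon-delta definition:

```latex
\lim_{x \to c} f(x) = L
\quad\Longleftrightarrow\quad
\forall \varepsilon > 0,\ \exists \delta > 0 \text{ such that }
0 < |x - c| < \delta \implies |f(x) - L| < \varepsilon.
```

Here delta pins down the "range near c" and epsilon the "allowable magnitude of error" from the description above; note the condition 0 < |x - c| excludes the point c itself, since the limit does not depend on the value f(c).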
The key here is that if the limit of f(x) at a particular point c exists (and hence the previous statement holds), then we can get f(x) as close to L as we want. I can make it within .001, or .000001, or any tolerance at all, because for each error I present, the existence of the limit guarantees that I can find an interval of x-values symmetric about c on which f(x) stays that close to L.
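As a concrete (numerical, not rigorous) illustration of the "for each error I present" game, take the example f(x) = x² near c = 2, where L = 4. For any epsilon, the choice delta = min(1, epsilon/5) works, since |x² − 4| = |x − 2|·|x + 2| < 5·delta whenever |x − 2| < delta ≤ 1. The function and constants here are chosen just for this sketch:

```python
def f(x):
    # example function for the illustration; its limit at c = 2 is L = 4
    return x * x

c, L = 2.0, 4.0

def delta_for(eps):
    # keep delta <= 1 so that |x + 2| < 5 on the interval (c - delta, c + delta)
    return min(1.0, eps / 5.0)

# whatever tolerance is demanded, the chosen delta keeps f(x) within it
for eps in (0.001, 0.000001):
    delta = delta_for(eps)
    # sample points strictly inside the interval, excluding c itself
    for x in (c - delta / 2, c + delta / 2, c - 0.99 * delta):
        assert abs(f(x) - L) < eps
```

Running the loop confirms the definition's promise for these sample tolerances: no matter how small an epsilon is demanded, a symmetric interval about c keeps f(x) within epsilon of L.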