1. The problem statement, all variables and given/known data

Prove that lim_{x→0} f(x) = lim_{x→a} f(x-a).

2. Relevant equations

The ε-δ definition of the limit: if 0 < |x-a| < δ, then |f(x)-l| < ε.

3. The attempt at a solution

Spivak's logic, again, is giving me trouble. However, I realize that his method probably has a better chance of being right than mine. His method follows:

Suppose that lim_{x→a} f(x) = l, and let g(x) = f(x-a). Then for all ε > 0, there is a δ > 0 such that, for all x, if 0 < |x-a| < δ, then |f(x)-l| < ε. Now, if 0 < |y| < δ, then 0 < |(y+a) - a| < δ, so |f(y+a)-l| < ε. **But this last inequality can be written |g(y)-l| < ε.** So lim_{y→0} g(y) = l.

The bold portion is where I get lost. How can he rewrite |f(y+a)-l| < ε as |g(y)-l| < ε? Doesn't this suggest that f(y+a) = g(y)? However, wouldn't that conflict with the original definition of g(x) = f(x-a)? Wouldn't this definition mean g(y) = f(y-a), not f(y+a)? Where's my error?
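Not a proof, but the identity itself can be sanity-checked numerically: the substitution y = x - a should make f(x-a) near x = a take exactly the values f takes near 0. The sketch below assumes a sample function f and shift a that I chose for illustration; they are not from Spivak.

```python
import math

def f(x):
    # Example function: lim_{x->0} f(x) = 1, even though f(0) is undefined.
    return math.sin(x) / x

a = 2.0  # example shift; any a works

for h in [1e-1, 1e-3, 1e-6]:
    near_zero = f(h)            # f(x) with x = h close to 0
    shifted = f((a + h) - a)    # f(x - a) with x = a + h close to a
    # The two agree up to float rounding: (a + h) - a need not equal h exactly.
    assert math.isclose(near_zero, shifted, rel_tol=1e-9)
    print(h, near_zero, shifted)
```

Both columns approach 1 as h shrinks, matching lim_{x→0} sin(x)/x = 1; `math.isclose` is used because (a + h) - a can differ from h in the last floating-point bit.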