Before about 1900, everybody used infinitesimals to do calculus. (If you have infinitesimals, you also have infinities, since you can invert an infinitesimal.) A classic calc text using this approach is Silvanus Thompson's Calculus Made Easy, which you can find for free on the web. Nobody knew how to formalize infinitesimals, and you just had to sort of get a feel for what kinds of manipulations were OK. Because of this, it became stylish ca. 1900-1960 to teach calc using limits rather than infinitesimals, although practitioners in fields like physics never stopped using infinitesimals. Ca. 1960, Abraham Robinson proved that you could do all of analysis using infinitesimals, and laid out specific rules for what manipulations were OK. A well known text using this approach is Elementary Calculus by Keisler, which the author has made freely available on the web.
A typical paradox would be something like this. Infinity plus one is still infinite, so ∞+1=∞. Subtracting ∞ from both sides gives 1=0. Before Robinson, you just had to know that this kind of manipulation didn't smell right. After Robinson, there are more clearly defined rules that you can learn, and those rules tell you that this manipulation is bogus. The basic rule is called the transfer principle, which you can learn about in Keisler's book.
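To make that concrete, here is a minimal sketch (in LaTeX) of how the transfer principle blocks the bogus step. The symbol H below is my own notation for an arbitrary infinite hyperreal, not anything specific from Keisler; the point is just that in the hyperreals there is no single absorbing "∞" that swallows addition.

```latex
% Sketch: why the transfer principle forbids the 1 = 0 "proof".
% H stands for an arbitrary infinite hyperreal (illustrative notation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

The first-order statement
\[
  \forall x \quad x + 1 \neq x
\]
is true of every real number, so by the transfer principle it is also
true of every hyperreal number. In particular, for an infinite
hyperreal $H$,
\[
  H + 1 \neq H ,
\]
so the opening move of the paradox, ``$\infty + 1 = \infty$,'' is
already illegal: $H + 1$ is a distinct, slightly larger infinite
number, and there is never anything to subtract away.

\end{document}
```

In other words, once infinities are honest numbers like H rather than a single symbol ∞, the same algebra rules that hold for the reals carry over, and they simply never produce the equation that led to 1 = 0.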