I have tried to explain, perhaps too succinctly, in posts 14 and 21 how one does derivatives quite rigorously without limits, indeed as they were done by Descartes, Fermat, and other pioneers before the introduction of limits. I have taught this approach off and on for several decades.
This topic is interesting, although not news to some of us (indeed the citation of Descartes would seem to prove it is pretty old). I first learned the algebraic approach and wrote notes on it in 1967 at Brandeis, while studying the Zariski tangent space, and again in a solicited "book review" in 1978 of a calculus book called Lectures on Freshman Calculus, by Cruse and Granberg, in which Descartes' method is used only for quadratics, and the authors imply it does not work otherwise.
[I carefully explained to the publishers in my review that the method works for all polynomials, if understood properly, but they ignored me and published the misleading version anyway (in an otherwise excellent book, now long out of print). I realized later that like many people who ask for comments, they did not want corrections to their errors, but only wanted praise they could use in advertising copy.]
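For concreteness, here is a minimal sketch of the limit-free method as it works for any polynomial (the function names below are my own, not Descartes'): since f(x) - f(a) is exactly divisible by (x - a), one can define f'(a) as q(a), where q is the quotient polynomial. Everything is finite algebra, with no limits anywhere.

```python
def horner_divide(coeffs, a):
    """Synthetic division of a polynomial (coefficients listed from the
    highest degree down) by (x - a). Returns (quotient coefficients,
    remainder). By the remainder theorem, the remainder equals f(a)."""
    out = []
    acc = 0
    for c in coeffs:
        acc = acc * a + c
        out.append(acc)
    return out[:-1], out[-1]

def algebraic_derivative(coeffs, a):
    """f'(a) computed without limits: write f(x) - f(a) = (x - a) q(x),
    then f'(a) is q(a)."""
    q, _ = horner_divide(coeffs, a)   # q(x) = (f(x) - f(a)) / (x - a)
    _, qa = horner_divide(q, a)       # remainder of q by (x - a) is q(a)
    return qa

# f(x) = x^3 - 2x + 5 has f'(x) = 3x^2 - 2, so f'(2) = 10
print(algebraic_derivative([1, 0, -2, 5], 2))  # -> 10
```

The same two-line division works for every polynomial, which is the point the review tried to make: the method is not special to quadratics.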
As to Mr Devlin's comments on the contrast between intuitive continuity and the epsilon-delta definition of it, he is quite right, but again this is hardly shocking news. The point I routinely make to my classes, and presumably many others do as well, is that the intuitive property we want for continuity of real functions, captured in Euler's "freely hand drawn graph" description, is the statement of the intermediate value theorem. Indeed this was the definition of continuity given by some 19th century mathematicians, perhaps Dirichlet.
However today we realize that examples such as f(x) = sin(1/x) for x not zero, with f(0) = 0, have the IVT property (near zero) but lack the other intuitive property desirable for physics, namely that when the data we input into an experiment is approximately what is desired, the data output from the experiment should be approximately correct as well, i.e. the epsilon-delta definition.
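One can see the failure numerically (the sample points below are my own choice, a standard one): the points x_n = 1/((2n + 1/2)π) march down to 0, yet sin(1/x_n) = 1 at every one of them, so no delta can force f(x) within, say, 1/2 of f(0) = 0.

```python
import math

def f(x):
    """f(x) = sin(1/x) for x != 0, with f(0) = 0."""
    return math.sin(1.0 / x) if x != 0 else 0.0

# At x_n = 1/((2n + 1/2)*pi) we have 1/x_n = (2n + 1/2)*pi, so f(x_n) = 1.
# The x_n shrink toward 0 while f(x_n) stays a full unit away from f(0):
# epsilon-delta continuity fails at 0, even though on any interval around 0
# the graph sweeps through every value in [-1, 1], so the IVT holds there.
for n in (1, 10, 1000):
    x = 1.0 / ((2 * n + 0.5) * math.pi)
    print(f"x = {x:.2e}, f(x) = {f(x):.6f}")
```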
So we make the epsilon - delta definition for several very good reasons.
1) it is precise, and can actually be verified easily in specific cases such as polynomials and all elementary functions, so as to conclusively prove they satisfy it. It has an appropriate intuitive meaning, namely when x is near a, then f(x) is also near f(a).
2) it has as a CONSEQUENCE, i.e. as a provable result, the intuitive intermediate value property, in the case of functions defined on the real line.
3) it also embodies the desirable "physics" property above, i.e. if the measurements are approximately correct, then the result should be approximately correct. This epsilon-delta continuity of physical phenomena in the large is assumed in all laboratory physics experiments, else they would be useless in the presence of any error at all.
4) it also applies to cases where the domain space is not "continuous", i.e. we can also speak of functions on the rationals being continuous in the epsilon-delta sense, where we do not expect the intermediate value property to hold.
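A sketch of point 4 (the step function below is my own standard example, not one from the discussion): on the rationals, define f(x) = 0 when x² < 2 and f(x) = 1 when x² > 2. Since no rational squares to 2, f is defined on all of Q, and it is epsilon-delta continuous at every rational point, because around any rational a there is a small interval on which x² stays on the same side of 2, making f locally constant. Yet f(1) = 0 and f(2) = 1 while f never takes the value 1/2: the intermediate value property fails.

```python
from fractions import Fraction

def f(x: Fraction) -> int:
    """A function on the rationals: 0 below sqrt(2), 1 above it.
    Epsilon-delta continuous at every rational, but with no IVT."""
    assert x * x != 2, "no rational squares to 2, so this never fires"
    return 0 if x * x < 2 else 1

# Locally constant on each side of sqrt(2), hence continuous on Q,
# yet it jumps from 0 to 1 without ever taking any value in between.
print(f(Fraction(1)), f(Fraction(2)))                 # 0 1
print(f(Fraction(141, 100)), f(Fraction(142, 100)))   # 0 1
```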
The news that what is taught in many current textbooks is not quite all it should be is only a remark on the lack of scholarship of some textbook authors (or their desire to please their publishers), not on that of actual mathematicians. People who get their education from better sources, e.g. original works by great mathematicians, or from better textbooks, are not as limited by these misconceptions.
So I suggest that some, perhaps all, of the problems being posed, have been considered, even answered, hundreds of years ago; and one may profit from studying the historical development of the idea of derivatives by the old masters.
Or it may be more fun to rediscover it all over again for oneself.
Best wishes.