Book for self-learning calculus and learning in a foreign language?

Summary
Learning calculus in one's native language is recommended to avoid the distraction of language barriers while tackling complex subjects. It is suggested to use a native language book for foundational understanding, while also engaging with an English version to familiarize oneself with terminology. Popular calculus textbooks mentioned include Stewart's for practical applications, Spivak for rigorous understanding, and Apostol for a formal approach. The effectiveness of online resources, such as YouTube lectures, is debated, with some advocating for traditional study methods over passive consumption. Ultimately, the choice of resources should align with individual learning styles and goals.
  • #31
semperfidelis said:
What about more science focused schools like Caltech or MIT?
fresh_42 said "I think that "self-learning" shouldn't follow any specific university. The basics are the same everywhere, and dozens of free lecture notes are available. Choosing one that suits one's individual fondness rather than sticking to a dogmatic call for a certain university is fairly easy. Calculus is the same everywhere. In Edinburgh, Toronto, Kaiserslautern, Boston, or Bournemouth."

I never said otherwise. All I was trying to convey is that a good education aims at insight first, especially in science, which involves reasoning and understanding. And I was assuming that that kind of education is what you get in top schools (in general; there are always exceptions).

Muu9 said:"Caltech used (maybe still uses) Apostol I'm the first quarter. Interestingly, it is the only US college to have calculus in high school as a hard requirement for admissions. More interestingly, for the second and third quarters students are given the option to switch into a more applied track, which many take."
So Apostol in the 1st qtr of Freshman year (I guess it's a short time to cover both volumes), and then you can choose a more applied track meaning switching to something like Stewart?
 
  • #32
semperfidelis said:
So Apostol in the 1st qtr of Freshman year (I guess it's a short time to cover both volumes), and then you can choose a more applied track, meaning switching to something like Stewart?
More like Marsden
 
  • #33
user079622 said:
Integration before differentiation is very unlikely in math and physics classes/books. I don't like that order. The order must be logical: you first learn differentiation, then antidifferentiation.
You can always correct your understanding over time. There's a long time gap between now and more advanced work. There's something to be said for expediency, getting started ASAP. None of the choices presented here will permanently or inevitably distort your understanding.

I personally, in reviewing a few things in vector calculus, liked some YouTube presentations, such as those of Trefor Bazett and Denis Auroux from MIT's OCW, as an aid and complement to your learning process.
 
  • #34
WWGD said:
You can always correct your understanding over time. ... None of the choices presented here will permanently or inevitably distort your understanding.
Sorry if I came off too much like a "get off my lawn" type; it just seemed OP was overthinking their choice. Ultimately, OP, if you're able to, drop by a college or technical bookstore, browse the Math/Calculus section, and leaf through a few books to get an idea of which may be better suited to you. If that's not available, check the reviews of calculus books on Amazon or similar.
 
  • #35
You may not agree, but I want to speak up for introducing integration first, as the best approach.
In fact, the choice by Apostol to present integration first is entirely logical and historically justified, as he explains in his introductory remarks. First of all, it is a mistake to think that integration is the same as antidifferentiation. This mistake is a result of the unfortunate tendency to teach differentiation first, then integration, accompanied quickly by the modern discovery that, in some good cases, integrals can be calculated by antidifferentiation. This fact apparently causes almost all students (including me) to forget what integration really is, and to remember only the computational technique of antidifferentiation.

Antidifferentiation cannot always be used to compute integrals, for two reasons. First, it is not even true that a (Riemann) integral equals an antiderivative except for continuous functions, whereas integrals exist for many discontinuous functions. Second, even for continuous functions, the antiderivative of a familiar continuous function is almost never itself a familiar function, except for very simple functions and problems cooked up to work in book exercises. What, for example, is the antiderivative of a simple continuous function like cos(x^2)? And the integral of a step function, the simplest of all functions to find the area under, is not given by an antiderivative at all, i.e. the area function is not differentiable at the end of a "step".
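To make this concrete, here is a minimal numerical sketch (my own illustration in plain NumPy, not anything from Apostol): the midpoint Riemann sums of cos(x^2) converge to the integral even though no elementary antiderivative is available.

Python:
# Minimal sketch: midpoint Riemann sums of cos(x^2) on [0, 1].
# cos(x^2) has no elementary antiderivative, yet the sums converge,
# illustrating that integration does not depend on antidifferentiation.
import numpy as np

def midpoint_riemann(f, a, b, n):
    """Midpoint Riemann sum of f over [a, b] with n equal subintervals."""
    x = a + (np.arange(n) + 0.5) * (b - a) / n
    return np.sum(f(x)) * (b - a) / n

for n in (10, 100, 1000):
    s = midpoint_riemann(lambda x: np.cos(x**2), 0.0, 1.0, n)
    print(f"n = {n:4d}: integral of cos(x^2) on [0,1] ~ {s:.6f}")

The sums settle near 0.904524 (a Fresnel-type value), with no antiderivative in sight.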

Riemann integration is an extension of Archimedes' method of exhaustion: defining, and sometimes computing, areas and volumes of curved regions as limits of those of polygonal regions. In this way Archimedes easily computed the area of a circle as a limit of areas of triangles, a computation which is much harder using calculus. For volumes, his basic technique was that two solid regions have the same volume if all their plane sections parallel to some fixed plane have the same areas. By this method he proved that the volume of a ball is the difference of the volumes of a cylinder and a cone, which can also be proved, with more effort, by calculus. By the same method he even proved that the volume of a "bicylinder" is the difference between the volumes of a rectangular block and a square-based pyramid. Moreover, he also deduced the surface areas of these figures, something which seems much more difficult using calculus; at least in the second case it is a problem not solved in any calculus book I have seen.
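In modern notation (my paraphrase; Archimedes of course argued geometrically), the inscribed regular n-gon decomposes into n isosceles triangles with apex angle 2π/n at the center, so

$$A_n \;=\; n\cdot\tfrac12\,r^2\sin\frac{2\pi}{n} \;=\; \pi r^2\cdot\frac{\sin(2\pi/n)}{2\pi/n} \;\longrightarrow\; \pi r^2 \qquad (n\to\infty),$$

with no derivative in sight.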

The idea of differentiation seems to have emerged only centuries later, followed by the observation that the area function of a region with continuously moving boundary has a derivative equal to the height of the region, and thus by the method of antidifferentiation as a way of computing some integrals. Students who immediately discard all concepts of integration except this last one are thus left helpless to deal with any integral whose antiderivative they cannot guess, and cannot appreciate methods of approximation, which are based on limiting methods such as those of Archimedes.

For these reasons, I support the teaching of integration before differentiation, as more enlightening, as well as historically accurate: one starts from the more intuitive notion of just adding up blocks and taking limits, to get area, and eventually arrives at the more sophisticated notion of computing the rate of change of area formulas for figures with continuously moving boundaries.

And Apostol is not the only author to use this approach; at least one of the other great classics, namely Courant, does the same. Thus of the three widely accepted 20th century classics of rigorous calculus (Spivak, Apostol, and Courant), two out of three introduce integrals first.

Just my opinion of course. Views on best pedagogy are never universal. (I also agree with fresh_42 that one is likely to learn much more easily from a book written in one's own native language.)
 
  • Like
Likes martinbn and fresh_42
  • #36
mathwonk said:
For these reasons, I support the teaching of integration before differentiation, as more enlightening, as well as historically accurate ...
I'm a bit unsure about what you mean here. Archimedes maybe? Cauchy (1789–1857) and Riemann (1826–1866) were certainly later than Taylor (1685–1731). Dieudonné writes
The first half of the 17th century was marked, on the one hand, by the creation of the coordinate method (so-called "analytic geometry") by Fermat and Descartes, and, on the other, by the progressive and unsystematic introduction of the fundamental ideas of calculus by numerous mathematicians. These ideas were systematized and provided with convenient notations and general algorithms by Newton and, above all, Leibniz, beginning in 1665. They were soon joined by the brothers Jacob and John Bernoulli, as well as some lesser mathematicians such as Cotes, B. Taylor, Stirling, and de Moivre.

On the other hand, there was Euler (1707–1783), about whom Dieudonné writes
Also worth mentioning is the treatise "De insigni usu calculi imaginariorum in calculo integrali": Euler performed complex variable transformations, demonstrating that, with the help of complex numbers, one can combine or derive formulas of the integral calculus, particularly those related to logarithms and the arctangent; these formulas, in fact, look completely different in the real domain. Euler's contemporaries took careful note of these phenomena, which contributed to the widespread use of complex numbers in analysis.

Wikipedia supports your point of view:
Isaac Barrow, Newton's academic teacher, recognized that the calculation of areas (integral calculus) and the calculation of tangents (differential calculus) are in some ways inverse to each other, but he did not discover the fundamental theorem. The first to publish this was James Gregory in 1667 in Geometriae pars universalis. The first to recognize both the connection and its fundamental significance were Isaac Newton and Gottfried Wilhelm Leibniz, independently of one another, with their infinitesimal calculus. In his first notes on the fundamental theorem from 1666, Newton explained the theorem for arbitrary curves through the origin, which is why he ignored the constant of integration. Newton did not publish this until 1686 in his Philosophiae Naturalis Principia Mathematica. Leibniz discovered the theorem in 1677, writing it down essentially in today's notation.
 
Last edited:
  • #37
Yes, I was referring to Archimedes for the origin of the ideas of integration, as does Apostol in the historical remarks I cite. In my view, Archimedes' ideas are already adequate both to define integration and to prove Cavalieri's principle. Of course, Riemann makes much more precise the ideas present in Archimedes, but I do not date the origin of integration from these precise statements. If one is going to credit Newton with the FTC, e.g., as you point out, one obviously cannot hold that the integral originated with Riemann.
I am curious in this regard about the origin of the concept of continuity, sometimes also attributed (in a precise form) to 19th century mathematicians. E.g. I am curious to see the version of the fundamental theorem stated by Newton, i.e. how he phrased the necessary continuity hypothesis for the first part of the FTC. I have mainly studied Riemann's treatment of the integral. The only writing on the integral by Newton I have seen was an argument that it exists for (piecewise) monotonic functions, something I believe should be a standard feature of college calculus courses, which usually skip the harder proof for continuous ones.

From glancing at the (Latin?) work by Gregory that you cite, my impression is that he just drew a picture of a continuous-looking curve and went from there.

On the other hand, I can make an argument that Euclid's characterization of the tangent line to a circle, as the line meeting the circle such that no other line can be interpolated between it and the circle, anticipates Newton's limiting definition of the tangent line as a limit of secants. So maybe, to be consistent, I should allow that the origins of integrals and derivatives both date from at least Euclid's time. So I guess, as usual, questions of history, and the origins and attributions of ideas, are infinite and impossible to make definite. They all seem to grow together and intertwine through time.

Still, I personally do think it follows from this history that the ideas of the integral were much more highly developed than those of the derivative, hundreds of years earlier.

It seems that in the remarks you cite, I was actually quoting Apostol (without citation):
"The approach in this book has been suggested by the historical and philosophical development of calculus and analytic geometry. For example, integration is treated before differentiation. Although to some this may seem unusual, it is historically correct and pedagogically sound. Moreover, it is the best way to make meaningful the true connection between the integral and the derivative."

But I completely agree with him.
 
Last edited:
  • #38
mathwonk said:
I am curious in this regard about the origin of the concept of continuity, sometimes also attributed (in a precise form) to 19th century mathematicians.
I usually cite Dieudonné, who wrote a book about the history of mathematics from 1700 to 1900. He is a bit biased towards the great French mathematicians, which may explain why he names Cauchy:
Cauchy, Cours d'analyse de l'École royale polytechnique (Debure, Paris, 1821) said:
The function f is continuous at the point x if the absolute value (the numerical value) of the difference f(x + a) —f(x) "decreases with a so that it becomes smaller than any finite number".
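In modern ε-δ form (an anachronistic restatement, not Cauchy's wording), this reads:

$$f \text{ is continuous at } x \iff \forall\,\varepsilon>0\ \exists\,\delta>0:\ |a|<\delta \implies |f(x+a)-f(x)|<\varepsilon.$$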
But even Dieudonné observed
All proofs proposed for the "Fundamental Theorem of Algebra" between 1740 and 1830 differ from the usual algebraic considerations in that continuity considerations are included in some way at a certain stage of the proof.
mathwonk said:
So I guess, as usual, questions of history, and the origins and attributions of ideas, are infinite and impossible to make definite. They all seem to grow together and intertwine through time.
That is so true. Many concepts were ripe at a certain time in history and were considered by various scientists. We tend to attribute them to the person who made the main contribution, but that person is rarely the only one. I still bemoan Pythagoras's theorem: it should be called the Babylonian theorem instead. And this is just one example among truly many. Even in Einstein's case, the finiteness of the speed of light was already known, and SR is based on this fact.

I think that the most we can expect is to find the paper in which certain technical terms were first mentioned. An example is the prefix "eigen-": Hilbert first used it in a paper from 1904, in which he defined "Eigenfunktionen". The concept itself (eigenvalues) is older.
 
Last edited:
  • #39
Thank you for the references. I have consulted a translation of a 17th century book by Barrow containing his proof of the FTC. To get an idea of what we are dealing with when comparing historical accounts with modern ones, and in particular of the irrelevance of my concern as to which definition of continuity was originally used in the hypothesis of the 1st FTC, take a look at his argument, page 117.

He shows that if one takes "any" curve, whose definition is given merely by drawing a picture of a smooth monotone graph (below the axis), and then graphs its "area function" (shown above the axis, and whose definition I have not traced), then the line TF, whose slope equals the height of the original curve, is tangent to the graph of the area function in Euclid's sense that it "touches" (i.e. meets but does not cross) the (fortunately convex) graph of the area function.

[I have just noticed that Barrow's argument is almost equivalent to the one I myself give, and learned from Mackey, by drawing an increasing graph and comparing the change in area under the curve to the change in area of a rectangle with height equal to the height of the curve! I.e. to one side the change in area of the rectangle is greater, and to the other side less, than that of the curve. One difference is that continuity is not needed in Barrow's version of the argument, since he is checking Euclid's definition of a tangent, not Newton's. I.e. Euclid's definition of a tangent to a convex curve, as a line that meets but lies entirely on one side of the curve, holds for some lines even for curves we would not consider to have a tangent, e.g. the absolute value graph y = |x| at the point (0,0). The problem here is that there are many such "tangent" lines to this curve. Newton's definition, but not Euclid's, ensures the tangent line is unique.]

So, as I should have known, the "theorems" historically precede the definitions needed to make them precise and rigorous. In particular, the first "proofs" of the FTC relating differentiation and integration precede Riemann's precise definition of integration by roughly 200 years. Of course, if we include Archimedes, as I think we should, the gradual definition of integration proceeded over some 2,000 years.
(This lack of precise definitions in some 17th century mathematics plays a role earlier in this book, where Barrow points out that another man's attempt (Tacquet, p. 44) to apply Archimedes' method of infinitesimals to surface area gives the wrong result.)

Fascinating stuff! How much easier might it be to teach usable calculus if we did not try to teach rigorous limits first?

https://archive.org/details/geometricallectu00barruoft/page/116/mode/2up
 
Last edited:
  • #40
mathwonk said:
Thank you for the references.
Au contraire! Thank you for the Barrow link. I always appreciate finding original works and statements. I have just written a historically motivated article about vector spaces, and it was very interesting to read what, e.g., Heisenberg and Schrödinger said about their work on QM and why it has to do with vector spaces.
 
  • #41
By the way, Barrow's argument is about the simplest proof of the (first part of the) fundamental theorem of calculus I have ever seen, using just elementary Euclidean geometry, and using Euclid's definition of a tangent line to a curve that is convex upwards at p: a line meeting the curve at p but otherwise lying below it near p. Thus he has to show that if the original function is y = f(x), and the area function under graph(f) from 0 to x is A(x), then the line L through p = (x, A(x)) with slope f(x) lies below graph(A) both to the left and to the right of p. Then the slope of graph(A) at p is f(x), so that f is the derivative of its area function A.

He assumes that the function f is monotone, say increasing, near x. He then wants to show that for h = delta(x) > 0, delta(A) = A(x+h) - A(x) is greater than delta(L), the rise in the line L from x to x+h, and that the decrease in A is less than that of L when h = delta(x) < 0. But looking back at the graph of f, one sees that delta(L) = (slope of L)·h = f(x)·h = the area of the rectangle with height f(x) and base h, while delta(A) is the area under the portion of graph(f) with the same base h. Since f is increasing, the rectangle with height f(x) lies under the graph of f when h > 0, and lies above it when h < 0. Hence the line L, which meets graph(A) at p = (x, A(x)), rises more slowly than A as we move to the right, and drops more rapidly as we move to the left; i.e. the line L lies entirely below the curve graph(A) near p. QED.
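In modern notation, the comparison just made is the following squeeze (my summary, writing h for delta(x)): for f increasing near x,

$$f(x)\,h \;\le\; A(x+h)-A(x) \;\le\; f(x+h)\,h \quad (h>0), \qquad f(x+h)\,|h| \;\le\; A(x)-A(x+h) \;\le\; f(x)\,|h| \quad (h<0).$$

In both cases A(x+h) ≥ A(x) + f(x)h, i.e. the line of slope f(x) through p lies below graph(A): Euclid's tangency. If f is moreover continuous at x, dividing by h and letting h → 0 squeezes the difference quotient to f(x), giving Newton's tangency as well.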
[Attached image img12_14.jpg: an increasing purple curve (graph of f) with green comparison rectangles of height f(x) and base delta(x) on either side of x.]
In particular the argument follows from understanding this picture, where the fixed vertical solid line has height f(x).

Notice that continuity of f is not used here, reflecting the fact that Euclid's definition of a tangent line does not imply uniqueness of the line. So this proof shows only that the line with slope f(x) is tangent to graph(A) at (x, A(x)) in Newton's sense, provided the tangent line exists in his sense, i.e. provided it is the unique line satisfying Euclid's definition.

Euclid of course is aware of the importance of uniqueness of the tangent line, but since he is treating only the case of a circle, he is able, in Prop.16, Book III, to prove uniqueness in that case as a corollary of his definition, showing no other line can be interposed between the circle and his tangent line.

In my opinion, Newton may well have used that property in Euclid's Prop. 16 as inspiration for his own definition. At least a careful translation of the fact that no other line can be interposed between the tangent and the circle does yield the modern statement that the tangent is a limit of secants, (for a convex curve). I.e. given any other line making any positive angle e>0 with the tangent line, the circle eventually gets between these two lines; hence (the circle being convex), given any e >0, all secants to points near enough to p, are closer than e to the tangent line.
 
Last edited:
  • #42
Another observation, obtained by contemplating Barrow, makes the FTC almost obvious. Namely, we are asking for the linear function that best approximates the area function of f near a, i.e. for the slope of the line that best approximates the graph of that area function near a. But the area function of a graph of constant height is linear, with slope equal to the height of that graph. So we are asking which function of constant height has an area function best approximating the area function of f near a, and the answer is obviously the function whose constant height agrees with the height of f at a. I.e. the rectangle whose area is closest to the area under graph(f) near a is obviously the one with height equal to f(a), the height of graph(f) at a.

This method even computes one-sided derivatives of a function with a jump discontinuity at a. Namely, the best approximation to the area from the left is by a rectangle whose height is the (limiting) value of f before the jump, and from the right, by one whose height is the value after the jump.

One can even deduce that this is the derivative in Newton's sense. I.e. the error in approximating delta(A) by a rectangle of height f(a) is (for a monotone function f) less than delta(f)·delta(x), which, for f continuous, approaches zero faster than delta(x) as that quantity goes to zero, since f continuous implies that delta(f)-->0 as delta(x)-->0.
E.g. in the picture in post #41, if you look, say, at the left green rectangle, the error in approximating the area under the purple curve by the green rectangle is the area of a curved "triangle" with height delta(f) and base delta(x), a quantity of "second order" smallness for continuous f.
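In modern notation (my restatement of the estimate above, writing h for delta(x)):

$$\bigl|\,A(a+h)-A(a)-f(a)\,h\,\bigr| \;\le\; |f(a+h)-f(a)|\cdot|h| \;=\; o(h) \qquad (h\to 0),$$

valid for f monotone near a and continuous at a; this is exactly the modern definition of A'(a) = f(a).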
 
Last edited:
