# Multivariable calculus without forms or manifolds

1. Dec 7, 2018

### Ansatz

Hi there all,

I'm currently taking a course in Multivariable Calculus at my university and would appreciate any recommendations for a textbook to supplement the lectures. Thus far the relevant material we've covered is a Single Variable course at around the level of Spivak, and some Linear Algebra at around the level of Strang (Linear Algebra and Its Applications). For reference, we are concurrently studying some more analysis in R, now at around the level of Rudin's PMA (though I do not think this has influenced the design of this module at all).

The content of the course is as follows:

• Continuous Vector-Valued Functions
• Some Linear Algebra
• Differentiable Functions
• Inverse Function Theorem and Implicit Function Theorem
• Vector Fields, Green’s Theorem in the Plane and the Divergence Theorem in R^3

We are also told that at the end of the module we should be able to:
• Demonstrate understanding of the basic concepts, theorems and calculations of multivariate analysis.
• Demonstrate understanding of the Implicit and Inverse Function Theorems and their applications.
• Demonstrate understanding of vector fields and Green’s Theorem and the Divergence Theorem.
• Demonstrate the ability to analyse and classify critical points using Taylor expansions.
From my experience the module started off with a fairly theoretical flavour, which remained present throughout the entirety of the material on differentiation. We started by studying a small but requisite amount of norms, metrics and topology, which was then used in our treatment of differentiation, especially the inverse and implicit function theorems (which receive a lot of emphasis in this module). We then moved on to the integration section of the course, which was, by contrast, more computational in flavour. It still seemed to me to be at a higher level than in books such as Stewart, but this was mostly down to the nature of the exercises and the presentation of the subject matter, as opposed to the content itself.

If any further information would help please feel free to ask and thank you to anyone who takes the time to assist me with this search!

2. Dec 7, 2018

### mathwonk

Hard to say with only this info, but Spivak's Calculus on Manifolds is a possible source. But do you want another source without manifolds and forms, or do you want one that does have those topics? Wendell Fleming's Calculus of Several Variables is also excellent for material like yours, but maybe more advanced.

3. Dec 7, 2018

### Ansatz

I have looked at Spivak's book and do intend to work through it myself in my spare time. As a course supplement, the material on differentiation seems to fit my needs well, but the section on integration is more sophisticated than our treatment of it. A book containing material on manifolds and forms is of course not a problem; however, to suit my needs it should come towards the end of the book, and the book also needs to treat topics like Green's theorem and the Divergence theorem without using material of that nature. I shall certainly look into Fleming's text - thank you for the suggestion.

4. Dec 7, 2018

### mathwonk

greens and divergence and stokes are all higher dimensional versions of the fundamental theorem of calculus. they are all proved in the same way, by using fubini's theorem to write a multiple integral as a repeated integral, i.e. repeated one dimensional integrals, then applying induction to reduce down to the one dimensional fundamental theorem. hence after you know the one dimensional fundamental theorem, the main new ingredient for those higher dimensional theorems is fubini's theorem. As I recall, Spivak does a nice job on fubini, so you would be well prepared in that sense by his treatment of integration. I don't know why it is more sophisticated than normal, except maybe the fact that he does n dimensions instead of just two and three. he also uses partitions of unity, but these are very useful to learn about as a tool for reducing global statements to local ones.
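As a quick numerical sketch of fubini's theorem, here is a double integral computed as an iterated integral in either order; the integrand f(x, y) = x·y² and the midpoint-rule integrator are my own illustrative choices, not from any of the books mentioned:

```python
# Fubini's theorem numerically: the double integral of f(x, y) = x * y**2
# over [0,1] x [0,2], computed as iterated one-dimensional integrals in
# either order. Exact value: (int_0^1 x dx)(int_0^2 y^2 dy) = (1/2)(8/3) = 4/3.

def integrate_1d(g, a, b, n=500):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x, y: x * y**2

# Integrate in x first, then in y.
dx_first = integrate_1d(lambda y: integrate_1d(lambda x: f(x, y), 0, 1), 0, 2)
# Integrate in y first, then in x.
dy_first = integrate_1d(lambda x: integrate_1d(lambda y: f(x, y), 0, 2), 0, 1)

print(dx_first, dy_first)   # both approximately 4/3
```

Both orders agree (up to the discretization error), which is the content of the theorem for this well-behaved integrand.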

he also has a nice proof of the inverse function theorem in the finite dimensional case, using local compactness. that theorem is also true in the infinite dimensional case, using only completeness, a weaker condition than compactness. This proof appears in Lang's Analysis I, or whatever the title of the newer edition is, maybe Undergraduate analysis.

Basically the inverse function theorem says that a smooth function f from V to V, where V is a complete normed vector space such as R^n, with invertible derivative at say 0, has a smooth inverse defined locally near 0. A general proof can be given based on the idea behind the geometric series, as follows. After composing with the inverse of the derivative of f, and a translation, we may assume f has the form f = I - e, where e is a function defined near 0, taking 0 to 0 and having derivative zero at 0, and I is the identity function.

the fact that f is locally injective is argued using the mean value theorem. the harder part is to show that f is locally surjective, i.e. has a local right inverse. Think of the case of real numbers, and the problem of finding a multiplicative inverse of 1-e, where 1 is the number one, and e is a number smaller than 1. The geometric series gives the inverse as g = 1/(1-e) = 1 + e + e^2 + e^3 + .... We can write this in a way that will also make sense for smooth functions of a vector variable as follows: define g0 = 1, g1 = 1 + e.g0 = 1 + e, g2 = 1 + e.g1 = 1 + e + e^2, ..., gn+1 = 1 + e.gn. Then the sequence of numbers (or functions, if e is a variable) gn converges to g, the multiplicative inverse of 1-e.
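The numerical iteration is easy to run; here is a sketch with an arbitrary choice of e (any |e| < 1 works):

```python
# The iteration g_{n+1} = 1 + e * g_n for a number |e| < 1, converging to
# the multiplicative inverse of 1 - e, exactly as described above.
e = 0.3
g = 1.0                  # g0 = 1
for _ in range(50):
    g = 1 + e * g        # g_{n+1} = 1 + e * g_n

print(g, 1 / (1 - e))    # both approximately 1.42857...
```

After n steps g is the partial sum 1 + e + ... + e^n, so the error is e^(n+1)/(1-e), which vanishes geometrically.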

In our case, f = I - e, where e is a smooth function with derivative zero at 0, and I is the identity function. Again define g0 = I, g1 = I + e.g0, g2 = I + e.g1, ..., gn+1 = I + e.gn, where now the dot means not multiplication but composition, i.e. gn+1 = I + e(gn). We claim these functions gn converge to some function g. In the series case the difference (gn+1 - gn) is just equal to e^(n+1), so the sum of these differences over n ≥ M goes to zero as M grows; the same thing is true here again, and we get the functions gn converging (locally near 0) to some function g.

Then note that for all n, we have f.gn = (I-e).gn = gn - e.gn = gn - (gn+1 - I) = I - (gn+1 - gn). Now as n --> infinity, we have (gn+1 - gn) --> 0. So f.gn converges to I, i.e. lim(f.gn) = I. We claim g is a right inverse to f, i.e. that f.(lim gn) = I. But as we have said, f.gn --> I.

Then I claim we can interchange limits to get I = lim(f.gn) = lim((I-e).gn) = (I-e).(lim gn) = f.g, i.e. g is a right inverse to f, at least locally.
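In one dimension the iteration can be evaluated numerically at a single point y; the specific choice f(x) = x - x², i.e. e(x) = x² (which does satisfy e(0) = 0 and e'(0) = 0), is my own illustrative example:

```python
# A one-dimensional instance of the iteration g_{n+1} = I + e . g_n,
# evaluated at a single point y. Here f(x) = x - x**2, so e(x) = x**2.
# The iterates converge to the value x of the local inverse at y,
# i.e. to the x near 0 with f(x) = y.
e = lambda x: x**2
f = lambda x: x - e(x)

y = 0.1
x = y                     # g0 = I, so g0(y) = y
for _ in range(60):
    x = y + e(x)          # g_{n+1}(y) = y + e(g_n(y))

print(x, f(x))            # f(x) is approximately 0.1
```

For this f the limit can be checked by hand: solving x - x² = 0.1 near 0 gives x = (1 - √0.6)/2 ≈ 0.11270, which is what the iteration produces.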

This is the proof essentially given in all books that claim to do the proof using the "contraction lemma" in a complete metric space, but it is usually disguised somewhat. But since this is such an important and complicated theorem, I like the fact that the underlying idea is just to imitate the construction in the geometric series.

Last edited: Dec 7, 2018
5. Dec 7, 2018

### mathwonk

for this approach, but not explained this way, see the first 20 pages of chapter XVIII of Lang's Undergrad Analysis:

One thing to mention as I recall, he argues not for convergence of functions, but for convergence of the vector values of those functions, i.e. convergence one point at a time. This is analogous to thinking of the geometric series as stated for a series of numbers rather than functions.

I notice he also uses differential forms for green's theorem and so on. The reason for that is that using forms makes all those theorems look exactly the same, i.e. they are all disguised versions of exactly the same statement about integrals of forms, namely: the integral of a form M over the boundary of a region equals the integral of dM over the region itself. This huge simplification makes it worth learning the language of forms. It was only after reading lang's proof of green's theorem for a rectangle that I realized how easy that theorem is.

It might be worth your while, after learning those theorems in classical form, to at least work out how their statements can be reformulated as statements involving forms, just to see the relationship and the simplification. It can be hard to remember the definitions of curl and divergence and so on, but it is easy to remember the definition of dM for any kind of form M, so the form language actually makes the computations much easier. I.e. the rules for alternating multiplication reduce all those operations to just remembering that df = ∂f/∂x dx + ∂f/∂y dy.

Last edited: Dec 7, 2018
6. Dec 7, 2018

### Ansatz

Wow, thanks for taking the time to make these posts! Your explanation of the proof of the inverse function theorem is particularly illuminating - couching it in terms of a geometric series is a very nice way of thinking about it. I'll certainly be using Spivak where possible given your recommendations. As an aside, I don't suppose you (or anyone else reading this) are familiar with Guzman's text "Derivatives and Integrals of Multivariable Functions"? It's a lower-level text than Spivak - so I will likely work through CoM to stretch myself anyway - but it seems to match the content of my course decently well, so it may make a nice supplement. I had never heard of Guzman's book until looking into sources for the module, and information on it seems scarce online, so if you do have an opinion to share it would be appreciated!

7. Dec 7, 2018

### mathwonk

well, guzman's book looks a bit terse, but not as terse as spivak. the treatment is organized similarly to spivak, but without forms and manifolds. notice in chapter 3, i think, guzman's first topic is contractions, which, as i said, the proof of the inverse function theorem can be based on. so you may be able to detect within his proof of that theorem an iterative construction of the value of the inverse function that resembles my geometric-series-type construction above. spivak's argument is different, and gets a contradiction from assuming the function f is not surjective onto any neighborhood of the image point f(0).

again note that the first topic in chapter 5 of guzman, on integration, is fubini's theorem, the fundamental inductive result for reducing multiple integrals to repeated one dimensional ones.

you are welcome for these remarks. It is a pleasure to have them appreciated. I noticed that analogy with geometric series some 45 years ago while trying to understand the idea behind lang's argument, but have not seen it pointed out anywhere else since. it probably does exist though since virtually everything that is natural and true has been noticed by someone.

Last edited: Dec 7, 2018
8. Dec 7, 2018

### mathwonk

I just noticed why the geometric series analogy does not occur in most explanations of the inverse function theorem, namely they have reduced the theorem to the contraction lemma, and the geometric series argument appears in the proof of the contraction lemma. so it is because they have separated out the proof into two steps and the geometric series argument appeared in the preliminary step.

Look at the lines leading down to equation (2.1) on page 3 of Keith Conrad's nice notes here:

and you will see a portion of a geometric series being summed to prove that the sequence of iterates of any contraction does form a cauchy sequence.

9. Dec 8, 2018

### mathwonk

Here is the contraction lemma proof of the inverse function theorem in outline.

(I am guessing at some of the hypotheses from memory).

A contraction is a map from a space to itself, that squeezes pairs of points closer together by at least a fixed ratio < 1, i.e. u:S—>S is a contraction on S if there is a constant c, with 0 < c < 1, such that for all pairs of points, p, q in S, we have |u(p) - u(q)| ≤ c |p-q|.

Lemma: If u is a contraction defined on a complete metric space S (such as a closed subset of finite dimensional euclidean space), then u has a unique fixed point. I.e. there is exactly one point p in S such that u(p) = p. Moreover, if q is any point at all of S, the sequence q, u(q), u(u(q)),… converges to the fixed point p.

Note: it is required that u(S) is contained in S. Note also that u takes this sequence into itself, so if the sequence converges, the limit must be a fixed point of u.
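The lemma is easy to watch in action; u(x) = cos(x) on [0, 1] is my own illustrative example of a contraction (it maps [0, 1] into itself and |u'(x)| = |sin x| ≤ sin 1 < 1 there):

```python
# The contraction lemma in action: u(x) = cos(x) is a contraction on the
# complete space [0, 1]. Iterating q, u(q), u(u(q)), ... from any starting
# point q converges to the same unique fixed point.
import math

u = math.cos
results = []
for q in (0.0, 0.3, 1.0):          # any starting point works
    x = q
    for _ in range(200):
        x = u(x)
    results.append(x)

print(results)                     # all approximately 0.739085...
```

All three starting points land on the same fixed point, and by the note above the limit must satisfy u(x) = x.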

Lemma: (mean value theorem): If f:S-->R^n is a smooth map on a closed bounded convex subset S of euclidean space R^n, and if there is a constant M, such that for all x in S we have |f’(x)| ≤ M, then for all pairs of points p,q in S, we have |f(p) - f(q)| ≤ M |p-q|.

Notice this is true on a closed interval, and follows from the usual one variable MVT.

Cor: If u:S-->S is smooth map on a closed bounded convex subset S of euclidean space, and if |u’(x)| ≤ 1/2 for all x in S, then u is a contraction on S.

Theorem (inverse function theorem): If e:W-->R^n is a smooth map defined on a nbhd W of 0 in euclidean space R^n, and if e(0) = 0 and e’(0) = 0, and I is the identity map, then the map f = I-e is a smooth homeomorphism from some small neighborhood of 0 to some (possibly other) small neighborhood of 0.

PROOF outline: Since e is smooth and has zero derivative at 0, there is some neighborhood of 0 where the derivative of e is less than 1/2 in norm, hence e is a contraction on some small neighborhood of 0. Of course 0 is the unique fixed point of e on such a neighborhood.

Now choose such a small closed ball S where e is a contraction with contraction factor 1/2, and then choose another ball T of half that radius. We will show that every point y of T is the image of a unique point x of S under the map I-e.

Given a point y of T, define u(x) = y - (f(x) - x) = y + e(x). Since e maps S into T, and translation by y maps T into S, this u maps S into S. Moreover since e is a contraction on S, so is u.

Thus u has a unique fixed point x, i.e. given y in T, there is a unique point x in S such that u(x) = x, i.e. such that y - (f(x) - x) = x, which means f(x) = y.

If we let U be the subset of points of S that are mapped by f into (the interior of) T, then U is open and f is a bijection from U to the interior of T, as desired.

Remark: Since the fixed point is the limit of the iterated sequence beginning with any point at all, given y we may start with 0 as a guess at the value of x. Then the sequence 0, u(0), u(u(0)), … must converge to x. But by definition u(p) = y + e(p), so this sequence is just 0, y, y + e(y), y + e(y + e(y)), …, i.e. this is the sequence analogous to the one associated to the geometric series!

One then has to show that the inverse of f is also smooth. Then by composing with a translation and the inverse of the derivative we get the general theorem:

Theorem: If f:W-->R^n is a smooth map defined on an open set W in R^n containing p, and if f’(p) is an invertible linear map, then f is a smooth homeomorphism, with smooth inverse, from some open nbhd of p to some open neighborhood of f(p).

Remark: The same theorem and same proof holds if we replace R^n by a Banach space, i.e. possibly infinite dimensional complete normed linear space, since all it used was completeness. The argument in Spivak's Calculus on manifolds on the other hand, uses local compactness, which only holds for finite dimensional Banach spaces.
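Here is a two-dimensional numerical sketch of the proof; the map f(x1, x2) = (x1 + x2², x2 + x1²) is my own illustrative choice, picked so that f'(0) is already the identity (so f = I - e with e = I - f vanishing to second order at 0, and no composing with the derivative inverse is needed):

```python
# Solving f(x) = y near 0 by iterating the contraction u(x) = y + e(x),
# where f(x1, x2) = (x1 + x2**2, x2 + x1**2) and e = I - f, i.e.
# u(x) = x + (y - f(x)). This is the iteration from the proof outline.
def f(x):
    x1, x2 = x
    return (x1 + x2**2, x2 + x1**2)

y = (0.1, 0.2)        # a point in the small ball T
x = (0.0, 0.0)        # start the iteration at 0, as in the remark
for _ in range(200):
    fx = f(x)
    x = (x[0] + y[0] - fx[0], x[1] + y[1] - fx[1])   # x <- u(x)

print(x, f(x))        # f(x) is approximately (0.1, 0.2)
```

Near the solution the derivative of u has small norm (its entries are -2·x1 and -2·x2), so the iterates contract onto the unique preimage x with f(x) = y.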

Last edited: Dec 9, 2018
10. Dec 11, 2018

### FourEyedRaven

One possibility is this Dover gem, "Advanced Calculus: An Introduction to Classical Analysis", by Louis Brand. It's concise and rigorous. It's also very cheap given the content.

Another is the classic "Calculus", by Tom Apostol. Specifically, volume 2. It is less concise than the previous book, but it's also more expensive.

11. Dec 12, 2018

### George Jones

Staff Emeritus
I wouldn't call it an analysis book, but I kind of like "Calculus of Several Variables" by Serge Lang.

12. Dec 12, 2018

### mathwonk

there is one reason for learning the multiplication of forms: namely it makes it easy to remember the statement of the theorems of green, stokes, gauss, etc...

i.e. if we have a plane region R bounded by a closed curve C, then the path integral of Pdx + Qdy over C equals some integral over R, but what is the integrand? The formula using forms makes this a mechanical calculation as follows: it is simply d(Pdx+Qdy), whatever that is.

ok: d(Pdx+Qdy) = dP^dx + dQ^dy = (∂P/∂x dx + ∂P/∂y dy)^dx + (∂Q/∂x dx + ∂Q/∂y dy)^dy = ∂P/∂x dx^dx + ∂P/∂y dy^dx + ∂Q/∂x dx^dy + ∂Q/∂y dy^dy
= (now use dx^dx = 0 = dy^dy, and dy^dx = - dx^dy, to get) (∂Q/∂x - ∂P/∂y) dx^dy.

Thus greens theorem says that the integral of Pdx + Qdy over C = ∂R equals the integral of (∂Q/∂x - ∂P/∂y) dx^dy over R.

this works in all cases... i.e. the integral of F over ∂R equals the integral of dF over R.
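To make this concrete, here is a numerical check of green's theorem on the unit square; P = xy and Q = x² are my own illustrative choices, so ∂Q/∂x - ∂P/∂y = 2x - x = x:

```python
# Green's theorem checked numerically on R = [0,1]^2 with P = x*y, Q = x**2,
# so dQ/dx - dP/dy = x and the double integral over R is 1/2.

def integrate_1d(g, a, b, n=4000):
    """Midpoint Riemann sum of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

P = lambda x, y: x * y
Q = lambda x, y: x**2

# Line integral of P dx + Q dy counterclockwise around the boundary C.
bottom = integrate_1d(lambda x: P(x, 0.0), 0, 1)       # y = 0, dy = 0
right  = integrate_1d(lambda y: Q(1.0, y), 0, 1)       # x = 1, dx = 0
top    = -integrate_1d(lambda x: P(x, 1.0), 0, 1)      # y = 1, x: 1 -> 0
left   = -integrate_1d(lambda y: Q(0.0, y), 0, 1)      # x = 0, y: 1 -> 0
line_integral = bottom + right + top + left

# Double integral of (dQ/dx - dP/dy) = x over R.
double_integral = integrate_1d(
    lambda x: integrate_1d(lambda y: x, 0, 1), 0, 1)

print(line_integral, double_integral)   # both approximately 0.5
```

The two sides agree, as the theorem promises: the path integral over ∂R equals the integral of d(Pdx + Qdy) over R.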

exercise: show that d(Ady^dz + Bdz^dx + C dx^dy) = (∂A/∂x + ∂B/∂y + ∂C/∂z)dx^dy^dz.

I hope this is correct but I did not do it out, I just went with the symmetry of the statements. ...
oops, Confession: I just got it wrong, and then did it out in my head; hopefully it is correct now.

moral: well the moral is obvious: don't make a statement you have not checked, to reduce the likelihood of looking like an ....... of course i could still be wrong, so it doesn't totally rule out looking like an......
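in that spirit, the exercise can also be checked numerically: the 2-form identity is the divergence theorem in classical language, so the outward flux of F = (A, B, C) through the boundary of the unit cube should equal the integral of ∂A/∂x + ∂B/∂y + ∂C/∂z over the cube. The choice F = (xy, yz, zx), with divergence x + y + z, is my own illustrative example:

```python
# Divergence theorem on the unit cube with F = (A, B, C) = (x*y, y*z, z*x),
# so dA/dx + dB/dy + dC/dz = y + z + x, whose integral over the cube is 3/2.

def integrate_2d(g, n=200):
    """Midpoint sum of g over the unit square."""
    h = 1.0 / n
    return sum(g((i + 0.5) * h, (j + 0.5) * h)
               for i in range(n) for j in range(n)) * h * h

A = lambda x, y, z: x * y
B = lambda x, y, z: y * z
C = lambda x, y, z: z * x

# Outward flux through the six faces of the cube.
flux = (integrate_2d(lambda y, z: A(1, y, z)) - integrate_2d(lambda y, z: A(0, y, z))
      + integrate_2d(lambda x, z: B(x, 1, z)) - integrate_2d(lambda x, z: B(x, 0, z))
      + integrate_2d(lambda x, y: C(x, y, 1)) - integrate_2d(lambda x, y: C(x, y, 0)))

print(flux)   # approximately 1.5, matching the volume integral of the divergence
```

The flux comes out to 3/2, matching the integral of the divergence, i.e. the coefficient (∂A/∂x + ∂B/∂y + ∂C/∂z) in the exercise is the right one.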

Last edited: Dec 12, 2018
13. Dec 15, 2018

### MidgetDwarf

Hubbard and Hubbard: Vector Calculus, Linear Algebra, and Differential Forms?