mathwonk
Science Advisor
Homework Helper
An apology and some comments:
I apologize for making critical comments no one was interested in, comments which stemmed from not reading Dave's introduction carefully enough. He said there that he was not interested in "getting it right," whereas "get it right" is my middle name (it was even chosen as the tagline under my photograph in the high school yearbook, by the yearbook editor; now I know why!). I have always felt this way, even as an undergraduate, but apparently not everyone does. My happiest early moments in college came when the fog of imprecise high school explanations was rolled away by precise definitions and proofs.
On the first day of my beginning calculus class the teacher handed out axioms for the reals, and we used them to prove everything. In the subsequent course the teacher began with a precise definition of the tangent space to the uncoordinatized Euclidean plane as the vector space of translations of the plane.
E.g., given a translation and a point p, you get a tangent vector based at p by letting p be the foot of the vector, then applying the translation to p and taking the result as the head of the vector.
This provides an isomorphism between a single vector space and all the spaces Tp(R^n) at once. Then we proceeded to do differential calculus in Banach space, and derivatives were defined as (continuous) linear maps from the get-go.
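The translation construction can be sketched concretely (plain Python; the function names are my own illustration, not anyone's established notation):

```python
def translate(v, p):
    """Apply the translation by displacement v to the point p."""
    return (p[0] + v[0], p[1] + v[1])

def tangent_vector_at(p, v):
    """The tangent vector at p determined by the translation v,
    recorded as (foot, head)."""
    return (p, translate(v, p))

# The same translation v yields a tangent vector at every point p,
# which is what identifies all the spaces Tp(R^2) with one another.
foot, head = tangent_vector_at((2.0, 3.0), (1.0, 0.0))
# foot is (2.0, 3.0) and head is (3.0, 3.0)
```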
So I never experienced the traditional undergraduate calculus environment until trying to teach it. As a result I do not struggle with the basic concepts in this subject, but do struggle to understand attempts to "simplify" them.
I am interested in this material and will attempt to stifle the molecular imbalances which are provoked involuntarily by imprecise statements used as a technique for selling a subject to beginners.
One such point, concerning the use of "variables," will appear below in answer to a question of Hurkyl's.
Regarding post #6 from Tom: why does Dave derive the basis of Tp(R^2) the way he does, instead of merely using the fact that that space is isomorphic to R^2, and hence has as a basis the basis of R^2?
I think the point is that the space is not equal to R^2, but only isomorphic to it. Hence a basis for that space should be obtained from the basis of R^2 via a given isomorphism.
Now the isomorphism from Tp(R^2) to R^2 proceeds by taking velocity vectors of curves through p, so Dave has chosen two natural curves through p, the horizontal line and the vertical line, and he has computed their velocity vectors, showing them to be <1,0> and <0,1>.
So we get not just two basis vectors for the space, but a connection between those vectors and curves in the plane P. (Of course we have not proved directly that they are a basis of Tp(P), but that is true of the velocity vectors of any two "transverse" curves through p.)
So if you believe it is natural to prefer those two curves through p, then you have specified a natural isomorphism of Tp(R^2) with R^2. In any case the construction shows how the formal algebraic vector <1,0> corresponds to something geometric associated to the plane and the point p.
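That computation of the two velocity vectors can be checked numerically (a sketch of my own, using finite differences rather than symbolic derivatives):

```python
def velocity(curve, t=0.0, h=1e-6):
    """Central-difference velocity vector of a plane curve at parameter t."""
    (x0, y0), (x1, y1) = curve(t - h), curve(t + h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

p = (2.0, 5.0)
horizontal = lambda t: (p[0] + t, p[1])   # the horizontal line through p
vertical   = lambda t: (p[0], p[1] + t)   # the vertical line through p

vh = velocity(horizontal)   # approximately <1, 0>
vv = velocity(vertical)     # approximately <0, 1>
```

The two velocity vectors are (up to rounding) exactly the standard basis <1,0>, <0,1>, now tied to concrete curves through p.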
In post #18, Hurkyl asks whether dx and dy are being used as vectors or as covectors. This is the key point that puzzled and confused me for so long. Dave has consciously chosen to extend the traditional confusion of x and y as "variables" on R^2 to an analogous confusion of dx and dy as variables on Tp(R^2).
The confusion is that the same letters (x,y) are used traditionally both as functions from R^2 to R, and as the VALUES of those functions, as in "let (x,y) be an arbitrary point of R^2."
In this sense (x,y) can mean either a pair of coordinate functions, or a point of R^2. Similarly, (dx,dy) can mean either a pair of linear functions on Tp(R^2), i.e. a pair of covectors, or a pair of numbers in R^2, hence a tangent vector in Tp(R^2) via its isomorphism with R^2 described above.
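The two readings can be made explicit in code (again my own illustrative sketch):

```python
# Reading 1: dx and dy as covectors, i.e. linear functions on Tp(R^2).
dx = lambda v: v[0]   # picks out the first component of a tangent vector
dy = lambda v: v[1]   # picks out the second component

v = (3.0, 4.0)        # a tangent vector at p, via Tp(R^2) ~ R^2

# Reading 2: "(dx, dy)" as the pair of VALUES of those covectors on v,
# which is a pair of numbers in R^2, hence again a tangent vector.
values = (dx(v), dy(v))   # the same data as v itself
```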
So Dave is finessing the existence of covectors entirely.
This sort of thing is apparently successful in the standard undergraduate environment or Dave would not be using it, but it is not standard practice among mathematicians, who tend to take one point of view on the use of a notation; here that view is that x and y are functions, and dx and dy are their differentials.
There is precedent for this type of attempt to popularize differentials as variables and hence render them useful earlier in college. M.E. Munroe tried it in his 1970 book Calculus, from Saunders, but it quickly went out of print. Fortunately I think Dave's book is much more user friendly than Munroe's.
(Munroe intended his discussion as calculus I, not calculus III.)
In post #43, Gza asked what a k cycle is, after I said a k form was an animal that gobbles up k cycles and spits out numbers.
I was thinking of a k form as an integrand as Dave does in his introduction, and hence of a k cycle as the domain of integration. Hence it is some kind of k dimensional object over which one can integrate.
Now the simplest version would be a k dimensional parallelepiped, and that is spanned by k vectors in n space, exactly as Gza surmised. A more general such object would be a formal algebraic sum, or linear combination, of such things, and a nonlinear version would be a piece of k dimensional surface, or a sum or linear combination of such pieces.
Now, to integrate a k form over a k dimensional surface, one could parametrize the surface via a map from a rectangular block, and then approximate that map by the linear map of the block obtained from the derivative of the parameter map.
Then the k form would see the approximating parametrized parallelepiped and spit out a number approximating the integral.
By subdividing the block we get a family of smaller approximating parallelepipeds and our k form spits out numbers on these that add up to a better approximation to the integral, etc...
So k cycles of the form "sum of parallelepipeds" do approximate nonlinear k cycles, for the purposes of integrating k forms over them.
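The subdivision procedure can be sketched for k = 2 (my own toy implementation; the names and the choice of dx^dy as the form are mine):

```python
def dx_wedge_dy(u, v):
    """The 2-form dx^dy evaluated on a pair of vectors in R^3."""
    return u[0] * v[1] - u[1] * v[0]

def integrate_2form(form, phi, n=100):
    """Integrate a 2-form over phi: [0,1]^2 -> R^3 by replacing each small
    parameter square with the parallelogram spanned by finite-difference
    velocity vectors, and letting the form eat each parallelogram."""
    h, total = 1.0 / n, 0.0
    for i in range(n):
        for j in range(n):
            s, t = i * h, j * h
            p = phi(s, t)
            du = tuple(a - b for a, b in zip(phi(s + h, t), p))  # one side
            dv = tuple(a - b for a, b in zip(phi(s, t + h), p))  # other side
            total += form(du, dv)   # the form spits out a number
    return total

# Example: the flat unit square in the xy-plane; integrating dx^dy gives 1.
flat = lambda s, t: (s, t, 0.0)
approx = integrate_2form(dx_wedge_dy, flat)
```

Refining the subdivision (larger n) makes the sum of parallelepiped values converge to the integral, which is exactly the approximation described above.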
The whole exercise people are going through, trying to "picture" differential forms, may be grounded in denial of their nature as covectors rather than vectors. I.e., one seldom tries to picture functions on a space geometrically, except perhaps as graphs.
On the other hand I have several times used the technique of discussing parallelepipeds instead of forms. That is because the construction of 2 forms from 1 forms is a formal one, that of taking an alternating product. The same, or an analogous, construction that sends pairs of 1 forms to 2 forms also sends pairs of tangent vectors to (equivalence classes of) parallelograms.
I.e., there is a concept of taking an alternating product: applied to 1 forms it yields 2 forms, and applied to vectors it yields "alternating 2-vectors".
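For 1 forms, the alternating product has a one-line formula, (a^b)(u,v) = a(u)b(v) - a(v)b(u), which can be sketched directly (my own illustration):

```python
def wedge(alpha, beta):
    """Alternating product of two 1-forms, giving a 2-form on vector pairs:
    (alpha ^ beta)(u, v) = alpha(u) beta(v) - alpha(v) beta(u)."""
    return lambda u, v: alpha(u) * beta(v) - alpha(v) * beta(u)

dx = lambda v: v[0]
dy = lambda v: v[1]

dx_wedge_dy = wedge(dx, dy)

u, v = (1.0, 2.0), (3.0, 4.0)
area = dx_wedge_dy(u, v)      # 1*4 - 3*2 = -2.0, a signed area
flipped = dx_wedge_dy(v, u)   # +2.0: swapping the vectors flips the sign
```

The antisymmetry under swapping u and v is the "alternating" part, and it is the same formal behavior whether the inputs are 1 forms or vectors.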
In post #81, Tom asked for the proof of lemma 3.2, that all 2 forms in R^3 are products of 1 forms. I have explicitly proved this in the most concrete way in post #66, by simply writing down the factors in the general case.
In another post, in answer to a question of Gza's, I have written down more than one solution to each factorization, proving the factors are not unique.
Also in post #81, Tom asked about solving ex 3.18. What about something like this?
Intuitively, a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Now any plane containing the normal direction to that target plane projects onto a line in it; hence any parallelogram lying in such a plane projects to a figure of area zero.
E.g., dx^dy should vanish on any pair of vectors spanning a plane containing the z axis.
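A quick numerical check of that claim (sketch of my own; the particular horizontal vector is an arbitrary choice):

```python
def dx_wedge_dy(u, v):
    """dx^dy on vectors in R^3: the signed area of the projection of the
    parallelogram they span onto the xy-plane."""
    return u[0] * v[1] - u[1] * v[0]

# A plane containing the z-axis is spanned by e3 and a horizontal vector.
e3 = (0.0, 0.0, 1.0)
w = (2.0, 5.0, 0.0)   # arbitrary horizontal direction (my choice)

# e3 has vanishing x and y components, so the projected parallelogram
# collapses onto a line and its area is zero.
area = dx_wedge_dy(w, e3)   # 2*0 - 5*0 = 0.0
```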
Notice that when brainstorming I allow myself the luxury of being imprecise! There are two sides to the brain, the creative side and the critical side; one should not live exclusively on either one.