A Geometric Approach to Differential Forms by David Bachman

  • #101
mathwonk said:
have you read post 98?

Not yet, but I will.

I apologize if my comments are not of interest. I am stuck between trying to be helpful and just letting my own epiphanies flow as they will.

No, your comments are very much of interest. I'm glad you're making them, and I'm glad that they will be preserved here so that we can go over them at leisure later. But right now, the clock is ticking for us. We are preparing to present some preliminary results to the faculty at our school. Basically the ladies (Melinda and Brittany, who has been silent in this thread so far, but she has been reading along) will be presenting the rules of the calculus, why it is advantageous, and a physical application (Maxwell's equations). The centerpiece of the presentation will be the same as the centerpiece of the book: the generalized Stokes theorem.

Once the presentation to the faculty is done, we will have 2 weeks until the conference. During that time we will get back to your comments.

I appreciate your patience.

That's what I should be saying to you!
 
  • #102
Tom Mattson said:
So the line l is the line that is parallel to the vector [w_2,w_3,w_1].
As I said, l is the (or rather, any) line containing [w_2,w_3,w_1], not parallel to it. Actually, since the plane spanned by two vectors passes through the origin (and since a plane is a subspace if and only if it passes through the origin), you can choose the line parallel to that vector, but this seems like more work.

\omega = \omega _1 dx \wedge dy + \omega _2 dx \wedge dz + \omega _3 dy \wedge dz

\omega(A, B) = \omega _1 (a_1b_2 - b_1a_2) + \omega _2 (a_1b_3 - b_1a_3) + \omega _3 (a_2b_3 - b_2a_3)

= p_3(a_1b_2 - b_1a_2) - p_2(a_1b_3 - b_1a_3) + p_1(a_2b_3 - b_2a_3)

= \det (P A B)

So P = (p_1, p_2, p_3) = (\omega _3, -\omega _2, \omega _1). (I believe you have the above, or something close, in your post).

If we choose a line containing P, then any pair of vectors A, B that span a plane containing that line will also have to contain P. Then {P, A, B} is dependent, so the determinant is 0. Therefore it is sufficient (and easier) to choose a line containing P. The line parallel to P may not contain P (if the line doesn't pass through the origin), and hence the plane containing the line may not contain P, and hence the set {P, A, B} may not be dependent, so the determinant may not be zero, and so \omega (A, B) may not be zero. The claim that the plane containing the line parallel to P also contains P can be made, but requires (a very little) more proof. You know that the line parallel to P, parametrized by t, contains points for t=0 (let's call it P0) and t=1 (P1). So the plane contains these two points. Now you know that P1 - P0 = P. Since the plane in question is a subspace, it is closed under addition and scalar multiplication, and since it contains the line, it contains P1 and P0, and hence P1 - P0, and hence P.

So anyways, you have it right, and if you want to choose a line parallel to P, you may want to throw in that extra bit that allows you to claim that P is in the plane. One more remark: You have A and B in R³, and C in the tangent space. It seems as though you should have them all in R³, or all in the tangent space.
 
  • #103
tom, thank you very much!

the one geometric thing i added recently may be too far along to be useful to your students but it addresses the geometry of whether a 2 form is or is not a product of one forms, in R^4.

the answer is that 2 forms in R^4 form a vector space of dimension 6, and in that space the ones which are products of one forms form a quadratic cone of codimension one.
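in case anyone wants to see that cone concretely: if i have the computation right, the quadratic condition on w = sum over i<j of w_ij dx_i^dx_j is just w^w = 0, i.e. w_12 w_34 - w_13 w_24 + w_14 w_23 = 0. here is a throwaway python sketch of the test; the two sample forms are arbitrary choices of mine, not anything from dave's book.

Code:
# a 2 form on R^4 is stored as a dict of its six coefficients w[(i,j)], i < j
def pluecker(w):
    # this is (up to a factor of 2) the coefficient of dx1^dx2^dx3^dx4 in w^w
    return w[(1,2)]*w[(3,4)] - w[(1,3)]*w[(2,4)] + w[(1,4)]*w[(2,3)]

def is_product_of_one_forms(w, tol=1e-12):
    return abs(pluecker(w)) < tol

# dx1^dx2 + dx3^dx4 is not a product of one forms:
w_a = {(1,2): 1, (1,3): 0, (1,4): 0, (2,3): 0, (2,4): 0, (3,4): 1}
# (dx1 + dx3)^(dx2 + dx4) = dx1^dx2 + dx1^dx4 - dx2^dx3 + dx3^dx4 is:
w_b = {(1,2): 1, (1,3): 0, (1,4): 1, (2,3): -1, (2,4): 0, (3,4): 1}

print(is_product_of_one_forms(w_a), is_product_of_one_forms(w_b))  # False True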

I think I also have the answer to the geometric question of what it means to add two 2 forms in R^3, both of which are products of one forms, i.e. to add two parallelograms.


i.e. take the planes they span, and make them parallelograms in those planes, sharing one side.

then take the diagonal of the third side of the parallelepiped they determine, and pair it with the shared side of the two parallelograms.

maybe that is the parallelogram sum of the two parallelograms? at least if the two parallelograms are rectangles?

ok i know your students do not have time for this investigation, but i am trying to throw in more geometry.

of course i agree with you, the geometry is a little unnatural.

these suggestions are not worked out on paper but just in my head on the commute home from work, but they gave me some pleasure. and i had your students in mind, maybe at some point some will care about these comments.

best,

roy
 
  • #104
Tom, here are a few more comments on how to possibly convince skeptics of the value of differential forms.

These are based on the extreme simplification of the various stokes, greens, gauss theorems as stated in dave's book.

The point is that when a result is simplified we are better able to understand it, and also to understand how to generalize it, and to understand its consequences.

I also feel that you sell the power of some tool more effectively if you give at least one application of its power. I.e. not just simplifying statements but applying those simpler statements to prove something of interest. hence in spite of the demands on the reader I will sketch below how the insight provided by differential forms leads to a proof of the fundamental theorem of algebra.

(I actually discovered these standard proofs for myself while teaching differential forms as a young pre PhD teacher over 30 years ago, and taught them in my advanced calc class.)

It is of course true that every form of stokes theorem, in 3 dimensions and fewer, has a classical statement and proof.

But I claim none of those statements clarify the simple dual relationship between forms and parametrized surfaces.

i.e. in each case there is an equation between integrals: one thing integrated over a piece of surface [or curve or threefold] equals something else integrated over the boundary of the surface [or curve or threefold].

But in each case the "something else" looks different, and has a completely different definition. i.e. grad(f) looks nothing like curl(w), nor at all like div(M).

It is only when these objects (functions, one forms, two forms, three forms) are all expressed as differential forms that the three operations grad, curl, div all look the same, i.e. simply the exterior derivative "d".

then of course stokes theorem simply says <dS,w> = <S, dw>.
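(if the students want one concrete instance of <dS,w> = <S, dw> to play with before the talk, here is a tiny python experiment; the form w = x dy on the unit disk D is just my arbitrary choice, and both sides come out to the area pi.)

Code:
import numpy as np

# boundary side: integrate w = x dy over the unit circle, x = cos t, y = sin t
t = np.linspace(0, 2*np.pi, 200000, endpoint=False)
dt = t[1] - t[0]
boundary = np.sum(np.cos(t) * np.cos(t)) * dt      # x times dy/dt, summed

# interior side: integrate dw = dx^dy over the unit disk, i.e. its area
xs = np.linspace(-1, 1, 2001)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)
interior = np.sum(X**2 + Y**2 <= 1) * h**2

print(boundary, interior, np.pi)                   # all close to 3.14159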


Now that is clear already from what is in the book. But once this is done, then forms begin to have a life of their own, as objects which mirror surfaces, i.e. which mirror geometry.

I.e. this reveals the complete duality or equality between the geometry of parametrized surfaces S, and differential forms w. There is a gain here because even though taking boundary mirrors taking exterior derivative, what mirrors exterior multiplication of forms? I.e. on the face of them, forms have a little more structure than surfaces, which enables calculation a bit better.

Eventually it turns out that multiplication of forms mirrors intersection of surfaces, but this fact only adds to the appeal of forms, since they can then be used to calculate intersections.

Moreover, who would have thought of multiplying expressions like curl(w) and grad(f) without the formalism of forms?

Already Riemann had used parametrized curves to distinguish between surfaces, and essentially invented "homology"; the duality above reveals the existence of a dual construction, "cohomology".

I.e. if we make a "quotient space" from pieces of surfaces, or of curves, we get "kth homology", defined as the vector space of all closed parametrized pieces of k dimensional surfaces, modulo those which are boundaries.

this object measures the difference between the plane (where it is zero) and the punctured plane (where it is Z), because in the latter there exists a closed curve which is not the boundary of a piece of parametrized surface, namely the unit circle. Then a closed curve represents n if it wraps n times counterclockwise around the origin.

This difference can be used to prove the fundamental theorem of algebra, since a polynomial can be thought of as a parametrizing map. Moreover a globally defined polynomial always maps every closed curve onto a parametrized curve that IS the boundary of a piece of surface: namely, if C is the boundary of the disc D, then the image of C bounds the image of D!


But we know that some potential image curves, like the unit circle, are not boundaries of anything in the complement of the origin. Hence a polynomial without a zero cannot map any circle onto the unit circle one to one, nor onto any closed curve that winds around the origin.

Hence if we could just show that some circle is mapped by our polynomial onto such a curve, a curve that winds around the origin (0,0), it would follow that our polynomial does not map entirely into the complement of (0,0). I.e. that our polynomial must "have a zero"!

So it all boils down to verifying that certain curves in the punctured plane are not boundaries, or to measuring how many times they wind around the origin. How to do this? How to do it even for the simple unit circle? How to prove it winds once around the origin?

Here is where the dual object comes in. i.e. we know from greens theorem or stokes theorem or whatever you want to call it, that if w is a one form with dw = 0, then w must have integral zero over a curve which is a boundary.

Hence the dual object, cohomology, measures the same phenomena, as a space of those differential forms w with dw = 0, modulo those forms w which themselves equal dM for some M.

Hence, how to see why the unit circle, does wind around the origin?

Answer: integrate the "angle form" "dtheta" over it. if you do not get 0, then your curve winds around the origin.

here one must realize that "dtheta" is not d of a function, because theta is not a single valued function!

so we have simultaneously proved that fact.
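(a throwaway numerical illustration of that recipe, in python: the "angle form" is (x dy - y dx)/(x^2 + y^2), and its integral over a closed curve missing the origin, divided by 2 pi, is the winding number. the curve below, the unit circle traversed n = 3 times, is an arbitrary example of mine.)

Code:
import numpy as np

def winding_number(x, y, dxdt, dydt, dt):
    # integral of (x dy - y dx)/(x^2 + y^2) over the curve, divided by 2 pi
    return np.sum((x*dydt - y*dxdt) / (x**2 + y**2)) * dt / (2*np.pi)

n = 3
t = np.linspace(0, 2*np.pi, 200000, endpoint=False)
dt = t[1] - t[0]
x, y = np.cos(n*t), np.sin(n*t)
dxdt, dydt = -n*np.sin(n*t), n*np.cos(n*t)
print(winding_number(x, y, dxdt, dydt, dt))        # very close to 3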

anyway, this is taking too long.

but the solid angle form, integrated over the 2 sphere, also proves that the 2 sphere wraps around the origin in R^3, and proves, after some argument, that there can be no never-zero smooth vector field on the sphere, i.e. that you cannot comb the hair on a billiard ball.
 
  • #105
Hey all,

I have been going through the book and following the very interesting discussion here. David, I definitely fall into the category of people who like to learn things in a visual way, so I am finding your book to be a nice introduction to the subject. (As for my math background, btw, I majored in electrical engineering as an undergrad and graduated in 1993 -- since then I have been in the medical field, so I'm a bit rusty! :smile: )

As time permits I may join in the discussion. For now I thought I'd post something on this:

mathwonk said:
for example if N and M are anyone forms at all

N^M = N^(N+M) = N^(cN+M) = (cM+N)^M, for any constant c.

In keeping with the spirit of the geometric interpretation, I was inspired when I got to mathwonk's post to make a powerpoint visualization to demonstrate
N^M = N^(cN+M). You can download it from my briefcase at briefcase.yahoo.com/straycat_md in the "differential forms" folder. It's got animations so you have to view it as a "presentation" and then click on the spacebar to see things move (vectors appearing, etc.). Tell me what you think! :)

Regards,

straycat
 
  • #106
hey! i loved that. i did not realize myself why it was true geometrically until i saw your picture! it's just that the area of a parallelogram does not change when you translate one side parallel to itself, keeping it the same length.

cool!
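for anyone who wants an algebraic check to go with straycat's picture, here is a short python sketch; i identify a 1 form N = n1 dx + n2 dy + n3 dz with its coefficient vector, so that (N^M)(A,B) = N(A)M(B) - N(B)M(A), and the particular vectors and the constant c are arbitrary choices of mine.

Code:
import numpy as np

def wedge(N, M):
    # (N^M)(A, B) = N(A) M(B) - N(B) M(A)
    return lambda A, B: np.dot(N, A)*np.dot(M, B) - np.dot(N, B)*np.dot(M, A)

N = np.array([1.0,  2.0, -1.0])    # arbitrary 1 forms
M = np.array([0.5, -3.0,  2.0])
A = np.array([1.0,  0.0,  4.0])    # arbitrary vectors to feed them
B = np.array([-2.0, 1.0,  1.0])
c = 7.3                            # arbitrary constant

print(wedge(N, M)(A, B))           # the same number is printed both times
print(wedge(N, c*N + M)(A, B))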
 
  • #107
Hey everybody!

My advisees, Melinda and Brittany, gave their practice presentation to the faculty on Friday, and they just ate it up. I was thinking that many of them would not have been exposed to forms, and I was right. After leading up to it the ladies showed how quickly the classical versions of Stokes' Theorem and the Divergence Theorem pop right out of the Generalized Stokes' Theorem. They thought it was beautiful.

I'll be returning to this thread with more notes tomorrow.
 
  • #108
congratulations!
 
  • #109
Tom, Melinda, and Brittany: let me add my congratulations as well!

I have a question for you. In your attempts to "sell" differential forms as an area of study, what are the branches of mathematics against which you are competing, or against which you would compare differential forms? I am wondering in particular whether Hestenes' geometric algebra (also called Clifford Algebra, I think) would be one of these "competitors." I guess a way to phrase the question would be: for a given typical application of differential forms, what other branches of mathematics might be used for the same application? (I hope this is not too off the topic of David's book.)

David Strayhorn
 
  • #110
Yea this is a thorny issue of notation and the war still rages in specialist circles.

As a physicist I was very interested in Hestenes' work at first, but upon further review it seems a tad rigid. It really boils down to a choice of how much structure you want to have on a manifold without losing all information, e.g. the minimal amount of structure we can place such that we retrieve the good results we know about; at that point philosophy comes into play (as well as potential physics).

Hestenes basically goes with the philosophy that all manifolds are isomorphic in some sense to a vector space and starts his algebra from there, as opposed to the usual covering space method which somewhat a priori picks a notion of coordinates. The cool thing (for a physicist) is that the Dirac operator is instantly promoted to a very natural geometrical object, as fundamental as length.

The problem is tricky and I'd love to start a new thread on the subject with experts more familiar with the problem. I tried to get a category theorist to explain the problems to me, but I must admit a lot of it went way over my head.
 
  • #111
I just wanted to ask about the non-linear forms for Area... How can I generalize the formula in the work there for finding the area of the boundary of a 3D manifold (agree with me that the boundary of a 3D manifold is 2-dimensional?):

Area=\sqrt{(dx\wedge dy)^2+(dy\wedge dz)^2+(dx\wedge dz)^2}

which gives the area of a 2-dimensional manifold, with x=x(t,p), y=y(t,p), z=z(t,p)... But what if I have the boundary of a 4D manifold (4 coordinates parametrized by 3 free variables)?
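(To make sure I am reading the 2-dimensional formula correctly, here is a little numerical check I did in Python: the three 2x2 Jacobian minors of a parametrization play the role of dx^dy, dy^dz, dz^dx, and summing the square root of the sum of their squares over a grid for the unit sphere gives 4 pi. The parametrization and the grid sizes are just my arbitrary choices.)

Code:
import numpy as np

# unit sphere: x = sin(p) cos(t), y = sin(p) sin(t), z = cos(p)
n_p, n_t = 1000, 2000
ps = (np.arange(n_p) + 0.5) * np.pi / n_p            # midpoints in [0, pi]
ts = (np.arange(n_t) + 0.5) * 2*np.pi / n_t          # midpoints in [0, 2 pi]
p, t = np.meshgrid(ps, ts, indexing='ij')

xp, xt = np.cos(p)*np.cos(t), -np.sin(p)*np.sin(t)   # partial derivatives of x
yp, yt = np.cos(p)*np.sin(t),  np.sin(p)*np.cos(t)   # partial derivatives of y
zp, zt = -np.sin(p),           np.zeros_like(t)      # partial derivatives of z

dxdy = xp*yt - xt*yp                                 # pullback of dx^dy
dydz = yp*zt - yt*zp                                 # pullback of dy^dz
dzdx = zp*xt - zt*xp                                 # pullback of dz^dx

dA = (np.pi / n_p) * (2*np.pi / n_t)
area = np.sum(np.sqrt(dxdy**2 + dydz**2 + dzdx**2)) * dA
print(area, 4*np.pi)                                 # both about 12.566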
 
  • #112
Finals week is wrapping up, and the girls and I are going to get back to doing more work on this right afterwards. Their presentation at the http://www.skidmore.edu/academics/mcs/hrumc.htm went very well. They were among the best of the day, which is pretty amazing considering that this was their first speaking engagement.

We also got to hear the keynote speaker, Ken Ribet, talk about Fermat's last theorem. But that's for another thread.

Sorry for the delay, and see you later this week.
 
  • #113
to straycat, i confess i am a little puzzled by the question, but perhaps it is only because it has a "what is this good for" sound to me.

I.e. there are only a few natural constructions possible in mathematics, starting from a given amount of data, and one needs to know all of them.


I.e. starting from a differentiable manifold, almost the only construct possible is the tangent bundle. then what more refined constructions can be made? one can take sections of it, dualize it, and perform the various multilinear constructions on it, e.g. alternating, or symmetric.

but that's about it.


not to know about any of these, such as sections of the exterior powers of the dual bundle, (i.e. differential forms), would seem to be folly.

I.e. I cannot imagine an argument for NOT knowing about differential forms, and clifford algebras too for that matter.

It's not like there's a huge amount of constructions out there and you only need one. There are only a few useful constructions that anyone has been able to think of, and you need them all to understand the objects of study.

It's big news when anyone thinks of a new one, like moduli spaces of manifolds or bundles on manifolds, and related invariants like characteristic classes or gauge theory.

but that's just a mathematician talking.

Suppose you want to understand a ring. what do you look at? well you could ask how many elements it has, quite interesting if it is finite, not at all otherwise.

Then you could ask about its group of units, whether it is commutative, whether it embeds in a field, what its relation is to its "prime ring", i.e. the smallest subring containing 1 (i.e. dimension as a vector space if a field, or transcendence degree); prime elements versus irreducible elements, possible uniqueness of factorization into primes, structure of its ideals; then you could ask what its various modules are like: are they all free? what resolutions do they admit, i.e. their projective dimension? representations; then their set of prime ideals and the geometric structures possible on these, such as spectrum, Zariski topology, Krull dimension, components, and possible mappings to or from standard rings like polynomial rings.

what else? there is really a limited amount of interesting constructions possible. one should not have to argue in favor of learning something about them. i guess the only argument is that life is finite, but most of us have some spare time. that's why we post here on PF.

The big excitement about Wiles' work on FLT was not that he solved it, but that he invented some new tools that other people think they can also use to solve new problems and push matters further. That's why a whole generation of young number theorists jumped with glee on his work and began studying it eagerly.

Useful tools are all too rare. we should treasure them and contemplate them when we get the chance. Are there really people out there saying, "well i know differential forms have been around for decades, they are the basic tool for defining fundamental invariants like deRham cohomology, they have a huge literature devoted to them, are part of the accepted language of manifolds by all mathematicians, and physicists like John Archibald Wheeler used them in the standard text on gravitation, but are they really important enough for me to learn about?"
 
  • #114
mathwonk said:
It's not like there's a huge amount of constructions out there and you only need one.

Well the main motivation for my question is to try to understand to what extent and in what way tensor analysis, differential forms, and Clifford algebras are different, and to what extent they are minor variations on the same thing.

To make an analogy: there are multiple formulations of quantum mechanics [1], such as wave mechanics, the matrix formulation, Feynman's path integral, etc etc. You could argue that any practicing physicist should know all of them, but I think that most do not. So it's worthwhile to develop arguments for why they should spend the time to do so.

mathwonk said:
i guess the only argument is that life is finite, but most of us have some spare time. that's why we post here on PF.

Well, don't underestimate the "life is short" argument! :wink: I'm not a mathematician by trade, so most of my time is spent on other things. I could be watching Star Wars right now. :cool:

David

[1] Styer et al., "Nine Formulations of Quantum Mechanics," Am. J. Phys. 70 (3), 288.
 
  • #115
straycat said:
I have a question for you. In your attempts to "sell" differential forms as an area of study, what are the branches of mathematics against which you are competing, or against which you would compare differential forms? I am wondering in particular whether Hestenes' geometric algebra (also called Clifford Algebra, I think) would be one of these "competitors." I guess a way to phrase the question would be: for a given typical application of differential forms, what other branches of mathematics might be used for the same application? (I hope this is not too off the topic of David's book.)

I have no idea whether it's been brought up or not. But, the example I think of when reading your question, is that of Maxwell's Equations. It is obviously entirely possible to study them without any knowledge of differential forms. However, if you do have the machinery of forms behind you, you can rewrite the equations extremely succinctly. If I recall correctly, it boils down to two: dF=0 and d^*F=0. The extra bonus of this is that then one can study Maxwell-like forms on other manifolds besides E^3.
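(For concreteness, with one common choice of signs and units (taking c = 1), one can write

F = B_x dy \wedge dz + B_y dz \wedge dx + B_z dx \wedge dy + (E_x dx + E_y dy + E_z dz) \wedge dt.

Then dF = 0 unpacks to \nabla \cdot B = 0 and \nabla \times E + \partial B / \partial t = 0, while the source-free equation d\star F = 0 unpacks to \nabla \cdot E = 0 and \nabla \times B - \partial E / \partial t = 0, up to the sign conventions built into \star.)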

The other example is that of symplectic and contact geometries, which of course wouldn't exist without the use of forms. Now, this is a mathematician writing here whose work lies within the realm of this geometry. So, it's important to me. And apparently to a few physicists out there too.

It's a bit dated, but Harley Flanders' text (Differential Forms with Applications to the Physical Sciences) gives several examples of how forms can be used in various parts of science and mathematics.
 
  • #116
up until this morning i would not have known a clifford algebra if it spoke to me, but while doing my exercises and lying down, i perused artin's geometric algebra and read the definitions, lemmas and consequences, over about 15 minutes, since life is finite. From that fairly innocent acquaintance, it seems to me they are a tool for studying the structure of groups of linear transformations which preserve various inner products.

For instance the group of "regular" positive Clifford elements leaving the original space invariant maps onto the group of positive inner product preserving linear transformations of the original space, with kernel the nonzero elements of the underlying field.

This gives them applications to understanding the Lorentz group of rotations of 4 dimensional space time in special relativity which do not interchange past and future.

From this brief perspective, I would say many physicists should know about them, but that their interest is vastly more restricted than that of the very general and flexible tool of differential forms, which everyone who does calculus can benefit from. In particular anyone who wants to study general as opposed to only special relativity seems destined to require differential forms.
 
  • #117
ALFLAC! [why is this not sufficient? does the hierarchy here think us unable to communicate with a single word?]
 
  • #118
"but that their interest is vastly more restricted than that of the very general and flexible tool of differential forms, which everyone who does calculus can benefit from. In particular anyone who wants to study general as opposed to only special relativity seems destined to require differential forms."

That's the problem: if you ask Hestenes and the Geometric Algebra people, they will tell you differential forms are a subset of the more general Clifford algebra construction they use.

That is not, however, how I learned it, and why it's somewhat confusing. For instance, typically in physics Clifford algebras primarily arise when you want to stick a spin geometry (read: spinor bundles) on a manifold. This is topologically restricting from the get-go: amongst other things you need a choice of complex structure, and I think the other condition is that the second Stiefel-Whitney class is identically zero.

I guess it just means I don't understand Geometric Algebra, because not only is their definition of differential forms/manifolds different from what I learned and how I use it daily, it also seems their 'Clifford algebras' are somewhat different from what I learned. For instance, one second of googling gives 4 camps:
http://www.ajnpx.com/html/Clifford/4CliffordCamps.html

I asked a math professor about this the other day, and he babbled something (he was clearly confused too) about how they are trying to generalize cross products and how their construction is really only good in dimensions 3 and 4.
 
  • #119
I tried some of those links but they sound like crackpots to me, and I do not want to waste any more time pursuing reading their stuff. If anyone seriously believes these guys have made differential forms obsolete, fine. I cannot help further. (math professor talking here.)
 
  • #120
Lol, I thought so too. They sound too grandiose, with huge claims etc.

However, serious people take them seriously. Hestenes is at Cambridge, and he has managed to convince quite a few physicists to write books on his approach, etc.

Go figure.
 
  • #121
well i noticed he is at cambridge, but he still seemed to be claiming to rewrite the whole mathematical basis of physics so i figured he is most likely a nutcase anyway.

of course we could be wrong. i mean i acknowledge that i also am a pod person masquerading as a normal human being.
 
  • #122
Is that David Hestenes, the guy who was at the "Department of Physics and Astronomy, Arizona State University, Tempe, Arizona"...?

I've got a lecture by him from the 1996 "Fourth International Conference on Clifford Algebras and Their Applications to Mathematical Physics", Aachen, Germany, called "Spinor Particle Mechanics".

Back then he didn't seem to be a crackpot. He's published in peer-reviewed journals. I don't know what happened in between; there are 9 years, after all...

Daniel.
 
  • #123
Haelfix said:
That is not, however, how I learned it, and why it's somewhat confusing. For instance, typically in physics Clifford algebras primarily arise when you want to stick a spin geometry (read: spinor bundles) on a manifold. This is topologically restricting from the get-go: amongst other things you need a choice of complex structure, and I think the other condition is that the second Stiefel-Whitney class is identically zero.

I guess it just means I don't understand Geometric Algebra, because not only is their definition of differential forms/manifolds different from what I learned and how I use it daily, it also seems their 'Clifford algebras' are somewhat different from what I learned. For instance, one second of googling gives 4 camps:
http://www.ajnpx.com/html/Clifford/4CliffordCamps.html

You all will have to forgive me for my lame answer to the posed question. I didn't realize that the questioner was asking the question from such a sophisticated point of view. I was not aware that anyone disagreed with how to define Clifford algebras, et al., having myself assumed that it was all decided already. Looking at my copy of Spin Geometry, I think that maybe the idea is to create spin-like structures on a broader class of manifolds besides those with zero 2nd Stiefel-Whitney classes.

Haelfix said:
I asked a math professor about this the other day, and he babbled something (he was clearly confused too) about how they are trying to generalize cross products and how their construction is really only good in dimensions 3 and 4.

Since any odd-diml. complex projective space (among other higher dimensional creatures) is spin, maybe he was referring to the Seiberg-Witten equations which use spin geometry (and hence Clifford algebras) but seem restricted to the 4-diml. case.


I really do find it hard to believe though, that differential forms will become completely obsolete. Technically, Riemannian geometry has replaced calculus, but you still need the basic 1-diml. real calculus to actually do anything.
 
  • #124
here is my perspective on "new" algebras, as derived from the old fashioned education i received in the 60's and currently visible in books such as lang's algebra.

an (associative) "algebra" A (with identity), over a ring R, is an abelian additive group with an associative bilinear multiplication, for which an element called 1 acts as the identity, equipped with a ring map from R to A, "preserving identities".


Given any module M over R there is a universal such object T(M) called the tensor algebra of M over R. There is always a module map from M into T(M), and the image generates T(M) as an algebra.

If M is free of rank s over R, then T(M) is a non commutative polynomial ring over R generated by s "variables", which can be chosen to be any s free generators of M as a module.

The beauty of this object is, it contains in its DNA the data of all possible such algebras over R. I.e. if B is any associative R algebra with identity, equipped with a module map M-->B whose image generates B over R, then there is a unique surjective R algebra map T(M)-->B such that the composition M-->T(M)-->B equals the given map M-->B.

Hence the "new" algebra B, is merely a quotient T(M)/I, of the universal algebra T(M) by some ideal I. In this sense there are no new algbras of this type, as they are all constructed out of T(M).


For example, if S(M) is the "symmetric algebra" of M over R, which just equals the usual commutative polynomial algebra over R, with algebra generators or "variables" equal to the module generators of M, then S(M) = T(M)/I where I is the 2-sided ideal generated by elements of the form x \otimes y - y \otimes x.

and if E(M) is the exterior algebra of M over R (whose elements are linear combinations of wedge products of things like dx, dy, dz, when dx, dy, dz are generators of M over R), then E(M) is just the quotient of T(M) by the ideal generated by elements which contain repeated factors like x \otimes x.


Now the usual definition of a Clifford algebra is that it is an associative algebra with identity, built on a vector space M over a field R, plus a quadratic form q ("inner product"), as follows: the algebra C(M) is equipped with a module map M-->C(M) such that the image of the element x, in C(M), has square equal to q(x).1. I.e. if x is in M, and q(x) is its "squared length" under the form q, then in C(M) we have x^2 = q(x).1. And the elements of M generate C(M) as an algebra over R. Moreover C(M) is universal for all such algebras, i.e. every other one is a quotient of C(M).

But in particular C(M) is an associative algebra generated by M. Hence there is a unique surjective R algebra map T(M)-->C(M) realizing C(M) as a quotient of the form T(M)/I for some unique ideal I in T(M), containing elements of the form x \otimes x - q(x).1, and presumably generated by these.
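(for the computationally minded, here is a bare-bones python sketch of exactly this quotient, for M = R^3 with the standard form q: basis blades are encoded as bitmasks and the relation x.x = q(x).1 is wired directly into the product. this is only an illustration of the definition above, not any particular library's interface.)

Code:
def reorder_sign(a, b):
    # sign picked up when the generators of blade b are moved past those of blade a
    a >>= 1
    s = 0
    while a:
        s += bin(a & b).count("1")
        a >>= 1
    return -1 if s % 2 else 1

def blade_product(a, b, metric):
    sign = reorder_sign(a, b)
    common, i = a & b, 0
    while common:                      # contract repeated generators via q
        if common & 1:
            sign *= metric[i]
        common >>= 1
        i += 1
    return sign, a ^ b

def clifford_mul(x, y, metric=(1, 1, 1)):
    # x, y are multivectors stored as {blade_bitmask: coefficient}
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            s, blade = blade_product(a, b, metric)
            out[blade] = out.get(blade, 0) + s * ca * cb
    return {k: v for k, v in out.items() if v != 0}

e1, e2 = {0b001: 1}, {0b010: 1}
v = {0b001: 2, 0b010: 3}                            # the vector 2 e1 + 3 e2
print(clifford_mul(v, v))                           # {0: 13}, i.e. q(v).1 = (4 + 9).1
print(clifford_mul(e1, e2), clifford_mul(e2, e1))   # e1e2 and minus e1e2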

Now I fully admit to being a novice here, but i fail to see how anyone can fail to deduce from this that the key construction to understand in all of this is the tensor product.

Moreover, as the Clifford algebra involves extra structure which is not always present, namely the form q, it is clearly a more special derivative of T(M) than is the exterior algebra E(M), i.e. differential forms.

Furthermore, what "new" algebras are possible? unless they are non associative. (and mathematicians have also studied non associative algebras but i have not myself.)

Anyone claiming to construct a new associative algebra generated by elements of a module M, makes one wonder if they are unaware of the basic universal constructions that have been on the scene and even dominated it since the 1950's.

Of course this all concerns only the local, i.e. pointwise, side of the story. The usefulness of these constructs to physicists should be influenced, perhaps decidedly, by their global manifestations in physics.


Notice that even if I am completely wrong, I have purposely given you enough data to decide for yourself.

If someone in a competing camp wishes to share more sophisticated and newer definitions for these concepts, I assume we will all be grateful.

Oh yes, and Riemannian geometry cannot possibly replace calculus, as Riemannian geometry also involves an inner product which is unnecessary for intrinsic ideas of calculus.
 
  • #125
OK, I have looked on the webpage http://modelingnts.la.asu.edu/GC_R&D.html and in particular perused the short simplified version of GA, intended for high school teachers. there is of course nothing there which is new in the mathematical sense, but some which would seem new to high school students (although I had some of this material in second grade! from a student teacher experimenting on us with trigonometry), and mr hestenes' goal there is to advocate incorporating some well known ideas of vector algebra, exterior algebra, and quadratic forms into high school geometry, which he calls geometric algebra.

so he is not a real crackpot since he advocates something both useful and correct, and which he also seems to understand; but he is sort of a missionary, and hence comes on like a crackpot by advertising his mission in overly glowing terms, claiming he is going to revolutionize physics education and provide the universal answer to all communication problems between the two sciences, and harking back to the golden days of the 19th century, and so on.

this makes his non technical stuff sound a little fishy. but there is a similar movement by people who dress up like patch adams and try to sell calculus to reluctant students with books called "streetwise calc for dummies" and so on, and they are real mathematicians who have done some people some good, or at least some of my friends think so.

so i for one am glad mr hestenes is out there pumping for more use of vector algebra in high school and college. and although this stuff did get published in am. j. physics it seems, it would be hard for me to believe it occurs in any research math journals. but i have a finite amount of energy and interest to devote to this type of thing. but i say in this case, more power to him.

i try to do exactly the same type of thing in my teaching, i.e. take known ideas, which are however not having the impact they should have at lower levels, and force them in there, hopefully after having understood them myself that is. i do it right here on this forum all the time. i am not talking about anything mathematical here that is not extremely well known to most practicing mathematicians. my very modest contribution to things like the discussion of clifford algebras is just to pick up a book not everyone may have access to, read it quickly as a mathematician, and report back here to the best of my ability.
 
  • #126
mathwonk said:
Oh yes, and Riemannian geometry cannot possibly replace calculus, as Riemannian geometry also involves an inner product which is unnecessary for intrinsic ideas of calculus.

As I'm sure you know, standard calculus on the real numbers uses the Euclidean norm to define convergence of limits of sequences (among others), which can be derived from the Euclidean inner product, although I suppose one could develop most if not all of standard calculus by defining any old Hausdorff topology on R and defining convergence of sequences from there. Derivatives might end up looking a little strange, if the topology is...

Please take no offense. I was just being cheeky.
 
  • #127
are you under the impression that all norms arise from inner products?

i.e. that all banach spaces are hilbert spaces?
 
  • #128
mathwonk said:
are you under the impression that all norms arise from inner products?

i.e. that all banach spaces are hilbert spaces?

Of course not. I just know that the standard distance norm on R can be seen (if one wants to) as coming from the (albeit rather trivial) inner product on R.
 
  • #129
Back to business!

OK school's out, and I'm back for real. Let's finish this book by the end of the summer! Hoo-rah!

I'd like to pick up from where we left off in the book: Exercise 3.18. I posted a solution, to which AKG commented. I haven't looked at his comments in a while, but I do have questions on them. Naturally anyone is free to answer.

Here's my solution to the Exercise.

Tom Mattson said:
Exercise 3.18
Let \omega=w_1dx \wedge dy +w_2dy \wedge dz +w_3dz \wedge dx.
Let A=<a_1,a_2,a_3> and B=<b_1,b_2,b_3> be vectors in \mathbb{R}^3.
Let C=[c_1,c_2,c_3] be a vector in T_p\mathbb{R}^3 such that C=k_1A+k_2B. So the set {A,B,C} is dependent. That implies that det|C A B|=0.

Explicitly:

det [C A B]=\left |\begin{array}{ccc}c_1&c_2&c_3\\a_1&a_2&a_3\\b_1&b_2&b_3\end{array}\right|

det [C A B]=c_1(a_2b_3-a_3b_2)-c_2(a_1b_3-a_3b_1)+c_3(a_1b_2-a_2b_1)

Now let \omega act on A and B. We obtain the following:

\omega (A,B)=w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)

Upon comparing the expressions for det [C A B] and \omega (A,B) we find that \omega (A,B)=0 if w_1=c_3, w_2=c_1, and w_3=c_2. So the line l is the line that is parallel to the vector [w_2,w_3,w_1]. So I can write down parametric equations for l as follows:

x=x_0+w_2t
y=y_0+w_3t
z=z_0+w_1t

AKG responded thusly.

AKG said:
So anyways, you have it right, and if you want to choose a line parallel to P, you may want to throw in that extra bit that allows you to claim that P is in the plane.

I did not use your P though. I used a vector C that is in the plane spanned by A and B. I did that for the purpose of choosing a line l that is parallel to C, so that the plane spanned by A and B is guaranteed to contain l. The only thing I did not determine was the point (x_0,y_0,z_0), but this can be found easily knowing the vector parallel to l and the equation of the plane.

One more remark: You have A and B in R³, and C in the tangent space. It seems as though you should have them all in R³, or all in the tangent space.

That is not consistent with any of the reading thus far. The rest of the chapter discussed forms defined on T_p\mathbb{R}^n that act on vectors in \mathbb{R}^n. Am I misunderstanding something?
 
  • #130
i tried to answer this exercise in post 91, or so, as follows:

"Also in post #81, Tom asked about solving ex 3.18. What about something like this?
a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Hence any plane containing the normal vector to that plane will project to a line in that plane. hence any parallelogram lying in such a plane will project to have area zero in that plane.

e.g. dx^dy should vanish on any pair of vectors spanning a plane containing the z axis.'

does that make any sense?

i have forgotten now but it seems the point is that 2 forms on R^3 are decomposable? reducible? whatever?
 
  • #131
mathwonk said:
i tried to answer this exercise in post 91, or so, as follows:

(snip)

does that make any sense?

Yes, it made sense. It's just that the next few exercises deal with the line that was to be found in 3.18, which is why I wanted an algebraic result. I'll chew on your answer a little longer and see if I can't answer the other questions with it.
 
  • #132
mathwonk said:
are you under the impression that all norms arise from inner products?
Are you under the impression that the norm doesn't arise from an inner product?

From the context presented here, the norm of a vector is the square root of the inner product of the vector with itself. The definition of norm is continuous when ... sorry, but [f(x) = f(x)
 
  • #133
Tom Mattson said:
My next question is for the students:

Would any of you like to show this? Check my notes for how to show linearity and non-linearity (think superposition and scaling).

Without going through all the steps, scaling returns |\omega\wedge\nu(cV_1,V_2)| = |c||\omega\wedge\nu(V_1,V_2)| \not= c|\omega\wedge\nu(V_1,V_2)| for c < 0.

Rev Prez
 
  • #134
pmb phy, the point of my post was that calculus for normed spaces depends only on the norm, hence makes sense in any banach space. and yes, i am under the impression, as are most people, that there exist banach spaces which are not hilbert spaces. i.e. there are norms which do not arise from dot products. (sup norm on continuous functions on [0,1].)

the message is that the derivative is a more basic concept than is the dot product, since the derivative makes perfect sense, with exactly the same definition, in situations where the dot product does not. of course people may disagree, but to me the evidence seems clear. :smile:
 
  • #135
OK, back to my quandary. I feel like I'm on the cusp of finally moving past it, but there is a little nagging detail here.

AKG said to me the following:

AKG said:
One more remark: You have A and B in R³, and C in the tangent space. It seems as though you should have them all in R³, or all in the tangent space.

And I replied as follows:

Tom Mattson said:
That is not consistent with any of the reading thus far. The rest of the chapter discussed forms defined on T_p\mathbb{R}^n that act on vectors in \mathbb{R}^n. Am I misunderstanding something?

On closer inspection of the text, it seems that I was wrong. But it seems as though there is actually a contradiction in the book. The notation <\cdot , \cdot> is explicitly said on p. 48 to denote vectors in a tangent space, and 1-forms on \mathbb{R}^n are said on p. 50 to act on vectors of the form <dx,dy>, which means that they are vectors in the tangent space T_p\mathbb{R}^n. But looking at the diagram at the top of page 53, he plots the vectors V_1 and V_2 at the origin of a set of axes marked with x,y,z. This denotes the space \mathbb{R}^3, no? Well, if a 1-form acts on vectors from T_p\mathbb{R}^3, then I wonder why the axes aren't labeled dx,dy,dz?

OK, so here is what I'd like to know:

Just where do the vectors which are the arguments of a 1-form on T_p\mathbb{R}^n live? Do they live in the tangent space, or in \mathbb{R}^n itself?

And mathwonk: I'm not ignoring your geometric answer to Exercise 3.18. It's just that, as I said, it looks like we need an expression for the line l to move on to the other Exercises.

Boy I can't wait to be done with this chapter.
 
  • #136
i pointed out long ago several of the many imprecisions and errors in this book, such as you are now noticing.

in this case there is no big worry. i.e. there is a natural isomorphism between R^n and any of its tangent spaces. so there is no real problem in identifying one with the other.

of course it is no help in understanding the author's conventions.



according to mr bachman's earlier statements to me, the arguments for what he calls a 1 form are indeed elements of the tangent space.
 
  • #137
The Flanders book, "Differential Forms with Applications to the Physical Sciences", threw me for a loop with this. In the book, he started by referring to differential forms as "vectors". Since the book came highly recommended to me, I began doubting everything that I thought I knew about forms until that point...

He was referring to them as vectors within the dual space (and later making a correspondence between them and vectors in E^3), which indeed they are... but he didn't lay that out until like 50 pages later, and it was unnecessarily confusing.
 
  • #138
look, at every point of R^n, or any manifold, there is a tangent space. a form is a linear function on the tangent space, and a field of forms is a choice of such a linear function on every tangent space. that's all there is to it. whatever language each person uses is only a distraction. just get the idea, then deal with each author's variations in language.


a confusion then is that for R^n the space itself is naturally isomorphic to every tangent space. so what?

if you prefer a book that actually writes down everything correctly and precisely the first time, read spivak instead of bachman.
 
  • #139
mathwonk said:
in this case there is no big worry. i.e. there is a natural isomorphism between R^n and any of its tangent spaces. so there is no real problem in identifying one with the other.

I understand that one space is a carbon copy of the other. If you recall, that was the reason I was moaning about the strange way in which he introduced the basis for T_p\mathbb{R}^2. But in this case there is a problem in identifying one with the other, because two different origins of the "home space" of V_1 and V_2 in Exercise 3.18 result in two different lines, and the line is the answer to the question. And the fact that that answer has to be used in the next 2 exercises makes it even worse.

To be honest, my advisees and I left this behind long ago just to move forward. We've finished all of chapters 4 and 5, and much of chapter 6 (up to Stokes' theorem). We just had no choice but to abandon this because of the deadline of the conference.

if you prefer a book that actually writes down everything correctly and precisely the first time, read spivak instead of bachman.

I would agree with that if you're talking about a course in advanced calculus. But I don't want to give up yet on the idea of a course in forms for college sophomores. But if I were going to suggest a course like that to my Department Chair, I can see now that I would want to invest the time putting it together myself, rather than just using this book.

OK, that's enough of that. I've solved Exercises 3.18 through 3.21. Solutions forthcoming shortly.
 
  • #140
Chapter 3: Forms


Note: All symbols used in Exercises 3.18 through 3.21 have the same meaning.

Exercise 3.18
Let \omega=w_1dx \wedge dy +w_2dy \wedge dz +w_3dz \wedge dx.
Let V_1=<a_1,a_2,a_3> and V_2=<b_1,b_2,b_3> be vectors in T_p\mathbb{R}^3.
Let V_3=<c_1,c_2,c_3> be a vector in T_p\mathbb{R}^3 such that V_3=k_1V_1+k_2V_2. So the set {V_1,V_2,V_3} is dependent. That implies that det|V_3 V_1 V_2|=0.

Explicitly:

det [V_3 V_1 V_2]=\left |\begin{array}{ccc}c_1&c_2&c_3\\a_1&a_2&a_3\\b_1&b_2&b_3\end{array}\right|

det [V_3 V_1 V_2]=c_1(a_2b_3-a_3b_2)-c_2(a_1b_3-a_3b_1)+c_3(a_1b_2-a_2b_1)

Now let \omega act on V_1 and V_2. We obtain the following:

\omega (V_1,V_2)=w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)

Upon comparing the expressions for det [V_3 V_1 V_2] and \omega (V_1,V_2) we find that \omega (V_1,V_2)=0 if w_1=c_3, w_2=c_1, and w_3=c_2. So the line l is the line that contains the vector V_3=<w_2,w_3,w_1>. So I can write down parametric equations for l as follows:

x=w_2t
y=w_3t
z=w_1t
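As a sanity check on that identification, here is a quick numerical verification in Python that \omega(V_1,V_2) equals det[V_3 V_1 V_2] with V_3=<w_2,w_3,w_1>, and hence vanishes on any pair of vectors spanning a plane that contains V_3. The particular coefficients and test vectors are arbitrary.

Code:
import numpy as np

w1, w2, w3 = 1.5, -2.0, 0.75                 # arbitrary coefficients of omega
V3 = np.array([w2, w3, w1])

def omega(A, B):
    return (w1*(A[0]*B[1] - A[1]*B[0])       # w1 dx^dy
          + w2*(A[1]*B[2] - A[2]*B[1])       # w2 dy^dz
          + w3*(A[2]*B[0] - A[0]*B[2]))      # w3 dz^dx

V1 = np.array([1.0, 2.0, 3.0])               # arbitrary test vectors
V2 = np.array([-1.0, 0.5, 2.0])
print(omega(V1, V2), np.linalg.det(np.array([V3, V1, V2])))   # equal

A, B = V3 + 2*V1, V3 - V1                    # a pair spanning a plane through V3
print(omega(A, B))                           # essentially zero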
 
  • #141
Chapter 3: Forms


Exercise 3.19

Let ||V_1 \times V_2|| \equiv A, the area of the parallelogram spanned by V_1 and V_2.

Now look at \omega (V_1,V_2).

\omega (V_1,V_2)= w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)

Recalling that V_3=<w_2,w_3,w_1> we have the following.

\omega (V_1,V_2)=V_3 \cdot (V_1 \times V_2)
\omega (V_1,V_2)=||V_3|| A \cos( \theta ),

where \theta is the angle between V_3 (and therefore l) and the normal V_1 \times V_2. For a fixed area A, this dot product is maximized when \theta = 0, that is, when V_1 and V_2 both make a 90 degree angle with l, and we have our result.

Exercise 3.20

Let N \equiv V_1 \times V_2.

Recalling the action of \omega on V_1 and V_2 from the last Exercise, we have the following.

\omega (V_1,V_2)=V_3 \cdot (V_1 \times V_2)

Noting the definition of N we see that we can immediately identify V_3 with V_{\omega}, and the desired result is obtained.

Exericise 3.21

Start by manipulating the expression given in the Exercise.

\omega= F_x dy \wedge dz - F_y dx \wedge dz + F_z dx \wedge dy
\omega = F_z dx \wedge dy + F_x dy \wedge dz - F_y dx \wedge dz
\omega = F_z dx \wedge dy + F_x dy \wedge dz + F_y dz \wedge dx

I used commutativity of 2-forms under addition to get to line 2, and anticommutativity of 1-forms under the wedge product to get to line 3.

Noting that V_3=<c_1,c_2,c_3>=<w_2,w_3,w_1> (Exercise 3.18) and noting that V_3=V_{\omega} (Exercise 3.20), it can be seen that V_{\omega}=<F_x,F_y,F_z>
 
  • #142
well on the positive side, some people actually learn more, by correcting the errors of an imprecise book, than by plodding through one where all the i's are dotted for you. I think that may be the case here. you seem to be learning a lot.
 
  • #143
Too true. I sometimes hand out fallacious arguments to my students and ask them to find the errors.

Notes on Section 3.5 will be forthcoming shortly, and then we can finally get on to differential forms and integration.

Yahoo!
 
  • #144
Is it safe to say this thread is dead? I'm working through Bachman on my own and the discussion here has been pretty helpful.
 
  • #145
Calculation with differential forms

Tom Mattson said:
Hello folks,

I found a lovely little book online called A Geometric Approach to Differential Forms by David Bachman on the LANL arXiv. I've always wanted to learn this subject, and so I did something that would force me to: I've agreed to advise 2 students as they study it in preparation for a presentation at a local mathematics conference. :eek:

Since this was such a popular topic when lethe initially posted his Differential Forms tutorial, and since it is so difficult for me and my advisees to meet at mutually convenient times, I had a stroke of genius: Why not start a thread at PF? :cool:

Here is a link to the book:

http://xxx.lanl.gov/PS_cache/math/pdf/0306/0306194.pdf

As Bachman himself says, the first chapter is not necessary to learn the material, so I'd like to start with Chapter 2 (actually, we're at the end of Chapter 2, so hopefully I can stay 1 step ahead and lead the discussion!)

If anyone is interested, download the book and I'll post some of my notes tomorrow.


I have a question about the example of the integral presented in Example 3.3 (pages 40-41 of the arXiv version).

He seems to go from dx^dy directly to dr^dt, where r and t parametrize the upper half of the unit sphere, x = r cos t, y = r sin t, z = sqrt(1 - r^2), with r ranging from 0 to 1 and t from 0 to 2 pi.

I don't understand that, it seems to me that dx^dy = r dr ^ dt.

Can anyone help?

Thanks


Patrick
 
  • #146
The extra r is there.

(z^2) dx^dy was transformed to (1 - r^2) r dr^dt.

Regards,
George
 
  • #147
George Jones said:
The extra r is there.

(z^2) dx^dy was transformed to (1 - r^2) r dr^dt.

Regards,
George

Yes, of course...:redface: Thanks

(I simply made the change of variables x,y -> r,t into dx^dy and got r dr^dt. Now I see that his \omega_{\phi(x,y)} calculates the Jacobian which is included automatically in the way I did it. Now I see that he literally meant to replace dx^dy by dr^dt without taking derivatives...that confused me).
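(For what it's worth, here is a quick numerical check in Python that the two ways of writing the integral agree; both Riemann sums come out close to pi/2. The grid sizes are arbitrary.)

Code:
import numpy as np

# (r,t) version: integral of (1 - r^2) r dr dt over [0,1] x [0, 2 pi]
n = 1000
r = (np.arange(n) + 0.5) / n
t = (np.arange(n) + 0.5) * 2*np.pi / n
R, T = np.meshgrid(r, t, indexing='ij')
val_rt = np.sum((1 - R**2) * R) * (1.0/n) * (2*np.pi/n)

# (x,y) version: integral of z^2 = 1 - x^2 - y^2 over the unit disk
xs = (np.arange(n) + 0.5) * 2.0/n - 1.0
X, Y = np.meshgrid(xs, xs, indexing='ij')
inside = X**2 + Y**2 <= 1
val_xy = np.sum((1 - X**2 - Y**2)[inside]) * (2.0/n)**2

print(val_rt, val_xy, np.pi/2)               # all close to 1.5708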

Thanks..

On a related note... I know that I will sound stupid, but I still find it very confusing that the symbols "dx" and "dy" are used sometimes to represent infinitesimals and sometimes to represent differential forms. :eek:

Anyway...
 
  • #148
nrqed said:
On a related note... I know that I will sound stupid, but I still find it very confusing that the symbols "dx" and "dy" are used sometimes to represent infinitesimals and sometimes to represent differential forms. :eek:

Umm... that's on purpose since the one forms dx and dy are defined so that one can do the calculus without all this infinitesimal nonsense.

BTW, whatever is the obsession with infinitesimals? I thought that Bishop Berkeley firmly nailed the last nail into their coffin way back in the 1700s. And Cauchy showed us how to do all of analysis and hence calculus without thinking once about them. Virtually no one that I know of in the research field actually thinks in terms of these. Don't we have enough non-computable numbers to deal with (e.g. the vast majority of irrational numbers) without willfully adding more?
 
  • #149
I thought that Bishop Berkeley firmly nailed the last nail into their coffin way back in the 1700s.
I'm not sure what you mean, but I'm afraid you mean that using infinitesimals can make no sense! But we've had nonstandard analysis since the 1950s, which can be used to put infinitesimals on a perfectly rigorous foundation.
 
  • #150
Hurkyl said:
I'm not sure what you mean, but I'm afraid you mean that using infinitesimals can make no sense! But we've had nonstandard analysis since the 1950s, which can be used to put infinitesimals on a perfectly rigorous foundation.

I'm not sure, but I think that Doodle Bob was referring to these when he wrote:

Doodle Bob said:
Don't we have enough non-computable numbers to deal with (e.g. the vast majority of irrational numbers) without willfully adding more?

Regards,
George
 