A Geometric Approach to Differential Forms by David Bachman

quantumdude
Hello folks,

I found a lovely little book online called A Geometric Approach to Differential Forms by David Bachman on the LANL arXiv. I've always wanted to learn this subject, and so I did something that would force me to: I've agreed to advise 2 students as they study it in preparation for a presentation at a local mathematics conference. :eek:

Since this was such a popular topic when lethe initially posted his Differential Forms tutorial, and since it is so difficult for me and my advisees to meet at mutually convenient times, I had a stroke of genius: Why not start a thread at PF? :cool:

Here is a link to the book:

http://xxx.lanl.gov/PS_cache/math/pdf/0306/0306194.pdf

As Bachman himself says, the first chapter is not necessary to learn the material, so I'd like to start with Chapter 2 (actually, we're at the end of Chapter 2, so hopefully I can stay 1 step ahead and lead the discussion!)

If anyone is interested, download the book and I'll post some of my notes tomorrow.
 
That seems like a gentle enough introduction to differential forms.

I do recommend, though, at least using them to prove the fundamental theorem of algebra, Brouwer's fixed point theorem, or even the nonexistence of a nowhere-zero vector field on the 2-sphere. I taught all these in my advanced calculus class in Ellensburg, Washington in 1972.

let me sketch these:

1) By Stokes' theorem, if the image of a map of I x S^1 (interval cross the circle) into R^2 misses the origin, then the integral of the pullback of the angle form dtheta = (-y dx + x dy)/(x^2 + y^2) is the same over both copies of the circle, {0} x S^1 and {1} x S^1.

Now it is not hard to show that if f is a monic polynomial of degree n, and we choose the radius of our circle large enough, then the homotopy given by H(t,z) = z^n + t(f(z) - z^n), joining z^n to f, misses the origin.

But then the integral of dtheta over the image of the circle under f equals its integral over the image under z^n, which is 2πn.

On the other hand, if there were no root of f inside the circle, then again by Stokes' theorem this integral would be zero. Hence there is such a root.
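Not from the book, but here is a quick numerical sanity check of sketch (1), in Python with numpy (the monic degree-3 polynomial is an arbitrary choice of mine): integrating the pullback of the angle form over a large circle recovers 2π times the degree.

import numpy as np

# Integrate dtheta = (-y dx + x dy)/(x^2 + y^2) over the image of a circle of
# radius R under f; the total is 2*pi*(winding number), i.e. 2*pi*deg(f) for large R.
def angle_form_integral(f, R=10.0, samples=200_000):
    t = np.linspace(0.0, 2 * np.pi, samples)
    w = f(R * np.exp(1j * t))                  # image of the circle under f
    theta = np.unwrap(np.angle(w))             # a continuous branch of the angle
    return theta[-1] - theta[0]                # total change = integral of dtheta

f = lambda z: z**3 - 2 * z + 1
print(angle_form_integral(f) / (2 * np.pi))    # 3.0, up to rounding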

2) This time we integrate the solid angle form over the sphere, observing that it changes sign if we pull back by the antipodal map, sending x to -x. On the other hand, if there were a nowhere-zero tangent vector field on the sphere, we could use it to tell us which direction to flow around the sphere from x to -x, thus getting a homotopy as above that implies the two integrals should be the same.

Since the solid angle form integrates to something like 4π (or at least something non-zero) over the sphere, this is a contradiction.

3) Brouwer's fixed point theorem: If some smooth map of the disk to itself has no fixed point, then it enables us to write down a map of I x S^1 to S^1 which is the identity on {1} x S^1. But then the integral of dtheta around the circle would be zero, and it is not.


My suggestion is that machinery should be built only for a purpose. If you are going to define and belabor the machinery of differential forms and Stokes' theorem, then you should use it for something.
 
Tom, I'm interested; I have the book in my Favorites and have done the exercises of Chapter 2. This is very nice! Much more instructive than the MTW approach.

Just an added note: I recently bought Schrödinger's book Space-Time Structure, and I'm reading that along with this. S. does a masterful intro to tensors and especially densities, so the parallels to Bachman's text are clear. Since workers in GR, etc., commonly switch back and forth, the combination is a very productive one.
 
i suggest re-reading my post after finishing Bachman's book. it could follow the very last section there.
 
Mathwonk, thank you for your suggestions. If you or anyone else thinks that there are some interesting applications that we can investigate before the end of the book, just give a holler.

selfAdjoint said:
Just an added note: I recently bought Schrödinger's book Space-Time Structure, and I'm reading that along with this. S. does a masterful intro to tensors and especially densities, so the parallels to Bachman's text are clear. Since workers in GR, etc., commonly switch back and forth, the combination is a very productive one.

Sounds good, I'll order it.

I'll be posting notes over the next couple of hours. They will include section summaries, solutions to the exercises, and my own questions. I've asked my advisees to sign up at PF so they can ask questions of their own.

I've also added Bachman's name to the thread title. That way Google searches for the book will be more likely to turn up this thread. Could boost membership at PF.
 
Chapter 3: Forms

Section 1: Coordinates for Vectors

This language of differential forms is new to me, so I think it's important to take note of and summarize the important definitions and concepts. My summary of the text is in black, my homework solutions and comments on what I think needs elucidation are in blue, and my questions are in red.

Tangent Spaces
The section begins with an example of a tangent space: the tangent line to a curve C at a point p. The tangent space T_pC of the curve C at the point p is the space in which all of the tangent vectors to C at p live.

Bachman also notes that p is the point at which all of these tangent vectors have their tails. This serves to distinguish T_pC from C in the event that C is a straight line.

Coordinates of Points on Curves and in Planes
Coordinates are described in terms of functions or mappings. For instance, Bachman considers a point p on our curve C whose x coordinate is 5. He explains that what is really meant is that there exists a coordinate function x : C \rightarrow \mathbb {R} such that x(p)=5. Thus the function "eats" points and "spits out" real numbers. Similarly he defines coordinates in the plane P, for which we naturally need 2 functions.

Coordinates of Vectors in Tangent Spaces
Once coordinates on a curve C and in a plane P are defined, the issue of coordinates in T_pP is addressed. Since we are talking about coordinates of vectors in a vector space, the first thing we need is a basis for that space. Bachman "derives" the basis as follows:

\frac {d(x+t,y)}{dt}=\langle 1,0 \rangle

\frac {d(x,y+t)}{dt}=\langle 0,1 \rangle

where (\cdot , \cdot ) denotes a point in P and \langle \cdot , \cdot \rangle denotes a vector in T_pP.

Here is my first question.

I say that Bachman "derives" the basis because it looks so contrived. It is obvious that T_pP is just a carbon copy of \mathbb {R}^2 with a different origin. So why not simply use the well-known fact from linear algebra that a basis for this space is {&lt;1,0&gt;,&lt;0,1&gt;}?[/color]

Now that the basis has been chosen, we write a vector \mathbf{V} \in T_pP as \mathbf{V} = dx\,\langle 1,0 \rangle + dy\,\langle 0,1 \rangle, where dx, dy \in \mathbb{R}.

This represents a conceptual break from the manner in which many calculus books are written. dx and dy are our familiar differentials, which are typically thought of as infinitesimal quantities. Now they are regarded as real-valued coordinate functions on T_pP. The break from the "infinitesimal" conception of dx was foreshadowed on page 39 in Chapter 2.

Illustrative Example
In the example in which we are asked to consider the tangent line to the graph of y=x^2 at the point (1,1), we are given an interpretation of differentials that is not made apparent in most calculus books. He continues with the notion of differentials as coordinate functions by labeling the axes of the coordinate system based at (1,1) with dx and dy, as shown. He presses the point even further by writing down the equation of the tangent line in this coordinate system: dy=2dx, or \frac {dy}{dx}=2.
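To make this concrete for my advisees (my own illustration, not from the book), here is a short sympy sketch: differentiating the coordinate curves reproduces Bachman's basis vectors, and differentiating a curve through (1,1) along the parabola gives a tangent vector whose coordinates satisfy dy=2dx.

import sympy as sp

t, x, y = sp.symbols('t x y')

# Bachman's basis derivation: differentiate the coordinate curves through (x, y)
print(sp.Matrix([x + t, y]).diff(t).T)   # Matrix([[1, 0]])
print(sp.Matrix([x, y + t]).diff(t).T)   # Matrix([[0, 1]])

# the example: move along y = x^2 through p = (1, 1)
c = sp.Matrix([1 + t, (1 + t)**2])
v = c.diff(t).subs(t, 0)                 # velocity vector, living in T_(1,1)R^2
print(v.T)                               # Matrix([[1, 2]]): dx = 1, dy = 2, so dy = 2dx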

This leads to my second question.

I have always read and been taught that \frac {dy}{dx} is not to be thought of as a quotient. This point is usually made when introducing the Chain Rule. But if dx and dy are real-valued functions, then there should be no reason why the derivative could not be considered a quotient. Can any of our more experienced members comment on how the two points of view may be reconciled?

Bachman also mentions that the tangent line that we are interested in is coincident with T_{(1,1)} \mathbb {R}^2.

This leads to my third question.

Why is this line referred to as a tangent space to \mathbb {R}^2? Why is it not referred to as the tangent space to the curve?


Exercise 3.1
My plan is to post all my solutions, but unfortunately I don't know how to draw vectors with LaTeX, so a verbal description will have to do. This exercise is simple enough, so that shouldn't be a problem.

(1) I have a vector whose tail is at (1,-1) with components 1 and 2.
(2) I have a vector whose tail is at (0,1) with components -3 and 1.
 
i have not read the book yet, but the whole point of differentials on a curve is that the derivative IS a quotient of them.

I.e. a differential is a linear function on the tangent space. Since the tangent space to a curve is one dimensional, the space of linear functions is also one dimensional.

thus any two linear functionals are scalar multiples of each other, so their quotient is a scalar. this is not true for differentials on higher dimensional tangent spaces.

I cannot explain why this point of view is prohibited in elementary calculus. perhaps they do not wish to do the work necessary to justify it.
 
well i think you are in for some trouble using this book just because it is free, and i recommend using spivak instead.

anyway, he is not very precise in describing the tangent space Tp(P). it is described more precisely in spivak as {p}xP, so that he does not use the same notation (1,0) and (0,1) for vectors in Tp(P) as for vectors in the disjoint space Tq(P). i.e. he should say {p}x(1,0), etc...

But anyway...

OK further corrections to his sloppiness:

He calls a point of Tp(P) by the name dx(1,0) + dy(0,1), where he says dx and dy are in R. This is not correct, but not too far off. the usual sloppy notation from classical calculus, but wasn't the point here to get things right?

Ok, anyway, he means if v is a vector in Tp(P) then since dx and dy are independent linear functionals on Tp(P), then dx(v) or more precisely dxp(v), is an element of R, so completely precisely, but not too neatly:

he means dxp(v)(1,0)p + dyp(v)(0,1)p is a representation of a point of Tp(P).

you see dx is certainly not an element of R, nor even a linear functional, on Tp(P). rather dx is a function whose value at each point p is a linear functional on Tp(P). so we need some such notation as dx(p) or dxp. but he seems not to want to introduce enough notation to be correct.

I do not know if i have the patience to correct all this, but you probably do not need me to.

I do suggest you are in for an interesting time reading this somewhat careless treatment of the subject however.

But it is not so far wrong as to be impossible, and the point of math is to have fun, so if you like this book, go for it.

i do suggest spivaks calculus on manifolds however for anyone wanting it explained correctly and precisely.
 
Sorry mathwonk, I just now accidentally hit "edit" instead of "quote", so your last post was momentarily replaced by mine. But I put everything back in order.

mathwonk said:
well i think you are in for some trouble using this book just because it is free, and i recommend using spivak instead.

That's OK. We're here to talk to each other, not do a book review. So I think we can take advantage of the incomplete or rough spots to suit our own purposes.

OK further corrections to his sloppiness:

Let's not be too ungracious. I've invited Bachman here via email to participate in the discussion. :wink:

i do suggest spivaks calculus on manifolds however for anyone wanting it explained correctly and precisely.

I've ordered Apostol and Spivak, per your recommendation.

Mathwonk, thank you for making your points. I'll look at them more thoroughly tomorrow, after I've copped some zzzzz's. :zzz:
 
  • #10
Both the book and this thread look promising - so I'll try to keep up. The fact that the text may sacrifice some rigour at this stage is a positive bonus. In many of the textbooks the wood is too obscured by the trees for them to be useful for self-tuition.

Mind you, my first problem as a stress analyst is to convince myself and my students that adopting a differential forms approach is worth the effort - there's a lot of investment in traditional tensor analysis. So if anyone can fire in some examples from fluid mechanics rather than quantum mechanics, I'd be grateful.
 
  • #11
Hello all,

My name is Dave Bachman. Tom, thanks so much for inviting me to join your thread, and for looking at my book! The version that is up on the arXiv is a little old. A more current one is available on my web page at:

http://pzacad.pitzer.edu/~dbachman

The idea of the text is that one can teach differential forms to freshmen and sophomores instead of the traditional approach to vector calc. I did not write it so that mathematicians, or even grad students, can learn differential forms. There are many good books out there targeted at that audience.

For this reason there is a lot of sacrifice of rigour for readability. The idea was not to "get it right", in the sense of presenting the material with all of its gory, technical details. Another reason I wrote the book was to present the geometric intuition behind forms, which is often lacking in more rigourous texts.

The new version that is up on my web page contains many new exercises, and a new first chapter on the basics from multivariable calculus. There is a lot of time there spent on parameterizations, since I had found this to be the biggest stumbling block in learning the rest of the material. Also the new version contains re-writes of several sections that were previously found to be awkward.

I am once again teaching out of my book, and every time I do this I post a new "edition". The next edition, which will be posted in about two months, will contain a new chapter on symplectic forms, as well as many new exercises that are a little more thought-provoking.

As to the comment that it is free... I'll try to keep a free version available on the web, but the text is currently being evaluated by a publisher.

Thanks again! I'll try to write more when I have time...

Dave.
 
  • #12
Bachman said:
My name is Dave Bachman. Tom, thanks so much for inviting me to join your thread, and for looking at my book!

Thanks for coming! :smile:

The version that is up on the arXiv is a little old. A more current one is available on my web page at:

http://pzacad.pitzer.edu/~dbachman

I had noticed that, but only after we started. Do you recommend we switch over?

The idea of the text is that one can teach differential forms to freshmen and sophomores instead of the traditional approach to vector calc.

That's exactly why I picked it. I would like to see something like this form the basis of a "Calculus IV" course where I work. That said, I'm not trying to flesh this out to the level of the Advanced Calculus course that mathwonk mentioned. At least not for the purposes of this thread. Personally, I'd love to go through Spivak, and I will once I get it.

Thanks again! I'll try to write more when I have time...

Great! If possible, could you (or anyone else lurking in this thread) comment on the 3 questions I put in red font in post #6?

Thanks,
 
  • #13
Tom Mattson said:
my second question.

I have always read and been taught that \frac {dy}{dx} is not to be thought of as a quotient. This point is usually made when introducing the Chain Rule. But if dx and dy are real-valued functions, then there should be no reason why the derivative could not be considered a quotient. Can any of our more experienced members comment on how the two points of view may be reconciled?

The reason the teachers say the derivative is not a quotient is because old textbooks used to use "atomic" differentials and compute it by dividing them, which is convenient (many engineers still think that way) but invalid given limit concepts. The derivative is actually a limit of quotients of finite quantities. In the differential forms area the limit is sort of built in, so that when you take the tangent space you have ALREADY got the tangent, with its slope, the derivative. So then if you take a basis in the new space based on that slope, you can play differential without violating rigor.
 
  • #14
OK, I think my second question is covered pretty well. I'll wait another day for anyone who would like to comment on my first and third questions. Then I'll post my notes on the next section.
 
  • #15
OK, I think I've figured out the answers to my other 2 questions.

My first one was:

Tom Mattson said:
Here is my first question.

I say that Bachman "derives" the basis because it looks so contrived. It is obvious that T_pP is just a carbon copy of \mathbb {R}^2 with a different origin. So why not simply use the well-known fact from linear algebra that a basis for this space is {&lt;1,0&gt;,&lt;0,1&gt;}?[/color]

I plotted the points (x,y), (x+t,y), and (x,y+t) in the plane P. Then I drew vectors from (x,y) to each of the other two points. If I consider that (x,y) is the origin of the coordinate system with axes dx and dy, then I see that the vectors I drew are based in this coordinate system. Taking the derivative of the coordinates leads to the advertised unit vectors, no matter where (x,y) is located in P. So, I can sort of see why this is used as a procedure for determining the basis of T_pP.

I still don't really like it, because it does not explicitly appeal to the linear algebraic notion of a basis. I'd really like it if someone could tell me why this viewpoint is useful, but I won't complain about it again.

My third question pertained to the illustrative example on pp 18-19. It was the tangent space determined from the tangent line of the parabola y=x^2 at (1,1).

Tom Mattson said:
This leads to my third question.

Why is this line referred to as a tangent space to \mathbb {R}^2? Why is it not referred to as the tangent space to the curve?

The point that this question is driving at is the apparent variance with the convention from the beginning of the chapter, in which Bachman names the tangent space determined from the tangent line to a curve C as T_pC. But here he calls it T_{(1,1)}\mathbb {R}^2. I am thinking that you can replace T_pC with a tangent space to \mathbb {R}^2 provided that the points along which the tangent spaces exist are constrained to the curve C. That is, any tangent space T_{(x,x^2)}\mathbb {R}^2 is a tangent space to y=x^2.

OK, I will pause for any corrections or additions to this post before posting the next set of notes and homework solutions.

Thanks everyone, this is a real help so far.
 
  • #16
A few quick replies...

First, I do recommend switching to the most current edition, if only because there are more (and better) exercises. If you are really considering the text for Calc IV then the first chapter of the most current edition should definitely be covered, if only as a review from Calc III.

Now on to your question. There must be some confusion generated by something I wrote, but I'm not sure what it is. The tangent space to the curve C ($T_pC$) is a line made up of tangent vectors. The tangent space to $R^2$ at the point $p$ is a plane, with basis $dx$ and $dy$. The line $T_pC$ sits in the plane $T_pR^2$, but it is certainly not the whole plane. So $T_pC$ is a proper subspace of $T_pR^2$. Does this help?

Dave.
 
  • #17
To get LaTeX typesetting here, just use [ tex ] and [ /tex ] tags (without the spaces). You can double-click on others' math to see how, as well.
 
  • #18
We also have [ itex ] for LaTeX in paragraphs... it's rendered smaller so it lines up with ordinary text.

The tangent space to $R^2$ at the point $p$ is a plane, with basis $dx$ and $dy$.

Aren't dx and dy supposed to be cotangent vectors, not tangent vectors?
 
  • #19
Added references to the newer version of the book.

Bachman said:
First, I do recommend switching to the most current edition, if only because there are more (and better) exercises.

OK, I'll switch over.

Now on to your question. There must be some confusion generated by something I wrote, but I'm not sure what it is.

Here is why there is confusion:

On page 17 of the arXiv edition of the book (edit: that's page 47 in the newer version), you refer to the tangent space defined by the tangent line to a curve C as T_pC, not T_p\mathbb{R}^2. Then on pp18-19 (edit: that's pp 48-49 in the newer version), in what I would think is a completely analogous situation, you refer to the tangent space of y=x^2 as not the tangent space of that curve, but as the tangent space T_{(1,1)}\mathbb{R}^2.

Does this help?

Sorry, but no. :redface:
 
  • #20
Oh yes, of course. Thank you. What I meant to say was "The tangent space to \mathbb R^2 at the point p is a plane, with AXES dx and dy."
 
  • #21
Tom,

I'm still not sure where the confusion lies. The tangent space to C is a line, denoted as T_pC. At the bottom of page 18 I say "We are no longer thinking of this tangent line (i.e. the space T_pC) as lying in the same plane that the graph does. Rather, it lies in T_{(1,1)} \mathbb R ^2."

I'm not sure how you are getting the impression, from this, that T_pC is all of T_{(1,1)} \mathbb R ^2.

By the way, thanks all for the latex advice.

Dave.
 
  • #22
Bachman said:
I'm not sure how you are getting the impression, from this, that T_pC is all of T_{(1,1)} \mathbb R ^2.

OK, I've got it. The tangent space to the parabola is a proper subspace of T_{(1,1)}\mathbb{R}^2. No problem.
 
  • #23
my apologies dave, for the picky mathematician criticisms of a text aimed at undergrads. tom is also helping me learn which explanations are tenable for the desired audience.

clearly you yourself know what the correct version is, and have made didactic choices based on teaching experience.

i would edit out the ungracious late night posts but cannot do so now after a certain number of days have passed.

roy
 
  • #24
Chapter 3: Forms

Section 2: 1-Forms

Once again:

My notes are in black.
My comments and homework solutions are in blue.
My questions are in red.

I'll pause 24 hours for discussion, questions, and corrections. If none are forthcoming, then I will post the next section of my notes tomorrow night at about the same time.

1-Forms
A 1-form \alpha is a linear function that maps vectors into real numbers. Since it is called "linear", we require it to satisfy:

\alpha(\mathbf{v}+\mathbf{w})=\alpha(\mathbf{v})+\alpha(\mathbf{w})
\alpha(k\mathbf{v})=k\,\alpha(\mathbf{v})

Quick question:

Are "1-form" and "linear functional" synonymous?

Fixing our attention on 1-forms on T_p\mathbb {R}^2, we see that our general 1-form is \omega = a\,dx + b\,dy. The geometric interpretation of \omega is that of a plane passing through the origin of the dx-dy coordinate system: \omega = a\,dx + b\,dy is the equation of a plane in T_p\mathbb{R}^2 \times \mathbb{R}.


Just a note of clarification for students: "\times" denotes a Cartesian product, which makes n-tuples out of elements of sets. For instance \mathbb{R} \times \mathbb{R} is the set of all ordered pairs of real numbers. And in our case, T_p\mathbb{R}^2 \times \mathbb{R} indicates that we are forming n-tuples from ordered pairs in T_p\mathbb{R}^2 (the coordinates for dx and dy) and a member of \mathbb{R} (the value of \omega).

Illustrative Example
For \omega(\langle dx,dy \rangle) = 2dx+3dy, evaluate \omega(\langle -1,2 \rangle).

This is easily done by plugging the components of \langle -1,2 \rangle into the right places in \omega:

\omega(\langle -1,2 \rangle)=(2)(-1)+(3)(2)=4

And we are to take note that \omega(\langle -1,2 \rangle) is just the dot product \langle -1,2 \rangle \cdot \langle 2,3 \rangle.

Note that we can make a vector out of the coefficients in \omega. We can call it \langle \omega \rangle = \langle a,b \rangle. This notation is not introduced until Section 2.3, but I think it would be nice to have it now for shorthand.

So a recipe for evaluating a 1-form on a given vector is:

\omega(V) = \langle \omega \rangle \cdot V
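Here is that recipe as a small Python sketch (my own encoding, not the book's notation), with the example above as a check:

import numpy as np

# a 1-form a dx + b dy, represented by its coefficient vector <omega>,
# acts on a tangent vector V by a dot product
def one_form(coeffs):
    w = np.asarray(coeffs, dtype=float)
    return lambda V: float(w @ np.asarray(V, dtype=float))

omega = one_form([2, 3])   # omega = 2 dx + 3 dy, as in the example above
print(omega([-1, 2]))      # 4.0, matching (2)(-1) + (3)(2) = 4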

This brings us to the main point of the section: the geometric interpretation of 1-forms.

David Bachman said:
Evaluating a 1-form on a vector is the same as projecting onto some line and then multiplying by some constant.

This of course has the huge advantage of being independent of coordinates. Anyone who has studied relativity can see the value of this!


So now we know how to use a given 1-form to determine the projection of a vector onto a line, and we can then determine the scaling factor. What if we want to do things the other way around? What if I am given a line L, a scaling factor k, and a vector V? Recall from vector calculus that the dot product is related to the projection of a vector onto a line:

proj_{\mathbf{u}}\mathbf{v}=\frac{\mathbf{u} \cdot \mathbf{v}}{|\mathbf{u}|}

So say I want to write down a differential form that projects vectors onto a line L: dy=c\,dx and scales them by a factor of k (this will be asked of us in the Exercises). Since the slope of L is c=\frac{c}{1}, it is readily seen that a vector that is parallel to L is W=\langle 1,c \rangle. Since we are looking for the projection of V onto a line parallel to W, we look at:

proj_W V=\frac{W \cdot V}{|W|}
proj_W V=\frac{\langle 1,c \rangle \cdot V}{\sqrt{1+c^2}}

Upon comparing this with our expression for \omega above, it should be clear that \langle \omega \rangle is a scalar multiple of our vector W. Furthermore, I can scale the projection by a factor of k by multiplying both sides of the above projection by that factor.

k\,proj_W V=k\frac{\langle 1,c \rangle \cdot V}{\sqrt{1+c^2}}

So we can now find the differential form \omega that projects V onto dy=cdx and scales by a factor of k, because we have just derived a function that does that very thing. Recognizing that:

\langle \omega \rangle = \left\langle \frac{k}{\sqrt{1+c^2}}, \frac{ck}{\sqrt{1+c^2}} \right\rangle

we have:

\omega=\frac{k}{\sqrt{1+c^2}}dx+\frac{ck}{\sqrt{1+c^2}}dy
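The derivation, packaged as a function (a minimal sketch under the same setup: the line dy=c dx and the scale factor k; the function name is mine), checked against k times the scalar projection onto W:

import numpy as np

# coefficient vector <omega> of the 1-form that projects onto dy = c dx and scales by k
def projection_form(c, k):
    s = k / np.sqrt(1 + c**2)
    return np.array([s, c * s])

c, k = 2.0, 2.0
W, V = np.array([1.0, c]), np.array([3.0, -1.0])   # V is an arbitrary test vector
print(projection_form(c, k) @ V)                   # 0.8944..., i.e. 2/sqrt(5)
print(k * (W @ V) / np.linalg.norm(W))             # the same number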

1-Forms in \mathbb{R}^n
All of this straightforwardly generalizes to n dimensions. There is no need for elaboration.
 
  • #25
Chapter 3: Forms

Looks like my last post was too big, so I'm splitting it up.


Exercise 3.2
(1) \omega = -dx+4dy. That means that \langle \omega \rangle = \langle -1,4 \rangle.
\omega(\langle 1,0 \rangle)=\langle -1,4 \rangle \cdot \langle 1,0 \rangle=-1
\omega(\langle 0,1 \rangle)=\langle -1,4 \rangle \cdot \langle 0,1 \rangle=4
\omega(\langle 2,3 \rangle)=2\omega(\langle 1,0 \rangle) + 3\omega(\langle 0,1 \rangle)=10

Note that I used a linear combination of \omega(\langle 1,0 \rangle) and \omega(\langle 0,1 \rangle) to evaluate \omega(\langle 2,3 \rangle). This is done in the spirit of Bachman's second geometric interpretation of \omega, which is:

David Bachman said:
Evaluating a 1-form on a vector is the same as projecting onto each coordinate axis, scaling each by some constant, and adding the results.


It should not be difficult to see that this is true in general.

(2) Find the line that \omega projects onto.
Since the line is parallel to \langle -1,4 \rangle and it passes through the origin in T_p\mathbb{R}^2, it must be dy=-4dx.

Exercise 3.3
I will use the formula I derived in these Section notes.
(1) c=2 and k=2, so \omega=\frac{2}{\sqrt{5}}dx+\frac{4}{\sqrt{5}}dy.
(2) c=\frac{1}{3} and k=\frac{1}{5}, so \omega=\frac{3}{5 \sqrt{10}}dx+\frac{1}{5 \sqrt{10}}dy.
(3) c=0 and k=3, so \omega=3dx.
(4) Here c is undefined, but in light of (3) it shouldn't be too taxing to see that \omega=\frac{1}{2}dy.
(5) Since 1-forms are linear, we have superposition, so \omega=3dx+\frac{1}{2}dy.
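The answers above can be double-checked numerically with the projection_form sketch from my previous set of notes (my own helper, not the book's):

import numpy as np

def projection_form(c, k):                          # as in my Section 3.2 notes
    s = k / np.sqrt(1 + c**2)
    return np.array([s, c * s])

print(projection_form(2, 2) * np.sqrt(5))           # [2. 4.], i.e. (2/sqrt 5, 4/sqrt 5)
print(projection_form(1/3, 1/5) * 5 * np.sqrt(10))  # [3. 1.], i.e. (3/(5 sqrt 10), 1/(5 sqrt 10))
print(projection_form(0, 3))                        # [3. 0.], i.e. omega = 3dx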
 
  • #26
answer to quick question: usually a 1-form is defined on a manifold as a family of linear functionals, i.e. not as one linear function from vectors to numbers, but as an assignment of such a function to each point of the manifold.


in my usual notation dx is a 1-form, and its value at p, dx(p), is a linear functional on the tangent space Tp(M).

this is analogous to the distinction between f' and f'(p). in fact the differential of f, in local coordinates x, is the 1 form f'dx whose value at p is f'(p)dx(p). more simply, if incorrectly, written as f'(p)dx.

reactions from the others? i may be out of step here, but i am trying to point out what most people in the community are going to mean by these terms.
 
  • #27
there is some discrepancy in the literature in the use of the word "form". algebraists do indeed use the word for a linear functional. lang, in his algebra book, calls an alternating k-tensor a k-form. classicists (analysts?) have long used the word "form" for linear functionals, and algebraists have also used it for homogeneous polynomials of higher degrees.

differential geometers who use it as i said above, are thus left without a good short word for the value of a k form at a point, and must call it an "alternating k tensor" as spivak does in his little "calculus on manifolds".

there are two ideas though, a covector, and a field of covectors. call them what you will.
 
  • #28
I make a distinction in my book between "1-form" and "differential 1-form." A 1-form is, indeed, a linear functional. It acts on a single tangent space. So, choosing a specific point p, a 1-form is a linear functional on T _p \mathbb R^n. A "differential 1-form", on the other hand, is a (differentiable) choice of 1-form for each tangent space. You'll get to this in the next chapter.

Dave.
 
  • #29
forgive me for not reading more closely. i have already perused the whole book quickly. since i already "know" everything in it, i am too impatient to read along in detail. so my comments should be pretty much ignored by learners.
 
  • #30
I had a question after reading prof. bachman's book. On page 45 of the new edition, he shows a function denoted by ω within the integrand to be an n-form, based upon the n vectors that ω takes in as input. Isn't ω none other than the Jacobian? Here's the integral from page 44 with the text "Area" replacing ω, to show the purpose of ω.

\int f(\phi(r,\theta))\,\text{Area}\left[\frac{\partial\phi}{\partial r}(r,\theta), \frac{\partial\phi}{\partial\theta}(r,\theta)\right]dr\,d\theta \qquad (1)

\text{Area}\left[\frac{\partial\phi}{\partial r}(r,\theta), \frac{\partial\phi}{\partial\theta}(r,\theta)\right] = \left|\frac{\partial\phi}{\partial r}(r,\theta) \times \frac{\partial\phi}{\partial\theta}(r,\theta)\right| \qquad (2)


If I'm correct, the right side of (2) is the Jacobian. How does this relate to n-forms on a "bigger picture" level?
 
  • #31
The equation on page 45 is supposed to motivate the study of n-forms. The integrand there is not an n-form. But it IS a function that takes two vectors and returns a real number. The point illustrated there is that you need such a function if your answer is going to be independent of the choice of parameterization. For such an integrand to be an n-form, it must also be linear (which the "Area" function is not in \mathbb R^3).

Dave.
 
  • #32
Dave, when you say an n-form is "linear" do you mean what most people call "n-linear"? i.e. linear in one variable at a time?

and are they also alternating?
 
  • #33
Yes, yes. Technically, an n-form on a vector space M is a multi-linear, alternating operator on the Cartesian product of n copies of M.

Dave.
 
  • #34
I hate to jump off the immediate topic of the material in the book, but I just had a quick question about the application of differential forms. Would learning it simply help me to broaden my understanding of calculus, or would it also have some sort of practical applications (applying to physics; I'm a physics major) as well? I'm familiar with the concept of stating Maxwell's equations in the language of differential forms, thus making them simpler, but I'm already pretty comfortable with them in the integral and differential formulations of the laws. What other areas of physics and math would be open to me after studying differential forms?
 
  • #35
Gza said:
What other areas of physics and math would be open to me after study of differential forms?

Anything involving vector fields, for starters. You can use them in Fluids, GR, and of course as you already noted, EM. The last chapter of Bachman's book discusses EM theory. They can also be applied to thermodynamics. But I am going to ask that this thread be reserved for a sequential discussion of the book. We can talk about all the applications you want at the end.

Since the discussion of my last set of notes has died down, I am going to post the next set later tonight.

Stay tuned...
 
  • #36
Chapter 3: Forms

Section 3: Multiplying 1-Forms

The first problem here is how to define a product of 1-forms. Why not \omega \cdot \nu (V) \equiv \omega (V) \cdot \nu (V)? Because it’s nonlinear.


To make the violation of linearity more explicit, note that superposition is violated:

\omega\cdot\nu(V_1+V_2)=\omega(V_1+V_2)\cdot\nu(V_1+V_2)
\omega\cdot\nu(V_1+V_2)=[\omega(V_1)+\omega(V_2)]\cdot[\nu(V_1)+\nu(V_2)]
\omega\cdot\nu(V_1+V_2)=\omega(V_1)\nu(V_1)+\omega(V_2)\nu(V_2)+\omega(V_1)\nu(V_2)+\omega(V_2)\nu(V_1)
\omega\cdot\nu(V_1+V_2)\neq\omega\cdot\nu(V_1)+\omega\cdot\nu(V_2)

And note that the scaling property is violated:

\omega\cdot\nu(cV)=\omega(cV)\cdot\nu(cV)
\omega\cdot\nu(cV)=c^2\,\omega(V)\cdot\nu(V)
\omega\cdot\nu(cV)\neq c\,\omega\cdot\nu(V)
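A quick numerical confirmation of the scaling failure (my own check; the forms and the vector are arbitrary choices):

import numpy as np

omega, nu = np.array([2.0, 3.0]), np.array([1.0, -1.0])   # two arbitrary 1-forms
V, c = np.array([1.0, 2.0]), 5.0
naive = lambda U: (omega @ U) * (nu @ U)                  # the rejected product
print(naive(c * V), c * naive(V))                         # -200.0 vs -40.0: quadratic, not linear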

So instead of taking the simple product of \omega and \nu, we define the wedge product \omega \wedge \nu. Since we can use \omega and \nu acting on V_1 and V_2 to generate pairs of numbers, it stands to reason that the natural geometric setting in which we should be operating is a plane, namely the \omega-\nu plane.

Notation
(a,b) denotes a point in the x-y plane.
\langle a,b \rangle denotes a vector in the x-y plane.
[a,b] denotes a vector in the \omega - \nu plane.


Quick question:

Is there any subtle distinction between the coordinates of a vector and the components of a vector, or are they synonymous?

Geometric Interpretation of the Wedge Product
We don't want to use our product of 1-forms to generate a pair of vectors, we want to use it to generate a number. That number is defined to be the signed area of the parallelogram spanned by the vectors [\omega(V_1),\nu(V_1)] and [\omega(V_2),\nu (V_2)] in the \omega - \nu plane.


As we know from Calculus III, two vectors V_1=\langle a,b \rangle and V_2=\langle c,d \rangle in \mathbb{R}^2 span a parallelogram with signed area given by:

Area=(V_1 \times V_2) \cdot \hat{k}=\left|\begin{array}{cc}a&b\\c&d\end{array}\right|=ad-bc

Similarly, two vectors [\omega(V_1),\nu(V_1)] and [\omega(V_2),\nu(V_2)] in the \omega-\nu plane span a parallelogram with signed area given by:

Area=\omega \wedge \nu(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\nu(V_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|=\omega(V_1)\nu(V_2)-\omega(V_2)\nu(V_1)

Clearly the sign of the area depends on the order of the vectors in the cross product or the wedge product, as the case may be.
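The determinant definition translates directly into a small Python sketch (my own representation, with each 1-form given by its coefficient vector as in my Section 3.2 notes):

import numpy as np

# the 2-form omega ^ nu acting on a pair of tangent vectors via the 2x2 determinant
def wedge(omega, nu):
    w, n = np.asarray(omega, dtype=float), np.asarray(nu, dtype=float)
    def act(V1, V2):
        V1, V2 = np.asarray(V1, dtype=float), np.asarray(V2, dtype=float)
        return (w @ V1) * (n @ V2) - (w @ V2) * (n @ V1)
    return act

f = wedge([2, -3], [1, 1])
print(f([-1, 2], [1, 1]), f([1, 1], [-1, 2]))   # -15.0 15.0: the sign flips with the order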


Just anticipating an obvious question that would be asked by an astute student:

If all we're doing here is defining the wedge product in terms of something that could just as easily be expressed in terms of a cross product, why bother defining the wedge product at all? Why not just take the cross product of vectors in the \omega - \nu plane?


We noted earlier that we did not want the simple product of 1-forms because it is nonlinear, and I showed as much in my notes. Now I want to show that the wedge product is linear.

Superposition
Checking the superposition property on \omega \wedge \nu (V_1, V_2) leads us to the following.

\omega\wedge\nu(V_1+V_2,V_3)=\left|\begin{array}{cc}\omega(V_1+V_2)&\nu(V_1+V_2)\\\omega(V_3)&\nu(V_3)\end{array}\right|

\omega\wedge\nu(V_1+V_2,V_3)=\left|\begin{array}{cc}\omega(V_1)+\omega(V_2)&\nu(V_1)+\nu(V_2)\\\omega(V_3)&\nu(V_3)\end{array}\right|

\omega\wedge\nu(V_1+V_2,V_3)=[\omega(V_1)+\omega(V_2)]\nu(V_3)-\omega(V_3)[\nu(V_1)+\nu(V_2)]

\omega\wedge\nu(V_1+V_2,V_3)=[\omega(V_1)\nu(V_3)-\omega(V_3)\nu(V_1)]+[\omega(V_2)\nu(V_3)-\omega(V_3)\nu(V_2)]

\omega\wedge\nu(V_1+V_2,V_3)=\omega\wedge\nu(V_1,V_3)+\omega\wedge\nu(V_2,V_3)

Check.

In a similar fashion it can be shown that:

\omega\wedge\nu(V_1,V_2+V_3)=\omega\wedge\nu(V_1,V_2)+\omega\wedge\nu(V_1,V_3)
 
  • #37
Chapter 3: Forms

Section 3: Multiplying 1-Forms (cont'd)


Scaling
The other property to check is scaling.

\omega\wedge\nu(cV_1,V_2)=\left|\begin{array}{cc}\omega(cV_1)&\nu(cV_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|

\omega\wedge\nu(cV_1,V_2)=\left|\begin{array}{cc}c\omega(V_1)&c\nu(V_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|

\omega\wedge\nu(cV_1,V_2)=c\omega(V_1)\nu(V_2)-c\omega(V_2)\nu(V_1)

\omega\wedge\nu(cV_1,V_2)=c\,\omega\wedge\nu(V_1,V_2)

Check.

In a similar fashion it can be shown that:

\omega\wedge\nu(V_1,cV_2)=c\omega\wedge\nu(V_1,V_2).

Because \omega\wedge\nu(V_1,V_2) is linear in both variables, it is said to be bilinear. See the exchange between mathwonk and Bachman in Posts #32-33 on n-linearity.

Lastly, we address the issue of signed areas. When we defined the wedge product we defined it as the signed area of the parallelogram spanned by the vectors [\omega(V_1),\nu(V_1)] and [\omega(V_2),\nu(V_2)].

Bachman sez:

David Bachman said:
Should we have taken the absolute value? Not if we want to define a linear operator.


My next question is for the students:

Would any of you like to show this? Check my notes for how to show linearity and non-linearity (think superposition and scaling).
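As a numerical hint (mine, not from the book), using the wedge sketch from my earlier notes: try a negative scalar and watch the absolute value break the scaling property.

import numpy as np

def wedge(w, n):                                  # as in my earlier notes
    return lambda V1, V2: (np.dot(w, V1) * np.dot(n, V2)
                           - np.dot(w, V2) * np.dot(n, V1))

f = wedge([1.0, 0.0], [0.0, 1.0])                 # dx ^ dy
V1, V2, c = np.array([1.0, 0.0]), np.array([0.0, 1.0]), -2.0
print(abs(f(c * V1, V2)), c * abs(f(V1, V2)))     # 2.0 vs -2.0: scaling fails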
 
  • #38
If all we're doing here is defining the wedge product in terms of something that could just as easily be expressed in terms of a cross product, why bother defining the wedge product at all? Why not just take the cross product of vectors in the \omega-\nu plane?

Because the \omega-\nu plane is two-dimensional, and cross products are only defined for three-dimensional vectors.

Dave.
 
  • #39
Bachman said:
Because the \omega-\nu plane is two-dimensional, and cross products are only defined for three-dimensional vectors.

OK, but that just changes the question. My fictitious student could then say that the same is true of the x-y plane, but that we can define cross products by defining a third axis that is orthogonal to the 2 existing axes. Why can't the same be done for the \omega-\nu plane?
 
  • #40
the cross product is defined for n-1 vectors in n-space, and the value is a vector in that space. hence it is only defined for 2 vectors in 3-space. [which orthogonal direction are you going to choose for a given plane in n-space?]

It also depends on a choice of determinant for the larger space, i.e. of n-form.

the wedge product is defined for two vectors in n-space, and the value is a 2-vector, an element of a space of dimension "n choose 2".
 
  • #41
I like the geometric interpretation of the 2-form as the area of the parallelogram of the projection of the vectors V_1 and V_2 onto the plane spanned by \langle\omega\rangle and \langle\nu\rangle, multiplied by the area of the parallelogram formed by \langle\omega\rangle and \langle\nu\rangle, since it seems like a natural extension of the geometric interpretation of the 1-form, involving the dot product of \langle\omega\rangle and V; but it still seems difficult for me to switch back and forth between this geometric interpretation of forms and the idea of a 2-form, for instance, as a function \omega\wedge\nu : T_p\mathbb{R}^3 \times T_p\mathbb{R}^3 \rightarrow \mathbb{R}. For learning purposes, how exactly should one think about forms?
 
  • #42
klingon interpretation:
a k form is sort of like a bird of prey that hovers over the space looking for a k-cycle. when it sees one it gobbles it up and spits out a number.
 
  • #43
What is a k-cycle, if i may ask? I would assume it to be a collection of k n-vectors within T_p\mathbb{R}^n; is this close?
 
  • #44
mathwonk said:
the cross product is defined for n-1 vectors in n-space, and the value is a vector in that space. hence it is only defined for 2 vectors in 3-space.

Yep, I know all that. What I was originally asking is this:

From the point of view of a calculus student, what would be your answer to the following question at this stage in the game:

The Big Question:
"Why are we introducing the wedge product to find the area of a parellelogram, when we could just as well take a projection of a cross product, which we already know how to do?"

I already know that cross products and wedge products are two different animals, and I also know that we will eventually integrate them (actually, my advisees and I are doing that now). What I am asking is, do I tell a student who asks the question above to just sit tight and wait to see why we introduce the wedge product, or is there some reason that it's necessary now?

[which orthogonal direction are you going to choose for a given plane in n-space?]

Well, you said it yourself: the cross product is defined for n-1 vectors in n-space. I am still not seeing why taking cross products with our vectors living in the \omega - \nu plane is prohibited, as long as a 3rd axis is defined.

But if the answer to my Big Question above is, "You tell the student to sit tight and wait until the next chapter", then I'll settle for that.

By the way, my copy of Spivak's Calculus on Manifolds is due in on Saturday, and my copy of his Calculus is due in 2 weeks later. If the latter is all it's cracked up to be, then I may try to get my Department Chair to switch over. We currently use Larson, Hostetler and Edwards, which I am certain you would call a "cookbook".

More notes tomorrow...
 
  • #45
Chapter 3: Forms

Section 3: Multiplying 1-Forms (cont'd)

Here are my homework solutions for the exercises that cover the material we've done so far. In my last set of notes, I posted a question to the students on the nonlinearity of 2-forms when the area of the parallelogram is unsigned. I'll post my solution to that tomorrow, if no one takes me up on it. I'll also finish posting Section 3.3 notes tomorrow.


Exercise 3.4

(1) Evaluating the four 1-Forms:
\omega(V_1)=\langle 2,-3 \rangle \cdot \langle -1,2 \rangle=-8
\nu(V_1)=\langle 1,1 \rangle \cdot \langle -1,2 \rangle=1
\omega(V_2)=\langle 2,-3 \rangle \cdot \langle 1,1 \rangle=-1
\nu(V_2)=\langle 1,1 \rangle \cdot \langle 1,1 \rangle=2

(2) Evaluating the 2-Form:

\omega\wedge\nu(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\nu(V_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|

\omega\wedge\nu(V_1,V_2)=\omega(V_1)\nu(V_2)-\omega(V_2)\nu(V_1)

\omega\wedge\nu(V_1,V_2)=(-8)(2)-(-1)(1)=-15

(3) Expressing \omega\wedge\nu as a multiple of dx\wedge dy.
Let V_1=\langle w,x \rangle and V_2=\langle y,z \rangle. Then \omega\wedge\nu(V_1,V_2)=5(wz-xy).

Letting dx\wedge dy act on the same two vectors yields dx\wedge dy(V_1,V_2)=wz-xy. On comparison it is readily seen that the constant of proportionality is c=5.
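A numerical check of (2) and (3), reusing the wedge sketch from my Section 3.3 notes (my own helper, not the book's):

import numpy as np

def wedge(w, n):                                  # as in my Section 3.3 notes
    return lambda V1, V2: (np.dot(w, V1) * np.dot(n, V2)
                           - np.dot(w, V2) * np.dot(n, V1))

f = wedge([2.0, -3.0], [1.0, 1.0])                # omega ^ nu
g = wedge([1.0, 0.0], [0.0, 1.0])                 # dx ^ dy
V1, V2 = [-1.0, 2.0], [1.0, 1.0]
print(f(V1, V2))                                  # -15.0
print(f(V1, V2) / g(V1, V2))                      # 5.0, the constant of proportionality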

Exercise 3.5
Skew-symmetry of \omega\wedge\nu(V_1,V_2)

\omega\wedge\nu(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\nu(V_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|

\omega\wedge\nu(V_1,V_2)=\omega(V_1)\nu(V_2)-\omega(V_2)\nu(V_1)

\omega\wedge\nu(V_1,V_2)=-[\omega(V_2)\nu(V_1)-\omega(V_1)\nu(V_2)]

\omega\wedge\nu(V_1,V_2)=-\left|\begin{array}{cc}\omega(V_2)&\nu(V_2)\\\omega(V_1)&\nu(V_1)\end{array}\right|

\omega\wedge\nu(V_1,V_2)=-\omega\wedge\nu(V_2,V_1)

Exercise 3.6
Using the result from the previous exercise and letting V_1=V_2=V:

\omega\wedge\nu(V,V)=-\omega\wedge\nu(V,V)
2\,\omega\wedge\nu(V,V)=0
\omega\wedge\nu(V,V)=0

Exercise 3.7
Done in Notes.

Exercise 3.8

\omega\wedge\nu(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\nu(V_1)\\\omega(V_2)&\nu(V_2)\end{array}\right|

\omega\wedge\nu(V_1,V_2)=\omega(V_1)\nu(V_2)-\omega(V_2)\nu(V_1)

\omega\wedge\nu(V_1,V_2)=-[\nu(V_1)\omega(V_2)-\nu(V_2)\omega(V_1)]

\omega\wedge\nu(V_1,V_2)=-\left|\begin{array}{cc}\nu(V_1)&\omega(V_1)\\\nu(V_2)&\omega(V_2)\end{array}\right|

\omega\wedge\nu(V_1,V_2)=-\nu\wedge\omega(V_1,V_2)

Exercise 3.9

\omega\wedge\omega(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\omega(V_1)\\\omega(V_2)&\omega(V_2)\end{array}\right|

\omega\wedge\omega(V_1,V_2)=\omega(V_1)\omega(V_2)-\omega(V_2)\omega(V_1)

\omega\wedge\omega(V_1,V_2)=0

Exercise 3.10
Distribution of \wedge over +.

(\omega+\nu)\wedge\psi(V_1,V_2)=\left|\begin{array}{cc}(\omega+\nu)(V_1)&\psi(V_1)\\(\omega+\nu)(V_2)&\psi(V_2)\end{array}\right|

(\omega+\nu)\wedge\psi(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)+\nu(V_1)&\psi(V_1)\\\omega(V_2)+\nu(V_2)&\psi(V_2)\end{array}\right|

(\omega+\nu)\wedge\psi(V_1,V_2)=[\omega(V_1)+\nu(V_1)]\psi(V_2)-[\omega(V_2)+\nu(V_2)]\psi(V_1)

(\omega+\nu)\wedge\psi(V_1,V_2)=[\omega(V_1)\psi(V_2)-\omega(V_2)\psi(V_1)]+[\nu(V_1)\psi(V_2)-\nu(V_2)\psi(V_1)]

(\omega+\nu)\wedge\psi(V_1,V_2)=\left|\begin{array}{cc}\omega(V_1)&\psi(V_1)\\\omega(V_2)&\psi(V_2)\end{array}\right|+\left|\begin{array}{cc}\nu(V_1)&\psi(V_1)\\\nu(V_2)&\psi(V_2)\end{array}\right|

(\omega+\nu)\wedge\psi(V_1,V_2)=\omega\wedge\psi(V_1,V_2)+\nu\wedge\psi(V_1,V_2)
 
  • #46
Tom, I assumed you were working in n space, in which case there is no natural way to choose a 3rd axis. were you actually working in 3 space?

in that case I would say to the student that there is a special definition that works in 3 space but never works again, and we are trying to learn a method that will always work.

[If the stated purpose of your course is to learn about differential forms, it seems odd that a student would say, I don't want to learn how it is done with differential forms, I'd rather do it the old way.]

but maybe he is asking what does differential forms have to offer if his old way works as well.

in that case i would appeal to the fact that the diff forms approach generalizes to higher dimensions.
 
  • #47
mathwonk said:
Tom, I assumed you were working in n space, in which case there is no natural way to choose a 3rd axis. were you actually working in 3 space?

In this particular case we are working in 2-space, and taking advantage of a 3rd axis when talking about the cross product. As I said, I was wondering what to say to a student in regards to why we couldn't take the cross product in the \omega - \nu plane.

in that case I would say to the student that there is a special definition that works in 3 space but never works again, and we are trying to learn a method that will always work.

Good enough, then.

[If the stated purpose of your course is to learn about differential forms, it seems odd that a student would say, I don't want to learn how it is done with differential forms, I'd rather do it the old way.]

but maybe he is asking what does differential forms have to offer if his old way works as well.

Exactly. My advisees are making a presentation to an undergraduate math conference, and one of their points is that the differential forms approach is superior to the way in which vector calculus is typically presented. And before they do that, they will be giving a practice presentation to a skeptical faculty at our community college. I am just trying to anticipate the objections that they might raise.

in that case i would appeal to the fact that the diff forms approach generalizes to higher dimensions.

There we have it, then. Thanks.
 
  • #48
Tom,

Please excuse me for rattling on, but i think i can do better than my last post, in the light of day.

I am a little rusty on cross products, but it seems to me that for one thing, differential forms methods are easier.

so maybe one could work up a little demonstration of the superior ease of wedge products.

e.g. one could use the properties of wedge products to actually compute the formula for a determinant. e.g. taking the wedge v^w = (a e1 + b e2)^(c e1 + d e2)

gives ac e1^e1 + bc e2^e1 + ad e1^e2 + bd e2^e2

= ac (0) - bc (e1^e2) + ad e1^e2 + bd (0) = (ad - bc) e1^e2.

the same thing works for two vectors v,w in 3 space and gives three terms, where each term is then visibly a 2 by 2 determinant, i.e. the area of a projection of the parallelogram spanned by v,w into one of the three coordinate planes.
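a quick numpy illustration of this (mine, with arbitrarily chosen vectors): the three coefficients of v^w are the three projected 2 by 2 determinants, and they are, up to sign and order, the components of v x w. the "pythagorean" fact mentioned further down drops out too.

import numpy as np
from itertools import combinations

v, w = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
# coefficients of v ^ w on e_i ^ e_j (i < j): the projected 2x2 determinants
coeffs = {(i, j): v[i] * w[j] - v[j] * w[i] for i, j in combinations(range(3), 2)}
print(coeffs)             # {(0, 1): -3.0, (0, 2): -6.0, (1, 2): -3.0}
print(np.cross(v, w))     # [-3.  6. -3.]: the same numbers up to sign and order
# squared area of the parallelogram = sum of squares of the projected areas
print(sum(a**2 for a in coeffs.values()), np.cross(v, w) @ np.cross(v, w))   # 54.0 54.0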

again, excuse me if i am out of touch with skillful use of cross products, but it seems to me that in that approach one simply memorizes all the formulas, and either memorizes the explicit coefficients of a cross product, or writes it as a formal determinant, and then must already know how to expand a determinant.


so of course in the one dimension where they overlap, the two methods are equivalent, since both amount to forming a 3 by 3 determinant, but the one seems more natural to me, and easier, since it is based on axioms instead of memorized formulas. it also generalizes better.


it also gives an algebra for geometry, as originally envisioned by grassmann, i.e. he was trying to calculate with objects which represented lines, and planes, and 3-spaces, etc., in n-space.


thus one thinks of a simple ("decomposable") wedge product v^w^u, as representing the span of the 3 vectors u,v,w, in n space, except it degenerates to zero if they are dependent.

so it is sort of a tool for detecting when r vectors in n-space are dependent.


thats all i can think of.

best wishes,

roy


oh yes, the cross product method is also less natural since even in three space it replaces a vector parallelogram, spanned by v and w, with a single vector vxw perpendicular to that parallelogram, and having length equal to the area of the parallelogram.


why does one want to replace a natural geometric object like a parallelogram by a single vector perpendicular to it?

Even though it seems to me unnatural, one pretty aspect of that duality is the pythagorean theorem. i.e. there are two pythagorean theorems, one for the parallelogram, wherein the square of the area of the parallelogram equals the sum of the squares of the areas of the three projected parallelograms. this is dual to the fact that the squared length of the cross product vector equals the sum of the squares of the lengths of its three projected vector components.


so the general phenomenon is that a sequence of r independent vectors in n-space spans an r-dimensional parallelogram, and it is dual to another (n-r)-dimensional parallelogram with presumably the same area?


this duality depends on having an inner product, whereas the wedge product formulation does not. moreover in general there is no good reason to replace an r-dimensional parallelogram by an (n-r)-dimensional one.

but in the one case of three space, it lets us replace a possibly less intuitive object, i.e. a parallelogram, by a simpler one, a vector.


so the cross product approach has many disadvantages:

1) it depends on more structure, namely that of a dot product and consequent notions of orthogonality.

2) it has less intuitive meaning. i.e. what is the point of representing a planar object by a vector object?

3) it is special to three dimensional space, where 2-planes are orthogonally dual to lines.

4) it is harder to calculate with, at least for me, whereas the wedge product has all its rules for calculating "built in", so that computing with it is easy and mechanical.

5) wedge multiplication meshes well with (exterior) differentiation d, rendering all vector calculus formulas the same, i.e. there are no longer several versions of stokes theorem (green's theorem, the divergence theorem, the classical stokes theorem) but only one.

anyone can remember it:
the integral of dP over K, equals the integral of P over the boundary of K.

[where d(f dx) = df ^ dx for example, ...so curl(f dx + g dy) = d(f dx + g dy)

= [df/dx dx + df/dy dy] ^ dx + [dg/dx dx + dg/dy dy] ^ dy

= [dg/dx - df/dy] dx ^ dy. (I have to run to class so i hope this is somewhere near right.)]

i.e. integration makes d the "adjoint" of boundary.

In fact probably the nicest mechanical calculation associated to wedge products is that of grad, curl, and div.

i.e. the computation of grad f, curl (w) and div(M) becomes absolutely trivial. even i can remember them. more detail on this if desired.

i think a good demonstration of the effectiveness of wedge products would be a demonstration of how, when combined with d, it uniformizes all these classical theorems.
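in that spirit, a small exact check (sympy; the square region and the test functions are arbitrary choices of mine) of the unified statement for P = f dx + g dy on the unit square K, i.e. green's theorem:

import sympy as sp

x, y = sp.symbols('x y')
f, g = x * y, x**2                                # arbitrary test functions
# integral of dP = (dg/dx - df/dy) dx ^ dy over the unit square K
lhs = sp.integrate(sp.diff(g, x) - sp.diff(f, y), (x, 0, 1), (y, 0, 1))
# integral of P over the boundary of K, traversed counterclockwise
rhs = (sp.integrate(f.subs(y, 0), (x, 0, 1))      # bottom edge: y = 0, dy = 0
       + sp.integrate(g.subs(x, 1), (y, 0, 1))    # right edge: x = 1, dx = 0
       - sp.integrate(f.subs(y, 1), (x, 0, 1))    # top edge, traversed backwards
       - sp.integrate(g.subs(x, 0), (y, 0, 1)))   # left edge, traversed backwards
print(lhs, rhs)                                   # 1/2 1/2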
 
  • #49
here is another reason not to use cross products in 2-space by choosing another orthogonal direction:

in 2-space the issue is simply to compute a 2 by 2 determinant. it seems a big waste of energy to go to three dimensions, then compute a 3 by 3 determinant most of whose components are zero, just to get a 2 by 2 determinant.


so cross products in 2 space are even easier to dismiss as a reasonable method.
 
  • #50
a look at the generalized stokes theorem on page 104 of dave's book, and his nice table on page 110, contrasting the different-looking classical versions of the theorems with the completely unified versions on the right side of the table, should convince most people this is the way to go.
 