A Geometric Approach to Differential Forms by David Bachman

The discussion revolves around David Bachman's book "A Geometric Approach to Differential Forms," which participants are using as a study guide for a mathematics conference presentation. The thread initiator has agreed to advise students on the material, emphasizing the importance of practical applications of differential forms, such as proving the fundamental theorem of algebra and Brouwer's fixed point theorem. Participants are sharing insights and questions about the book's content, particularly regarding the definitions and concepts of tangent spaces and coordinates in differential forms. There are discussions on the clarity and precision of Bachman's explanations, with some recommending alternative texts for a more rigorous understanding. Overall, the thread aims to foster collaborative learning and exploration of differential forms.
  • #61
Gza, the discussion reveals that the one forms having a given 2 form as product are certainly not unique. for example if N and M are any one forms at all

N^M = N^(N+M) = N^(cN+M) = (cM+N)^M, for any constant c.

geometrically if we think about representing a plane and an oriented area, by an oriented parallelogram, any parallelogram in that plane having oriented area equal to that number would do. so the wedge product of any two independent vectors in that plane oriented properly, and with fixed product for their lengths, would have the same wedge product.

thus even if you fix one vector and its length, even then the other vector is not fixed. only its projection orthogonal to the first vector is fixed. even if you also fix the length of the other vector, there still seem usually to be 2 choices for it.

the abstract discussion i gave mentioned the map from pairs of one forms to their wedge product, and stated that the "fibers" of this map are three dimensional. in particular the fibers are not single points as they would be if the two one forms were determined by their product.

i.e. thinking again geometrically, given a plane, how many ways are there to pick two independent vectors in it? each vector can be chosen in a 2 dimensional family of ways, hence the pair can be chosen in a 4 dimensional family of ways.

even if we fix their orientation and the area of the parallelogram they span, we only lose one parameter, so it brings down the fiber dimension from 4 to three.
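a quick numeric sketch of that non-uniqueness (python with numpy; it uses the fact that on R^3 the wedge of two 1-forms has the same coefficients as the cross product of their coefficient vectors, and the particular random vectors and constant c are just for illustration):

```python
import numpy as np

# on R^3 the wedge product of two 1-forms has the same components as the
# cross product of their coefficient vectors, so we can test non-uniqueness
# of factorizations numerically
rng = np.random.default_rng(0)
N, M = rng.standard_normal(3), rng.standard_normal(3)
c = 2.7  # any constant

# N^M = N^(cN + M) = (cM + N)^M
assert np.allclose(np.cross(N, M), np.cross(N, c*N + M))
assert np.allclose(np.cross(N, M), np.cross(c*M + N, M))
```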
 
Last edited:
  • #62
it would seem that geometrically, to factor a 2 form, you would just find two independent vectors both perpendicular to the vector of coefficients of the 2 form. there are lots of those. then adjust the lengths by a scalar.

this is just solving a single homogeneous linear equation in three unknowns.
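a numeric sketch of that recipe (python with numpy; the coefficient convention a dy^dz + b dz^dx + c dx^dy corresponding to (a,b,c), and the sample numbers, are my own choices for illustration):

```python
import numpy as np

# target 2-form: a dy^dz + b dz^dx + c dx^dy, written as a coefficient vector
coeffs = np.array([2.0, 3.0, 5.0])

# solve the homogeneous equation [a b c] . x = 0: the last two rows of V^T
# from the SVD of the 1x3 row span the plane perpendicular to coeffs
_, _, vt = np.linalg.svd(coeffs.reshape(1, 3))
u, v = vt[1], vt[2]           # two orthonormal vectors in that plane

# cross(u, v) is plus or minus the unit vector along coeffs, so one scalar
# adjustment of the first factor recovers the original 2-form exactly
s = coeffs @ np.cross(u, v)
assert np.allclose(np.cross(s * u, v), coeffs)
```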
 
  • #63
it would seem that geometrically, to factor a 2 form, you would just find two independent vectors both perpendicular to the vector of coefficients of the 2 form.

So on what geometric basis would I be able to consider the coefficients of a two form as a vector? I'm having a hard time visualizing it.
 
Last edited:
  • #64
to paraphrase some of my physicist friends on here,
if it has three numbers it's a vector, right?

so use the zen approach, if it looks like a vector and quacks like a vector, treat it as a vector.


see the full solution in the next post.
 
Last edited:
  • #65
well here is how i thought of it: i figured the wedge product of two one forms has components which were 2 by 2 determinants, so they were essentially the same as the components of the cross product (in 3 space). that means the vector with those components should be perpendicular to the plane spanned by the original two vectors, assuming they were independent.

now to prove that one would use the lagrange expansion of a determinant but i can't do that in my head so i just assumed it worked. then let's see, oh yes, that means that we are essentially given the cross product of the two vectors and are looking for the two vectors, which means we want two vectors perpendicular to the given vector, and spanning a parallelogram with area given by the length of the given vector. so i guess to be honest it was all inspired by the cross product interpretation which we are not using, i.e. eschewing.

but so what, if it helps, use it. just a suggestion, as it seemed easier than what i was hearing as a solution method. of course if it fails miserably i have egg on my face.
let's try one:


the product of oh, dx and dy is dx^dy, which has coefficients (1,0,0).

so the perp is (0,1,0) and (0,0,1). i.e. dy and dz, oops. i don't give up though but must understand what is going on.

AHA! the right way to assign coordinates is no doubt to call dx^dy dual to dz hence to (0,0,1), so in fact the coefficients of dx^dy should be (0,0,1), hence perpendicular to (1,0,0) and (0,1,0), i.e. to dx and dy.

but of course this is cheating to make it work out. you need to give a decent explanation that works in general, but i still believe it.

why don't you give this a little shot? see if it works for a little more complicated one like dx^dy + dx^dz. this has coords (0,0,1) + (0,1,0) = (0,1,1) or maybe (0,0,1) - (0,1,0) = (0,-1,1).

anyway, the perp is either (1,0,0) and (0,1,1), or (1,0,0) and (0,1,-1).

try both. multiply (1,0,0) = dx times (0,1,1) = dy + dz and get hey! dx^dy + dx^dz!

it works!

what do you think, was i just lucky? got to go now, marge is getting implants on the simpsons.
 
Last edited:
  • #66
ok: a dydz + b dzdx + c dxdy = (a,b,c)

has orthocomplement spanned by (-b,a,0), (0,-c,b), if b is not zero.

hence we try [-bdx + ady]^[-cdy+bdz]

= bcdxdy -b^2 dxdz + abdydz = bcdxdy + b^2 dzdx + abdydz

= b [a dydz + b dzdx + c dxdy].

so just divide one of the one forms by b.

if b=0, use the basis (0,1,0), (-c,0,a), for vectors orthogonal to (a,b,c).

then we get dy^(-cdx + adz) = ady^dz + c dx^dy.
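here is a numeric check of that factorization, as a sketch: it identifies a 1-form p1 dx + p2 dy + p3 dz with the vector (p1,p2,p3), so that the wedge of two 1-forms has coefficients cross(p, q) in the basis (dy^dz, dz^dx, dx^dy); the values of a, b, c are arbitrary nonzero examples.

```python
import numpy as np

# wedge of 1-forms on R^3 <-> cross product of coefficient vectors,
# in the basis (dy^dz, dz^dx, dx^dy)
a, b, c = 2.0, 3.0, 5.0          # target 2-form: a dy^dz + b dz^dx + c dx^dy
f = np.array([-b, a, 0.0]) / b   # (-b dx + a dy), divided by b as above
g = np.array([0.0, -c, b])       # -c dy + b dz
assert np.allclose(np.cross(f, g), [a, b, c])
```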

what about this Gza?
 
Last edited:
  • #67
hi everyone!
I’m one of the students who will be presenting this topic at a conference. It’s taken me a while to sign on, but now that I’ve jumped in I’ll hopefully be able to add to the discussion regularly.
~First, to answer Tom’s question on post #37… Why don’t we take the absolute value of the signed area? The property of superposition gives us the equality below.
\omega\wedge\nu(V_1+V_2,V_3)=\omega\wedge\nu(V_1,V_3)+\omega\wedge\nu(V_2,V_3)
If the absolute value is taken for all three wedge products, it’s pretty easy to see that the right side of the equation will not always equal the left side. This can be checked by plugging some vectors in, computing and taking note of the result. That’s what I did.
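to make that concrete, here is a tiny python example (my own choice of vectors in R^2) where the signed areas add up but the absolute values do not:

```python
def w(V, W):
    # (dx ^ dy)(V, W): signed area of the parallelogram spanned by V and W
    return V[0]*W[1] - W[0]*V[1]

V1, V2, V3 = (1, 0), (-2, 0), (0, 1)
Vsum = (V1[0] + V2[0], V1[1] + V2[1])

assert w(Vsum, V3) == w(V1, V3) + w(V2, V3)                  # signed: additive
assert abs(w(Vsum, V3)) != abs(w(V1, V3)) + abs(w(V2, V3))   # absolute: not
```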
~Also, on pg. 26 of the arXiv version of the book Bachman says, “To give a 2-form in 4-dimensional Euclidian space we need to specify 6 numbers.” A question similar to this statement is asked a little further ahead in the reading. My question is, can this be treated as a combination? 4choose2 = 6. I also noticed that to give a 3-form in 3-space (3choose3 = 1), you need to specify one number
 
  • #68
*melinda* said:
~Also, on pg. 26 of the arXiv version of the book Bachman says, “To give a 2-form in 4-dimensional Euclidian space we need to specify 6 numbers.” A question similar to this statement is asked a little further ahead in the reading. My question is, can this be treated as a combination? 4choose2 = 6. I also noticed that to give a 3-form in 3-space (3choose3 = 1), you need to specify one number
That's the right track. To prove the general form, first note that the set of k-forms on an n-dimensional vector space is a vector space. Then find a basis for the set of k-forms (note that a one-form wedged with itself is zero, and reordering a wedge product simply changes the sign, in the same manner as the even or oddness of a permutation). Since the size of the basis determines the dimension of the vector space, which determines how many numbers are necessary to specify an element of the space, counting the size of the basis (which you will find is a combination) will tell you how many numbers you need.
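a short python sketch of the count: the basis k-forms dx_{i_1}^...^dx_{i_k} correspond to k-element subsets of {1,...,n}.

```python
from itertools import combinations
from math import comb

n, k = 4, 2
# basis 2-forms on R^4: dxi ^ dxj with i < j
basis = list(combinations(range(1, n + 1), k))
assert len(basis) == comb(n, k) == 6   # 6 numbers specify a 2-form on R^4
assert comb(3, 3) == 1                 # 1 number specifies a 3-form on R^3
```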
 
Last edited:
  • #69
*melinda* : a basis for the k forms in n variables would be all k fold wedge products of the n one forms dx1,...dxn. but note that these products are zero unless all k of the forms multiplied are distinct. so there are exactly n choose k ways to find k distinct ones.
 
  • #70
i know you guys skipped chapter 1, but i have learned so much reading your posts i decided to try again reading the book. here are some tiny remarks that may be of help to dave in proofreading:

on page 15, ex 1.3 should say the area is |ad-bc|, if area is meant to be non negative. or else it should probably be called "oriented area".

in the next line the definition of determinant is also incorrect since it is defined as an area instead of an oriented area.

such obvious mistakes seem to be purposeful, but they do not make logical sense to me. i.e. it is incompatible in one line to say an area is a number that could have two values, one of them negative, and in the next line to define a determinant as an area, which can only be non negative??


what did you want to achieve here dave? are you approaching the subject from the point of view that a few small inaccuracies will not matter to beginners?

if so, then please ignore all this. but if you want a proofreader, here goes.


same comment top of page 16, that "volume" formula is not always non negative.


line 2 of section on multiple variables: "these spaces a very familiar" should be "these spaces are very familiar"

a point of philosophy: it might be safer to say that picturing R^20 is very difficult for most of us. certainly some people think they can do it. in the other direction, the picture at the top of the elementary school blackboard does not allow one to picture R^1 either because it is not long enough.

but these are matters of taste. still why discourage anyone who wants to try to picture R^20? indeed you have already sketched how to do it in the introduction, as a product of 10 copies of R^2.

for example imagine 20 parallel copies of R, erected at the points 1,2...,20 on the x axis. and then imagine choosing one point on each line, perhaps connected by a zigzag line. that's a general point of R^20.

I admit these depictions do not allow one to "see" all of R^20, but no more does a line segment allow one to see all of R^1.

but this kind of thing could go on forever.


bottom page 18: it is not quite true to say we define the integral via evenly spaced subdivisions. indeed the integral is only defined for functions for which the type of spacing does not affect the outcome of the limit. if you want to say you are defining the integral of continuous functions this would be ok. but it is not too hard to define a non (riemann) integrable function such that the limit described will exist and not be equal to some other limits with other spacings.

same comment for volume integrals on page 19.

perhaps the word "compute" would be more appropriate than "define", since we do compute integrals this way when they exist.

ok on page 22 there is a caveat that technical issues are being ignored (like continuity). such caveats should probably be placed at the beginning of the discussion. even simpler is just to say at the beginning that we are discussing the case for continuous functions, since then everything said is actually true.

at the top of page 33, a parameterization for a surface is required to be one to one and onto, but in example 1.12 page 36, the parametrization given there of the unit disc is not one to one. perhaps it would be better to allow parametrizations which fail to be one to one on the boundary of the domain? (as in this standard example.)

the reader will face the same challenge in trying to solve ex 1.26 by a one to one parametrization.
 
Last edited:
  • #71
chap 2: page 39, same incorrect statement about defining integrals via evenly spaced subdivisions occurs again.

problems with the definition of parametrization raise their head again on page 40. on page 23 a parametrization of a curve was defined as a one to one, onto, differentiable map from (all of) R^1 to the curve (although most examples so far have not been defined on all of R^1, so it might have been better to say from an interval in R^1).

more significant, the first example given on page 40 is not differentiable at the end points of its domain. so again it might be well to say the parametrization, although continuous on the whole interval may fail to be differentiable at the endpoints.

this is the beginning of another potential situation where one probably is intending to integrate this derivative even though it is not continuous or even bounded on its whole domain. this problem is often overlooked in calculus courses. i.e. when the "antiderivative" is well defined and continuous on a closed interval, it is often not noticed that the derivative is not actually riemann integrable by virtue of being unbounded.

indeed as i predicted, exercise 2.1 page 43 asks the reader to integrate the non - integrable function, derivative of (1-x^2)^(1/2), from -1 to 1.

this function is not defined at the endpoints of that interval and is also unbounded on that interval. interestingly enough it has a bounded continuous "antiderivative" which enables one to "integrate" it, but not by the definition given in the section, since the limit of those riemann sums does not in fact exist.

the polar parametrization of the hemisphere, on page 44, is again not one to one. and again the third coordinate function of the parametrization phi is not differentiable wrt r at r=1, hence the integral written is again not defined by a limit of riemann sums.

it seems worthwhile to face head on this problem about many natural parametrizations often not being one to one, and point out that for questions of integration, there is no harm in non one to one ness occurring on sets of lower dimension, since the integral over those sets will be zero.

Stieltjes is misspelled on page 44, both the t and one e are omitted.

the language at the bottom of page 45 describes regions parametrized by R^1, R^2, and R^n, although what is apparently meant, and what is done, is to parametrize by rectangular blocks in those spaces.
 
Last edited:
  • #72
what about this Gza?

I understand now, thank you. :approve:
 
  • #73
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

it is rather interesting that this derivative function can be "integrated" in some sense (i.e. as an improper integral) in spite of being unbounded.

does anyone agree that the polar parametrizations given are not actually one to one? and does anyone see why that does not matter?

(but that it does call for a new definition of parametrization?)
 
  • #74
My apologies for not having read the text, so I am sure it's already been pointed out.

One endless source of confusion for me when I was learning this stuff was the notion of axial and polar vectors. At first glance it's easy and obvious, but then terminology starts getting confused, particularly when you learn clifford algebras and some people's pet concepts to reinvent notation via geometric algebra.

People get in endless debates about how to properly distinguish these different types of *things*. e.g. what constitutes active and passive transformations of the system, what is a parity change, do we take Grassmann or Clifford notation blah blah blah.

Unfortunately if you want a cutesy picture of what's going on, à la MTW (forms now look like piercing planes), some of this stuff becomes relevant or else you quickly end up with ambiguities.

Most of the confusion goes away when you get into some of the more abstract and general bundle theory, but then the audience quickly starts getting pushed into late undergrad/early grad material and the point is lost.
 
  • #75
mathwonk said:
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

Yes, but we're not there yet. As I said in the beginning, I want to march through the book sequentially. The purpose of this thread is twofold:

1. To help my advisees for their presentation.
2. To see if a book such as Bachman's could be used as a follow-up course to what is normally called "Calculus III".

It doesn't really help to achieve my primary goal (#1) if we jump all over the place. My advisees are in Chapter 4 (on differentiation), and we are using this thread to nail down any loose ends that we left along the way in our effort to keep moving ahead.

I'll be posting the last of my Chapter 2 notes tonight and tomorrow. Once the discussion has died down I'll start posting notes on Chapter 3, which is about integration. I'll also try to pick up the pace.

Thanks mathwonk and everyone else for your useful comments, especially post #65 by mathwonk.

edit to add:

By the way mathwonk, my copy of Spivak's Calculus on Manifolds is in. Great book, thanks for the tip! One of my advisees (*melinda*) picked up Differential Forms with Applications to the Physical Sciences by Flanders. What do you think of it?
 
Last edited:
  • #76
i like flanders.


i do not understand your remark about the sequential treatment, and not being up to my comment yet.

if you are talking about marching sequentially through bachman, i started on page 1, and those comments are about chapters 1 and 2. how can someone be in chapter 4 and not be sequentially up to chapters 1 and 2 yet?


are you talking about chapter 4 of some other book?

it seems to me you guys are still way ahead of me.
 
  • #77
flanders had a little introductory article in a little MAA book, maybe Studies in Global Geometry and Analysis, edited by S. S. Chern (ISBN 0883851040), that first got me unafraid of differential forms, by just showing how to calculate with them.

i had been frightened off of them by an abstract introduction in college. i had only learned their axioms and flanders showed just how easy it is to multiply them. i liked the little article better than his more detailed books.
 
  • #78
mathwonk said:
i do not understand your remark about the sequential treatment, and not being up to my comment yet.

Never mind my comment. I was looking at the arXiv version of Bachman's book, in which page 39 is in Chapter 3 (the chapter on integrating 1-forms).

To prevent further confusion, I am now going to burn the arXiv version and exclusively use the version from his website. I'll re-do the chapter and section numbers in my notes.
 
  • #79
that's right, there were two versions of the book!
 
  • #80
Flanders is sort of the de facto reference book on differential forms for US math majors. You get some treatment in Spivak, and also some good stuff in various physics books, but it's not quite the same.

A modern book some people liked a lot was Darling's book on Differential forms.

Regardless, I am a little bit wary of placing too much weight on intuitive pictures of the whole affair. Differential forms to me are much more of a formal language that makes calculations tremendously simpler (not to mention the fact that they are much more natural geometric objects, what with being coordinate independent and hence perfect for subjects like cohomology and algebraic geometry). Notation changes from area to area and I suspect having too rigid a 'geometric' intuition might actually hurt in some cases.

I guess I am just a little bit disenchanted with some of the earlier attempts to 'picture' what's happening, like the piercing plane idea from MTW (Bachman's text has a good section where they explain why that whole thing doesn't quite work out well in generality).
 
  • #81
Chapter 3: Forms

Section 4: 2-forms on T_p\mathbb{R}^3​

Here is the next set of notes. As always comments, corrections, and questions are warmly invited.


Exercise 3.15

Try as you might, you will not be able to find a 2-form (edit: on T_p\mathbb{R}^3) which is not the product of 1-forms. We in this thread have already argued as much, and indeed in the ensuing text Bachman explains that he has just asked you to do something that is impossible. Nice guy, that Dave. :-p

This brings us to the two Lemmas of this section. I feel that the details of the proofs are straightforward enough to omit, so I am just going to talk about what the lemmas say. If any of our students has any questions about the proofs, go right ahead and ask.

Lemma 3.1 reinforces the idea that was first brought up by Gza: The 1-forms whose wedge product make up a 2-form are not unique.

Lemma 3.2 is really what we want to see: It is the proof that any 2-form is a product of 1-forms. The lemma itself states that if you start with two 2-forms that are each a product of 1-forms, then their sum is a 2-form that is a product of 1-forms. That is, any 2-form that can be written as a sum of products of 1-forms is itself a product of 1-forms.


Note: There is a typo in Bachman's proof (both versions of the book).

Where it says:

"In this case it must be that \alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2, and hence \alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_1\wedge\beta_1",

it should say:

"In this case it must be that \alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2, and hence \alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_2\wedge\beta_2".

Bachman goes from the last statement in black above to concluding that "any 2-form is the sum of products of 1-forms."


To explicitly show this, start with the most general 2-form:
\omega=c_1dx \wedge dy+c_2dz \wedge dy+c_3dz \wedge dx

Now use the distributive property:
\omega=(c_1dx+c_2dz) \wedge dy +c_3dz \wedge dx

And there we have it.

This leads us to the following conclusion:

David Bachman said:
Every 2-form on T_p\mathbb{R}^3 projects pairs of vectors onto some plane and returns the area of the resulting parallelogram, scaled by some constant.

There is thus no longer any need for the "Caution!" on page 55.

edit: That is, there is no need for it when we are dealing with 2-forms on T_p\mathbb{R}^3. See post #82.


Exercise 3.16

Now that we know that every 2-form on T_p\mathbb{R}^3 is a product of 1-forms, this is a piece of cake. Just look at the following 2-form:

\omega(V_1,V_2)=\alpha\wedge\beta(V_1,V_2)
\omega(V_1,V_2)=\alpha(V_1)\beta(V_2)-\alpha(V_2)\beta(V_1)
\omega (V_1,V_2)=(\langle\alpha\rangle\cdot V_1)(\langle\beta\rangle\cdot V_2)-(\langle\alpha\rangle\cdot V_2)(\langle\beta\rangle\cdot V_1)

This 2-form vanishes identically if either V_1 or V_2 (doesn't matter which) is orthogonal to both \langle\alpha\rangle and \langle\beta\rangle.
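A numeric spot check of that claim (python with numpy; the particular \alpha and \beta are arbitrary examples of my own):

```python
import numpy as np

alpha = np.array([1.0, 2.0, 0.0])   # coefficient vector <alpha>
beta = np.array([0.0, 1.0, 3.0])    # coefficient vector <beta>

def omega(V1, V2):
    # (alpha ^ beta)(V1, V2), written out as in the exercise
    return (alpha @ V1)*(beta @ V2) - (alpha @ V2)*(beta @ V1)

V = np.cross(alpha, beta)           # orthogonal to both <alpha> and <beta>
W = np.array([1.0, 1.0, 1.0])       # any other vector
assert abs(omega(V, W)) < 1e-12 and abs(omega(W, V)) < 1e-12
```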

Exercise 3.17

Incorrect answer edited out:

The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions.

Counterexample:

Take the following 2-form on T_p\mathbb{R}^4:

\omega=dx \wedge dy + dz \wedge dy +dz \wedge dw + 2dx \wedge dw.

Try to factor by grouping:

(dx+dz) \wedge dy + (dz+2dx) \wedge dw,

and note that we can go no further. It turns out that no grouping of terms will result in a successful factorization.
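One can also check this mechanically: for a factorable 2-form \omega=\alpha\wedge\beta we always have \omega\wedge\omega=\alpha\wedge\beta\wedge\alpha\wedge\beta=0, so computing \omega\wedge\omega and getting something nonzero certifies non-factorability. Here is a python sketch (the dict encoding and helper names are my own, not Bachman's):

```python
from itertools import permutations

def sgn(p):
    # sign of a permutation, by counting inversions
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def ww_top(w):
    # coefficient (up to a positive normalization) of dx^dy^dz^dw in w ^ w,
    # where w is a dict {(i, j): coeff} with i < j over indices 0..3
    def f(i, j):
        return w.get((i, j), 0) if i < j else -w.get((j, i), 0)
    return sum(sgn(p) * f(p[0], p[1]) * f(p[2], p[3])
               for p in permutations(range(4))
               if p[0] < p[1] and p[2] < p[3])

# omega = dx^dy + dz^dy + dz^dw + 2 dx^dw, with x,y,z,w = 0,1,2,3
# (dz^dy is stored as -1 times dy^dz)
omega = {(0, 1): 1, (1, 2): -1, (2, 3): 1, (0, 3): 2}
assert ww_top(omega) != 0          # nonzero, so omega is not factorable
assert ww_top({(0, 1): 1}) == 0    # dx^dy is factorable: w ^ w = 0
```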


Exercise 3.18

Maybe I'm just being dense, but I do not see how to solve this one. The hint right after the exercise doesn't help. If l is in the plane spanned by V_1 and V_2, then of course the vectors that are perpendicular to V_1 and V_2 will be perpendicular to l.

Anyone want to jump in here?
 
Last edited:
  • #82
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

And even now I do not have time to give proper responses, but here are a quick few...

Mathwonk, please read a bit more carefully if you are going to take on a role as "proofreader":

To your comment about integrating with evenly spaced intervals: there is a discussion of this on page 41.
To your comment on saying that we want an "oriented area": I couldn't use the word "oriented" because at this point students have no idea what an orientation is. In fact, at that point in the text I do not even assume that the student realizes that the determinant can give you a negative answer (although I am sure this seems obvious to you). I do, however, emphasize this by intentionally computing an example where the answer is negative, and then pointing out that we really don't want "area", but rather a "signed area". It's all there.

Next... there is a rather long discussion here about factoring 2-forms into products. Mathwonk has a "proof" in one of his earlier posts, but this was a little bit of wasted effort, since this is the content of Section 4 of Chapter 3.

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms. The next edition of the book will have a new section about 2-forms in four dimensions, with particular interest in those that can NOT be factored.

Hopefully more tomorrow... I should give you more of a hint on Exercise 3.18.

Dave.
 
  • #83
Dave I am sorry to see my corrections are not welcomed by you. They are accurate however.

As an expert I probably should not have gotten involved since everyone is having fun, and my corrections are invisible to the average student. But you did ask for comments in your introduction. When you do that, you should expect to get some.

I think this book is nice for a first dip into the topic, but I have a concern that a person learning the subject from this source will be left with a certain amount of confusion, due to the imprecise discussion, and non standard language, which will cause problems in trying to discuss the material with more knowledgeable people.

If followed up with Spivak however it should be fine. And any source that gets people involved and allows them friendly access to a topic is good. This is the strength of Dave's book. I don't know who they sent it to for reviewing, but Dave, I think you might get some comments like mine from other reviewers.
 
Last edited:
  • #84
for tom and students: you can argue that diff forms are useful in the 10 or more dimensions physicists apparently use now for space time, and they are also easily adaptable to the complex structures used there and in string theory (Riemann surfaces, complex "Calabi Yau" manifolds).
 
  • #85
Bachman said:
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

Glad to see you back. :smile:

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms.

Whoops. I've put in an edit that corrects my remark about the Caution. I've also changed my answer to Exercise 3.17, which was evidently wrong.
 
  • #86
another comment about selling differential forms to your audience. Dave has a nice application in chapter 7 showing that their use reduces Maxwell's equations from 4 to 2.
 
  • #87
The line l = \{\vec{r}t + \vec{p} : t \in \mathbb{R}\} for some \vec{r},\ \vec{p} \in T_p\mathbb{R}^3. Suppose \vec{v},\ \vec{w} \in T_p\mathbb{R}^3 such that l \subseteq Span(\{\vec{v},\ \vec{w}\}). Then the set \{\vec{p},\ \vec{v},\ \vec{w}\} is linearly dependent, hence:

\det (\vec{p}\ \ \vec{v}\ \ \vec{w}) = 0

Define \omega such that:

\omega (\vec{x},\ \vec{y}) = \det (\vec{p}\ \ \vec{x}\ \ \vec{y}) \ \forall \vec{x}, \vec{y} \in T_p\mathbb{R}^3

You can easily check, knowing the properties of determinants, that \omega is an alternating bilinear functional, and hence a 2-form. If you want, you can express it as a linear combination of dx \wedge dy,\ dy \wedge dz,\ dx \wedge dz, and it shouldn't be hard, but probably not necessary.

EDIT: actually, to answer the question as given, perhaps you will want to write \omega in terms of those wedge products, and determine \vec{p} from there. Then, to find l you just need to choose any line that passes through \vec{p}. Any two vectors spanning a plane that contains that line will have to span \vec{p}, hence those three vectors must be linearly dependent, hence their determinant will be zero, and since \omega depends only on \vec{p} and not the choice of \vec{r}, you're done.
 
Last edited:
  • #88
hi
~Thanks everyone on the feedback to my question. It’s so reassuring to know when you’ve got the right idea!
~For exercise 3.17 (post 81), Tom says:

“The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions”.

~I can see why this is the case in exercise 3.16, but it seems like there’s a bit more to this than a simple question of factorability. I’m probably way off, but I was thinking that it has more to do with some general property of 3-space that makes it inherently different than say, 4-space or any other space for that matter. Then again, I suppose that not being able to write a 2-form as a product of 1-forms in R^4 could very well be a general property of higher dimensions. Unfortunately these are ideas that I don’t know very much about yet, so please excuse if my questions are a bit silly or obvious.
 
  • #89
For applications, I know of many places in physics where differential forms are useful, even to an undergrad.

First and foremost, the often quoted derivation of Maxwell's equations in a very neat and elegant form.

The fundamental equations of thermodynamics as well are often cast in differential form notation. You instantly get out several relations that are painful to get in other notation.

Finally general relativity/String theory etc

One thing to note though... I really didn't see at the time the advantage of using differential forms in those situations; I often would ask 'why not just use tensor calculus instead?' And I was right in the sense that you will get very compact notation (if you suppress the irritating indices) just as quickly as with differential forms, without the added hassle of learning the new, somewhat unintuitive language.

I was wrong though about the deeper meaning of these objects. It wasn't until I learned of Yang-Mills theory, and principal bundles as applied to general relativity, that the full power of differential forms became instantly apparent.

Modern Physics fundamentally wants to be written down in coordinate invariant, read diffeomorphism invariant language. It doesn't necessarily want to know about metrics, and things like that. Indeed there are situations where such concepts stop you from seeing the global topology of the problem, and it is in that sense that differential forms immediately become obvious as THE god given physical language.
 
  • #90
melinda,

pardon me if my posts have been unhelpful. I will try to explain why a 2 form is never a product of one forms in any dimension higher than 3.

Let V be the space of one forms on R^n, and let V^V be the space of 2 forms. Then since V has coordinates dx1,...dxn, and has dimension n, V^V has coordinates dxi^dxj with i < j, so has dimension = binomial coefficient "n choose 2".


Now, just look at the product map, VxV-->V^V, taking a pair of 1 forms f,g to their product f^g. The question is when is this map surjective?

Without going into it too much, I claim that this map cannot raise dimension, much as a linear map cannot, so since the domain has dimension 2n and the range has dimension (1/2)(n)(n-1), it follows that as soon as the second number outruns the first, the map cannot be surjective.

In particular for n > 5, the map cannot be surjective, but actually this occurs sooner than that, I claim for n > 3.

The key is to look at the dimension of the fibers of the map. Here there is a principle almost exactly the same as the "rank - dimension" theorem in linear algebra.

i.e. if we can discover the dimension of the set of domain points which map to a given point in the target of the map, then the dimension of the actual image of the map cannot be more than the amount by which the dimension of the domain exceeds this "fiber" dimension. i.e. if (f,g) is a general point of the domain VxV, then the dimension of the set of 2 forms which are products in V^V cannot be more than 2n - dim of the set of pairs of one forms having the same product f^g as (f,g).


Now it helps to think geometrically, i.e. of f and g as vectors and f^g as the parallelogram they span. Then two other vectors have the same product if and only if they span a parallelogram in the same plane as f and g, with the same area.

So there is a 2 dimensional family of vectors in that plane, hence a 4 dimensional family of pairs of vectors spanning that plane, but if we choose only those having the right area, there is only a three dimensional family.

Thus the inverse image of a general product f^g is 3 dimensional in VxV. Thus the dimension of the image of the product map, in V^V, i.e. the dimension of the family of factorable 2 forms, equals 2n - 3. we see this is less than (1/2)(n)(n-1) as soon as n > 3.

so for n > 3, it never again happens that all 2 forms are a product of two 1 forms.
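the dimension count is easy to tabulate (a small python sketch of the comparison above):

```python
from math import comb

for n in range(3, 8):
    image_dim = 2*n - 3     # factorable 2-forms: dim(VxV) minus the 3-dim fiber
    total_dim = comb(n, 2)  # dimension of the space of all 2-forms on R^n
    # the factorable 2-forms fill up the whole space only when n = 3
    assert (image_dim >= total_dim) == (n == 3)
```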

does that help?

if you look back at some of my free flying posts earlier you will probably see that these ideas are there, but not explained well.
 
