A Geometric Approach to Differential Forms by David Bachman

  • #51
a look at the generalized stokes theorem on page 104 of dave's book, and his nice table on page 110, contrasting the different-looking classical versions of the theorems with the completely unified-looking versions on the right side of the table, should convince most people this is the way to go.

for me personally, this lovely synthesis made me feel i could relax about these theorems after merely understanding green's theorem for a rectangle!
 
  • #52
I have a question regarding how you would find the constituent "wedged" one-forms making up a 2-form if you happened to know the 2-form. For instance, if you knew a 2-form to be α = 3dx^dy + 2dy^dz + 4dx^dz, how would you find the two one-forms, and would they even be unique? I tried a painful method of writing out the one-forms with yet-to-be-determined constants, plugging in the basis vectors <1,0,0>, <0,1,0> and <0,0,1>, and trying to match the constants with the "scaling factors" for each term in α. I'm sure there is a smarter way to do this, but how?
 
  • #53
Gza,

The way you describe is exactly how we did it. There are exercises that ask us to do precisely this a little later, and I'll post my solutions probably tomorrow, after I've fully digested mathwonk's posts (*burp*).

BUT, this method is not all that painful. I took \alpha=a_1dx+a_2dy+a_3dz and \beta=b_1dx+b_2dy+b_3dz. Note that we have 6 constants, but only 3 constraining equations. That means that you get to pick 3 of the constants, so no, the choices are not unique. Once you pick 3, finding the other 3 is easy.

My standard way of doing it is to let a_1=a_2=b_1=1.
 
  • #54
start with the last and shortest one.
 
  • #55
this is not much since my best intentions yesterday foundered on lack of energy, end of week binge, and ignorance. but so what, here goes: maybe someone else will fix it.

the idea of grassmann was apparently to create an algebra of geometric objects. i.e. he wanted to generalize the algebra of one dimensional vectors to an algebraic technique allowing him also to add 2 dimensional objects, 3 dimensional objects, etc.

so think about a vector spanning a line. there are many vectors spanning the same line, and they differ only by a scale factor, the quotient of their lengths.

to generalize we let a pair of vectors represent a parallelogram spanning a plane. two different parallelograms in that plane span the same plane and differ only by a scale factor, the quotient of their areas. so we equate two parallelograms if they span the same plane and have the same area.

given two vectors, their product is the parallelogram they span, up to this equivalence relation. hence dependent vectors have product zero.

now how do we add two such parallelograms? can we do this so as to get another parallelogram? well we could try, in three space, in the following way: scale them [within their equivalence classes] so that they share a side of the same length, and thus fit together as two faces of a parallelepiped. then they span a unique parallelepiped, which thus has a third face, which might be their sum.

alternatively we could use the dot product on three space to replace each parallelogram by a single vector as follows: given an ordered parallelogram, find a vector orthogonal to the parallelogram, take it to have length equal to the area of the parallelogram, and orient it so as to obey the right hand rule, i.e. form the "cross product" of the two sides of the parallelogram.

then in the reverse order, a vector also determines a plane orthogonal to it, as well as an equivalence class of parallelograms in that plane all having area equal to the length of the given vector.

then to add two parallelograms we could simply add their cross product vectors and then pass back to the associated parallelogram. hopefully this gives the same answer as the first method, but i have not thought about why it should, except that life is often simple, and i am an optimist.

in particular, this seems to show that the sum of two parallelograms is always another parallelogram, up to equivalence, in three space.
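
here is a minimal numeric sketch of that second recipe in python (my own throwaway code, not from dave's book; the helper name rep is made up):

import numpy as np

# represent an (oriented) parallelogram in 3 space, up to equivalence,
# by the cross product of its two edge vectors
def rep(u, v):
    return np.cross(u, v)

P = rep([1, 0, 0], [0, 1, 0])          # unit square in the xy-plane -> (0,0,1)
Q = rep([0, 1, 0], [0, 0, 1])          # unit square in the yz-plane -> (1,0,0)

S = P + Q                              # add the representing vectors: (1,0,1)

# S again represents a parallelogram: pick any two independent vectors
# orthogonal to S that span the right area, e.g.
u, v = np.array([1, 0, -1]), np.array([0, 1, 0])
assert np.allclose(rep(u, v), S)       # same plane, same area, same class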

but what happens in 4 space? when we try to add two parallelograms, the planes they span need not meet, and so there is no parallelepiped, and no unique orthogonal vector. the orthogonal complement now is another plane, which gives no advantage over the original object. so we must simply add the parallelograms in a more formal way.

i.e. now we allow formal sums of two or more parallelograms, and call them something like 2-chains, or whatever. again we have an equivalence relation, and we get a vector space of these things, but no longer is it true that every object in this space is a simple parallelogram, i.e. the product of just two vectors.

but anyway we do get an algebra of objects generated by parallelepipeds of various dimensions.


now there is a dual construction, which starts not from vectors, but from covectors, i.e. from linear functionals, like x and y, and so on, the coordinate functions on R^n.

we can also form products of these guys, and that is what is happening in constructing bilinear functions or tensors of form x(tensor)y.

but we are doing the alternating theory, so we have things like x^y, or dx^dy.

we add them formally. and instead of being parallelograms, they are objects that assign generalized "areas" to parallelograms,...


ok i pooped out. somebody else will have to take over. please do not begin too negatively. this is obviously still in the right brain [?] fantasizing stage.

oddly though this already suggests that dually all 2 forms on 3 space are actually writable as a product of two one forms.

is that obvious? i.e. the space of 2 forms on R^3 has dimension 3, and is spanned by dx^dy, dx^dz, dy^dz.

the space of one forms is also 3 dimensional spanned by dx,dy,dz. so if we multiply we get a bilinear map oneforms x oneforms-->twoforms. surjective?

it seems to be. i.e. given two one forms mapping to a 2 form, think geometrically of two vectors mapping to a plane. in that plane there is a two dimensional family of ways to choose a vector, hence a 4 dimensional family of ways to choose 2 vectors spanning it. but if they must span a parallelogram with fixed area, that cuts the family down to three dimensions. so the map above has three dimensional fibers; since its domain is 6 dimensional, its image is 6 - 3 = 3 dimensional, which is the whole space of 2 forms. so it is onto?


oh yes, i was trying to elaborate on the natural algebraic construction of wedge products in all dimensions, and note how special the cross product phenomenon is to three dimensions. yipes, time flies when you are having fun, and i have missed the first NCAA game!

no wonder no one is responding. it's like the day italy was in the world cup and i drove through the deserted streets of rome completely unhindered by traffic.
 
  • #56
mathwonk said:
no wonder no one is responding.

Doesn't mean we're not reading. I especially liked posts #48 and #51; thanks a lot for that. As I said, my advisees are doing 2 presentations: one in 2 weeks for the faculty at our school, and another in 4 weeks for the Conference. I am thinking that the first one will be more of a pitch to sell differential forms to the faculty, while at the Conference the ladies plan on talking about the generalized Stokes' theorem.
 
  • #57
Chapter 3: Forms

Section 3: Multiplying 1-Forms (cont'd)

Picking up from page 24 in the arXiv version of the book (edit: that's page 54 in the newer version), right after Exercise 3.10, we come to the geometric interpretation of the action of \omega\wedge\nu on a pair of vectors V_1 and V_2. I think that the argument leading up to the interpretation is clear enough not to need expanding on, so I'm just going to present the conclusion. If any of the students reading this thread have any questions about it, go ahead and ask.

David Bachman said:
Evaluating \omega\wedge\nu on the pair of vectors (V_1,V_2) gives the area of the parallelogram spanned by V_1 and V_2 projected onto the plane containing the vectors <\omega> and <\nu>, and multiplied by the area of the parallelogram spanned by <\omega> and <\nu>.

Then there is the word of caution: This interpretation is only valid if our 2-form is the product of 1-forms. We will later see that this is always the case, at least for 2-forms on T_p\mathbb{R}^3.


Exercise 3.11
This exercise seems to be flawed. On the LHS we have a 2-form acting on a pair of vectors. This quantity is a real number. But on the RHS we have a 2-form that is not acting on anything. This quantity is, well, a 2-form! Correct me if I'm wrong, but in order for that equation to be correct, either the wedge product on the LHS should not be acting on those two vectors, or the 2-form on the RHS should be acting on the same pair of vectors. That's how I interpret the problem.

So in essence what we are asked to show is that any 2-form on T_p\mathbb{R}^3 can be expressed as the product of 1-forms. Here goes.

Let \omega=w_1dx+w_2dy+w_3dz and \nu=v_1dx+v_2dy+v_3dz be 1-forms. Now consider the wedge product \omega\wedge\nu.

\omega\wedge\nu=(w_1v_2-w_2v_1)dx \wedge dy+(w_1v_3-w_3v_1)dx \wedge dz+(w_2v_3-w_3v_2)dy \wedge dz

Now set our expression for \omega\wedge\nu equal to c_1dx \wedge dy+c_2dx \wedge dz+c_3 dy \wedge dz. Equating components yields:

w_1v_2-w_2v_1=c_1
w_1v_3-w_3v_1=c_2
w_2v_3-w_3v_2=c_3

Since there are 3 equations and 6 constants, we can choose 3 of the constants. (Note: letting all the components of either of the 1-forms equal 1 will not work, and letting any of the components equal 0 will not work.) A convenient choice is w_1=w_2=v_1=1. This yields:

\omega=dx+dy+\frac{c_2-c_3}{c_1}dz
\nu=dx+(c_1+1)dy+\left(c_2+\frac{c_2-c_3}{c_1}\right)dz

This choice for 3 of the constants is only valid if c_1 \neq 0. Other choices can be found that are valid for c_2 \neq 0 and c_3 \neq 0, so that all 2-forms with either one or no constants equal to zero are covered. If two constants are equal to zero then it is trivially easy to express the 2-form as a product of 1-forms.
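
As a sanity check, here is a minimal Python sketch (my own throwaway script, not from the book; the helper name wedge is made up) that computes the wedge coefficients of two 1-forms and verifies the factorization above for sample values of c_1, c_2, c_3:

import numpy as np

# coefficients of w ^ v, where w = w1 dx + w2 dy + w3 dz and
# v = v1 dx + v2 dy + v3 dz, in the order (dx^dy, dx^dz, dy^dz)
def wedge(w, v):
    return np.array([w[0]*v[1] - w[1]*v[0],
                     w[0]*v[2] - w[2]*v[0],
                     w[1]*v[2] - w[2]*v[1]])

c1, c2, c3 = 2.0, 5.0, 1.0                      # any 2-form with c1 != 0
w = np.array([1, 1, (c2 - c3)/c1])              # omega from the solution above
v = np.array([1, c1 + 1, c2 + (c2 - c3)/c1])    # nu from the solution above
assert np.allclose(wedge(w, v), [c1, c2, c3])   # the factorization checks out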

This exercise, together with the discussion before it, is supposed to lead us to the following conclusion.

David Bachman said:
Every 2-form projects the parallelogram spanned by V_1 and V_2 onto each of the (2-dimensional) coordinate planes, computes the resulting (signed) areas, multiplies each by some constant, and adds the results.

Note now that there is no need for the word of caution that was supplied after the first geometric interpretation. Both may now be applied to "every 2-form" because every 2-form on T_p\mathbb{R}^3 is expressible as a product of 1-forms.

Exercise 3.12
\omega\wedge\nu(<1,2,3>,<-1,4,-2>)=\left|\begin{array}{cc}\omega(<1,2,3>)&\nu(<1,2,3>)\\\omega(<-1,4,-2>)&\nu(<-1,4,-2>)\end{array}\right|

\omega\wedge\nu(<1,2,3>,<-1,4,-2>)=\left|\begin{array}{cc}8&3\\21&-8\end{array}\right|
\omega\wedge\nu(<1,2,3>,<-1,4,-2>)=-127

Exercise 3.13
Given two 1-forms, we are asked to find the 2-form that is their wedge product.

\omega\wedge\nu=-11dx \wedge dy+4dy \wedge dz+3dx \wedge dz

On comparison with the general formula above, it is obvious that c_1=-11, c_2=3, and c_3=4.

Exercise 3.14
Now we are asked to go the other way: given four 2-forms, we are asked to express them as products of 1-forms.

(1) Use the skew-symmetry property.
3dx\wedge dy+dy\wedge dx=3dx\wedge dy-dx\wedge dy=2dx \wedge dy

(2) Use the distributive property.
dx\wedge dy+dx\wedge dz=dx\wedge (dy+dz)

(3) Use the results from (1) and (2).
3dx\wedge dy+dy\wedge dx +dx\wedge dz=2dx\wedge dy+dx\wedge dz

Now use the distributive property again.
3dx\wedge dy+dy\wedge dx+dx\wedge dz=dx\wedge (2dy+dz)

(4) This one's more involved. Using the method I described above in Exercise 3.11 (defining two 1-forms \omega and \nu and letting w_1=w_2=v_1=1), I get:

\omega=dx+dy+7dz
\nu=dx+2dy+11dz

Note that this pair of 1-forms is not unique.

That's it for now. I really don't have any questions on this section, so I will post my notes and questions on Sections 3.4 and 3.5 once any discussion on this section dies down.

Till next time...
 
  • #58
Re: "This interpretation is only valid if our 2-form is the product of 1-forms. We will later see that this is always the case, at least for 2-forms on R^3."

I think I essentially proved this in post 55.
 
  • #59
we say a k form is "decomposable" if it is a product of one forms. then geometrically this is sort of dual to a k chain being simply a k plane.

now recall that 2 planes in three space also form a linear space namely the dual space, at least projectively. i.e. the dual of projective 2 space is also a projective 2 space.

the same holds in all dimensions, i.e. the dual of projective 3 space is also a projective 3 space, but the elements are made up of hyperplanes in projective 3 space, i.e. projective planes, hence spanned by triples of "points" in projective space, i.e. by triples of vectors in the underlying vector space.

so the space of projective lines in projective 3 space corresponds to the decomposable 2 forms on a 4 dimensional vector space like R^4. these do not form a vector space, but a quadric cone in a 6 dimensional vector space.

i.e. when we take sums of 2 planes, or 2 forms, in 4 space, we get a linear space, but not all elements are simple products, for the geometric reason that projective lines in projective three space do not form a linear space.

so the fact that any 2 form is a product of one forms in 3 space is equivalent to the fact that the dual of a projective plane is also a projective plane.

in projective 3 space however, note there are various different kinds of pairs of lines, some meet, some do not.

however the algebraic constructions above do allow us to assign coordinates to lines in projective 3 space. i.e. take any plane in a 4 diml vector space, and it will be the zeroes of a pair of linear functions f,g. then represent that plane by f^g.

when f^g is written as a linear combination of dx, dy, dz, dw, we get coordinates for our plane in R^4, i.e. our line in P^3.

since the wedge product map R^4 x R^4 still has 3 dimensional fibers as above, the image this time, of decomposable 2 forms, is 5 dimensional, while the space of all 2 forms is 6 dimensional. so we get a hypersurface in a 6 dimensional vector space, or in a 5 dimensional projective space. this hypersurface is called the grassmannian variety of all "lines in P^3".

hey this geometric approach to forms is pretty cool. I am learning something after all. thanks dave! this always seems to happen to me when a subject is being well explained, even if i think i already know it.

i never really grasped this algebra - geometry link before for k planes in R^n.
 
  • #60
building on the previous discussions, i believe that one can characterize those 2 forms on four space, i.e. those linear combinations of products of dx0, dx1, dx2, dx3, which are products of two one forms, by the equation p01p23 - p02p13+ p03p12 = 0, where pij is the coefficient of dxi^dxj.

here is a little trick to see that in 4 dimensions not all 2 forms are products of one forms. since the product of a one form with itself is zero, if W is a 2 form which is a product of one forms, then W^W = 0. But note that [dx^dy + dz^dw] ^ [dx^dy + dz^dw] = 2 dx^dy^dz^dw is not zero. so this 2 form is not a product of one forms.


since there is only one condition on a 2 form in 4 space to be a product of one forms, this must be it.

Note that if we wedge p01dx0^dx1 + p02 dx0^dx2 + p03 dx0^dx3 + p12 dx1^dx2 + p13 dx1^dx3 + p23 dx2^dx3 with itself, we get

2(p01p23 - p02p13 + p03p12) dx0^dx1^dx2^dx3, which must be zero if this 2 form is going to be a product of one forms.
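
a minimal python sketch of that computation (my own representation, not from the book: a 2 form on R^4 stored as a dict of coefficients p_ij with i < j):

import numpy as np

# wedge two 2 forms on R^4; the result is a multiple of dx0^dx1^dx2^dx3
def wedge_22(p, q):
    coeff = 0.0
    for (i, j), a in p.items():
        for (k, l), b in q.items():
            idx = (i, j, k, l)
            if len(set(idx)) == 4:              # any repeated dx wedges to zero
                perm = np.argsort(idx)          # permutation that sorts the indices
                sign = round(np.linalg.det(np.eye(4)[perm]))
                coeff += sign * a * b
    return coeff

w = {(0, 1): 1.0, (2, 3): 1.0}                  # dx0^dx1 + dx2^dx3
print(wedge_22(w, w))                           # 2.0, nonzero: not a product
p = lambda i, j: w.get((i, j), 0.0)
assert wedge_22(w, w) == 2*(p(0,1)*p(2,3) - p(0,2)*p(1,3) + p(0,3)*p(1,2))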


I just learned something else new! I had it hard wired into my brain that any form wedged with itself is zero, but this is false! it does hold for one forms, and i was just mostly in the habit of wedging one forms together, and thinking about them exclusively.

in three space of course, if you wedge two 2 forms together you get a 4 form, and those are all zero on 3 space, so the same confusion can arise. another reason is that in 3 space all 2 forms are products of one forms, so again they wedge to zero with themselves, again for special reasons that do not generalize.
 
  • #61
Gza, the discussion reveals that the one forms having a given 2 form as product are certainly not unique. for example if N and M are any one forms at all

N^M = N^(N+M) = N^(cN+M) = (cM+N)^M, for any constant c.

geometrically, if we think about representing a plane and an oriented area by an oriented parallelogram, any parallelogram in that plane having oriented area equal to that number would do. so any two independent vectors in that plane, oriented properly and spanning that same area, would have the same wedge product.

thus even if you fix one vector and its length, even then the other vector is not fixed. only its projection orthogonal to the first vector is fixed. even if you also fix the length of the other vector, there still seem usually to be 2 choices for it.

the abstract discussion i gave mentioned the map from pairs of one forms to their wedge product, and stated that the "fibers" of this map are three dimensional. in particular the fibers are not single points as they would be if the two one forms were determined by their product.

i.e. thinking again geometrically, given a plane, how many ways are there to pick two independent vectors in it? each vector can be chosen in a 2 dimensional family of ways, hence the pair can be chosen in a 4 dimensional family of ways.

even if we fix their orientation and the area of the parallelogram they span, we only lose one parameter, so it brings down the fiber dimension from 4 to three.
 
  • #62
it would seem that geometrically, to factor a 2 form, you would just find two independent vectors both perpendicular to the vector of coefficients of the 2 form. there are lots of those. then adjust the lengths by a scalar.

this is just solving a single homogeneous linear equation in three unknowns.
 
  • #63
it would seem that geometrically, to factor a 2 form, you would just find two independent vectors both perpendicular to the vector of coefficients of the 2 form.

So on what geometric basis would I be able to consider the coefficients of a two form as a vector? I'm having a hard time visualizing it.
 
  • #64
to paraphrase some of my physicist friends on here,
if it has three numbers it's a vector, right?

so use the zen approach, if it looks like a vector and quacks like a vector, treat it as a vector.


see the full solution in the next post.
 
  • #65
well here is how i thought of it: i figured the wedge product of two one forms has components which were 2 by 2 determinants, so they were essentially the same as the components of the cross product (in 3 space). that means the vector with those components should be perpendicular to the plane spanned by the original two vectors, assuming they were independent.

now to prove that one would use the lagrange expansion of a determinant, but i can't do that in my head, so i just assumed it worked. then let's see, oh yes, that means that we are essentially given the cross product of the two vectors and are looking for the two vectors, which means we want two vectors perpendicular to the given vector, spanning a parallelogram with area given by the length of the given vector. so i guess to be honest it was all inspired by the cross product interpretation which we are not using, i.e. eschewing.

but so what, if it helps, use it. just a suggestion, as it seemed easier than what i was hearing as a solution method. of course if it fails miserably i have egg on my face.
let's try one:


the product of oh, dx and dy is dx^dy, which has coefficients (1,0,0).

so the perp is (0,1,0) and (0,0,1). i.e. dy and dz, oops. i don't give up though but must understand what is going on.

AHA! the right way to assign coordinates is no doubt to call dx^dy dual to dz, hence to (0,0,1), so in fact the coefficients of dx^dy should be (0,0,1), hence perpendicular to (1,0,0) and (0,1,0), i.e. to dx and dy.

but of course this is cheating to make it work out. you need to give a decent explanation that works in general, but i still believe it.

why don't you give this a little shot? see if it works for a little more complicated one like dx^dy + dx^dz. this has coords (0,0,1) + (0,1,0) = (0,1,1), or maybe (0,0,1) - (0,1,0) = (0,-1,1).

anyway, the perp is either (1,0,0) and (0,1,1), or (1,0,0) and (0,1,-1).

try both. multiply (1,0,0) = dx times (0,1,1) = dy + dz and get hey! dx^dy + dx^dz!

it works!

what do you think, was i just lucky? got to go now, marge is getting implants on the simpsons.
 
  • #66
ok: a dydz + b dzdx + c dxdy = (a,b,c)

has orthocomplement spanned by (-b,a,0), (0,-c,b), if b is not zero.

hence we try [-bdx + ady]^[-cdy+bdz]

= bcdxdy -b^2 dxdz + abdydz = bcdxdy + b^2 dzdx + abdydz

= b [a dydz + b dzdx + c dxdy].

so just divide one of the one forms by b.

if b=0, use the basis (0,1,0), (-c,0,a), for vectors orthogonal to (a,b,c).

then we get dy^(-cdx + adz) = ady^dz + c dx^dy.

what about this Gza?
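
for anyone who wants to run this recipe, here is a minimal python sketch (the helper names are mine; note that np.cross gives exactly the (dy^dz, dz^dx, dx^dy) coefficients of the wedge of two 1 forms):

import numpy as np

# factor a dy^dz + b dz^dx + c dx^dy, given as (a, b, c), into two
# 1 forms (as coefficient triples) whose wedge product it is
def factor(abc):
    a, b, c = abc
    if b != 0:
        return np.array([-b, a, 0.0]) / b, np.array([0.0, -c, b])
    return np.array([0.0, 1.0, 0.0]), np.array([-c, 0.0, a])   # the b = 0 case

form = np.array([4.0, 3.0, -11.0])          # 4 dy^dz + 3 dz^dx - 11 dx^dy
w, v = factor(form)
assert np.allclose(np.cross(w, v), form)    # w ^ v recovers the 2 form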
 
  • #67
hi everyone!
I’m one of the students who will be presenting this topic at a conference. It’s taken me a while to sign on, but now that I’ve jumped in I’ll hopefully be able to add to the discussion regularly.
~First, to answer Tom’s question on post #37… Why don’t we take the absolute value of the signed area? The property of superposition gives us the equality below.
\omega\wedge\nu(V_1+V_2,V_3)=\omega\wedge\nu(V_1,V_3)+\omega\wedge\nu(V_2,V_3)
If the absolute value is taken for all three wedge products, it’s pretty easy to see that the right side of the equation will not always equal the left side. This can be checked by plugging some vectors in, computing and taking note of the result. That’s what I did.
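
Here is a minimal sketch of that check (my own throwaway script, with my own choice of ω = dx, ν = dy and of the vectors):

import numpy as np

def wedge_eval(V, W):                  # omega = dx, nu = dy, so this is
    return V[0]*W[1] - W[0]*V[1]       # dx^dy(V, W), a signed area

V1, V2, V3 = np.array([1, 0, 0]), np.array([-2, 1, 0]), np.array([0, 1, 0])
lhs = abs(wedge_eval(V1 + V2, V3))
rhs = abs(wedge_eval(V1, V3)) + abs(wedge_eval(V2, V3))
print(lhs, rhs)                        # 1 vs 3: absolute values break superposition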
~Also, on pg. 26 of the arXiv version of the book Bachman says, “To give a 2-form in 4-dimensional Euclidean space we need to specify 6 numbers.” A question similar to this statement is asked a little further ahead in the reading. My question is, can this be treated as a combination? 4 choose 2 = 6. I also noticed that to give a 3-form in 3-space (3 choose 3 = 1), you need to specify one number.
 
  • #68
*melinda* said:
~Also, on pg. 26 of the arXiv version of the book Bachman says, “To give a 2-form in 4-dimensional Euclidean space we need to specify 6 numbers.” A question similar to this statement is asked a little further ahead in the reading. My question is, can this be treated as a combination? 4 choose 2 = 6. I also noticed that to give a 3-form in 3-space (3 choose 3 = 1), you need to specify one number.
That's the right track. To prove the general form, first note that the set of k-forms on an n-dimensional vector space is a vector space. Then find a basis for the set of k-forms (note that a one-form wedged with itself is zero, and reordering a wedge product simply changes the sign, in the same manner as the even or oddness of a permutation). Since the size of the basis determines the dimension of the vector space, which determines how many numbers are necessary to specify an element of the space, counting the size of the basis (which you will find is a combination) will tell you how many numbers you need.
 
  • #69
*melinda* : a basis for the k forms in n variables would be all k fold wedge products of the n one forms dx1,...,dxn. but note that these products are zero unless all k of the forms multiplied are distinct. so there are exactly n choose k ways to find k distinct ones.
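
a minimal python sketch that lists this basis (my own helper; coordinates labeled x1,...,xn):

from itertools import combinations
from math import comb

def basis_k_forms(n, k):
    # k fold wedge products of dx1,...,dxn with strictly increasing indices:
    # repeats wedge to zero, and reordering only changes the sign
    return ['^'.join(f'dx{i}' for i in idx)
            for idx in combinations(range(1, n + 1), k)]

print(basis_k_forms(4, 2))             # 6 basis 2 forms on R^4
assert len(basis_k_forms(4, 2)) == comb(4, 2)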
 
  • #70
i know you guys skipped chapter 1, but i have learned so much reading your posts i decided to try again reading the book. here are some tiny remarks that may be of help to dave in proofreading:

on page 15, ex 1.3 should say the area is |ad-bc|, if area is meant to be non negative. or else it should probably be called "oriented area".

in the next line the definition of determinant is also incorrect since it is defined as an area instead of an oriented area.

such obvious mistakes seem to be purposeful, but they do not make logical sense to me. i.e. it is incompatible in one line to say an area is a number that could have two values, one of them negative, and in the next line to define a determinant as an area, which can only be non negative??


what did you want to achieve here dave? are you approaching the subject from the point of view that a few small inaccuracies will not matter to beginners?

if so, then please ignore all this. but if you want a proofreader, here goes.


same comment top of page 16, that "volume" formula is not always non negative.


line 2 of section on multiple variables: "these spaces a very familiar" should be "these spaces are very familiar"

a point of philosophy: it might be safer to say that picturing R^20 is very difficult for most of us. certainly some people think they can do it. in the other direction, the picture at the top of the elementary school blackboard does not allow one to picture R^1 either because it is not long enough.

but these are matters of taste. still why discourage anyone who wants to try to picture R^20? indeed you have already sketched how to do it in the introduction, as a product of 10 copies of R^2.

for example imagine 20 parallel copies of R, erected at the points 1,2...,20 on the x axis. and then imagine choosing one point on each line, perhaps connected by a zigzag line. that's a general point of R^20.

I admit these depictions do not allow one to "see" all of R^20, but no more does a line segment allow one to see all of R^1.

but this kind of thing could go on forever.


bottom page 18: it is not quite true to say we define the integral via evenly spaced subdivisions. indeed the integral is only defined for functions for which the type of spacing does not affect the outcome of the limit. if you want to say you are defining the integral of continuous functions this would be ok. but it is not too hard to define a non (riemann) integrable function such that the limit described will exist and not be equal to some other limits with other spacings.

same comment for volume integrals on page 19.

perhaps the word "compute" would be more appropriate than "define", since we do compute integrals this way when they exist.

ok on page 22 there is a caveat that technical issues are being ignored (like continuity). such caveats should probably be placed at the beginning of the discussion. even simpler is just to say at the beginning that we are discussing the case for continuous functions, since then everything said is actually true.

at the top of page 33, a parameterization for a surface is required to be one to one and onto, but in example 1.12 page 36, the parametrization given there of the unit disc is not one to one. perhaps it would be better to allow parametrizations which fail to be one to one on the boundary of the domain? (as in this standard example.)

the reader will face the same challenge in trying to solve ex 1.26 by a one to one parametrization.
 
  • #71
chap 2: page 39, same incorrect statement about defining integrals via evenly spaced subdivisions occurs again.

problems with the definition of parametrization raise their head again on page 40. on page 23 a parametrization of a curve was defined as a one to one, onto, differentiable map from (all of) R^1 to the curve (although most examples so far have not been defined on all of R^1, so it might have been better to say from an interval in R^1).

more significant, the first example given on page 40 is not differentiable at the end points of its domain. so again it might be well to say that the parametrization, although continuous on the whole interval, may fail to be differentiable at the endpoints.

this is the beginning of another potential situation where one probably is intending to integrate this derivative even though it is not continuous or even bounded on its whole domain. this problem is often overlooked in calculus courses. i.e. when the "antiderivative" is well defined and continuous on a closed interval, it is often not noticed that the derivative is not actually riemann integrable by virtue of being unbounded.

indeed as i predicted, exercise 2.1 page 43 asks the reader to integrate the non - integrable function, derivative of (1-a^2)^(1/2), from -1 to 1.

this function is not defined at the endpoints of that interval and is also unbounded on that interval. interestingly enough it has a bounded continuous "antiderivative" which enables one to "integrate" it, but not by the definition given in the section, since the limit of those riemann sums does not in fact exist.

the polar parametrization of the hemisphere, on page 44, is again not one to one. and again the third coordinate function of the parametrization phi is not differentiable wrt r at r=1, hence the integral written is again not defined by a limit of riemann sums.

it seems worthwhile to face head on this problem about many natural parametrizations often not being one to one, and point out that for questions of integration, there is no harm in non one to one ness occurring on sets of lower dimension, since the integral over those sets will be zero.

Stieltjes is misspelled on page 44, both the t and one e are omitted.

the language at the bottom of page 45 describes regions parametrized by R^1, R^2, and R^n, although what is apparently meant, and what is done, is to parametrize by rectangular blocks in those spaces.
 
  • #72
what about this Gza?

I understand now, thank you. :approve:
 
  • #73
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

it is rather interesting that this derivative function can be "integrated" in some sense (i.e. as an improper integral) in spite of being unbounded.

does anyone agree that the polar parametrizations given are not actually one to one? and does anyone see why that does not matter?

(but that it does call for a new definition of parametrization?)
 
  • #74
My apologies for not having read the text so I am sure its already been pointed out.

One endless source of confusion for me when I was learning this stuff is the notion of axial and polar vectors. At first glance it's easy and obvious, but then terminology starts getting confused, particularly when you learn clifford algebras and some people's pet concepts to reinvent notation via geometric algebra.

People get in endless debates about how to properly distinguish these different types of *things*. e.g. what constitutes active and passive transformations of the system, what is a parity change, do we take Grassmann or Clifford notation, blah blah blah.

Unfortunately if you want a cutesy picture of what's going on, a la MTW (forms now look like piercing planes), some of this stuff becomes relevant, or else you quickly end up with ambiguities.

Most of the confusion goes away when you get into some of the more abstract and general bundle theory, but then the audience quickly starts getting pushed into late undergrad/early grad material and the point is lost.
 
  • #75
mathwonk said:
does anyone appreciate my comment about sqrt(1-x^2) not being differentiable at
x= 1?

this is the familiar fact that the tangent line to a circle at the equator is vertical.

Yes, but we're not there yet. As I said in the beginning, I want to march through the book sequentially. The purpose of this thread is twofold:

1. To help my advisees for their presentation.
2. To see if a book such as Bachman's could be used as a follow-up course to what is normally called "Calculus III".

It doesn't really help to achieve my primary goal (#1) if we jump all over the place. My advisees are in Chapter 4 (on differentiation), and we are using this thread to nail down any loose ends that we left along the way in our effort to keep moving ahead.

I'll be posting the last of my Chapter 2 notes tonight and tomorrow. Once the discussion has died down I'll start posting notes on Chapter 3, which is about integration. I'll also try to pick up the pace.

Thanks mathwonk and everyone else for your useful comments, especially post #65 by mathwonk.

edit to add:

By the way mathwonk, my copy of Spivak's Calculus on Manifolds is in. Great book, thanks for the tip! One of my advisees (*melinda*) picked up Differential Forms with Applications to the Physical Sciences by Flanders. What do you think of it?
 
  • #76
i like flanders.


i do not understand your remark about the sequential treatment, and not being up to my comment yet.

if you are talking about marching sequentially through bachman, i started on page 1, and those comments are about chapters 1 and 2. how can someone be in chapter 4 and not be sequentially up to chapters 1 and 2 yet?


are you talking about chapter 4 of some other book?

it seems to me you guys are still way ahead of me.
 
  • #77
flanders had a little introductory article in a little MAA book, maybe Studies in Global Geometry and Analysis (ed. S.S. Chern, ISBN 0883851040), that first got me unafraid of differential forms, by just showing how to calculate with them.

i had been frightened off of them by an abstract introduction in college. i had only learned their axioms and flanders showed just how easy it is to multiply them. i liked the little article better than his more detailed books.
 
  • #78
mathwonk said:
i do not understanbd your reamrk about the sequential treatment, and not being up to my comment yet.

Never mind my comment. I was looking at the arXiv version of Bachman's book, in which page 39 is in Chapter 3 (the chapter on integrating 1-forms).

To prevent further confusion, I am now going to burn the arXiv version and exclusively use the version from his website. I'll re-do the chapter and section numbers in my notes.
 
  • #79
that's right, there were two versions of the book!
 
  • #80
Flanders is sort of the de facto reference book on differential forms for US math majors. You get some treatment in Spivak, and also some good stuff in various physics books, but it's not quite the same.

A modern book some people liked a lot was Darling's book on Differential forms.

Regardless, I am a little bit wary of placing too much weight on intuitive pictures of the whole affair. Differential forms to me are much more of a formal language that makes calculations tremendously simpler (not to mention the fact that they are much more natural geometric objects, what with being coordinate independent and hence perfect for subjects like cohomology and algebraic geometry). Notation changes from area to area, and I suspect having too rigid a 'geometric' intuition might actually hurt in some cases.

I guess I am just a little bit disenchanted with some of the earlier attempts to 'picture' what's happening, like the piercing plane idea from MTW (Bachman's text has a good section where they explain why that whole thing doesn't quite work out well in generality).
 
  • #81
Chapter 3: Forms

Section 4: 2-forms on T_p\mathbb{R}^3

Here is the next set of notes. As always comments, corrections, and questions are warmly invited.


Exercise 3.15

Try as you might, you will not be able to find a 2-form (edit: on T_p\mathbb{R}^3) which is not the product of 1-forms. We in this thread have already argued as much, and indeed in the ensuing text Bachman explains that he has just asked you to do something that is impossible. Nice guy, that Dave. :-p

This brings us to the two Lemmas of this section. I feel that the details of the proofs are straightforward enough to omit, so I am just going to talk about what the lemmas say. If any of our students has any questions about the proofs, go right ahead and ask.

Lemma 3.1 reinforces the idea that was first brought up by Gza: The 1-forms whose wedge product make up a 2-form are not unique.

Lemma 3.2 is really what we want to see: It is the proof that any 2-form is a product of 1-forms. The lemma itself states that if you start with two 2-forms that are the product of 1-forms, then their sum is a 2-form that is the product of 1-forms. That is, any 2-form that can be written as the sum of the product of 1-forms, is itself a product of 1-forms.


Note: There is a typo in Bachman's proof (both versions of the book).

Where it says:

"In this case it must be that \alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2, and hence \alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_1\wedge\beta_1",

it should say:

"In this case it must be that \alpha_1\wedge\beta_1=C\alpha_2\wedge\beta_2, and hence \alpha_1\wedge\beta_1+\alpha_2\wedge\beta_2=(1+C)\alpha_2\wedge\beta_2".

Bachman goes from the last statement in black above to concluding that "any 2-form is the sum of products of 1-forms."


To explicitly show this, start with the most general 2-form:
\omega=c_1dx \wedge dy+c_2dz \wedge dy+c_3dz \wedge dx

Now use the distributive property:
\omega=(c_1dx+c_2dz) \wedge dy +c_3dz \wedge dx

And there we have it.

This leads us to the following conclusion:

David Bachman said:
Every 2-form on T_p\mathbb{R}^3 projects pairs of vectors onto some plane and returns the area of the resulting parallelogram, scaled by some constant.

There is thus no longer any need for the "Caution!" on page 55.

edit: That is, there is no need for it when we are dealing with 2-forms on T_p\mathbb{R}^3. See post #82.


Exercise 3.16

Now that we know that every 2-form on T_p\mathbb{R}^3 is a product of 1-forms, this is a piece of cake. Just look at the following 2-form:

\omega(V_1,V_2)=\alpha\wedge\beta(V_1,V_2)
\omega(V_1,V_2)=\alpha(V_1)\beta(V_2)-\alpha(V_2)\beta(V_1)
\omega(V_1,V_2)=(<\alpha>\cdot V_1)(<\beta>\cdot V_2)-(<\alpha>\cdot V_2)(<\beta>\cdot V_1)

This 2-form vanishes identically if either V_1 or V_2 (doesn't matter which) is orthogonal to both &lt;\alpha&gt; and &lt;\beta&gt;.
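
A quick numeric illustration (my own sketch, with an arbitrary choice of <\alpha> and <\beta>):

import numpy as np

a = np.array([1.0, 2.0, 0.0])          # <alpha>
b = np.array([0.0, 1.0, 3.0])          # <beta>

def omega(V1, V2):                     # alpha^beta evaluated on (V1, V2)
    return a.dot(V1)*b.dot(V2) - a.dot(V2)*b.dot(V1)

V1 = np.cross(a, b)                    # orthogonal to both <alpha> and <beta>
for V2 in np.eye(3):                   # paired with any second vector at all
    assert omega(V1, V2) == 0 and omega(V2, V1) == 0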

Exercise 3.17

Incorrect answer edited out:

The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions.

Counterexample:

Take the following 2-form on T_p\mathbb{R}^4:

\omega=dx \wedge dy + dz \wedge dy +dz \wedge dw + 2dx \wedge dw.

Try to factor by grouping:

(dx+dz) \wedge dy + (dz+2dx) \wedge dw,

and note that we can go no further. It turns out that no grouping of terms will result in a successful factorization.


Exercise 3.18

Maybe I'm just being dense, but I do not see how to solve this one. The hint right after the exercise doesn't help. If l is in the plane spanned by V_1 and V_2, then of course the vectors that are perpendicular to V_1 and V_2 will be perpendicular to l.

Anyone want to jump in here?
 
  • #82
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

And even now I do not have time to give proper responses, but here are a quick few...

Mathwonk, please read a bit more carefully if you are going to take on a role as "proofreader":

To your comment about integrating with evenly spaced intervals: there is a discussion of this on page 41.
To your comment on saying that we want an "oriented area": I couldn't use the word "oriented" because at this point students have no idea what an orientation is. In fact, at that point in the text I do not even assume that the student realizes that the determinant can give you a negative answer (although I am sure this seems obvious to you). I do, however, emphasize this by intentionally computing an example where the answer is negative, and then pointing out that we really don't want "area", but rather a "signed area". It's all there.

Next... there is a rather long discussion here about factoring 2-forms into products. Mathwonk has a "proof" in one of his earlier posts, but this was a little bit of wasted effort, since this is the content of Section 4 of Chapter 3.

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms. The next edition of the book will have a new section about 2-forms in four dimensions, with particular interest on those that can NOT be factored.

Hopefully more tomorrow... I should give you more of a hint on Exercise 3.18.

Dave.
 
  • #83
Dave I am sorry to see my corrections are not welcomed by you. They are accurate however.

As an expert I probably should not have gotten involved, since everyone is having fun and my corrections are invisible to the average student. But you did ask for comments in your introduction. When you do that, you should expect to get some.

I think this book is nice for a first dip into the topic, but I have a concern that a person learning the subject from this source will be left with a certain amount of confusion, due to the imprecise discussion, and non standard language, which will cause problems in trying to discuss the material with more knowledgeable people.

If followed up with Spivak however it should be fine. And any source that gets people involved and allows them friendly access to a topic is good. This is the strength of Dave's book. I don't know who they sent it to for reviewing, but Dave, I think you might get some comments like mine from other reviewers.
 
  • #84
for tom and students: you can argue that diff forms are useful in the 10 or more dimensions physicists apparently use now for space time, and they are also easily adaptable to the complex structures used there and in string theory (Riemann surfaces, complex "Calabi Yau" manifolds).
 
  • #85
Bachman said:
Hi all,

Sorry I have been silent for a few days. Busy, busy busy...

Glad to see you back. :smile:

Also, Tom... be careful! The CAUTION on page 55 is ALWAYS something to look out for. The point of Section 4 of Chap 3 is that dimension 3 is special, because there you can always factor 2-forms.

Whoops. I've put in an edit that corrects my remark about the Caution. I've also changed my answer to Exercise 3.17, which was evidently wrong.
 
  • #86
another comment about selling differential forms to your audience. Dave has a nice application in chapter 7 showing that their use reduces Maxwell's equations from 4 to 2.
 
  • #87
The line l = \{\vec{r}t + \vec{p} : t \in \mathbb{R}\} for some \vec{r},\ \vec{p} \in T_p\mathbb{R}^3. Suppose \vec{v},\ \vec{w} \in T_p\mathbb{R}^3 are such that l \subseteq Span(\{\vec{v},\ \vec{w}\}). Then the set \{\vec{p},\ \vec{v},\ \vec{w}\} is linearly dependent, hence:

\det (\vec{p}\ \ \vec{v}\ \ \vec{w}) = 0

Define \omega such that:

\omega (\vec{x},\ \vec{y}) = \det (\vec{p}\ \ \vec{x}\ \ \vec{y}) \ \forall \vec{x}, \vec{y} \in T_p\mathbb{R}^3

You can easily check, knowing the properties of determinants, that \omega is an alternating bilinear functional, and hence a 2-form. If you want, you can express it as a linear combination of dx \wedge dy,\ dy \wedge dz,\ dx \wedge dz; it shouldn't be hard, but it's probably not necessary.

EDIT: actually, to answer the question as given, perhaps you will want to write \omega in terms of those wedge products, and determine \vec{p} from there. Then, to find l you just need to choose any line that passes through \vec{p}. Any two vectors spanning a plane that contains that line will have \vec{p} in their span, hence those three vectors must be linearly dependent, hence their determinant will be zero; and since \omega depends only on \vec{p} and not on the choice of \vec{r}, you're done.
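
A minimal numeric sketch of this construction (my own script; the particular \vec{p} is arbitrary):

import numpy as np

p = np.array([1.0, 2.0, 3.0])          # the fixed vector defining omega

def omega(x, y):
    # omega(x, y) = det(p x y): alternating and bilinear, hence a 2-form;
    # expanding along the first column gives p1 dy^dz - p2 dx^dz + p3 dx^dy
    return np.linalg.det(np.column_stack([p, x, y]))

v = np.array([1.0, 1.0, 1.0])
w = p - v                              # so p = v + w lies in Span({v, w})
print(round(omega(v, w), 12))          # 0.0: omega kills planes containing p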
 
  • #88
hi
~Thanks everyone on the feedback to my question. It’s so reassuring to know when you’ve got the right idea!
~For exercise 3.17 (post 81), Tom says:

“The above argument does not extend to higher dimensions because not all 2-forms are factorable in higher dimensions”.

~I can see why this is the case in exercise 3.16, but it seems like there's a bit more to this than a simple question of factorability. I'm probably way off, but I was thinking that it has more to do with some general property of 3-space that makes it inherently different from, say, 4-space or any other space for that matter. Then again, I suppose that not being able to write every 2-form as a product of 1-forms in R^4 could very well be a general property of higher dimensions. Unfortunately these are ideas that I don't know very much about yet, so please excuse me if my questions are a bit silly or obvious.
 
  • #89
For applications, I know of many places in physics where differential forms are useful, even to an undergrad.

First and foremost, the often quoted derivation of Maxwell's equations in a very neat and elegant form.

The fundamental equations of thermodynamics as well are often cast in differential form notation. You instantly get out several relations that are painful to get in other notation.

Finally, general relativity, string theory, etc.

One thing to note though... I really didn't see at the time the advantage of using differential forms in those situations; I often would ask 'why not just use tensor calculus instead?' And I was right in the sense that you will get very compact notation (if you suppress the irritating indices) just as quickly as with differential forms, without the added hassle of learning the new, somewhat unintuitive language.

I was wrong though about the deeper meaning of these objects. It wasn't until I learned of Yang-Mills theory, and principal bundles as applied to general relativity, that the full power of differential forms became instantly apparent.

Modern Physics fundamentally wants to be written down in coordinate invariant, read diffeomorphism invariant language. It doesn't necessarily want to know about metrics, and things like that. Indeed there are situations where such concepts stop you from seeing the global topology of the problem, and it is in that sense that differential forms immediately become obvious as THE god given physical language.
 
  • #90
melinda,

pardon me if my posts have been unhelpful. I will try to explain why not every 2 form is a product of one forms in any dimension higher than 3.

Let V be the space of one forms on R^n, and let V^V be the space of 2 forms. Then since V has coordinates dx1,...,dxn, and has dimension n, V^V has coordinates dxi^dxj with i < j, so has dimension = binomial coefficient "n choose 2".


Now, just look at the product map, VxV-->V^V, taking a pair of 1 forms f,g to their product f^g. The question is when is this map surjective?

Without going into it too much, I claim that this map cannot raise dimension, much as a linear map cannot, so since the domain has dimension 2n and the range has dimension (1/2)(n)(n-1), it follows that as soon as the second number outruns the first, the map cannot be surjective.

In particular for n > 5, the map cannot be surjective, but actually this occurs sooner than that, I claim for n > 3.

The key is to look at the dimension of the fibers of the map. Here there is a principle almost exactly the same as the "rank - dimension" theorem in linear algebra.

i.e. if we can discover the dimension of the set of domain points which map to a given point in the target of the map, then the dimension of the actual image of the map cannot be more than the amount by which the dimension of the domain exceeds this "fiber" dimension. i.e. if (f,g) is a general point of the domain VxV, then the dimension of the set of 2 forms which are products in V^V cannot be more than 2n minus the dimension of the set of pairs of one forms having the same product f^g as f and g.


Now it helps to think geometrically, i.e. of f and g as vectors and f^g as the parallelogram they span. Then two other vectors have the same product if and only if they span a parallelogram in the same plane as f and g, also having the same area.

So there is a 2 dimensional family of vectors in that plane, hence a 4 dimensional family of pairs of vectors in that plane spanning it, but if we choose only those having the right area, there is only a three dimensional family.

Thus the inverse image of a general product f^g is 3 dimensional in VxV. Thus the dimension of the image of the product map in V^V, i.e. the dimension of the family of factorable 2 forms, equals 2n - 3. we see this is less than (1/2)(n)(n-1) as soon as n > 3.

so for n > 3, it never again happens that all 2 forms are a product of two 1 forms.
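
a two line python check of that count (my own sketch):

from math import comb

for n in range(3, 8):                  # factorable 2 forms vs all 2 forms
    print(n, 2*n - 3, comb(n, 2))      # equal at n = 3, strictly less after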

does that help?

if you look back at some of my free flying posts earlier you will probably see that these ideas are there, but not explained well.
 
  • #91
An apology and some comments:

I apologize for making critical comments no one was interested in and which stemmed from not reading Dave's introduction well enough. He said there he was not interested in "getting it right", whereas "get it right" is my middle name (it was even chosen as the tagline under my photograph in high school, by the yearbook editor, now I know why!) I have always felt this way, even as an undergraduate, but apparently not everyone does. My happiest early moments in college came when the fog of imprecise high school explanations was rolled away by precise definitions and proofs.

On the first day of my beginning calculus class the teacher handed out axioms for the reals and we used them to prove everything. In the subsequent course the teacher began with a precise definition of the tangent space to the uncoordinatized euclidean plane as the vector space of translations on the plane.

E.g. if you are given a translation, and a point p, then you get a tangent vector based at p by letting p be the foot of the vector, then applying the translation to the point p and taking that result as the head of the vector.

This provides the isomorphism between a single vector space and all the spaces Tp(R^n) at once. Then we proceeded to do differential calculus in banach space, and derivatives were defined as (continuous) linear maps from the get go.

So I never experienced the traditional undergraduate calculus environment until trying to teach it. As a result I do not struggle with the basic concepts in this subject, but do struggle to understand attempts to "simplify" them.

I am interested in this material and will attempt to stifle the molecular imbalances which are provoked involuntarily by imprecise statements used as a technique for selling a subject to beginners.

One such point, concerning the use of "variables" will appear below, in answer to a question of hurkyl.

to post #6 from Tom: why does Dave derive the basis of Tp(R^2) the way he does, instead of merely using the fact that that space is isomorphic to R^2, hence has as basis the basis of R^2?

I think the point is that space is not equal to R^2, but only isomorphic to R^2. Hence the basis for that space should be obtained from the basis of R^2 via a given isomorphism.

Now the isomorphism from Tp(R^2) to R^2 proceeds by taking velocity vectors of curves through p, so Dave has chosen two natural curves through p, the horizontal line and the vertical line, and he has computed their velocity vectors, showing them to be <1,0> and <0,1>.

So we get not just two basis vectors for the space but we get a connection between those vectors and curves in the plane P. (Of course we have not proved directly they are a basis of Tp(P), but that is true of the velocity vectors to any two "transverse curves through p").

So if you believe it is natural to prefer those two curves through p, then you have specified a natural isomorphism of Tp(R^2) with R^2. In any case the construction shows how the formal algebraic vector <1,0> corresponds to something geometric associated to the plane and the point p.


In post #18, Hurkyl asks whether dx and dy are being used as vectors or as covectors? This is the key point that puzzled and confused me for so long. Dave has consciously chosen to extend the traditional confusion of x and y as "variables" on R^2 to an analogous confusion of dx and dy as variables on Tp(R^2).

The confusion is that the same letters (x,y) are used traditionally both as functions from R^2 to R, and as the VALUES of those functions, as in "let (x,y) be an arbitrary point of R^2."

In this sense (x,y) can mean either a pair of coordinate functions, or a point of R^2. Similarly, (dx,dy) can mean either a pair of linear functions on Tp(R^2) i.e. a pair of covectors, or as a pair of numbers in R^2, hence a tangent vector in Tp(R^2) via its isomorphism with R^2 described above.

So Dave is finessing the existence of covectors entirely.

This sort of thing is apparently successful in the standard undergraduate environment or Dave would not be using it, but it is not standard practice with mathematicians who tend to take one point of view on the use of a notation, and here it is that x and y are functions, and dx and dy are their differentials.

There is precedent for this type of attempt to popularize differentials as variables and hence render them useful earlier in college. M.E. Munroe tried it in his book, Calculus, in 1970 from Saunders publishers, but it quickly went out of print. Fortunately I think Dave's book is much more user friendly than Munroe's.

(Munroe intended his discussion as calculus I, not calculus III.)

In post #43, Gza asked what a k cycle is, after I said a k form was an animal that gobbles up k cycles and spits out numbers.

I was thinking of a k form as an integrand as Dave does in his introduction, and hence of a k cycle as the domain of integration. Hence it is some kind of k dimensional object over which one can integrate.


Now the simplest version would be a k dimensional parallelepiped, and that is spanned by k vectors in n space, exactly as Gza surmised. A more general such object would be a formal algebraic sum, or linear combination, of such things, and a non linear version would be a piece of k dimensional surface, or a sum or lin. comb. of such.


now to integrate a k form over a k diml surface. one could parametrize the surface via a map from a rectangular block, and then approximate the map by the linear map of that block using the derivative of the parameter map.

Then the k form would see the approximating parametrized parallelepiped and spit out a number approximating the integral.

By subdividing the block we get a family of smaller approximating parallelepipeds and our k form spits out numbers on these that add up to a better approximation to the integral, etc...


so k cycles of the form : "sum of parallelepipeds" do approximate non linear k cycles for the purposes of integration over them by k forms.

The whole exercise people are going through trying to "picture" differential forms, may be grounded in the denial of their nature as covectors rather than vectors. I.e. one seldom tries to picture functions on a space geometrically, except perhaps as graphs.

On the other hand I have several times used the technique of discussing parallelepipeds instead of forms. That is because the construction of 2 forms from 1 forms is a formal one, that of taking an alternating product. the same, or analogous, construction that sends pairs of one forms to 2 forms also sends pairs of tangent vectors to (equivalence classes of) parallelograms.

I.e. there is a concept of taking an alternating product. if applied to 1 forms it yields 2 forms, if applied to vectors it yields "alternating 2 - vectors".

In post #81, Tom asked for the proof of the lemma 3.2 that all 2 forms in R^3 are products of 1 forms. I have explicitly proved this in the most concrete way in post #66 by simply writing down the factors in the general case.

In another post in answer to a question of Gza I have written down more than one solution to every factorization, proving the factors are not unique.

Also in post #81, Tom asked about solving ex 3.18. What about something like this?
Intuitively, a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Hence any plane containing the normal vector to that plane will project to a line in that plane. hence any parallelogram lying in such a plane will project to have area zero in that plane.

e.g. dx^dy should vanish on any pair of vectors spanning a plane containing the z axis.

Notice that when brainstorming I allow myself the luxury of being imprecise! there are two sides to the brain, the creative side and the critical side. one should not live exclusively on either one.
 
  • #92
Melinda,

You can also see that in dimensions bigger than three you will not always be able to factor 2-forms, just by writing one down. If there are at least four coordinates then consider the following 2-form:

\omega=dx_1 \wedge dx_2 + dx_3 \wedge dx_4

Now, if this 2-form could be written as \alpha \wedge \beta then

\omega \wedge \omega=\alpha \wedge \beta \wedge \alpha \wedge \beta=0

But when you compute \omega \wedge \omega for the above 2-form you do not get zero. The conclusion is that this 2-form can never be factored.

Dave.
 
  • #93
Dear all,

I have been going through my book again with my current students and we have found a few errors. I'll post them:

Exercise 1.6 (4) The coefficient should be \frac{2}{5} instead of \frac{5}{2}
Exercise 3.21 ... then V_{\omega}=\langle F_x, F_y, F_z \rangle.
Exercise 4.8 The form should be 2z\ dx \wedge dy + y\ dy \wedge dz -x\ dx \wedge dz. The answer should be \frac{1}{6}.
Exercise 4.13 Answer should be \frac{32}{3}

If anyone finds any more please let me know!

Dave.
 
  • #94
Dave's example recalls post #60:

"here is a little trick to see that in 4 dimensions not all 2 forms are products of one forms. since the product of a one form with itself is zero, if W is a 2 form which is a product of one forms, then W^W = 0. But note that [dx^dy + dz^dw] ^ [dx^dy + dz^dw] = 2 dx^dy^dz^dw is not zero. so this 2 form is not a product of one forms."

Indeed if n = 4, we have argued above that the set of products has codimension one in the space of 2 forms, and it seems the condition w^w = 0 is then necessary and sufficient for a 2 form to be a product.
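Here is a small general-purpose sketch (Python, my own) that multiplies forms given as dictionaries from sorted index tuples to coefficients; it reproduces the computation in the quote:

Code:
# a k form is a dict {sorted index tuple: coefficient}
def wedge(f, g):
    out = {}
    for I, a in f.items():
        for J, b in g.items():
            idx = I + J
            if len(set(idx)) < len(idx):
                continue  # a repeated dx_i kills the term
            order = tuple(sorted(idx))
            # sign = parity of the permutation that sorts idx
            perm = sorted(range(len(idx)), key=lambda i: idx[i])
            sign = 1
            for i in range(len(perm)):
                for j in range(i + 1, len(perm)):
                    if perm[i] > perm[j]:
                        sign = -sign
            out[order] = out.get(order, 0) + sign*a*b
    return out

om = {(1, 2): 1, (3, 4): 1}   # dx^dy + dz^dw, coordinates numbered 1..4
print(wedge(om, om))          # {(1, 2, 3, 4): 2} -- not zero, so not a product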
 
  • #95
Here is another use of the constructions Dave is explaining to us: analyzing the structure of lines in 3 space.

For example what if we consider the old problem of Schubert: how many lines in (projective) 3 space meet 4 general fixed lines? This has been tackled valiantly in another thread by several people, some successfully.

I claim this can be solved using the algebraic tools we are learning.

I am going to try to wing this along the lines of the discussion so far, so Dave, feel free to jump in and correct, clarify, or augment my misstatements.

We have been seeing that a 2 form assigns a number to a pair of vectors. Since every 2 form is a linear combination of basic ones, i.e. of products of one forms, it suffices to know how those behave, and we have been seeing that e.g. the 2 form dx^dy seems to project our two vectors into the x, y plane and then take the oriented area of the parallelogram they span.

Now just as in linear algebra when we "mod out" a domain vector space by the kernel of a linear transformation, to make the new domain space into a space on which the transformation is one to one, we could also try to mod out the space of pairs of vectors, by equating two pairs to which every 2 form assigns the same number.

Now it suffices as remarked above, to equate two pairs of vectors if the basic two forms dxi^dxj all agree on them. From the discussion so far, it seems this means we should equate two pairs of vectors if the parallelogram they span has the same oriented area when projected into every pair of coordinate planes.

Now I claim this just means the two pairs of vectors span the same plane, and the parallelograms they span have the same area, and the same orientation. So this essentially contains the data of the plane they span, plus a real scalar.

We denote the equivalence class of all pairs equivalent in this way to v,w by the symbol v^w. Then we have taken alternating products of vectors, just as before we took alternating products of one forms, i.e. of functionals.

i.e. the same formal rules hold; v^w = - w^v, v^(u+w) = v^u + v^w, v^aw = av^w, etc...

But we again cannot add these except formally, so we consider also formal linear combinations of such guys: v^w + u^z, etc...

Now just as in 4 space and higher, not all 2 forms were products of one forms, so also not all 2-vectors are simple ones of form v^w.

E.g. in 4 space the same condition must hold as remarked above for 2 forms, i.e. that a 2 vector T is a simple product if and only if T^T = 0.

Now we have constructed a linear space of alternating 2 vectors T, in which those that satisfy the property T^T =0 correspond to products v^w. For vectors in R^4, this linear space has dimension "4 choose 2" = 6. So the space of all 2 vectors in R^4 is identifiable with R^6.
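As a quick sanity check on that identification (a numpy sketch of my own): the six coefficients of a random simple 2 vector v^w always satisfy the single quadratic relation that expresses T^T = 0.

Code:
import numpy as np

rng = np.random.default_rng(0)
v, w = rng.standard_normal(4), rng.standard_normal(4)

# six coordinates of v^w, indexed by pairs (0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
p = [v[i]*w[j] - v[j]*w[i] for i in range(4) for j in range(i + 1, 4)]

# T^T = 0 reduces to one quadratic equation in these coordinates
print(np.isclose(p[0]*p[5] - p[1]*p[4] + p[2]*p[3], 0))  # True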

I claim this has the following interpretation:

by definition projective 3 space consists of lines through the origin of R^4, so 2 planes in R^4 correspond to lines in projective 3 space.

Now each 2 plane in R^4 is represented by a simple 2 vector, i.e. a product v^w, in fact by a "line" of such 2 vectors, since v^w and av^w represent the same plane, just accompanied by a different oriented area.

so 2 planes in R^4 are represented by the lines through the points of R^6 representing simple 2 vectors. Moreover this subset of R^6 is defined by the quadratic equation T^T = 0, hence 2 planes in R^4 are represented by a quadratic cone of lines in R^6.

If we consider the projective space of lines through the origin of R^6, we have the space of all lines in projective three space, represented as a quadric hypersurface of dimension 4 in the projective 5 space defined by all 2 vectors in R^4.


Now in projective 3 space we ask what it means algebraically for two lines to meet? i.e. when do the two pairs of simple 2 vectors u^v, and z^w represent planes in R^4 that have a line in common? Well it means that u^v^z^w = 0, (since this happens when the 4 diml parallelepiped they span has volume zero in 4 space).

Consequently when u^v is fixed, this is a linear equation in z^w, hence the lines in projective 3 space meeting a given line, correspond to a linear hyperplane section in 5 space, on the quadric of all lines. hence the lines meeting 4 given lines in 3 space, would be the intersection of our quadric of all lines, with 4 linear hyperplanes.

But 4 linear hyperplanes in P^5 meet in a line, so the lines in 3 space meeting 4 given lines, correspond to the points of P^5 where a quadric hypersurface meets a line, i.e. exactly 2 points.


You might ask an audience, consisting of skeptics as to the value of alternating form methods, if they can solve that little geometry problem as neatly using classical vector analysis.
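In fact the whole argument can be carried out numerically. Here is a sketch (Python/numpy, my own illustration of the computation just described): pick 4 random lines, impose the 4 linear conditions, intersect the resulting line in P^5 with the quadric, and recover the (generically) 2 solutions.

Code:
import numpy as np

rng = np.random.default_rng(1)

def plucker(v, w):
    # coordinates of v^w: pairs (0,1),(0,2),(0,3),(1,2),(1,3),(2,3)
    return np.array([v[i]*w[j] - v[j]*w[i]
                     for i in range(4) for j in range(i + 1, 4)])

def quad(T):
    # T^T up to a factor of 2: zero exactly when T is simple
    return T[0]*T[5] - T[1]*T[4] + T[2]*T[3]

def meet(P, Q):
    # polarization of quad: zero exactly when the two lines intersect
    return (P[0]*Q[5] + P[5]*Q[0] - P[1]*Q[4] - P[4]*Q[1]
            + P[2]*Q[3] + P[3]*Q[2])

# four general lines in projective 3 space, each spanned by two random vectors
lines = [plucker(rng.standard_normal(4), rng.standard_normal(4)) for _ in range(4)]

# "meets L" is one linear condition on T; four lines give a 4 x 6 system
M = np.array([[L[5], -L[4], L[3], L[2], -L[1], L[0]] for L in lines])
A, B = np.linalg.svd(M)[2][4:]   # null space basis: a line {s*A + t*B} in P^5

# intersect that line with the quadric:
# quad(s*A + B) = s^2 quad(A) + s meet(A, B) + quad(B) = 0
for s in np.roots([quad(A), meet(A, B), quad(B)]):
    T = s*A + B   # one of the (generically) two lines meeting all four
    print(np.allclose([quad(T)] + [meet(T, L) for L in lines], 0))

Over the reals the two roots may come out complex, which is exactly the point raised in the next post.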
 
  • #96
I guess to make sure that quadric meets that line in 2 points, I should have chosen an algebraically closed field, like the complex numbers, to work over, instead of the reals?
 
  • #97
It finally dawned on me what Dave is doing and why he calls this a geometric approach to differential forms.

given a vector space V, the space of linear functions on V is the dual space V*. But if we define a dot product on V we get an isomorphism between V* and V. I.e. then a linear functional f on V is represented by a vector w in V. The value of f at a vector v is given by projecting v onto the line spanned by w and multiplying the length of the projection by (plus or minus) the length of w.


Now suppose we jack that up by one degree to bilinear functions. I.e. given a dot product, a bilinear alternating functional which is an alternating product of two linear forms, is represented by a parallelogram, such that the action of the function on a pair of vectors becomes projection of those two vectors into the plane of the parallelogram, taking (plus or minus) the area of the image parallelogram, and multiplying by the area of the given parallelogram.

So this approach has more structure than strictly necessary for the concept of differential forms, but allows them to be represented as (a sum of) projection operators.
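That description can be tested directly (a numpy sketch of my own, under the standard dot product; the representing parallelogram is spanned by p and q):

Code:
import numpy as np

rng = np.random.default_rng(2)
p, q, v, w = (rng.standard_normal(3) for _ in range(4))

# the 2 form (p-flat)^(q-flat) acting on the pair (v, w)
val = np.dot(p, v)*np.dot(q, w) - np.dot(p, w)*np.dot(q, v)

# compare: project v, w into the plane of p, q; multiply the oriented
# area of the projected parallelogram by the oriented area of (p, q)
Q, _ = np.linalg.qr(np.column_stack([p, q]))   # orthonormal basis of the plane
pv, pw, pp, pq = (Q.T @ x for x in (v, w, p, q))
area_proj = pv[0]*pw[1] - pv[1]*pw[0]
area_pq = pp[0]*pq[1] - pp[1]*pq[0]
print(np.isclose(val, area_proj*area_pq))      # True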

nice.

In that spirit, one is led to pose geometric versions of the factorization questions asked above in R^3:
1) given two parallelograms in R^3, find one parallelogram such that the bilinear function defined by the sum of those two given parallelograms equals the one given by projection on the one resultant parallelogram.
2) give a geometric proof in R^4 that the bilinear function defined by the sum of dx^dy and dz^dw cannot be equal to the function defined by projection on the plane spanned by any one parallelogram.

In short the use of a dot product allows one to have an isomorphism between the space V*^V* of 2 forms and the more geometric object V^V I defined above, which I said was analogous to the space of 2 forms.

Dave, you have obviously put a lot of thought into this.
 
  • #98
another in my wildly popular series of commentaries:

towards a more fully geometric view of differential forms.

It seems after reading Dave's section on how [to and] not to picture differential one forms, he does not advocate there the use of the dot product. I.e. he suggests picturing the kernel planes of the field of one forms in R^3, a viewpoint which depends only on the nature of a one form as a functional, having a kernel, and not on representing it by a vector via a dot product.

I.e. I would have thought one might use the picture of the one form df, for example as a "gradient field", i.e. as a vector field whose vector at each point is given by the coordinate vector of partial derivatives of f in the chosen coordinate directions.

I guess Dave is not doing this because he wants to give us a coordinate invariant view of forms although coordinates seem to be used in the projected area point of view introduced earlier.

If we pursue this, we have an interpretation of every one form as a vector, namely the vector perpendicular to the kernel hyperplane, with length equal to the value of the functional on a unit normal vector.

Then we truly have a geometric object representing a one form (although it depends on a dot product), and moreover we can add one forms and representing vectors interchangeably. I.e. the vector representing the sum of two one forms, is the geometric vector sum of the vectors representing each of them.
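A tiny numerical illustration of that interchangeability (my own sketch):

Code:
import numpy as np

rng = np.random.default_rng(3)
f = np.array([1.0, 2.0, 2.0])    # the vector representing dx + 2dy + 2dz
g = np.array([0.0, -1.0, 3.0])   # the vector representing -dy + 3dz
v = rng.standard_normal(3)

# applying a one form is dotting with its representing vector, so the
# vector representing the sum of forms is the vector sum
print(np.isclose(np.dot(f, v) + np.dot(g, v), np.dot(f + g, v)))  # True

# a vector in the kernel plane of f is perpendicular to f's vector
k = np.array([2.0, -1.0, 0.0])   # satisfies 1*2 + 2*(-1) + 2*0 = 0
print(np.isclose(np.dot(f, k), 0))                                # True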

In this same vein, if we represent a 2 form on R^3 as an oriented parallelogram, as suggested above, and in R^4 as a formal sum of oriented parallelograms, then we do get a geometric representation of 2 forms, i.e. as a sum of parallelograms.

But to have a fully geometric interpretation we should have a geometric view also of addition of 2 forms. so as asked before, given two parallelograms in R^3, what is a geometric construction of a parallelogram in R^3 representing their sum as 2 forms?

And since in R^4, we have a 6 dimensional space of 2 forms, and it is one quadratic condition to be represented by just one parallelogram, we ask what is the geometric condition on a pair of parallelograms that their sum be represented by just one parallelogram, and then what is that parallelogram?

Well, we already know part of this don't we? Because Dave's condition w^w = 0 for this says that the two parallelograms have a sum represented by just one parallelogram if and only if they together span only a 3 space in R^4. And then surely the construction is the same as the construction in R^3, whatever that is.
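That prediction is easy to test numerically (a sketch of mine: confine two random parallelograms to a common 3 space inside R^4 and check that their sum T satisfies T^T = 0):

Code:
import numpy as np

rng = np.random.default_rng(4)

def plucker(v, w):
    return np.array([v[i]*w[j] - v[j]*w[i]
                     for i in range(4) for j in range(i + 1, 4)])

def self_wedge(T):   # coefficient of T^T on dx1^dx2^dx3^dx4, up to a factor of 2
    return T[0]*T[5] - T[1]*T[4] + T[2]*T[3]

# four vectors lying in a common 3 space of R^4 (last coordinate zero)
a, b, c, d = (np.append(rng.standard_normal(3), 0.0) for _ in range(4))
T = plucker(a, b) + plucker(c, d)    # the sum of the two parallelograms
print(np.isclose(self_wedge(T), 0))  # True: the sum is again representable
                                     # by a single parallelogram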

If we try to avoid the choice of dot product, as Dave does in his "kernel plane" interpretation of one forms, what would be the correct interpretation?

If we restrict to factorable 2 forms, is there a geometric kernel plane interpretation?

peace.

More free flowing conjectures: We "know" that in projective 5 space the point represented by the coordinates of a 2 form on R^4 is factorable into a product of one forms if and only if it satisfies w^w = 0, i.e. if and only if it lies on the 4 dimensional quadric hypersurface defined by that degree two equation in the coordinates of the 2 form.

Now what is the geometric condition for the sum of two factorable 2 forms to still be factorable? Would it be that the line joining those two points on the quadric still lies wholly in the quadric? I.e. just as a quadric surface in P^3 is doubly ruled by lines, a quadric 4 fold in P^5 also contains a lot of lines.

Just wondering and dreaming. And urging people who want a "geometric" view of the subject to explore further what that would mean.

peace.
 
  • #99
Sorry I've been away for so long. Work gets in the way of what I really want to do, sometimes. :frown:

AKG said:
The line l = \{\vec{r}t + \vec{p} : t \in \mathbb{R}\} for some \vec{r},\ \vec{p} \in T_p\mathbb{R}^3. Suppose \vec{v},\ \vec{w} \in T_p\mathbb{R}^3 such that l \subseteq Span(\{\vec{v},\ \vec{w}\}). Then the set \{\vec{p},\ \vec{v},\ \vec{w}\} is linearly dependent, hence:

\det (\vec{p}\ \ \vec{v}\ \ \vec{w}) = 0

Define \omega such that:

\omega (\vec{x},\ \vec{y}) = \det (\vec{p}\ \ \vec{x}\ \ \vec{y}) \ \forall \vec{x}, \vec{y} \in T_p\mathbb{R}^3

You can easily check, knowing the properties of determinants, that \omega is an alternating bilinear functional, and hence a 2-form. If you want, you can express it as a linear combination of dx \wedge dy,\ dy \wedge dz,\ dx \wedge dz, and it shouldn't be hard, but probably not necessary.

OK thanks, but as you recognized this is answering the reverse question: Given the line, find the 2-form.

AKG said:
EDIT: actually, to answer the question as given, perhaps you will want to write \omega in terms of those wedge products, and determine \vec{p} from there. Then, to find l you just need to choose any line that passes through \vec{p}. Any two vectors containing that line will have to contain \vec{p}, hence those three vectors must be linearly dependent, hence their determinant will be zero, and since \omega depends only on \vec{p} and not the choice of \vec{r}, you're done.

Right, this is what I was wondering about. I think I've worked it out correctly. Here goes.

Exercise 3.18
Let \omega=w_1dx \wedge dy +w_2dy \wedge dz +w_3dz \wedge dx.
Let A=\langle a_1,a_2,a_3 \rangle and B=\langle b_1,b_2,b_3 \rangle be vectors in \mathbb{R}^3.
Let C=[c_1,c_2,c_3] be a vector in T_p\mathbb{R}^3 such that C=k_1A+k_2B. So the set {A,B,C} is dependent. That implies that det[C A B]=0.

Explicitly:

det [C A B]=\left |\begin{array}{ccc}c_1&c_2&c_3\\a_1&a_2&a_3\\b_1&b_2&b_3\end{array}\right|

det [C A B]=c_1(a_2b_3-a_3b_2)-c_2(a_1b_3-a_3b_1)+c_3(a_1b_2-a_2b_1)

Now let \omega act on A and B. We obtain the following:

\omega (A,B)=w_1(a_1b_2-a_2b_1)+w_2(a_2b_3-a_3b_2)+w_3(a_3b_1-a_1b_3)

Upon comparing the expressions for det [C A B] and \omega (A,B) we find that \omega (A,B)=0 if w_1=c_3, w_2=c_1, and w_3=c_2. So the line l is the line that is parallel to the vector [w_2,w_3,w_1]. So I can write down parametric equations for l as follows:

x=x_0+w_2t
y=y_0+w_3t
z=z_0+w_1t

I'll wait for any corrections on this before continuing. If this is all kosher, then I'll post the last of my Chapter 3 notes and we can finally get to differential forms, and the integration thereof.

mathwonk said:
Also in post #81, Tom asked about solving ex 3.18. What about something like this?
Intuitively, a 1 form measures the (scaled) length of the projection of a vector onto a line, and a 2 form measures the (scaled) area of the projection of a parallelogram onto a plane. Hence any plane containing the normal vector to that plane will project to a line in that plane. hence any parallelogram lying in such a plane will project to have area zero in that plane.

That's helpful. I have to admit I don't really like this geometric approach. But I think that I haven't warmed up to it yet because it still feels uncomfortable.
I very much prefer to formalize the antecedent conditions and manipulate expressions or equations until I have my answer, as I've done with all my solutions to the exercises so far. It's my shortcoming, I'm sure.
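For anyone who wants to double-check that identification numerically, here is a quick sketch (numpy assumed, my own illustration): any pair of vectors spanning a plane that contains the direction vector [w_2, w_3, w_1] is killed by \omega.

Code:
import numpy as np

rng = np.random.default_rng(5)
w1, w2, w3 = rng.standard_normal(3)   # w1 dx^dy + w2 dy^dz + w3 dz^dx

def omega(A, B):
    return (w1*(A[0]*B[1] - A[1]*B[0])     # dx^dy part
            + w2*(A[1]*B[2] - A[2]*B[1])   # dy^dz part
            + w3*(A[2]*B[0] - A[0]*B[2]))  # dz^dx part

d = np.array([w2, w3, w1])            # the direction vector found above
u = rng.standard_normal(3)
A, B = d + 2*u, 3*d - u               # they span a plane containing d
print(np.isclose(omega(A, B), 0))     # True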
 
  • #100
have you read post 98?

I apologize if my comments are not of interest. I am stuck between trying to be helpful and just letting my own epiphanies flow as they will.


I appreciate your patience.
 