What Are Differential Forms and How Do They Apply in Differential Geometry?

  • #61
Originally posted by jeff
Intrinsic curvature is defined by using the fairly easy to understand idea of "parallel transport". Imagine some closed curve ...
Does this mean that 1-D manifolds can not have intrinsic curvature (how do you make a closed curve on the parabola)?
 
  • #62
Originally posted by turin
Can you give the justification that \mathbb{R}^n is flat and that a parabola is curved (I'm assuming that you mean a parabola to be the 2-D representation of all points (x,y) that exist in the 2-D space which satisfy the coordinate values y = x^2, or some scaled, translated, or rotated version thereof).

I can see the literal curvature of the parabola, and the literal flatness of the x-axis (\mathbb{R}^1) if I view them in the context of the x-y plane, but I don't see how you can characterize such a thing in 1-D, and such characterization seems to be in the spirit of this thread.
i don t really need to be rigorous about the difference between flat and curved in those posts, since i was just mentioning it to give an intuition, and the notion is not actually well defined: the difference between the parabola and the real line is just a different embedding in R2, i.e. it is not intrinsic.

the only reason i brought it up was to convince people why the notions we learned in R3 just won t work for a general manifold. R3 is a vector space, and that is what i meant by calling it flat (no metric involved). you can add points in the manifold to each other if it is flat. tangent vectors to the manifold can also be thought of as living in the manifold itself if it is flat.

neither of these things is true if the manifold is not a vector space, and so that s all i meant.

Here's my problem: I can't see the fundamental difference between the parabola and the real number line as 1-D manifolds. Both have 1 dimension (another concept I don't quite understand, but I'll defer that until later). I don't see what more you can say without a metric, or at least a coordinatization. If I choose to label the points on the parabola by the arclength along the parabola from the origin (which is, IMO, the most natural way to do it), then how would I know it was curved? Alternatively, how would I know to label points using their x values to show the curvature, when, for the sake of purity, I should not be appealing to any x-axis in some x-y plane? In other words, how do I know that the parabola imbeds itself in the x-y plane as a parabola instead of a straight flat line, without already knowing that it was, in fact, a parabola in the x-y plane.

yes, you are correct. a good observation. intrinsically, all 1D spaces have the same geometry.
 
  • #63


Originally posted by turin
I was just a little uncomfortable with this notation. Do you mean:

-v is defined as (-1)v

where (-1) is a member of the field? Isn't this rather trivial? I'm assuming you mean to require an additive inverse for your vector space (or "abelian group" or whatever you called it). IM very HO, this could be reworded to:

"for every vector, v, there is a vector, v_inv, such that: v + v_inv = 0. This is the existence of an inverse."

Shankar has the more indicative notation, using the kets, to include the minus sign inside the ket, to distinguish it from a literal negative sign as a multiplication by (-1).
yes, i agree with all this. i guess i just can t be bothered with that level of formalism, but i do think that it is very important to see that kind of thing when you first do abstract algebra. it lets you divorce yourself of misconceptions or generalizations that you learned in your high school algebra class.

for abelian groups, i think its pretty harmless. it is trivially easy to show that (-1)v = v_inv
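for the record, here is the one-line argument (a standard exercise, spelled out here for reference). first 0v = (0+0)v = 0v + 0v, so 0v = 0. then

v + (-1)v = (1 + (-1))v = 0v = 0

so (-1)v is the additive inverse of v, by uniqueness of inverses in an abelian group.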
 
  • #64


Originally posted by turin
"This defines THE dual space," as in, "there is ONLY ONE WAY to do it," or, "this defines the dual space," as in, "this is the way we HAPPEN TO do it?" This looks suspiciously like you have sneaked a metric tensor into the discussion under the guise of defining the dual space. Is this the case?
note that i didn t use the kronecker delta to define the dual space, but only to choose a basis. it just happened to be on hand as a way of choosing a basis.

you can choose any basis you like, as long as you can make sure that it is actually a basis (linearly independent, etc). with the choice i made above this was easy to check.

Is there some way to define a dual space without using this metric-ish definition? Does this Kronecker-Delta generalize to the metric tensor for general spaces?
like i said above, the kronecker delta is for choosing a basis, not for defining the dual space. a dual vector acting on a vector gives me a real number. i just have to make a choice for which numbers my basis vectors will give, and i choose 1s and 0s.

this notion does not generalize to other metrics: the metric is not defined between vectors and covectors. some books use an inner product type notation, but i dislike this a lot.
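lethe's choice of 1s and 0s can be made concrete numerically. A small sketch (the 2-D basis is my own example, not from the thread): if the basis vectors e_j are the columns of a matrix B, then the dual basis covectors e^i fixed by e^i(e_j) = delta^i_j are exactly the rows of the inverse of B.

```python
# Numerical sketch of the dual basis (the basis B is an invented example).
import numpy as np

B = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # columns are the basis vectors e_1, e_2

dual = np.linalg.inv(B)      # rows are the dual basis covectors e^1, e^2

# e^i(e_j) = delta^i_j: evaluating each covector on each vector
# gives the identity matrix (the kronecker delta).
print(dual @ B)
```

note that nothing here is a metric: the rows of B^{-1} pair with vectors, they are never compared to vectors.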
 
  • #65
Originally posted by turin
Does this mean that 1-D manifolds can not have intrinsic curvature (how do you make a closed curve on the parabola)?
you can make a closed curve on the parabola, you just have to be willing to trace back on yourself.

a 1D manifold cannot have any intrinsic curvature, but for other reasons.
 
  • #66


Originally posted by lethe
you can add points in the manifold to each other if it is flat.
Can this be done without applying coordinates to the manifold? What does it mean to add point P to point Q?




Originally posted by lethe
intrinsically, all 1D spaces have the same geometry.
What does "geometry" mean? I thought we were discussing pre-geometry manifolds. Does a circle have the same geometry as the real number line?




Originally posted by lethe
... the kronecker delta is for choosing a basis, not for defining the dual space.
I think I understand the distinction here, but I don't understand the significance. If you want to talk about the objects that live in your dual space, then aren't you going to need a basis? Can you give some non-trivial demonstration/identity/proof (not a definition) that does not require a basis?




Originally posted by lethe
a dual vector acting on a vector gives me a real number.
Is this THE definition of a dual vector?
 
  • #67


Originally posted by turin
Can this be done without applying coordinates to the manifold? What does it mean to add point P to point Q?

some manifolds admit algebraic structures, and some don t. linear spaces all do, since it is part of their definition. you do not have to choose coordinates on your manifold to have algebra.

What does "geometry" mean? I thought we were discussing pre-geometry manifolds. Does a circle have the same geometry as the real number line?
in this instance, geometry means curvature. any 1 dimensional manifold has no intrinsic curvature, and thus, locally, all 1D manifolds have the same geometry.

we are discussing differentiable manifolds (pregeometry manifolds, as you say) in this thread. it was you who brought up the issue about the parabola and the line being the same, and so i only mentioned that to make that discussion a little clearer.


I think I understand the distinction here, but I don't understand the significance. If you want to talk about the objects that live in your dual space, then aren't you going to need a basis?
no
Can you give some non-trivial demonstration/identity/proof (not a definition) that does not require a basis?
the dual of the dual of a vector space is canonically isomorphic to the vector space. this theorem can be proved without ever choosing a basis.
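the canonical map, written out (this is the standard construction, spelled out here for reference): each vector v defines a linear functional on the dual space by evaluation,

\Phi : V \to V^{**}, \qquad \Phi(v)(\omega) = \omega(v) \quad \text{for all } \omega \in V^*

\Phi is linear and injective, and in finite dimensions a dimension count shows it is onto. no basis appears anywhere in the definition, which is what "canonical" means here.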

the problem with choosing a basis is that there are many equally good bases to pick from, and there is no "best" basis, so sticking to only one is unnatural. but once i have made this unnatural choice, there is a best choice for the basis of the dual space, which i describe above.


Is this THE definition of a dual vector?
yes
 
  • #68
This all strikes me as funny for some reason. A circle is a 1D manifold, and it is curved in the sense that walking along the circle in a constant direction eventually leads you back over your own footsteps. Of course, that definition of "curved" is not mathematically sound.

At the same time, there's no way for a 1D creature who lives on the circle to do any experiments to determine if there is or is not curvature. The only figures he can draw in his 1D space are lines and points, and his lines will always have the same length, no matter which direction he draws them...

I suppose I accept the fact that a 1D curve has no intrinsic curvature, but it bugs me somehow.

- Warren
 
  • #69
Originally posted by chroot
This all strikes me as funny for some reason. A circle is a 1D manifold, and it is curved in the sense that walking along the circle in a constant direction eventually leads you back over your own footsteps. Of course, that definition of "curved" is not mathematically sound.
this notion is mathematically sound, its just not the definition of curvature. you are talking about some global property (comes back to the beginning), and geometry talks about local properties (curvature). this kind of coming back on itself is a common subject of study in topology.

At the same time, there's no way for a 1D creature who lives on the circle to do any experiments to determine if there is or is not curvature. The only figures he can draw in his 1D space are lines and points, and his lines will always have the same length, no matter which direction he draws them...
i think this bug can do experiments to measure curvature. he just won t ever measure anything other than zero.
 
  • #70
Originally posted by lethe
this notion is mathematically sound, its just not the definition of curvature. you are talking about some global property (comes back to the beginning), and geometry talks about local properties (curvature). this kind of coming back on itself is a common subject of study in topology.
Hmm, but I thought the definition of a Riemannian manifold was that it was locally flat at every point? Don't all manifolds have this property of being locally flat?

I suppose being locally flat just means you can introduce a Euclidean coordinate system anywhere and neglect the curvature; it's still intrinsically present, you're just neglecting it.
i think this bug can do experiments to measure curvature. he just won t ever measure anything other than zero.
What sorts of experiments can he do? Besides drawing lines and measuring their lengths?

- Warren
 
  • #71
Originally posted by chroot
Hmm, but I thought the definition of a Riemannian manifold was that it was locally flat at every point? Don't all manifolds have this property of being locally flat?
all manifolds have the property of being locally euclidean. this is a topological property that has nothing to say about flatness.

I suppose being locally flat just means you can introduce a Euclidean coordinate system anywhere and neglect the curvature; it's still intrinsically present, you're just neglecting it.
the definition of a manifold makes no mention of curvature, no assumptions about curvature, nothing like that. the Riemannian manifold is just a differentiable manifold with a Riemannian metric on it.

the curvature is not constrained.

What sorts of experiments can he do? Besides drawing lines and measuring their lengths?
the curvature tensor tells you how a vector transforms when you go in a loop. so to measure curvature, he could draw 1dimensional loops, and see what happens. he would surely find that all vectors remain unchanged.
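lethe's claim (the bug always measures zero) can also be checked symbolically. A sketch assuming sympy is available; the arbitrary metric component g(x) is my illustration, not from the thread. In one dimension the only Christoffel symbol is Gamma^x_xx = g'/(2g), and every term of the Riemann tensor cancels against its own negative:

```python
# Symbolic check that a 1D Riemannian manifold is flat for ANY metric g(x).
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g', positive=True)(x)   # metric: ds^2 = g(x) dx^2

# the only Christoffel symbol in one dimension
Gamma = sp.diff(g, x) / (2 * g)

# Riemann tensor with all indices equal to x:
# R^x_xxx = d_x Gamma - d_x Gamma + Gamma*Gamma - Gamma*Gamma
R = sp.diff(Gamma, x) - sp.diff(Gamma, x) + Gamma**2 - Gamma**2

print(sp.simplify(R))   # 0, whatever g(x) is
```

the cancellation is forced by the antisymmetry of the curvature tensor in its last two indices: with only one index value available, every component is its own negative.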
 
  • #72
Hmmm but how does he draw a closed 1D loop in his 1D space? That's just a line, eh? And it can't be closed. So I guess that's why a 1D space has no intrinsic curvature. It doesn't make sense with the definition of intrinsic curvature.

- Warren
 
  • #73
Originally posted by chroot
Hmmm but how does he draw a closed 1D loop in his 1D space? That's just a line, eh? And it can't be closed. So I guess that's why a 1D space has no intrinsic curvature. It doesn't make sense with the definition of intrinsic curvature.

- Warren
why can t the loop be closed? what about the curvature doesn t make sense?
 
  • #74
Originally posted by lethe
why can t the loop be closed? what about the curvature doesn t make sense?
If you're an ant living in a 1D space, how can you draw a loop in the first place? I mean, how can a loop even exist in 1D? Am I missing something?

- Warren
 
  • #75
Originally posted by chroot
If you're an ant living in a 1D space, how can you draw a loop in the first place? I mean, how can a loop even exist in 1D? Am I missing something?

- Warren

here is a loop on the real line:

\gamma(t)=\begin{cases}2t & 0\leq t\leq 1/2\\ 2-2t & 1/2\leq t\leq 1\end{cases}
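a direct transcription into code, just to make the point that the curve really is closed (it starts and ends at the same point):

```python
# lethe's loop on the real line: out to 1 and straight back.
def gamma(t):
    """Closed curve on R: gamma(0) = gamma(1) = 0."""
    return 2 * t if t <= 0.5 else 2 - 2 * t

print(gamma(0.0), gamma(0.5), gamma(1.0))   # 0.0 1.0 0.0
```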
 
  • #76
Okay, I guess I can't argue that it's a loop, even though it's sort of a "degenerate" one.

So if the concept of a loop is well-defined in 1D space, why then does the curvature tensor always vanish? Sorry, I believe it, I just don't grok it.

- Warren
 
  • #77
Originally posted by chroot
Okay, I guess I can't argue that it's a loop, even though it's sort of a "degenerate" one.

So if the concept of a loop is well-defined in 1D space, why then does the curvature tensor always vanish? Sorry, I believe it, I just don't grok it.

the tangent space is 1 dimensional. any vector has only one choice, it cannot rotate.
 
  • #78
Aha, that makes sense now. If our ant on the 1D circular manifold pushes a vector around, there's no way it can rotate (assuming that it can't snap around 180 degrees -- that sort of wacky behavior is impossible in a smooth manifold, I guess?). And since curvature is defined by the angle between a vector and its counterpart after being parallel transported in a small loop, the angle must be zero, so there must be no (intrinsic) curvature.

Okay, got it. Thanks.

- Warren
 
  • #79


Originally posted by lethe
some manifolds admit algebraic structures, and some don t.
...
you do not have to choose coordinates on your manifold to have algebra.
If it doesn't deviate terribly from the main discussion, could you just give the basic requirements of an algebra, specifically to distinguish them from the requirements of a vector space?




Originally posted by lethe
it was you who brought up the issue about the parabola and the line being the same, and so i only mentioned that to make that discussion a little clearer.
I'm just trying to distinguish between a parabola and a line in our context. I thought you were trying to make a point of it in the beginning.




Originally posted by lethe
the dual of the dual of a vector space is canonically isomorphic to the vector space.
That sure isn't obvious using the stack of pancakes notion. Is this a good example of why you don't like it?




Originally posted by lethe
the problem with choosing a basis is that there are many equally good bases to pick from, and there is no "best" basis, so sticking to only one is unnatural. but once i have made this unnatural choice, ...
I'm assuming this was talking about the basis of the vector space, as opposed to the dual space?
 
  • #80


Originally posted by turin
If it doesn't deviate terribly from the main discussion, could you just give the basic requirements of an algebra, specifically to distinguish them from the requirements of a vector space?
an algebra is a vector space that has a vector product.

in general, tangent spaces will just be vector spaces, not algebras, but if the manifold is also a group, then some of the tangent spaces will be algebras. R3 with the vector cross product is an example of this.

just to be clear though: when i said algebra above, i didn t say an algebra. an algebra is a vector space with vector product. algebra is a more general term, it just means anything having to do with addition and multiplication. so when i said above something about manifolds not having an algebraic structure, i just meant that there is no way to add or multiply, in a consistent way, points on, say, a sphere, whereas there are such notions for linear spaces (by definition).

I'm just trying to distinguish between a parabola and a line in our context. I thought you were trying to make a point of it in the beginning.
its been a while, but i think the reason i wrote that in the beginning was just to show why you need vectors to live in the tangent space, and they cannot, in general, live in the manifold. that was the only point.

i didn t want to imply that the parabola had intrinsic curvature or anything like that. i m beginning to regret even mentioning the word "curved" up there, since i had not defined it yet. it was just supposed to help your intuitive picture, when thinking of vectors and manifolds.


That sure isn't obvious using the stack of pancakes notion. Is this a good example of why you don't like it?
sure, i guess so.


I'm assuming this was talking about the basis of the vector space, as opposed to the dual space?
yeah, i guess that is what i was talking about, but it could work the other way as well.
 
Last edited:
  • #81
An algebra is not a vector space which has a vector product.

Exercise: define vector product properly. Vector product is usually the term reserved for the cross product in three dimensions. There are plenty of three dimensional algebras not isomorphic to R^3 with the vector product.

An algebra is a Ring which is also a vector space.
 
  • #82
Originally posted by matt grime
An algebra is not a vector space which has a vector product.

i suppose this is a semantic argument. i define a vector product to be a bilinear map on a vector space into the vector space. under this definition, a vector space with a vector product is an algebra.
Exercise: define vector product properly. Vector product is usually the term reserved for the cross product in three dimensions.
i think you should say vector cross product (or simply cross product), when you mean the cross product in R3.

the names of objects in mathematics ought to be descriptive enough to leave no ambiguity, this is my opinion, at least. under this philosophy, vector product, scalar product, cross product, inner product, outer product and dot product are all different, and there is no ambiguity in any of the terms.
there are plenty of three dimensional algebras not isomorphic to R^3 with the vector product.
i would say that R3 with the vector cross product is an example of an algebra. certainly there are other examples.



An algebra is a Ring which is also a vector space.
this definition also works. of course, the ring is not as familiar to people as the vector space, so i prefer my definition. but it is a matter of taste.
 
  • #83
How about, an algebra is a vector space which has (completely outside its VS structure) a distributive product. Very often physicists will define an algebra just by defining the product, since the underlying vector space is "obvious".

In the case of 3D vectors, of course, the cross product is really an outer product (Grassmann style) and its result is not exactly a true (polar) vector, but an axial vector that behaves differently under parity operations. This distinction had an important role in physicists' attempts to understand the weak force.

The true algebra that contains the 3D vectors is the quaternions.
 
  • #84
Originally posted by selfAdjoint
How about, an algebra is a vector space which has(completely outside its VS structure) a distributive product.
but, as i am sure you are aware, the dot product is distributive, but an inner product space is not an algebra. i think distributivity isn t worth mentioning. in my world, if its not distributive, it isn t a product. so the word "product", for me, contains the information "bilinear" and "distributive". what does need to be mentioned to distinguish it from other products is that it is vector valued.

In the case of 3D vectors, of course, the cross product is really an outer product (Grassmann style)
i don t quite agree with this. there is a sense in which the cross product can be thought of as a Grassmann product (which i call a "wedge product", not an "outer product". for me, "outer product" is synonymous with "tensor product"), but the Grassmann algebra is certainly not isomorphic to R3 as an algebra. for example, in R3, you have (ixj)xi=j whereas in the Grassmann algebra, you have (ixj)xi=0. not isomorphic. if you toss the Hodge dual in there in the appropriate place, then you can make an isomorphism

what R3 is isomorphic to is the Lie algebra \mathfrak{so}(3)
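the non-isomorphism is easy to check numerically; a quick sketch (the helper function cross is my own, written out so the example is self-contained):

```python
# In R^3 with the cross product, (i x j) x i = j, which is nonzero.
# In the Grassmann algebra, (i ^ j) ^ i = 0, because the factor i repeats.
def cross(a, b):
    """Cross product of two 3-vectors given as lists."""
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

i, j = [1, 0, 0], [0, 1, 0]

print(cross(i, j))             # [0, 0, 1], i.e. k
print(cross(cross(i, j), i))   # [0, 1, 0], i.e. j -- not zero
```

so the two products genuinely disagree as algebra structures, exactly as lethe says; the Hodge dual is what bridges them.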
 
Last edited:
  • #85
2-forms

RDT2 is around, let me see if i can't post some more on this thread. this is still taken from the thread at sciforums.

now... where were we. ah, yes. we had just finished building the 2-forms, and we re about ready to move to more general p-forms.

but before we leave the 2-forms, let s find a basis for them, and look at their coordinate representation. it should be obvious how to do that, right? we built our 2-form from two 1-forms, so we should be able to build a basis for our 2-forms from the basis for our 1-forms. let s recall what that was (6):

df=\partial_\nu f dx^\nu

let s take two of those, and wedge them together:

df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu\wedge dx^\nu


here the advantages of using the einstein summation notation become more clear. when you are multiplying two (or more) long summations, carrying around a lot of extra sigmas can get quite unwieldy.

this isn t quite a proof that dx^\nu\wedge dx^\mu is the basis, i.e. that any alternating second rank tensor can be written as such a sum, but it should be convincing at any rate.
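the coefficient structure can be seen numerically. A sketch (the gradient values are invented for illustration): after collecting the antisymmetry, the coefficient of dx^mu ∧ dx^nu (mu < nu) in df ∧ dg is ∂_mu f ∂_nu g − ∂_nu f ∂_mu g.

```python
# Numeric sketch of the coefficients of df ^ dg at a point.
import numpy as np

grad_f = np.array([1.0, 2.0, 3.0])   # pretend values of d_mu f
grad_g = np.array([0.0, 1.0, 5.0])   # pretend values of d_nu g

A = np.outer(grad_f, grad_g)         # all N^2 products (d_mu f)(d_nu g)
C = A - A.T                          # antisymmetrized coefficients

print(np.allclose(C, -C.T))          # True: a 2-form is alternating
print(C[0, 0])                       # 0.0: no dx^mu ^ dx^mu terms survive
```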

now that we ve found a basis, let s count the dimension of this vector space, the space of all 2-forms. remember, the dimension of a vector space is just the number of elements in the basis. so how many independent dx^\nu\wedge dx^\mu are there? well there are N different dx^\nu and N different dx^\mu, so there should be N^2 ways to write the product, where N is the dimension of the manifold, and the tangent vector space, and the cotangent space.

so the dimension of \bigwedge^2T^*M_p is N^2. right?

not so fast, hot shot! there may be N^2 ways to write that product, but they are not all linearly independent. remember the properties of 2-forms: dx^\nu\wedge dx^\mu = -dx^\mu\wedge dx^\nu. so we don t want to count this guy twice. furthermore dx^\nu\wedge dx^\nu = 0! so we definitely don t want to count those cases when \nu=\mu. so when counting the basis elements we should only count those for which, say, \nu < \mu

if you like combinatorics, you can work out the formula. i don t really, so i m just going to say the answer: there are N(N-1)/2 linearly independent 2-forms. that formula might look familiar to some of you, it is {}_NC_2 (N choose 2).
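for anyone who does like combinatorics, the count is easy to verify by brute force (pure Python, my own check):

```python
# Count the basis 2-forms dx^mu ^ dx^nu with mu < nu, for small N.
from itertools import combinations
from math import comb

for N in range(1, 7):
    basis = list(combinations(range(N), 2))   # index pairs (mu, nu), mu < nu
    assert len(basis) == N * (N - 1) // 2 == comb(N, 2)
    print(N, len(basis))   # 1 0, 2 1, 3 3, 4 6, 5 10, 6 15
```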
 
Last edited:
  • #86


Originally posted by lethe
... we should be able to build a basis for our 2-forms from the basis for our 1-forms. let s recall what that was (6):

df=\partial_\nu f dx^\nu
I still don't understand how this gives us a basis. Does this notation imply something more specific than generalized coordinates? This is just the total derivative of a multivariable function, right?




Originally posted by lethe
let s take two of those, and wedge them together:

df\wedge dg=(\partial_\mu fdx^\mu)\wedge(\partial_\nu gdx^\nu)=(\partial_\mu f)(\partial_\nu g) dx^\mu dx^\nu
Two of what? They look like scalars to me since there is a contraction. Is there a previous post in which you explain how these would be vectors? I'm so confused. I read your post in which you introduced vectors, but they look like scalars to me.




Originally posted by lethe
this isn t quite a proof that dx^\nu\wedge dx^\mu is the basis, i.e. that any alternating second rank tensor can be written as such a sum, ...
I don't see how the dx^\nu\wedge dx^\mu shows up.
 
Last edited:
  • #87


Originally posted by turin
I still don't understand how this gives us a basis. Does this notation imply something more specific than generalized coordinates? This is just the total derivative of a multivariable function, right?
no, this notation implies nothing beyond the representation of a 1-form in terms of some general coordinates. your manifold has coordinates, which in turn yield a basis for the tangent space (\{\partial/\partial x^\mu\}), which in turn induces a basis for the dual space (\{dx^\mu\}). for this dual space, as for any vector space, expressing any vector in terms of the basis means finding a linear combination of the basis vectors that equals the vector in question. in this case, it is df=\partial_\mu f dx^\mu. df is the vector, \partial_\mu f are the coefficients, and dx^\mu are the basis vectors

this formula does look like the formula one learns in elementary calculus for the derivative of a function. there we have the chain rule df/dt=(\partial f/\partial x^\mu)(dx^\mu/dt). if we "multiply" both sides of this equation by dt, then we get the above formula. of course, this multiplication step is invalid, since in elementary calculus, we have no object called dt, and dx^\mu/dt is not a fraction, but a single object. but the similarity is no coincidence. it looks this way because the exterior derivative really is a kind of derivative, and so has to include the chain rule of elementary calculus.
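in coordinates, the claim is easy to check with a computer algebra system. A sketch (the function f = x^2 y is my example, not from the thread): the coefficients of df in the basis {dx, dy} are just the partial derivatives.

```python
# df = (df/dx) dx + (df/dy) dy, coefficients computed symbolically.
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y

# coefficients of df in the coordinate basis {dx, dy}
coeffs = [sp.diff(f, v) for v in (x, y)]
print(coeffs)   # [2*x*y, x**2]
```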




Two of what? They look like scalars to me since there is a contraction. Is there a previous post in which you explain how these would be vectors? I'm so confused. I read your post in which you introduced vectors, but they look like scalars to me.
this is an excellent question. you probably have learned the following rule of thumb: anything with no indices is a scalar, anything with one index is a vector, anything with more indices is a tensor.

this rule is nonsense. or at least, it is only true about the coordinate components of those objects, and not the objects themselves.

here is a better rule: any geometric object, which has a coordinate independent meaning, cannot have any indices (since indices indicate dependence on your choice of coordinates). scalars, vectors, tensors, 1-forms, and anything else worth talking about, are all geometric objects, with coordinate independent existences. therefore they cannot have indices.

in the expression \mathbf{v}=v^\mu\partial_\mu, v^\mu is the coefficient of my vector in terms of some basis. this number depends on my choice of basis (i.e. my choice of coordinates) and any coordinate dependent object should carry an index. but it is not a vector, it is only a component of a vector. you probably learned that anything with a raised index is a (contravariant) vector. part of my goal with this stuff is to teach you why that picture is misleading (and why the word contravariant is a mistake in this context).

\partial_\mu also carries an index. but this one is lowered. does that mean that it is a (covariant) vector? no it doesn t! it is still a tangent vector. the real vector is \mathbf{v}=v^\mu\mathbf{\partial_\mu}=v^1\mathbf{\partial_1}+v^2\mathbf{\partial_2}+...+v^n\mathbf{\partial_n} which carries no indices! to sum up, the point is: anything with an index cannot have any intrinsic meaning, since it is a coordinate dependent object.

in some other post, i will explain why the words covariant and contravariant are exactly backwards. above, i used contravariant to mean "having a raised index, like the coordinates", and covariant to mean "having a lowered index, like the derivative", since this is how it is usually taught to physicists. i will never use those words in that sense again, and for me, the words are actually switched.

I don't see how the dx^\nu\wedge dx^\mu shows up.
well, the dx^\nu\wedge dx^\mu comes from df\wedge dg by linearity on the basis vectors. i actually slipped up in one of the equations in my post, and forgot to include the \wedge. maybe it is clearer now? the point is, df and dg both have coordinate representations in terms of our basis vectors dx^\mu, so i pull the coefficients of the two 1-forms in this wedge product out front (since the wedge product is linear), and am left with a sum over the wedge products of the basis vectors.

note that the coefficients of the 2-form have two indices (\partial_\mu f\partial_\nu g), and the basis (co-)vectors also have two indices (dx^\mu\wedge dx^\nu), but the expression for the entire 2-form has both indices contracted, and so carries no indices. but it is certainly not a scalar!
 
Last edited:
  • #88


I HATE ITEX!
I will type the corresponding html next to the itex.
OK, I will delete all itex crap and never use it again. It keeps changing on me.

Originally posted by lethe
... you probably have learned the following rule of thumb: anything with no indices is a scalar, anything with one index is a vector, anything with more indices is a tensor.

this rule is nonsense.
I have this appreciation. My major prof adamantly declares that a tensor is defined by its transformation properties. If this is true, then it is obvious that the rule of thumb you mention here is nonsense. If it is a bad way to think of it in terms of the transformation properties (I seem to vaguely remember you discouraging this way of thinking), then please remind me.

I wasn't thinking that the index free quality indicated scalar-ness. I was more concerned with the apparent contraction of two apparent vectors.
According to my major prof:
- the contraction of two rank 1 tensors (vectors) is a rank 0 tensor (scalar)
- ∂_μ and dx^μ are rank 1 tensors (at least in Minkowski space-time).
From this I infer:
- the object in question still looks like a scalar to me, unless I radically change my understanding of tensors.

What about the proper time interval: dτ^2 = dx_μ dx^μ?

It seems like there are two inconsistent ways of looking at it:
- either this is a contraction and therefore a scalar
- or this is a 1-form with components dx_μ.

Do the components of any 1-form form a vector basis, and do the components of any vector form a covector basis?




Originally posted by lethe
in the expression \mathbf{v}=v^\mu\partial_\mu, v^\mu is the coefficient of my vector in terms of some basis. this number depends on my choice of basis (i.e. my choice of coordinates) and any coordinate dependent object should carry an index. but it is not a vector, it is only a component of a vector.
My major prof would say that it is OK to call v^μ a vector because it implies all of the components (and I guess because it implies the basis?). What say you? I don't want to be picky, just trying to get a handle on the different notational formalisms.




Originally posted by lethe
i actually slipped up in one of the equations in my post, and forgot to include the \wedge. maybe it is clearer now?
Ya. If you meant for that wedge to be in there, then I get it. Again, I'm not trying to be picky, but I have been given the impression lately that these kinds of notational issues are important.




Originally posted by lethe
... df and dg both have coordinate representations in terms of our basis vectors dx^\mu, ...
I thought the basis vectors were ∂_μ and that the 1-forms were dx^μ. Did you mean ∂_μ here?
 
Last edited:
  • #89


Originally posted by turin
I HATE ITEX!
I will type the corresponding html next to the itex.
well, feel free to use html. i actually preferred the html, since it seems to fit more nicely with the text, however it doesn t display on some peoples browsers, and of course it can t do as much stuff as tex.

I have this appreciation. My major prof. adamantly declares that a tensor is defined by its transformation properties. If this is true, then it is obvious that the rule of thumb you mention here is nonsense. If it is a bad way to think of it in terms of the transformation properties (I seem to vaguely remember you discouraging this way of thinking), then please remind me.
physicists definition of a tensor:
an object with r raised and s lowered indices (and therefore a coordinate dependent object) that transforms in such and such a way when you change coordinates

mathematicians definition of a tensor:
a tensor product of r vectors and s covectors. the mathematicians definition of a vector and covector is such that it makes no reference to coordinates, and thus neither does the definition of a tensor. it is an exercise for the reader in most math books to check that, when you look at the coordinate components of a tensor, they transform in the physicists way when you change your choice of coordinates.

you take your pick as to which definition. it is nice to understand both definitions, and then one doesn t have to adamantly adhere to one or the other. but certainly one can have a preference, mine is the mathematicians definition.

I wasn't thinking that the index free quality indicated scalar-ness. I was more concerned with the apparent contraction of two apparent vectors. According to my major prof:
- the contraction of two rank 1 tensors (vectors) is a rank 0 tensor (scalar)
you cannot contract two (1,0) rank tensors. you can only contract a (1,0) tensor with a (0,1) tensor.

of course, since the metric (if you are doing Riemannian geometry, known to physicists as relativity) and the symplectic form (if you are doing symplectic geometry, known to physicists as classical mechanics) are both nondegenerate, you can always convert one of your (1,0) rank tensors into a (0,1) rank tensor and then contract them (known to physicists as raising and lowering indices).

but in the absence of a metric or symplectic form, there is no canonical isomorphism between the vector space of tangent vectors ((1,0) rank tensors) and the vector space of covectors/dual vectors ((0,1) rank tensors), and therefore you cannot contract them.

if you would like any of those terms explained further, please ask.
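A minimal numerical sketch of this raising/lowering business (my own illustration, not from the thread; the metric diag(-1, 1, 1, 1) and the sample components are just assumptions for the example):

```python
import numpy as np

# an example metric: Minkowski g_{mu nu} = diag(-1, 1, 1, 1) (illustrative choice)
g = np.diag([-1.0, 1.0, 1.0, 1.0])

u = np.array([1.0, 2.0, 0.0, 0.0])  # components u^mu of a (1,0) tensor
v = np.array([3.0, 1.0, 0.0, 0.0])  # components v^mu of another (1,0) tensor

# u and v cannot be contracted directly; first lower an index with the metric:
# v_mu = g_{mu nu} v^nu, turning v into a (0,1) tensor
v_lower = g @ v

# now the contraction u^mu v_mu is well defined and yields a scalar
s = u @ v_lower
print(s)  # -> -1.0  (that is, -1*3 + 2*1)
```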

- \partial_\mu (that is, &part;&mu;) and dx^\mu (that is, dx&mu;) are tensors (at least in Minkowski space-time)
therefore:
yes indeed, those are both tensors (they are basis tensors, and therefore coordinate dependent). also, none of this is particular to Minkowski space.

- the object in question still looks like a scalar to me, unless I radically change my understanding of tensors.
yes indeed, the contraction of dx^\mu and \mathbf{\partial_\mu} is indeed a scalar. in fact, dx^\mu(\mathbf{\partial}_\nu)=\delta^\mu_\nu, so each basis 1-form gives 1 on its matching basis vector (and the full contraction dx^\mu(\mathbf{\partial}_\mu), summed over \mu, gives the dimension of the space).

What about the proper time interval:

d\tau^2 = dx_\mu dx^\mu (that is, d&tau;2 = dx&mu;dx&mu;)?

It seems like there are two inconsistent ways of looking at it:
- either this is a contraction and therefore a scalar
contraction makes a scalar if the contracted objects are a (1,0) tensor and a (0,1) tensor. that is not the case here, so that thing is not a scalar. the fact that it doesn t carry any indices indicates that it is a geometric object, independent of coordinates.

- or this is a 1-form with components dx_\mu (that is, dx&mu;).
it is also not a 1-form. it is a tensor product of 2 1-forms. if it were also antisymmetric, i would call it a 2-form, but it is not antisymmetric, so i will call it a (0,2) rank tensor.

but you could have figured out that it was a (0,2) rank tensor just by looking at its coordinate components g_{\mu\nu}. 2 lowered indices on the coordinate components = (0,2) rank tensor.
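To see the (0,2) nature concretely, the components g_{\mu\nu} can be fed two tangent vectors to produce a number. A small sketch (my own example, with an assumed Minkowski metric):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])  # assumed components g_{mu nu}

# a (0,2) tensor eats two tangent vectors and returns a number:
u = np.array([1.0, 0.5, 0.0, 0.0])
interval = u @ g @ u  # g_{mu nu} u^mu u^nu, i.e. the "d tau^2" evaluated on u
print(interval)  # -> -0.75

# symmetric, not antisymmetric: hence a (0,2) tensor rather than a 2-form
print(np.array_equal(g, g.T))  # -> True
```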

Do the components of any 1-form form a vector basis, and do the components of any vector form a covector basis?
components do not form a basis, since they are not vectors. the components of a 1-form happen to transform like the basis vectors of the tangent space, and the components of a tangent vector happen to transform like the basis vectors of the cotangent space, but this does not mean that the components are themselves vectors.

in fact, this point of confusion is exactly the reason that i dislike the physicists definition of a tensor. you become confused about what is a vector, what is a covector, and what are just components.

i said above "happen to transform", but it is no coincidence. recall that any vector (for any vector space. i m thinking linear algebra here) can be written like this:

\mathbf{x}=x_1\mathbf{e}_1+x_2\mathbf{e}_2+\cdots+x_n\mathbf{e}_n

here, \mathbf{x} is a vector, which exists in any basis, but has different components. the components live in some field (sometimes these guys are called scalars in math class, but i won t use that word here. for physicists, scalar means something that is invariant under coordinate transformations). thus the components are not vectors. the basis vectors are vectors, but they also depend on your choice of basis (obviously).

if you make a change of basis, you can achieve this by multiplying the basis vectors by some matrix to get a new basis. then you multiply the components by the inverse of that matrix to get the components of the vector in the new basis. the vector itself picks up the matrix times its inverse, i.e. the identity, and thus doesn t change. it is independent of your choice of basis. it is only the components that depend on your choice of basis, and they change in the opposite way that the basis vectors themselves change.

this is why the components of a tangent vector transform like the basis vectors of the cotangent space.
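That bookkeeping can be checked numerically. In this sketch (the matrix and components are made-up examples; with basis vectors as rows, the components pick up the inverse transpose, which is the index-notation "inverse"), the basis transforms one way, the components the opposite way, and the vector itself is unchanged:

```python
import numpy as np

# old basis: rows of E are the basis vectors e_i; the vector v = sum_i x_i e_i
E = np.eye(3)
x = np.array([1.0, 2.0, 3.0])   # made-up components

v = E.T @ x

# change of basis: e'_i = A_ij e_j for some invertible matrix A (made up here)
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
E_new = A @ E

# components transform with the inverse (transposed, in this row convention),
# i.e. in the "opposite way" to the basis vectors
x_new = np.linalg.inv(A).T @ x

# the vector itself is unchanged: the matrix and its inverse cancel
v_again = E_new.T @ x_new
print(np.allclose(v, v_again))  # -> True
```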




My major prof. would say that it is OK to call v^\mu (that is, v&mu;) a vector because it implies all of the components (and I guess because it implies the basis?). What say you? I don't want to be picky, just trying to get a handle on the different notational formalisms.
yeah, all physicists do this. it is fine to call v^\mu a vector. in fact, i do it myself whenever i am doing physics. but just keep in the back of your head that v^\mu are really the components of a vector; strictly speaking, they are not the vector itself. since, in physics, we only ever deal with components, we can let the components stand in for the vector in our minds. but be aware that doing so will lead to confusion when you try to do this in a math class. and when it comes time to start doing non-Abelian gauge theory, you will wish you were in the math camp, instead of the physics camp.




Ya. If you meant for that wedge to be in there, then I get it. Again, I'm not trying to be picky, but I have been given the impression lately that these kinds of notational issues are important.
indeed they are (in my opinion)


I thought the basis vectors were \partial_\mu (that is, &part;&mu;) and that the 1-forms were dx^\mu (that is, dx&mu;). Did you mean \partial_\mu (that is, &part;&mu;) here?
no.

\mathbf{\partial}_\mu are the basis vectors for the tangent space, and dx^\mu are the basis vectors for the cotangent space. since df and dg are 1-forms (by definition, a 1-form is a member of the cotangent space), they can be written in terms of the basis of that space. of course, since the basis vectors of any vector space are themselves members of that vector space, dx^\mu is itself a 1-form, it is a basis 1-form. but this 1-form depends on your coordinates. and likewise for \mathbf{\partial}_\mu
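As a concrete check that the components of df in the basis dx^\mu are the partials \partial_\mu f, a short sympy sketch (the function f here is my own example):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y + sp.sin(y)   # an arbitrary example function

# the components of the 1-form df in the basis {dx, dy} are just the partials
df_components = [sp.diff(f, var) for var in (x, y)]
print(df_components)  # -> [2*x*y, x**2 + cos(y)]
```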
 
  • #90


Holy crap!

Originally posted by lethe
... and the symplectic form (if you are doing symplectic geometry, known to physicists as classical mechanics) are both nondegenerate, you can always convert one of your (1,0) rank tensors into a (0,1) rank tensor and then contract them (known to physicists as raising and lowering indices).
Ya, uh, question. What is "symplectic form?"




Originally posted by lethe
... the contraction of dx^\mu and \mathbf{\partial_\mu} is indeed a scalar.
This seems to contradict the definition of &part;&mu;f dx&mu; as a vector. Is a vector the same thing as a scalar in math land?




Originally posted by lethe
contraction makes a scalar if the contracted objects are a (1,0) tensor and a (0,1) tensor. that is not the case here, so that thing is not a scalar. the fact that it doesn t carry any indices indicates that it is a geometric object, independent of coordinates.
OK, so a (1,0) tensor is not synonymous with a contravariant vector, nor is a (0,1) tensor synonymous with a covariant vector?

I have also been told rather emphatically that the metric is a scalar because it does not get transformed by a Lorentz transformation. Not true?




Originally posted by lethe
it is also not a 1-form. it is a tensor product of 2 1-forms. if it were also antisymmetric, i would call it a 2-form, but it is not antisymmetric, so i will call it a (0,2) rank tensor.
By writing one of the indices as a subscript, contraction with the metric tensor is already implied, and the dx&mu; is supposed to be the covariant form of dx&mu;. Is this just a matter of confusing terminology? I think it may be deeper than terminology and notation, because I would have sworn yesterday that d&tau;2 was a scalar, and that g&mu;&nu; was a second rank tensor. You're starting to scare me.




Originally posted by lethe
but you could have figured out that it was a (0,2) rank tensor just by looking at its coordinate components g_{\mu\nu}. 2 lowered indices on the coordinate components = (0,2) rank tensor.
I understand that dx&mu;dx&nu; is a second rank tensor (and so, I guess that means a (0,2) tensor?). But dx&beta;dx&beta;?




Originally posted by lethe
and when it comes time to start doing non-Abelian gauge theory, you will wish you were in the math camp, instead of the physics camp.
I wish I was in the math camp right now. I am starting to think that physics is teaching me bad habits.




Originally posted by lethe
since df and dg are 1-forms (by definition, a 1-form is a member of the cotangent space), they can be written in terms of the basis of that space.
How do you know that they are 1-forms and not vectors? What is wrong with saying that df is a vector, and in the &part;&mu;f basis, it has components dx&mu;?
 
