Some elementary geometry questions

In summary, the author has some elementary questions about geometry: they can often do the calculations but feel they do not fully understand what they are doing. In particular, they ask how the definition of the metric in terms of an inner product works for the Minkowski metric, how the Stokes theorem for n-forms is connected to the Stokes theorem for vector densities, and why expressing an n-form in an antisymmetric basis seems to conflict with the volume element being a density.
  • #1
haushofer
Science Advisor
Insights Author
2,952
1,497
Hi, I have some elementary questions about geometry. I often find that I am perfectly able to do calculations, but sometimes I have the feeling I'm not totally understanding what I'm actually doing. Maybe this is familiar to some of you ;) Up till now I have some questions about quite different topics, and maybe some other questions will pop up in my mind; I hope some of you can shine your light upon them. :)

1)

About the definition of the metric tensor: the coefficients are defined by

[tex]
g_{\mu\nu} = g(e_{\mu},e_{\nu}) = e_{\mu} \cdot e_{\nu}
[/tex]

where the dot is the standard inner product. It feels to me like some kind of cheating to use the standard inner product to define a general inner product. How would this go for e.g. the Minkowski tensor [tex] \eta_{\mu\nu}[/tex]? What would the basis vectors be? For instance, [tex]\eta_{00} = -1 [/tex], so [tex]e_{0}\cdot e_{0} = -1 [/tex], so wouldn't we need imaginary components?

2)

About viewing the integrand of an integral over a manifold as an n-form, and Stokes Theorem.

One identifies the volume element [tex]d^{4}x[/tex] with [tex] dx^{a}\wedge dx^{b} \wedge dx^{c} \wedge dx^{d}[/tex]. We know that the volume element is a tensor density, but that wedge product looks like an honest tensor... If f is a scalar function, then

[tex]
\int f \sqrt{g} d^{4}x = \int f \sqrt{g} \varepsilon_{abcd} dx^{a}\wedge dx^{b} \wedge dx^{c} \wedge dx^{d} = \int f \epsilon
[/tex]

where [tex] \epsilon_{abcd} = \sqrt{g} \varepsilon_{abcd} [/tex] and [tex]\varepsilon[/tex] is the Levi-Civita alternating symbol. So here I would say that [tex]\sqrt{g}[/tex] is a density and [tex]\epsilon[/tex] is a tensor, so the wedge product of coordinate functions must also be a density. But if I express an n-form in an antisymmetric basis as [tex]\omega = \omega_{abcd} dx^{a} \wedge dx^{b} \wedge dx^{c} \wedge dx^{d}[/tex], I know that the wedge product of coordinate functions is a tensor (after all, it is an antisymmetric basis for tensors)... I'm overlooking something, but what?
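The role of [tex]\sqrt{g}[/tex] as a density factor can be checked in a small SymPy sketch (my own illustration, not from the thread), using the flat 3D Euclidean metric written in spherical coordinates:

```python
import sympy as sp

# A hypothetical example: the flat 3D Euclidean metric in spherical
# coordinates, g = diag(1, r^2, r^2 sin^2(theta)).
r, th = sp.symbols('r theta', positive=True)
g = sp.diag(1, r**2, r**2 * sp.sin(th)**2)

# det g = r^4 sin^2(theta), so sqrt(g) = r^2 sin(theta) for 0 < theta < pi:
# exactly the density factor in the familiar volume element
# r^2 sin(theta) dr dtheta dphi.
det_g = sp.simplify(g.det())
print(det_g)  # r**4*sin(theta)**2
```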

Also, if I have that

[tex]
\int f \sqrt{g} d^{4}x = \int d\omega
[/tex]

by looking at the integrand as an n-form, how can I solve this to find the 3-form [tex]\omega[/tex]? I have the feeling I don't quite understand the connection between the Stokes theorem concerning n-forms and the Stokes theorem concerning vector densities Y:

[tex]
\int_{M} d\omega = \int_{\partial M} \omega
[/tex]

and

[tex]
\int_{M} \nabla_{\mu} Y^{\mu} d \Omega = \int_{\partial M} Y^{\mu}dS_{\mu}
[/tex]
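The second (vector-density) form of Stokes' theorem is the divergence theorem, and the two sides can be checked explicitly with SymPy on a toy vector field over the flat unit square (a hypothetical example of mine; here [tex]\sqrt{g} = 1[/tex]):

```python
import sympy as sp

x, y = sp.symbols('x y')
# A toy vector field on the flat unit square
Yx, Yy = x**2, x*y

# Left-hand side: integral of the divergence over the square
div_Y = sp.diff(Yx, x) + sp.diff(Yy, y)
lhs = sp.integrate(sp.integrate(div_Y, (x, 0, 1)), (y, 0, 1))

# Right-hand side: outward flux Y^mu dS_mu through the four edges
flux = (sp.integrate(Yx.subs(x, 1), (y, 0, 1))     # right edge, n = (1, 0)
        - sp.integrate(Yx.subs(x, 0), (y, 0, 1))   # left edge,  n = (-1, 0)
        + sp.integrate(Yy.subs(y, 1), (x, 0, 1))   # top edge,   n = (0, 1)
        - sp.integrate(Yy.subs(y, 0), (x, 0, 1)))  # bottom edge, n = (0, -1)

print(lhs, flux)  # both 3/2
```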

3)

About looking at vectors as differential operators and one-forms as differentials.

I understand that one can look upon a vector as being a differential operator,

[tex]
X = X^{\mu}\partial_{\mu}
[/tex]

and that the basis vectors are given by

[tex]
e_{\mu} = \partial_{\mu}
[/tex]

But in my mind vectors have numerical values. What does a statement like

[tex]
dx^{\mu} (\partial_{\nu} ) = \delta ^{\mu}_{\nu}
[/tex]

mean? Is it appropriate to look upon it as if there is a one-to-one correspondence between numerical values and the operators themselves? This also pops up if you consider the norm of a vector: how do I consider the norm of a basis vector if it is given by a differential operator? I feel uncomfortable assigning the vector a certain numerical value in that way, like

[tex]
\partial_{\mu} = (1,0,0,0)
[/tex]

so I don't understand what it means to perform an inner product between two vectors expressed via differential operators.
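A minimal SymPy sketch of the pairing in question 3 (my own illustration): [tex]dx^{\mu}(\partial_{\nu})[/tex] is nothing but the derivative of the coordinate function [tex]x^{\mu}[/tex] along the coordinate direction [tex]x^{\nu}[/tex], which gives the Kronecker delta:

```python
import sympy as sp

coords = sp.symbols('t x y z')

# dx^mu(partial_nu) = d(x^mu)/d(x^nu): differentiate each coordinate
# function along each coordinate direction.
pairing = sp.Matrix(4, 4, lambda mu, nu: sp.diff(coords[mu], coords[nu]))
print(pairing)  # the 4x4 identity matrix, i.e. the Kronecker delta
```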

It's bothering me for quite some time, so who can help me? :)
 
  • #3
Hi, haushofer,

haushofer said:
About the definition of the metric tensor: the coefficients are defined by
[tex]
g_{\mu\nu} = g(e_{\mu},e_{\nu}) = e_{\mu} \cdot e_{\nu}
[/tex]
where the dot is the standard inner product.

To avoid confusion with frame fields, you should write your coordinate basis vectors as [itex]\partial_{x^\mu}[/itex], rather than [itex]\vec{e}_\mu[/itex].

haushofer said:
It feels for me some kind of cheating to use the standard inner product to define a general inner product.

Not at all; this follows the philosophy of "bundling" infinitesimal structures defined at the level of a jet space at a point to obtain a structure defined on a manifold.

haushofer said:
How would this go for eg the minkowski tensor [tex] \eta_{\mu\nu}[/tex] ?

You mean, "how would this work for Lorentzian four-manifolds instead of Riemannian four-manifolds?" You would use a quadratic form with signature (-1,1,1,1) instead of (1,1,1,1) at the level of tangent spaces in order to define your metric tensor.
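That "signature at the level of tangent spaces" point can be made concrete in a small NumPy sketch (my own, assuming components in an inertial coordinate basis): the minus sign lives in the quadratic form, not in the basis vectors, so no imaginary components are needed:

```python
import numpy as np

# The Minkowski quadratic form with signature (-1, 1, 1, 1): a real,
# symmetric bilinear form -- no imaginary components anywhere.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def inner(u, v):
    """eta(u, v) = eta_{mu nu} u^mu v^nu."""
    return u @ eta @ v

e0 = np.array([1.0, 0.0, 0.0, 0.0])  # a perfectly real basis vector
print(inner(e0, e0))  # -1.0: the minus sign comes from eta, not from e0
```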

haushofer said:
What would the basis vectors be?

Coordinate basis vectors.

haushofer said:
Maybe too many questions at once :P

Yes, given that you didn't tell us what books you are studying or anything about your mathematical background or interests (e.g. are you hoping to understand gtr?). Thus, I have not made any serious attempt to answer even the first set of questions, pending your response to my query.

(A general complaint, which I express from time to time: why oh why do inquiring posters at forums like PF so rarely think to mention their background or to cite their recent reading? Is it simple laziness or a genuine inability to understand how useful this information would be to those trying to frame useful responses?)
 
  • #4
Chris Hillman said:
Hi, haushofer,

To avoid confusion with frame fields, you should write your coordinate basis vectors as [itex]\partial_{x^\mu}[/itex], rather than [itex]\vec{e}_\mu[/itex].

Not at all; this follows the philosophy of "bundling" infinitesimal structures defined at the level of a jet space at a point to obtain a structure defined on a manifold.

You mean, "how would this work for Lorentzian four-manifolds instead of Riemannian four-manifolds?" You would use a quadratic form with signature (-1,1,1,1) instead of (1,1,1,1) at the level of tangent spaces in order to define your metric tensor.

Coordinate basis vectors.

Yes, given that you didn't tell us what books you are studying or anything about your mathematical background or interests (e.g. are you hoping to understand gtr?). Thus, I have not made any serious attempt to answer even the first set of questions, pending your response to my query.

(A general complaint, which I express from time to time: why oh why do inquiring posters at forums like PF so rarely think to mention their background or to cite their recent reading? Is it simple laziness or a genuine inability to understand how useful this information would be to those trying to frame useful responses?)


Ok, sorry for not being more informative about my background; I didn't think about it. I'm a physics student writing my thesis. I have read a good deal of GR books, like Carroll, d'Inverno, Hervik & Grøn, pieces of Wald, and a fair deal of geometry books. So I'm familiar with things like forms, index-free notation, Lie derivatives, etc.

About question 1 and 3:

Ok, so in Minkowski spacetime we have, for our set of basis vectors [tex]\partial_{\mu} [/tex], that

[tex]
\partial_{\mu} \cdot \partial_{\nu} = \eta_{\mu\nu}
[/tex]

So I would say, working in coordinates (t,x,y,z), that

[tex]
\partial_{t}\cdot\partial_{t} = -1
[/tex]

and

[tex]
\partial_{x}\cdot\partial_{x} = \partial_{y}\cdot\partial_{y}= \partial_{z}\cdot\partial_{z}=1
[/tex]

How do I interpret such an inner product between differential operators? If I work out these inner products, one solution would be

[tex]
\partial_{t} = (i,0,0,0)
[/tex]
(because i·i = -1)

and

[tex]
\partial_{x} = (0,1,0,0)
[/tex]

etc. But my problems with this are:

* I have the idea that something is going wrong, because I'm not comfortable with having a basis vector with an imaginary component.

* I'm not sure what it means that a derivative operator can be represented by a set of numbers.
 
  • #5
It is customary to avoid the imaginary unit by using a real-valued quadratic form with signature (-1,1,1,1) which is then bundled. Isn't this clear from say Carroll?
 
  • #6
Chris Hillman said:
It is customary to avoid the imaginary unit by using a real-valued quadratic form with signature (-1,1,1,1) which is then bundled. Isn't this clear from say Carroll?


Well, apparently not :) I don't see how one loses the imaginary unit. The situation is like "we have the Minkowski metric; what are the basis vectors which make up this metric?" Then I end up with that factor of i in the first basis vector.
 
  • #7
I am more and more flummoxed how to respond without trying to write my own tutorials more or less from scratch (way too much work).

Can you clarify "thesis"? (An honors thesis?) What undergraduate math courses have you taken? In what region of the world do you attend school?
 
  • #8
Chris Hillman said:
I am more and more flummoxed how to respond without trying to write my own tutorials more or less from scratch (way too much work).

Can you clarify "thesis"? (An honors thesis?) What undergraduate math courses have you taken? In what region of the world do you attend school?

I'm writing my master's thesis in Holland. I've taken the usual math courses, like complex analysis and calculus, and a course on differential geometry in physics.
 
  • #9
Weird, since you seem to be adhering to very archaic notation. Well, I think you will need to unlearn some bad habits you were apparently taught. I think the best thing is for you to get a more elementary book on low dimensional differential geometry, like Millman and Parker, and master the material in that, and then try a more modern textbook on differential geometry in physics, e.g. the one by Isham, before our language will have sufficient overlap to make conversation convenient.
 
  • #10
Chris Hillman said:
Weird, since you seem to be adhering to very archaic notation. Well, I think you will need to unlearn some bad habits you were apparently taught. I think the best thing is for you to get a more elementary book on low dimensional differential geometry, like Millman and Parker, and master the material in that, and then try a more modern textbook on differential geometry in physics, e.g. the one by Isham, before our language will have sufficient overlap to make conversation convenient.

So my question can't be answered in a simple way? I'm not sure why you are convinced that my notation is "archaic"; it is the notation used in a lot of material.

Then I will look somewhere else for an answer to my question. Or maybe here, from someone who is not bothered by my "archaic" notation.
 
  • #11
haushofer said:
So my question can't be answered in a simple way? I'm not sure why you are convinced of my notation being "archaic", it is the notation used in a lot of material.

Then I will look somewhere else to answer my question. Or maybe here from some-one who is not bothered by my "archaic" notation.

In fairness to Chris, your notation (or at least parts of it) is archaic. Take, for example, when you write

haushofer said:
[tex]\partial_\mu\cdot\partial_\nu = \eta_{\mu\nu}[/tex]

This is a strange way of writing things. In particular, your insistence on using a dot to denote inner products is pretty uncommon. For example, I would write the above as

[tex]\eta = \eta_{\mu\nu} dx^\mu\otimes dx^\nu,[/tex]

where

[tex]\eta_{\mu\nu} \equiv \eta(\partial_\mu,\partial_\nu)[/tex]

are the components of the metric tensor [itex]\eta[/itex] with respect to the tensor product basis [itex]dx^\mu\otimes dx^\nu[/itex], and where [itex]\{\partial_\mu\}[/itex] and [itex]\{dx^\mu\}[/itex] are mutually dual bases of [itex]T_pM[/itex] and [itex]T_p^*M[/itex], respectively.
 
  • #12
shoehorn said:
In fairness to Chris, your notation (or at least parts of it) is archaic. Take, for example, when you write

This is a strange way of writing things. In particular, your insistence on using a dot to denote inner products is pretty uncommon. For example, I would write the above as

[tex]\eta = \eta_{\mu\nu} dx^\mu\otimes dx^\nu,[/tex]

where

[tex]\eta_{\mu\nu} \equiv \eta(\partial_\mu,\partial_\nu)[/tex]

are the components of the metric tensor [itex]\eta[/itex] with respect to the tensor product basis [itex]dx^\mu\otimes dx^\nu[/itex], and where [itex]\{\partial_\mu\}[/itex] and [itex]\{dx^\mu\}[/itex] are mutually dual bases of [itex]T_pM[/itex] and [itex]T_p^*M[/itex], respectively.

Ok, well, maybe that is where the confusion comes from. Let me ask the question differently: what would be a proper choice of basis vectors [tex]\partial_{\mu}[/tex] for Minkowski spacetime?
 
  • #13
haushofer said:
Ok, well, maybe that is where the confusion comes from. Let me ask the question differently: what would be a proper choice of basis vectors [tex]\partial_{\mu}[/tex] for Minkowski spacetime?

Minkowski space, [itex](\mathbb{R}^4,\eta)[/itex], has the terribly nice property that one can have a globally defined system of inertial coordinates. If, for example, [itex]x^\mu[/itex] [itex](\mu=0,1,2,3)[/itex] is this system of coordinates, then a basis for this spacetime is simply the set of coordinate partial derivatives

[tex]\{\partial_\mu\} \equiv
\left\{ \frac{\partial}{\partial x^\mu}\right\} [/tex]

Thus, choose any globally defined system of inertial coordinates and you get a basis for vectors on Minkowski space for free. By a similar token, one also gets a basis for one-forms as [itex]\{dx^\mu\}[/itex].

By the way, having just re-read your original post, it seems to me that much of your confusion may stem from the fact that you are unfamiliar with the idea of vectors being differential operators which act on functions. In a roundabout way, this is probably what Chris was getting at: you can't expect to gain any real understanding of the differential geometry used in (general) relativity unless you understand how a vector can be regarded as a differential operator and, by extension, how a tangent space of vectors can be regarded as an equivalence class of curves in a manifold. So, following him, I'd suggest that you concentrate first on this modern view of things. As already said, Isham's book contains rather a nice discussion of this, as do books such as Nakahara's and Frankel's, or indeed any modern book on differential geometry.
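As a sketch of "a vector is a differential operator acting on functions" (my own toy example, not taken from the books named): the vector [tex]X = X^{\mu}\partial_{\mu}[/tex] eats a scalar function and returns its directional derivative.

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y  # a hypothetical scalar function on the plane

# The vector X = 3 d/dx + 1 d/dy, applied to f as a differential operator:
# X(f) = X^mu partial_mu f, the directional derivative of f along (3, 1)
Xf = 3 * sp.diff(f, x) + 1 * sp.diff(f, y)
print(Xf)  # 6*x*y + x**2
```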
 
Last edited:
  • #14
Furthermore, to answer your original question (1), consider the following. If you're familiar with vectors only from, say, vector analysis in [itex]\mathbb{R}^n[/itex] then it is of course possible to become confused as to how a vector could have negative norm. However, the important point to remember is that ordinary vector calculus is an extraordinarily simple and special case.

For example, in [itex](\mathbb{R}^4,\eta)[/itex], the Minkowski space you originally asked about, an important idea is that you can split the set of all possible vectors into one of three classes. If we take the Minkowski metric to be of 'mostly plus' signature (this is standard in relativity, LQG, string theory, and so on, but most definitely not standard in QFT), these three classes are


1) Spacelike vectors: These are vectors whose norm, as measured with the Minkowski metric [itex]\eta[/itex], is positive. Thus spacelike vectors are all those vectors [itex]X[/itex] such that [itex]\eta(X,X)>0[/itex].

2) Null vectors: These are vectors whose norm is zero, i.e., those vectors [itex]X[/itex] such that [itex]\eta(X,X)=0[/itex].

3) Timelike vectors: These are vectors whose norm is negative. Thus, these are vectors [itex]X[/itex] such that [itex]\eta(X,X)<0[/itex].

So the answer to your question is straightforward: it's just a simple generalization of the allowed types of norm one can have on a vector space.
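The three classes above can be sketched in a few lines of NumPy (a hypothetical helper of mine, using the 'mostly plus' convention from this post):

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])  # 'mostly plus' signature

def causal_character(X):
    """Classify X by the sign of its norm eta(X, X)."""
    n = X @ eta @ X
    if n > 0:
        return 'spacelike'
    if n < 0:
        return 'timelike'
    return 'null'

print(causal_character(np.array([0.0, 1.0, 0.0, 0.0])))  # spacelike
print(causal_character(np.array([1.0, 0.0, 0.0, 0.0])))  # timelike
print(causal_character(np.array([1.0, 1.0, 0.0, 0.0])))  # null
```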
 
  • #15
I am familiar with the concept of vectors being differential operators acting on functions, but it seems to me that when you want to do calculations, you want to express your vectors in numbers. I have the feeling that my definition of the metric, with "the ordinary scalar product", is a little old-fashioned, and that with that particular point of view you always keep those imaginary components. To avoid that, you simply state that the metric can also be chosen to be a bilinear tensor with components diag(-1,1,1,1), for instance in Minkowski spacetime. Right?

For example, I understand that the basis for the tangent space is
[tex]
e_{\mu} = \partial_{\mu}
[/tex]
and for the dual of it
[tex]
e^{\mu} = dx^{\mu}
[/tex]

I also understand that these vectors act on each other, and so the duality of these spaces is expressed as
[tex]
dx^{\mu}(\partial_{\nu}) = \partial_{\nu}(dx^{\mu}) = \delta^{\mu}_{\nu}
[/tex]
But if I want to express those objects numerically, then I get some conceptual problems. What does it mean that "the vector with components (1,0,0) can be identified with [tex]\partial_{x}[/tex]"? It may look to you as if I didn't read the essential differential geometry, but I did. It's just that I have trouble with some of the concepts in it.

I will look into that book by Nakahara; it looks like a nice book to have.
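The identification the last paragraph asks about can be sketched in NumPy (my own illustration, not from the thread): the tuples are just the components of [tex]\partial_{\mu}[/tex] with respect to the coordinate basis, and the inner product of two such operators is the real bilinear form [tex]\eta[/tex] applied to those components:

```python
import numpy as np

# In the coordinate basis, partial_t <-> (1,0,0,0) and partial_x <-> (0,1,0,0):
# the "numbers" are components with respect to that basis, nothing more.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
d_t = np.array([1.0, 0.0, 0.0, 0.0])
d_x = np.array([0.0, 1.0, 0.0, 0.0])

def eta_pair(u, v):
    # bilinear and real-valued: eta(u, v) = eta_{mu nu} u^mu v^nu
    return u @ eta @ v

print(eta_pair(d_t, d_t), eta_pair(d_x, d_x), eta_pair(d_t, d_x))
# -1.0 1.0 0.0 -- all real; no imaginary components required
```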
 

