# Some elementary geometry questions

1. Nov 11, 2007

### haushofer

Hi, I have some elementary questions about geometry. I often find that I am perfectly able to do calculations, but sometimes I have the feeling I'm not totally understanding what I'm actually doing. Maybe this is familiar to some of you ;) Up till now I have some questions about quite different topics, and maybe some other questions will pop up in my mind. I hope some of you can shine their light upon it. :)

1)

About the definition of the metric tensor: the coefficients are defined by

$$g_{\mu\nu} = g(e_{\mu},e_{\nu}) = e_{\mu} \cdot e_{\nu}$$

where the dot is the standard inner product. It feels to me like some kind of cheating to use the standard inner product to define a general inner product. How would this go for, e.g., the Minkowski tensor $$\eta_{\mu\nu}$$? What would the basis vectors be? For instance, $$\eta_{00} = -1$$, so $$e_{0}\cdot e_{0} = -1$$, so wouldn't we need imaginary components?

2)

About viewing the integrand of an integral over a manifold as an n-form, and Stokes Theorem.

One identifies the volume element $$d^{4}x$$ with $$dx^{a}\wedge dx^{b} \wedge dx^{c} \wedge dx^{d}$$ . We know that the volume-element is a tensor density, but that wedge product looks like an honest tensor... If f is a scalar function, then

$$\int f \sqrt{g} d^{4}x = \int f \sqrt{g} \varepsilon_{abcd} dx^{a}\wedge dx^{b} \wedge dx^{c} \wedge dx^{d} = \int f \epsilon$$

where $$\epsilon_{abcd} = \sqrt{g} \varepsilon_{abcd}$$ and $$\varepsilon$$ is the Levi-Civita alternating symbol. So here I would say that the sqrt is a density and $$\epsilon$$ is a tensor, so the wedge product of coordinate functions is also a density. But if I express an n-form in an antisymmetric basis as $$\omega = \omega_{abcd} dx^{a} \wedge dx^{b} \wedge dx^{c} \wedge dx^{d}$$, I know that the wedge product of coordinate functions is a tensor (after all, it is an antisymmetric basis for tensors)... I'm overlooking something, but what?
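(As a sanity check on the density behaviour, here is a minimal sympy sketch; 2D polar coordinates stand in for a general coordinate change. The point is that $$\sqrt{g}$$ picks up exactly the Jacobian factor that $$d^{n}x$$ loses, so $$\sqrt{g}\, d^{n}x$$ is invariant.)

```python
import sympy as sp

# 2D polar coordinates standing in for a general coordinate change:
# sqrt(det g) in the new chart equals the Jacobian determinant, i.e.
# exactly the factor that d^n x loses, so sqrt(g) d^n x is invariant.
r, th = sp.symbols('r theta', positive=True)
cart = (r*sp.cos(th), r*sp.sin(th))      # Cartesian coords as functions of (r, theta)
J = sp.Matrix(2, 2, lambda i, j: sp.diff(cart[i], (r, th)[j]))
g_polar = sp.simplify(J.T * J)           # pullback of the flat Euclidean metric
assert sp.sqrt(g_polar.det()) == sp.simplify(J.det())   # both equal r
```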

Also, if I have that

$$\int f \sqrt{g} d^{4}x = \int d\omega$$

by looking at the integrand as an n-form, how can I solve this to find the 3-form $$\omega$$? I have the feeling I don't quite understand the connection between the Stokes theorem concerning n-forms and the Stokes theorem concerning vector densities $$Y$$:

$$\int_{M} d\omega = \int_{\partial M} \omega$$

and

$$\int_{M} \nabla_{\mu} Y^{\mu} \, d\Omega = \int_{\partial M} Y^{\mu} dS_{\mu}$$

3)

About looking at vectors as differential operators and one-forms as differentials.

I understand that one can look upon a vector as being a differential operator,

$$X = X^{\mu}\partial_{\mu}$$

and that the basis vectors are given by

$$e_{\mu} = \partial_{\mu}$$

But in my mind vectors have numerical values. What does a statement like

$$dx^{\mu} (\partial_{\nu} ) = \delta ^{\mu}_{\nu}$$

mean? Is it appropriate to look upon it as if there is a one-to-one correspondence between numerical values and the operators themselves? This also pops up if you consider the norm of a vector; how do I consider the norm of a basis vector if it is given by a differential operator? I feel uncomfortable giving the vector a certain numerical value in that way, like

$$\partial_{\mu} = (1,0,0,0)$$

so I don't understand what it means to perform an inner product between two vectors expressed via differential operators.

It's bothering me for quite some time, so who can help me? :)

Last edited: Nov 11, 2007
2. Nov 11, 2007

### haushofer

Maybe too many questions at once :P

3. Nov 11, 2007

### Chris Hillman

Hi, haushofer,

To avoid confusion with frame fields, you should write your coordinate basis vectors as $\partial_{x^\mu}$, rather than $\vec{e}_\mu$.

Not at all; this follows the philosophy of "bundling" infinitesimal structures defined at the level of a jet space at a point to obtain a structure defined on a manifold.

You mean, "how would this work for Lorentzian four-manifolds instead of Riemannian four-manifolds?" You would use a quadratic form with signature (-1,1,1,1) instead of (1,1,1,1) at the level of tangent spaces in order to define your metric tensor.
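To see concretely that nothing imaginary is needed, here is a minimal numpy sketch (the names are of course illustrative, not from any particular textbook): one simply *defines* the inner product through the signature matrix, rather than through the Euclidean dot product of component tuples.

```python
import numpy as np

# Signature (-1, 1, 1, 1) quadratic form; the inner product is
# *defined* through this matrix, so every basis vector stays real.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def minkowski_dot(u, v):
    return u @ eta @ v

e0 = np.array([1.0, 0.0, 0.0, 0.0])   # real timelike basis vector
e1 = np.array([0.0, 1.0, 0.0, 0.0])
assert minkowski_dot(e0, e0) == -1.0   # no imaginary components needed
assert minkowski_dot(e1, e1) == 1.0
assert minkowski_dot(e0, e1) == 0.0
```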

Coordinate basis vectors.

Yes, given that you didn't tell us what books you are studying or anything about your mathematical background or interests (e.g. are you hoping to understand gtr?). Thus, I have not made any serious attempt to answer even the first set of questions, pending your response to my query.

(A general complaint, which I express from time to time: why oh why do inquiring posters at forums like PF so rarely think to mention their background or to cite their recent reading? Is it simple laziness or a genuine inability to understand how useful this information would be to those trying to frame useful responses?)

Last edited: Nov 11, 2007
4. Nov 11, 2007

### haushofer

Ok, sorry for not being more informative about my background, I didn't think about it. I'm a physics student writing my thesis. I have read a good deal of GR books, like Carroll, d'Inverno, Hervik & Gron, and pieces of Wald, and a fair deal of geometry books. So I'm familiar with things like forms, index-free notation, Lie derivatives, etc.

Ok, so in Minkowski spacetime we have, for our set of basis vectors $$\partial_{\mu}$$, that

$$\partial_{\mu} \cdot \partial_{\nu} = \eta_{\mu\nu}$$

So I would say, working in coordinates (t,x,y,z), that

$$\partial_{t}\cdot\partial_{t} = -1$$

and

$$\partial_{x}\cdot\partial_{x} = \partial_{y}\cdot\partial_{y}= \partial_{z}\cdot\partial_{z}=1$$

How do I interpret such an inner product between differential operators? If I work out these inner products, one solution would be

$$\partial_{0} = (i,0,0,0)$$
(because $$i \cdot i = -1$$)

and

$$\partial_{x} = (0,1,0,0)$$

etc. But my problems with this are:

* I have the idea that something is going wrong, because I'm not comfortable with having a basis vector with an imaginary component.

* I'm not sure what it means that a derivative operator can be represented by a set of numbers.

Last edited: Nov 11, 2007
5. Nov 11, 2007

### Chris Hillman

It is customary to avoid the imaginary unit by using a real-valued quadratic form with signature (-1,1,1,1) which is then bundled. Isn't this clear from say Carroll?

6. Nov 11, 2007

### haushofer

Well, apparently not :) I don't see how one loses the imaginary unit. The situation is like: "we have the Minkowski metric, what are the basis vectors which make up this metric?" Then I end up with that factor of i in the first basis vector.

7. Nov 11, 2007

### Chris Hillman

I am more and more flummoxed how to respond without trying to write my own tutorials more or less from scratch (way too much work).

Can you clarify "thesis"? (An honors thesis?) What undergraduate math courses have you taken? In what region of the world do you attend school?

Last edited: Nov 11, 2007
8. Nov 12, 2007

### haushofer

I'm writing my master's thesis in Holland. I've taken the usual math courses like complex analysis, calculus, etc., and a course on differential geometry in physics.

9. Nov 12, 2007

### Chris Hillman

Weird, since you seem to be adhering to very archaic notation. Well, I think you will need to unlearn some bad habits you were apparently taught. I think the best thing is for you to get a more elementary book on low dimensional differential geometry, like Millman and Parker, and master the material in that, and then try a more modern textbook on differential geometry in physics, e.g. the one by Isham, before our language will have sufficient overlap to make conversation convenient.

10. Nov 13, 2007

### haushofer

So my question can't be answered in a simple way? I'm not sure why you are convinced that my notation is "archaic"; it is the notation used in a lot of material.

Then I will look somewhere else for an answer to my question. Or maybe here, from someone who is not bothered by my "archaic" notation.

11. Nov 13, 2007

### shoehorn

In fairness to Chris, your notation (or at least parts of it) is archaic. Take, for example, your definition

$$g_{\mu\nu} = e_{\mu} \cdot e_{\nu}.$$

This is a strange way of writing things. In particular, your insistence on using a dot to denote inner products is pretty uncommon. For example, I would write the above as

$$\eta = \eta_{\mu\nu} dx^\mu\otimes dx^\nu,$$

where

$$\eta_{\mu\nu} \equiv \eta(\partial_\mu,\partial_\nu)$$

are the components of the metric tensor $\eta$ with respect to the tensor product basis $dx^\mu\otimes dx^\nu$, and where $\{\partial_\mu\}$ and $\{dx^\mu\}$ are mutually dual bases of $T_pM$ and $T_p^*M$, respectively.

12. Nov 14, 2007

### haushofer

Ok, well, maybe that is where the confusion comes from. Let me ask the question differently: what would be a proper choice of basis vectors $$\partial_{\mu}$$ for Minkowski spacetime?

13. Nov 14, 2007

### shoehorn

Minkowski space, $(\mathbb{R}^4,\eta)$, has the terribly nice property that one can have a globally defined system of inertial coordinates. If, for example, $x^\mu$ $(\mu=0,1,2,3)$ is this system of coordinates, then a basis for this spacetime is simply the set of coordinate partial derivatives

$$\{\partial_\mu\} \equiv \left\{ \frac{\partial}{\partial x^\mu}\right\}$$

Thus, choose any globally defined system of inertial coordinates and you get a basis for vectors on Minkowski space for free. By a similar token, one also gets a basis for one-forms as $\{dx^\mu\}$.
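A quick way to see the duality pairing $$dx^{\mu}(\partial_{\nu}) = \delta^{\mu}_{\nu}$$ concretely (a minimal sympy sketch): the pairing is just the derivative of the coordinate function $$x^{\mu}$$ in the $$\nu$$-th coordinate direction, which is the Kronecker delta.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# dx^mu(partial_nu) is the rate of change of the coordinate
# function x^mu in the nu-th coordinate direction: Kronecker delta.
def pairing(mu, nu):
    return sp.diff(coords[mu], coords[nu])

for mu in range(4):
    for nu in range(4):
        assert pairing(mu, nu) == (1 if mu == nu else 0)
```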

By the way, having just re-read your original post, it seems to me that much of your confusion may stem from the fact that you are unfamiliar with the idea of vectors being differential operators which act on functions. In a roundabout way, this is probably what Chris was getting at: you can't expect to gain any real understanding of the differential geometry used in (general) relativity unless you understand how a vector can be regarded as a differential operator and, by extension, how a tangent vector can be regarded as an equivalence class of curves in a manifold. So, following him, I'd suggest that you concentrate first on this modern view of things. As already said, Isham's book contains rather a nice discussion of this, as do books such as Nakahara's and Frankel's, or indeed any modern book on differential geometry.
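As a tiny illustration of "a vector is a differential operator acting on functions" (a minimal sympy sketch; the function and the components are made-up examples): the component tuple and the operator carry exactly the same information.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = (t, x, y, z)

# A vector X = X^mu * partial_mu is the directional-derivative
# operator with coefficients X^mu acting on scalar functions.
def apply_vector(components, f):
    return sum(c * sp.diff(f, q) for c, q in zip(components, coords))

f = t**2 + x*y                                # made-up scalar function
assert apply_vector((1, 0, 0, 0), f) == 2*t   # the operator d/dt
assert apply_vector((0, 1, 0, 0), f) == y     # the operator d/dx
```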

Last edited: Nov 14, 2007
14. Nov 14, 2007

### shoehorn

Furthermore, to answer your original question (1), consider the following. If you're familiar with vectors only from, say, vector analysis in $\mathbb{R}^n$ then it is of course possible to become confused as to how a vector could have negative norm. However, the important point to remember is that ordinary vector calculus is an extraordinarily simple and special case.

For example, in $(\mathbb{R}^4,\eta)$, the Minkowski space you originally asked about, an important idea is that you can split the set of all possible vectors into one of three classes. If we take the Minkowski metric to be of 'mostly plus' signature (this is standard in relativity, LQG, string theory, and so on, but most definitely not standard in QFT), these three classes are

1) Spacelike vectors: These are vectors whose norm, as measured with the Minkowski metric $\eta$, is positive. Thus spacelike vectors are all those vectors $X$ such that $\eta(X,X)>0$.

2) Null vectors: These are vectors whose norm is zero, i.e., those vectors $X$ such that $\eta(X,X)=0$.

3) Timelike vectors: These are vectors whose norm is negative. Thus, these are vectors $X$ such that $\eta(X,X)<0$.

So the answer to your question is straightforward: it's just a simple generalization of the allowed types of norm one can have on a vector space.
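The classification above can be sketched in a few lines (a minimal numpy sketch, 'mostly plus' signature as above; the sample vectors are illustrative):

```python
import numpy as np

# 'Mostly plus' Minkowski metric, signature (-1, 1, 1, 1).
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def causal_character(X):
    """Classify a 4-vector by the sign of eta(X, X)."""
    norm = X @ eta @ X
    if norm > 0:
        return "spacelike"
    if norm < 0:
        return "timelike"
    return "null"

assert causal_character(np.array([1.0, 0.0, 0.0, 0.0])) == "timelike"
assert causal_character(np.array([0.0, 1.0, 0.0, 0.0])) == "spacelike"
assert causal_character(np.array([1.0, 1.0, 0.0, 0.0])) == "null"
```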

15. Nov 14, 2007

### haushofer

I am familiar with the concept of vectors being differential operators acting on functions, but it seems to me that when you want to do calculations, you want to express your vectors as numbers. I have the feeling that my definition of the metric, with "the ordinary scalar product", is a little old-fashioned, and with that particular point of view you always keep those imaginary components. To avoid that, you simply state that the metric can also be chosen to be a bilinear tensor with components diag{-1,1,1,1}, for instance in Minkowski spacetime. Right?

For example, I understand that the basis for the tangent space is
$$e_{\mu} = \partial_{\mu}$$
and for the dual of it
$$e^{\mu} = dx^{\mu}$$

I also understand that these vectors act on each other, and so the duality of these spaces is expressed as
$$dx^{\mu}(\partial_{\nu}) = \partial_{\nu}(dx^{\mu}) = \delta^{\mu}_{\nu}$$
But if I want to express those objects numerically, then I get some conceptual problems. What does it mean that "the vector with components (1,0,0) can be identified with $$\partial_{x}$$"? It may look to you as if I didn't read the essential differential geometry, but I did. It's just that I have trouble with some of the concepts in it.

I will look for that book by Nakahara; it looks like a nice book to have.