# Tensors, getting to know them

Hello. I am trying to get the hang of tensors. I saw this written in http://mathworld.wolfram.com/MetricTensor.html
I just wanted to make sure it was correct.

$g^{\alpha\beta} = \widehat{e}^{i}\ast \widehat{e}^{j}$

Which says that the dot product of two unit vectors equals the metric tensor. Like I said, I am very new to this stuff and I'm just trying to get the hang of it.

## Answers and Replies

That should be

$$g^{\alpha\beta} = \widehat{e}^{\alpha}\ast \widehat{e}^{\beta}$$

Wolfram (and Wikipedia) isn't the best place to learn about tensors. If you want to learn how tensors are used in relativity, use a relativity text that introduces tensors in the process of teaching you relativity.

Wolfram made a mistake? Crap! I guess I have no choice, I have to buy a book. I'm looking at some books on Amazon.com, in particular A First Course in General Relativity by Schutz.

I'm watching the Leonard Susskind video series on general relativity. I was watching lecture 7, but I really didn't understand lectures 4 through 6. Tensors are made to look so easy, but they're not. I know that contravariant indices are superscripts and covariant indices are subscripts, but I'm unclear on why you would use one or the other.

Matterwave
Gold Member
The metric tensor is not the dot product of two basis vectors, that COMPONENT of the metric tensor is the dot product of two basis vectors, by definition. The metric tensor tells you HOW you DO dot products in your vector space.
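
Matterwave's point can be sketched numerically. Here is a minimal illustration (the 2D metric below is made up for the example, and NumPy is assumed): the dot product of two component vectors is whatever the metric says it is.

```python
import numpy as np

# A made-up metric on a 2D space: e.g. the metric of the plane in polar
# coordinates (r, theta) at radius r = 2, so g = diag(1, r^2) = diag(1, 4).
g = np.diag([1.0, 4.0])

def dot(u, v):
    """The dot product *defined by* the metric: g(u, v) = g_ab u^a v^b."""
    return u @ g @ v

u = np.array([1.0, 1.0])
v = np.array([0.0, 1.0])

print(dot(u, v))   # 4.0 -- using the metric g
print(u @ v)       # 1.0 -- the same components under the Euclidean metric
```

The same pair of component vectors gives different numbers under different metrics, which is the sense in which the metric tells you how to do dot products.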

I thought that the metric tensor had to do with translation from one coordinate system to another; perhaps that is just one part. The relationship between the bookkeeper frame (e.g. flat space) and the proper frame is an equation that uses the space-time metric, Christoffel symbols, and other tensors. But all the mathematical machinery is still very mysterious to me. I'm trying to convince myself that I should spend \$50 and buy the book. The goal is to be able to articulate my ideas about the Einstein equations.

So dot products are part of mapping from one coordinate system to another; dot products are done with the metric tensor. I'm assuming that something has to operate on the metric tensor before a dot product can take place.

Fredrik
Staff Emeritus
Gold Member
Schutz's book is a good place to start learning about tensors. If you really want to be good at it, you might want to continue with "Introduction to smooth manifolds" by John M. Lee after that.

This post is a good place to get an overview. (Ignore the first two paragraphs).

The metric tensor has nothing to do with coordinate changes.

I haven't seen the notation used by Wolfram before; however, they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing out the basis, the equation would say $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$

I'm really not ready for manifolds; but I have bookmarked the link for when I am. I bought Schutz's book. I hope it's authoritative and correct. I have this sense that tensors are really just matrices, vectors and calculus. I need to experience the drudgery of using only these things so that the tensor can be the hero that makes everything easier.:rofl: I forgot, there is nothing easy about general relativity.

Phrak, you're killing me.

In Susskind's lecture #2 he writes down the formula for the $\nabla$ operator, but he left off the unit vectors. The definition of $\nabla$ should look something like this,
$\vec\nabla = \frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}$
In other words, a partial derivative with a unit vector.
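
That componentwise reading can be checked symbolically. This is just a generic sketch with SymPy; the scalar field f below is made up:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y + sp.sin(z)   # a made-up scalar field

# grad f = (df/dx) x_hat + (df/dy) y_hat + (df/dz) z_hat:
# the list below holds the component attached to each unit vector.
grad_f = [sp.diff(f, var) for var in (x, y, z)]

print(grad_f)   # [2*x*y, x**2, cos(z)]
```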

Fredrik
Staff Emeritus
Gold Member
I have this sense that tensors are really just matrices, vectors and calculus.
What I liked the most about Schutz's approach is that he made that clear. At least that's how I remember it, but it was a long time ago. He talks about vector spaces, their dual spaces, bases and multilinear maps, making it perfectly clear what a tensor is without even mentioning manifolds. However, a lot of people who use the word "tensor" really mean "tensor field". To understand the difference, you need to read something like the post I linked to, and the ones I linked to in that one. It can certainly wait until you understand Schutz's definition of "tensor", but you shouldn't wait much longer than that.

I remember reading about duals. I was trying to figure out what the difference was between contravariant and covariant tensors. I read somewhere that a contravariant tensor is the dual of a covariant tensor. I'm just not sure what a dual is; is it an inverse?

Fredrik
Staff Emeritus
Gold Member
I remember reading about duals. I was trying to figure out what the difference was between contravariant and covariant tensors. I read somewhere that a contravariant tensor is the dual of a covariant tensor. I'm just not sure what a dual is; is it an inverse?
These things are explained in the post I linked to earlier.

Matterwave
Gold Member
I believe what I said should still be correct, though.

$${\bf g}(\vec{e_i},\vec{e_j}) \equiv g_{ij}=g_{kl}e^k_ie^l_j \equiv \vec{e_i}\cdot\vec{e_j}$$

Correct?

Fredrik
Staff Emeritus
Gold Member
I believe what I said should still be correct, though.

$${\bf g}(\vec{e_i},\vec{e_j}) \equiv g_{ij}=g_{kl}e^k_ie^l_j \equiv \vec{e_i}\cdot\vec{e_j}$$

Correct?
Yes.
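
The formula $g_{ij}=\vec{e_i}\cdot\vec{e_j}$ above is easy to check numerically with a deliberately non-orthonormal basis in the Euclidean plane (the particular basis below is made up for illustration):

```python
import numpy as np

# Columns of E are the basis vectors e_1, e_2, written in an orthonormal
# ambient frame where the dot product is the usual one.
E = np.array([[1.0, 1.0],
              [0.0, 2.0]])   # e_1 = (1, 0), e_2 = (1, 2): not orthonormal

# Components of the metric in this basis: g_ij = e_i . e_j
g = E.T @ E

print(g)   # g_11 = 1, g_12 = g_21 = 1, g_22 = 5
```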

Let's call the vector space V. The dot in eq. (9) is just the usual "almost inner product" on V, defined by the metric. The dot in eq. (8) is a similar "almost inner product" on V*. (I don't want to use the term "inner product" since (in relativity) we don't have $\langle x,x\rangle\geq 0$ for all x).

Eq. (9) says that $g_{\alpha\beta}=e_\alpha\cdot e_\beta$. This is true because both the left-hand side and the right-hand side are defined as $g(e_\alpha,e_\beta)$.

Eq. (8) says that $g^{\alpha\beta}=e^\alpha\cdot e^\beta$. This one is a bit more tricky. The map $u\mapsto g(u,\cdot)$ is an isomorphism from V to V*. So if r and s are members of V*, there exist unique u,v in V such that $r=g(u,\cdot)$ and $s=g(v,\cdot)$. This suggests a way to define $r\cdot s$. We define it to be equal to $g(u,v)$. It's not too hard to show that the member of V that corresponds to $e^\alpha$ is $g^{\alpha\gamma}e_\gamma$, where $g^{\alpha\gamma}$ denotes the $\alpha\gamma$ component of the inverse of the matrix with components $g_{\alpha\gamma}$. So $$e^\alpha\cdot e^\beta =g\big(g^{\alpha\gamma} e_\gamma,g^{\beta\delta}e_\delta\big) =g^{\alpha\gamma}g_{\gamma\delta} g^{\beta\delta}=\delta^\alpha_\delta g^{\beta\delta} =g^{\alpha\beta}.$$

Regarding the $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$, it has the indices upstairs on the left, and downstairs on the right. Maybe you're thinking of the fact that the almost inner product on V* can be expressed as $$g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta},$$ meaning that
$$g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}(r,s)=r\cdot s.$$
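
The chain of equalities above can be verified numerically for a sample metric (the metric below is made up; `numpy.linalg.inv` supplies the components $g^{\alpha\beta}$):

```python
import numpy as np

# A made-up symmetric, non-degenerate metric g_{alpha beta}.
g_lower = np.array([[2.0, 1.0],
                    [1.0, 3.0]])
g_upper = np.linalg.inv(g_lower)   # components g^{alpha beta}

# e^alpha . e^beta = g^{alpha gamma} g_{gamma delta} g^{beta delta}
lhs = np.einsum('ag,gd,bd->ab', g_upper, g_lower, g_upper)

# ...and the chain collapses to g^{alpha beta}:
print(np.allclose(lhs, g_upper))   # True
```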

So the covariant metric tensor is a function of the two sets of unit vectors that are going to be transformed over? Is the covariant metric tensor the inner product (almost) between two sets of unit vectors.

Fredrik
Staff Emeritus
Gold Member
So the covariant metric tensor is a function of the two sets of unit vectors that are going to be transformed over? Is the covariant metric tensor the inner product (almost) between two sets of unit vectors.
There's only one metric tensor, g. Its components in the basis $\{e_\alpha\}$ are $g_{\alpha\beta}=g(e_\alpha,e_\beta)$. $g^{\alpha\beta}$ denotes the component on row $\alpha$, column $\beta$ of the inverse of the matrix that has $g_{\alpha\beta}$ on row $\alpha$, column $\beta$. g is a bilinear form with all the properties of an inner product, except g(u,u)≥0 for all u. Bilinear forms like inner products and metrics aren't "between two sets of unit vectors". They are functions that take a pair of vectors to a number. No part of what I said had anything to do with coordinate transformations, since I used the same basis the whole time.

Eq. (8) says that $g^{\alpha\beta}=e^\alpha\cdot e^\beta$. This one is a bit more tricky. The map $u\mapsto g(u,\cdot)$ is an isomorphism from V to V*. So if r and s are members of V*, there exist unique u,v in V such that $r=g(u,\cdot)$ and $s=g(v,\cdot)$. This suggests a way to define $r\cdot s$. We define it to be equal to $g(u,v)$. It's not too hard to show that the member of V that corresponds to $e^\alpha$ is $g^{\alpha\gamma}e_\gamma$, where $g^{\alpha\gamma}$ denotes the $\alpha\gamma$ component of the inverse of the matrix with components $g_{\alpha\gamma}$. So $$e^\alpha\cdot e^\beta =g\big(g^{\alpha\gamma} e_\gamma,g^{\beta\delta}e_\delta\big) =g^{\alpha\gamma}g_{\gamma\delta} g^{\beta\delta}=\delta^\alpha_\delta g^{\beta\delta} =g^{\alpha\beta}.$$

I'm studying this equation. How did $$g(...,g^{\beta \delta} e_{\delta}) = ...g_{\gamma \delta}$$? Where did the covariant metric tensor come from?

Fredrik
Staff Emeritus
Gold Member
The only part that I didn't explain is how to find out that $g^{\alpha\gamma}e_\gamma$ is the member of V that corresponds to $e^\alpha$. We're looking for the $u\in V$ such that $e^\alpha=g(u,\cdot)$. Start by having both sides of the equality act on $e_\beta$.
$$\left\{ \begin{array}{l} e^{\alpha}(e_\beta)=\delta^\alpha_\beta\\ g(u,\cdot)(e_\beta)=g(u,e_\beta)=g(u^\gamma e_\gamma,e_\beta)=u^\gamma g_{\gamma\beta} \end{array} \right.~~\Rightarrow~~ \delta^\alpha_\beta=u^\gamma g_{\gamma\beta}$$ Multiply both sides of the last equality by $g^{\beta\delta}$. The result is $g^{\alpha\delta}=u^\delta$. This implies that $$u=u^\gamma e_\gamma=g^{\alpha\gamma}e_\gamma.$$

Matterwave
Gold Member
I find it's not all that helpful to think of the metric tensor (or its components) as some dot product of basis vectors. The metric tensor defines the dot product, so it's kind of circular reasoning.

I'm still here. There are a lot of subtle details here that I'm trying to understand. Something like a $u^{\gamma}$ is just a simple vector. Likewise, $e^{\beta}$ is just a unit vector. I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.

Matterwave
Gold Member
$e^\beta$

Is usually not used to denote a unit vector, but rather a basis one-form. Basis vectors do not necessarily need to be unit length. The metric tensor g defines the dot product, as well as a natural one-to-one correspondence between one-forms and vectors. It's related to gravitation in that gravitation itself is merely curvature in space-time. The curvature depends on the metric.

I should have said that $e_{\beta}$ is a unit vector. When I think of vectors, I think of $\vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}$. When I write it this way, the index is covariant.

Just a quick look at one-forms, from wiki, quotes as: "Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates:"

That makes me think that $e^{\beta}$ is used for dealing with differentials. I've seen differential equations before.

Yes, it does make sense that gravitation is just a curvature of space-time.

Matterwave
Gold Member
Basis vectors need not be normalized (i.e. they need not have unit length). Using orthonormal basis vectors actually requires you to modify your methods a little in GR; this is called the tetrad method (a tetrad being 4 orthonormal basis vectors). Calculating tensors and such is slightly different using this method.

So if I write something like $g^{\alpha \gamma}e_{\gamma} = e^{\alpha}$, then I am writing down a transformation of the basis unit vector.

Now $e^{\alpha}e_\beta = \delta^{\alpha}_{\beta}$ is starting to make sense.

I do wonder about those one form $e^{\beta}$ differential objects. How do differentials enter the picture?

Fredrik
Staff Emeritus
Gold Member
Something like a $u^{\gamma}$ is just a simple vector.
It's the component of a vector in a basis. What you wrote as $\vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}$ can be written as $A=A^\mu e_\mu$.

I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.
We still need a metric even when there's no such thing as gravity (i.e. in special relativity). If you're asking whether the choice of the symbol g was inspired by it, I don't know, but it's possible, since a lot of differential geometry was developed after it was discovered to be needed in general relativity.

So if I write something like $g^{\alpha \gamma}e_{\gamma} = e_{\gamma}$, then I am writing down a transformation of the basis unit vector.
Consider an example: If $g^{\alpha\beta}$ denotes the components of the metric of Minkowski spacetime in an inertial coordinate system, then what you wrote down means $$g^{\alpha 0}e_0+g^{\alpha 1}e_1 +g^{\alpha 2}e_2+g^{\alpha 3}e_3=e_\gamma.$$ Since the matrix is diagonal, for $\alpha=0$ this reduces to $$-e_0=e_\gamma,$$ which is clearly false for all $\gamma$, assuming that $\{e_\alpha\}_{\alpha=0}^3$ is a basis.
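
This Minkowski computation can be checked numerically (assumptions: signature (-,+,+,+), and the basis vectors $e_\alpha$ taken as the standard columns of $\mathbb{R}^4$; the code illustrates the $\alpha=0$ case):

```python
import numpy as np

eta_upper = np.diag([-1.0, 1.0, 1.0, 1.0])   # g^{alpha beta} for Minkowski
basis = np.eye(4)                            # e_0 ... e_3 as columns

# g^{0 gamma} e_gamma: the sum picks out -e_0, which is not equal to
# any single basis vector e_gamma.
raised_0 = basis @ eta_upper[0]

print(raised_0)   # [-1.  0.  0.  0.]
assert not any(np.allclose(raised_0, basis[:, c]) for c in range(4))
```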

Now $e^{\alpha}e_\beta = \delta^{\alpha}_{\beta}$ is starting to make sense.
It's the definition of a basis on V*. This is explained in the post I linked to earlier, and more details can be found in the first of the three posts I linked to in the end of that one.

I do wonder about those one form $e^{\beta}$ differential objects. How do differentials enter the picture?
For any smooth function $f:U\rightarrow\mathbb R$, there's a cotangent vector $(df)_p\in T_pM^*$ for each $p\in U$. I think some authors would call each $(df)_p$ a 1-form, and the map $p\mapsto (df)_p$ a 1-form field, and that others would just call $(df)_p$ a cotangent vector and $p\mapsto(df)_p$ a 1-form. $(df)_p$ is the cotangent vector defined by $(df)_p(v)=v(f)$ for all $v\in T_pM$.

There are several ways to define the tangent space $T_pM$. (Click the last link in the post I linked to earlier for more information). When we define $T_pM$ as a space of derivative operators, the basis vectors associated with the coordinate system $x:V\rightarrow \mathbb R^n$ are the partial derivative operators $\left.\frac{\partial}{\partial x^\mu}\right|_p$ (defined in that post), and the dual of this basis is $\{(dx^\mu)_p\}$, where $x^\mu$ is the function that takes $p\in V$ to $(x(p))^\mu$.

Regarding the $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$, it has the indices upstairs on the left, and downstairs on the right.

My mistake.

You said that a smooth function $f:U\rightarrow \mathbb R$, which means that some function f is used to map the set of objects of U into the space R. The set of objects includes cotangent vectors $(df)_p\in T_p = df_1 + df_2 +df_3$ which are elements of $T_p M^*$; $T_p M$ is a tangent space; I guess that means it's tangent to a curve s through space-time. $T_p$ is a vector, but I don't know what M is.

I found this definition in a book by Lovelock and Rund.
A set of n quantities $(A^1,...A^n)$ is said to constitute the components of a contravariant vector at a point P with coordinates $(x^1,...,x^n)$ if, under transformation 4.1, these quantities transform according to the relations
$\bar{A^j}=\sum^{n}_{h=1}\frac{\partial \overline{x}^j}{\partial x^h} A^h$

Transformation 4.1 is $\bar{x}^j=\bar{x}^j(x^h)$

Oh my! Now we're talking about functions $f^1,...,f^n$

OK, I've got some idea that contravariant vectors are differentials. Do I know enough now to grapple with the Einstein equations?

Fredrik
Staff Emeritus
Gold Member
You said that a smooth function $f:U\rightarrow \mathbb R$, which means that some function f is used to map the set of objects of U into the space R. The set of objects includes cotangent vectors $(df)_p\in T_p = df_1 + df_2 +df_3$ which are elements of $T_p M^*$; $T_p M$ is a tangent space; I guess that means it's tangent to a curve s through space-time. $T_p$ is a vector, but I don't know what M is.
M is a smooth manifold (e.g. a sphere, or spacetime in SR or GR). U is an open set in M. p is a member of U. $T_pM$ is the tangent space at p (a vector space associated with the point p). Earlier, I was talking about a vector space V. In this context, $T_pM$ is that vector space. So think $T_pM=V$. The notation $T_p$ is not defined. I suppose it could refer to the value of a vector field at p, which would make $T_p$ a tangent vector at p, but I didn't denote any vector field by T.

$(df)_p$ is a cotangent vector, i.e. a member of $T_pM^*$. The basis of $T_pM^*$ that's dual to the basis $\big\{\frac{\partial}{\partial x}\big|_p\big\}$ of $T_pM$ is $\{(dx^\mu)_p\}$. So $(df)_p$ can be written as $(df)_p=((df)_p)_\mu (dx^\mu)_p$, where
$$((df)_p)_\mu =(df)_p\bigg(\frac{\partial}{\partial x^\mu}\!\bigg|_p\bigg) =\frac{\partial}{\partial x^\mu}\!\bigg|_p f =\frac{\partial f(p)}{\partial x^\mu} =(f\circ x^{-1})_{,\mu}(x(p)).$$ This holds for all p in U, so we can also write $$df=\frac{\partial f}{\partial x^\mu}dx^\mu.$$ If you still haven't read the post I linked to earlier or the posts I linked to in that one, you really need to do that now.
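
The formula $df=\frac{\partial f}{\partial x^\mu}dx^\mu$ can be sketched with SymPy (the function f below is made up; the point is only that the components of $(df)_p$ in the dual basis $\{(dx^\mu)_p\}$ are the partial derivatives at p):

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0**2 + sp.exp(x1)   # a made-up smooth function

# The components of df in the basis {dx^mu} are the partial derivatives.
df_components = [sp.diff(f, v) for v in (x0, x1)]

# Evaluate at a sample point p = (2, 0): (df)_p = 4 (dx^0)_p + 1 (dx^1)_p
at_p = [c.subs({x0: 2, x1: 0}) for c in df_components]
print(at_p)   # [4, 1]
```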

Fredrik
Staff Emeritus
Gold Member
I found this definition is a book by Lovelock and Rund.
A set of n quantities $(A^1,...A^n)$ is said to constitute the components of a contravariant vector at a point P with coordinates $(x^1,...,x^n)$ if, under transformation 4.1, these quantities transform according to the relations
$\bar{A^j}=\sum^{n}_{h=1}\frac{\partial \overline{x}^j}{\partial x^h} A^h$

Transformation 4.1 is $\bar{x}^j=\bar{x}^j(x^h)$

Oh my! Now we're talking about functions $f^1,...,f^n$
I absolutely hate that definition, but unfortunately, it's very common in physics books. What irritates me the most about it is that the people who use it can't even state it right. It's not just a set of n "quantities". It's a set of n "quantities" (vectors actually, in the sense that they are members of some vector space) associated with each coordinate system. Without that piece of information, the concept of "transformation" doesn't make sense. The tensor transformation law describes how the "quantities" associated with one coordinate system are related to the corresponding "quantities" associated with another coordinate system. It's not the set of "quantities" that should be called a tensor in this definition. It's the function that associates one such set with each coordinate system.

However, I would advise you to ignore this definition until you fully understand the modern definitions. (See this post).

OK, I've got some idea that contravariant vectors are differentials. Do I know enough now to grapple with the Einstein equations?
I would say that you're at least a month of pretty hard work away from that.
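
The transformation law in the quoted definition can still be exercised numerically, even if you prefer the modern definitions. A sketch (the sample point, the vector, and the Cartesian-to-polar coordinate change are all made up for illustration):

```python
import numpy as np

# Coordinates: x = (x, y) Cartesian, xbar = (r, theta) polar.
x, y = 3.0, 4.0
r = np.hypot(x, y)   # 5.0

# Jacobian d(xbar^j)/d(x^h): rows (dr/dx, dr/dy) and (dtheta/dx, dtheta/dy).
J = np.array([[x / r,      y / r],
              [-y / r**2,  x / r**2]])

A = np.array([1.0, 0.0])   # contravariant components A^h in Cartesian
A_bar = J @ A              # Abar^j = (d xbar^j / d x^h) A^h

print(A_bar)   # the same vector's components in polar coordinates
```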

Hi Fredrik,
I was looking at the link you provided about manifolds. You said,
The metric at p is a function g:TpM×TpM→ℝ that's linear in both variables and satisfies g(u,v)=g(v,u) and one more thing that I'll mention in a minute.
I recognise g as the metric tensor; but I thought g(u,v) only meant that g(u,v) is a function of u and v; such a statement is very general. So why do we worry about a strict rule that g(u,v) = g(v,u), which implies that sometimes this isn't true? What am I misunderstanding?

Fredrik
Staff Emeritus
Gold Member
Hi Fredrick,
I was looking at the link you provided about manifolds. You said,
I recognise g as the metric tensor; but I thought g(u,v) only meant that g(u,v) is a function of u and v; such a statement is very general. So why do we worry about a strict rule that g(u,v) = g(v,u), which implies that sometimes this isn't true? What am I misunderstanding?
g denotes a function. g(u,v) and g(v,u) denote numbers in its range. When u≠v, the condition g(u,v)=g(v,u) says that g takes two different members of its domain to the same number. The statement you quoted is incomplete. It should say that we require that g(u,v)=g(v,u) for all u,v in TpM.

Matterwave
Gold Member
The metric tensor must be symmetric because it defines the inner product. An inner product must be symmetric, i.e. a dot b must be the same as b dot a, or else this is no longer an inner product.

(At least it defines a semi-inner product since positive definiteness is not always satisfied)

I had to look at wiki, which says:
In the mathematical field of differential geometry, a metric tensor is a type of function defined on a manifold (such as a surface in space) which takes as input a pair of tangent vectors v and w and produces a real number (scalar) g(v,w) in a way that generalizes many of the familiar properties of the dot product of vectors in Euclidean space. In the same way as a dot product, metric tensors are used to define the length of, and angle between, tangent vectors.
So a metric tensor is:
1. a function,
2. defined on a manifold,
3. which takes as input a pair of tangent vectors,
4. spits out a scalar,
5. is a dot product,
6. that dot product is an inner product,
7. and must be symmetric: g(u,v) = g(v,u).

Matterwave
Gold Member
Metrics, however, do not need to be positive definite like an inner product technically does.

Fredrik
Staff Emeritus
Gold Member
OK, let's try to be really accurate here. A metric on a smooth manifold M isn't a tensor, it's a global tensor field of type (0,2). That means that it's a function that takes each point in the manifold to a tensor of type (0,2) at that point. I will denote the tensor that g associates with the point p by gp, and I will call it "the metric at p".

For each p in M, gp is a (0,2) tensor at p. Each one of these tensors (one for each point p in the manifold) is a bilinear, symmetric, non-degenerate function from TpM×TpM into ℝ.

Bilinear means that for each $u\in T_pM$, the maps $v\mapsto g_p(u,v)$ and $v\mapsto g_p(v,u)$ are both linear.

Symmetric means that for all $u,v\in T_pM$, we have $g_p(u,v)=g_p(v,u)$.

Non-degenerate means that the map $u\mapsto g_p(u,\cdot)$ from $T_pM$ to $T_pM^*$ is a bijection. (Here $g_p(u,\cdot)$ denotes the map that takes v to $g_p(u,v)$.)

Compare this with the definition of an inner product on TpM. An inner product on TpM is a bilinear, symmetric, positive definite function $s:T_pM\times T_pM\to\mathbb R$. Positive definite means two things: 1. For all $u\in T_pM$, we have $s(u,u)\geq 0$. 2. For all $u\in T_pM$, we have $s(u,u)=0$ only if $u=0$.

As you can see, an inner product on TpM has properties very similar to the metric at p, but the requirements are not quite the same. The requirements on inner products do however imply that inner products are non-degenerate. This means that a global (0,2) tensor field that assigns an inner product gp to each p in M would be a metric. Such a metric is called a Riemannian metric. A smooth manifold with a Riemannian metric is called a Riemannian manifold. Spacetime in GR and SR is not a Riemannian manifold, because there are (for each p) lots of non-zero vectors such that $g_p(u,u)=0$, and even lots of vectors such that $g_p(u,u)<0$.

(In case you're not sure, "map" and "function" mean exactly the same thing).
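
The requirements on $g_p$ (symmetric, non-degenerate) and the failure of positive definiteness can all be checked numerically for the Minkowski metric, taking its components in an inertial basis:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric components

# Symmetric: g(u, v) = g(v, u) because the component matrix is symmetric.
assert np.allclose(g, g.T)

# Non-degenerate: u -> g(u, .) is a bijection iff the matrix is invertible.
assert abs(np.linalg.det(g)) > 0

# NOT positive definite: a non-zero null vector has g(u, u) = 0,
# and a timelike vector has g(u, u) < 0.
null = np.array([1.0, 1.0, 0.0, 0.0])
timelike = np.array([1.0, 0.0, 0.0, 0.0])
assert null @ g @ null == 0.0
assert timelike @ g @ timelike < 0.0
print("symmetric, non-degenerate, not positive definite")
```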
