# Tensors, getting to know them

## Main Question or Discussion Point

Hello. I am trying to get the hang of tensors. I saw this written in http://mathworld.wolfram.com/MetricTensor.html
I just wanted to make sure it was correct.

$g^{\alpha\beta} = \widehat{e}^{i}\ast \widehat{e}^{j}$

This says that the dot product of two unit vectors equals the metric tensor. Like I said, I am very new to this stuff and I'm just trying to get the hang of it.


That should be

$$g^{\alpha\beta} = \widehat{e}^{\alpha}\ast \widehat{e}^{\beta}$$

Neither Wolfram nor Wikipedia is the best place to learn about tensors. If you want to learn how tensors are used in relativity, use a relativity text that introduces tensors in the process of teaching you relativity.

Wolfram made a mistake? Crap! I guess I have no choice, I have to buy a book. I'm looking at some books on Amazon.com, in particular A First Course in General Relativity by Schutz.

I'm watching the Leonard Susskind video lectures on general relativity. I was watching lecture 7, but I really didn't understand lectures 4 through 6. Tensors are made to look so easy, but they're not. I know that contravariant indices are superscripts and covariant indices are subscripts, but I'm unclear on why you would use one or the other.

Matterwave
Gold Member
The metric tensor is not the dot product of two basis vectors; that COMPONENT of the metric tensor is, by definition, the dot product of two basis vectors. The metric tensor tells you HOW you DO dot products in your vector space.
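The point that the metric is what defines the dot product can be sketched numerically. A minimal numpy illustration, assuming the Minkowski metric with signature (-, +, +, +) and two made-up four-vectors:

```python
import numpy as np

# Minkowski metric components in an inertial frame, signature (-, +, +, +)
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# Two sample four-vectors (made-up components, both in the same basis)
u = np.array([2.0, 1.0, 0.0, 0.0])
v = np.array([1.0, 3.0, 0.0, 0.0])

# The metric defines the dot product: u.v = g_{ab} u^a v^b
dot = u @ g @ v
print(dot)  # -2*1 + 1*3 = 1.0
```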

I thought that the metric tensor had to do with translation from one coordinate system to another; perhaps that is just one part. The relationship between the bookkeeper frame (e.g. flat space) and the proper frame is an equation that uses the space-time metric, Christoffel symbols, and other tensors. But all the mathematical machinery is still very mysterious to me. I'm trying to convince myself that I should spend \$50 and buy the book. The goal is to be able to articulate my ideas about the Einstein equations.

So dot products are part of mapping from one coordinate system to another; dot products are done with the metric tensor. I'm assuming that something has to operate on the metric tensor before a dot product can take place.

Fredrik
Staff Emeritus
Gold Member
Schutz's book is a good place to start learning about tensors. If you really want to be good at it, you might want to continue with "Introduction to smooth manifolds" by John M. Lee after that.

This post is a good place to get an overview. (Ignore the first two paragraphs).

The metric tensor has nothing to do with coordinate changes.

I haven't seen the notation used by Wolfram before; however, they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing out the basis, the equation would say $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$

I'm really not ready for manifolds, but I have bookmarked the link for when I am. I bought Schutz's book. I hope it's authoritative and correct. I have this sense that tensors are really just matrices, vectors and calculus. I need to experience the drudgery of using only those things so that the tensor can be the hero that makes everything easier. :rofl: I forgot, there is nothing easy about general relativity.

Phrak, you're killing me.

In Susskind's lecture #2 he writes down the formula for the $\nabla$ operator, but he left off the unit vectors. The definition of $\nabla$ should look something like this,
$\vec\nabla = \frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}$
In other words, a partial derivative with a unit vector.
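That definition is concrete enough to compute symbolically. A small sympy sketch (the scalar field f is just an arbitrary example) that stores the components of $\vec\nabla f$ along $\hat{x},\hat{y},\hat{z}$ as a list:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y*z  # an arbitrary scalar field for illustration

# grad f = (df/dx) x-hat + (df/dy) y-hat + (df/dz) z-hat :
# each component is a partial derivative attached to one unit vector
grad_f = [sp.diff(f, s) for s in (x, y, z)]
print(grad_f)  # [2*x, z, y]
```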

Fredrik
Staff Emeritus
Gold Member
I have this sense that tensors are really just matrices, vectors and calculus.
What I liked the most about Schutz's approach is that he made that clear. At least that's how I remember it, but it was a long time ago. He talks about vector spaces, their dual spaces, bases and multilinear maps, making it perfectly clear what a tensor is without even mentioning manifolds. However, a lot of people who use the word "tensor" really mean "tensor field". To understand the difference, you need to read something like the post I linked to, and the ones I linked to in that one. It can certainly wait until you understand Schutz's definition of "tensor", but you shouldn't wait much longer than that.

I remember reading about duals. I was trying to figure out what the difference was between contravariant and covariant tensors. I read somewhere that a contravariant tensor is the dual of a covariant tensor. I'm just not sure what a dual is; is it an inverse?

Fredrik
Staff Emeritus
Gold Member
These things are explained in the post I linked to earlier.

Matterwave
Gold Member
I believe what I said should still be correct, though.

$${\bf g}(\vec{e_i},\vec{e_j}) \equiv g_{ij}=g_{kl}e^k_ie^l_j \equiv \vec{e_i}\cdot\vec{e_j}$$

Correct?

Fredrik
Staff Emeritus
Gold Member
Yes.
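Matterwave's formula $g_{ij}=\vec{e_i}\cdot\vec{e_j}$ is easy to check numerically in flat Euclidean space, where the ordinary dot product is available from the start. A sketch with a made-up non-orthonormal basis:

```python
import numpy as np

# A non-orthonormal basis for flat Euclidean 2-space, written out
# in a background orthonormal frame (made-up numbers)
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])
E = np.stack([e1, e2])  # row i holds the components of e_i

# g_ij = e_i . e_j -- here the ordinary Euclidean dot product
g = E @ E.T
print(g)  # [[1. 1.]
          #  [1. 2.]]
```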

Let's call the vector space V. The dot in eq. (9) is just the usual "almost inner product" on V, defined by the metric. The dot in eq. (8) is a similar "almost inner product" on V*. (I don't want to use the term "inner product" since (in relativity) we don't have $\langle x,x\rangle\geq 0$ for all x).

Eq. (9) says that $g_{\alpha\beta}=e_\alpha\cdot e_\beta$. This is true because both the left-hand side and the right-hand side are defined as $g(e_\alpha,e_\beta)$.

Eq. (8) says that $g^{\alpha\beta}=e^\alpha\cdot e^\beta$. This one is a bit more tricky. The map $u\mapsto g(u,\cdot)$ is an isomorphism from V to V*. So if r and s are members of V*, there exist unique u,v in V such that $r=g(u,\cdot)$ and $s=g(v,\cdot)$. This suggests a way to define $r\cdot s$. We define it to be equal to $g(u,v)$. It's not too hard to show that the member of V that corresponds to $e^\alpha$ is $g^{\alpha\gamma}e_\gamma$, where $g^{\alpha\gamma}$ denotes the $\alpha\gamma$ component of the inverse of the matrix with components $g_{\alpha\gamma}$. So $$e^\alpha\cdot e^\beta =g\big(g^{\alpha\gamma} e_\gamma,g^{\beta\delta}e_\delta\big) =g^{\alpha\gamma}g_{\gamma\delta} g^{\beta\delta}=\delta^\alpha_\delta g^{\beta\delta} =g^{\alpha\beta}.$$

Regarding the $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$, it has the indices upstairs on the left, and downstairs on the right. Maybe you're thinking of the fact that the almost inner product on V* can be expressed as $$g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta},$$ meaning that
$$g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}(r,s)=r\cdot s.$$
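The chain $e^\alpha\cdot e^\beta = g^{\alpha\gamma}g_{\gamma\delta}g^{\beta\delta} = g^{\alpha\beta}$ can be verified numerically for any symmetric, invertible metric matrix. A numpy sketch with made-up components:

```python
import numpy as np

# Made-up symmetric, invertible metric components g_{ab}
g = np.array([[-1.0, 0.5],
              [ 0.5, 2.0]])
ginv = np.linalg.inv(g)  # components g^{ab} of the inverse matrix

# e^a . e^b = g^{ac} g_{cd} g^{bd}; this should reproduce g^{ab}
dual_dots = np.einsum('ac,cd,bd->ab', ginv, g, ginv)
assert np.allclose(dual_dots, ginv)
```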

So the covariant metric tensor is a function of the two sets of unit vectors that are going to be transformed over? Is the covariant metric tensor the (almost) inner product between two sets of unit vectors?

Fredrik
Staff Emeritus
Gold Member
There's only one metric tensor, g. Its components in the basis $\{e_\alpha\}$ are $g_{\alpha\beta}=g(e_\alpha,e_\beta)$. $g^{\alpha\beta}$ denotes the component on row $\alpha$, column $\beta$ of the inverse of the matrix that has $g_{\alpha\beta}$ on row $\alpha$, column $\beta$. g is a bilinear form with all the properties of an inner product, except g(u,u)≥0 for all u. Bilinear forms like inner products and metrics aren't "between two sets of unit vectors". They are functions that take a pair of vectors to a number. No part of what I said had anything to do with coordinate transformations, since I used the same basis the whole time.

I'm studying this equation. How did $$g(...,g^{\beta \delta} e_{\delta}) = ...g_{\gamma \delta}$$? Where did the covariant metric tensor come from?

Fredrik
Staff Emeritus
Gold Member
The only part that I didn't explain is how to find out that $g^{\alpha\gamma}e_\gamma$ is the member of V that corresponds to $e^\alpha$. We're looking for the $u\in V$ such that $e^\alpha=g(u,\cdot)$. Start by having both sides of the equality act on $e_\beta$.
$$\left\{ \begin{array}{l} e^{\alpha}(e_\beta)=\delta^\alpha_\beta\\ g(u,\cdot)(e_\beta)=g(u,e_\beta)=g(u^\gamma e_\gamma,e_\beta)=u^\gamma g_{\gamma\beta} \end{array} \right.~~\Rightarrow~~ \delta^\alpha_\beta=u^\gamma g_{\gamma\beta}$$ Multiply both sides of the last equality by $g^{\beta\delta}$. The result is $g^{\alpha\delta}=u^\delta$. This implies that $$u=u^\gamma e_\gamma=g^{\alpha\gamma}e_\gamma.$$
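The same computation can be done as a linear solve: for each $\alpha$, find the components $u^\gamma$ satisfying $u^\gamma g_{\gamma\beta}=\delta^\alpha_\beta$ and compare with row $\alpha$ of the inverse metric. A numpy sketch with made-up metric components:

```python
import numpy as np

# Made-up symmetric, invertible metric components g_{ab}
g = np.array([[-1.0, 0.5],
              [ 0.5, 2.0]])
ginv = np.linalg.inv(g)

for alpha in range(2):
    delta = np.eye(2)[alpha]
    # Solve u^gamma g_{gamma beta} = delta^alpha_beta for the u^gamma,
    # i.e. the linear system g.T @ u = delta
    u = np.linalg.solve(g.T, delta)
    # The solution is u^delta = g^{alpha delta}, row alpha of the inverse
    assert np.allclose(u, ginv[alpha])
```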

Matterwave
Gold Member
I find it's not all that helpful to think of the metric tensor (or its components) as some dot product of basis vectors. The metric tensor defines the dot product, so it's kind of circular reasoning.

I'm still here. There are a lot of subtle details here that I'm trying to understand. Something like a $u^{\gamma}$ is just a simple vector. Likewise, $e^{\beta}$ is just a unit vector. I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.

Matterwave
Gold Member
$e^\beta$ is usually not used to denote a unit vector, but rather a basis one-form. Basis vectors do not necessarily have unit length. The metric tensor g defines the dot product, as well as a natural one-to-one correspondence between one-forms and vectors. It's related to gravitation in that gravitation itself is merely curvature in space-time, and the curvature depends on the metric.

I should have said that $e_{\beta}$ is a unit vector. When I think of vectors, I think of $\vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}$. When I write it this way, the index is covariant.

Just a quick look at one-forms; Wikipedia says: "Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates:"

That makes me think that $e^{\beta}$ is used for dealing with differentials. I've seen differential equations before.

Yes, it does make sense that gravitation is just a curvature of space-time.

Matterwave
Gold Member
Basis vectors need not be normalized (i.e. they need not have unit length). Using orthonormal basis vectors actually requires you to modify your methods a little in GR; this is called the tetrad method (a tetrad being 4 orthonormal basis vectors). Calculating tensors and such is slightly different using this method.

So if I write something like $g^{\alpha \gamma}e_{\gamma} = e^{\alpha}$, then I am writing down a transformation of the basis unit vector.

Now $e^{\alpha}e_\beta = \delta^{\alpha}_{\beta}$ is starting to make sense.

I do wonder about those one form $e^{\beta}$ differential objects. How do differentials enter the picture?

Fredrik
Staff Emeritus
Gold Member
Something like a $u^{\gamma}$ is just a simple vector.
It's the component of a vector in a basis. What you wrote as $\vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}$, can be written as $A=A^\mu e_\mu$.

I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.
We still need a metric even when there's no such thing as gravity (i.e. in special relativity). If you're asking whether the choice of the symbol g was inspired by it, I don't know, but it's possible, since a lot of differential geometry was developed after it was discovered to be needed in general relativity.

So if I write something like $g^{\alpha \gamma}e_{\gamma} = e_{\gamma}$, then I am writing down a transformation of the basis unit vector.
Consider an example: If $g^{\alpha\beta}$ denotes the components of the metric of Minkowski spacetime in an inertial coordinate system, then what you wrote down means $$g^{\alpha 0}e_0+g^{\alpha 1}e_1 +g^{\alpha 2}e_2+g^{\alpha 3}e_3=e_\gamma$$ and this simplifies to $$-e_0+e_1+e_2+e_3=e_\gamma$$ which is clearly false for all $\gamma$, assuming that $\{e_\alpha\}_{\alpha=0}^3$ is a basis.

Now $e^{\alpha}e_\beta = \delta^{\alpha}_{\beta}$ is starting to make sense.
It's the definition of a basis on V*. This is explained in the post I linked to earlier, and more details can be found in the first of the three posts I linked to in the end of that one.

I do wonder about those one form $e^{\beta}$ differential objects. How do differentials enter the picture?
For any smooth function $f:U\rightarrow\mathbb R$, there's a cotangent vector $(df)_p\in T_pM^*$ for each $p\in U$. I think some authors would call each $(df)_p$ a 1-form, and the map $p\mapsto (df)_p$ a 1-form field, and that others would just call $(df)_p$ a cotangent vector and $p\mapsto(df)_p$ a 1-form. $(df)_p$ is the cotangent vector defined by $(df)_p(v)=v(f)$ for all $v\in T_pM$.

There are several ways to define the tangent space $T_pM$. (Click the last link in the post I linked to earlier for more information). When we define $T_pM$ as a space of derivative operators, the basis vectors associated with the coordinate system $x:V\rightarrow \mathbb R^n$ are the partial derivative operators $\left.\frac{\partial}{\partial x^\mu}\right|_p$ (defined in that post), and the dual of this basis is $\{(dx^\mu)_p\}$, where $x^\mu$ is the function that takes $p\in V$ to $(x(p))^\mu$.
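The defining relation $(df)_p(v)=v(f)$, with $v=v^\mu\left.\frac{\partial}{\partial x^\mu}\right|_p$, is concrete enough to compute. A sympy sketch with a made-up function, point, and tangent-vector components:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 * y      # a made-up smooth function
p = {x: 1, y: 2}  # a made-up point
v = (3, -1)       # made-up components v^mu of a tangent vector at p

# (df)_p(v) = v(f) = v^mu * (partial f / partial x^mu), evaluated at p
df_p_v = sum(c * sp.diff(f, s).subs(p) for c, s in zip(v, (x, y)))
print(df_p_v)  # 3*4 + (-1)*1 = 11
```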

Regarding the $g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}$, it has the indices upstairs on the left, and downstairs on the right.
My mistake.