What is the Metric Tensor and How is it Used in Tensors?

  • Thread starter: Mazulu
  • Tags: Tensors
Summary:
The metric tensor is a fundamental concept in tensor analysis, defining how dot products are computed in a vector space. It is not simply the dot product of two basis vectors; rather, it provides a framework for performing these operations. The notation used by Wolfram may be misleading, as it implies a direct product of basis vectors instead of an inner product. Resources like Schutz's "A First Course in General Relativity" are recommended for a clearer understanding of tensors and their applications in relativity. Overall, the metric tensor serves as a bilinear form that relates pairs of vectors, crucial for understanding geometric and physical concepts in relativity.
Mazulu
Hello. I am trying to get the hang of tensors. I saw this written in http://mathworld.wolfram.com/MetricTensor.html
I just wanted to make sure it was correct.

g^{\alpha\beta} =\widehat{e}^{i}\ast \widehat{e}^{j}

Which says that the dot product of two unit vectors equals the metric tensor. Like I said, I am very new to this stuff and I'm just trying to get the hang of it.
 
Mazulu said:
Hello. I am trying to get the hang of tensors. I saw this written in http://mathworld.wolfram.com/MetricTensor.html
I just wanted to make sure it was correct.

g^{\alpha\beta} =\widehat{e}^{i}\ast \widehat{e}^{j}

Which says that the dot product of two unit vectors equals the metric tensor. Like I said, I am very new to this stuff and I'm just trying to get the hang of it.


That should be

g^{\alpha\beta} = \widehat{e}^{\alpha}\ast \widehat{e}^{\beta}

Wolfram and Wikipedia aren't the best places to learn about tensors. If you want to learn how tensors are used in relativity, use a relativity text that introduces tensors in the process of teaching you relativity.
 
Phrak said:
That should be

g^{\alpha\beta} = \widehat{e}^{\alpha}\ast \widehat{e}^{\beta}

Wolfram and Wikipedia aren't the best places to learn about tensors. If you want to learn how tensors are used in relativity, use a relativity text that introduces tensors in the process of teaching you relativity.

Wolfram made a mistake? Crap! I guess I have no choice; I have to buy a book. I'm looking at some books on Amazon.com, such as A First Course in General Relativity by Schutz.


I'm watching Leonard Susskind's video lectures on general relativity. I was watching lecture 7, but I really didn't understand lectures 4 through 6. Tensors are made to look so easy, but they're not. I know that contravariant indices are superscripts and covariant indices are subscripts, but I'm unclear on why you would use one or the other.
 
The metric tensor is not the dot product of two basis vectors; each COMPONENT of the metric tensor is the dot product of two basis vectors, by definition. The metric tensor tells you HOW you DO dot products in your vector space.
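
A minimal numerical sketch of that point (my own illustration, assuming NumPy; the skewed basis is arbitrary): the matrix of components g_ij = e_i · e_j is exactly what you need to compute dot products from components.

```python
import numpy as np

# Two non-orthonormal basis vectors for the Euclidean plane,
# written in ordinary Cartesian components.
e1 = np.array([1.0, 0.0])
e2 = np.array([1.0, 1.0])

# Metric components in this basis: g_ij = e_i . e_j
G = np.array([[e1 @ e1, e1 @ e2],
              [e2 @ e1, e2 @ e2]])

# Components of two vectors in the skewed basis.
u = np.array([2.0, 1.0])   # u = 2 e1 + 1 e2
v = np.array([0.0, 3.0])   # v = 3 e2

# The metric tells you HOW to do the dot product: u . v = g_ij u^i v^j
dot_via_metric = u @ G @ v

# Sanity check against the ordinary Cartesian dot product.
assert np.isclose(dot_via_metric, (2*e1 + 1*e2) @ (3*e2))  # both are 12.0
```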
 
Matterwave said:
The metric tensor is not the dot product of two basis vectors; each COMPONENT of the metric tensor is the dot product of two basis vectors, by definition. The metric tensor tells you HOW you DO dot products in your vector space.

I thought that the metric tensor had to do with translation from one coordinate system to another; perhaps it is just one part. The relationship between the bookkeeper frame (e.g. flat space) and the proper frame is an equation that uses the space-time metric, Christoffel symbols, and other tensors. But all the mathematical machinery is still very mysterious to me. I'm trying to convince myself that I should spend $50 and buy the book. The goal is to be able to articulate my ideas about the Einstein equations.

So dot products are part of mapping from one coordinate system to another; dot products are done with the metric tensor. I'm assuming that something has to operate on the metric tensor before a dot product can take place.
 
Schutz's book is a good place to start learning about tensors. If you really want to be good at it, you might want to continue with "Introduction to smooth manifolds" by John M. Lee after that.

This post is a good place to get an overview. (Ignore the first two paragraphs).

The metric tensor has nothing to do with coordinate changes.
 
Matterwave said:
The metric tensor is not the dot product of two basis vectors, that COMPONENT of the metric tensor is the dot product of two basis vectors, by definition. The metric tensor tells you HOW you DO dot products in your vector space.

I haven't seen the notation used by Wolfram before; however, they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing out the basis, the equation would say g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}
 
Fredrik said:
Schutz's book is a good place to start learning about tensors. If you really want to be good at it, you might want to continue with "Introduction to smooth manifolds" by John M. Lee after that.

This post is a good place to get an overview. (Ignore the first two paragraphs).

The metric tensor has nothing to do with coordinate changes.

I'm really not ready for manifolds, but I have bookmarked the link for when I am. I bought Schutz's book. I hope it's authoritative and correct. I have this sense that tensors are really just matrices, vectors and calculus. I need to experience the drudgery of using only these things so that the tensor can be the hero that makes everything easier.:smile: I forgot, there is nothing easy about general relativity.

I haven't seen the notation used by Wolfram before; however, they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing out the basis, the equation would say g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}
Phrak, you're killing me.:cry:

In Susskind's lecture #2 he writes down the formula for the \nabla operator, but he left off the unit vectors. The definition of \nabla should look something like this,
\vec\nabla = \frac{\partial}{\partial x}\hat{x}+\frac{\partial}{\partial y}\hat{y}+\frac{\partial}{\partial z}\hat{z}
In other words, a partial derivative with a unit vector.
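
A quick sketch of that definition (my own example, using SymPy): the gradient is just the list of partial derivatives attached to the corresponding unit vectors.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y + sp.sin(z)

# grad f = (df/dx) x_hat + (df/dy) y_hat + (df/dz) z_hat;
# each slot of the column holds the component along one unit vector.
grad_f = sp.Matrix([sp.diff(f, var) for var in (x, y, z)])
print(grad_f)   # Matrix([[2*x*y], [x**2], [cos(z)]])
```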
 
Mazulu said:
I have this sense that tensors are really just matrices, vectors and calculus.
What I liked the most about Schutz's approach is that he made that clear. At least that's how I remember it, but it was a long time ago. He talks about vector spaces, their dual spaces, bases and multilinear maps, making it perfectly clear what a tensor is without even mentioning manifolds. However, a lot of people who use the word "tensor" really mean "tensor field". To understand the difference, you need to read something like the post I linked to, and the ones I linked to in that one. It can certainly wait until you understand Schutz's definition of "tensor", but you shouldn't wait much longer than that.
 
  • #10
I remember reading about duals. I was trying to figure out what the difference was between contravariant and covariant tensors. I read somewhere that a contravariant tensor is the dual of a covariant tensor. I'm just not sure what a dual is; is it an inverse?
 
  • #11
Mazulu said:
I remember reading about duals. I was trying to figure out what the difference was between contravariant and covariant tensors. I read somewhere that a contravariant tensor is the dual of a covariant tensor. I'm just not sure what a dual is; is it an inverse?
These things are explained in the post I linked to earlier.
 
  • #12
Phrak said:
I haven't seen the notation used by Wolfram before, however they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing-out the basis, the equation would say g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}

I believe what I said should still be correct, though.

{\bf g}(\vec{e_i},\vec{e_j}) \equiv g_{ij}=g_{kl}e^k_ie^l_j \equiv \vec{e_i}\cdot\vec{e_j}

Correct?
 
  • #13
Matterwave said:
I believe what I said should still be correct, though.

{\bf g}(\vec{e_i},\vec{e_j}) \equiv g_{ij}=g_{kl}e^k_ie^l_j \equiv \vec{e_i}\cdot\vec{e_j}

Correct?
Yes.

Phrak said:
I haven't seen the notation used by Wolfram before, however they have a direct product of basis vectors rather than an inner product. It seems to say that the elements of the metric tensor with upper indices are equal to the direct products of the dual basis. Sounds reasonable...

Writing-out the basis, the equation would say g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}
Let's call the vector space V. The dot in eq. (9) is just the usual "almost inner product" on V, defined by the metric. The dot in eq. (8) is a similar "almost inner product" on V*. (I don't want to use the term "inner product" since (in relativity) we don't have \langle x,x\rangle\geq 0 for all x).

Eq. (9) says that g_{\alpha\beta}=e_\alpha\cdot e_\beta. This is true because both the left-hand side and the right-hand side are defined as g(e_\alpha,e_\beta).

Eq. (8) says that g^{\alpha\beta}=e^\alpha\cdot e^\beta. This one is a bit more tricky. The map u\mapsto g(u,\cdot) is an isomorphism from V to V*. So if r and s are members of V*, there exist unique u,v in V such that r=g(u,\cdot) and s=g(v,\cdot). This suggests a way to define r\cdot s. We define it to be equal to g(u,v). It's not too hard to show that the member of V that corresponds to e^\alpha is g^{\alpha\gamma}e_\gamma, where g^{\alpha\gamma} denotes the \alpha\gamma component of the inverse of the matrix with components g_{\alpha\gamma}. So e^\alpha\cdot e^\beta =g\big(g^{\alpha\gamma} e_\gamma,g^{\beta\delta}e_\delta\big) =g^{\alpha\gamma}g_{\gamma\delta} g^{\beta\delta}=\delta^\alpha_\delta g^{\beta\delta} =g^{\alpha\beta}.
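
A numerical check of that last chain of equalities (my own sketch, using the Minkowski metric for g_{\alpha\beta}):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_{alpha beta} (Minkowski)
g_inv = np.linalg.inv(g)             # g^{alpha beta}

# e^alpha . e^beta = g^{alpha gamma} g_{gamma delta} g^{beta delta}
dots = np.einsum('ag,gd,bd->ab', g_inv, g, g_inv)

assert np.allclose(dots, g_inv)      # indeed equals g^{alpha beta}
```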

Regarding the g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}, it has the indices upstairs on the left, and downstairs on the right. Maybe you're thinking of the fact that the almost inner product on V* can be expressed as g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}, meaning that
g^{\alpha \beta} \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}(r,s)=r\cdot s.
 
  • #14
Fredrik said:
Let's call the vector space V. The dot in eq. (9) is just the usual "almost inner product" on V, defined by the metric. The dot in eq. (8) is a similar "almost inner product" on V*. (I don't want to use the term "inner product" since (in relativity) we don't have \langle x,x\rangle\geq 0 for all x).

Eq. (9) says that g_{\alpha\beta}=e_\alpha\cdot e_\beta. This is true because both the left-hand side and the right-hand side are defined as g(e_\alpha,e_\beta).
So the covariant metric tensor is a function of the two sets of unit vectors that are going to be transformed over? Is the covariant metric tensor the (almost) inner product between two sets of unit vectors?
 
  • #15
Mazulu said:
So the covariant metric tensor is a function of the two sets of unit vectors that are going to be transformed over? Is the covariant metric tensor the (almost) inner product between two sets of unit vectors?
There's only one metric tensor, g. Its components in the basis \{e_\alpha\} are g_{\alpha\beta}=g(e_\alpha,e_\beta). g^{\alpha\beta} denotes the component on row \alpha, column \beta of the inverse of the matrix that has g_{\alpha\beta} on row \alpha, column \beta. g is a bilinear form with all the properties of an inner product, except g(u,u)≥0 for all u. Bilinear forms like inner products and metrics aren't "between two sets of unit vectors". They are functions that take a pair of vectors to a number. No part of what I said had anything to do with coordinate transformations, since I used the same basis the whole time.
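
To make "functions that take a pair of vectors to a number" concrete, here is a minimal sketch (my own, with the Minkowski metric), including the failure of g(u,u) ≥ 0:

```python
import numpy as np

g_matrix = np.diag([-1.0, 1.0, 1.0, 1.0])

def g(u, v):
    """The bilinear form g: eats two vectors, returns one number."""
    return u @ g_matrix @ v

u = np.array([1.0, 0.0, 0.0, 0.0])   # a timelike vector
print(g(u, u))   # -1.0, so g(u,u) >= 0 fails: an "almost" inner product
```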
 
  • #16
Fredrik said:
Eq. (8) says that g^{\alpha\beta}=e^\alpha\cdot e^\beta. This one is a bit more tricky. The map u\mapsto g(u,\cdot) is an isomorphism from V to V*. So if r and s are members of V*, there exist unique u,v in V such that r=g(u,\cdot) and s=g(v,\cdot). This suggests a way to define r\cdot s. We define it to be equal to g(u,v). It's not too hard to show that the member of V that corresponds to e^\alpha is g^{\alpha\gamma}e_\gamma, where g^{\alpha\gamma} denotes the \alpha\gamma component of the inverse of the matrix with components g_{\alpha\gamma}. So e^\alpha\cdot e^\beta =g\big(g^{\alpha\gamma} e_\gamma,g^{\beta\delta}e_\delta\big) =g^{\alpha\gamma}g_{\gamma\delta} g^{\beta\delta}=\delta^\alpha_\delta g^{\beta\delta} =g^{\alpha\beta}.

I'm studying this equation. How did g(...,g^{\beta \delta} e_{\delta}) = ...g_{\gamma \delta}? Where did the covariant metric tensor come from?
 
  • #17
The only part that I didn't explain is how to find out that g^{\alpha\gamma}e_\gamma is the member of V that corresponds to e^\alpha. We're looking for the u\in V such that e^\alpha=g(u,\cdot). Start by having both sides of the equality act on e_\beta.
\left\{\begin{array}{l} e^{\alpha}(e_\beta)=\delta^\alpha_\beta\\ g(u,\cdot)(e_\beta)=g(u,e_\beta)=g(u^\gamma e_\gamma,e_\beta)=u^\gamma g_{\gamma\beta} \end{array}\right.~~\Rightarrow~~ \delta^\alpha_\beta=u^\gamma g_{\gamma\beta}
Multiply both sides of the last equality by g^{\beta\delta}. The result is g^{\alpha\delta}=u^\delta. This implies that u=u^\gamma e_\gamma=g^{\alpha\gamma}e_\gamma.
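
The same computation can be checked numerically (my own sketch): solving u^\gamma g_{\gamma\beta}=\delta^\alpha_\beta for the components u^\gamma really does return a row of the inverse metric.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_{gamma beta}
alpha = 0

# u^gamma g_{gamma beta} = delta^alpha_beta is the linear system g^T u = e_alpha
delta_row = np.eye(4)[alpha]
u = np.linalg.solve(g.T, delta_row)

# Expect u^delta = g^{alpha delta}: row alpha of the inverse matrix.
assert np.allclose(u, np.linalg.inv(g)[alpha])
```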
 
  • #18
I find it's not all that helpful to think of the metric tensor (or its components) as some dot product of basis vectors. The metric tensor defines the dot product, so it's kind of circular reasoning.
 
  • #19
I'm still here. There are a lot of subtle details here that I'm trying to understand. Something like a u^{\gamma} is just a simple vector. Likewise, e^{\beta} is just a unit vector. I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.
 
  • #20
e^\beta is usually not used to denote a unit vector, but rather a basis one-form. Basis vectors do not necessarily need to be unit length. The metric tensor g defines the dot product, as well as a natural way to put a one-to-one correspondence between one-forms and vectors. It's related to gravitation in that gravitation itself is merely curvature in space-time. The curvature depends on the metric.
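
A component-level sketch of that one-to-one correspondence (my own illustration): lower an index with g, raise it back with the inverse, and you recover the vector you started with.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

v = np.array([2.0, 1.0, 0.0, 3.0])   # vector components v^mu

v_form = g @ v           # one-form components v_mu = g_{mu nu} v^nu
v_back = g_inv @ v_form  # raise the index again: g^{mu nu} v_nu

assert np.allclose(v_back, v)   # the correspondence is invertible
```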
 
  • #21
Matterwave said:
e^\beta is usually not used to denote a unit vector, but rather a basis one-form. Basis vectors do not necessarily need to be unit length. The metric tensor g defines the dot product, as well as a natural way to put a one-to-one correspondence between one-forms and vectors. It's related to gravitation in that gravitation itself is merely curvature in space-time. The curvature depends on the metric.

I should have said that e_{\beta} is a unit vector. When I think of vectors, I think of \vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}. When I write it this way, the index is covariant.

Just a quick look at one-forms; Wikipedia says: "Often one-forms are described locally, particularly in local coordinates. In a local coordinate system, a one-form is a linear combination of the differentials of the coordinates:"

That makes me think that e^{\beta} is used for dealing with differentials. I've seen differential equations before.


Yes, it does make sense that gravitation is just a curvature of space-time.
 
  • #22
Basis vectors need not be normalized (i.e. they need not have unit length). Using orthonormal basis vectors actually requires you to modify your methods a little in GR; this is called the tetrad method (a tetrad being 4 orthonormal basis vectors). Calculating tensors and such is slightly different using this method.
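
A minimal sketch of the tetrad idea for a diagonal metric (my own example; the Schwarzschild components at r = 10, \theta = \pi/2, with 2M = 1, are just a convenient test case): normalize each coordinate basis vector and check that the legs are orthonormal.

```python
import numpy as np

r, s = 10.0, 1.0   # sample radius and Schwarzschild radius 2M
g = np.diag([-(1 - s/r), 1/(1 - s/r), r**2, r**2])   # at theta = pi/2

# For a diagonal metric, a tetrad is e_a = (coordinate basis vector)/sqrt(|g_aa|)
tetrad = np.diag(1.0 / np.sqrt(np.abs(np.diag(g))))

# Orthonormality: g(e_a, e_b) should be the Minkowski matrix eta_ab
eta = np.einsum('am,bn,mn->ab', tetrad, tetrad, g)
assert np.allclose(eta, np.diag([-1.0, 1.0, 1.0, 1.0]))
```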
 
  • #23
Fredrik said:
The only part that I didn't explain is how to find out that g^{\alpha\gamma}e_\gamma is the member of V that corresponds to e^\alpha. We're looking for the u\in V such that e^\alpha=g(u,\cdot). Start by having both sides of the equality act on e_\beta.
\left\{\begin{array}{l} e^{\alpha}(e_\beta)=\delta^\alpha_\beta\\ g(u,\cdot)(e_\beta)=g(u,e_\beta)=g(u^\gamma e_\gamma,e_\beta)=u^\gamma g_{\gamma\beta} \end{array}\right.~~\Rightarrow~~ \delta^\alpha_\beta=u^\gamma g_{\gamma\beta}
Multiply both sides of the last equality by g^{\beta\delta}. The result is g^{\alpha\delta}=u^\delta. This implies that u=u^\gamma e_\gamma=g^{\alpha\gamma}e_\gamma.

So if I write something like g^{\alpha \gamma}e_{\gamma} = e^{\alpha}, then I am writing down a transformation of the basis unit vector.

Now e^{\alpha}e_\beta = \delta^{\alpha}_{\beta} is starting to make sense.

I do wonder about those one-form e^{\beta} differential objects. How do differentials enter the picture?
 
  • #24
Mazulu said:
Something like a u^{\gamma} is just a simple vector.
It's the component of a vector in a basis. What you wrote as \vec{A} = A_{1}\hat{e}_{1} + A_{2}\hat{e}_{2} + A_{3}\hat{e}_{3}, can be written as A=A^\mu e_\mu.

Mazulu said:
I keep wondering if the metric tensor g has anything to do with g as gravitational acceleration.
We still need a metric even when there's no such thing as gravity (i.e. in special relativity), but if you're asking if the choice of the symbol g was inspired by it, then I don't know, but it's possible, since a lot of differential geometry was developed after it was discovered that it was needed in general relativity.

Mazulu said:
So if I write something like g^{\alpha \gamma}e_{\gamma} = e_{\gamma}, then I am writing down a transformation of the basis unit vector.
Consider an example: If g^{\alpha\beta} denotes the components of the metric of Minkowski spacetime in an inertial coordinate system, then what you wrote down means g^{\alpha 0}e_0+g^{\alpha 1}e_1 +g^{\alpha 2}e_2+g^{\alpha 3}e_3=e_\gamma. Since this metric is diagonal, only one term survives on the left: the equation reduces to -e_0=e_\gamma when \alpha=0, and to e_\alpha=e_\gamma when \alpha=1,2,3. This can't hold for every \alpha and \gamma (for example, -e_0=e_\gamma is false for every \gamma), assuming that \{e_\alpha\}_{\alpha=0}^3 is a basis.

Mazulu said:
Now e^{\alpha}e_\beta = \delta^{\alpha}_{\beta} is starting to make sense.
It's the definition of a basis on V*. This is explained in the post I linked to earlier, and more details can be found in the first of the three posts I linked to in the end of that one.

Mazulu said:
I do wonder about those one-form e^{\beta} differential objects. How do differentials enter the picture?
For any smooth function f:U\rightarrow\mathbb R, there's a cotangent vector (df)_p\in T_pM^* for each p\in U. I think some authors would call each (df)_p a 1-form, and the map p\mapsto (df)_p a 1-form field, and that others would just call (df)_p a cotangent vector and p\mapsto(df)_p a 1-form. (df)_p is the cotangent vector defined by (df)_p(v)=v(f) for all v\in T_pM.

There are several ways to define the tangent space T_pM. (Click the last link in the post I linked to earlier for more information). When we define T_pM as a space of derivative operators, the basis vectors associated with the coordinate system x:V\rightarrow \mathbb R^n are the partial derivative operators \left.\frac{\partial}{\partial x^\mu}\right|_p (defined in that post), and the dual of this basis is \{(dx^\mu)_p\}, where x^\mu is the function that takes p\in V to (x(p))^\mu.
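
A small sketch of that picture (my own Python illustration, with SymPy): a tangent vector v acts on a function f as v(f)=v^\mu\frac{\partial f}{\partial x^\mu}, and (df)_p(v) gives the same number by definition.

```python
import sympy as sp

x0, x1 = sp.symbols('x0 x1')
f = x0**2 + sp.sin(x1)
p = {x0: 1, x1: 0}              # the point where everything is evaluated

v = [2, 3]                      # components v^mu in the basis d/dx^mu

# v acting on f: v(f) = v^mu * (partial f / partial x^mu), evaluated at p
v_of_f = sum(c * sp.diff(f, var).subs(p) for c, var in zip(v, (x0, x1)))

# (df)_p has components (partial f / partial x^mu)(p) in the dual basis {dx^mu}
df_comps = [sp.diff(f, var).subs(p) for var in (x0, x1)]
df_of_v = sum(c * comp for c, comp in zip(v, df_comps))

assert v_of_f == df_of_v == 7   # 2*(2*1) + 3*cos(0)
```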
 
  • #25
Fredrik said:
Regarding the g^{\alpha \beta} = \frac{\partial}{\partial x^\alpha} \otimes \frac{\partial}{\partial x^\beta}, it has the indices upstairs on the left, and downstairs on the right.

My mistake.
 
  • #26
Fredrik said:
For any smooth function f:U\rightarrow\mathbb R, there's a cotangent vector (df)_p\in T_pM^* for each p\in U. I think some authors would call each (df)_p a 1-form, and the map p\mapsto (df)_p a 1-form field, and that others would just call (df)_p a cotangent vector and p\mapsto(df)_p a 1-form. (df)_p is the cotangent vector defined by (df)_p(v)=v(f) for all v\in T_pM.

You said that a smooth function f:U\rightarrow \mathbb R, which means that some function f is used to map the set of objects of U into the space R. The set of objects includes cotangent vectors (df)_p\in T_p = df_1 + df_2 +df_3 which are elements of T_p M^*; T_p M is a tangent space; I guess that means it's tangent to a curve s through space-time. T_p is a vector, but I don't know what M is.
 
  • #27
I found this definition in a book by Lovelock and Rund.
A set of n quantities (A^1,...A^n) is said to constitute the components of a contravariant vector at a point P with coordinates (x^1,...,x^n) if, under transformation 4.1, these quantities transform according to the relations
\bar{A^j}=\sum^{n}_{h=1}\frac{\partial \overline{x}^j}{\partial x^h} A^h

Transformation 4.1 is \bar{x}^j=\bar{x}^j(x^h)

Oh my! Now we're talking about functions f^1,...,f^n

OK, I've got some idea that contravariant vectors are differentials. Do I know enough now to grapple with the Einstein equations?
 
  • #28
Mazulu said:
You said that a smooth function f:U\rightarrow \mathbb R, which means that some function f is used to map the set of objects of U into the space R. The set of objects includes cotangent vectors (df)_p\in T_p = df_1 + df_2 +df_3 which are elements of T_p M^*; T_p M is a tangent space; I guess that means it's tangent to a curve s through space-time. T_p is a vector, but I don't know what M is.
M is a smooth manifold (e.g. a sphere, or spacetime in SR or GR). U is an open set in M. p is a member of U. T_pM is the tangent space at p (a vector space associated with the point p). Earlier, I was talking about a vector space V. In this context, T_pM is that vector space. So think T_pM=V. The notation T_p is not defined. I suppose it could refer to the value of a vector field at p, which would make T_p a tangent vector at p, but I didn't denote any vector field by T.

(df)_p is a cotangent vector, i.e. a member of T_pM^*. The basis of T_pM^* that's dual to the basis \big\{\frac{\partial}{\partial x^\mu}\big|_p\big\} of T_pM is \{(dx^\mu)_p\}. So (df)_p can be written as (df)_p=((df)_p)_\mu (dx^\mu)_p, where
((df)_p)_\mu =(df)_p\bigg(\frac{\partial}{\partial x^\mu}\!\bigg|_p\bigg) =\frac{\partial}{\partial x^\mu}\!\bigg|_p f =\frac{\partial f(p)}{\partial x^\mu} =(f\circ x^{-1})_{,\mu}(x(p)). This holds for all p in U, so we can also write df=\frac{\partial f}{\partial x^\mu}dx^\mu. If you still haven't read the post I linked to earlier or the posts I linked to in that one, you really need to do that now.
 
  • #29
Mazulu said:
I found this definition in a book by Lovelock and Rund.
A set of n quantities (A^1,...A^n) is said to constitute the components of a contravariant vector at a point P with coordinates (x^1,...,x^n) if, under transformation 4.1, these quantities transform according to the relations
\bar{A^j}=\sum^{n}_{h=1}\frac{\partial \overline{x}^j}{\partial x^h} A^h

Transformation 4.1 is \bar{x}^j=\bar{x}^j(x^h)

Oh my! Now we're talking about functions f^1,...,f^n
I absolutely hate that definition, but unfortunately, it's very common in physics books. What irritates me the most about it is that the people who use it can't even state it right. It's not just a set of n "quantities". It's a set of n "quantities" (vectors actually, in the sense that they are members of some vector space) associated with each coordinate system. Without that piece of information, the concept of "transformation" doesn't make sense. The tensor transformation law describes how the "quantities" associated with one coordinate system are related to the corresponding "quantities" associated with another coordinate system. It's not the set of "quantities" that should be called a tensor in this definition. It's the function that associates one such set with each coordinate system.
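
A concrete sketch of that transformation law (my own example, assuming SymPy, for the change from Cartesian (x,y) to polar (r,\theta)): the "quantities" A^h associated with one coordinate system determine the ones associated with the other via the Jacobian \partial\bar{x}^j/\partial x^h.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
r = sp.sqrt(x**2 + y**2)
theta = sp.atan2(y, x)

# Jacobian matrix d(xbar^j)/d(x^h) for the map (x, y) -> (r, theta)
J = sp.Matrix([[sp.diff(r, x),     sp.diff(r, y)],
               [sp.diff(theta, x), sp.diff(theta, y)]])

A = sp.Matrix([1, 0])                 # components A^h in Cartesian coordinates
A_bar = (J * A).subs({x: 1, y: 1})    # transformed components at the point (1, 1)

print(sp.simplify(A_bar))             # Matrix([[sqrt(2)/2], [-1/2]])
```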

However, I would advise you to ignore this definition until you fully understand the modern definitions. (See this post).

Mazulu said:
OK, I've got some idea that contravariant vectors are differentials. Do I know enough now to grapple with the Einstein equations?
I would say that you're at least a month of pretty hard work away from that.
 
  • #30
Hi Fredrik,
I was looking at the link you provided about manifolds. You said,
The metric at p is a function g:T_pM\times T_pM\rightarrow\mathbb R that's linear in both variables and satisfies g(u,v)=g(v,u) and one more thing that I'll mention in a minute.
I recognise g as the metric tensor, but I thought g(u,v) only meant that g(u,v) is a function of u and v; such a statement is very general. So why do we worry about a strict rule that g(u,v)=g(v,u), which implies that sometimes this isn't true? What am I misunderstanding?
 
