What is the Metric Tensor and How is it Used in Tensors?

  • Thread starter Mazulu
  • Start date
  • Tags
    Tensors
In summary, Wolfram made a mistake, and Schutz's book is a good place to start learning about tensors.
  • #36
Fredrik said:
OK, let's try to be really accurate here. A metric on a smooth manifold M isn't a tensor, it's a global tensor field of type (0,2). That means that it's a function that takes each point in the manifold to a tensor of type (0,2) at that point. I will denote the tensor that g associates with the point p by gp, and I will call it "the metric at p".

For each p in M, gp is a (0,2) tensor at p. Each one of these tensors (one for each point p in the manifold) is a bilinear, symmetric, non-degenerate function from TpM×TpM into ℝ.

Bilinear means that for each [itex]u\in T_pM[/itex], the maps [itex]v\mapsto g_p(u,v)[/itex] and [itex]v\mapsto g_p(v,u)[/itex] are both linear.

Symmetric means that for all [itex]u,v\in T_pM[/itex], we have [itex]g_p(u,v)=g_p(v,u)[/itex].

Non-degenerate means that the map [itex]u\mapsto g_p(u,\cdot)[/itex] is a bijection. (Here [itex]g_p(u,\cdot)[/itex] denotes the map that takes v to [itex]g_p(u,v)[/itex].)

Compare this with the definition of an inner product on TpM. An inner product on TpM is a bilinear, symmetric, positive definite function [itex]s:T_pM\times T_pM\to\mathbb R[/itex]. Positive definite means two things: 1. For all [itex]u\in T_pM[/itex], we have [itex]s(u,u)\geq 0[/itex]. 2. For all [itex]u\in T_pM[/itex], we have [itex]s(u,u)=0[/itex] only if u=0.

As you can see, an inner product on TpM has properties very similar to the metric at p, but the requirements are not quite the same. The requirements on inner products do however imply that inner products are non-degenerate. This means that a global (0,2) tensor field that assigns an inner product gp to each p in M would be a metric. Such a metric is called a Riemannian metric. A smooth manifold with a Riemannian metric is called a Riemannian manifold. Spacetime in GR and SR is not a Riemannian manifold, because there are (for each p) lots of non-zero vectors u such that [itex]g_p(u,u)=0[/itex], and even lots of vectors u such that [itex]g_p(u,u)<0[/itex].

(In case you're not sure, "map" and "function" mean exactly the same thing).
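
To make these properties concrete, here's a minimal numerical sketch (assuming Python with numpy; the Minkowski metric diag(-1,1,1,1) just stands in for the value of g at some point p):
[code]
import numpy as np

# Assumption: the Minkowski metric stands in for g_p at some point p.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

def g_p(u, v):
    # g_p(u, v) = u^T eta v, a bilinear map into the reals
    return u @ eta @ v

u, v, w = np.random.rand(3, 4)
a = 2.5

# Bilinear: linear in each argument separately
assert np.isclose(g_p(a*u + w, v), a*g_p(u, v) + g_p(w, v))
assert np.isclose(g_p(u, a*v + w), a*g_p(u, v) + g_p(u, w))

# Symmetric: g_p(u, v) = g_p(v, u)
assert np.isclose(g_p(u, v), g_p(v, u))

# Non-degenerate: u -> g_p(u, .) is a bijection iff the matrix is invertible
assert np.linalg.det(eta) != 0

# But not positive definite: this non-zero null vector has g_p(u, u) = 0
null_vector = np.array([1.0, 1.0, 0.0, 0.0])
assert np.isclose(g_p(null_vector, null_vector), 0.0)
[/code]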

Now I understand why they called Einstein a genius. There is a lot to digest here. My break is over and I don't have a good question; sorry about that. I'll take a look at this when I get home.
 
  • #37
Actually, much of tensor analysis and differential geometry was done by Riemann (although I did hear a story that Gauss did a lot in differential geometry but didn't publish it). Einstein was taught a lot of differential geometry by Levi-Civita, I think.
 
  • #38
Mazulu said:
There is a lot to digest here.
Yes, this stuff isn't easy. When you understand metrics and tensor fields in general, you're off to a good start, but to really understand Einstein's equation, you also need to understand connections, covariant derivatives, parallel transport, geodesics and curvature. This is much harder.

Fredrik said:
Non-degenerate means that the map [itex]u\mapsto g_p(u,\cdot)[/itex] is a bijection. (Here [itex]g_p(u,\cdot)[/itex] denotes the map that takes v to [itex]g_p(u,v)[/itex].)
Instead of "bijection", I should have said "bijection onto [itex]T_pM^*[/itex]" just to be more clear.
 
  • #39
In GR, a metric tensor is a matrix that looks like this:
http://en.wikipedia.org/wiki/File:Metrictensor.svg
I can easily imagine this metric tensor operating on a vector to yield a new vector.

But in mathematics, the metric tensor has these properties. So a metric tensor is:
1. a function,
2. defined on a manifold,
3. which takes as inputs a pair of tangent vectors,
4. and spits out a scalar,
5. which is their dot product,
6. where the dot product is an inner product,
7. and it must be symmetric: g(u,v)=g(v,u).

But then you said that a metric (not a metric tensor, just a metric) is really a global tensor field. My break is over, sorry. I just want to understand what [itex]g^{\alpha \beta}[/itex] and [itex]g_{\alpha \beta}[/itex] really look like when the tensor math is stripped away. I'm expecting either a matrix, or an inner product, or both. When I think of the metric tensor, I think of a matrix doing something to a vector. But I can't picture a metric operating on a vector in such a way that you get an inner product or g(u,v).
 
  • #40
A metric is what I said, a tensor field. It assigns a tensor at p to each p in M. I called it "the metric at p", but perhaps I should have called it "the value of the metric at p".

It's possible to define these terms differently. You could, e.g. define a metric to be a bilinear, symmetric, non-degenerate real-valued function, and use a term like "metric tensor field" for what I called a metric.

The components of a tensor at p, in a given basis for [itex]T_pM[/itex], are the numbers you get when you have the tensor act on the basis vectors. In this case, we have [tex]g_{\mu\nu}(p)=g_p(e_\mu,e_\nu).[/tex] I wouldn't say that the matrix of components is [itex]g_p[/itex], but since [itex]g_p[/itex] is completely determined by those numbers and vice versa, it's only a very minor abuse of the terminology to do so.
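
To see that definition in action, here's a small sketch (assuming Python with numpy) that recovers the component matrix of a bilinear map by feeding it basis vectors:
[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # example metric at p
g_p = lambda u, v: u @ eta @ v            # the bilinear map itself

# The components are the values of g_p on pairs of basis vectors:
# g_{mu nu}(p) = g_p(e_mu, e_nu)
e = np.eye(4)                             # e[mu] is the mu-th basis vector
components = np.array([[g_p(e[m], e[n]) for n in range(4)] for m in range(4)])
assert np.allclose(components, eta)
[/code]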

Note that since [itex]g_p[/itex] is bilinear, we have [tex]g_p(u,v)=g_p(u^\mu e_\mu,v^\nu e_\nu)=u^\mu v^\nu g_p(e_\mu,e_\nu)=u^\mu v^\nu g_{\mu\nu}(p).[/tex] If you compare this to the definition of matrix multiplication: [tex](AB)^i_j=A^i_k B^k_j[/tex] (row index upstairs, column index downstairs) you will see that [itex]u^\mu v^\nu g_{\mu\nu}(p)[/itex] is equal to the only component of the 1×1 matrix
[tex]\begin{pmatrix}u^0 & u^1 & u^2 & u^3\end{pmatrix}
\begin{pmatrix}
g_{00}(p) & g_{01}(p) & g_{02}(p) & g_{03}(p)\\
g_{10}(p) & g_{11}(p) & g_{12}(p) & g_{13}(p)\\
g_{20}(p) & g_{21}(p) & g_{22}(p) & g_{23}(p)\\
g_{30}(p) & g_{31}(p) & g_{32}(p) & g_{33}(p)
\end{pmatrix}
\begin{pmatrix}v^0\\ v^1\\ v^2\\ v^3\end{pmatrix}
[/tex] So if we allow ourselves to use the same notation for a vector/tensor and its matrix of components, and put an equality sign between a real number r and a 1×1 matrix (r), we have [tex]g_p(u,v)=u^Tg_p v.[/tex]
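
As a numerical sanity check that the component sum and the matrix product agree (again just a sketch assuming numpy, with an arbitrary symmetric matrix standing in for the component matrix):
[code]
import numpy as np

# An arbitrary symmetric invertible matrix standing in for (g_{mu nu}(p))
A = np.random.rand(4, 4)
G = A + A.T + 5.0*np.eye(4)               # symmetric by construction

u = np.random.rand(4)
v = np.random.rand(4)

# Component form: sum over mu and nu of u^mu v^nu g_{mu nu}(p)
component_sum = sum(u[m]*v[n]*G[m, n] for m in range(4) for n in range(4))

# Matrix form: the single entry of the 1x1 matrix u^T G v
matrix_form = u @ G @ v

assert np.isclose(component_sum, matrix_form)
[/code]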
 
  • #41
Fredrik said:
Note that since [itex]g_p[/itex] is linear, we have [tex]g_p(u,v)=g_p(u^\mu e_\mu,v^\nu e_\nu)=u^\mu v^\nu g_p(e_\mu,e_\nu)=u^\mu v^\nu g_{\mu\nu}(p).[/tex]

This equation is spectacular. It's like you took two vectors u and v, and you pulled their coefficients right through the metric tensor gp. In doing so, the metric tensor now only acts on the two basis vectors eμ and eν. But it looks like gp became a different metric tensor, gμν.
 
  • #42
Mazulu said:
This equation is spectacular. It's like you took two vectors u and v, and you pulled their coefficients right through the metric tensor gp.
Yes, that's what bilinearity means. (Note that I should have said "bilinear" or "linear in each variable" where I said "linear". I corrected that in my post after you quoted it).

Mazulu said:
But it looks like gp became a different metric tensor, gμν.
I don't quite understand this comment. I'm just using the definition of "components" of [itex]g_p[/itex] in the basis [itex]\{e_\mu\}[/itex]. Perhaps I should write them as [itex](g_p)_{\mu\nu}[/itex], but I prefer to write them as [itex]g_{\mu\nu}(p)[/itex].
 
  • #43
Note also that in that formula there is an implied summation over all mu and nu.
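
In code, that implied double sum is exactly what a routine like numpy's einsum computes (a sketch, assuming numpy):
[code]
import numpy as np

G = np.diag([-1.0, 1.0, 1.0, 1.0])        # example g_{mu nu}(p)
u = np.random.rand(4)
v = np.random.rand(4)

# Repeated indices are summed over, exactly as in the formula
result = np.einsum('m,n,mn->', u, v, G)   # u^mu v^nu g_{mu nu}(p)
assert np.isclose(result, u @ G @ v)
[/code]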
 
  • #44
Fredrik said:
Yes, that's what bilinearity means. (Note that I should have said "bilinear" or "linear in each variable" where I said "linear". I corrected that in my post after you quoted it).
It's bilinear because there are two input variables, u and v? Thus, g(u,v)?

I don't quite understand this comment. I'm just using the definition of "components" of [itex]g_p[/itex] in the basis [itex]\{e_\mu\}[/itex]. Perhaps I should write them as [itex](g_p)_{\mu\nu}[/itex], but I prefer to write them as [itex]g_{\mu\nu}(p)[/itex].
I vaguely remember that the metric tensor g is supposed to operate on one basis to get another basis. If that is true, then g won't change if I pull the vector components out.

I wouldn't have noticed the distinction between [itex](g_p)_{\mu\nu}[/itex] and [itex]g_{\mu\nu}(p)[/itex]. They are both metric tensors. They both have indices μ and ν. [itex](g_p)_{\mu\nu}[/itex] refers to point p as a subscript. [itex]g_{\mu\nu}(p)[/itex] is a function of p.
 
  • #45
Bilinear means that the function g is linear in each argument. I.e. g(u+v,w)=g(u,w)+g(v,w)
and g(u,v+w)=g(u,v)+g(u,w), and g(a*u,v)=g(u,a*v)=a*g(u,v) where a is a number.
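
Anyone who wants to can check those identities numerically; here's a sketch (assuming numpy, with the Minkowski matrix as an example g):
[code]
import numpy as np

G = np.diag([-1.0, 1.0, 1.0, 1.0])        # example metric components
g = lambda x, y: x @ G @ y                 # g(x, y) = x^T G y

u, v, w = np.random.rand(3, 4)
a = 3.7

assert np.isclose(g(u + v, w), g(u, w) + g(v, w))  # additive in the first slot
assert np.isclose(g(u, v + w), g(u, v) + g(u, w))  # additive in the second slot
assert np.isclose(g(a*u, v), a*g(u, v))            # scalars pull out of slot one
assert np.isclose(g(u, a*v), a*g(u, v))            # scalars pull out of slot two
[/code]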
 
  • #46
Matterwave said:
Bilinear means that the function g is linear in each argument. I.e. g(u+v,w)=g(u,w)+g(v,w)
and g(u,v+w)=g(u,v)+g(u,w), and g(a*u,v)=g(u,a*v)=a*g(u,v) where a is a number.
That makes sense; I'm tracking. It's amazing that bilinearity works, not just for scalars like "a", but for vectors like u^μ as well. I'm just referring to the recent quote,
[tex]g_p(u,v)=g_p(u^\mu e_\mu,v^\nu e_\nu)=u^\mu v^\nu g_p(e_\mu,e_\nu)=u^\mu v^\nu g_{\mu\nu}(p).[/tex]
 
  • #47
Fredrik said:
[tex]\begin{pmatrix}u^0 & u^1 & u^2 & u^3\end{pmatrix}
\begin{pmatrix}
g_{00}(p) & g_{01}(p) & g_{02}(p) & g_{03}(p)\\
g_{10}(p) & g_{11}(p) & g_{12}(p) & g_{13}(p)\\
g_{20}(p) & g_{21}(p) & g_{22}(p) & g_{23}(p)\\
g_{30}(p) & g_{31}(p) & g_{32}(p) & g_{33}(p)
\end{pmatrix}
\begin{pmatrix}v^0\\ v^1\\ v^2\\ v^3\end{pmatrix}
[/tex]
A metric tensor is supposed to function as an inner product of two vectors. Is this what that looks like?
 
  • #48
u^mu as used by Fredrik is not a vector, it's a vector component (the mu'th component of the vector u). You cannot simply pull vectors out, that wouldn't make sense.

The problem is with notation. A lot of authors use u^mu as the notation for a vector. In that case, it's hard to distinguish between vectors and vector components, but it's generally more convenient than always sticking with correct vector notation.
 
  • #49
Matterwave said:
u^mu as used by Fredrik is not a vector, it's a vector component (the mu'th component of the vector u). You cannot simply pull vectors out, that wouldn't make sense.

The problem is with notation. A lot of authors use u^mu as the notation for a vector. In that case, it's hard to distinguish between vectors and vector components, but it's generally more convenient than always sticking with correct vector notation.

I totally agree. I was going to tell you just what you told me, but I had to get back to work.

Hey, guess what! I just got my book, written by Bernard Schutz, called A First Course in General Relativity. In just a few months, I'll be able to build my own black hole! Hurray!:biggrin:
 
  • #50
Mazulu said:
It's bilinear because there are two input variables, u and v? Thus, g(u,v)?
Matterwave answered this.

Mazulu said:
I vaguely remember that the metric tensor g is supposed to operate on one basis to get another basis.
This is wrong. Maybe you're thinking of the relationship between a basis for [itex]T_pM[/itex] and its dual basis, which is a basis for [itex]T_pM^*[/itex]. This was covered in detail earlier in this thread. In post #17, I showed that [itex]g(g^{\alpha\gamma}e_\gamma,\cdot)=e^\alpha[/itex]. The metric is normally not involved in a change of basis. There's a basis associated with each coordinate system, so if you just choose to use another coordinate system, that changes the basis.
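
Numerically, that dual-basis relation just says that the matrix of components [itex]g^{\alpha\gamma}[/itex] is the inverse of the matrix of [itex]g_{\gamma\mu}[/itex] (a quick sketch assuming numpy):
[code]
import numpy as np

G = np.diag([-2.0, 1.0, 3.0, 1.0])   # example g_{gamma mu}, non-degenerate
G_inv = np.linalg.inv(G)             # components g^{alpha gamma}

# g(g^{alpha gamma} e_gamma, e_mu) = g^{alpha gamma} g_{gamma mu} = delta^alpha_mu,
# so the covector g(g^{alpha gamma} e_gamma, .) is the dual basis covector e^alpha.
assert np.allclose(G_inv @ G, np.eye(4))
[/code]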

If [itex]\{e'_\mu\}[/itex] is another basis for [itex]T_pM[/itex], then there's a matrix [itex]M[/itex] such that [itex]e'_\mu=M^\nu_\mu e_\nu[/itex]. So the components of [itex]g_p[/itex] in that basis are [tex]g'_{\mu\nu}(p)=g_p(e'_\mu,e'_\nu)=g_p(M_\mu^\rho e_\rho,M_\nu^\sigma e_\sigma)=M_\mu^\rho M_\nu^\sigma g_p(e_\rho,e_\sigma)=M_\mu^\rho M_\nu^\sigma g_{\rho\sigma}(p)[/tex] If we denote the matrix of components of [itex]g_p[/itex] in these two bases by [itex]g_p[/itex] and [itex]g_p'[/itex], then this result can also be written as [tex]g_p'=M^T g_p M.[/tex]
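A quick numerical check of that transformation law (a sketch assuming numpy; any invertible M will do):
[code]
import numpy as np

G = np.diag([-1.0, 1.0, 1.0, 1.0])        # components of g_p in the basis {e_mu}

# A change-of-basis matrix M with e'_mu = M^nu_mu e_nu (any invertible M works)
M = np.random.rand(4, 4) + 4.0*np.eye(4)
assert abs(np.linalg.det(M)) > 1e-12

# Index form: g'_{mu nu} = M^rho_mu M^sigma_nu g_{rho sigma}
G_prime_index = np.einsum('rm,sn,rs->mn', M, M, G)

# Matrix form: g' = M^T g M
G_prime_matrix = M.T @ G @ M

assert np.allclose(G_prime_index, G_prime_matrix)
[/code]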
Mazulu said:
A metric tensor is supposed to function as an inner product of two vectors. Is this what that looks like?
I'm not sure what you mean by "looks like", but for any inner product on a finite-dimensional vector space over ℝ, there's a symmetric matrix S such that [itex]\langle x,y\rangle=x^TSy[/itex]. However, not every symmetric matrix S defines an inner product this way. I think it also needs to be such that there exists another invertible symmetric matrix T such that [itex]S=T^2[/itex]. This requirement is certainly sufficient, but I haven't thought about it enough to be sure that it's necessary.
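
For what it's worth, a concrete way to test the condition numerically is positive definiteness: a symmetric S defines an inner product via [itex]x^TSy[/itex] exactly when all its eigenvalues are positive. A sketch (assuming numpy):
[code]
import numpy as np

def defines_inner_product(S):
    # x^T S y is an inner product iff S is symmetric with all eigenvalues > 0
    return np.allclose(S, S.T) and bool(np.all(np.linalg.eigvalsh(S) > 0))

print(defines_inner_product(np.eye(4)))                        # True: Euclidean dot product
print(defines_inner_product(np.diag([-1.0, 1.0, 1.0, 1.0])))   # False: Minkowski is not positive definite
[/code]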
 
  • #51
I've been studying vectors, vector algebra and basis vectors from the Bernard Schutz book I bought. Last night, I had homemade pizza for dinner, and leftovers for breakfast; so you ask, what does this have to do with tensor calculus? Well, this morning I was looking at an equation in the book,
[itex]\Lambda^{\bar\beta}{}_\alpha=\Lambda^{\bar\beta}{}_\alpha(v)[/itex].

But there was an extra prime in the equation that didn't make sense. That is, until I noticed that it wasn't a prime at all. It was a tiny piece of charred crust that had broken off from the pizza and landed right on the equation.
 
  • #52
Yeah, pizzas are making it much more difficult to learn tensors. :smile:
 
  • #53
I happen to write with a pen, which is contrary to what my kindergarten teacher told me about 37 years ago. She said that math is done with a pencil so you can erase if you make a mistake. I don't remember her name, but I do know that she didn't know tensors. Erasers are a significant source of primes, hats, bars, and other tiny little marks that can mess with our tensors.
 
