Stupid question on index lowering and raising using metric

  • Context: Undergrad 
  • Thread starter: smodak
  • Tags: Index, Metric, Stupid

Discussion Overview

The discussion revolves around the use of the metric tensor to raise and lower indices on tensors, particularly focusing on rank one tensors. Participants explore the relationships between covariant and contravariant tensors and the conditions under which certain statements about these relationships hold true.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants assert that the relation ##\vec{a_i}\centerdot \vec{a^j}=\delta_i^j## is only valid for basis vectors in an orthonormal basis, while ##g_{ij}=\vec{a_i}\centerdot\vec{a_j}## holds for any coordinate basis.
  • Others argue that the statement ##\vec{a_i}=g_{ij}\vec{a^j}## cannot be derived from the previous relations as they do not hold in general.
  • A participant clarifies that the first relation is between a tangent vector basis and the corresponding covector basis, indicating that using different covector bases could lead to confusion.
  • Some participants question whether the discussion pertains specifically to curvilinear coordinates in Euclidean space, suggesting that the latter equation may not make sense outside this context.
  • One participant explains that the metric tensor is a type (0,2) tensor that, when contracted with a type (1,0) tensor, produces a type (0,1) tensor, establishing a correspondence between tangent and dual vectors.
  • Another participant acknowledges their use of older terminology and confirms their understanding of the relationship between covariant and contravariant vectors as defined by the metric.
  • Some participants note that the metric must be symmetric and positive definite (or non-degenerate) for the properties discussed to hold.
  • One participant emphasizes that tensors are invariant and that the components of tensors transform according to the basis used, with the metric's components transforming covariantly.

Areas of Agreement / Disagreement

There is no consensus on the validity of the initial relations presented by the original poster, with multiple competing views on their applicability and the conditions under which they hold. The discussion remains unresolved regarding the clarity of the original question and the implications of the metric tensor's properties.

Contextual Notes

Participants express uncertainty about the assumptions underlying the original question and the specific context in which the metric tensor's properties apply, particularly regarding coordinate systems and the nature of the basis vectors involved.

smodak
Original Question (Please ignore this):

I knew this when I read about it the first time a while back but can't put two and two together anymore. :)

I understand ##\vec{a_i}\centerdot \vec{a^j}=\delta_i^j## and ##g_{ij}=\vec{a_i}\centerdot\vec{a_j}##

How does one get from there to ##\vec{a_i}=g_{ij}\vec{a^j}##?

Modified Question (Please answer this):

How can you prove that the metric tensor can be used to raise and lower indices on any tensor? How does the metric help transform between covariant and contravariant tensors? Proving this for rank-one tensors should be good enough. How does the metric perform the conversion ##T_i = g_{ij} T^j##?
 
Last edited:
smodak said:
I understand ##\vec{a_i}\centerdot \vec{a^j}=\delta_i^j## and ##g_{ij}=\vec{a_i}\centerdot\vec{a_j}##

Neither of these is true in general. The first is true only for basis vectors in an orthonormal basis. The second is true only for basis vectors, but it holds in any coordinate basis (not necessarily orthonormal).

smodak said:
How does one get from there to ##\vec{a_i}=g_{ij}\vec{a^j}##?

You can't, because the latter statement is true for all vectors, whereas the two previous statements are not (see above).
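A quick numerical illustration of these points (the skewed basis below is a made-up example; numpy does the linear algebra):

```python
import numpy as np

# Columns of A are made-up, non-orthonormal basis vectors in the
# Euclidean plane: a_1 = (1, 0), a_2 = (1, 1).
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# g_ij = a_i . a_j holds in any coordinate basis:
g = A.T @ A           # [[1, 1], [1, 2]] -- not the identity

# So a_i . a_j != delta_ij here; the Kronecker-delta relation instead
# pairs the basis with its DUAL basis a^j (rows of A^{-1}):
A_inv = np.linalg.inv(A)
print(A_inv @ A)      # identity: a^j . a_i = delta^j_i
```

The delta relation comes out only when one factor is the dual basis; among the ##\vec a_i## themselves one gets the (generally non-trivial) metric components.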
 
PeterDonis said:
The first is true only for basis vectors in an orthonormal basis.
The first relation is a relation between a tangent vector basis and the corresponding covector basis. Of course you can choose to use a different covector basis, but using ##\vec a^i## to denote it would then be strongly misleading in my opinion. In other words, I believe that by ##\vec a_i## and ##\vec a^i##, the OP intends the basis vectors.
 
Orodruin said:
The first relation is a relation between a tangent vector basis and the corresponding covector basis. Of course you can choose to use a different covector basis, but using ##\vec a^i## to denote it would then be strongly misleading in my opinion. In other words, I believe that by ##\vec a_i## and ##\vec a^i##, the OP intends the basis vectors.
Correct. I do.
 
PeterDonis said:
You can't, because the latter statement is true for all vectors
Ok. How do you prove that it is true?
 
Am I correct in assuming that this is just for a coordinate basis in curvilinear coordinates on a Euclidean space? The latter equation certainly makes no sense at all if this is not the case.
(@PeterDonis : From what I gather, he is referring to the actual basis vectors, not to the vector components. The second and third relations then only make sense if you can directly identify the tangent vector space with the cotangent vector space.)
 
Orodruin said:
Am I correct in assuming that this is just for a coordinate basis in curvilinear coordinates on a Euclidean space? The latter equation certainly makes no sense at all if this is not the case.
(@PeterDonis : From what I gather, he is referring to the actual basis vectors, not to the vector components. The second and third relations then only make sense if you can directly identify the tangent vector space with the cotangent vector space.)
I see that my question has created a lot of confusion. It is my fault as I have not clearly posed the question. Please ignore my original question. I want to ask the question differently:

How can you prove that the metric tensor can be used to raise and lower indices on any tensor? How does the metric help transform between covariant and contravariant tensors? Proving this for rank-one tensors should be good enough. How does the metric perform the conversion ##T_i = g_{ij} T^j##?
 
smodak said:
How can you prove that the metric tensor can be used to raise and lower indices on any tensor?
What do you mean by this? The metric is a type (0,2) tensor, so if you contract it with a type (1,0) tensor, i.e., a tangent vector, you obtain a type (0,1) tensor, i.e., a dual vector, by construction. The fact that the metric is invertible establishes a 1-to-1 correspondence between tangent and dual vectors, and so we call them by the same symbol, but with the index up/down.
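As a concrete sketch of this contraction (the Minkowski metric and the vector components below are illustrative choices):

```python
import numpy as np

# Illustrative non-degenerate type (0,2) tensor: Minkowski metric, -+++.
g = np.diag([-1.0, 1.0, 1.0, 1.0])

v = np.array([2.0, 1.0, 0.0, 3.0])   # components v^j of a (1,0) tensor

# Contracting g with v produces a (0,1) tensor: v_i = g_ij v^j
v_lower = g @ v                      # [-2., 1., 0., 3.]

# Invertibility makes the correspondence one-to-one:
g_inv = np.linalg.inv(g)             # components g^ij
print(g_inv @ v_lower)               # recovers v^j
```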
 
Orodruin said:
What do you mean by this? The metric is a type (0,2) tensor, so if you contract it with a type (1,0) tensor, i.e., a tangent vector, you obtain a type (0,1) tensor, i.e., a dual vector, by construction. The fact that the metric is invertible establishes a 1-to-1 correspondence between tangent and dual vectors, and so we call them by the same symbol, but with the index up/down.

I think I understand what you are saying. I am using the old (covariant/contravariant) terminology. Basically what you are saying is that given an arbitrary contravariant vector ##T^j##, a covariant vector ##T_i## is defined as ##T_i = g_{ij} T^j##. Also, ##g^{ij} T_i = g^{ij} g_{ik} T^k = \delta^j_k T^k = T^j##.

I guess I was just trying to prove the definition. So, the question indeed is a stupid question. :)
 
  • #10
Orodruin said:
The first relation is a relation between a tangent vector basis and the corresponding covector basis.

Yes, you're right, I was being sloppy.

Orodruin said:
From what I gather, he is referring to the actual basis vectors, not to the vector components.

Yes, agreed.

(I see that the OP has actually retracted this original question, but these points are still worth clarifying for other readers of the thread, so I'm glad you did.)
 
  • #11
smodak said:
I think I understand what you are saying. I am using the old (covariant/contravariant) terminology. Basically what you are saying is that given an arbitrary contravariant vector ##T^j##, a covariant vector ##T_i## is defined as ##T_i = g_{ij} T^j##. Also, ##g^{ij} T_i = g^{ij} g_{ik} T^k = \delta^j_k T^k = T^j##.

I guess I was just trying to prove the definition. So, the question indeed is a stupid question. :)
Yes. The metric is by definition a mapping from vector spaces to their duals.
 
  • #12
haushofer said:
Yes. The metric is by definition a mapping from vector spaces to their duals.
Just to be clear, this is not the only requirement on the metric. That alone is the same as saying it is a (0,2) tensor. You additionally need it to be symmetric and positive definite (or non-degenerate in the case of a pseudo-metric).
 
  • #13
Of course.

(non-degeneracy is not even needed, but that's a technicality you only encounter in e.g. Newton-Cartan theory or theories with Carroll-symmetries, i.e. limits of theories which do have non-degenerate metrics. I guess that depends on nomenclature)
 
  • #14
smodak said:
How can you prove that the metric tensor can be used to raise and lower indices on any tensor? How does the metric help transform between covariant and contravariant tensors? Proving this for rank one tensors should be good enough. How does the metric perform the conversion ##T_{i}=g_{ij} T{^j}##?
First of all, it is important to note that tensors are invariant. The components of tensors in a (pseudo-)metric space are co- or contravariant, as are the basis vectors used to define them. In the following, the Einstein summation convention is always implied.

So let ##V## be a finite-dimensional real vector space with a non-degenerate symmetric bilinear form ##g:V \times V \rightarrow \mathbb{R}## (a pseudo-scalar product). Further, let ##\boldsymbol{b}_j## be a basis of ##V##. Then for any other basis ##\boldsymbol{b}_j'## we have a transformation matrix ##{T^j}_k## such that
$$\boldsymbol{b}_k={T^j}_k \boldsymbol{b}_j'.$$
To derive how the vector components transform we write
$$\boldsymbol{x}=x^k \boldsymbol{b}_k={T^j}_k x^k \boldsymbol{b}_j' \; \Rightarrow \; x'{}^j={T^j}_k x^k.$$
One says the vector components transform contragrediently to the basis vectors. The basis vectors are said to transform covariantly and the vector components contravariantly.

Now let us look at the pseudo-scalar product. Its components are defined by
$$g_{jk}=g(\boldsymbol{b}_j,\boldsymbol{b}_k).$$
Now let's see how these transform. We have
$$g_{jk}=g(\boldsymbol{b}_j,\boldsymbol{b}_k)=g({T^l}_j \boldsymbol{b}_l',{T^m}_k \boldsymbol{b}_m')={T^l}_j {T^m}_k g_{lm}',$$
i.e. the components transform covariantly.

Since ##g## is non-degenerate the matrix ##(g_{jk})## is invertible, and we call its inverse ##(g^{jk})##. Then we can define new basis vectors
$$\boldsymbol{b}^j=g^{jk} \boldsymbol{b}_k.$$
Now you can express any vector also with respect to this new basis
$$\boldsymbol{x}=x_j \boldsymbol{b}^j = x_j g^{jk} \boldsymbol{b}_k =x^k \boldsymbol{b}_k \; \Rightarrow \; x^k=g^{jk} x_j$$
and of course also the other way around
$$x_j=g_{jk} x^k.$$
You should now prove that the ##\boldsymbol{b}^j## transform contravariantly and the ##x_j## covariantly under a change of basis. All this is not so difficult if one keeps the bookkeeping of all these upper and lower indices consistent!
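The reciprocal-basis construction above can also be checked numerically; a minimal sketch (the basis vectors below are made up, and the Euclidean dot product plays the role of ##g##):

```python
import numpy as np

# Made-up basis vectors b_1 = (2, 0), b_2 = (1, 1) as columns of B.
B = np.array([[2.0, 1.0],
              [0.0, 1.0]])

g = B.T @ B                  # g_jk = g(b_j, b_k), Euclidean case
g_inv = np.linalg.inv(g)     # g^jk

# Reciprocal basis b^j = g^{jk} b_k (columns of B_dual)
B_dual = B @ g_inv
print(B_dual.T @ B)          # identity: b^j . b_k = delta^j_k

# Components: x = x^k b_k, with covariant components x_j = g_jk x^k
x_up = np.array([3.0, -1.0])
x_down = g @ x_up

# The same geometric vector either way: x_j b^j == x^k b_k
print(np.allclose(B @ x_up, B_dual @ x_down))   # True
```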
 
  • #15
vanhees71 said:
First of all, it is important to note that tensors are invariant. The components of tensors in a (pseudo-)metric space are co- or contravariant, as are the basis vectors used to define them. In the following, the Einstein summation convention is always implied.

So let ##V## be a finite-dimensional real vector space with a non-degenerate symmetric bilinear form ##g:V \times V \rightarrow \mathbb{R}## (a pseudo-scalar product). Further, let ##\boldsymbol{b}_j## be a basis of ##V##. Then for any other basis ##\boldsymbol{b}_j'## we have a transformation matrix ##{T^j}_k## such that
$$\boldsymbol{b}_k={T^j}_k \boldsymbol{b}_j'.$$
To derive how the vector components transform we write
$$\boldsymbol{x}=x^k \boldsymbol{b}_k={T^j}_k x^k \boldsymbol{b}_j' \; \Rightarrow \; x'{}^j={T^j}_k x^k.$$
One says the vector components transform contragrediently to the basis vectors. The basis vectors are said to transform covariantly and the vector components contravariantly.

Now let us look at the pseudo-scalar product. Its components are defined by
$$g_{jk}=g(\boldsymbol{b}_j,\boldsymbol{b}_k).$$
Now let's see how these transform. We have
$$g_{jk}=g(\boldsymbol{b}_j,\boldsymbol{b}_k)=g({T^l}_j \boldsymbol{b}_l',{T^m}_k \boldsymbol{b}_m')={T^l}_j {T^m}_k g_{lm}',$$
i.e. the components transform covariantly.

Since ##g## is non-degenerate the matrix ##(g_{jk})## is invertible, and we call its inverse ##(g^{jk})##. Then we can define new basis vectors
$$\boldsymbol{b}^j=g^{jk} \boldsymbol{b}_k.$$
Now you can express any vector also with respect to this new basis
$$\boldsymbol{x}=x_j \boldsymbol{b}^j = x_j g^{jk} \boldsymbol{b}_k =x^k \boldsymbol{b}_k \; \Rightarrow \; x^k=g^{jk} x_j$$
and of course also the other way around
$$x_j=g_{jk} x^k.$$
You should now prove that the ##\boldsymbol{b}^j## transform contravariantly and the ##x_j## covariantly under a change of basis. All this is not so difficult if one keeps the bookkeeping of all these upper and lower indices consistent!
Wow. Fantastic Explanation. Many Thanks! I am reading a book by Pavel Grinfeld and he tries to explain similarly albeit in a different language.
 
Last edited:
  • #16
haushofer said:
Of course.

(non-degeneracy is not even needed, but that's a technicality you only encounter in e.g. Newton-Cartan theory or theories with Carroll-symmetries, i.e. limits of theories which do have non-degenerate metrics. I guess that depends on nomenclature)
Sorry I have no idea what non-degeneracy means or why it is required in this context; help? I also do not know what Newton-Cartan theory is or what Carroll-symmetries are. But I would like to learn. Could you point me to some resources?
 
Last edited:
  • #17
smodak said:
Sorry I have no idea what non-degeneracy means or why it is required in this context; help? I also do not know what Newton-Cartan theory is or what Carroll-symmetries are. But I would like to learn. Could you point me to some resources?
I'm not sure if it's a good idea to dive into Newton-Cartan theory if you're just learning differential geometry and GR, so I apologize for bringing it up. But if you do want to learn about it, you can check my Insights article and the references therein,

https://www.physicsforums.com/insights/revival-Newton-cartan-theory/

"Degenerate" means "having one or more zero eigenvalues".
 