Commutation between covariant derivative and metric

In summary: it is known that the covariant derivative of the metric vanishes, and the thread demonstrates that the covariant derivative of a vector with a lowered index equals the contraction of the metric with the covariant derivative of the vector, i.e. ##\nabla_i A_k = g_{kn}\nabla_i A^n##.
  • #1
Jufa
TL;DR Summary
I am struggling to prove the following identity.
First, we note that it is known that the covariant derivative of the metric vanishes, i.e. ##\nabla_i g_{mn} = 0##.
Now I want to prove the following:

$$ \nabla_i A_k = g_{kn}\nabla_i A^n$$

The demonstration I have encountered uses the Leibniz rule:

$$ \nabla_i A_k = \nabla_i \big(g_{kn}A^n \big) = \nabla_i\big(g_{kn}\big)A^n + g_{kn}\nabla_i A^n = 0 + g_{kn}\nabla_i A^n$$

Nevertheless, we must be careful at the last step. To my understanding, when writing ##\nabla_i\big(g_{kn}\big)##, we are not referring (in this case) to taking the covariant derivative of the whole covariant tensor ##g_{kn}## and evaluating its ##kn## component, but instead we are fixing the value of ##n##. That is, it is not the same to compute the covariant derivative of the ##mn## component of a certain tensor ##T^{mn}##, which would be just the partial derivative of this component, as to compute the ##mn## component of the whole tensor's covariant derivative, which involves Christoffel symbols. Following this reasoning, when I compute ##\nabla_i\big(g_{kn}\big)A^n## I get the following:

$$ \nabla_i\big(g_{kn}\big)A^n = \Big( \partial_i g_{kn} - \Gamma^l_{ik} g_{ln} \Big) A^n $$

This clearly does not vanish in general, since the quantity that we know actually vanishes is:

$$ 0 = \nabla_i\big(g_{kn}\big) = \partial_i g_{kn} - \Gamma^l_{ik} g_{ln} - \Gamma^l_{in} g_{kl} $$

Note that in the latter the covariant derivative is taken of the whole tensor.
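
To make this concrete, here is a quick sympy sanity check of the two expressions above in flat 2D polar coordinates, ##g = \mathrm{diag}(1, r^2)## (the example metric, the coordinates and all the names in the script are just my own choice; it is only meant as a sketch):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])      # flat 2D metric in polar coordinates
ginv = g.inv()
dim = 2

# Christoffel symbols Gamma^l_{ik} = 1/2 g^{lm} (d_i g_{mk} + d_k g_{mi} - d_m g_{ik})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[l, m]
               * (sp.diff(g[m, k], x[i]) + sp.diff(g[m, i], x[k]) - sp.diff(g[i, k], x[m]))
               for m in range(dim))
           for k in range(dim)] for i in range(dim)] for l in range(dim)]

# Only one connection term: d_i g_{kn} - Gamma^l_{ik} g_{ln}
def one_term(i, k, n):
    return sp.simplify(sp.diff(g[k, n], x[i]) - sum(Gamma[l][i][k] * g[l, n] for l in range(dim)))

# Full covariant derivative: d_i g_{kn} - Gamma^l_{ik} g_{ln} - Gamma^l_{in} g_{kl}
def full(i, k, n):
    return sp.simplify(one_term(i, k, n) - sum(Gamma[l][i][n] * g[k, l] for l in range(dim)))

print(one_term(0, 1, 1))  # -> r: does NOT vanish
print(full(0, 1, 1))      # -> 0: the full covariant derivative does vanish
```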

Can someone help?
Thanks in advance.
 
  • #2
I have already realized what happens. Following my reasoning, in the first displayed equation I should have written the partial derivative of A, not the covariant one. That fixes everything. Indeed, it seems to me that it is a general rule that the covariant derivative of a contraction, i.e. the covariant derivative of a scalar, is equal to the contraction of the covariant derivative of the tensor.
 
  • #3
Jufa said:
I have already realized what happens. Following my reasoning, in the first displayed equation I should have written the partial derivative of A, not the covariant one. That fixes everything. Indeed, it seems to me that it is a general rule that the covariant derivative of a contraction, i.e. the covariant derivative of a scalar, is equal to the contraction of the covariant derivative of the tensor.
No it doesn’t. It should definitely not have been the partial derivative of A.
Jufa said:
To my understanding, when writing ##\nabla_i\big(g_{kn}\big)##, we are not referring (in this case) to taking the covariant derivative of the whole covariant tensor ##g_{kn}## and evaluating its ##kn## component, but instead we are fixing the value of ##n##.
That is not the case. If you use the Leibniz rule you get the covariant derivative in both terms.
 
  • #4
Jufa said:
we are fixing the value of ##n##.

No, you're not. ##n## is a dummy index that is used to lower the index of ##A## by contracting it with the metric. So all possible values of ##n## are included, not just one.
 
  • #5
I think the mistake is in not considering the complete covariant derivative of the metric components, which should read
$$\nabla_i g_{kn}=\partial_i g_{kn} -{\Gamma^{j}}_{ik} g_{jn} -{\Gamma^{j}}_{in} g_{kj}. \qquad (1)$$
but now
$${\Gamma^{j}}_{ik}=\frac{1}{2} g^{jl} (\partial_i g_{lk}+\partial_{k} g_{li}-\partial_l g_{ik}).$$
So the 2nd term on the right-hand side of (1) is
$$g_{jn} {\Gamma^j}_{ik} =\frac{1}{2} (\partial_i g_{nk}+\partial_k g_{ni} - \partial_n g_{ik})$$
and the 3rd term
$$g_{kj} {\Gamma^{j}}_{in}=\frac{1}{2}(\partial_i g_{kn}+\partial_n g_{ki}-\partial_k g_{in}).$$
Adding these two expressions gives ##\partial_i g_{kn}##, which exactly cancels the first term in (1), and thus we get
$$\nabla_i g_{kn}=0,$$
which is no surprise, because the Christoffel symbols in GR are the uniquely defined connection coefficients of a torsion-free, metric-compatible connection on a (pseudo-)Riemannian space. Torsion-free means that the connection coefficients are symmetric under exchange of their lower indices, and metric compatibility means that the covariant derivative of the (pseudo-)metric vanishes.
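
To see that this cancellation is not special to any particular metric, here is a small sympy sketch with a completely general symmetric 2D metric (the component functions ##a##, ##b##, ##c## and the coordinate names are just my choice for the check):

```python
import sympy as sp

u, v = sp.symbols('u v')
x = [u, v]
a, b, c = (sp.Function(name)(u, v) for name in ('a', 'b', 'c'))
g = sp.Matrix([[a, b], [b, c]])          # generic symmetric 2D metric
ginv = g.inv()
dim = 2

# Levi-Civita connection: Gamma^j_{ik} = 1/2 g^{jl} (d_i g_{lk} + d_k g_{li} - d_l g_{ik})
Gamma = [[[sp.simplify(sum(sp.Rational(1, 2) * ginv[j, l]
                           * (sp.diff(g[l, k], x[i]) + sp.diff(g[l, i], x[k]) - sp.diff(g[i, k], x[l]))
                           for l in range(dim)))
           for k in range(dim)] for i in range(dim)] for j in range(dim)]

# nabla_i g_{kn} = d_i g_{kn} - Gamma^j_{ik} g_{jn} - Gamma^j_{in} g_{kj}, checked for every component
all_zero = all(
    sp.simplify(sp.diff(g[k, n], x[i])
                - sum(Gamma[j][i][k] * g[j, n] for j in range(dim))
                - sum(Gamma[j][i][n] * g[k, j] for j in range(dim))) == 0
    for i in range(dim) for k in range(dim) for n in range(dim))

print(all_zero)   # True: metric compatibility holds for any metric, not just special ones
```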
 
  • #6
Orodruin said:
No it doesn’t. It should definitely not have been the partial derivative of A.

Yes.
 
  • #7
No. That’s not how the covariant derivative works.
 
  • #8
It must be the partial derivative of ##A^n## if you understand ##\nabla_i g_{kn}## with ##n## fixed (a 1-covariant tensor). They are equivalent ways of seeing it, just as the covariant derivative of a contraction can be seen either as the partial derivative of it or as the contraction of the covariant derivative of the whole tensor.

I think the whole confusion has to do with the tricky index notation of the covariant derivative, though it ends up working quite well once you master it.
 
  • #9
Jufa said:
It must be the partial derivative of ##A^n## if you understand ##\nabla_i g_{kn}## with ##n## fixed (a 1-covariant tensor).
There is not much gained by doing so. In fact, it is just confusing and means you cannot use the metric compatibility of the connection directly.

It is also wrong to consider it a "fixed ##n##" as already pointed out. What you would do is to consider ##g_{kn}A^n## a dual vector and then apply the expression for the covariant derivative. Then you would apply the Leibniz rule for the derivative part - not argue that ##n## is somehow "fixed" (it isn't, it is a summation index).
 
  • #10
Taking a fixed ##n## in a summation to draw conclusions is like taking the derivative of a function while treating the function as constant and concluding that the derivative of any function should always vanish. It's nonsense, giving nonsense conclusions.
 
  • #11
Orodruin said:
There is not much gained by doing so. In fact, it is just confusing and means you cannot use the metric compatibility of the connection directly.

It is also wrong to consider it a "fixed ##n##" as already pointed out. What you would do is to consider ##g_{kn}A^n## a dual vector and then apply the expression for the covariant derivative. Then you would apply the Leibniz rule for the derivative part - not argue that ##n## is somehow "fixed" (it isn't, it is a summation index).
Well, by fixing "n" I mean that you first perform the summation and then derive, what means that you get a 1-covariant tensor for each term of the sum. In each term of the sum n is definitely fixed. That's what I mean.
 
  • #12
haushofer said:
Taking a fixed ##n## in a summation to draw conclusions is like taking the derivative of a function while treating the function as constant and concluding that the derivative of any function should always vanish. It's nonsense, giving nonsense conclusions.
Not at all.
 
  • #13
We were just talking in another thread about notation being misleading.

Sometimes when people write ##g_{kn}## they are referring to the metric tensor, and sometimes they are referring to a particular component of the metric tensor. In the expression:

##\nabla_i (g_{kn} A^n)##

it's even more confusing because ##g## is partially contracted so it's neither fully a component nor fully a tensor. To make things clearer, you can either use the objects, rather than their components, or you can explicitly use basis vectors.

The first approach is
##\nabla_i (g_{kn} A^n) \equiv (\nabla (g(A)))_{ik} = ((\nabla g)(A) + g(\nabla A))_{ik} = (g (\nabla A))_{ik}##
(since ##\nabla g = 0##).

Then since ##g## is a ##(2,0)## tensor and ##\nabla A## is a ##(1,1)## tensor, we can reinsert components in the only way that makes sense:

##(g (\nabla A))_{ik} = g_{kn} (\nabla A)_i{}^n##

which in the traditional bad way of writing covariant derivatives becomes
##g_{kn} \nabla_i A^n##

The other way to make things clearer is to explicitly put in basis vectors and go all the way toward components:

##g = g_{kn} e^k \otimes e^n##
##A = A^m e_m##

##g## operating on ##A## produces the tensor##{^\dagger}##

##g A = g_{kn} A^m (\delta^n_m e^k) = g_{kn} A^n e^k##

(##^\dagger##This is the reason why people use the index notation in the first place. It's ambiguous as to what it means for a tensor to operate on a vector. ##g## as a ##(2,0)## tensor is a function that takes two vectors and returns a scalar. You're only supplying one vector. So what that means is that you're getting an object that still needs another vector to produce a scalar. In other words, what you're getting is a covector or one-form. But it's ambiguous as to which "slot" you're sticking the vector into. Index notation makes this explicit: you write ##g_{kn} A^n## meaning you're sticking ##A## into the second slot, the one labeled ##n##. In the case of ##g##, since it's symmetric, it doesn't matter which slot you're using. In terms of basis vectors, the basis ##(2,0)## tensors are ##e^k \otimes e^n## and the basis vectors are ##e_m##. So if you operate on the latter with the former, you might mean ##e^k(e_m) e^n## or you might mean ##e^k (e^n(e_m))##. The first one becomes ##\delta^k_m e^n##, the second one becomes ##\delta^n_m e^k##. Again, since ##g## is symmetric, it doesn't matter, so I'm just picking the latter.)

Now operating on ##g A## with the covariant derivative ##\nabla_i## becomes##^{\dagger \dagger}##:

##\nabla_i (g A) = \nabla_i (g_{kn} A^n e^k) = (\partial_i g_{kn}) A^n e^k + g_{kn} (\partial_i A^n) e^k + g_{kn} A^n \nabla_i e^k##
##= (\partial_i g_{kn}) A^n e^k + g_{kn} (\partial_i A^n) e^k + g_{kn} A^n (-\Gamma^k_{im} e^m)##

where I have used that ##\nabla_i (\phi V) = (\partial_i \phi) V + \phi (\nabla_i V)## when ##\phi## is a scalar, and where I have used ##\nabla_i e^k = - \Gamma^k_{im} e^m##. Notice that you get a minus sign when acting on basis covectors, and you get a plus sign when acting on basis vectors.
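
(As a quick aside on where the minus sign comes from: using only ##e^k(e_j) = \delta^k_j##, ##\nabla_i e_j = \Gamma^m_{ij} e_m## and the Leibniz rule,
$$0 = \nabla_i\big(e^k(e_j)\big) = (\nabla_i e^k)(e_j) + e^k(\nabla_i e_j) = (\nabla_i e^k)(e_j) + \Gamma^k_{ij},$$
so ##(\nabla_i e^k)(e_j) = -\Gamma^k_{ij}##, i.e. ##\nabla_i e^k = -\Gamma^k_{im} e^m##.)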

(##^{\dagger \dagger}##I am using the uncommon convention that ##\nabla_i## as an operator means ##\nabla_{e_i}##. If you don't want to use this convention, then you can stick to ##\nabla## rather than ##\nabla_i## and just take the ##i^{th}## component of the result. I don't like doing this, because we can easily write, in the case where ##\phi## is a scalar, ##\nabla_i \phi = \partial_i \phi##. If you are using ##\nabla##, then ##\nabla \phi## is ##(\partial_i \phi) e^i##. So you have an extra basis vector to worry about. Not a big deal, but it would make a long derivation even longer.)

Relabelling dummy indices produces:
##= (\partial_i g_{kn}) A^n e^k + g_{kn} (\partial_i A^n) e^k + g_{mn} A^n (-\Gamma^m_{ik} e^k)##
##= ((\partial_i g_{kn}) A^n + g_{kn} (\partial_i A^n) - g_{mn} A^n \Gamma^m_{ik} ) e^k##

This doesn't look like what we want, but if you expand out the definition of ##\Gamma^m_{ik}## in terms of ##g##:

##g_{mn} \Gamma^m_{ik} = \frac{1}{2}(\partial_i g_{nk} + \partial_k g_{in} - \partial_n g_{ik})##

Substituting this into the expression gives:

##\nabla_i (g A) = ([\partial_i g_{kn} - \frac{1}{2}\partial_i g_{nk} - \frac{1}{2} \partial_k g_{in} + \frac{1}{2}\partial_n g_{ik}] A^n + g_{kn} \partial_i A^n) e^k##

The expression in ##[]## simplifies to ##\frac{1}{2}\partial_i g_{kn} + \frac{1}{2}\partial_n g_{ik} - \frac{1}{2} \partial_k g_{in} = \Gamma_{kin} = g_{km} \Gamma^m_{in}##

So we finally get:

##\nabla_i (g A) = (g_{km} \Gamma^m_{in} A^n + g_{kn} \partial_i A^n) e^k##

Relabelling dummy indices in the first term (swapping ##n## and ##m##) gives:

##\nabla_i (g A) = (g_{kn} \Gamma^n_{im} A^m + g_{kn} \partial_i A^n) e^k##
##= g_{kn} (\nabla_i A)^n e^k##

which, on taking the ##k## component, gives:

##(\nabla_i (g A))_k = g_{kn} (\nabla_i A)^n##

With the bad (in my opinion) notation for covariant derivatives, this becomes:

##\nabla_i (g_{kn} A^n) = g_{kn} \nabla_i A^n##
 
  • #14
The upshot is that using the bad notation that I don't like:

##\nabla_i (g_{kn} A^n) = (\partial_i g_{kn}) A^n + g_{kn} (\partial_i A^n) - \Gamma^m_{ik} g_{mn} A^n##

The way to remember this is that there is an uncontracted tensor index ##k## and it is lowered. So that indicates a hidden covector ##e^k##. A covariant derivative operating on covectors produces a ##-\Gamma##. There is only one way to assign the indices that makes sense, so you get ##- \Gamma^m_{ik} g_{mn} A^n##.
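
If anyone wants to cross-check this against ##g_{kn}\nabla_i A^n## explicitly, here is a short sympy sketch in flat 2D polar coordinates with an arbitrarily chosen vector field (the field ##A^n = (r^2, \sin\theta)##, the metric and all the names are just my example, not anything canonical):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])       # flat 2D metric in polar coordinates
A = [r**2, sp.sin(th)]                   # an arbitrary sample vector field A^n
ginv = g.inv()
dim = 2

Gamma = [[[sum(sp.Rational(1, 2) * ginv[m, l]
               * (sp.diff(g[l, k], x[i]) + sp.diff(g[l, i], x[k]) - sp.diff(g[i, k], x[l]))
               for l in range(dim))
           for k in range(dim)] for i in range(dim)] for m in range(dim)]

# Lowered components B_k = g_{kn} A^n
B = [sum(g[k, n] * A[n] for n in range(dim)) for k in range(dim)]

# Left: nabla_i B_k = d_i B_k - Gamma^m_{ik} B_m   (covector rule, minus sign)
def lhs(i, k):
    return sp.diff(B[k], x[i]) - sum(Gamma[m][i][k] * B[m] for m in range(dim))

# Right: g_{kn} nabla_i A^n with nabla_i A^n = d_i A^n + Gamma^n_{im} A^m   (vector rule, plus sign)
def rhs(i, k):
    return sum(g[k, n] * (sp.diff(A[n], x[i]) + sum(Gamma[n][i][m] * A[m] for m in range(dim)))
               for n in range(dim))

print(all(sp.simplify(lhs(i, k) - rhs(i, k)) == 0 for i in range(dim) for k in range(dim)))  # True
```

Both routes, the covector rule with the ##-\Gamma## term applied to ##g_{kn}A^n## and lowering the index of ##\nabla_i A^n## afterwards, give the same components, as they must since ##\nabla_i g_{kn} = 0##.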
 
  • #15
Jufa said:
by fixing "n" I mean that you first perform the summation and then derive, what means that you get a 1-covariant tensor for each term of the sum.

No, you don't. The full sum (contraction of the metric ##g## with the vector ##A##) is a covector (1-form, or what you are calling a "1-covariant tensor"), but the individual terms of the sum are not. The object you get when you apply the covariant derivative operator ##\nabla## to ##A## is a (1, 1) tensor (one upper, one lower index), but individual terms in the summation involved (the contraction of the metric ##g## with the covariant derivative ##\nabla A##) are not.
 
  • #16
Is there a way of closing the thread?
 
  • #17
Jufa said:
Is there a way of closing the thread?

Sure, just tell a moderator you want it closed. I've closed it now.
 

1. What is the commutation between covariant derivative and metric?

The commutation between the covariant derivative and the metric refers to whether these two operations can be applied in either order. The covariant derivative measures the rate of change of a tensor field along a given direction, while the metric defines distances and is used to raise and lower indices. Their commutation determines whether contracting with the metric and taking a covariant derivative can be performed in any order or whether the order matters.

2. Why is the commutation between covariant derivative and metric important?

The commutation between covariant derivative and metric is important because it affects the calculation of quantities in differential geometry, such as curvature and geodesic equations. It also plays a crucial role in general relativity, where the metric is used to describe the curvature of spacetime and the covariant derivative is used to describe the motion of particles in this curved spacetime.

3. How is the commutation between covariant derivative and metric calculated?

In practice it is checked by computing the covariant derivative of the metric itself, ##\nabla_i g_{mn}##. If this tensor vanishes (the connection is metric compatible, as is the case for the Levi-Civita connection of general relativity), then contracting with the metric commutes with taking the covariant derivative; if it does not vanish, ##\nabla_i g_{mn}## measures the difference between the two orders of operation.

4. What is the physical significance of the commutation between covariant derivative and metric?

The physical significance of the commutation between covariant derivative and metric is that it determines the behavior of physical quantities in curved spacetime. In general relativity, the metric is used to describe the curvature of spacetime, while the covariant derivative is used to describe the motion of particles in this curved spacetime. Therefore, the commutation between these two operations affects the calculation of physical quantities, such as energy, momentum, and curvature.

5. Can the commutation between covariant derivative and metric be zero?

Yes. For the Levi-Civita connection used in general relativity, the covariant derivative of the metric vanishes identically, ##\nabla_i g_{mn} = 0## (metric compatibility), in any coordinate system. In that case raising and lowering indices commutes with covariant differentiation, so the order in which these operations are performed does not matter.
