Vanishing of the divergence of the Einstein tensor from the Bianchi identity

binbagsss
I'm looking at the informal arguments used in deriving the Einstein field equations (EFE).
The step in question is that, by the Bianchi identity, the divergence of the Einstein tensor is automatically zero.

So the Bianchi identity is ##\nabla^{u}R_{pu}-\frac{1}{2}\nabla_{p}R=0##
##G_{uv}=R_{uv}-\frac{1}{2}Rg_{uv}##

So I see this if the covariant derivative is an actual tensor itself, such that its indices can be lowered and raised, i.e. ##\nabla^{u}G_{uv}=\nabla^{u}R_{uv}-\frac{1}{2}\nabla^{u}(Rg_{uv})=\nabla^{u}R_{uv}-\frac{1}{2}\nabla_{v}R##

So from the second to the third expression I've assumed the covariant derivative is a tensor.
Is it?

Or is my working incorrect?

Thanks.
 
**I do believe**
Covariant derivatives are tensors. There are components that account for the change in direction and magnitude of the vector, and components that account for the change in direction of the basis (unit) vectors.
This doesn't explicitly state this as fact, but maybe it'll help you out. http://en.wikipedia.org/wiki/Covariant_derivative#Tensor_fields
 
Could anybody point me to a source that explicitly states the covariant derivative is a tensor? Thanks.
 
The covariant derivative is not a tensor... it's an operator on tensors.

It is more correct to say that the "covariant derivative of a tensor" is a tensor.
http://physicspages.com/2013/12/21/covariant-derivative-of-a-general-tensor/
Usually, the covariant derivative is compatible with the metric tensor... so the "covariant derivative of the metric tensor" is zero.
 
robphy said:
The covariant derivative is not a tensor... it's an operator on tensors.

It is more correct to say that the "covariant derivative of a tensor" is a tensor.
http://physicspages.com/2013/12/21/covariant-derivative-of-a-general-tensor/
Usually, the covariant derivative is compatible with the metric tensor... so the "covariant derivative of the metric tensor" is zero.
Okay, thanks,
and so you cannot raise or lower indices of it?

So the working in my OP is wrong. Could anyone please give a hint as to the correct way to show that the vanishing of the divergence of the Einstein tensor follows automatically from the Bianchi identity? Thanks.
 
You can raise and lower its indices with the metric tensor.
Your equations are valid [but need a better understanding of the objects]...
However, you didn't write the Bianchi identity... you wrote a contraction of it.
(See page 222 of Misner, Thorne & Wheeler's Gravitation.)
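
For completeness, here is a sketch of how that contraction goes (take it as an outline rather than a substitute for the MTW derivation; signs and index placements depend on one's curvature conventions). The full second Bianchi identity is

$$\nabla_{e}R_{abcd}+\nabla_{c}R_{abde}+\nabla_{d}R_{abec}=0 .$$

Contracting with ##g^{ac}## (legitimate because ##\nabla g=0##) gives ##\nabla_{e}R_{bd}-\nabla_{d}R_{be}+\nabla^{a}R_{abde}=0##, and contracting that with ##g^{bd}## and using the antisymmetries of the Riemann tensor gives ##\nabla_{e}R-2\nabla^{a}R_{ae}=0##, i.e. the twice-contracted form ##\nabla^{u}R_{pu}-\frac{1}{2}\nabla_{p}R=0## quoted in the OP.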
 
The trick is this:

$$\nabla_w g_{u v} = 0$$

The metric is "covariantly constant". So that means that

$$\nabla_w (R g_{u v}) = (\nabla_w R) g_{u v} + R (\nabla_w g_{u v}) = (\nabla_w R) g_{u v}$$

So if you contract both sides with ##g^{u w}## (summing over ##u## and ##w##), you get:

$$g^{u w} \nabla_w (R g_{u v}) = g^{u w} (\nabla_w R) g_{u v} = \delta^w_v (\nabla_w R) = \nabla_v R$$

So we have:

$$\nabla^u (R g_{u v}) = \nabla_v R$$
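
Putting this together with the contracted identity ##\nabla^{u}R_{pu}-\frac{1}{2}\nabla_{p}R=0## quoted in the OP (relabelling ##p\to v## and using the symmetry of the Ricci tensor, nothing new is being added):

$$\nabla^{u}G_{uv}=\nabla^{u}R_{uv}-\frac{1}{2}\nabla^{u}(R\,g_{uv})=\nabla^{u}R_{uv}-\frac{1}{2}\nabla_{v}R=0 .$$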
 
The only reason for using covariant derivatives rather than ordinary partial derivatives is so that the result of differentiating a tensor is again a tensor.
 
stevendaryl said:
The trick is this:

$$\nabla_w g_{u v} = 0$$

The metric is "covariantly constant". So that means that

$$\nabla_w (R g_{u v}) = (\nabla_w R) g_{u v} + R (\nabla_w g_{u v}) = (\nabla_w R) g_{u v}$$

So if you contract both sides with ##g^{u w}## (summing over ##u## and ##w##), you get:

$$g^{u w} \nabla_w (R g_{u v}) = g^{u w} (\nabla_w R) g_{u v} = \delta^w_v (\nabla_w R) = \nabla_v R$$

So we have:

$$\nabla^u (R g_{u v}) = \nabla_v R$$
Thanks, I follow most of it. I think I'm being stupid, but I don't see where ##\nabla^u## comes in above, i.e. I don't follow the last line; there are only lower-index covariant derivatives in the proof.
 
  • #10
binbagsss said:
Thanks, I follow most of it. I think I'm being stupid, but I don't see where ##\nabla^u## comes in above, i.e. I don't follow the last line; there are only lower-index covariant derivatives in the proof.

The definition of ##\nabla^u## is

##\nabla^u = g^{uw} \nabla_w## (where ##w## is summed over)
 
  • #11
stevendaryl said:
The definition of ##\nabla^u## is

##\nabla^u = g^{uw} \nabla_w## (where ##w## is summed over)

So in proving that you can raise and lower indices on a covariant derivative we have to use the assumption that it holds?

Edit: Oh, your identity includes the scalar ##R##, and is not a proof of raising and lowering indices on the covariant derivative. But raising and lowering indices on the covariant derivative is what the question in my OP was addressing.
 
  • #12
binbagsss said:
So in proving that you can raise and lower indices on a covariant derivative we have to use the assumption that it holds?

You have to look at the order of terms.

It's true by definition that
$$g_{uv} \nabla^u R = \nabla_v R$$

But it's not true by definition that
$$\nabla^u (g_{uv} R) = \nabla_v R$$

You have to use that ##g_{uv}## is covariantly constant to be able to move it to the left of ##\nabla^u##.
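
Spelled out with the definition ##\nabla^u = g^{uw}\nabla_w## (just unwinding the symbols):

$$\nabla^u (g_{uv} R) = g^{uw}\nabla_w (g_{uv} R) = g^{uw} g_{uv}\nabla_w R = \delta^w_v \nabla_w R = \nabla_v R ,$$

where the second equality is exactly where ##\nabla_w g_{uv}=0## gets used.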
 
  • #13
stevendaryl said:
The definition of ##\nabla^u## is

##\nabla^u = g^{uw} \nabla_w## (where ##w## is summed over)

But this is a definition? One of my questions in this thread was whether or not the covariant derivative is a tensor.
And if not (from previous answers I believe it's not), how can you raise and lower indices on it? Yet you've just said that by definition you can.
How is this the case if it's not a tensor?
 
  • #14
binbagsss said:
But this is a definition?

Yes, it's just a definition.

One of my questions in this thread was whether or not the covariant derivative is a tensor.

No, it's an operator that returns a tensor when applied to a field (scalar, vector, or tensor).

And if not (from previous answers I believe it's not), how can you raise and lower indices on it? Yet you've just said that by definition you can.
How is this the case if it's not a tensor?

##\nabla_v## is not a tensor; it is an operator that returns a tensor. ##\nabla^u## is defined to be an operator such that:

$$\nabla^u X = g^{uv} (\nabla_v X)$$
 
  • #15
##\nabla^u## might not be a tensor, but it's still a matrix, and returns a tensor of the same order as the matrix.
 
  • #16
I guess my confusion is coming from this (this might help the OP): ##\nabla## is definitely treated as a vector when dealing with vector fields; does that mean it is a vector, or not? Does this logic still apply when dealing with tensor fields and using either of the "dels"?
 
  • #17
BiGyElLoWhAt said:
##\nabla^u## might not be a tensor, but it's still a matrix, and returns a tensor of the same order as the matrix.
##\nabla^u## applied to a scalar field ##\alpha## (a type-(0,0) tensor field) results in ##\nabla^u \alpha##, which is a type-(1,0) tensor field [which looks like a column-vector (of scalar fields) in matrix form]... unless you are thinking of a type-(0,0)-matrix [i.e. a 1x1 matrix] with a single vector-field-element.

Although ##\nabla^u## might have vector-properties like addition and scalar-multiplication,
it isn't commutative: That is, although ##E^a E^b E^c Q_{de}{}^{fg}= E^b E^a E^c Q_{de}{}^{fg}## for vectors ##E^a##,
we have ##\nabla^a \nabla^b \nabla^c Q_{de}{}^{fg}\neq \nabla^b \nabla^a \nabla^c Q_{de}{}^{fg}## .
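
Concretely, for a vector field ##V^c## the failure to commute is measured by the Riemann tensor (up to one's sign conventions):

$$\nabla_a \nabla_b V^c - \nabla_b \nabla_a V^c = R^{c}{}_{dab} V^{d} ,$$

so second covariant derivatives genuinely depend on their order.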
 
  • #18
To me it seems that it is the notation that causes problems. The expression ##\nabla_\mu X##, whether it is in coordinates or in abstract index notation, means ##(\nabla X)_\mu##. So you can raise and lower its indices. Of course, depending on what type of tensor ##X## is, it will have other indices as well; if ##X## is a vector field, then ##\nabla X## is a ##(1,1)## tensor field, and so on.
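
As a concrete sketch of this in a coordinate basis (standard formulas, only spelling out the notation): for a vector field ##X## the ##(1,1)## tensor ##\nabla X## has components

$$(\nabla X)_{\mu}{}^{\nu} = \nabla_{\mu}X^{\nu} = \partial_{\mu}X^{\nu} + \Gamma^{\nu}{}_{\mu\lambda}X^{\lambda} ,$$

and the lower index is raised with the metric as usual: ##\nabla^{\mu}X^{\nu} = g^{\mu\sigma}\nabla_{\sigma}X^{\nu}##.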
 
