# Vanishing of Einstein tensor from Bianchi identity

1. Apr 7, 2015

### binbagsss

I'm looking at the informal arguments used in deriving the Einstein field equations, in particular the step where, by the Bianchi identity, the divergence of the Einstein tensor is automatically zero.

So the Bianchi identity is $\nabla^{u}R_{pu}-\frac{1}{2}\nabla_{p}R=0$ and
$G_{uv}=R_{uv}-\frac{1}{2}Rg_{uv}$

So I can see this if the covariant derivative is itself an actual tensor, such that its indices can be raised and lowered, i.e. $\nabla^{u}G_{uv}=\nabla^{u}R_{uv}-\frac{1}{2}\nabla^{u}(Rg_{uv})=\nabla^{u}R_{uv}-\frac{1}{2}\nabla_{v}R$

In going from the second to the third expression I've assumed the covariant derivative is a tensor.
Is it?

Or is my working incorrect?

Thanks.

2. Apr 7, 2015

### BiGyElLoWhAt

**I do believe**
Covariant derivatives are tensors. There are components that account for the change in direction and magnitude of a vector, and components that account for the change in direction of the unit vectors.
This doesn't explicitly state this as fact, but maybe it'll help you out. http://en.wikipedia.org/wiki/Covariant_derivative#Tensor_fields

3. Apr 12, 2015

### binbagsss

Could anybody point me to a source that explicitly states the covariant derivative is a tensor? Thanks.

4. Apr 12, 2015

### robphy

5. Apr 12, 2015

### binbagsss

Okay, thanks.
So you cannot raise or lower its indices?

So is the working in my OP wrong? Could anyone please give a hint as to the correct way to show that the vanishing of the divergence of the Einstein tensor follows automatically from the Bianchi identity? Thanks.

6. Apr 12, 2015

### robphy

You can raise and lower its indices with the metric tensor.
Your equations are valid [but need a better understanding of the objects]....
However, you didn't write the Bianchi identity... you wrote a contraction of it.
(See page 222 of Misner Thorne Wheeler's Gravitation. )

7. Apr 12, 2015

### stevendaryl

Staff Emeritus
The trick is this:

$\nabla_w g_{u v} = 0$

The metric is "covariantly constant". So that means that

$\nabla_w (R g_{u v}) = (\nabla_w R) g_{u v} + R (\nabla_w g_{u v}) =(\nabla_w R) g_{u v}$

So if you multiply both sides by $g^{u w}$ and sum over $w$, you get:

$g^{u w} \nabla_w (R g_{u v}) = g^{u w} (\nabla_w R) g_{u v} = \delta^w_v \nabla_w R = \nabla_v R$

So we have:

$\nabla^u (R g_{u v}) = \nabla_v R$
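
Combining this with the contracted Bianchi identity $\nabla^u R_{uv} = \frac{1}{2} \nabla_v R$ quoted in the OP, the divergence of the Einstein tensor vanishes:

$\nabla^u G_{uv} = \nabla^u R_{uv} - \frac{1}{2} \nabla^u (R g_{uv}) = \frac{1}{2} \nabla_v R - \frac{1}{2} \nabla_v R = 0$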

8. Apr 12, 2015

### bcrowell

Staff Emeritus
The only reason for using covariant derivatives rather than normal derivatives is so that they'll act like tensors.

9. Apr 14, 2015

### binbagsss

Thanks. I follow most of it. I think I'm being stupid, but I don't see where $\nabla^u$ appears in the above, i.e. I don't follow the last line; there are only lower-index covariant derivatives in the proof.

10. Apr 14, 2015

### stevendaryl

Staff Emeritus
The definition of $\nabla^u$ is

$\nabla^u = g^{uw} \nabla_w$ (where $w$ is summed over)

11. Apr 14, 2015

### binbagsss

So in proving that you can raise and lower indices on a covariant derivative we have to use the assumption that it holds?

Edit: Oh, your identity includes the scalar R, and is not a proof of raising and lowering indices on the covariant derivative. But raising and lowering indices on the covariant derivative is what the question in my OP was addressing.

12. Apr 14, 2015

### stevendaryl

Staff Emeritus
You have to look at the order of terms.

It's true by definition that
$g_{uv} \nabla^u R = \nabla_v R$

But it's not true by definition that
$\nabla^u (g_{uv} R) = \nabla_v R$

You have to use that $g_{uv}$ is covariantly constant to be able to move it to the left side of $\nabla^u$

13. Apr 14, 2015

### binbagsss

But this is a definition? One of my questions in this thread was whether or not the covariant derivative is a tensor.
If it's not (from previous answers I believe it's not), how can you raise and lower indices on it? You've just said that by definition you can.
But how is this the case if it's not a tensor?

Last edited: Apr 14, 2015
14. Apr 14, 2015

### stevendaryl

Staff Emeritus
Yes, it's just a definition.

No it's an operator that returns a tensor when applied to a field (scalar, vector or tensor).

$\nabla_v$ is not a tensor, it is an operator that returns a tensor. $\nabla^u$ is defined to be an operator such that:

$\nabla^u X = g^{uv} (\nabla_v X)$

15. Apr 14, 2015

### BiGyElLoWhAt

$\nabla^u$ might not be a tensor, but it's still a matrix, and it returns a tensor of the same order as the matrix.

16. Apr 14, 2015

### BiGyElLoWhAt

I guess my confusion is coming from this (this might help the OP): $\nabla$ is definitely treated as a vector when dealing with vector fields. Does that mean it is a vector, or not? Does this logic still apply when dealing with tensor fields and using either of the "dels"?

17. Apr 14, 2015

### robphy

$\nabla^u$ applied to a scalar field $\alpha$ (a type-(0,0) tensor field) results in $\nabla^u \alpha$, which is a type-(1,0) tensor field [which looks like a column-vector (of scalar fields) in matrix form]... unless you are thinking of a type-(0,0)-matrix [i.e. a 1x1 matrix] with a single vector-field-element.

Although $\nabla^u$ might have vector-like properties such as addition and scalar multiplication,
it isn't commutative: that is, although $E^a E^b E^c Q_{de}{}^{fg}= E^b E^a E^c Q_{de}{}^{fg}$ for vectors $E^a$,
we have $\nabla^a \nabla^b \nabla^c Q_{de}{}^{fg}\neq \nabla^b \nabla^a \nabla^c Q_{de}{}^{fg}$.

18. Apr 15, 2015

### martinbn

To me it seems that it is the notations that cause problems. The expression $\nabla_\mu X$, whether it is in coordinates or in abstract index notation, means $(\nabla X)_\mu$. So you can raise and lower its indeces. Of course depending on what type of tensor $X$ is it will have other indeces as well, if $X$ is a vecotr filed, then $\nabla X$ is a $(1,1)$ tensor field and so on.