
Bianchi Identity

  1. Mar 9, 2015 #1
    What is the Bianchi identity? Can anyone explain it to me as simply as possible? Is it something that allows us to convert the Riemann tensor to the Ricci curvature tensor?
     
  3. Mar 9, 2015 #2

    robphy

    Science Advisor
    Homework Helper
    Gold Member

    Do you know about "divergence" and "curl"?
     
  4. Mar 9, 2015 #3

    Matterwave

    Science Advisor
    Gold Member

    You can obtain the Ricci curvature tensor from the Riemann curvature tensor by a contraction: ##R_{\mu\nu}=R^\alpha{}_{\mu\alpha\nu}## (sign convention varies by author). There is no "converting" the Riemann curvature tensor into the Ricci tensor. The Riemann tensor contains more information than the Ricci tensor.

    The Bianchi identity is an identity that holds for the Riemann tensor (and there is an associated one for the Einstein tensor): it says that a certain cyclic sum of covariant derivatives of the Riemann tensor is zero. The importance of the Bianchi identity in GR is that it enforces the fact that the covariant divergence of the Einstein tensor is 0. In this way, the Einstein Field Equations enforce local energy conservation by setting the covariant divergence of the stress-energy tensor to 0 as well.
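
    A quick sketch of why that works, using the conventions above: start from the (second) Bianchi identity

    $$
    \nabla_\lambda R_{\alpha\beta\mu\nu} + \nabla_\mu R_{\alpha\beta\nu\lambda} + \nabla_\nu R_{\alpha\beta\lambda\mu} = 0.
    $$

    Contracting with ##g^{\alpha\mu} g^{\beta\nu}## and using the symmetries of the Riemann tensor gives

    $$
    \nabla_\lambda R - 2\nabla^\mu R_{\mu\lambda} = 0 \quad\Longrightarrow\quad \nabla^\mu \left( R_{\mu\lambda} - \tfrac{1}{2} g_{\mu\lambda} R \right) = \nabla^\mu G_{\mu\lambda} = 0.
    $$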
     
  5. Mar 10, 2015 #4
    I know what you mean by the permutation, but I don't really understand the rationale behind it. Is it obtained by differentiating something? Like whenever you differentiate more than one function together, you need to change their order.
    Moreover, what do you mean by covariant divergence? Is it something like the divergence (which is a vector operator that measures the magnitude of a vector field's source or sink at a given point, in terms of a signed scalar), except that it is invariant?
     
  6. Mar 10, 2015 #5

    Matterwave

    Science Advisor
    Gold Member

    The Bianchi identity looks like this: ##\nabla_\mu R_{\alpha\beta\gamma\delta}+\nabla_\gamma R_{\alpha\beta\delta\mu}+\nabla_\delta R_{\alpha\beta\mu\gamma}=0##, where ##\nabla## is the covariant derivative on the manifold. That's all there is to it. It is an identity which holds for the Riemann tensor. There's no more "rationale" for this than for any of the other symmetries of the Riemann tensor. It's there because of the way the Riemann tensor is defined.

    The term "covariant divergence" is loosely used on a tensor ##G^{\mu\nu}## to mean something like ##\nabla_\mu G^{\mu\nu}##. Thus, saying "the covariant divergence of the Einstein tensor vanishes" means ##\nabla_\mu G^{\mu\nu}=0##
     
  7. Mar 10, 2015 #6

    PeterDonis

    2016 Award

    Staff: Mentor

    The words "loosely" and "something like" are not correct here. What you have given is a precise definition of the term "covariant divergence".
     
  8. Mar 10, 2015 #7

    Matterwave

    Science Advisor
    Gold Member

    Well, if the tensor is not symmetric, one will get confused on which index should be summed over to be called the "covariant divergence" right? And if the tensor has more indexes, one gets even more confused on this terminology. That's why I say "loosely". Maybe there is a convention that I am not aware of that it's always the first index?
     
  9. Mar 10, 2015 #8

    PeterDonis

    2016 Award

    Staff: Mentor

    That's why we label each index with a different Greek letter. As you have written it, ##\nabla_{\mu} G^{\mu \nu}## takes the divergence on the first index; we could also write ##\nabla_{\nu} G^{\mu \nu}## to indicate that we are taking the divergence on the second index, which happens to give the same result in this case since the Einstein tensor is symmetric, but with a non-symmetric tensor would give a different tensor as a result. With higher-rank tensors we do the same thing to make it clear which index we are taking the divergence on; divergences on different indexes may give different tensors as a result.

    This is true not just for divergences but for contractions of tensors in general (a divergence is just a contraction of the derivative operator with the tensor it's operating on); you have to specify which indexes you are contracting. For example, the Ricci tensor is defined as

    $$
    R_{\mu \nu} = R^{\alpha}{}_{\mu \alpha \nu}
    $$

    i.e., it is defined as the contraction of the Riemann tensor on the first and third indexes. Other possible contractions of the Riemann tensor do not necessarily yield the same second-rank tensor as a result.
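
    As a concrete illustration (using the standard symmetries of the Riemann tensor): contracting on the first and fourth indexes merely flips the sign, while contracting on the first two indexes vanishes identically,

    $$
    R^{\alpha}{}_{\mu \nu \alpha} = - R^{\alpha}{}_{\mu \alpha \nu} = -R_{\mu\nu}, \qquad R^{\alpha}{}_{\alpha \mu \nu} = 0.
    $$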
     
  10. Mar 10, 2015 #9

    Matterwave

    Science Advisor
    Gold Member

    Yes, certainly, but my point was only a semantic one, namely about the English phrase "divergence of a tensor", which I argue is ambiguous as written.
     
  11. Mar 10, 2015 #10

    PeterDonis

    2016 Award

    Staff: Mentor

    Ambiguous as to precisely which index the divergence is taken on, yes, I agree. But the term "covariant divergence" still narrows things down greatly; you know you're applying the ##\nabla## operator to the tensor and contracting, the only thing not specified is which index you're contracting on. (And in the particular case under discussion, the tensor is symmetric anyway, so it doesn't matter.) That's why I objected to the terms "loosely" and "something like"; to me they gave the implication that a lot more was left unspecified than was actually the case. (Particularly since you actually wrote down the expression in index form, which resolved the only ambiguity. :wink: )
     
  12. Mar 11, 2015 #11
    What is the difference between covariant divergence and covariant derivative? I still don't really get what you mean by covariant divergence.
     
  13. Mar 11, 2015 #12

    ChrisVer

    Gold Member

    The divergence also involves a contraction of indices...
    e.g. [itex]\nabla_\mu K^\nu[/itex] is the covariant derivative of [itex]K^\nu[/itex]
    e.g. [itex]\nabla_\nu K^\nu[/itex] is the covariant divergence of [itex]K^\nu[/itex]

    It's the same with the partial derivative: you can have the partial derivative, or the ordinary divergence.
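
    Written out in components (a sketch assuming the Levi-Civita connection), the covariant derivative of a vector is a rank-2 object, while its covariant divergence is the scalar obtained by contracting it:

    $$
    \nabla_\mu K^\nu = \partial_\mu K^\nu + \Gamma^{\nu}{}_{\mu\lambda} K^\lambda \quad\text{(rank-2 tensor)},
    $$
    $$
    \nabla_\nu K^\nu = \partial_\nu K^\nu + \Gamma^{\nu}{}_{\nu\lambda} K^\lambda = \frac{1}{\sqrt{|g|}}\,\partial_\nu\!\left(\sqrt{|g|}\,K^\nu\right) \quad\text{(scalar)}.
    $$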
     