Clearing Up Confusion: G_{\mu\nu} Explained


Homework Help Overview

The discussion revolves around the Einstein tensor \( G_{\mu\nu} \) and its relationship with the Ricci tensor \( R_{\mu\nu} \) and the metric tensor \( g_{\mu\nu} \). Participants are examining the mathematical expressions and identities related to these tensors, particularly focusing on the implications of index matching and tensor contractions.

Discussion Character

  • Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants are attempting to clarify the derivation of \( G_{\mu\nu} \) and are questioning the validity of certain steps in the manipulation of indices and tensor contractions. There is a focus on ensuring proper index usage and the implications of tensor properties.

Discussion Status

The discussion is ongoing, with participants providing corrections and insights into the mathematical expressions. Some participants express confusion about specific tensor properties and the implications of their manipulations, while others offer clarifications regarding the use of indices and the nature of tensor products.

Contextual Notes

There is mention of dimensionality, specifically regarding the trace of the metric tensor in four dimensions, and the potential for misunderstanding in the use of indices as both fixed (free) and summation indices.

John_Doe
Dumb question, but...

G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}R

Since

R=g^{\mu\nu}R_{\mu\nu}

and

g^{\mu\nu}g_{\mu\nu}=1

it would appear that

G_{\mu\nu}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\mu\beta}g_{\mu\beta}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{2} \delta^{\beta}_{\nu}\delta^{\alpha}_{\mu}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{2} R_{\mu\nu}
=\frac{1}{2} R_{\mu\nu}

which cannot be correct. I would be very grateful if someone could clear this up for me. Thank you in advance.
 
John_Doe said:
G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\mu\beta}g_{\mu\beta}g^{\alpha\beta}R_{\alpha\beta}

Your \mu, \nu indices don't match on both sides of the equation; check that first and foremost.
By the way, don't mix the fixed indices \mu, \nu with the summation indices.
 
Yeah - there should be an = there so that it's
G_{\mu\nu}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\mu\beta}g_{\mu\beta}g^{\alpha\beta}R_{\alpha\beta}
Then the metric tensors contract, yielding the Kronecker deltas. I'm not sure where the mistake is.
 
g^{\mu\nu}g_{\mu\nu}=1? No. It equals 4, assuming you are in dimension 4. It's a trace.
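This is easy to check numerically. The sketch below uses numpy with the Minkowski metric as an illustrative choice (any invertible metric behaves the same): contracting one index of the metric with its inverse gives a Kronecker delta, while contracting both indices traces that delta and gives the dimension, not 1.

```python
import numpy as np

# Minkowski metric in 4 dimensions, signature (-, +, +, +); an illustrative choice.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)

# Contracting ONE index gives the Kronecker delta (the inverse-metric property):
one_index = np.einsum('ma,mb->ab', g, g_inv)
print(np.allclose(one_index, np.eye(4)))  # True

# Contracting BOTH indices traces that delta, giving the dimension:
full = np.einsum('mn,mn->', g_inv, g)
print(full)  # 4.0
```

The double contraction g^{\mu\nu}g_{\mu\nu} = \delta^{\mu}_{\mu} always equals the number of dimensions, whatever nondegenerate metric is used.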
 
John_Doe said:
Yeah - there should be an = there so that it's
G_{\mu\nu}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\mu\beta}g_{\mu\beta}g^{\alpha\beta}R_{\alpha\beta}
Then the metric tensors contract, yielding the Kronecker deltas. I'm not sure where the mistake is.

In the second step you are contracting \mu (a confusing choice of symbol) with the wrong metric tensor; note, as I pointed out before, that \mu, \nu are not summation indices. If in doubt, always put the summation signs back in and write things out in components.
 
Thank you very much, except now I seem to have

g_{\mu\alpha}g^{\mu\beta}=\delta_{\alpha}^{\beta}=\left\{\begin{array}{cc}1,&\mbox{ if }\alpha=\beta\\0,&\mbox{ if }\alpha\neq\beta\end{array}\right.

but also

g_{\mu\nu}g^{\mu\nu}=4

Marginal confusion there... and I also don't quite understand why it matters which tensors the contraction is done on, other than that you get the wrong answer otherwise.

All help is appreciated, thank you.
 
g_{\mu\alpha}g^{\mu\beta}=\delta_{\alpha}^{\beta}=\left\{\begin{array}{cc}1,&\mbox{ if }\alpha=\beta\\0,&\mbox{ if }\alpha\neq\beta\end{array}\right.
is wrong. As Dick pointed out, for any tensor A^{ij}, A_{ij}A^{ij} is the trace of A.
 
I'm sorry, but you've lost me there. It's not equal to \delta_{\alpha}^{\beta}?

I thought that g^{\mu\nu} was defined as the inverse of g_{\mu\nu}. After all, g^{\mu\nu}=\frac{G(\mu,\nu)}{g} if G(\mu,\nu) denotes the cofactors of g_{\mu\nu} and g = |g_{\mu\nu}|.

Edit:
Taking into account corrections,
G_{\mu\nu}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{8} g_{\mu\nu}g^{\mu\beta}g_{\mu\beta}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{8} \delta^{\beta}_{\nu}\delta^{\alpha}_{\mu}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{8} R_{\mu\nu}
=\frac{7}{8} R_{\mu\nu}
 
It's still not okay. You must not use the same index both as a fixed index and as a summation index (here I mean \mu), and you must not use the same index twice as a summation index (here I mean \beta).
 
G_{\mu\nu}
=R_{\mu\nu}-\frac{1}{2} g_{\mu\nu}g^{\alpha\beta}R_{\alpha\beta}
=R_{\mu\nu}-\frac{1}{8} g_{\mu\nu}g^{\sigma\tau}g_{\sigma\tau}g^{\alpha\beta}R_{\alpha\beta}

So all the working here is now correct, even if it doesn't help in the slightest? That's a good thing. Thank you all very much.
 
Yes, it's now correct. But it doesn't say very much, just 4*(1/8)=1/2. What were you trying to prove to begin with?
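The collapse 4 * (1/8) = 1/2 can be verified numerically. A sketch using numpy, with the Minkowski metric and a random symmetric matrix standing in for R_{\alpha\beta} (both illustrative choices, not an actual Ricci tensor):

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric; an illustrative choice
g_inv = np.linalg.inv(g)

# Random symmetric stand-in for R_{alpha beta} (NOT an actual Ricci tensor).
R = rng.standard_normal((4, 4))
R = (R + R.T) / 2

# Original second term: (1/2) g_{mu nu} g^{alpha beta} R_{alpha beta}
term_half = 0.5 * g * np.einsum('ab,ab->', g_inv, R)

# Rewritten second term: (1/8) g_{mu nu} (g^{sigma tau} g_{sigma tau}) g^{alpha beta} R_{alpha beta}
trace_factor = np.einsum('st,st->', g_inv, g)  # equals the dimension, 4
term_eighth = 0.125 * g * trace_factor * np.einsum('ab,ab->', g_inv, R)

print(trace_factor)                          # 4.0
print(np.allclose(term_half, term_eighth))   # True: 4 * (1/8) = 1/2
```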
 
No, it just seemed to me at first, by eye, that the equation should simplify, which is obviously wrong. It's not supposed to say much now; I'm just interested in the fact that all the working is now correct.

Except, one last thing: what if I have

g_{\alpha\beta}g^{\gamma\delta}

and each of the indices does not appear anywhere else in the equation? Shouldn't \alpha=\gamma and \beta=\delta?
 
No, why would they have to be? Notice it's just a tensor product: two rank-2 tensors multiplied, yielding a rank-4 tensor of type (2,2).
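The point can be made concrete with numpy: with no repeated index, g_{\alpha\beta}g^{\gamma\delta} is an outer product, a rank-4 array in which all four indices remain independent (the metric below is an illustrative choice):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])  # illustrative metric
g_inv = np.linalg.inv(g)

# With four distinct free indices this is an outer (tensor) product:
# a rank-4 array with one independent slot per index.
T = np.einsum('ab,cd->abcd', g, g_inv)
print(T.shape)  # (4, 4, 4, 4)

# Nothing forces alpha = gamma or beta = delta; mixed components exist too.
print(T[0, 0, 1, 1])  # g_{00} * g^{11} = -1.0
```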
 
