Einstein linearized tensor is zero

In summary, the conversation discusses the use of the Einstein tensor and its relation to the Ricci tensor and scalar. The equations for the metric and the Riemann tensor are also mentioned. The conversation concludes with a discussion of the Einstein tensor's components and their relationship to second time derivatives in linearized theory.
  • #1
dpopchev

Homework Statement


We have the Einstein tensor [itex] G_{αβ} = R_{αβ} - \frac{1}{2}g_{αβ}R [/itex]
where [itex] R_{\alpha \beta}, R [/itex] are the Ricci tensor and scalar.

Homework Equations


We want the metric to be a small perturbation of flat space, so [itex] g_{\alpha \beta} = \eta_{\alpha \beta} + h_{\alpha \beta} [/itex], where [itex] h_{\alpha \beta} [/itex] is small.

By convention we use [itex] \eta [/itex] to raise or lower indices.
So we can write [itex] R = R^\beta_\beta = \eta^{\alpha \beta} R_{\alpha \beta} [/itex]

The Attempt at a Solution


Let's substitute in the above: [itex] G_{\alpha \beta} = R_{\alpha \beta} - \frac{1}{2} \eta_{\alpha \beta} R = R_{\alpha \beta} - \frac{1}{2} \eta_{\alpha \beta}\eta^{\alpha \beta} R_{\alpha \beta} = R_{\alpha \beta}( 1 - \frac{1}{2} \eta_{\alpha \beta}\eta^{\alpha \beta} ) =
R_{\alpha \beta}( 1 - \frac{1}{2}tr(\eta) ) =
R_{\alpha \beta}( 1 - \frac{2}{2} ) = R_{\alpha \beta}( 1 - 1 ) = 0[/itex]

This cannot be right... I just don't see my mistake...
 
  • #2
dpopchev said:
[itex] G_{\alpha \beta} = R_{\alpha \beta} - \frac{1}{2} \eta_{\alpha \beta} R = R_{\alpha \beta} - \frac{1}{2} \eta_{\alpha \beta}\eta^{\alpha \beta} R_{\alpha \beta}[/itex]

You can't use ##\alpha## and ##\beta## as dummy summation indices for ##R## since ##\alpha## and ##\beta## are already fixed indices. Note the ambiguity in what is being summed in the last term.
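The index problem can also be seen numerically. A minimal plain-Python sketch (the signature diag(-1,1,1,1) is an assumption of mine): the full contraction ##\eta_{\alpha\beta}\eta^{\alpha\beta}## sums over both indices and equals 4, a scalar, so it cannot be factored out of ##R_{\alpha\beta}## as if ##\alpha## and ##\beta## were still free.

```python
# Diagonal Minkowski metric, signature (-,+,+,+); its inverse has the same entries.
eta = [-1, 1, 1, 1]

# Full contraction eta_{ab} eta^{ab}: both indices are summed away, so the
# result is the scalar 4 -- nothing carrying free indices a, b remains.
contraction = sum(eta[a] * eta[a] for a in range(4))
print(contraction)  # 4
```

So the factor would be ##1 - \frac{1}{2}\cdot 4 = -1##, not 0, and in any case the step is invalid because the fixed indices were reused as summation indices.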
 
  • #3
Yes, I see it. Pretty careless handling of indices. Thanks. So now I am stuck on this problem:

In that line of thought, how can I show that the Einstein tensor components [itex] G_{00}, G_{0i} [/itex] do not contain second time derivatives of [itex] h [/itex] in linearized theory, when to second order it is given by:

[itex] G_{\mu \nu} = h^{\sigma}_{\mu,\sigma \nu} + h^{\sigma}_{\nu,\sigma \mu} - h_{,\mu \nu} - h_{\mu \nu, \gamma}^{,\gamma} - \eta_{\mu \nu}h^{\mu \nu}_{,\mu \nu} + \eta_{\mu \nu}h^{,\gamma}_{,\gamma} [/itex]

The expression was taken from here

I just cannot see it. Any hints?
 
  • #4
dpopchev said:
[itex] G_{\alpha \beta} = h^{\sigma}_{\mu,\sigma \nu} + h^{\sigma}_{\nu,\sigma \mu} - h_{,\mu \nu} - h_{\mu \nu, \gamma}^{,\gamma} - \eta_{\mu \nu}h^{\mu \nu}_{,\mu \nu} + \eta_{\mu \nu}h^{,\gamma}_{,\gamma} [/itex]

Still have some indices problems. Note you have the fixed indices ##\alpha## and ##\beta## on the left and yet they don't appear on the right. Also, the next to last term on the right is incorrect because the indices ##\mu## and ##\nu## are each appearing three times. Equation 6.8 in your reference also has this mistake.

Once you get that corrected, all you need to do is write out carefully the expressions for ##G_{00}## and ##G_{0i}## and you'll see that all terms involving second time derivatives will cancel. You'll need to remember that in the linearized theory, raising or lowering the index 0 will just change the sign of the expression. For example ##h^{0}\:_\mu## = ##-h_{0\mu}##.
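The sign rule above can be checked numerically. A minimal plain-Python sketch (the random perturbation and the names are illustrations of mine), assuming the signature diag(-1,1,1,1):

```python
import random

random.seed(1)
eta = [-1, 1, 1, 1]          # diagonal Minkowski metric; eta^{mu nu} = eta_{mu nu}

# a random symmetric perturbation h_{mu nu}
h = [[0.0] * 4 for _ in range(4)]
for a in range(4):
    for b in range(a, 4):
        h[a][b] = h[b][a] = random.random()

# raise the first index: h^a_nu = eta^{a mu} h_{mu nu}; diagonal metric => mu = a
h_up = [[eta[a] * h[a][n] for n in range(4)] for a in range(4)]

# raising a 0 index flips the sign; raising a spatial index changes nothing
assert all(h_up[0][n] == -h[0][n] for n in range(4))
assert all(h_up[i][n] == h[i][n] for i in (1, 2, 3) for n in range(4))
```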
 
  • #5
I will write step by step:

Riemann tensor: [itex] R_{\alpha \beta \mu \nu} = \frac{1}{2}( h_{\alpha \nu, \beta \mu} + h_{\beta \mu, \alpha \nu} - h_{\alpha \mu, \beta \nu} - h_{\beta \nu, \alpha \mu} )[/itex]

Ricci tensor: [itex] R_{\beta \nu} = R^{\mu}_{\beta \mu \nu} = \eta^{\alpha \mu} R_{\alpha \beta \mu \nu} = \frac{1}{2}( h^{\mu}_{\nu, \beta \mu} + h^{\alpha}_{\beta, \alpha \nu} - h^{\alpha}_{\alpha, \beta \nu} - h^{,\mu}_{\beta \nu, \mu})[/itex]

Ricci scalar: [itex] R = R^{\nu}_{\nu} = \eta^{\beta \nu} R_{\beta \nu} = \frac{1}{2}( h^{\mu \beta}_{, \beta \mu} + h^{\alpha \nu}_{, \alpha \nu} - h^{\alpha, \beta }_{\alpha, \beta} - h^{\beta,\mu}_{\beta, \mu}) [/itex]

If I learned from my previous mistake, it would be wrong to make changes in the following manner:
[itex] h^{\alpha, \beta }_{\alpha, \beta} = tr(h)^{,\beta}_{,\beta} = tr(h)^{,\gamma}_{,\gamma} = h^{\sigma, \gamma}_{\sigma, \gamma} [/itex] because these indices are already in use in the original expression.
This would mean it is correct to introduce repeated (dummy) indices only if they were not already in use. So a correct example would be something like [itex] T^{\alpha}_{\beta} + tr(h) + tr(\eta) = T^{\alpha}_{\beta} + h^{\gamma}_{\gamma} + \eta^{\sigma}_{\sigma} = T^{\alpha}_{\beta} + h^{\lambda}_{\lambda} + \eta^{\lambda}_{\lambda} [/itex]

Let's continue with the Einstein tensor: [itex] G_{\beta \nu} = R_{\beta \nu} - \frac{1}{2}\eta_{\beta \nu} R [/itex]. Now we should just substitute the above here, but with different dummy indices, for example [itex] R = R^{\gamma}_{\gamma} = \eta^{\sigma \gamma} R_{\sigma \gamma} [/itex]. But then there won't be any shared indices (I mean the [itex] \mu, \alpha [/itex] indices), or I could add them as dummy indices... I'm not sure.

EDIT: Found a similar problem. I am not the only one.

EDIT: Ricci calculus seems to be even more helpful than Einstein notation alone.
 
  • #6
dpopchev said:
I will write step by step:

Riemann tensor: [itex] R_{\alpha \beta \mu \nu} = \frac{1}{2}( h_{\alpha \nu, \beta \mu} + h_{\beta \mu, \alpha \nu} - h_{\alpha \mu, \beta \nu} - h_{\beta \nu, \alpha \mu} )[/itex]

Ricci tensor: [itex] R_{\beta \nu} = R^{\mu}_{\beta \mu \nu} = \eta^{\alpha \mu} R_{\alpha \beta \mu \nu} = \frac{1}{2}( h^{\mu}_{\nu, \beta \mu} + h^{\alpha}_{\beta, \alpha \nu} - h^{\alpha}_{\alpha, \beta \nu} - h^{,\mu}_{\beta \nu, \mu})[/itex]

Ricci scalar: [itex] R = R^{\nu}_{\nu} = \eta^{\beta \nu} R_{\beta \nu} = \frac{1}{2}( h^{\mu \beta}_{, \beta \mu} + h^{\alpha \nu}_{, \alpha \nu} - h^{\alpha, \beta }_{\alpha, \beta} - h^{\beta,\mu}_{\beta, \mu}) [/itex]

If I learned from my previous mistake, it would be wrong to make changes in the following manner:
[itex] h^{\alpha, \beta }_{\alpha, \beta} = tr(h)^{,\beta}_{,\beta} = tr(h)^{,\gamma}_{,\gamma} = h^{\sigma, \gamma}_{\sigma, \gamma} [/itex] because these indices are already in use in the original expression.
I think all the above is correct.
This would mean it is correct to introduce repeated (dummy) indices only if they were not already in use. So a correct example would be something like [itex] T^{\alpha}_{\beta} + tr(h) + tr(\eta) = T^{\alpha}_{\beta} + h^{\gamma}_{\gamma} + \eta^{\sigma}_{\sigma} = T^{\alpha}_{\beta} + h^{\lambda}_{\lambda} + \eta^{\lambda}_{\lambda} [/itex]
The above equation doesn't make sense because you can't add a scalar quantity like the trace of ##h## to a tensor quantity like ##T^{\alpha}{}_{\beta}##.
Let's continue with the Einstein tensor: [itex] G_{\beta \nu} = R_{\beta \nu} - \frac{1}{2}\eta_{\beta \nu} R [/itex]. Now we should just substitute the above here, but with different dummy indices, for example [itex] R = R^{\gamma}_{\gamma} = \eta^{\sigma \gamma} R_{\sigma \gamma} [/itex]. But then there won't be any shared indices (I mean the [itex] \mu, \alpha [/itex] indices), or I could add them as dummy indices... I'm not sure.
I believe what you're saying here is ok, although I don't know what ##\mu## and ##\alpha## indices you are referring to.
 
  • #7
So I am getting: [itex] G_{\alpha \beta} = R_{\alpha \beta} - \frac{1}{2}\eta_{\alpha \beta}R = \frac{1}{2}(h^{\gamma}_{\alpha, \beta \gamma} + h^{\gamma}_{\beta, \alpha \gamma} - h_{,\alpha \beta} - h^{,\gamma}_{\alpha \beta, \gamma} ) - \frac{1}{2} \eta_{\alpha \beta} \frac{1}{2}( h^{\lambda \sigma}_{,\lambda \sigma} + h^{\lambda \gamma}_{,\lambda \gamma} - h^{,\gamma}_{,\gamma} - h^{,\lambda}_{,\lambda} ) [/itex]

Since this is the linear approximation, can we ignore the derivatives of the traces because of the Jacobi formula [itex] h_{,\mu} = h\, h^{\alpha \beta} h_{\alpha \beta, \mu} [/itex]?

So we are left with: [itex] G_{\alpha \beta} = \frac{1}{2}(h^{\gamma}_{\alpha, \beta \gamma} + h^{\gamma}_{\beta, \alpha \gamma} - h^{,\gamma}_{\alpha \beta, \gamma} ) - \frac{1}{2} \eta_{\alpha \beta} \frac{1}{2}( h^{\lambda \sigma}_{,\lambda \sigma} + h^{\lambda \gamma}_{,\lambda \gamma} )[/itex]

Can I do something about the last term? I am looking at Schutz's expression and he has just one term like [itex] \eta_{\alpha \beta} h^{\mu \nu}_{,\mu \nu} [/itex]?

So at this point I should see that the second time derivatives vanish?
 
  • #8
dpopchev said:
So I am getting: [itex] G_{\alpha \beta} = R_{\alpha \beta} - \frac{1}{2}\eta_{\alpha \beta}R = \frac{1}{2}(h^{\gamma}_{\alpha, \beta \gamma} + h^{\gamma}_{\beta, \alpha \gamma} - h_{,\alpha \beta} - h^{,\gamma}_{\alpha \beta, \gamma} ) - \frac{1}{2} \eta_{\alpha \beta} \frac{1}{2}( h^{\lambda \sigma}_{,\lambda \sigma} + h^{\lambda \gamma}_{,\lambda \gamma} - h^{,\gamma}_{,\gamma} - h^{,\lambda}_{,\lambda} ) [/itex]
In the last term on the right, inside the parentheses, note that the first two terms are identically equal to each other and the last two terms are equal to each other. So, you might as well combine them and get rid of one of the factors of 1/2.
Since this is the linear approximation, can we ignore the derivatives of the traces because of the Jacobi formula [itex] h_{,\mu} = h\, h^{\alpha \beta} h_{\alpha \beta, \mu} [/itex]?

I don't think this identity is correctly written here and I don't think you can ignore the derivatives of the trace, h. Keeping the derivatives of the trace, you should still be able to show that if you isolate all the terms of ##G_{0 0}## that involve second derivatives of time, that these terms will cancel out.
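To see why no determinant (Jacobi) formula is needed for the trace: ##h = \eta^{\alpha\beta} h_{\alpha\beta}## is linear in the components, so its derivative is just the trace of the derivative. A minimal plain-Python sketch with a made-up time-dependent ##h_{\mu\nu}## (the specific function is only an illustration):

```python
# The trace h = eta^{ab} h_{ab} is LINEAR in the components, so its derivative
# is simply the trace of the derivative: no Jacobi/determinant formula enters.
ETA = (-1, 1, 1, 1)  # diagonal Minkowski metric, signature (-,+,+,+)

def trace(m):
    return sum(ETA[a] * m[a][a] for a in range(4))

def h_of_t(t):
    # made-up time-dependent symmetric perturbation, for illustration only
    return [[(a + b + 1) * t * t for b in range(4)] for a in range(4)]

dt = 1e-6
# numerical d/dt of the trace at t = 1 ...
num = (trace(h_of_t(1 + dt)) - trace(h_of_t(1 - dt))) / (2 * dt)
# ... equals the trace of d h_{ab}/dt at t = 1 (here h_{ab}' = 2 (a+b+1) t)
exact = trace([[2.0 * (a + b + 1) for b in range(4)] for a in range(4)])
assert abs(num - exact) < 1e-5
```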
 
  • #9
TSny said:
In the last term on the right, inside the parentheses, note that the first two terms are identically equal to each other and the last two terms are equal to each other. So, you might as well combine them and get rid of one of the factors of 1/2.

I wasn't sure if I could do that.

So by getting rid of the one [itex] \frac{1}{2} [/itex] in front of the last term and making the change: [itex] \alpha \beta → 0 \beta ; \gamma → 0 + k [/itex] I get
[itex] G_{0 \beta } = h^{0}_{0,\beta 0} + h^{k}_{0,\beta k} + h^{0}_{\beta,00} + h^{k}_{\beta ,0k} - h_{,0 \beta} - h_{0 \beta,0}^{,0} - h_{0 \beta, k}^{,k} - \eta_{0 \beta} h^{0 0}_{,00} - \eta_{0 \beta} h^{ik}_{,ik} +\eta_{0 \beta} h_{,0}^{,0} + \eta_{0 \beta} h_{,k}^{,k} [/itex]
Now I will neglect the terms without time derivative and take advantage of the fact [itex] h^0_\alpha = \eta^{0 \mu} h_{0 \alpha} = - h_{0 \alpha}[/itex] and obtain:
[itex] G_{0 \beta} = -h_{00,\beta 0} - h_{0 \beta,00} - h_{k \beta,0 k} - h_{0 \beta} + h_{0 \beta, 00 } + h_{00, 00} + h_{,00} =\\
-h_{00,0 0} -h_{00,i 0} - h_{0 \beta,00} - h_{k \beta,0 k} - h_{,0 0} - h_{,0 i} + h_{0 0, 00 } + h_{0 i, 00 } + h_{00, 00} + h_{,00} = -h_{00,i 0} - h_{k \beta,0 k} -h_{,0i} [/itex]
Does this prove the point? I am not sure, because I expected that there would be no time derivatives.
 
  • #10
dpopchev said:
So by getting rid of the one [itex] \frac{1}{2} [/itex] in front of the last term and making the change: [itex] \alpha \beta → 0 \beta ; \gamma → 0 + k [/itex] I get
[itex] G_{0 \beta } = h^{0}_{0,\beta 0} + h^{k}_{0,\beta k} + h^{0}_{\beta,00} + h^{k}_{\beta ,0k} - h_{,0 \beta} - h_{0 \beta,0}^{,0} - h_{0 \beta, k}^{,k} - \eta_{0 \beta} h^{0 0}_{,00} - \eta_{0 \beta} h^{ik}_{,ik} +\eta_{0 \beta} h_{,0}^{,0} + \eta_{0 \beta} h_{,k}^{,k} [/itex]
That's getting close. There will still be an overall factor of 1/2 for the right hand side, but that's not important in what you want to show. Also, you have left out terms of the form ##\eta_{0 \beta} h^{i0}_{,i0}##. But they involve only first order derivatives in time and are not important in what you want to show.
Now I will neglect the terms without time derivative and take advantage of the fact [itex] h^0_\alpha = \eta^{0 \mu} h_{0 \alpha} = - h_{0 \alpha}[/itex] and obtain:
[itex] G_{0 \beta} = -h_{00,\beta 0} - h_{0 \beta,00} - h_{k \beta,0 k} - h_{0 \beta} + h_{0 \beta, 00 } + h_{00, 00} + h_{,00} =\\
-h_{00,0 0} -h_{00,i 0} - h_{0 \beta,00} - h_{k \beta,0 k} - h_{,0 0} - h_{,0 i} + h_{0 0, 00 } + h_{0 i, 00 } + h_{00, 00} + h_{,00} = -h_{00,i 0} - h_{k \beta,0 k} -h_{,0i} [/itex]
Does this prove the point? I am not sure, because I expected that there would be no time derivatives.
In going from the first to second line in the quote above, it looks like you have treated ##\beta## as a summation variable on the right. But ##\beta## is a fixed index. What you want to do is write out the first line in the quote above for ##\beta = 0## and show that all second order time derivatives cancel. Then you want to repeat for the case where ##\beta = j## for a fixed spatial index ##j##.
 
  • #11
TSny said:
...show that all second order time derivatives cancel.

These second-order derivatives are just [itex] \partial_{0}\partial_{0} := \partial^{2}_{0} [/itex] and do not include [itex] \partial_{something} \partial_{0} \equiv \partial_{0} \partial_{something} [/itex]

Somewhat off-topic: I got a bit confused about the following: the metric is [itex] h_{\mu \nu} [/itex], the inverse is [itex] h^{\mu \nu}=\frac{1}{h_{\mu \nu}} [/itex], and the determinant is [itex] h = det(h) [/itex]. But for some reason I got the impression that [itex] h [/itex] is used for the trace too. Is it because I assume it is diagonal, or do I have to read more closely?
 
  • #12
dpopchev said:
These second-order derivatives are just [itex] \partial_{0}\partial_{0} := \partial^{2}_{0} [/itex] and do not include [itex] \partial_{something} \partial_{0} \equiv \partial_{0} \partial_{something} [/itex]
Right, as long as "something" is something other than 0.
the metric is [itex] h_{\mu \nu} [/itex],
Sorry for being pedantic, but the metric is ##g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}##. So ##h_{\mu\nu}## is only part of the metric.
the inverse is [itex] h^{\mu \nu}=\frac{1}{h_{\mu \nu}} [/itex]
The inverse of the matrix of ##h_{\mu\nu}## is not the matrix ##h^{\mu\nu}##. You are probably thinking of the metric matrix ##g_{\mu\nu}## whose inverse is the matrix ##g^{\mu\nu}##. But that doesn't apply to ##h_{\mu\nu}##. Also, the inverse of the matrix ##h_{\mu\nu}## is not the matrix with elements ##1/h_{\mu\nu}## unless ##h_{\mu\nu}## is diagonal.
and the determinant is [itex] h = det(h) [/itex]. But for some reason I got the impression that [itex] h [/itex] is used for the trace too. Is it because I assume it is diagonal, or do I have to read more closely?
Here, ##h## represents the trace, not the determinant.
 
  • #13
TSny said:
Right, as long a "something" is something other than 0.
Yes, that was the hardest for me to understand. Thanks.

TSny said:
Sorry for being pedantic, but the metric is ##g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}##. So ##h_{\mu\nu}## is only part of the metric.
My mistake I.

TSny said:
The inverse of the matrix of ##h_{\mu\nu}## is not the matrix ##h^{\mu\nu}##. You are probably thinking of the metric matrix ##g_{\mu\nu}## whose inverse is the matrix ##g^{\mu\nu}##. But that doesn't apply to ##h_{\mu\nu}##. Also, the inverse of the matrix ##h_{\mu\nu}## is not the matrix with elements ##1/h^{\mu\nu}## unless ##h_{\mu\nu}## is diagonal.

Here, ##h## represents the trace, not the determinant.
My mistake II.

Mistake I and II are because I confused [itex] g_{\mu \nu }[/itex] and [itex] h_{\mu \nu} [/itex]. Thanks for pointing these things out to me. A follow-up mistake: I cannot apply the Jacobi formula for the derivative of a determinant to the trace [itex] h [/itex].

[itex] 2G_{00} = h^{\gamma}_{0,0\gamma} + h^{\gamma}_{0,0\gamma} - h_{,00} - h^{,\gamma}_{00,\gamma} - \eta_{00} h^{\gamma \sigma}_{,\gamma \sigma} + \eta_{00} h^{,\gamma}_{,\gamma} = -2h_{\gamma 0,0 \gamma} - h_{,00} + h_{00,\gamma \gamma} + h_{\gamma \sigma, \gamma \sigma } + h_{,\gamma \gamma} = \\
-2h_{00,00} - h_{,00} + h_{00,00} + h_{00,00 } + h_{,00} +... = 0 + ...[/itex]
[itex] 2G_{0k} = h^{\gamma}_{0,k\gamma} + h^{\gamma}_{k,0\gamma} - h_{,0k} - h^{,\gamma}_{0k,\gamma} - \eta_{0k} h^{\gamma \sigma}_{,\gamma \sigma} + \eta_{0k} h^{,\gamma}_{,\gamma} = -h_{\gamma 0,k\gamma} -h_{\gamma k,0\gamma} - h_{,0k} + h_{0k,\gamma \gamma} - 0 + 0 = \\
-h_{0k,00} + h_{0k,00} + ...= 0 + ...[/itex]
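The cancellation above can also be verified mechanically. Below is a stdlib-only Python sketch (the representation and names are my own, not from the thread): each second derivative ##h_{ab,\mu\nu}## becomes a dictionary key, and the linearized Riemann, Ricci, and Einstein tensors are built from the formulas in post #5, assuming the diagonal metric diag(-1,1,1,1).

```python
from fractions import Fraction
from collections import defaultdict

ETA = [-1, 1, 1, 1]  # diagonal Minkowski metric; its inverse has the same entries

def term(a, b, mu, nu):
    # canonical key for h_{ab,mu nu}: h is symmetric and partials commute
    return (min(a, b), max(a, b), min(mu, nu), max(mu, nu))

def riemann(a, b, m, n):
    # linearized R_{abmn} = (1/2)(h_{an,bm} + h_{bm,an} - h_{am,bn} - h_{bn,am})
    out = defaultdict(Fraction)
    half = Fraction(1, 2)
    out[term(a, n, b, m)] += half
    out[term(b, m, a, n)] += half
    out[term(a, m, b, n)] -= half
    out[term(b, n, a, m)] -= half
    return out

def ricci(b, n):
    # R_{bn} = eta^{am} R_{abmn}; eta is diagonal, so only a == m survives
    out = defaultdict(Fraction)
    for a in range(4):
        for key, c in riemann(a, b, a, n).items():
            out[key] += ETA[a] * c
    return out

def ricci_scalar():
    # R = eta^{bn} R_{bn}
    out = defaultdict(Fraction)
    for b in range(4):
        for key, c in ricci(b, b).items():
            out[key] += ETA[b] * c
    return out

def einstein(b, n):
    # G_{bn} = R_{bn} - (1/2) eta_{bn} R, dropping terms with zero coefficient
    out = defaultdict(Fraction, ricci(b, n))
    if b == n:  # eta_{bn} is nonzero only on the diagonal
        for key, c in ricci_scalar().items():
            out[key] -= Fraction(1, 2) * ETA[b] * c
    return {key: c for key, c in out.items() if c != 0}

# G_00 and G_0i contain no key with derivative indices (0, 0):
# no second time derivative of any h_{ab} survives the cancellation.
for nu in range(4):
    assert all((mu, la) != (0, 0) for (_, _, mu, la) in einstein(0, nu))
```

By contrast, a purely spatial component such as ##G_{11}## does retain second time derivatives, which is the expected result.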
 
  • #14
OK. However, I would quibble with the way you handled a couple of terms even though you got the correct final result.

For example, note that you wrote ##h^{\gamma}\,_{0,0\gamma} = -h_{\gamma 0, 0\gamma}##. This is not a correct step. ##\gamma## is a summation index which must be written once up and once down. Thus

##h^{\gamma}\,_{0,0\gamma} = h^{0}\,_{0,00} + h^{1}\,_{0,01} + h^{2}\,_{0,02} +h^{3}\,_{0,03} = -h_{00,00} + h_{10,01}+ h_{20,02}+ h_{30,03}.##

Only the first term on the right changes sign when lowering the upper index.

Otherwise, it looks good.
 

1. What is the linearized Einstein tensor?

The linearized Einstein tensor is the Einstein tensor of general relativity expanded to first order in a small perturbation of the flat metric. Linearizing the equations in this way allows for much easier calculation and analysis of gravitational phenomena.

2. Why is the linearized Einstein tensor important?

It allows the study of gravitational effects in weak gravitational fields, where the full nonlinear equations are unnecessarily complicated. This is particularly useful in situations where the gravitational field is not very strong, such as in our solar system.

3. How is the linearized Einstein tensor calculated?

One substitutes the perturbed metric ##g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu}## into the Einstein tensor and keeps only terms of first order in ##h_{\mu\nu}##. The result is built from second partial derivatives of the perturbation and is used in the linearized field equations.

4. What does it mean for the linearized Einstein tensor to be zero?

If the linearized Einstein tensor vanishes, the linearized vacuum field equations are satisfied. This describes weak-field regions free of matter sources, for example gravitational waves propagating through empty space.

5. Can the linearized Einstein tensor be used in all cases?

No, the linearized description is valid only where the gravitational field is weak. In strong gravitational fields, such as those near black holes, the linearization is no longer a good approximation and the full equations of general relativity must be used to accurately describe the behavior of objects.
