Writing the Einstein action in terms of tetrads


Discussion Overview

The discussion centers on the formulation of the Einstein action using tetrads, specifically examining a proof presented by Pullin regarding the non-degeneracy of a certain prefactor in the action's expression. Participants explore the implications of antisymmetrization and the properties of direct products of matrices in this context.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant references Pullin's paper and requests clarification on verifying the non-degeneracy of a prefactor involving tetrads.
  • Another participant suggests that the check involves ensuring antisymmetrizations do not lead to a zero prefactor for any indices.
  • A participant seeks further elaboration, drawing parallels to the implications of non-degenerate matrices and questioning whether the determinant of a direct product matrix needs to be considered.
  • Another participant agrees with the analogy but notes the complexity introduced by the tetrads and suggests that orthogonality of the tetrad vectors might be relevant.
  • A participant provides a detailed explanation of the determinant of a direct product of matrices, including the formula and notation involved.
  • Another participant simplifies the determinant calculation, stating that the determinant of the direct product reduces to powers of the determinants of the individual matrices, which is easy to see in the diagonalizable case.
  • A final post links to a document claiming to provide a proof of the original statement regarding non-degeneracy.

Areas of Agreement / Disagreement

Participants express differing levels of understanding and approaches to the problem, with no consensus reached on the specific methods for verifying the non-degeneracy of the prefactor or the implications of the antisymmetrizations.

Contextual Notes

Participants discuss the complexity of the notation and the implications of working with direct products of matrices, indicating that assumptions about the properties of the matrices involved may not be fully resolved.

julian (Science Advisor, Gold Member)
In the paper http://arxiv.org/pdf/hep-th/9301028.pdf pages 8-9 Pullin shows how to write the Einstein action in terms of tetrads [itex]e^a_I[/itex]. Part of the proof is: "...the last term yields [itex]e_M^{[a} e_N^{b]} \delta^M_{[I} \delta^K_{J]} C_{bK}^{\;\;\; N}[/itex]. It is easy to check that the prefactor in this expression is nondegenerate..."

Can somebody explain how you do this check?
 
I think you want to check that the antisymmetrizations do not cause the prefactor to be zero for any indices a,I,J; that is all.
 
Could you elaborate a bit please?

I know that when you have [itex]M_{ab} v_b = 0[/itex], if [itex]M_{ab}[/itex] is non-degenerate (has no zero eigenvalues, or equivalently a non-zero determinant), it implies [itex]v_b = 0[/itex].

Is Pullin's case the same, even though we are dealing with the direct product of three matrices? Do we need to show that the determinant of the direct product matrix is non-zero?

Are the antisymmetrizations you mentioned related to the calculation of this determinant?

Or is it to do with something simpler?
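To make the non-degeneracy statement concrete, here is a minimal numerical sketch in NumPy (my own illustration, not from the paper): for a matrix with non-zero determinant, [itex]M v = 0[/itex] forces [itex]v = 0[/itex].

```python
import numpy as np

# A non-degenerate (invertible) 2x2 matrix: det M = 6, which is non-zero
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert not np.isclose(np.linalg.det(M), 0.0)

# Since M is non-degenerate, the only solution of M v = 0 is v = 0
v = np.linalg.solve(M, np.zeros(2))
assert np.allclose(v, 0.0)
```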
 
Yeah, it's exactly like that, except now you have some Euclidean indices from the tetrads to mess up the notation. Your equation basically reads [itex]M^{ab} V_b = 0[/itex], but now your M is a tensor product of two vectors. Remember also that the tetrad vectors are orthogonal to each other. Perhaps that helps? I'm not entirely sure; I didn't actually bother to do it :D Whenever something reads "it's easy to see that..." you're probably going to have a bad time.
 
If [itex]A_{ij}[/itex] is an [itex]m \times m[/itex] matrix and [itex]B_{ij}[/itex] is an [itex]n \times n[/itex] matrix, the direct product is

[itex]C = A \otimes B[/itex]

where [itex]C[/itex] is an [itex]mn \times mn[/itex] matrix with elements

[itex]C_{\alpha \beta} = A_{ij} B_{kl}[/itex]

with


[itex]\alpha = n (i-1) + k , \;\;\;\; \beta = n(j-1) + l[/itex].
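This index convention is exactly the one NumPy's `np.kron` uses, so the element formula can be checked directly (a sketch of my own; the 1-based indices above are shifted to 0-based):

```python
import numpy as np

m, n = 2, 3
rng = np.random.default_rng(0)
A = rng.standard_normal((m, m))  # the m x m matrix A_ij
B = rng.standard_normal((n, n))  # the n x n matrix B_kl

C = np.kron(A, B)  # the mn x mn direct (Kronecker) product

# Check C_{alpha beta} = A_{ij} B_{kl} with
# alpha = n(i-1) + k, beta = n(j-1) + l  (1-based indices)
for i in range(1, m + 1):
    for j in range(1, m + 1):
        for k in range(1, n + 1):
            for l in range(1, n + 1):
                alpha = n * (i - 1) + k
                beta = n * (j - 1) + l
                assert np.isclose(C[alpha - 1, beta - 1],
                                  A[i - 1, j - 1] * B[k - 1, l - 1])
```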

The determinant of [itex]C_{\alpha \beta}[/itex] is given by the usual formula

[itex] \det C_{\alpha \beta} = {1 \over (mn)!} \sum_{\alpha_1 \beta_1} \cdots \sum_{\alpha_{mn} \beta_{mn}} \epsilon_{\alpha_1 \dots \alpha_{mn}} \epsilon_{\beta_1 \dots \beta_{mn}} C_{\alpha_1 \beta_1} \dots C_{\alpha_{mn} \beta_{mn}}[/itex].

In terms of [itex]A_{ij}[/itex] and [itex]B_{kl}[/itex] this becomes


[itex] \det C_{\alpha \beta} = {1 \over (mn)!} \sum_{i_1, j_1, k_1, l_1} \cdots \sum_{i_{mn}, j_{mn}, k_{mn}, l_{mn}} \epsilon_{n(i_1 - 1) + k_1, \, \dots, \, n(i_{mn} - 1) + k_{mn}} \, \epsilon_{n(j_1 - 1) + l_1, \, \dots, \, n(j_{mn} - 1) + l_{mn}} \, A_{i_1 j_1} B_{k_1 l_1} \dots A_{i_{mn} j_{mn}} B_{k_{mn} l_{mn}}[/itex].

Looks a bit daunting.
 
Actually, looking into it, you have the simple formula

[itex]\det C = (\det A)^n (\det B)^m[/itex].

This is easy to see if [itex]A[/itex] and [itex]B[/itex] are diagonal, which suggests a way to prove the result when [itex]A[/itex] and [itex]B[/itex] can be diagonalised.
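The identity is also easy to confirm numerically (a quick NumPy sketch of my own; matrix sizes here are arbitrary):

```python
import numpy as np

m, n = 3, 4
rng = np.random.default_rng(1)
A = rng.standard_normal((m, m))  # m x m
B = rng.standard_normal((n, n))  # n x n

# det(A (x) B) = det(A)^n * det(B)^m
lhs = np.linalg.det(np.kron(A, B))
rhs = np.linalg.det(A) ** n * np.linalg.det(B) ** m
assert np.isclose(lhs, rhs)
```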
 
Proof of original statement...

http://dl.dropbox.com/u/81787406/nondegen.pdf
 
