Find the Tetrad for Kerr Metric: Step-by-Step Guide
- Context: Graduate
- Thread starter: Zuhaa Naz
- Tags: Kerr metric, tetrad

Summary
The discussion focuses on deriving the tetrad for the Kerr metric, emphasizing that there are infinitely many possible tetrads for any given metric. Participants highlight the need for a specific context or reference to understand the selection of a particular tetrad field. A key resource mentioned is the research paper available at arxiv.org/abs/1710.09880, which provides a framework for constructing orthonormal tetrads. The conversation also touches on the mathematical representation of the tetrad and its relationship to the metric, including the use of index-free notation.
Prerequisites:
- Understanding of general relativity concepts
- Familiarity with tetrads and their mathematical representation
- Knowledge of the Kerr metric and its properties
- Proficiency in tensor algebra and index-free notation
Suggested next steps:
- Study the derivation of tetrads from the Kerr metric using the paper at arxiv.org/abs/1710.09880
- Learn about the construction of orthonormal bases in general relativity
- Explore the effect of Lorentz transformations on tetrads
- Investigate the relationship between tetrads and local Minkowski frames

Intended audience: students and researchers in general relativity, physicists interested in the Kerr metric, and anyone looking to deepen their understanding of tetrads and their applications in curved spacetime.
Zuhaa Naz said: how to find the tetrad of this metric
There are an infinite number of possible tetrads for any metric, not just one.
Zuhaa Naz said: the tetrad given
Where? What reference are you getting this from?
Zuhaa Naz said: how to find tetrad of this metric

As Peter says, there are infinitely many possible tetrads for any given metric. A tetrad at a point is just four mutually orthogonal unit vectors at that point, and you can always rotate or Lorentz boost any such set of four to make another.
You have a specific tetrad field (i.e., a recipe for writing down a tetrad at any event such that nearby events have similarly oriented tetrads). Somebody must have made a choice about how to go about that, picking one field from the infinitely many choices, and it's that context that's needed.
Ibix said: As Peter says, there are infinitely many possible tetrads for any given metric. [...]
I understand that point.
But will you please help me find these four tetrad vectors?
Peter said: Where? What reference are you getting this from?
Actually I found it in a research paper, and I'm interested in deriving these four orthonormal tetrad vectors from the given Kerr metric.
Zuhaa Naz said: Actually I found it in a research paper.

Which one? We need a link or other reference.
Zuhaa Naz said: But will you please help me out how to find these four tetrads.

Somebody decided what tetrads they wanted, then wrote down an expression for those four tetrad vectors in terms of the coordinate basis vectors. I don't think I can be more specific than that without knowing the reasoning behind that selection of tetrad.

Zuhaa Naz said: Actually I found it in a research paper.

Provide a link to the paper and we might be able to help.
Ibix said: Just a link is fine.

Yup, this one: https://arxiv.org/abs/1710.09880
"Relativity Demystified: A Self-Teaching Guide", pages 100-101.
But I'm confused about how to apply it to get a tetrad comprising four orthonormal vectors.
Zuhaa Naz said: how to find the tetrad of this metric [attachment 240116]
the tetrad given is this one [attachment 240117]
I'm new to general relativity; please help me understand how this tetrad is derived.
Your tetrad is just an orthonormal basis of one-forms.
In index-free notation, the four one-forms are ##e^0, e^1, e^2, e^3##. Note that here the superscripts are not tensor indices but selectors: ##e^0## is the first member of the set, ##e^1## is the second member, and so on. In index-free notation we don't need the subscript.
The components of ##e^0## are written by the author of the paper you reference as ##e^{(0)}_a##, and similarly for the other members of the tetrad.
Note that ##dt##, ##dr##, ##d\theta##, and ##d\phi## as I write them are also one-forms. The author of the paper writes their components, so where I write ##dt## he writes ##dt_a##, and similarly for the other one-forms.
The point, then, is that the metric is just
$$-e^0\otimes e^0 + e^1 \otimes e^1 + e^2 \otimes e^2 + e^3 \otimes e^3$$
(I have actually only verified the ##dt^2## component; I think the others work out as well, but you should double-check.)
Deriving this is basically a matter of factoring the line element with algebra. It may help to rewrite the original Kerr metric in the same index-free notation that I use. If you go this route, don't forget that the metric must be symmetric: ##dt \otimes d\phi## is not the same as ##d\phi \otimes dt##, so a line-element term written as ##(\ldots)\,dt\,d\phi## becomes ##\frac{1}{2}(\ldots)(dt \otimes d\phi + d\phi \otimes dt)##.
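The claim that the metric factors as ##-e^0\otimes e^0 + e^1\otimes e^1 + e^2\otimes e^2 + e^3\otimes e^3## can be checked numerically. The sketch below uses the Boyer-Lindquist form of the Kerr metric and one commonly quoted orthonormal co-tetrad whose time leg carries the factor ##\sqrt{\Delta/\Sigma}## (matching the coefficient ##A## quoted later in this thread); this may not be exactly the tetrad in the paper the OP is reading, and the sample values of ##M##, ##a##, ##r##, ##\theta## are arbitrary:

```python
import math

# Sample Kerr parameters and Boyer-Lindquist point (arbitrary test values)
M, a = 1.0, 0.8
r, th = 3.0, math.pi / 3

Sigma = r**2 + (a * math.cos(th))**2
Delta = r**2 - 2 * M * r + a**2
s = math.sin(th)

# A commonly quoted orthonormal co-tetrad for Kerr; components are listed in
# the coordinate order (t, r, theta, phi). Note e0 carries the factor
# sqrt(Delta/Sigma), matching the coefficient A quoted in the thread.
e0 = (math.sqrt(Delta / Sigma), 0.0, 0.0, -a * s**2 * math.sqrt(Delta / Sigma))
e1 = (0.0, math.sqrt(Sigma / Delta), 0.0, 0.0)
e2 = (0.0, 0.0, math.sqrt(Sigma), 0.0)
e3 = (-a * s / math.sqrt(Sigma), 0.0, 0.0, (r**2 + a**2) * s / math.sqrt(Sigma))

def outer(u, v):
    """Tensor product of two one-forms, as a 4x4 array of components."""
    return [[ui * vj for vj in v] for ui in u]

# g_ab = -e0_a e0_b + e1_a e1_b + e2_a e2_b + e3_a e3_b
O0, O1, O2, O3 = outer(e0, e0), outer(e1, e1), outer(e2, e2), outer(e3, e3)
g = [[-O0[i][j] + O1[i][j] + O2[i][j] + O3[i][j] for j in range(4)]
     for i in range(4)]

# Standard Boyer-Lindquist components, for comparison
g_tt = -(1 - 2 * M * r / Sigma)
g_tp = -2 * M * a * r * s**2 / Sigma
g_pp = (r**2 + a**2 + 2 * M * a**2 * r * s**2 / Sigma) * s**2

print(abs(g[0][0] - g_tt), abs(g[0][3] - g_tp), abs(g[3][3] - g_pp))
```

All differences come out at floating-point roundoff level, so this particular factoring does reproduce every Kerr component, not just ##g_{tt}##.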
Thanks, respected @pervect.

pervect said: Your tetrad is just an orthonormal basis of one-forms. [...]

I hope I will reach the conclusion by this method.
I'm sorry to say that my metric is not symmetric!
Isn't there any other method to compute this tetrad?
pervect said: (I actually only verified the component for dt^2, but I think the others work out as well, but you should double check this).

I humbly request you to show me how you verified it.
I'll do a short, symbolic version.
##e^0## and ##e^3## are one-forms, as are ##dt## and ##d\phi##. I'm using index-free notation; you can imagine that everything has a subscript if you prefer abstract index notation. So my ##e^0## is your paper's ##e^{(0)}_a##, and my ##dt## is your paper's ##dt_a##.
I'm going to omit ##e^1## and ##e^2## for brevity; they can be handled in an even simpler manner.
Then
$$e^0 = A\,dt + B\,d\phi, \quad e^3 = C\,dt + D\,d\phi$$
I haven't written out the values of ##A, B, C, D##; I've left them symbolic. You can get them from the paper. But I'll write out the value of ##A## so you can compare:
$$A = \sqrt{\frac{\Delta}{\Sigma}}$$
Then
$$-e^0 \otimes e^0 + e^3 \otimes e^3 = -(A\,dt + B\,d\phi) \otimes (A\,dt + B\,d\phi) + (C\,dt + D\,d\phi) \otimes (C\,dt + D\,d\phi)$$
The tensor product ##\otimes## is distributive, though it doesn't commute. Of course, ##A, B, C, D## are just numeric expressions, so their multiplication does commute. Carrying this out and collecting terms, we get
$$(C^2 - A^2)\, dt \otimes dt + (C\,D - A\,B)(dt \otimes d\phi + d\phi \otimes dt) + (D^2 - B^2)\, d\phi \otimes d\phi$$
In matrix notation, this just means that
$$g_{tt} = C^2 - A^2, \quad g_{t\phi} = g_{\phi t} = C\,D - A\,B, \quad g_{\phi\phi} = D^2 - B^2$$
I've taken the liberty of using symbolic subscripts for the metric coefficients rather than numeric ones; hopefully it's clear what I mean.
You can see that the resulting matrix is symmetric because ##g_{t\phi} = g_{\phi t}##.
##e^1## and ##e^2##, which I omitted, just give you ##g_{rr}## and ##g_{\theta\theta}## in a very straightforward manner: you just square the appropriate expressions in the paper.
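The collected-terms expressions above can be sanity-checked with plain numbers. This is a minimal sketch using arbitrary stand-in values for the symbolic coefficients ##A, B, C, D## (not the actual Kerr values):

```python
# Arbitrary stand-in numbers for the symbolic coefficients A, B, C, D
A, B, C, D = 1.3, -0.7, 0.4, 2.1

# Components of e0 = A dt + B dphi and e3 = C dt + D dphi in the (t, phi) basis
e0 = (A, B)
e3 = (C, D)

# -e0 (x) e0 + e3 (x) e3, written out as a 2x2 array of components
g = [[-e0[i] * e0[j] + e3[i] * e3[j] for j in range(2)] for i in range(2)]

print(g[0][0], g[0][1], g[1][0], g[1][1])
```

The resulting matrix is symmetric (the two off-diagonal entries are equal) and matches ##C^2-A^2##, ##CD-AB##, ##D^2-B^2## entry by entry, as derived in the post.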
pervect said: I'll do a short, symbolic version. [...]

Superb! You have done a great job.
But one thing I want to make clear: I have to derive the tetrad from the Kerr metric. The method you use above seems to go the other way, using the tetrad to find ##g_{\theta\theta}## and the other components of the metric.
Unfortunately, the paper doesn't appear to say why it made that choice. It merely cites reference 16, a Phys. Rev. D paper that's behind a paywall. If you have access, look at that.
If you don't have access, it's worth noting that a tetrad at event ##(t,r,\theta,\phi)## is the basis for the natural local Minkowski frame of an observer whose four-velocity is parallel to ##e^{(0)}##; that is, ##u^a e_a^{(\mu)}## is ##-1## for ##\mu=0## and zero for all other ##\mu##. So if I were you, I'd write down that four-velocity and work out what it means. For example, is the observer undergoing proper acceleration? In what direction?
That is basically the reverse of the process for writing down a tetrad field. Usually you would say that you want the tetrads associated with a family of observers with certain properties (e.g., hovering at constant altitude, or freely infalling), which lets you write down your timelike tetrad vector. Then you pick an orientation for the three spacelike vectors, and you have your tetrad.
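This suggestion can be tried concretely: raise the index of ##e^{(0)}## with the inverse metric to get the observer's four-velocity, then confirm it is unit timelike. The sketch below assumes the time leg ##e^{(0)} = \sqrt{\Delta/\Sigma}\,(dt - a\sin^2\theta\, d\phi)## (consistent with the ##A=\sqrt{\Delta/\Sigma}## quoted earlier in the thread, though the paper's exact tetrad may differ) and arbitrary sample values; only the ##t##-##\phi## block matters since ##e^{(0)}## has no ##r## or ##\theta## components:

```python
import math

# Arbitrary sample Kerr parameters and Boyer-Lindquist point
M, a = 1.0, 0.8
r, th = 3.0, math.pi / 3
Sigma = r**2 + (a * math.cos(th))**2
Delta = r**2 - 2 * M * r + a**2
s = math.sin(th)

# t-phi block of the Boyer-Lindquist metric (r and theta parts are diagonal
# and play no role here)
g_tt = -(1 - 2 * M * r / Sigma)
g_tp = -2 * M * a * r * s**2 / Sigma
g_pp = (r**2 + a**2 + 2 * M * a**2 * r * s**2 / Sigma) * s**2
det = g_tt * g_pp - g_tp**2

# Inverse of the 2x2 block: g^{tt}, g^{t phi}, g^{phi phi}
i_tt, i_tp, i_pp = g_pp / det, -g_tp / det, g_tt / det

# Assumed tetrad time leg: e^(0) = sqrt(Delta/Sigma) (dt - a sin^2(th) dphi)
e0_t = math.sqrt(Delta / Sigma)
e0_p = -a * s**2 * math.sqrt(Delta / Sigma)

# Raise the index and flip the sign so that u^t > 0: u^a = -g^{ab} e^(0)_b
u_t = -(i_tt * e0_t + i_tp * e0_p)
u_p = -(i_tp * e0_t + i_pp * e0_p)

# The observer's normalization g_ab u^a u^b should be -1
norm = g_tt * u_t**2 + 2 * g_tp * u_t * u_p + g_pp * u_p**2
print(u_t, u_p, norm)
```

Since ##u^\phi \neq 0##, this particular observer orbits with the hole rather than hovering, which is exactly the kind of physical information the post suggests extracting.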
As far as the uniqueness issue goes, the posters who point out that there isn't a unique answer are correct. For instance, given a tetrad, any spatial rotation of its three spatial vectors yields another tetrad. This isn't necessarily all the degrees of freedom, just an example of why there isn't a unique tetrad.
Of course, there is an "easy" choice for ##e^1## and ##e^2## that makes the math simple, since ##g_{rr}## and ##g_{\theta\theta}## are already diagonal elements. So if we start with this easy choice for ##e^1## and ##e^2##, we only need to find ##e^0## and ##e^3##.
We can treat ##g_{ij}## as a quadratic form (the line element of the metric). Every quadratic form can be associated with a symmetric matrix, and vice versa. Given that we've already picked out the easy solution for ##e^1## and ##e^2##, we only have to deal with a quadratic form in the two variables ##dt## and ##d\phi##.
We know that quadratic forms can be diagonalized; this is essentially the eigenvalue/eigenvector problem. It's been too long, but some reading on eigenvalues, eigenvectors, and the principal axis transformation might help. Note, though, that we'll have one negative and one positive eigenvalue, so our matrix isn't positive definite, and anything restricted to positive definite matrices won't be applicable.
You might be able to manage it with just algebra, too.
If you write ##e^0 = dt + \alpha\, d\phi## and ##e^3 = d\phi + \beta\, dt##, then the condition ##g \otimes e^0 \otimes e^3 = 0## (i.e., that ##e^0## and ##e^3## are orthogonal) should give you a relation between ##\alpha## and ##\beta##, so you can find ##\beta## in terms of ##\alpha##.
Then you should be able to find the value of ##\alpha## that makes the cross term ##e^0 \otimes e^3## disappear.
Then you rescale ##e^0## and ##e^3## so they are unit length.
I haven't actually carried this out, but I think it should work.
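The recipe above (pick ##e^0##, fix ##\beta## by orthogonality, rescale to unit length) can be carried out numerically on the ##t##-##\phi## block. This is a sketch with arbitrary sample values; choosing ##\alpha = 0## is just one allowed option, since any boost of the resulting pair is an equally valid tetrad:

```python
import math

# Sample Kerr t-phi metric block at an arbitrary Boyer-Lindquist point
M, a = 1.0, 0.8
r, th = 3.0, math.pi / 3
Sigma = r**2 + (a * math.cos(th))**2
s = math.sin(th)
g_tt = -(1 - 2 * M * r / Sigma)
g_tp = -2 * M * a * r * s**2 / Sigma
g_pp = (r**2 + a**2 + 2 * M * a**2 * r * s**2 / Sigma) * s**2

# Inverse of the 2x2 block, used to take inner products of one-forms
det = g_tt * g_pp - g_tp**2
i_tt, i_tp, i_pp = g_pp / det, -g_tp / det, g_tt / det

def inner(u, v):
    """g^{ab} u_a v_b for one-forms u, v with components (t, phi)."""
    return i_tt * u[0] * v[0] + i_tp * (u[0] * v[1] + u[1] * v[0]) + i_pp * u[1] * v[1]

# Step 1: pick e0 = dt + alpha*dphi; alpha = 0 is one allowed choice
alpha = 0.0
e0 = (1.0, alpha)

# Step 2: e3 = beta*dt + dphi, with beta fixed by orthogonality g^{ab} e0_a e3_b = 0
beta = -(i_tp * e0[0] + i_pp * e0[1]) / (i_tt * e0[0] + i_tp * e0[1])
e3 = (beta, 1.0)

# Step 3: rescale to unit length (e0 is timelike, e3 spacelike)
e0 = tuple(x / math.sqrt(-inner(e0, e0)) for x in e0)
e3 = tuple(x / math.sqrt(inner(e3, e3)) for x in e3)

# Check: -e0 (x) e0 + e3 (x) e3 reproduces the t-phi block of the metric
rec = [[-e0[i] * e0[j] + e3[i] * e3[j] for j in range(2)] for i in range(2)]
print(rec[0][0] - g_tt, rec[0][1] - g_tp, rec[1][1] - g_pp)
```

In two dimensions, once the pair is orthonormal the reconstruction is automatic (a completeness relation), so the cross terms take care of themselves for any ##\alpha##; that freedom is exactly the boost freedom discussed earlier in the thread.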
I will try and then respond back.
pervect said: the condition that ##g \otimes e^0 \otimes e^3 = 0##

I know how to handle ##e^0 \otimes e^3##, but I don't know how ##g \otimes e^0 \otimes e^3 = 0## is solved.
Will you please tell me how this is solved?
Zuhaa Naz said: I know how to solve e0⊗e3 but don't know how g⊗e0⊗e3=0 is solved. [...]

##g \otimes e^0 \otimes e^3## is just index-free notation for ##g^{ab}\,e^0_a\,e^3_b## in abstract index notation.
##g^{ab}## is the inverse metric. The operator ##\otimes## here denotes the tensor product, not the dot product; I suspect you might be confusing the two.
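As a purely illustrative example of this contraction (toy numbers, not Kerr values), ##g^{ab}\,e^0_a\,e^3_b## is just a double sum over both indices, producing a single number rather than another tensor:

```python
# A toy symmetric 2x2 inverse metric and two one-forms (made-up numbers,
# chosen only to illustrate the contraction)
ginv = [[-2.0, 0.5],
        [0.5, 3.0]]
e0 = [1.0, 0.25]
e3 = [0.4, 1.0]

# g^{ab} e0_a e3_b: sum over both indices, so the result is a scalar
result = sum(ginv[a][b] * e0[a] * e3[b] for a in range(2) for b in range(2))
print(result)
```

Setting that scalar to zero is the orthogonality condition; with these toy numbers it happens to come out to 0.5, so this particular pair is not orthogonal.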
pervect said: ##g \otimes e^0 \otimes e^3## is just index-free notation for ##g^{ab}\,e^0_a\,e^3_b## in abstract index notation. [...]

Please note that our matrix is not diagonal; see the attached images. How do we solve this problem?
Ibix said: You should get a single number; there are no free indices in this expression. It's an inner [...]

Got it. We will expand it.
Now, ##e^0## and ##e^3## are both one-forms, meaning they have subscripts: ##e^0_a## and ##e^3_b##. So we need to contract them with the version of ##g## that has superscripts: ##g^{ab}\,e^0_a\,e^3_b##. We can consider this to be the contraction of ##g^{ab}## with ##e^0_a \otimes e^3_b##.
But the usual expression for the metric is ##g_{ab}##; it's subscripted, and ##g^{ab}## is the inverse metric. This is somewhat of a pain. Perhaps there is a better way to tackle the problem. From a theoretical point of view, eigenvectors can be used to diagonalize a matrix. Google finds a discussion in matrix (not tensor) notation at http://fourier.eng.hmc.edu/e161/lectures/algebra/node6.html and https://www.maths.manchester.ac.uk/~peter/MATH10212/notes10.pdf
The second has perhaps the cleanest, tersest discussion, though it's still in matrix notation, not tensor notation:
Definition. A square matrix A is orthogonally diagonalizable if there exists an orthogonal matrix Q such that ##Q^TAQ=D## is a diagonal matrix.
Remarks. Since ##Q^T=Q^{-1}## for orthogonal Q, the equality ##Q^TAQ=D## is the same as ##Q^{-1}AQ=D##, so A∼D; this is a special case of diagonalization: the diagonal entries of D are eigenvalues of A, and the columns of Q are the corresponding eigenvectors. The only difference is the additional requirement that Q be orthogonal, which is equivalent to the fact that those eigenvectors, the columns of Q, form an orthonormal basis of ##\mathbb{R}^n##.
The other proposal I made was a bit offhand, and given that we need to invert the metric (which I didn't realize when I had the thought), it's not necessarily so attractive. The eigenvector approach may not be all that attractive either, but it has a well-known theoretical basis for solving the diagonalization problem. My goal here is just to find a diagonalization of the matrix in a convenient manner, and it's convenient to do this with an orthogonal matrix Q.
One possible complication is what happens if one has repeated eigenvalues.
As previously discussed, there is not just one way to diagonalize a matrix; the exact number of degrees of freedom isn't entirely intuitively clear to me at the moment. The approach using orthogonal diagonalization is a convenient and theoretically general way to find a diagonalization, though. There may be an easier way to go about it, too. Of the easy solutions, my favorite is looking up the answer and checking that it works, which was my first impulse.
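The ##Q^TAQ = D## recipe can be sketched for a symmetric ##2\times 2## using the closed-form eigenvalues. Note the caveat raised above: the columns of ##Q## are Euclidean-orthonormal, so to turn the eigenvectors into tetrad legs you would still rescale them to Lorentzian unit norm (one timelike, one spacelike). The matrix entries below are made up:

```python
import math

# Made-up symmetric 2x2 matrix standing in for the t-phi block
A = [[-0.5, 0.8],
     [0.8, 2.0]]

# Closed-form eigenvalues of a symmetric 2x2: lam = (tr +/- sqrt(tr^2 - 4 det))/2
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
disc = math.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr - disc) / 2, (tr + disc) / 2

def unit_eigvec(lam):
    # For a 2x2, (A - lam I) v = 0 is solved by v = (A01, lam - A00), up to scale
    v = (A[0][1], lam - A[0][0])
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

# Columns of Q are the unit eigenvectors; for a symmetric matrix with distinct
# eigenvalues they are automatically orthogonal
q1, q2 = unit_eigvec(lam1), unit_eigvec(lam2)
Q = [[q1[0], q2[0]], [q1[1], q2[1]]]

# D = Q^T A Q should be diagonal, with the eigenvalues on the diagonal
QT_A = [[sum(Q[k][i] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
D = [[sum(QT_A[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(D)
```

As anticipated in the post, one eigenvalue comes out negative and one positive, so the matrix is indefinite rather than positive definite.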