
#### Zuhaa Naz

How to find the tetrad of this metric?

the tetrad given is this one

I'm new to General Relativity.


How to find the tetrad of this metric?

There are an infinite number of possible tetrads for any metric, not just one.

Where? What reference are you getting this from?

Zuhaa Naz
How to find the tetrad of this metric?
As Peter says, there are infinitely many possible tetrads for any given metric. A tetrad at a point is just four mutually orthogonal unit vectors at that point, and you can always rotate or Lorentz boost any such set to make another.

You have a specific tetrad field (i.e., a recipe for writing down a tetrad at any event such that nearby events have similarly oriented tetrads). Somebody must have made a choice on how to go about that, picking one field from the infinite choices, and it's that context that's needed.
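The rotate-or-boost point can be checked numerically. A minimal sketch (flat space, made-up boost speed; the names `eta`, `E`, `boost` are mine) showing that a Lorentz boost of an orthonormal tetrad is again an orthonormal tetrad:

```python
import numpy as np

# Minkowski metric, signature (-,+,+,+); rows of E are the tetrad vectors.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
E = np.eye(4)                 # the trivial tetrad: the coordinate axes

# A boost along x with an arbitrary speed v (illustrative value).
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
boost = np.eye(4)
boost[0, 0] = boost[1, 1] = gamma
boost[0, 1] = boost[1, 0] = -gamma * v

E2 = E @ boost.T              # each tetrad vector boosted

# Orthonormality means E eta E^T = eta; it survives the boost.
assert np.allclose(E2 @ eta @ E2.T, eta)
```

Any product of boosts and rotations works the same way, which is one way to see why there are infinitely many tetrads.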

Zuhaa Naz

I understand that point.

Where? What reference are you getting this from?

Actually I found it in a research paper, and I'm interested in deriving this set of four orthonormal vectors from the given Kerr metric.

Actually I found it in a research paper.
Which one? We need a link or other reference.

Zuhaa Naz
Somebody decided what tetrads they wanted. Then they wrote down an expression for those four tetrad vectors in terms of the coordinate basis vectors. I don't think I can be more specific than that without knowing what the reasoning behind that selection of tetrad was.
Actually I found it in a research paper.
Provide a link to the paper and we might be able to help.

Zuhaa Naz
Which one? We need a link or other reference.
This one
On page 8

#### Attachments

• 1710.09880.pdf
I also found relevant material in the book "Relativity Demystified: A Self-Teaching Guide", pages 100-101.

But I'm confused about how to apply it to get a tetrad comprising four orthonormal vectors.

How to find the tetrad of this metric?

the tetrad given is this one


I'm new to General Relativity.

In index-free notation, the four one-forms are ##e^0, e^1, e^2, e^3##. Note that here the superscripts are not tensor indices; they are selectors, so that ##e^0## is the first member of the set, ##e^1## is the second member, etc. In index-free notation we don't need the subscript.

The components of ##e^0## are written by the author of the paper you reference as ##e^{(0)}_a##, and similarly for the other members of the tetrad.

Note that ##dt##, ##d\theta##, ##d\phi##, and ##dr## as I write them are also one-forms. The author of the paper writes the components, so where I write ##dt## he writes ##dt_a##, and similarly for the other one-forms.

The point, then, is that the metric is just

$$-e^0\otimes e^0+ e^1 \otimes e^1 + e^2 \otimes e^2 + e^3 \otimes e^3$$

(I actually only verified the ##dt^2## component, but I think the others work out as well; you should double-check this.)

Deriving this is basically just a matter of factoring the line element with algebra. It may help to re-write the original Kerr metric in the same index-free notation that I use. If you go this route, don't forget that the metric must be symmetric: ##dt \otimes d\phi## is not the same as ##d\phi \otimes dt##, so a line element term written as ##(\ldots)\,dt\,d\phi## becomes ##\frac{1}{2}(\ldots)(dt \otimes d\phi + d\phi \otimes dt)##.
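The factoring idea is easiest to see in a diagonal example. A minimal sympy sketch, using a made-up Schwarzschild-like line element ##ds^2 = -f\,dt^2 + dr^2/f## rather than Kerr (the names `f`, `e0`, `e1` are mine, not the paper's):

```python
import sympy as sp

r = sp.symbols('r')
f = sp.Function('f')(r)        # generic metric function f(r)

# Read the tetrad off by factoring each term of the line element:
# components of the one-forms e^0 = sqrt(f) dt and e^1 = dr/sqrt(f)
# in the (dt, dr) coordinate basis.
e0 = sp.Matrix([[sp.sqrt(f), 0]])
e1 = sp.Matrix([[0, 1 / sp.sqrt(f)]])

# Reassemble the metric as -e0 (x) e0 + e1 (x) e1 (outer products).
g = -e0.T * e0 + e1.T * e1
print(g)                       # Matrix([[-f(r), 0], [0, 1/f(r)]])
```

The off-diagonal ##dt\,d\phi## term in Kerr is what makes the same factoring exercise harder there.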

Zuhaa Naz

Thanks, respected @pervect. I hope I will reach the conclusion by this method.

Respected @pervect,
I'm sorry to say that my metric is not symmetric!
Is there no other method to compute this tetrad?

(I actually only verified the ##dt^2## component, but I think the others work out as well; you should double-check this.)
Could you please show me how you verified it?

I'll do a short, symbolic version.

##e0## and ##e3## are one-forms, as are ##dt## and ##d\phi##. I'm using index-free notation; you can imagine that everything has a subscript if you prefer abstract index notation. So my ##e0## is your paper's ##e^{(0)}_a##, and my ##dt## is your paper's ##dt_a##.

I'm going to omit ##e1## and ##e2## for brevity; they can be handled in an even simpler manner.

Then

$$e0 = A\,dt + B\,d\phi \quad e3 = C\,dt + D\,d\phi$$

I haven't written out the values of ##A, B, C, D##; I've left them symbolic. You can get them from the paper, but I'll write out the value of ##A## so you can compare.

$$A = \sqrt{\frac{\Delta}{\Sigma}}$$

Then
$$-e0 \otimes e0 + e3 \otimes e3 = -(A\,dt + B\,d\phi ) \otimes (A\,dt + B\,d\phi ) + (C\,dt + D\,d\phi) \otimes (C\,dt + D\,d\phi)$$

The tensor product ##\otimes## is distributive, though it doesn't commute. Of course, ##A, B, C##, and ##D## are just numeric expressions, so their multiplication does commute. Carrying this out and collecting terms, we get:

$$(C^2-A^2)\, dt \otimes dt + (C\,D - A\,B) ( dt \otimes d\phi + d\phi \otimes dt) + (D^2- B^2) d\phi \otimes d\phi$$

In matrix notation, this just means that
$$g_{tt} = C^2 - A^2 \quad g_{t\phi} = g_{\phi t} = C\,D - A\,B \quad g_{\phi\phi} = D^2 - B^2$$

I've taken the liberty of using symbolic subscripts for the metric coefficients rather than numeric ones; hopefully it's clear what I mean.

And you can see that the resulting matrix is symmetric, because ##g_{t\phi} = g_{\phi t}##.

##e1## and ##e2##, which I omitted, just give you ##g_{rr}## and ##g_{\theta\theta}## in a very straightforward manner; you just have to square the appropriate expressions in the paper.
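The symbolic expansion above can be checked mechanically. A small sympy sketch with ##A, B, C, D## left symbolic (placeholders, not the paper's actual values):

```python
import sympy as sp

A, B, C, D = sp.symbols('A B C D')

# Components of e0 = A dt + B dphi and e3 = C dt + D dphi
# in the (dt, dphi) basis.
e0 = sp.Matrix([[A, B]])
e3 = sp.Matrix([[C, D]])

# -e0 (x) e0 + e3 (x) e3 as a 2x2 matrix of metric components.
g = -e0.T * e0 + e3.T * e3

assert g[0, 0] == C**2 - A**2          # g_tt
assert g[0, 1] == C*D - A*B            # g_{t phi}
assert g[1, 1] == D**2 - B**2          # g_{phi phi}
assert g == g.T                        # symmetric, as a metric must be
```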

Superb, you have done a great job. But one thing I want to clarify: I have to derive the tetrad using the Kerr metric. The method you used above seems to go the other way - you used the tetrad to find ##g_{\theta\theta}## and the other components of the metric.

The problem is that the paper is wrong to say "the" tetrad. You can just pick any four orthonormal vectors and there you have a tetrad. And there are infinitely many ways to do that.

Unfortunately, the paper doesn't appear to say why it made that choice. It merely cites reference 16, which is a Phys Rev D paper that's behind a paywall. If you have access, look at that.

If you don't have access, it's worth noting that a tetrad at event ##t,r,\theta,\phi## is the basis for the natural local Minkowski frame of an observer whose four-velocity is parallel to ##e^{(0)}## - i.e., ##u^ae_a^{(\mu)}## is -1 for ##\mu=0## and zero for all other ##\mu##. So if I were you I'd write down that four-velocity and work out what it means. For example, is the observer undergoing proper acceleration? In what direction?

That is basically the reverse of the process for writing a tetrad field. Usually you would say that you want the tetrads associated with a family of observers with certain properties (e.g., hovering at constant altitude, or freely infalling), which would let you write your timelike tetrad vector. Then you pick an orientation for the spacelike vectors, and then you've got your tetrad.

Yes, I started with the tetrad and outlined a way to confirm that the tetrad given in your paper generates the Kerr metric - though I didn't grind through all the algebra to actually prove it myself, I just demonstrated how it was possible in principle. This is easier than going the other way (starting with the metric and finding a tetrad).

As far as the uniqueness issue goes, the posters who point out that there isn't a unique answer are correct. For instance, given a tetrad, any spatial rotation of the three spatial vectors of the tetrad gives another tetrad. This isn't all the degrees of freedom, necessarily, just an example of why there isn't a unique tetrad.

Of course, there is an "easy" choice for ##e1## and ##e2## that makes the math simple, since they correspond to diagonal elements. So if we start with this easy choice for ##e1## and ##e2##, we only need to find ##e0## and ##e3##.

We can treat ##g_{ij}## as a quadratic form, given by the line element of the metric. Every quadratic form can be associated with a symmetric matrix, and vice versa. Given that we've already picked out the easy solution for ##e1## and ##e2##, we only have to deal with a binary quadratic form - a quadratic form in two variables, namely ##dt## and ##d\phi##.

We know that quadratic forms can be diagonalized; I believe this is essentially the eigenvalue/eigenvector problem. It's been too long, but some reading on eigenvalues, eigenvectors, and the principal axis transformation might help. Though I'm pretty sure we'll have one negative and one positive eigenvalue, so our matrix isn't positive definite, and anything that's restricted to positive definite matrices won't be applicable.

You might be able to manage it with just algebra, too.
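As a concrete illustration of the eigenvalue route, here is a sympy sketch on a made-up symmetric 2x2 block with one negative and one positive eigenvalue (the numbers are arbitrary, not Kerr components):

```python
import sympy as sp

g_block = sp.Matrix([[-2, 1],
                     [ 1, 3]])

# For a symmetric matrix, the eigenvectors can be chosen orthonormal,
# so P is orthogonal and P.T * g_block * P = D is diagonal.
P, D = g_block.diagonalize(normalize=True)

# One negative and one positive eigenvalue: the form is indefinite.
assert sum(1 for v in g_block.eigenvals() if v.is_negative) == 1

# Check the diagonalization numerically.
residual = (P.T * g_block * P - D).evalf()
assert all(abs(x) < 1e-9 for x in residual)
```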

I was thinking about this more, and I think I have a plan for how you might solve the original, forward problem.

If you write

##e0 = dt + \alpha\, d\phi## and ##e3 = d\phi + \beta\, dt##

the condition ##g \otimes e0 \otimes e3 = 0## (i.e., that ##e0## and ##e3## are orthogonal) should allow you to find a relation between ##\alpha## and ##\beta##, so you could find ##\beta## in terms of ##\alpha##.

Then you should be able to find the value of ##\alpha## that makes the cross terms, ##e0 \otimes e3##, disappear.

Then you rescale e0 and e3 so they are unit length.

I haven't actually carried this out, but I think it should work.
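The orthogonality step of this plan can be sketched in sympy with the metric components left symbolic (`gtt`, `gtp`, `gpp` are placeholders, not the Kerr expressions):

```python
import sympy as sp

gtt, gtp, gpp = sp.symbols('g_tt g_tphi g_phiphi')
alpha, beta = sp.symbols('alpha beta')

g = sp.Matrix([[gtt, gtp], [gtp, gpp]])
g_inv = g.inv()                    # the inverse metric g^{ab}

e0 = sp.Matrix([1, alpha])         # components of e0 = dt + alpha dphi
e3 = sp.Matrix([beta, 1])          # components of e3 = dphi + beta dt

# Orthogonality g^{ab} e0_a e3_b = 0, solved for beta in terms of alpha.
inner = (e0.T * g_inv * e3)[0, 0]
beta_sol = sp.solve(sp.Eq(inner, 0), beta)[0]

# Substituting back, the inner product indeed vanishes.
assert sp.simplify(inner.subs(beta, beta_sol)) == 0
```

The remaining steps (choosing ##\alpha## to kill the cross term, then rescaling ##e0## and ##e3## to unit norm) follow the same pattern.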

Thank you a lot, researchers. I will try it and then get back to you.

##g \otimes e0 \otimes e3 = 0##
I know how to compute ##e0 \otimes e3##, but I don't know how ##g \otimes e0 \otimes e3 = 0## is solved. Will you please tell me how this is solved?


##g \otimes e0 \otimes e3## is just index-free notation for ##g^{ab}e0_a\,e3_b## in abstract index notation.

##g^{ab}## is the inverse metric. The operator ##\otimes## is the tensor product, not the dot product; I suspect you might be confusing the two.

Zuhaa Naz

Please see that our matrix is not diagonal - see the attached images. How do we solve this problem?

#### Attachments

• IMG_20190317_163234.jpg
• IMG_20190317_163336.jpg
You should get a single number - there are no free indices in this expression. It's an inner product.

Zuhaa Naz
Got it. We will expand it.

You want to contract ##g## with ##e0 \otimes e3## to get a number, as Ibix said. The index-free notation doesn't handle contraction well - in the previous cases where I used this notation, the indices were mostly just distractions, because we didn't do any tensor contractions. However, in this case, we do have to contract the tensors. Rather than switch notations, I stuck with the index-free notation, but it isn't really the best choice for this problem. So just switch back to index notation, which is designed to deal smoothly with contractions.

Now, ##e0## and ##e3## are both one-forms, meaning they have subscripts, ##e0_a## and ##e3_b##. So we need to contract them with the version of ##g## that has superscripts: ##g^{ab} e0_a e3_b##. We can consider this to be the contraction of ##g^{ab}## with ##e0_a \otimes e3_b##.

But the usual expression for the metric is ##g_{ab}##; it's subscripted, and ##g^{ab}## is the inverse metric, so this is somewhat of a pain :(. Perhaps there is some better way to tackle the problem. From a theoretical point of view, eigenvectors can be used to diagonalize a matrix. Google finds a discussion in matrix (not tensor) notation at http://fourier.eng.hmc.edu/e161/lectures/algebra/node6.html and https://www.maths.manchester.ac.uk/~peter/MATH10212/notes10.pdf

The second has perhaps the cleanest, tersest discussion, though it's still in matrix notation, not tensor notation.

Definition. A square matrix ##A## is orthogonally diagonalizable if there exists an orthogonal matrix ##Q## such that ##Q^TAQ=D## is a diagonal matrix.

Remarks. Since ##Q^T=Q^{-1}## for orthogonal ##Q##, the equality ##Q^TAQ=D## is the same as ##Q^{-1}AQ=D##, so ##A \sim D##; this is a special case of diagonalization: the diagonal entries of ##D## are eigenvalues of ##A##, and the columns of ##Q## are corresponding eigenvectors. The only difference is the additional requirement that ##Q## be orthogonal, which is equivalent to the fact that those eigenvectors - the columns of ##Q## - form an orthonormal basis of ##R^n##.
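The quoted definition is easy to demonstrate numerically. A short numpy sketch on a made-up symmetric matrix (`np.linalg.eigh` is the standard routine for symmetric eigenproblems):

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [ 1.0, 3.0]])        # arbitrary symmetric, indefinite matrix

eigvals, Q = np.linalg.eigh(A)     # columns of Q: orthonormal eigenvectors
D = np.diag(eigvals)

assert np.allclose(Q.T @ Q, np.eye(2))   # Q is orthogonal: Q^T = Q^{-1}
assert np.allclose(Q.T @ A @ Q, D)       # Q^T A Q = D, as in the definition
print(eigvals)                           # one negative, one positive
```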

The other proposal I made was a bit of an offhand suggestion - and given that we need to invert the metric, which I didn't realize when I had the thought, it's not necessarily so attractive. The eigenvector approach may not be all that attractive either, but it's got a well-known theoretical basis for solving the diagonalization problem. My goal here is just to find a diagonalization of a matrix in a convenient manner, and it's convenient to do this with an orthogonal matrix ##Q##.

One possible complication is what happens if one has repeated eigenvalues.

As previously discussed, there is not just one way to diagonalize a matrix - the exact number of degrees of freedom isn't entirely intuitively clear to me at the moment. The approach using orthogonal diagonalization is a convenient and theoretically general way to diagonalize a matrix, though. There may be some easier way to go about it, too. Of the easy solutions, my favorite is looking up the answer and checking that it works, which was my first impulse.