Writing Metric in Matrix Form: Method?

ChrisJ
In ##c=1## units, I was told in my SR courses that, for example, the Minkowski metric ##ds^2 = -dt^2 + dx^2 + dy^2 + dz^2## can be written in matrix form as below:

$$
\eta =
\begin{pmatrix}
-1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$

It was just kind of given to me, but now that I am trying to learn GR and practise with weirder and more unusual metrics, I find that I do not know a formalism for turning a given metric of the form ##ds^2 = \ldots## into a matrix form ##g = \ldots##.

Am I correct in thinking that the following metric ##ds^2 = \frac{1}{y^2} dx^2 + \frac{1}{y^2}dy^2 ## is just simply..

$$
g =
\begin{pmatrix}
y^{-2} & 0 \\
0 & y^{-2}
\end{pmatrix}
$$

If so, what about weirder ones with cross terms (i.e. values in the matrix that are not just along the diagonal)?

Is there a standard formalism for doing this? I have tried searching, but I am not sure I am using the correct terms to get the results I want, and when I do find something it uses a lot of notation that I am unfamiliar with.
 
ChrisJ said:
Am I correct in thinking that the following metric ##ds^2 = \frac{1}{y^2} dx^2 + \frac{1}{y^2}dy^2 ## is just simply..

$$
g =
\begin{pmatrix}
y^{-2} & 0 \\
0 & y^{-2}
\end{pmatrix}
$$

Yes. What you are doing is really writing a matrix representation of the metric.

If so, what about weirder ones with cross terms (i.e. values in the matrix that are not just along the diagonal)?

Is there a standard formalism for doing this? I have tried searching, but I am not sure I am using the correct terms to get the results I want, and when I do find something it uses a lot of notation that I am unfamiliar with.

In general, the line element is given by
$$
ds^2 = g_{ab} dx^a dx^b.
$$
If you have the line element, just write out the sum and start identifying components (taking into account that the metric is symmetric, so that ##g_{ab} = g_{ba}##). The matrix representation of the metric has the metric components ##g_{ab}## as its elements.
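
To make that identification mechanical, here is a minimal sketch (assuming SymPy, with plain symbols `dx`, `dy` standing in for the coordinate differentials) that reads the components off a line element via ##g_{ab} = \tfrac{1}{2}\,\partial^2(ds^2)/\partial(dx^a)\,\partial(dx^b)##, applied to the metric from the question:

```python
import sympy as sp

# Coordinates and stand-in symbols for the coordinate differentials.
x, y = sp.symbols('x y')
dx, dy = sp.symbols('dx dy')
diffs = [dx, dy]

# Line element from the question: ds^2 = dx^2/y^2 + dy^2/y^2
ds2 = dx**2 / y**2 + dy**2 / y**2

# Since ds^2 = g_ab dx^a dx^b with g symmetric, each component is
# g_ab = (1/2) * d^2(ds^2) / d(dx^a) d(dx^b).
g = sp.Matrix(2, 2, lambda a, b: sp.Rational(1, 2) * sp.diff(ds2, diffs[a], diffs[b]))

print(g)  # Matrix([[y**(-2), 0], [0, y**(-2)]])
```

The factor of ##1/2## is exactly the symmetry bookkeeping: a cross term ##2A\,dx\,dy## in the line element contributes ##A## to each of the two off-diagonal slots.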

Edit: For example, consider the coordinates ##\xi = x-t## and ##\eta = x+t## in 2D Minkowski space (these are called light-cone coordinates). You would obtain ##x = (\xi + \eta)/2## and ##t = (\eta-\xi)/2##, and therefore
$$
ds^2 = -dt^2 + dx^2 = \frac{1}{4}[(d\xi + d\eta)^2 - (d\eta - d\xi)^2] = d\xi \,d\eta
= g_{\xi\xi} d\xi^2 + 2 g_{\xi \eta} d\xi\, d\eta + g_{\eta\eta} d\eta^2.
$$
Identification directly gives ##g_{\xi\eta} = 1/2## and ##g_{\xi\xi} = g_{\eta\eta} = 0##.
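
The identification can also be checked symbolically; a small sketch (assuming SymPy, with `dxi` and `deta` standing in for ##d\xi## and ##d\eta##):

```python
import sympy as sp

dxi, deta = sp.symbols('dxi deta')  # stand-ins for d(xi) and d(eta)

# From xi = x - t and eta = x + t:  dx = (dxi + deta)/2,  dt = (deta - dxi)/2
dx = (dxi + deta) / 2
dt = (deta - dxi) / 2

ds2 = sp.expand(-dt**2 + dx**2)
print(ds2)  # deta*dxi, i.e. ds^2 = d(xi) d(eta), so 2*g_{xi,eta} = 1 and g_{xi,eta} = 1/2
```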
 
In tensor notation, ##ds^2=g_{ij}dx^idx^j##. If you want to use matrix notation for it (careful! Tensors are not matrices and the rules for multiplication are not the same), it's ##ds^2=\vec{dx}^T\mathbf{g}\vec{dx}##.

So your example is correct. Essentially, the coefficient of ##dx^i dx^j## goes in the ##(i,j)## position of the matrix representation of the tensor. The only trap for the unwary is that ##dx^i dx^j = dx^j dx^i##, so for off-diagonal elements, if you have ##ds^2 = \ldots + 2A\,dx^i dx^j + \ldots## then you put ##A## in position ##(i,j)## and also in position ##(j,i)##.
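
For concreteness, expanding ##\vec{dx}^T\mathbf{g}\vec{dx}## in two dimensions with a symmetric ##\mathbf{g}## gives
$$
ds^2 =
\begin{pmatrix} dx^1 & dx^2 \end{pmatrix}
\begin{pmatrix} g_{11} & g_{12} \\ g_{12} & g_{22} \end{pmatrix}
\begin{pmatrix} dx^1 \\ dx^2 \end{pmatrix}
= g_{11}\,(dx^1)^2 + 2 g_{12}\, dx^1 dx^2 + g_{22}\,(dx^2)^2,
$$
which is where the factor of two in front of the off-diagonal coefficient comes from.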
 
Ibix said:
In tensor notation, ##ds^2=g_{ij}dx^idx^j##. If you want to use matrix notation for it (careful! Tensors are not matrices and the rules for multiplication are not the same), it's ##ds^2=\vec{dx}^T\mathbf{g}\vec{dx}##.

So your example is correct. Essentially, the coefficient of ##dx^i dx^j## goes in the ##(i,j)## position of the matrix representation of the tensor. The only trap for the unwary is that ##dx^i dx^j = dx^j dx^i##, so for off-diagonal elements, if you have ##ds^2 = \ldots + 2A\,dx^i dx^j + \ldots## then you put ##A## in position ##(i,j)## and also in position ##(j,i)##.

OK, thanks both.

So, if I am understanding you both correctly, something like ##ds^2 = -x\,dv^2 + 2\,dv\,dx## would be

$$
g = \begin{pmatrix} -x & 1 \\ 1 & 0 \end{pmatrix}
$$
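
As a quick check of this identification, a minimal SymPy sketch (with `dv` and `dX` standing in for the differentials ##dv## and ##dx##) that expands ##\vec{dx}^T\mathbf{g}\vec{dx}## for this ##\mathbf{g}##:

```python
import sympy as sp

x = sp.symbols('x')
dv, dX = sp.symbols('dv dX')  # stand-ins for the differentials dv and dx

g = sp.Matrix([[-x, 1], [1, 0]])
d = sp.Matrix([dv, dX])

ds2 = sp.expand((d.T * g * d)[0])
print(ds2)  # recovers -x*dv**2 + 2*dv*dX, i.e. ds^2 = -x dv^2 + 2 dv dx
```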
 
Yes.
 