Evaluating metric tensor in a primed coordinate system

In summary, the transformation rule ##T' = R T R^{-1}## carries over from the inertia tensor to other rank-two tensors only when one index is covariant and one contravariant, which is why applying it naively to the 2D polar metric does not produce the expected identity matrix in Cartesian coordinates. Since both indices of the metric are covariant, the correct matrix form is ##g' = (R^{-1})^T g R^{-1}##; equivalently, raising one index first turns the metric into the Kronecker delta, for which ##R I R^{-1} = I## holds trivially.
  • #1
vibhuav
I am trying to learn GR. In two of the books on tensors, there is an example of evaluating the inertia tensor in a primed coordinate system (for example, a rotated one) from that in an unprimed coordinate system using the equation ##I' = R I R^{-1}##, where ##R## is the transformation matrix, ##R^{-1}## is its inverse, and ##I## is the inertia matrix.
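For concreteness, that rule can be checked numerically (a sketch, not from the books in question; the numbers are made up for illustration):

```python
# Sketch: transforming an inertia tensor under a rotation, I' = R I R^{-1}.
import numpy as np

theta = np.pi / 6  # rotate by 30 degrees about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

I = np.diag([1.0, 2.0, 3.0])  # principal-axis inertia tensor (made-up values)

I_prime = R @ I @ np.linalg.inv(R)

# A rotation preserves the principal moments (eigenvalues) of I.
print(np.sort(np.linalg.eigvalsh(I_prime)))
```

The transformed components change, but the eigenvalues do not, which is one quick sanity check that the transformation is doing what it should.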

Is this method of transformation of (inertia) tensor from one coordinate system to another applicable for all tensors? In particular, can I use this method to evaluate the metric tensor in the primed coordinate system, given the metric tensor in the unprimed system and the transformation matrix?

Assuming it is applicable, I attempted to evaluate the metric tensor in the Cartesian coordinate system from the 2D polar system (using ##g'(Cart) = T g(polar) T^{-1}##, where ##T## is the transformation matrix from polar to Cartesian), and was expecting the identity matrix, with diagonal elements 1 and all others 0.

I am evaluating the metric tensor as follows:
\begin{equation}
g'(Cart) =
\begin{bmatrix}
\cos\theta & -r\sin\theta\\
\sin\theta & r\cos\theta \end{bmatrix} \times
\begin{bmatrix}
1 & 0\\
0 & r^2 \end{bmatrix} \times
\begin{bmatrix}
\cos\theta & \sin\theta\\
-\frac{\sin\theta}{r} & \frac{\cos\theta}{r}\end{bmatrix}
\end{equation}
Instead of the identity matrix for the metric tensor in Cartesian coordinates, I ended up with:
$$\begin{bmatrix}\cos^2\theta + r^2 \sin^2\theta & \cos\theta\sin\theta - r^2 \cos\theta\sin\theta\\
\cos\theta\sin\theta - r^2 \cos\theta\sin\theta & \sin^2\theta + r^2 \cos^2\theta \end{bmatrix}$$
Almost there, but not quite; the ##r^2## is messing it up.
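The product above can be reproduced symbolically (a sketch assuming sympy is available; `T` below is the matrix written out above):

```python
# Sketch: reproducing the product above to confirm that T g T^{-1}
# does not give the identity matrix.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
T = sp.Matrix([[sp.cos(th), -r*sp.sin(th)],
               [sp.sin(th),  r*sp.cos(th)]])
g_polar = sp.diag(1, r**2)

result = sp.simplify(T * g_polar * T.inv())
# The (0, 0) entry is cos^2(theta) + r^2 sin^2(theta), not 1.
print(result)
```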

What am I missing? Since the Euclidean space is flat, the Cartesian coordinate system has an identity matrix as a metric tensor, right?
 
  • #2
vibhuav said:
Is this method of transformation of (inertia) tensor from one coordinate system to another applicable for all tensors?

Yes.

vibhuav said:
I am evaluating the metric tensor as follows

It looks like you are multiplying the matrices left to right. Try multiplying them right to left. (Note that right to left multiplication is the usual convention.)

vibhuav said:
Since the Euclidean space is flat, the Cartesian coordinate system has an identity matrix as a metric tensor, right?

Yes.
 
  • #3
PeterDonis said:
It looks like you are multiplying the matrices left to right. Try multiplying them right to left. (Note that right to left multiplication is the usual convention.)

Actually, the grouping (left first vs. right first) can't matter because matrix multiplication is associative. But I was able to get the right answer (the identity matrix) by first multiplying the right two factors, then multiplying the result by the left factor. So I'm not sure what is going wrong by doing it the other way.
 
  • #4
Shouldn't that be ##R^T##, not ##R^{-1}##? Because if we expect that ##R g R^{-1} = I##, then it follows that ##g = R^{-1} I R## and hence ##g = I##.
 
  • #5
First of all, a tensor is not a matrix or vice versa. A rank two tensor can be represented by one, but it really is a different object.

The transformation rule you quote is valid only for rank-two tensors with one covariant and one contravariant index. In the case of the inertia tensor, you are likely using Cartesian coordinates, and then index placement really does not matter. Hence, your
vibhuav said:
for example, a rotated one
is not an example, but the special case in which the rule holds regardless of index placement. In the case where you have two contravariant indices, the matrix representation will instead be of the form
$$
T' = R T R^T,
$$
where ##R^T## is the transpose of ##R##, not its inverse. Note that, in the case of rotations, ##R^T = R^{-1}## and the transformation rule indeed is the same. For the case of two covariant indices, you would instead get
$$
T' = (R^{-1})^T T R^{-1}.
$$
These relations follow directly from the more illustrative
$$
\newcommand{\dd}[2]{\frac{\partial #1}{\partial #2}}
T'^{ab} = \dd{x'^a}{x^c} \dd{x'^b}{x^d} T^{cd} \quad \mbox{and}
\quad T'_{ab} = \dd{x^c}{x'^a} \dd{x^d}{x'^b} T_{cd}.
$$
Note that the components of ##R## are ##\partial x'^a/\partial x^c## and those of ##R^{-1}## are ##\partial x^c/\partial x'^a##.

If you want to use the transformation rule ##R G R^{-1}## for the metric, then you need to raise one of the covariant indices, which turns the metric into the Kronecker delta, represented by the identity matrix ##I##. You would then obtain
$$
R I R^{-1} = RR^{-1} = I,
$$
which is correct as the Kronecker delta is represented by the identity matrix in all coordinate systems. For your metric tensor transformation, you would obtain (both indices are covariant)
$$
G' = \begin{pmatrix}
\cos\theta & \sin\theta\\
-r\sin\theta & r\cos\theta
\end{pmatrix}^{-1}\begin{pmatrix}
1 & 0 \\ 0 & r^2\end{pmatrix}
\begin{pmatrix}
\cos\theta & -r \sin\theta\\
\sin\theta & r \cos\theta
\end{pmatrix}^{-1}
=
\begin{pmatrix}
\cos\theta & -\frac 1r \sin\theta\\
\sin\theta & \frac 1r \cos\theta
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\ 0 & r^2\end{pmatrix}
\begin{pmatrix}
\cos\theta & \sin\theta\\
-\frac 1r \sin\theta & \frac 1r \cos\theta
\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
$$

Edit: Clearly I started writing this before the previous posts, so in no way should it be taken as a critique of those. Personally, I try to stay away from matrix representations as much as possible, as the actual transformation rules tend to get lost. Only use them if you are working in Cartesian coordinates (i.e., you are only interested in rotations) or know precisely what you are doing.
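The matrix computation above can be checked symbolically (a sketch assuming sympy is available):

```python
# Sketch: verifying that G' = (R^{-1})^T G R^{-1} for the polar ->
# Cartesian change of coordinates gives the identity matrix.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
# R has components dx'^a/dx^c: the Jacobian of (x, y) wrt (r, theta)
R = sp.Matrix([[sp.cos(th), -r*sp.sin(th)],
               [sp.sin(th),  r*sp.cos(th)]])
g_polar = sp.diag(1, r**2)

R_inv = R.inv()
g_cart = sp.simplify(R_inv.T * g_polar * R_inv)
print(g_cart)  # the 2x2 identity matrix (after simplification)
```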
 
  • #6
Orodruin said:
...
These relations follow directly from the more illustrative
$$
\newcommand{\dd}[2]{\frac{\partial #1}{\partial #2}}
T'^{ab} = \dd{x'^a}{x^c} \dd{x'^b}{x^d} T^{cd} \quad \mbox{and}
\quad T'_{ab} = \dd{x^c}{x'^a} \dd{x^d}{x'^b} T_{cd}.
$$
Note that the components of ##R## are ##\partial x'^a/\partial x^c## and those of ##R^{-1}## are ##\partial x^c/\partial x'^a##.

Great answer; thanks a lot! There are so many details to know, and you shed some light for me on how contravariant and covariant indices are to be handled. As you said,
##T'^{ab} = \frac{\partial x'^a}{\partial x^c} \frac{\partial x'^b}{\partial x^d} T^{cd}## and ##T'_{ab} = \frac{\partial x^c}{\partial x'^a} \frac{\partial x^d}{\partial x'^b} T_{cd}## are more illustrative.
 
  • #7
Happy to be of help.

To be honest, the word "illustrative" might have been the wrong one to use; "appropriate" or "illuminating" might have been better choices. Also, it is worth pointing out explicitly that when ##R^T = R^{-1}##, the covariant transformation also turns into the one you quoted in the OP, since then
$$
(R^{-1})^T T R^{-1} = (R^T)^T T R^{-1} = R T R^{-1}.
$$
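That collapse can be illustrated numerically (a sketch with numpy; the tensor components are made up):

```python
# Sketch: for a rotation, R^T = R^{-1}, so the covariant rule
# (R^{-1})^T T R^{-1} coincides with R T R^{-1}.
import numpy as np

theta = 0.7  # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # made-up rank-2 tensor components

R_inv = np.linalg.inv(R)
covariant = R_inv.T @ T @ R_inv
mixed = R @ T @ R_inv

print(np.allclose(covariant, mixed))  # True
```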

On an unrelated note, I could not help noticing that your ##\LaTeX## typesetting of the matrix elements was "r\ sin\theta" (etc.), which produces ##r\ sin\theta##. I understand that you wanted the whitespace to separate the ##sin## from the ##r##, but there is a more technically correct way of doing it: functions such as sine, cosine, the exponential function, etc., should be typeset with the function name in regular (upright) font rather than the italic math-mode variable font. When you write "rsin\theta", ##\LaTeX## interprets it all as a string of five variables and typesets it as such. If you instead write "r\sin\theta", the sine function is typeset appropriately as ##r\sin\theta##. Many such functions are predefined, but if you find one missing you can add your own using the \DeclareMathOperator command from the amsmath package (essentially always include amsmath ...). Normally I would not bother pointing this out, but from your efforts it seems you did pay some attention to how the ##\LaTeX## appeared, and it is a useful thing to know when (like me) you are a bit of a ##\LaTeX## perfectionist.
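A minimal standalone document illustrating the point (a hypothetical file; \sech is used as an example of an operator ##\LaTeX## does not predefine):

```latex
\documentclass{article}
\usepackage{amsmath} % essentially always include amsmath
\DeclareMathOperator{\sech}{sech} % add an operator LaTeX does not predefine
\begin{document}
$rsin\theta$   % wrong: typeset as five italic variables run together
$r\ sin\theta$ % spaced, but "sin" is still in the variable font
$r\sin\theta$  % correct: the operator name is set upright
$\sech x$      % the custom operator in use
\end{document}
```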
 
  • #8
Orodruin said:
On an unrelated note, I could not help but noticing that your ##\LaTeX## typesetting of the matrix...
Ah yes, and good observation!

I loved typesetting math equations with ##\LaTeX## when I was in college, almost 25 years ago, but lately don't use it. Being able to typeset with ##\LaTeX## is one of the perks of Physics Forum, and I tried to recollect as much as I could while writing the original post, but obviously I have forgotten many details. Maybe I'll bring my book and keep it handy, but thanks for goading me towards that!
 

1. What is a metric tensor and why is it important in evaluating coordinate systems?

A metric tensor is a mathematical object that describes the distance and angle relationships in a given space. In evaluating coordinate systems, the metric tensor allows us to calculate distances and angles between points in that space, providing a way to measure and understand the geometry of that system.

2. How does a metric tensor change in a primed coordinate system?

In a primed coordinate system, the metric tensor changes due to the change in basis vectors. The components of the metric tensor are transformed using the Jacobian matrix, which accounts for the change in basis vectors and their orientation in the new coordinate system.

3. What is the role of the inverse metric tensor in evaluating coordinate systems?

The inverse metric tensor is used to raise indices, while the metric tensor itself lowers them; together they let us switch between the covariant and contravariant components of a tensor. The metric is also used to calculate the lengths of vectors in a given coordinate system.

4. How is the metric tensor related to the curvature of a coordinate system?

The metric tensor is used to calculate the curvature of a coordinate system through the Riemann curvature tensor. The Riemann curvature tensor is derived from the metric tensor and describes the curvature of a space in terms of how vectors change as they move around a closed loop. This allows us to understand the geometric properties of a coordinate system.

5. Can the metric tensor be used to evaluate non-Cartesian coordinate systems?

Yes, the metric tensor can be used to evaluate any type of coordinate system, including non-Cartesian systems such as polar, cylindrical, or spherical coordinates. The metric tensor provides a way to measure distances and angles in any given coordinate system, allowing for a deeper understanding of its geometry.
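As an illustration of the last point, the polar metric itself can be obtained from the flat Cartesian metric via the Jacobian (a sketch assuming sympy):

```python
# Sketch: g_polar = J^T g_cart J, with J = d(x, y)/d(r, theta)
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r * sp.cos(th)
y = r * sp.sin(th)

J = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
               [sp.diff(y, r), sp.diff(y, th)]])
g_cart = sp.eye(2)  # flat Cartesian metric

g_polar = sp.simplify(J.T * g_cart * J)
print(g_polar)  # diag(1, r**2): the familiar polar-coordinate metric
```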
