Evaluating metric tensor in a primed coordinate system


Discussion Overview

The discussion revolves around the transformation of tensors, specifically the metric tensor, between different coordinate systems in the context of General Relativity (GR). Participants explore the applicability of transformation methods for various types of tensors and evaluate the metric tensor in Cartesian coordinates derived from polar coordinates.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions whether the method of transforming the inertia tensor is applicable to all tensors, particularly the metric tensor.
  • Another participant confirms that the transformation method is valid for all tensors but suggests that the order of matrix multiplication may be causing confusion.
  • A different participant argues that matrix multiplication is associative, implying that the order should not matter, and shares their successful method of obtaining the identity matrix.
  • One participant proposes that the correct transformation should involve the transpose of the transformation matrix rather than its inverse.
  • Another participant clarifies that a tensor is distinct from a matrix and discusses the specific transformation rules for different types of tensors, emphasizing the importance of index placement.
  • Further elaboration is provided on the transformation rules for contravariant and covariant indices, with examples illustrating these concepts.
  • One participant expresses gratitude for the detailed explanation regarding tensor transformations and the handling of indices.
  • A later reply offers a correction on LaTeX typesetting for mathematical functions, suggesting a more appropriate formatting method.

Areas of Agreement / Disagreement

Participants generally agree on the validity of the transformation methods for tensors, but there is disagreement regarding the specifics of matrix multiplication and the appropriate transformation rules for different types of tensors. The discussion remains unresolved on certain technical details.

Contextual Notes

Participants note that the transformation rules depend on the rank and type of indices of the tensors involved, and there are unresolved aspects regarding the application of these rules in specific cases.

vibhuav
I am trying to learn GR. In two of the books on tensors, there is an example of evaluating the inertia tensor in a primed coordinate system (for example, a rotated one) from that in an unprimed coordinate system using the equation ##I' = R I R^{-1}##, where ##R## is the transformation matrix, ##R^{-1}## is its inverse, and ##I## is the inertia matrix.

Is this method of transformation of (inertia) tensor from one coordinate system to another applicable for all tensors? In particular, can I use this method to evaluate the metric tensor in the primed coordinate system, given the metric tensor in the unprimed system and the transformation matrix?

Assuming it is applicable, I attempted to evaluate the metric tensor in the Cartesian coordinate system from the 2D polar system (using ##g'(Cart) = T g(polar) T^{-1}##, where ##T## is the transformation matrix from polar to Cartesian), and was expecting an identity matrix with diagonal elements 1 and all others 0.

I am evaluating the metric tensor as follows:
\begin{equation}
g'(Cart) =
\begin{bmatrix}
cos\theta & -r\ sin\theta\\
sin\theta & r\ cos\theta \end{bmatrix} \times
\begin{bmatrix}
1 & 0\\
0 & r^2 \end{bmatrix} \times
\begin{bmatrix}
cos\theta & sin\theta\\
-sin\theta \over r & cos\theta \over r\end{bmatrix}
\end{equation}
Instead of the identity matrix for the metric tensor in Cartesian coordinates, I ended up with:
\begin{bmatrix}cos^2\theta + r^2 sin^2\theta&cos\theta\ sin\theta - r^2 cos\theta\ sin\theta\\
cos\theta\ sin\theta - r^2 cos\theta\ sin\theta&sin^2\theta+r^2 cos^2\theta \end{bmatrix}
Almost there, but not quite; the ##r^2## is messing it up.

What am I missing? Since the Euclidean space is flat, the Cartesian coordinate system has an identity matrix as a metric tensor, right?
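Here is a quick symbolic check of my computation, just a sketch using sympy (the variable names are my own choices):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# Jacobian T of (x, y) = (r cos(theta), r sin(theta)) w.r.t. (r, theta)
T = sp.Matrix([[sp.cos(theta), -r * sp.sin(theta)],
               [sp.sin(theta),  r * sp.cos(theta)]])

g_polar = sp.diag(1, r**2)  # polar metric diag(1, r^2)

# The attempted rule T g T^{-1}: the result still contains r^2 terms,
# so it is NOT the identity matrix
g_attempt = sp.simplify(T * g_polar * T.inv())
print(g_attempt)
```

The (1,1) entry comes out as ##cos^2\theta + r^2 sin^2\theta##, exactly the result above.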
 
PeterDonis
vibhuav said:
Is this method of transformation of (inertia) tensor from one coordinate system to another applicable for all tensors?

Yes.

vibhuav said:
I am evaluating the metric tensor as follows

It looks like you are multiplying the matrices left to right. Try multiplying them right to left. (Note that right to left multiplication is the usual convention.)

vibhuav said:
Since the Euclidean space is flat, the Cartesian coordinate system has an identity matrix as a metric tensor, right?

Yes.
 
PeterDonis said:
It looks like you are multiplying the matrices left to right. Try multiplying them right to left. (Note that right to left multiplication is the usual convention.)

Actually, the grouping (left first vs. right first) can't matter because matrix multiplication is associative. But I was able to get the right answer (the identity matrix) by first multiplying the right two factors, then multiplying the result by the left factor. So I'm not sure what is going wrong by doing it the other way.
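A quick numeric sanity check of the associativity point, just a sketch with numpy (the matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

left_first = (A @ B) @ C   # multiply the left two factors first
right_first = A @ (B @ C)  # multiply the right two factors first

# Associativity: the grouping never changes the result (up to rounding)
print(np.allclose(left_first, right_first))  # True
```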
 
Shouldn't that be ##R^T##, not ##R^{-1}##? Because if we expect that ##R.g.R^{-1}=I## then it follows that ##g=R^{-1}.I.R## and hence ##g=I##.
 
Orodruin
First of all, a tensor is not a matrix or vice versa. A rank two tensor can be represented by one, but it really is a different object.

The transformation rule you quote is valid only for tensors of rank two that have one covariant and one contravariant index. In the case of the inertia tensor, you are likely using Cartesian coordinates, in which case index placement really does not matter. Hence, your
vibhuav said:
for example, a rotated one
is not just an example, but the special case in which the rule holds regardless of index placement. In the case where you have two contravariant indices, the matrix representation will instead be of the form
$$
T' = R T R^T,
$$
where ##R^T## is the transpose of ##R##, not its inverse. Note that, in the case of rotations, ##R^T = R^{-1}## and the transformation rule indeed is the same. For the case of two covariant indices, you would instead get
$$
T' = (R^{-1})^T T R^{-1}.
$$
These relations follow directly from the more illustrative
$$
\newcommand{\dd}[2]{\frac{\partial #1}{\partial #2}}
T'^{ab} = \dd{x'^a}{x^c} \dd{x'^b}{x^d} T^{cd} \quad \mbox{and}
\quad T'_{ab} = \dd{x^c}{x'^a} \dd{x^d}{x'^b} T_{cd}.
$$
Note that the components of ##R## are ##\partial x'^a/\partial x^c## and those of ##R^{-1}## are ##\partial x^c/\partial x'^a##.

If you want to consider the appropriate transformation rule for the metric with ##R G R^{-1}##, then you need to raise one of the covariant indices, which turns the metric into the Kronecker delta, represented by the identity matrix ##I##. You would then obtain
$$
R I R^{-1} = RR^{-1} = I,
$$
which is correct as the Kronecker delta is represented by the identity matrix in all coordinate systems. For your metric tensor transformation, you would obtain (both indices are covariant)
$$
G' = \begin{pmatrix}
\cos\theta & \sin\theta\\
-r\sin\theta & r\cos\theta
\end{pmatrix}^{-1}\begin{pmatrix}
1 & 0 \\ 0 & r^2\end{pmatrix}
\begin{pmatrix}
\cos\theta & -r \sin\theta\\
\sin\theta & r \cos\theta
\end{pmatrix}^{-1}
=
\begin{pmatrix}
\cos\theta & -\frac 1r \sin\theta\\
\sin\theta & \frac 1r \cos\theta
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\ 0 & r^2\end{pmatrix}
\begin{pmatrix}
\cos\theta & \sin\theta\\
-\frac 1r \sin\theta & \frac 1r \cos\theta
\end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}.
$$
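If you want to verify this chain of inversions yourself, here is a sketch with sympy (the matrix names are my own):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)

# J is the polar-to-Cartesian Jacobian, i.e., the R^{-1} of the covariant
# rule when the primed system is Cartesian and the unprimed one is polar
J = sp.Matrix([[sp.cos(theta), -r * sp.sin(theta)],
               [sp.sin(theta),  r * sp.cos(theta)]])

g_polar = sp.diag(1, r**2)

# Two covariant indices: G' = (J^{-1})^T G J^{-1}, which gives the identity
g_cart = sp.simplify(J.inv().T * g_polar * J.inv())
print(g_cart)  # identity matrix
```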

Edit: Clearly I started writing this before the previous posts, so in no way should it be taken as a critique of those. Personally, I try to stay away from matrix representations as much as possible, as the actual transformation rules tend to get lost. Only use them if you are working in Cartesian coordinates (i.e., you are only interested in rotations) or know precisely what you are doing.
 
Orodruin said:
...
In the case where you have two contravariant indices, the matrix representation will instead be of the form
$$
T' = R T R^T,
$$
where ##R^T## is the transpose of ##R##, not its inverse. Note that, in the case of rotations, ##R^T = R^{-1}## and the transformation rule indeed is the same. For the case of two covariant indices, you would instead get
$$
T' = (R^{-1})^T T R^{-1}.
$$
These relations follow directly from the more illustrative
$$
\newcommand{\dd}[2]{\frac{\partial #1}{\partial #2}}
T'^{ab} = \dd{x'^a}{x^c} \dd{x'^b}{x^d} T^{cd} \quad \mbox{and}
\quad T'_{ab} = \dd{x^c}{x'^a} \dd{x^d}{x'^b} T_{cd}.
$$
Note that the components of ##R## are ##\partial x'^a/\partial x^c## and those of ##R^{-1}## are ##\partial x^c/\partial x'^a##.

Great answer; thanks a lot! There are so many details to know, and you shed some light for me on how contravariant and covariant indices are to be handled. As you said,
##T'^{ab} = \frac{\partial x'^a}{\partial x^c} \frac{\partial x'^b}{\partial x^d} T^{cd}## and ##T'_{ab} = \frac{\partial x^c}{\partial x'^a} \frac{\partial x^d}{\partial x'^b} T_{cd}## are more illustrative.
 
Orodruin
Happy to be of help.

To be honest, the word "illustrative" might have been the wrong one to use. "Appropriate" or "illuminating" might have been better choices. Also, it makes sense to point out explicitly that in the case ##R^T = R^{-1}##, the covariant transformation also reduces to the one you quoted in the OP, since then
$$
(R^{-1})^T T R^{-1} = (R^T)^T T R^{-1} = R T R^{-1}.
$$
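A quick numeric illustration of this special case, just a sketch with numpy (the angle and the symmetric matrix are arbitrary choices):

```python
import numpy as np

phi = 0.7  # an arbitrary rotation angle
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # an arbitrary symmetric rank-two "tensor"

Rinv = np.linalg.inv(R)
covariant = Rinv.T @ T @ Rinv  # rule for two covariant indices
mixed = R @ T @ Rinv           # rule for one contravariant, one covariant index

# For rotations R^T = R^{-1}, so the two rules agree
print(np.allclose(covariant, mixed))  # True
```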

On an unrelated note, I could not help noticing that your ##\LaTeX## typesetting of the matrix elements was "r\ sin\theta" (etc.), which produces ##r\ sin\theta##. I understand that you wanted the whitespace to separate the ##sin## from the ##r##, but there is a more technically correct way of doing it: functions such as sine, cosine, the exponential function, etc., should be typeset with the function name in regular font rather than in the math-mode variable font. When you write "rsin\theta", ##\LaTeX## interprets it all as a string of five variables and typesets it as such. If you instead use "r\sin\theta", the sine function will be typeset appropriately as ##r\sin\theta##.

Many functions like this are predefined, but if you find one missing you can add your own using the \DeclareMathOperator command from the amsmath package (essentially always include amsmath ...). Normally I would not bother pointing it out, but from your efforts it seems you did pay some attention to how the ##\LaTeX## appeared, and it is a useful thing to know when (like me) you are a bit of a ##\LaTeX## perfectionist.
 
Orodruin said:
On an unrelated note, I could not help but noticing that your ##\LaTeX## typesetting of the matrix...
Ah yes, and good observation!

I loved typesetting math equations with ##\LaTeX## when I was in college, almost 25 years ago, but lately don't use it. Being able to typeset with ##\LaTeX## is one of the perks of Physics Forum, and I tried to recollect as much as I could while writing the original post, but obviously I have forgotten many details. Maybe I'll bring my book and keep it handy, but thanks for goading me towards that!
 