vanhees71 said:
$$\tilde{\Lambda} = \hat{\eta} \hat{\Lambda} \hat{\eta} = (\hat{\Lambda}^{-1})^{\text{T}},$$
i.e., the covariant components transform contravariantly to the contravariant components as expected.
So a covariant vector transforms in such a way that the transformation matrix is the transpose of the inverse of the LT (the transformation matrix for a contravariant vector). By explicitly showing this in your derivation, I learned a lot that textbooks don't explicitly explain. (Please read my discussion/statement below to confirm whether my understanding of what you derived is correct.)
What do you mean by "the covariant components transform contravariantly to the contravariant components as expected"?
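As a quick numerical sanity check of the quoted identity ##\tilde{\Lambda} = \hat{\eta} \hat{\Lambda} \hat{\eta} = (\hat{\Lambda}^{-1})^{\text{T}}## (a minimal numpy sketch of my own, not from the thread; the boost-times-rotation ##\Lambda## is chosen so the matrix is not symmetric and the transpose actually matters):
```python
import numpy as np

# Minkowski metric, signature (+,-,-,-)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Boost along x (rapidity chi) composed with a rotation about z (angle th),
# so Lambda is a Lorentz transformation that is NOT a symmetric matrix.
chi, th = 0.7, 0.3
B = np.eye(4)
B[0, 0] = B[1, 1] = np.cosh(chi)
B[0, 1] = B[1, 0] = -np.sinh(chi)
R = np.eye(4)
R[1, 1] = R[2, 2] = np.cos(th)
R[1, 2], R[2, 1] = -np.sin(th), np.sin(th)
L = B @ R

# Defining property of a Lorentz transformation: Lambda^T eta Lambda = eta
assert np.allclose(L.T @ eta @ L, eta)

# The quoted identity: eta Lambda eta = (Lambda^{-1})^T
print(np.allclose(eta @ L @ eta, np.linalg.inv(L).T))  # True
```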
Ibix said:
You never need to. I have to say, my feeling is that all of the index placement (except upper and lower) on coordinate transforms is an unnecessary flourish. There are only the forward and inverse transforms - but they have opposite effects when applied to co- and contra-variant tensor components. So people add this diagonal index placement to try to indicate that this forward transformation is actually an inverse transformation applied to a co-vector... or something. Honestly, as long as I establish clearly what the relationship between my coordinates is (i.e., write out my primed coordinates explicitly in terms of my unprimed ones at some point) I can do this:$$\begin{eqnarray*}
V'^\nu&=&L^\nu_\mu V^\mu\\
\omega'_\nu&=&L^\mu_\nu\omega_\mu\\
V^\nu&=&L^\nu_\mu V'^\mu\\
\omega_\nu&=&L^\mu_\nu\omega'_\mu\end{eqnarray*}$$I would not describe that as a
good notation, using the same symbol for both forward and inverse transforms, but it's still completely unambiguous. I'm just saying do a Lorentz transform (or whatever transform I established previously). You can see from the primes whether I'm transforming from unprimed to primed coordinates or vice versa. And you can tell from that and the index placement what the components of ##L## must be. The rules are:
- Each component of ##L## is a partial derivative of one coordinate from one system with respect to one coordinate in the other system (i.e. they all look like either ##\partial x^a/\partial x'^b## or ##\partial x'^a/\partial x^b##)
- If you're transforming an upper index, you need the matching index on the bottom of the derivative
- If you're transforming a lower index, you need the matching index on the top of the derivative
- Prime goes on the top or bottom coordinate, whichever has the index from the primed tensor
Applying those rules:$$\begin{eqnarray*}
V'^\nu&=&\frac{\partial x'^\nu}{\partial x^\mu} V^\mu\\
\omega'_\nu&=&\frac{\partial x^\mu}{\partial x'^\nu}\omega_\mu\\
V^\nu&=&\frac{\partial x^\nu}{\partial x'^\mu} V'^\mu\\
\omega_\nu&=&\frac{\partial x'^\mu}{\partial x^\nu}\omega'_\mu\end{eqnarray*}$$That works generally - not just for the Lorentz transforms. You are welcome to write out the sums explicitly and fill in the partial derivatives for the Lorentz transforms to see if you get the matrices you expect.
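Taking up that invitation, here is a sketch with sympy (my own check, not from the thread), using coordinates ##(t, x)## rather than ##(ct, x)##, so the matrix entries carry explicit factors of ##c##:
```python
import sympy as sp

t, x, v, c = sp.symbols('t x v c', positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)

# Primed coordinates written explicitly in terms of unprimed ones (boost along x)
tp = gamma * (t - v * x / c**2)
xp = gamma * (x - v * t)

# Forward Jacobian d x'^nu / d x^mu, which transforms contravariant components
J_fwd = sp.Matrix([[sp.diff(tp, t), sp.diff(tp, x)],
                   [sp.diff(xp, t), sp.diff(xp, x)]])

# Inverse Jacobian d x^mu / d x'^nu: just the matrix inverse of the forward one
J_inv = sp.simplify(J_fwd.inv())

sp.pprint(J_fwd)  # [[gamma, -gamma*v/c**2], [-gamma*v, gamma]]
sp.pprint(J_inv)  # the same with v -> -v, i.e. the inverse boost
assert sp.simplify(J_fwd * J_inv) == sp.eye(2)
```
The inverse Jacobian comes out as the forward one with ##v \to -v##, exactly the matrix you would expect for the inverse Lorentz transform.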
So my point is that upper/lower index placement is important, but horizontal order only matters for tensors. NW/SE index placement on coordinate transformations is unnecessary, and isn't used consistently between authors anyway (Carroll notes explicitly that Schutz uses a different convention from Carroll's). You can deduce what's actually meant purely from the upper/lower index placement and knowing what a coordinate transform actually does. The conventions are meant to help you, to remind you which is the forward and which is the inverse transform. But, frankly, since they're inconsistently used I think they're more confusing than helpful, and I just note that people have conventions and otherwise ignore them.
I think if I explicitly use partial derivatives as the transformations of contravariant and covariant vectors, then you have a point that there are only two kinds of transformations: ##\frac{\partial x'^\nu}{\partial x^\mu}## (forward/contravariant transformation) and ##\frac{\partial x^\mu}{\partial x'^\nu}## (inverse/covariant transformation).
According to what you wrote,
##V'^\nu = \frac{\partial x'^\nu}{\partial x^\mu} V^\mu##
##\omega'_\nu = \frac{\partial x^\mu}{\partial x'^\nu}\omega_\mu##
The contravariant transformation has the prime in the numerator and the unprimed in the denominator. For the covariant transformation, we switch primed and unprimed, BUT then, due to the nature of contravariant and covariant vectors, the indices also switch: for contravariant, ##\nu## is in the numerator and ##\mu## in the denominator, while for covariant, ##\mu## is in the numerator and ##\nu## in the denominator. I think this is the reason the horizontal index placement is important when NOT writing in partial-derivative form!
@vanhees71 am I correct here?
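To make that numerator/denominator pattern concrete, here is the standard boost in 1+1 dimensions written both ways (my own illustration, with ##c = 1##, reading the numerator index as the row label):
$$\frac{\partial x'^\nu}{\partial x^\mu} = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix}, \qquad \frac{\partial x^\mu}{\partial x'^\nu} = \begin{pmatrix} \gamma & \gamma\beta \\ \gamma\beta & \gamma \end{pmatrix}.$$
The two matrices are inverses of each other: their product has ##\gamma^2(1-\beta^2)=1## on the diagonal and zero off it. Note that a pure boost is a symmetric matrix, so this particular example cannot show a transpose at work; composing with a rotation would.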
vanhees71 said:
But again, I strongly recommend being careful with the vertical AND HORIZONTAL placement of the indices. The first equation must read
$$V^{\prime \nu}={\Lambda^{\nu}}_{\mu} V^{\mu}.$$
The second one is
$$\omega_{\nu}'=\eta_{\nu \rho} \omega^{\prime \rho} = \eta_{\nu \rho} {\Lambda^{\rho}}_{\sigma} \omega^{\sigma} = \eta_{\nu \rho} {\Lambda^{\rho}}_{\sigma} \eta^{\sigma \alpha} \omega_{\alpha} ={\Lambda_{\nu}}^{\alpha} \omega_{\alpha}={\Lambda_{\nu}}^{\mu} \omega_{\mu}.$$
The third reads
$$V^{\nu} = {(\Lambda^{-1})^{\nu}}_{\mu} V^{\prime \mu} = {\Lambda_{\mu}}^{\nu} V^{\prime \mu},$$
because
$$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}$$
and thus by multiplying with ##\eta^{\rho \alpha}## on both sides
$${\Lambda_{\nu}}^{\alpha} {\Lambda^{\nu}}_{\sigma}=\delta_{\sigma}^{\alpha}$$
and thus
$${(\Lambda^{-1})^{\nu}}_{\mu} = {\Lambda_{\mu}}^{\nu}.$$
As you see, the horizontal placement of the indices is as crucial in the Ricci calculus as is the vertical!
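As a concrete instance of that last identity (my own illustration, not from the thread: the 1+1-dimensional boost with ##c=1## and ##\eta = \mathrm{diag}(1,-1)##):
$${\Lambda^{\mu}}_{\nu} = \begin{pmatrix} \gamma & -\gamma\beta \\ -\gamma\beta & \gamma \end{pmatrix}, \qquad {\Lambda_{\mu}}^{\nu} = \eta_{\mu\rho}\,{\Lambda^{\rho}}_{\sigma}\,\eta^{\sigma\nu} = \begin{pmatrix} \gamma & \gamma\beta \\ \gamma\beta & \gamma \end{pmatrix},$$
and multiplying the two matrices indeed gives ##\gamma^2(1-\beta^2)=1## on the diagonal and zero off it, confirming ##{(\Lambda^{-1})^{\nu}}_{\mu} = {\Lambda_{\mu}}^{\nu}##. (A pure boost is symmetric, so here the horizontal swap shows up only as the sign flip ##\beta \to -\beta##.)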
I will summarize.
For contravariant,
##V' = \Lambda V \quad \rightarrow \quad (\Lambda^{-1})^T V' = (\Lambda^{-1})^T \Lambda V = V##
where ##\tilde{\Lambda} = (\Lambda^{-1})^T## is the covariant transformation matrix, as @vanhees71 stated in #17. I just want to clarify: is it ##(\Lambda^{-1})^T \Lambda = \Lambda (\Lambda^{-1})^T = I~## rather than ##~(\Lambda^{-1}) \Lambda = \Lambda (\Lambda^{-1})= I~##? Or is it just a notation issue, where in matrix notation ##(\Lambda^{-1})## already carries an inherent transpose in the operation (as happens when taking inverses), so that the transpose ##T## you refer to in ##(\Lambda^{-1})^T## is that inherent transpose rather than a transpose performed after taking the inverse?
##V' ^\nu = \Lambda^\nu~_\mu V^\mu##
##(\Lambda^{-1})^\alpha~_\nu \Lambda^\nu~_\mu V^\mu = (\Lambda^{-1})^\alpha~_\nu V'^\nu##
##V^\alpha = (\Lambda^{-1})^\alpha~_\nu V'^\nu##
For ##\Lambda^\nu~_\mu##, ##\nu## is the row, but for ##(\Lambda^{-1})^\alpha~_\nu##, ##\nu## is the column; this is exactly the effect of the transpose in ##(\Lambda^{-1})^T##.
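One way to probe the question above numerically (my own sketch; note that a pure boost is a symmetric matrix and therefore cannot distinguish the two products, which is why this uses a boost composed with a rotation):
```python
import numpy as np

# A Lorentz transformation that is not symmetric: boost along x times rotation about z
chi, th = 0.7, 0.3
B = np.eye(4); B[0, 0] = B[1, 1] = np.cosh(chi); B[0, 1] = B[1, 0] = -np.sinh(chi)
R = np.eye(4); R[1, 1] = R[2, 2] = np.cos(th); R[1, 2], R[2, 1] = -np.sin(th), np.sin(th)
L = B @ R
Linv = np.linalg.inv(L)

print("inv(L) @ L   == I:", np.allclose(Linv @ L, np.eye(4)))    # True: the matrix inverse
print("inv(L).T @ L == I:", np.allclose(Linv.T @ L, np.eye(4)))  # False: the transpose is a separate operation
```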
For covariant,
##\omega' = (\Lambda^{-1})^T \omega \quad \rightarrow \quad \Lambda \omega' = \Lambda (\Lambda^{-1})^T \omega = \omega##
##\omega'_\nu = (\Lambda^{-1})^\mu~_\nu \omega_\mu##
##\Lambda^\nu~_\alpha (\Lambda^{-1})^\mu~_\nu \omega_\mu = \Lambda^\nu~_\alpha \omega'_\nu##
##(\Lambda^{-1})^\mu~_\nu \Lambda^\nu~_\alpha \omega_\mu = \Lambda^\nu~_\alpha \omega'_\nu##
##\omega_\alpha = \Lambda^\nu~_\alpha \omega'_\nu##
Notice that the forward covariant transformation is ##(\Lambda^{-1})^\mu~_\nu##, so for the inverse covariant transformation we drop the inverse to get ##\Lambda##, switch (transpose) the indices, and relabel ##\mu## as ##\alpha##, giving ##\Lambda^\nu~_\alpha##.
So ##\omega_\alpha## transforms in the same way as ##V'^\nu##, and ##V^\alpha## transforms in the same way as ##\omega'_\nu##.
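Finally, a numerical check of this covariant round trip and of the invariance of the contraction ##\omega_\mu V^\mu## (again my own sketch, with the same non-symmetric boost-times-rotation ##\Lambda## as above):
```python
import numpy as np

chi, th = 0.7, 0.3
B = np.eye(4); B[0, 0] = B[1, 1] = np.cosh(chi); B[0, 1] = B[1, 0] = -np.sinh(chi)
R = np.eye(4); R[1, 1] = R[2, 2] = np.cos(th); R[1, 2], R[2, 1] = -np.sin(th), np.sin(th)
L = B @ R                    # contravariant transformation: V'^nu = Lambda^nu_mu V^mu
Lcov = np.linalg.inv(L).T    # covariant transformation: w'_nu = (Lambda^{-1})^mu_nu w_mu

rng = np.random.default_rng(0)
V = rng.normal(size=4)       # contravariant components V^mu
w = rng.normal(size=4)       # covariant components omega_mu
Vp, wp = L @ V, Lcov @ w

# Round trip: omega_alpha = Lambda^nu_alpha omega'_nu, i.e. w = L.T @ wp
print("w recovered:", np.allclose(L.T @ wp, w))    # True
# The contraction omega_mu V^mu is frame independent
print("invariant:  ", np.isclose(wp @ Vp, w @ V))  # True
```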