# Transpose of a matrix with mixed indices

## Main Question or Discussion Point

Hi!
Given a matrix ##A## of elements ##A_i{}^j##, which is the right transpose:
##A_j{}^i##
or
##A^j{}_i##
?
?

Fredrik
Staff Emeritus
Gold Member
Assuming that you mean that ##A_i{}^j## is what's on row i, column j, then the transpose of the matrix has ##A_j{}^i## on row i, column j.
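In plain NumPy terms, this row/column reading is just ordinary matrix transposition (a trivial sketch; the sample matrix is illustrative):

```python
import numpy as np

# A[i, j] plays the role of A_i^j: the entry on row i, column j
A = np.array([[1, 2, 3],
              [4, 5, 6]])

# The transpose has A[j, i] on row i, column j
B = A.T
assert B[1, 0] == A[0, 1]
assert np.array_equal(B, np.array([[1, 4], [2, 5], [3, 6]]))
```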

Uhm... I have this equation:
$$\Lambda^T g \Lambda = g$$
in index notation:
$$\left( \Lambda^T \right)^{\mu}{}_{\rho}\, g^{\rho}{}_{\alpha}\,\Lambda^{\alpha}{}_{\nu}= g^{\mu}{}_{\nu}$$
now,
$$\left( \Lambda^T \right)^{\mu}{}_{\rho}=\Lambda^{\rho}{}_{\mu}$$
right?
And so I get:
$$\Lambda^{\rho}{}_{\mu}\, g^{\rho}{}_{\alpha}\,\Lambda^{\alpha}{}_{\nu}= g^{\mu}{}_{\nu}$$
Is this equation correct? (I don't think so, because the index positions on the two sides don't match.)

Fredrik
Actually, in this context (special relativity), it's conventional to write row ##\rho##, column ##\alpha## of ##g## as ##g_{\rho\alpha}##.

The convention for ##g^{-1}## is that row ##\rho##, column ##\alpha## is written as ##g^{\rho\alpha}##.

Also note that when you multiply the original equation by ##g^{-1}## from the left, you find that $$\Lambda^{-1}=g^{-1}\Lambda^Tg.$$ Row ##\rho##, column ##\alpha## of this matrix is written as
$$(\Lambda^{-1})^\rho{}_\alpha =(g^{-1}\Lambda^Tg)^\rho{}_\alpha =g^{\rho\beta}\Lambda^\mu{}_\beta g_{\mu\alpha}=\Lambda_\alpha{}^\rho.$$
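For what it's worth, both the defining property and the inverse formula above can be verified numerically (a NumPy sketch; the boost along x and the signature (+, -, -, -) are illustrative convention choices, not fixed by the thread):

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -) -- a convention choice
g = np.diag([1.0, -1.0, -1.0, -1.0])
g_inv = np.linalg.inv(g)  # equals g for this particular metric

# Lorentz boost along x with rapidity phi
phi = 0.3
ch, sh = np.cosh(phi), np.sinh(phi)
L = np.array([[ch, -sh, 0.0, 0.0],
              [-sh, ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Defining property: Lambda^T g Lambda = g
assert np.allclose(L.T @ g @ L, g)

# Hence Lambda^{-1} = g^{-1} Lambda^T g
L_inv = g_inv @ L.T @ g
assert np.allclose(L_inv @ L, np.eye(4))
```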

Here's my two cents.

Suppose I have two vector spaces, say ##X## and ##Y##, and a linear map ##f:X\to Y##. Then you automatically get a map ##f^*:Y^*\to X^*##, usually called the transpose or pullback. It's defined in the obvious way: for ##\omega## in ##Y^*##,
$$(f^*\omega)(x) = \omega(fx).$$

If we have an inner product on ##X## and on ##Y##, we have an identification of ##X^*## with ##X## and of ##Y^*## with ##Y##. We can use this to define the transpose map ##f^T:Y\to X##.

Specifically, define ##\theta:X\to X^*## by ##(\theta x)(x') = (x,x')## for all ##x'## in ##X##; similarly, let ##\psi:Y\to Y^*## by ##(\psi y)(y')=(y,y')##. We put
$$f^T = \theta^{-1}f^*\psi,$$
or ##\theta f^T = f^*\psi##. Then for any ##y## we have ##\theta f^T(y) = f^*\psi(y)##. Both sides are elements of ##X^*##, so we can compare them by their action on an arbitrary ##x## in ##X##:
$$\theta f^T(y)[x] = (f^Ty, x),$$
$$f^*\psi(y)[x] = \psi(y)(fx) = (y, fx).$$

So we have a coordinate-independent definition of the transpose of a map between two inner product spaces.
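This coordinate-free definition can be sanity-checked numerically (a NumPy sketch; the dimensions, random inner products, and variable names are illustrative assumptions, not anything from the thread). In a basis, ##\theta## acts as the Gram matrix ##G## of the inner product on ##X## and ##\psi## as the Gram matrix ##g## on ##Y##, so ##f^T = \theta^{-1}f^*\psi## becomes the matrix ##G^{-1}f^{\mathsf T}g##:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_spd(n):
    """Random symmetric positive-definite matrix: an illustrative inner product."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

# f : X -> Y with dim X = 2, dim Y = 3, so f is a 3x2 matrix
f = rng.standard_normal((3, 2))
G = random_spd(2)  # inner product on X (the matrix of theta)
g = random_spd(3)  # inner product on Y (the matrix of psi)

# psi y is the covector g y; pulling it back along f gives f.T @ (g y);
# applying theta^{-1} = G^{-1} yields the matrix of f^T:
fT = np.linalg.inv(G) @ f.T @ g

# Check the defining property (f^T y, x)_G = (y, f x)_g on random vectors
x = rng.standard_normal(2)
y = rng.standard_normal(3)
lhs = (fT @ y) @ G @ x
rhs = y @ g @ (f @ x)
assert np.isclose(lhs, rhs)
```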
Now take a basis for ##X## and one for ##Y##, and find the components of any map ##f:X\to Y## by
$$(fx)^i = f^i{}_j x^j.$$

Writing ##g_{ij}## for the components of the inner product on ##Y## and ##G_{ij}## for those on ##X##, we have
$$(y, fx) = g_{ij}y^i(fx)^j = g_{ij}y^if^j{}_kx^k$$
and
$$(f^Ty, x) = G_{ij}(f^Ty)^ix^j = G_{ij}(f^T)^i{}_ky^kx^j.$$
Reindex the dummy variables and compare:
$$g_{ij}f^j{}_k = G_{jk}(f^T)^j{}_i.$$

Using the convention that the inverse of ##G## has components ##G^{ij}##,
$$(f^T)^i{}_j = g_{jk}f^k{}_lG^{li}.$$
Using the convention that ##g## (or ##G##) raises and lowers indices, we have
$$(f^T)^i{}_j = f_j{}^i.$$

-----------------------

This is just a long-winded way to say that
$$g_{ij}f^j{}_k = G_{jk}(f^T)^j{}_i$$
is the same as ##f_{ik} = (f^T)_{ki}##; then make the indices match on both sides of the equation.
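The final index identity can also be checked numerically with `einsum` (a NumPy sketch; the dimensions and random metrics are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_spd(n):
    """Random symmetric positive-definite matrix: an illustrative inner product."""
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

# f^i_j : X -> Y with dim X = 2, dim Y = 3 (sizes are illustrative)
f = rng.standard_normal((3, 2))
g = random_spd(3)             # inner product on Y, components g_ij
G = random_spd(2)             # inner product on X, components G_ij
G_inv = np.linalg.inv(G)      # inverse metric, components G^{ij}

# (f^T)^i_j = g_jk f^k_l G^{li}
fT = np.einsum('jk,kl,li->ij', g, f, G_inv)

# Lower the upper index on each side: f_ik = g_ij f^j_k, (f^T)_ki = G_kj (f^T)^j_i
f_low = g @ f
fT_low = G @ fT

# f_ik = (f^T)_ki
assert np.allclose(f_low, fT_low.T)
```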