What is the role of the transpose matrix in tensor transformations?

AI Thread Summary
The discussion centers on the role of the transpose matrix in tensor transformations, specifically in the context of the Lorentz transformation. The transformation of the Faraday tensor is expressed in both component and matrix forms, with the matrix equation F' = LFL^T raising questions about the necessity of the transpose. Participants explore the implications of matrix multiplication definitions and the proper handling of indices, emphasizing the distinction between covariant and contravariant tensors. The confusion stems from misunderstanding index manipulation and the proper application of matrix operations. Ultimately, clarity is achieved through revisiting matrix multiplication principles and recognizing the significance of index placement.
peterjaybee
Hi,

In component form the transformation for the following tensor can be written as
F'^{\mu\nu}=\Lambda^{\mu}_{\alpha}\Lambda^{\nu}_{\beta}F^{\alpha\beta}

or, in matrix notation, apparently as
F'=LFL^{T}
where L is the Lorentz transformation matrix.

I'm happy with the component form, but I don't understand where the transpose matrix comes from in the matrix equation, or why it sits to the right of the F tensor.
 
It follows immediately from the definition of the product of two matrices.

(AB)_{ij}=A_{ik}B_{kj}

(What does this definition say is on row i, column j of LFL^T?)
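A minimal numerical sketch of that definition (the matrices A and B below are just random placeholders, not anything from the thread):

```python
import numpy as np

A = np.random.rand(4, 4)
B = np.random.rand(4, 4)

# (AB)_{ij} = A_{ik} B_{kj}: the summed index k is a column index of A
# and a row index of B.
print(np.allclose(np.einsum('ik,kj->ij', A, B), A @ B))  # True
```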
 
I'm sorry, I still can't see it.
 
No need to apologize. I know a lot of people are having difficulties with this. I'm genuinely interested in why that is, so when you do see it, I'd appreciate if you could tell me what it was that confused you.

If we write the component on row \mu, column \nu, of an arbitrary matrix X as X_{\mu\nu}, then

(LFL^T)_{\mu\nu}=(LF)_{\mu\rho}(L^T)_{\rho\nu}=L_{\mu\sigma}F_{\sigma\rho}L_{\nu\rho}=L_{\mu\sigma}L_{\nu\rho}F_{\sigma\rho}
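A minimal numerical check of this identity (the boost speed and the random antisymmetric F below are placeholders, not anything from the thread):

```python
import numpy as np

# Placeholder Lorentz boost along x with beta = 0.6.
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

# Random antisymmetric matrix standing in for the Faraday tensor.
A = np.random.rand(4, 4)
F = A - A.T

# Component form: L_{mu sigma} L_{nu rho} F_{sigma rho}
component_form = np.einsum('ms,nr,sr->mn', L, L, F)

# Matrix form: L F L^T
matrix_form = L @ F @ L.T

print(np.allclose(component_form, matrix_form))  # True
```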
 
I struggle with this concept (and a lot of other index manipulations) because I find the index notation unfamiliar and a little alien. Because I don't understand it, my logic is flawed. For example, when I initially saw
F'^{\mu\nu}=\Lambda^{\mu}_{\alpha}\Lambda^{\nu}_{\beta}F^{\alpha\beta},
I thought that to get the transformed Faraday components you just multiply two Lorentz matrices together and then right-multiply by the untransformed Faraday tensor. Even though I know this is wrong (having tried it), I do not understand why it is wrong. It is very difficult to describe.


Thanks to you, I now understand how to do the manipulation, which is a relief :smile:. The manipulation itself makes sense; I just don't understand where my logic above fails, if you see what I mean.

I'll try expressing it another way. If someone asked me,
"Can you get the transformed Faraday components by just multiplying two Lorentz matrices together and then right-multiplying by the untransformed Faraday tensor?"
I would say no, but if they then asked me why not, I would be stuck.
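A minimal numerical sketch of exactly that question (the boost and the random antisymmetric F are placeholders, as in the earlier sketch): multiplying the two Lorentz matrices together first and then right-multiplying by F does not reproduce the transformation.

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
A = np.random.rand(4, 4)
F = A - A.T  # stand-in for the Faraday tensor

naive = (L @ L) @ F    # "multiply two Lorentz matrices, then F"
correct = L @ F @ L.T  # the actual matrix form of the transformation

print(np.allclose(naive, correct))  # False in general
```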
 
Maybe it will help a little if we say it in coordinate-free language. In spacetime the distinction between a vector and a covector is mostly conceptual: computationally, we can convert a vector into a covector, or a covector into a vector, using the metric tensor g(_,_). So if we have a vector v, the covector corresponding to it is defined as g(v,_), i.e. the vector v^\mu corresponds to the covector v_\alpha through v_\alpha = g_{\alpha\mu}v^\mu.

Similarly, if we have a contravariant tensor F^{\alpha\beta}, we use it by contracting it with covectors v and w thus: F^{\alpha\beta}v_\alpha w_\beta. But, as we have seen above, v_\alpha = g_{\alpha\mu}v^\mu and w_\beta = g_{\beta\nu}w^\nu. So F^{\alpha\beta}v_\alpha w_\beta = F^{\alpha\beta}g_{\alpha\mu}v^\mu g_{\beta\nu}w^\nu = g_{\alpha\mu}g_{\beta\nu}F^{\alpha\beta}v^\mu w^\nu.

So the contravariant tensor, which acts on pairs of covectors, can be made to act on the corresponding vectors by replacing it with the covariant tensor g_{\alpha\mu}g_{\beta\nu}F^{\alpha\beta} = F_{\mu\nu}.
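A minimal numerical sketch of the index lowering described above, assuming the (+,-,-,-) Minkowski metric and a random antisymmetric stand-in for F^{\alpha\beta}:

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])  # Minkowski metric, (+,-,-,-) convention
A = np.random.rand(4, 4)
F_upper = A - A.T                     # stand-in for F^{alpha beta}

# F_{mu nu} = g_{alpha mu} g_{beta nu} F^{alpha beta}
F_lower = np.einsum('am,bn,ab->mn', g, g, F_upper)

# Contracting F^{alpha beta} with covectors gives the same number as
# contracting F_{mu nu} with the corresponding vectors.
v = np.random.rand(4)
w = np.random.rand(4)
v_co = g @ v                          # v_alpha = g_{alpha mu} v^mu
w_co = g @ w
print(np.isclose(v_co @ F_upper @ w_co, v @ F_lower @ w))  # True
```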
 
Just noticed the question was not about raising and lowering indices. Ignore my previous post.
 
Look at the definition of matrix multiplication again, in #2. Note that the sum is always over an index that's a column index for the matrix on the left and a row index for the matrix on the right. Since \Lambda^\mu{}_{\alpha} is row \mu, column \alpha of a \Lambda, and F^{\alpha\beta} is row \alpha, column \beta of an F, the result

\Lambda^\mu{}_\alpha F^{\alpha\beta}=(\Lambda F)^{\mu\beta}

follows immediately from the definition of matrix multiplication. But now look at

\Lambda^\nu{}_\beta F^{\alpha\beta}

Note that the sum is over the column index of F. If you have another look at the definition of matrix multiplication, you'll see that this means that if the above is a component of the product of two matrices, one of which is F, then F must be the matrix on the left. When you understand that, the rest should be easy.

Also note that you should write \Lambda^\mu_\nu in LaTeX as \Lambda^\mu{}_\nu, so that the column index appears diagonally to the right, below the row index. And check out the comment about the inverse here to see why the horizontal position of the indices matters.
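A minimal numerical sketch of that point about which index is summed (placeholder L and F, as before): contracting with F's row index is ordinary left multiplication, while contracting with F's column index amounts to putting F on the left and a transpose on the right.

```python
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])
A = np.random.rand(4, 4)
F = A - A.T

# Sum over F's row index: Lambda^mu_alpha F^{alpha beta} = (L F)^{mu beta}
print(np.allclose(np.einsum('ma,ab->mb', L, F), L @ F))    # True

# Sum over F's column index: Lambda^nu_beta F^{alpha beta} = (F L^T)^{alpha nu}
print(np.allclose(np.einsum('nb,ab->an', L, F), F @ L.T))  # True
```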
 
I finally get it! It's a miracle.

Thank you for your help.
 