
Tensor transformations

  1. Apr 19, 2010 #1
    Hi,

    In component form, the transformation of the tensor F can be written as
    [tex]F'^{\mu\nu}=\Lambda^{\mu}_{\alpha}\Lambda^{\nu}_{\beta}F^{\alpha\beta}[/tex]

    or in matrix notation, apparently, as
    [tex]F'=LFL^{T}[/tex]
    Here L is the Lorentz transformation matrix.

    I'm happy with the component form, but I don't understand where the transposed matrix comes from in the matrix equation, or why it appears to the right of F.
     
  3. Apr 19, 2010 #2

    Fredrik


    It follows immediately from the definition of the product of two matrices.

    [tex](AB)_{ij}=A_{ik}B_{kj}[/tex]

    (What does this definition say is on row i, column j of [tex]LFL^T[/tex]?)
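    A quick numerical check of that definition may help (a minimal numpy sketch; the matrices are arbitrary placeholders, not Lorentz transformations):

[code]
import numpy as np

A = np.arange(1, 5).reshape(2, 2)   # arbitrary 2x2 matrices
B = np.arange(5, 9).reshape(2, 2)

# (AB)_{ij} = sum_k A_{ik} B_{kj}, spelled out with einsum
product_from_definition = np.einsum('ik,kj->ij', A, B)

assert np.array_equal(product_from_definition, A @ B)
[/code]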
     
  4. Apr 20, 2010 #3
    I'm sorry, I still can't see it.
     
  5. Apr 20, 2010 #4

    Fredrik


    No need to apologize. I know a lot of people are having difficulties with this. I'm genuinely interested in why that is, so when you do see it, I'd appreciate it if you could tell me what it was that confused you.

    If we write the component on row [itex]\mu[/itex], column [itex]\nu[/itex], of an arbitrary matrix X as [itex]X_{\mu\nu}[/itex], then

    [tex](LFL^T)_{\mu\nu}=(LF)_{\mu\rho}(L^T)_{\rho\nu}=L_{\mu\sigma}F_{\sigma\rho}L_{\nu\rho}=L_{\mu\sigma}L_{\nu\rho}F_{\sigma\rho}[/tex]
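    The identity is easy to verify numerically. Here is a minimal numpy sketch (the boost along x with v = 0.6c is just a convenient example of a Lorentz matrix, and a random antisymmetric matrix stands in for F):

[code]
import numpy as np

# Example Lorentz boost along x: v = 0.6c, so gamma = 1.25
gamma, beta = 1.25, 0.6
L = np.array([[ gamma,      -gamma*beta, 0, 0],
              [-gamma*beta,  gamma,      0, 0],
              [ 0,           0,          1, 0],
              [ 0,           0,          0, 1]])

# Arbitrary antisymmetric matrix standing in for the field tensor F
M = np.random.rand(4, 4)
F = M - M.T

# Component form: L_{mu sigma} L_{nu rho} F_{sigma rho}
component_form = np.einsum('ms,nr,sr->mn', L, L, F)

# Matrix form: L F L^T
matrix_form = L @ F @ L.T

assert np.allclose(component_form, matrix_form)
[/code]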
     
  6. Apr 20, 2010 #5
    I struggle with this concept (and a lot of other index manipulations) because I find the index notation unfamiliar and a little alien. Because I don't understand it, my logic is flawed. For example, when I initially saw
    [tex]F'^{\mu\nu}=\Lambda^{\mu}_{\alpha}\Lambda^{\nu}_{\beta}F^{\alpha\beta}[/tex],
    I thought that to get the transformed Faraday components you just multiply two Lorentz matrices together and then right-multiply by the untransformed Faraday tensor. Even though I know this is wrong (having tried it), I do not understand why it is wrong. It is very difficult to describe.


    Thanks to you, I now understand how to do the manipulation, which is a relief :smile:. The manipulation itself makes sense; I just don't understand where my logic above fails, if you see what I mean.

    I'll try expressing it another way. If someone asked me,
    "Can you get the transformed Faraday components by just multiplying two Lorentz matrices together and then right-multiplying by the untransformed Faraday tensor?",
    I would say no, but if they then asked me why not, I would be stuck.
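    The failure is at least easy to confirm numerically; here is a sketch, with an example boost standing in for the Lorentz matrix and a random antisymmetric matrix for the Faraday tensor:

[code]
import numpy as np

# Example boost along x (v = 0.6c, gamma = 1.25)
gamma, beta = 1.25, 0.6
L = np.array([[ gamma,      -gamma*beta, 0, 0],
              [-gamma*beta,  gamma,      0, 0],
              [ 0,           0,          1, 0],
              [ 0,           0,          0, 1]])

M = np.random.rand(4, 4)
F = M - M.T   # stand-in for the untransformed Faraday tensor

naive   = L @ L @ F       # "multiply two Lorentz matrices, then right-multiply by F"
correct = L @ F @ L.T     # the actual transformation

assert not np.allclose(naive, correct)
[/code]

    The naive product contracts the wrong indices: in [itex]\Lambda\Lambda F[/itex] the two Lorentz matrices are contracted with each other and with the row index of F, whereas the transformation law contracts one Lorentz matrix with each index of F.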
     
  7. Apr 20, 2010 #6

    dx


    Maybe it will help a little if we say it in coordinate-free language. In spacetime, the distinction between a vector and a covector is only conceptual, in the sense that computationally we can convert a vector into a covector, or a covector into a vector, using the metric tensor [itex]g(\_,\_)[/itex]. So if we have a vector v, then the covector corresponding to it is defined as [itex]g(v,\_)[/itex], i.e. the vector [itex]v^\mu[/itex] corresponds to the covector [itex]v_\alpha[/itex] through [itex]v_\alpha = g_{\alpha\mu}v^\mu[/itex].

    Similarly, if we have a contravariant tensor [itex]F^{\alpha\beta}[/itex], then we use it by contracting it with covectors v and w thus: [itex]F^{\alpha\beta}v_\alpha w_\beta[/itex]. But, as we have seen above, [itex]v_\alpha = g_{\alpha\mu}v^\mu[/itex] and [itex]w_\beta = g_{\beta\nu}w^\nu[/itex]. So [itex]F^{\alpha\beta}v_\alpha w_\beta = F^{\alpha\beta}g_{\alpha\mu}v^\mu g_{\beta\nu}w^\nu = g_{\alpha\mu}g_{\beta\nu}F^{\alpha\beta}v^\mu w^\nu[/itex].

    So the contravariant tensor, which acts on pairs of covectors, can be made to act on their corresponding vectors by replacing it with the covariant tensor [itex]g_{\alpha\mu}g_{\beta\nu}F^{\alpha\beta} = F_{\mu\nu}[/itex].
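    For concreteness, here is what that index lowering looks like numerically (a numpy sketch; the (+,-,-,-) signature is an assumption):

[code]
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, assumed signature (+,-,-,-)

M = np.random.rand(4, 4)
F_upper = M - M.T                      # contravariant F^{alpha beta}

# F_{mu nu} = g_{alpha mu} g_{beta nu} F^{alpha beta}
F_lower = np.einsum('am,bn,ab->mn', g, g, F_upper)

# Acting on covectors with F^{alpha beta} agrees with acting on the
# corresponding vectors with F_{mu nu}:
v, w = np.random.rand(4), np.random.rand(4)
v_low, w_low = g @ v, g @ w            # v_alpha = g_{alpha mu} v^mu
assert np.isclose(np.einsum('ab,a,b->', F_upper, v_low, w_low),
                  np.einsum('mn,m,n->', F_lower, v, w))
[/code]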
     
  8. Apr 20, 2010 #7

    dx


    Just noticed the question was not about raising and lowering indices. Ignore my previous post.
     
  9. Apr 20, 2010 #8

    Fredrik


    Look at the definition of matrix multiplication again, in #2. Note that the sum is always over an index that's a column index for the matrix on the left and a row index for the matrix on the right. Since [itex]\Lambda^\mu{}_\alpha[/itex] is row [itex]\mu[/itex], column [itex]\alpha[/itex] of [itex]\Lambda[/itex], and [itex]F^{\alpha\beta}[/itex] is row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]F[/itex], the result

    [tex]\Lambda^\mu{}_\alpha F^{\alpha\beta}=(\Lambda F)^{\mu\beta}[/tex]

    follows immediately from the definition of matrix multiplication. But now look at

    [tex]\Lambda^\nu{}_\beta F^{\alpha\beta}[/tex]

    Note that the sum is over the column index of F. If you have another look at the definition of matrix multiplication, you'll see that this means that if the above is a component of the product of two matrices, one of which is F, then F must be the matrix on the left. In other words, [itex]\Lambda^\nu{}_\beta F^{\alpha\beta}=F^{\alpha\beta}(\Lambda^T){}_\beta{}^\nu=(F\Lambda^T)^{\alpha\nu}[/itex], and that is where the transpose comes from. When you understand that, the rest should be easy.
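    The same point as a numpy check (random matrices stand in for [itex]\Lambda[/itex] and F):

[code]
import numpy as np

L = np.random.rand(4, 4)   # stands in for Lambda
F = np.random.rand(4, 4)   # stands in for F

# Sum over the row index of F puts F on the right:
# Lambda^mu_alpha F^{alpha beta} = (L F)^{mu beta}
assert np.allclose(np.einsum('ma,ab->mb', L, F), L @ F)

# Sum over the column index of F puts F on the left (and transposes L):
# Lambda^nu_beta F^{alpha beta} = (F L^T)^{alpha nu}
assert np.allclose(np.einsum('nb,ab->an', L, F), F @ L.T)
[/code]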

    Also note that you should LaTeX [itex]\Lambda^\mu{}_\nu[/itex] as \Lambda^\mu{}_\nu, so that the column index appears diagonally to the right below the row index. And check out the comment about the inverse here to see why the horizontal position of the indices matters.
     
  10. Apr 20, 2010 #9
    I finally get it! It's a miracle.

    Thank you for your help.
     