
Infinitesimal Lorentz transform and its inverse, tensors

  Sep 7, 2013 #1

    fluidistic
    Gold Member

    1. The problem statement, all variables and given/known data
    The problem can be found in Jackson's book.
    An infinitesimal Lorentz transform and its inverse can be written in the form ##x^{'\alpha}=(\eta ^{\alpha \beta}+\epsilon ^{\alpha \beta})x_{\beta}## and ##x^\alpha = (\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}) x'_\beta##, where ##\eta _{\alpha \beta}## is the Minkowski metric and the epsilons are infinitesimal.
    1) Demonstrate, using the definition of the inverse, that ##\epsilon ^{'\alpha \beta}=-\epsilon ^{\alpha \beta}##.
    2) Demonstrate, using the conservation of the norm, that ##\epsilon ^{\alpha \beta}=-\epsilon ^{\beta \alpha}##.


    2. Relevant equations
    Not really sure, but I used an equation found a few pages earlier in the book: ##\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}##.


    3. The attempt at a solution
    1) I used the relevant equation and wrote that it's equal to ##\frac{\partial x^{'\alpha }}{\partial x^\beta} \frac{\partial x^{'\beta}}{\partial x^\alpha}\epsilon ^{\alpha \beta}##. Then I calculated the partial derivatives using the 2 equations given in the problem statement, made an approximation (neglected terms with epsilons multiplied together, since they are "infinitesimals"), and reached ##\epsilon ^{'\alpha \beta}\approx \frac{\eta^{\alpha \beta}}{\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}}\cdot \epsilon ^{\alpha \beta}##. I don't see how the first factor can equal -1 here... so I guess my approach is wrong. Or, if it's right, I still don't see how I can show that the first factor equals -1. Thanks for any comment.
     
  Sep 7, 2013 #2

    vela
    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    For #1, apply the transformation followed by its inverse and use the fact that the product should yield the identity.
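    As a numerical sanity check of this hint, here is a minimal numpy sketch (the particular small antisymmetric ##\epsilon^{\alpha\beta}## is an arbitrary example, not from Jackson). It builds the matrix that sends ##x^\gamma## to ##x'^\alpha=(\eta^{\alpha\beta}+\epsilon^{\alpha\beta})x_\beta##, inverts it exactly, and reads off ##\epsilon'^{\alpha\beta}##:
[code]
import numpy as np

# Minkowski metric with signature (+,-,-,-); numerically eta is its own inverse.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A small antisymmetric eps^{alpha beta} (arbitrary boost/rotation mix).
eps = 1e-4 * np.array([[ 0.,  1.,  2.,  0.],
                       [-1.,  0.,  0.,  3.],
                       [-2.,  0.,  0.,  0.],
                       [ 0., -3.,  0.,  0.]])

# x'^a = (eta^{ab} + eps^{ab}) x_b = (eta + eps) @ eta @ x, acting on contravariant x.
L = (eta + eps) @ eta

# Invert exactly, then read eps' off of  L^{-1} = (eta + eps') @ eta.
Linv = np.linalg.inv(L)
eps_prime = Linv @ eta - eta          # uses eta @ eta = identity

# eps' should equal -eps up to the O(eps^2) terms dropped in the approximation.
print(np.max(np.abs(eps_prime + eps)))    # ~1e-7, i.e. of order eps**2
[/code]
    The leftover of order ##\epsilon^2## is exactly what gets discarded when the transformation is treated as infinitesimal.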
     
  Sep 7, 2013 #3

    fluidistic
    Gold Member

    I see... do you mean that I should perform ##x^{'\alpha}x^{\alpha}=\text{Identity}##?
    Also I don't see what's wrong with what I did in my attempt.
    By the way, I don't understand how tensors "work" yet; I am self-studying this topic right now.

    Edit: Never mind my first question, the answer is no; the expression I wrote makes no sense...
     
    Last edited: Sep 7, 2013
  Sep 7, 2013 #4

    vela
    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    There's a problem with the equation you started with. A dummy index should only appear twice in each product. You wrote
    $$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}.$$ The indices ##\alpha## and ##\beta## appear only once on the lefthand side, so they should appear only once on the righthand side. What you probably meant was something like
    $$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\gamma} \frac{\partial x^{'\beta}}{\partial x^\delta} \epsilon ^{\gamma \delta}.$$ Notice how ##\gamma## and ##\delta## appear in pairs, which implies a summation over those indices.

    This relationship, however, doesn't apply here. It relates the components of a tensor in one frame to the components of the same tensor in a different frame. In this problem, ##\epsilon## and ##\epsilon'## aren't the same tensor. One generates a Lorentz transformation, and the other, the inverse Lorentz transformation.
     
    Last edited: Sep 7, 2013
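    In case it helps to see the summation convention as explicit loops, here is a small numpy sketch. The Jacobian J and the tensor components are just random placeholder numbers, and, as vela says, this transformation law is not the right tool for the actual problem; the only point is which indices get summed:
[code]
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(4, 4))      # placeholder Jacobian J[a, g] = d x'^a / d x^g
eps = rng.normal(size=(4, 4))    # placeholder rank-2 tensor components eps[g, d]

# Correct form: the dummy indices gamma, delta each appear twice and are summed,
# while the free indices alpha, beta appear exactly once on each side.
eps_transformed = np.einsum('ag,bd,gd->ab', J, J, eps)

# The same contraction written as explicit sums over the dummy indices:
check = np.zeros((4, 4))
for a in range(4):
    for b in range(4):
        for g in range(4):
            for d in range(4):
                check[a, b] += J[a, g] * J[b, d] * eps[g, d]

print(np.allclose(eps_transformed, check))   # True
[/code]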
  Sep 7, 2013 #5

    fluidistic
    Gold Member

    I see... thanks. Yes, this is what I did, and I replaced gamma and delta by alpha and beta respectively... OK, I didn't know that couldn't be done.
     
  Sep 8, 2013 #6

    vanhees71
    Science Advisor
    2016 Award

    I'd start from the Lorentz-transformation property of the representing matrices:
    [tex]\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}.[/tex]
    Now write
    [tex]{\Lambda^{\mu}}_{\rho}=\delta_{\rho}^{\mu}+{\epsilon^{\mu}}_{\rho},[/tex]
    plug this into the above defining equation for LT matrices and expand up to first-order terms in [itex]{\epsilon^{\mu}}_{\rho}[/itex]. Finally, note that by definition one applies the index-dragging rule not only to tensor components but also to Lorentz matrices. (A Lorentz matrix does not form tensor components, while the infinitesimal generators do, but that doesn't matter for your problem here.) The only additional thing you need to know is that
    [tex]\epsilon_{\rho \sigma}=\eta_{\rho \mu} {\epsilon^{\mu}}_{\sigma}.[/tex]
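    A quick numerical sketch of this route (the helper name and the particular matrices below are only an illustration; ##\eta## is taken as diag(1,-1,-1,-1), which is numerically its own inverse): an antisymmetric ##\epsilon_{\rho\sigma}## satisfies the defining relation to first order, while a symmetric one already fails at first order.
[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

def defect(eps_lower):
    """Max |Lambda^T eta Lambda - eta| for Lambda^mu_rho = delta^mu_rho + eps^mu_rho,
    where eps^mu_rho = eta^{mu nu} eps_{nu rho} (the index-dragging rule)."""
    eps_mixed = eta @ eps_lower            # raise the first index
    Lam = np.eye(4) + eps_mixed
    return np.max(np.abs(Lam.T @ eta @ Lam - eta))

A = 1e-4 * np.array([[ 0.,  1.,  2.,  0.],
                     [-1.,  0.,  0.,  3.],
                     [-2.,  0.,  0.,  0.],
                     [ 0., -3.,  0.,  0.]])   # antisymmetric eps_{rho sigma}
S = 1e-4 * np.eye(4)                          # a symmetric counterexample

print(defect(A))   # ~1e-7: only the dropped O(eps^2) piece survives
print(defect(S))   # ~2e-4: violates the relation already at first order
[/code]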
     
  Sep 8, 2013 #7

    andrien

    I will say you will not get anywhere using this. Just follow the idea Jackson gave you. For instance, when he says to use conservation of the norm, use ##x^\alpha x_\alpha = x'^\alpha x'_\alpha##. You will have to drop some second-order terms here to show ##\epsilon^{\alpha \beta} = -\epsilon^{\beta \alpha}##. The first one is simple manipulation.
     
  Sep 8, 2013 #8

    WannabeNewton
    Science Advisor

    Hi there fluidistic! For the first part, note that all you have to do is a simple substitution. By definition we have ##x'_{\beta} = (\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##, so plug this into ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})x'_{\beta}## to get ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})(\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##. Expand this product, dropping terms of ##O(\epsilon^2)##, and at the very end of the calculation use the fact that ##x^{\alpha}## is arbitrary. The other parts should be very similar.
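    For what it's worth, this substitution can also be checked numerically with einsum. A minimal sketch, assuming the answer ##\epsilon'^{\alpha\beta}=-\epsilon^{\alpha\beta}## and an arbitrary small antisymmetric ##\epsilon^{\alpha\beta}##, to see the product collapse to ##\delta^\alpha{}_\gamma## up to ##O(\epsilon^2)##:
[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

eps = 1e-4 * np.array([[ 0.,  1.,  2.,  0.],
                       [-1.,  0.,  0.,  3.],
                       [-2.,  0.,  0.,  0.],
                       [ 0., -3.,  0.,  0.]])   # eps^{alpha beta}
eps_prime = -eps                                # the claimed inverse generator
eps_lower = eta @ eps @ eta                     # eps_{beta gamma} = eta_{beta mu} eps^{mu nu} eta_{nu gamma}

# (eta^{alpha beta} + eps'^{alpha beta}) (eta_{beta gamma} + eps_{beta gamma}),
# summed over the dummy index beta.
product = np.einsum('ab,bg->ag', eta + eps_prime, eta + eps_lower)

# This should be delta^alpha_gamma up to the dropped O(eps^2) cross term.
print(np.max(np.abs(product - np.eye(4))))      # ~1e-7
[/code]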
     
  Sep 8, 2013 #9

    fluidistic
    Gold Member

    Thanks guys for all the help. Unfortunately I'm still too sloppy with tensors to manipulate them properly.
    Here's what I reached using WBN's suggestion: ##x^\alpha \approx (\delta ^\alpha _\gamma +\eta ^{\alpha \beta}\epsilon _{\beta \gamma}+\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma})x^\gamma##. By looking at this equation I have a feeling that gamma must equal alpha and that what is in parentheses must equal the identity. This would imply that ##\eta ^{\alpha \beta}\epsilon _{\beta \gamma}=-\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma}##. And I guess it's by working on that equation that I will get the desired result.
     
  Sep 8, 2013 #10

    WannabeNewton
    Science Advisor

    You're pretty much there. Recall that ##\eta^{\alpha\beta}## converts covariant indices into contravariant indices, and vice versa, upon contraction (more precisely, it is what we call a musical isomorphism and serves as a map between elements of the tangent space and its dual). So ##\eta^{\alpha \beta}\epsilon_{\beta \gamma} = \epsilon^{\alpha}{}{}_{\gamma}## and similarly ##\epsilon'^{\alpha \beta}\eta_{\beta \gamma} = \epsilon'^{\alpha}{}{}_{\gamma}##.

    So now you have ##\delta^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon'^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon^{\alpha}{}{}_{\gamma}x^{\gamma} + O(\epsilon^2) = x^{\alpha}##. The result should then be immediate.
     
  Sep 9, 2013 #11

    fluidistic
    Gold Member

    I see.
    So I reach ##(\delta ^\alpha _\gamma +\epsilon ^\alpha _\gamma+\epsilon '^\alpha _\gamma )x^\gamma \approx x^\alpha##. The only way for both sides to be approximately equal requires that ##\gamma =\alpha## and that what is in parentheses equals the identity, right?
    In this case I reach ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha##. But I still don't see how to reach the final result. I guess I must contract these tensors so as to get only upper indices alpha and beta.
    If I multiply the last expression by ##\eta ^{\beta \alpha}##, I reach ##\epsilon '^{\beta \alpha}=-\epsilon ^{\beta \alpha}##. Now if this tensor were symmetric then I would reach the desired result, but how do I know whether it is symmetric?
    I'm sure I made some error(s)...


    Edit: Never mind, I think I reached the final result: if I multiply both sides of ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha## by ##\eta ^{\alpha \beta}##, but multiplying on the RIGHT and not the left, this yields the result. Is that correct?
     
  Sep 9, 2013 #12

    vela
    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    Remember that you're summing over repeated indices. So you really have
    $$\sum_\gamma \delta^\alpha{}_\gamma x^\gamma + \sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = x^\alpha.$$ The first term just leaves you with ##x^\alpha## after the summation, so you end up with
    $$\sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = 0.$$ Now use the fact that this has to hold for arbitrary ##x^\mu##.
     
  Sep 9, 2013 #13

    fluidistic
    Gold Member

    Ah right, thanks!
    This leaves me with ##\epsilon ^\alpha _\gamma=-\epsilon ' ^\alpha _\gamma##. By multiplying each side from the right by ##\eta ^{\gamma \beta}##, I do reach the final result.
     
  Sep 9, 2013 #14

    fluidistic
    Gold Member

    I attempted part 2) but I get some nonsense.
    As andrien pointed out, conservation of norm is ##x^\alpha x_\alpha=x'^\alpha x'_\alpha##.
    What I did:
    ##x'^\alpha x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta})x_\beta x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}x'^\gamma =(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}(\eta^{\gamma \omega }+ \epsilon ^{\gamma \omega} ) x_\omega## ## = (\eta^{\alpha \beta } x_\beta + \epsilon ^{\alpha \beta} x_\beta )(\delta ^\omega _\alpha x_\omega + \epsilon ^\omega _\alpha x_\omega )=\eta ^{\alpha \beta}x_\beta \delta ^\omega _\alpha x_\omega +\eta ^{\alpha \beta}x_\beta \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega +O(\epsilon ^2)## ##\approx x^\alpha \delta ^\omega _\alpha x_\omega + x ^\alpha \epsilon ^\omega _\alpha x_\omega + \epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega##.
    Here on my draft I rewrote that last expression with explicit sums (2 double sums, 1 triple sum) just to simplify away the Kronecker deltas.
    Then I got rid of the sums again, and I found that it equals ##x^\alpha x_\alpha +x^\alpha \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta}x_\beta x_\alpha##. Now, using the conservation of the norm, I get that ##x^\alpha \epsilon ^\omega _\alpha x_\omega =-\epsilon ^{\alpha \beta}x_\beta x_\alpha##.
    But here I realized that this is nonsense: if ##x_\beta## is a 1x4 covector and epsilon a 4x3 matrix, then ##x _\omega## should be a 3x1 vector... but it is a 1x3 covector. Therefore the left-hand side doesn't make any sense.
    I don't see where I went wrong though.
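    Setting the matrix-shape worry aside for a moment, the norm statement of part 2 itself is easy to check numerically. A minimal sketch (arbitrary small antisymmetric ##\epsilon^{\alpha\beta}## and a random ##x^\alpha##, neither taken from Jackson): with an antisymmetric ##\epsilon## the norm changes only at ##O(\epsilon^2)##, while with a symmetric one it already changes at first order.
[code]
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

eps = 1e-4 * np.array([[ 0.,  1.,  2.,  0.],
                       [-1.,  0.,  0.,  3.],
                       [-2.,  0.,  0.,  0.],
                       [ 0., -3.,  0.,  0.]])   # antisymmetric eps^{alpha beta}

rng = np.random.default_rng(1)
x_up = rng.normal(size=4)       # arbitrary contravariant x^alpha
x_dn = eta @ x_up               # x_alpha = eta_{alpha beta} x^beta

# x'^alpha = (eta^{alpha beta} + eps^{alpha beta}) x_beta
xp_up = (eta + eps) @ x_dn

norm  = x_up @ x_dn             # x^alpha x_alpha
normp = xp_up @ eta @ xp_up     # x'^alpha x'_alpha

print(abs(normp - norm))        # of order eps**2: conserved to first order

# A symmetric eps spoils the norm already at first order:
eps_sym = 1e-4 * np.eye(4)
yp_up = (eta + eps_sym) @ x_dn
print(abs(yp_up @ eta @ yp_up - norm))   # of order eps, not eps**2
[/code]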
     
  Sep 9, 2013 #15

    WannabeNewton
    Science Advisor

    ##\epsilon^{\alpha}{}{}_{\beta}## has a 4x4 matrix representation, not 4x3.
     
  Sep 9, 2013 #16

    fluidistic
    Gold Member

    Oh you're right.
    But I still have the same problem: the left-hand side of the last equation would be (1x4)x(4x4)x(1x4), and the dimensions of the last factor don't match. I seem to have a covector multiplied by a covector instead of a covector multiplied by a vector.
     
  Sep 9, 2013 #17

    WannabeNewton
    Science Advisor

    What you have is (1x4)x(4x4)x(4x1) = 1x1 on the left hand side, which is fine. The right hand side is the same thing once you raise the index of one of the ##x_{\alpha}## and lower the corresponding index on ##\epsilon^{\alpha\beta}##. If you want to represent ##\epsilon^{\alpha\beta}## as a matrix then it has to be in the form ##\epsilon^{\alpha}{}{}_{\beta}##.
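    In numpy terms, the shape bookkeeping described here looks like this (placeholder numbers only):
[code]
import numpy as np

row = np.arange(4.0).reshape(1, 4)   # a covector x_alpha written as a row
M   = np.eye(4)                      # eps^alpha_beta represented as a 4x4 matrix
col = np.arange(4.0).reshape(4, 1)   # a vector x^beta written as a column

print((row @ M @ col).shape)         # (1, 1): covector . matrix . vector is a single number

try:
    row @ M @ row                    # covector . matrix . covector: shapes don't line up
except ValueError:
    print("shape mismatch, as expected")
[/code]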
     
  Sep 9, 2013 #18

    fluidistic
    Gold Member

    I see, thank you. I'll need to digest this; I'll come back to it tomorrow.
     
  Sep 10, 2013 #19

    vanhees71
    Science Advisor
    2016 Award

    I don't understand the trouble you're having with this problem. Just expand
    [tex]\eta_{\mu \nu} \left (\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho} \right) \left (\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma} \right) \stackrel{!}{=}\eta_{\rho \sigma}.[/tex]
    up to first order in [itex]\epsilon[/itex], and you'll find that [itex]\epsilon_{\mu \nu}=-\epsilon_{\nu \mu}[/itex].
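    For reference, that expansion spelled out (using the index-dragging rule ##\epsilon_{\rho\sigma}=\eta_{\rho\mu}{\epsilon^{\mu}}_{\sigma}## from post #6) reads
    [tex]\eta_{\mu \nu} \left(\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho}\right)\left(\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma}\right) = \eta_{\rho \sigma} + \eta_{\mu \sigma}{\epsilon^{\mu}}_{\rho} + \eta_{\rho \nu}{\epsilon^{\nu}}_{\sigma} + \mathcal{O}(\epsilon^2) = \eta_{\rho \sigma} + \epsilon_{\sigma \rho} + \epsilon_{\rho \sigma} + \mathcal{O}(\epsilon^2),[/tex]
    so requiring this to equal ##\eta_{\rho \sigma}## forces ##\epsilon_{\rho \sigma} = -\epsilon_{\sigma \rho}##.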
     
  Sep 10, 2013 #20

    fluidistic
    Gold Member

    Hmm, but I wouldn't be using the conservation of the norm by doing so, or am I wrong?
     