
Infinitesimal Lorentz transform and its inverse, tensors

  • Thread starter fluidistic
  • Start date

fluidistic

Gold Member
1. Homework Statement
The problem can be found in Jackson's book.
An infinitesimal Lorentz transformation and its inverse can be written in the form ##x'^{\alpha}=(\eta ^{\alpha \beta}+\epsilon ^{\alpha \beta})x_{\beta}## and ##x^\alpha = (\eta ^{\alpha \beta}+\epsilon'^{\alpha \beta}) x'_\beta##, where ##\eta _{\alpha \beta}## is the Minkowski metric and the epsilons are infinitesimal.
1) Demonstrate, using the definition of the inverse, that ##\epsilon'^{\alpha \beta}=-\epsilon ^{\alpha \beta}##.
2) Demonstrate, using the conservation of the norm, that ##\epsilon ^{\alpha \beta}=-\epsilon ^{\beta \alpha}##.
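Neither claim is hard to check numerically before diving into the index gymnastics. The following sketch (not part of the thread; numpy and the boost velocity ##\beta = 10^{-6}## are arbitrary choices of mine) builds an infinitesimal boost along x and verifies both parts up to ##O(\epsilon^2)##:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

# A hypothetical infinitesimal boost along x; beta = 1e-6 is an arbitrary choice.
beta = 1e-6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = -gamma * beta

eps = Lam - np.eye(4)                     # epsilon in mixed-index form
eps_prime = np.linalg.inv(Lam) - np.eye(4)

# Part 1: epsilon' = -epsilon, up to terms of order epsilon^2 (~1e-12 here)
print(np.max(np.abs(eps_prime + eps)))

# Part 2: epsilon with both indices lowered is antisymmetric, again to O(epsilon^2)
eps_lower = eta @ eps
print(np.max(np.abs(eps_lower + eps_lower.T)))
```

Both printed residuals are of order ##\epsilon^2 \sim 10^{-12}##, while the entries of ##\epsilon## itself are of order ##10^{-6}##, so the relations really do hold to first order.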


2. Homework Equations
Not really sure, but I used an equation found a few pages earlier in the book: ##\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}##.


3. The Attempt at a Solution
1) I used the relevant equation and wrote that it equals ##\frac{\partial x^{'\alpha }}{\partial x^\beta} \frac{\partial x^{'\beta}}{\partial x^\alpha}\epsilon ^{\alpha \beta}##. Then I calculated the partial derivatives using the two equations given in the problem statement, made an approximation (discarding products of epsilons, since they are "infinitesimals"), and reached ##\epsilon ^{'\alpha \beta}\approx \frac{\eta^{\alpha \beta}}{\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}}\cdot \epsilon ^{\alpha \beta}##. I don't see how the first factor can equal -1 here... So I guess my approach is wrong. Or, if it's right, I still don't see how to show that the first factor equals -1. Thanks for any comments.
 

vela

Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
For #1, apply the transformation followed by its inverse, and use the fact that the product should yield the identity.
 

fluidistic

Gold Member
For #1, apply the transformation followed by its inverse, and use the fact that the product should yield the identity.
I see... do you mean that I should compute ##x^{'\alpha}x^{\alpha}=\text{Identity}##?
Also, I don't see what's wrong with what I did in my attempt.
By the way, I don't understand how tensors "work" yet; I'm self-studying this topic right now.

Edit: Never mind my first question; the answer is no, the expression I wrote makes no sense...
 

vela

Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
There's a problem with the equation you started with. A dummy index should only appear twice in each product. You wrote
$$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}.$$ The indices ##\alpha## and ##\beta## appear only once on the lefthand side, so they should appear only once on the righthand side. What you probably meant was something like
$$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\gamma} \frac{\partial x^{'\beta}}{\partial x^\delta} \epsilon ^{\gamma \delta}.$$ Notice how ##\gamma## and ##\delta## appear in pairs, which implies a summation over those indices.

This relationship, however, doesn't apply here. It relates the components of a tensor in one frame to the components of the same tensor in a different frame. In this problem, ##\epsilon## and ##\epsilon'## aren't the same tensor. One generates a Lorentz transformation, and the other, the inverse Lorentz transformation.
 

fluidistic

Gold Member
I see... thanks. Yes, this is what I did, and then I replaced gamma and delta by alpha and beta respectively... OK, I didn't know that couldn't be done.
 

vanhees71

Science Advisor
Insights Author
Gold Member
I'd start from the Lorentz-transformation property of the representing matrices:
[tex]\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}.[/tex]
Now write
[tex]{\Lambda^{\mu}}_{\rho}=\delta_{\rho}^{\mu}+{\epsilon^{\mu}}_{\rho},[/tex]
plug this into the above defining equation for LT matrices and expand up to the first-order terms in [itex]{\epsilon^{\mu}}_{\rho}[/itex]. Finally, note that by definition one applies the index-dragging rule not only to tensor components but also to Lorentz matrices. Note that a Lorentz matrix does not form tensor components, but the infinitesimal generators do; that doesn't matter for your problem here, though. The only additional thing you need to know is that
[tex]\epsilon_{\rho \sigma}=\eta_{\rho \mu} {\epsilon^{\mu}}_{\sigma}.[/tex]
 
andrien

1) I used the relevant equation and wrote that it equals ##\frac{\partial x^{'\alpha }}{\partial x^\beta} \frac{\partial x^{'\beta}}{\partial x^\alpha}\epsilon ^{\alpha \beta}##. Then I calculated the partial derivatives using the two equations given in the problem statement, made an approximation (discarding products of epsilons, since they are "infinitesimals"), and reached ##\epsilon ^{'\alpha \beta}\approx \frac{\eta^{\alpha \beta}}{\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}}\cdot \epsilon ^{\alpha \beta}##. I don't see how the first factor can equal -1 here... So I guess my approach is wrong. Or, if it's right, I still don't see how to show that the first factor equals -1.

I'd say you won't get anywhere using this. Just follow the idea Jackson gave you: when he says to use conservation of the norm, use ##x^\alpha x_\alpha = x'^\alpha x'_\alpha##. You will have to drop some second-order terms here to show ##\epsilon^{\alpha\beta} = -\epsilon^{\beta\alpha}##. The first part is simple manipulation.
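As a sanity check on this starting point, here is a quick numerical sketch (not from the thread; the small x-boost and the sample event are arbitrary choices of mine) showing that an exact Lorentz matrix conserves ##x^\alpha x_\alpha##:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, signature (+,-,-,-)

# A hypothetical small boost along x (beta is an arbitrary choice).
beta = 1e-6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = -gamma * beta

x = np.array([1.0, 0.2, -0.3, 0.5])   # an arbitrary event x^alpha
xp = Lam @ x                           # x'^alpha

norm = lambda v: v @ eta @ v           # x^alpha x_alpha
# The two norms agree up to floating-point roundoff.
print(abs(norm(xp) - norm(x)))
```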
 

WannabeNewton

Science Advisor
Hi there fluidistic! For the first part, note that all you have to do is a simple substitution. By definition we have ##x'_{\beta} = (\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##, so plug this into ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})x'_{\beta}## to get ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})(\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##. Expand the product, drop the ##O(\epsilon^2)## terms, and at the very end of the calculation use the fact that ##x^{\alpha}## is arbitrary. The other parts should be very similar.
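This expansion can be checked numerically. In the sketch below (my own; the boost is an arbitrary choice, and the part-1 result ##\epsilon' = -\epsilon## is taken as input rather than derived), the product ##(\eta^{\alpha\beta}+\epsilon'^{\alpha\beta})(\eta_{\beta\gamma}+\epsilon_{\beta\gamma})## differs from ##\delta^\alpha{}_\gamma## only at ##O(\epsilon^2)##:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # numerically, upper- and lower-index eta coincide

# Hypothetical infinitesimal boost along x (beta is an arbitrary choice).
beta = 1e-6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
Lam = np.eye(4)
Lam[0, 0] = Lam[1, 1] = gamma
Lam[0, 1] = Lam[1, 0] = -gamma * beta

eps_mixed = Lam - np.eye(4)              # epsilon^alpha_beta
eps_upper = eps_mixed @ eta              # epsilon^{alpha beta} = epsilon^alpha_gamma eta^{gamma beta}
eps_lower = eta @ eps_mixed              # epsilon_{alpha beta} = eta_{alpha gamma} epsilon^gamma_beta
eps_prime_upper = -eps_upper             # part-1 result, epsilon' = -epsilon

# (eta^{ab} + eps'^{ab})(eta_{bc} + eps_{bc}) = delta^a_c + O(eps^2)
prod = (eta + eps_prime_upper) @ (eta + eps_lower)
print(np.max(np.abs(prod - np.eye(4))))  # ~1e-12, i.e. O(epsilon^2)
```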
 

fluidistic

Gold Member
Thanks guys for all the help. Unfortunately I'm still too sloppy with tensors to manipulate them confidently.
Here's what I reached using WBN's suggestion: ##x^\alpha \approx (\delta ^\alpha _\gamma +\eta ^{\alpha \beta}\epsilon _{\beta \gamma}+\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma})x^\gamma##. Looking at this equation, I have a feeling that ##\gamma## must equal ##\alpha## and that what is in parentheses must be the identity. This would imply that ##\eta ^{\alpha \beta}\epsilon _{\beta \gamma}=-\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma}##. And I guess it's by working on that equation that I will get the desired result.
 

WannabeNewton

Science Advisor
You're pretty much there. Recall that ##\eta^{\alpha\beta}## converts covariant indices into contravariant indices, and vice versa, upon contraction (more precisely, it is what we call a musical isomorphism and serves as a map between elements of the tangent space and its dual). So ##\eta^{\alpha \beta}\epsilon_{\beta \gamma} = \epsilon^{\alpha}{}{}_{\gamma}## and similarly ##\epsilon'^{\alpha \beta}\eta_{\beta \gamma} = \epsilon'^{\alpha}{}{}_{\gamma}##.

So now you have ##\delta^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon'^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon^{\alpha}{}{}_{\gamma}x^{\gamma} + O(\epsilon^2) = x^{\alpha}##. The result should then be immediate.
 

fluidistic

Gold Member
You're pretty much there. Recall that ##\eta^{\alpha\beta}## converts covariant indices into contravariant indices, and vice versa, upon contraction (more precisely, it is what we call a musical isomorphism and serves as a map between elements of the tangent space and its dual). So ##\eta^{\alpha \beta}\epsilon_{\beta \gamma} = \epsilon^{\alpha}{}{}_{\gamma}## and similarly ##\epsilon'^{\alpha \beta}\eta_{\beta \gamma} = \epsilon'^{\alpha}{}{}_{\gamma}##.

So now you have ##\delta^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon'^{\alpha}{}{}_{\gamma}x^{\gamma} + \epsilon^{\alpha}{}{}_{\gamma}x^{\gamma} + O(\epsilon^2) = x^{\alpha}##. The result should then be immediate.
I see.
So I reach ##(\delta ^\alpha _\gamma +\epsilon ^\alpha _\gamma+\epsilon '^\alpha _\gamma )x^\gamma \approx x^\alpha##. The only way for both sides to be approximately equal requires that ##\gamma =\alpha## and that what is in parentheses is the identity, right?
In that case I reach ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha##. But I still don't see how to reach the final result. I guess I must contract these tensors so as to get only superscripts ##\alpha## and ##\beta##.
If I multiply the last expression by ##\eta ^{\beta \alpha}##, I reach ##\epsilon '^{\beta \alpha}=-\epsilon ^{\beta \alpha}##. Now if this tensor were symmetric I would reach the desired result, but how do I know whether it is symmetric?
I'm sure I made some error(s)...


Edit: Never mind, I think I reached the final result: if I multiply both sides of ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha## by ##\eta ^{\alpha \beta}##, multiplying from the RIGHT and not the left, this yields the result. Is that correct?
 

vela

Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
I see.
So I reach that ##(\delta^\alpha{}_\gamma +\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma \approx x^\alpha##. The only way for both sides to be approximately equal requires that ##\gamma =\alpha## and that what is in parenthesis is worth the identity right?
Remember that you're summing over repeated indices. So you really have
$$\sum_\gamma \delta^\alpha{}_\gamma x^\gamma + \sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = x^\alpha.$$ The first term just leaves you with ##x^\alpha## after the summation, so you end up with
$$\sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = 0.$$ Now use the fact that this has to hold for arbitrary ##x^\mu##.
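The "arbitrary ##x^\mu##" step can be made concrete: if ##M^\alpha{}_\gamma x^\gamma = 0## for every ##x##, then feeding in the basis vectors picks out each column of ##M## in turn, so every entry must vanish. A small sketch (mine; the matrix ##M## is a made-up stand-in for ##\epsilon + \epsilon'##):

```python
import numpy as np

# A made-up 4x4 matrix standing in for (epsilon + epsilon')^alpha_gamma.
M = np.arange(16, dtype=float).reshape(4, 4)

# M @ e_gamma is exactly the gamma-th column of M, so demanding M @ x == 0
# for all x forces every column, hence every entry, of M to vanish.
for g in range(4):
    e = np.zeros(4)
    e[g] = 1.0
    assert np.allclose(M @ e, M[:, g])
```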
 

fluidistic

Gold Member
Remember that you're summing over repeated indices. So you really have
$$\sum_\gamma \delta^\alpha{}_\gamma x^\gamma + \sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = x^\alpha.$$ The first term just leaves you with ##x^\alpha## after the summation, so you end up with
$$\sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = 0.$$ Now use the fact that this has to hold for arbitrary ##x^\mu##.
Ah right, thanks!
This leaves me with ##\epsilon ^\alpha _\gamma=-\epsilon ' ^\alpha _\gamma##. By multiplying each side from the right by ##\eta ^{\gamma \beta}##, I do reach the final result.
 

fluidistic

Gold Member
I attempted part 2) but I got some nonsense.
As andrien pointed out, conservation of the norm is ##x^\alpha x_\alpha=x'^\alpha x'_\alpha##.
What I did:
##x'^\alpha x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta})x_\beta x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}x'^\gamma =(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}(\eta^{\gamma \omega }+ \epsilon ^{\gamma \omega} ) x_\omega## ## = (\eta^{\alpha \beta } x_\beta + \epsilon ^{\alpha \beta} x_\beta )(\delta ^\omega _\alpha x_\omega + \epsilon ^\omega _\alpha x_\omega )=\eta ^{\alpha \beta}x_\beta \delta ^\omega _\alpha x_\omega +\eta ^{\alpha \beta}x_\beta \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega +O(\epsilon ^2)## ##\approx x^\alpha \delta ^\omega _\alpha x_\omega + x ^\alpha \epsilon ^\omega _\alpha x_\omega + \epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega##.
Here on my draft I rewrote that last expression with sums (two double sums, one triple sum) just to simplify the Kronecker deltas.
Then I got rid of the sums again, and I found that it equals ##x^\alpha x_\alpha +x^\alpha \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta}x_\beta x_\alpha##. Now, using the conservation of the norm, I got ##x^\alpha \epsilon ^\omega _\alpha x_\omega =-\epsilon ^{\alpha \beta}x_\beta x_\alpha##.
But here I realized that this is nonsense: if ##x_\beta## is a 1x4 covector and epsilon a 4x3 matrix, then ##x _\omega## should be a 3x1 vector... but it is a 1x3 covector. Therefore the left-hand side doesn't make any sense.
I don't see where I went wrong, though.
 

WannabeNewton

Science Advisor
##\epsilon^{\alpha}{}{}_{\beta}## has a 4x4 matrix representation, not 4x3.
 

fluidistic

Gold Member
##\epsilon^{\alpha}{}{}_{\beta}## has a 4x4 matrix representation, not 4x3.
Oh, you're right.
But I still have the same problem: the left-hand side of the last equation would be (1x4)x(4x4)x(1x4), where the dimensions of the last factor don't match. I seem to have a covector multiplied by a covector instead of a covector multiplied by a vector.
 

WannabeNewton

Science Advisor
What you have is (1x4)x(4x4)x(4x1) = 1x1 on the left hand side, which is fine. The right hand side is the same thing once you raise the index of one of the ##x_{\alpha}## and lower the corresponding index on ##\epsilon^{\alpha\beta}##. If you want to represent ##\epsilon^{\alpha\beta}## as a matrix then it has to be in the form ##\epsilon^{\alpha}{}{}_{\beta}##.
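A tiny sketch (mine, with made-up numbers) of this dimension count: the contraction ##x_\alpha \epsilon^{\alpha}{}{}_{\beta} x^\beta## is (1x4)(4x4)(4x1), i.e. a single number.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
x_up = np.array([1.0, 0.2, -0.3, 0.5])   # x^alpha, an arbitrary event (column vector)
x_down = eta @ x_up                       # x_alpha (row covector)

# A stand-in mixed-index epsilon^alpha_beta with small, made-up entries.
eps_mixed = 1e-6 * np.arange(16, dtype=float).reshape(4, 4)

# x_alpha epsilon^alpha_beta x^beta : (1x4)(4x4)(4x1) -> a single number
s = x_down @ eps_mixed @ x_up
print(np.ndim(s))  # 0, i.e. a scalar
```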
 

fluidistic

Gold Member
What you have is (1x4)x(4x4)x(4x1) = 1x1 on the left hand side, which is fine. The right hand side is the same thing once you raise the index of one of the ##x_{\alpha}## and lower the corresponding index on ##\epsilon^{\alpha\beta}##. If you want to represent ##\epsilon^{\alpha\beta}## as a matrix then it has to be in the form ##\epsilon^{\alpha}{}{}_{\beta}##.
I see, thank you. I'll need to digest this; I'll come back to it tomorrow.
 

vanhees71

Science Advisor
Insights Author
Gold Member
I don't understand the trouble you're having with this problem. Just expand
[tex]\eta_{\mu \nu} \left (\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho} \right) \left (\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma} \right) \stackrel{!}{=}\eta_{\rho \sigma}.[/tex]
up to first order in [itex]\epsilon[/itex], and you'll find that [itex]\epsilon_{\mu \nu}=-\epsilon_{\nu \mu}[/itex].
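This statement also runs in reverse: if ##\epsilon_{\mu\nu}## is antisymmetric, then ##\delta^{\mu}_{\nu} + {\epsilon^{\mu}}_{\nu}## preserves the metric to first order. A numerical sketch (mine; the antisymmetric entries are arbitrary made-up values):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric

# Hypothetical antisymmetric epsilon_{mu nu} with small, made-up entries.
a, b, c = 1e-6, 2e-6, -3e-6
eps_lower = np.array([[ 0.0,   a,   b, 0.0],
                      [  -a, 0.0, 0.0,   c],
                      [  -b, 0.0, 0.0, 0.0],
                      [ 0.0,  -c, 0.0, 0.0]])
eps_mixed = eta @ eps_lower          # raise the first index (eta^{-1} == eta numerically)

Lam = np.eye(4) + eps_mixed          # Lambda = delta + epsilon

# Lambda^T eta Lambda differs from eta only at O(epsilon^2): the first-order
# terms cancel precisely because eps_lower is antisymmetric.
print(np.max(np.abs(Lam.T @ eta @ Lam - eta)))
```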
 

fluidistic

Gold Member
I don't understand the trouble you're having with this problem. Just expand
[tex]\eta_{\mu \nu} \left (\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho} \right) \left (\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma} \right) \stackrel{!}{=}\eta_{\rho \sigma}.[/tex]
up to first order in [itex]\epsilon[/itex], and you'll find that [itex]\epsilon_{\mu \nu}=-\epsilon_{\nu \mu}[/itex].
Hmm, but I wouldn't be using the conservation of the norm by doing so, would I? Or am I wrong?
 

vanhees71

Science Advisor
Insights Author
Gold Member
But this is the conservation of the "norm"!
 

vela

Staff Emeritus
Science Advisor
Homework Helper
Education Advisor
Conservation of the norm says
$$\eta_{\mu\nu} x^\mu x^\nu = \eta_{\rho\sigma}x'^\rho x'^\sigma.$$ Again, write x' in terms of x to get
$$\eta_{\mu\nu} x^\mu x^\nu = \eta_{\rho\sigma}[(\delta^\rho_\mu + \epsilon^\rho{}_\mu) x^\mu] [(\delta^\sigma_\nu + \epsilon^\sigma{}_\nu)x^\nu] = [\eta_{\rho\sigma}(\delta^\rho_\mu + \epsilon^\rho{}_\mu) (\delta^\sigma_\nu + \epsilon^\sigma{}_\nu)] x^\mu x^\nu.$$ Comparing the two sides of the equation, you should be able to see you must have
$$\eta_{\mu\nu} = \eta_{\rho\sigma}(\delta^\rho_\mu + \epsilon^\rho{}_\mu) (\delta^\sigma_\nu + \epsilon^\sigma{}_\nu).$$ Without the x's cluttering things up, it's a little easier to see where you're headed.
 
