Infinitesimal Lorentz transform and its inverse, tensors
  • #1
fluidistic
Gold Member

Homework Statement


The problem can be found in Jackson's book.
An infinitesimal Lorentz transform and its inverse can be written in the form ##x^{'\alpha}=(\eta ^{\alpha \beta}+\epsilon ^{\alpha \beta})x_{\beta}## and ##x^\alpha = (\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}) x^{'}_\beta##, where ##\eta _{\alpha \beta}## is the Minkowski metric and the epsilons are infinitesimal.
1) Demonstrate, using the definition of the inverse, that ##\epsilon ^{'\alpha \beta}=-\epsilon ^{\alpha \beta}##.
2) Demonstrate, using the conservation of the norm, that ##\epsilon ^{\alpha \beta}=-\epsilon ^{\beta \alpha}##.


Homework Equations


Not really sure, but I used an equation found a few pages earlier in the book: ##\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}##.


The Attempt at a Solution


1) I used the relevant equation and wrote that it's equal to ##\frac{\partial x^{'\alpha }}{\partial x^\beta} \frac{\partial x^{'\beta}}{\partial x^\alpha}\epsilon ^{\alpha \beta}##. Then I calculated the partial derivatives using the 2 equations given in the problem statement, I made an approximation (neglected terms with epsilons multiplied together because they are "infinitesimals"), and I reached ##\epsilon ^{'\alpha \beta}\approx \frac{\eta^{\alpha \beta}}{\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}}\cdot \epsilon ^{\alpha \beta}##. I don't see how the first factor can be worth -1 here... So I guess my approach is wrong. Or if it's right, I still don't see how I can show that the first factor equals -1. Thanks for any comment.
 
  • #2
For #1, apply the transformation followed by its inverse, and use the fact that the product should yield the identity.
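Schematically, suppressing the indices and writing the transformation and its inverse as matrices close to the identity (just a sketch of the idea; the careful version with the ##\eta##'s and index placement is what the problem asks for):
$$(\mathbb{1}+\epsilon')(\mathbb{1}+\epsilon)=\mathbb{1}+\epsilon+\epsilon'+O(\epsilon^2)\stackrel{!}{=}\mathbb{1}\quad\Longrightarrow\quad \epsilon'=-\epsilon \text{ to first order.}$$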
 
  • #3
vela said:
For #1, apply the transformation followed by its inverse, and use the fact that the product should yield the identity.

I see... do you mean that I should perform ##x^{'\alpha}x^{\alpha}=\text{Identity}##?
Also I don't see what's wrong with what I did in my attempt.
By the way, I don't understand how tensors "work" yet; I am self-studying this topic right now.

Edit: Never mind my first question; the answer is no, the expression I wrote makes no sense...
 
  • #4
There's a problem with the equation you started with. A dummy index should only appear twice in each product. You wrote
$$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\alpha} \frac{\partial x^{'\beta}}{\partial x^\beta} \epsilon ^{\alpha \beta}.$$ The indices ##\alpha## and ##\beta## appear only once on the left-hand side, so they should appear only once on the right-hand side. What you probably meant was something like
$$\epsilon ^{'\alpha \beta}=\frac{\partial x^{'\alpha }}{\partial x^\gamma} \frac{\partial x^{'\beta}}{\partial x^\delta} \epsilon ^{\gamma \delta}.$$ Notice how ##\gamma## and ##\delta## appear in pairs, which implies a summation over those indices.
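For instance, fixing the free indices ##\alpha = 0## and ##\beta = 1## and writing the implied sums out explicitly:
$$\epsilon'^{01}=\sum_{\gamma=0}^{3}\sum_{\delta=0}^{3}\frac{\partial x'^{0}}{\partial x^{\gamma}}\,\frac{\partial x'^{1}}{\partial x^{\delta}}\,\epsilon^{\gamma\delta}.$$ The free indices ##0## and ##1## appear once on each side, while ##\gamma## and ##\delta## are summed away.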

This relationship, however, doesn't apply here. It relates the components of a tensor in one frame to the components of the same tensor in a different frame. In this problem, ##\epsilon## and ##\epsilon'## aren't the same tensor. One generates a Lorentz transformation, and the other, the inverse Lorentz transformation.
 
  • #5
I see... thanks. Yes, that's what I did: I replaced gamma and delta with alpha and beta respectively... OK, I didn't know that wasn't allowed.
 
  • #6
I'd start from the Lorentz-transformation property of the representing matrices:
[tex]\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}.[/tex]
Now write
[tex]{\Lambda^{\mu}}_{\rho}=\delta_{\rho}^{\mu}+{\epsilon^{\mu}}_{\rho},[/tex]
plug this into the above defining equation for LT matrices and expand up to the first-order terms in [itex]{\epsilon^{\mu}}_{\rho}[/itex]. Finally, note that by definition one applies the index-dragging rule not only to tensor components but also to Lorentz matrices. (A Lorentz matrix does not form tensor components, but the infinitesimal generators do; that doesn't matter for your problem here.) The only additional thing you need to know is that
[tex]\epsilon_{\rho \sigma}=\eta_{\rho \mu} {\epsilon^{\mu}}_{\sigma}.[/tex]
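Equivalently, in matrix notation (a sketch, writing ##\epsilon## for the matrix of the ##{\epsilon^{\mu}}_{\rho}##, so that the defining relation reads ##\Lambda^{T}\eta\Lambda=\eta##):
[tex](\mathbb{1}+\epsilon)^{T}\eta(\mathbb{1}+\epsilon)=\eta+\epsilon^{T}\eta+\eta\epsilon+O(\epsilon^{2})\stackrel{!}{=}\eta \quad\Longrightarrow\quad \epsilon^{T}\eta=-\eta\epsilon.[/tex]
In components ##(\epsilon^{T}\eta)_{\rho\sigma}=\epsilon_{\sigma\rho}## and ##(\eta\epsilon)_{\rho\sigma}=\epsilon_{\rho\sigma}##, so this is precisely the antisymmetry ##\epsilon_{\rho\sigma}=-\epsilon_{\sigma\rho}##.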
 
  • #7
fluidistic said:

1)I used the relevant equation and wrote that it's equal to ##\frac{\partial x^{'\alpha }}{\partial x^\beta} \frac{\partial x^{'\beta}}{\partial x^\alpha}\epsilon ^{\alpha \beta}##. Then I calculated the partial derivatives using the 2 equations given in the problem statement, I made an approximation (depreciated terms with epsilons multiplied together because they are "infinitesimals") and I reached that ##\epsilon ^{'\alpha \beta}\approx \frac{\eta^{\alpha \beta}}{\eta ^{\alpha \beta}+\epsilon ^{'\alpha \beta}}\cdot \epsilon ^{\alpha \beta}##. I don't see how the first term can be worth -1 here... So I guess my approach is wrong. Or if it's right, I still don't see how I can show that the first term is worth -1.

I'd say you will not get anywhere using this. Just follow the idea Jackson gave you. When he says to use conservation of the norm, use ##x^\alpha x_\alpha = x'^\alpha x'_\alpha##. You will have to drop some second-order terms here to show ##\epsilon^{\alpha\beta}=-\epsilon^{\beta\alpha}##. The first part is simple manipulation.
 
  • #8
Hi there fluidistic! For the first part, note that all you have to do is a simple substitution. By definition we have ##x'_{\beta} = (\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##, so plug this into ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})x'_{\beta}## to get ##x^{\alpha} = (\eta^{\alpha \beta} + \epsilon'^{\alpha\beta})(\eta_{\beta \gamma} + \epsilon_{\beta \gamma}) x^{\gamma}##. Expand the product, dropping the ##O(\epsilon^2)## terms, and at the very end use the fact that ##x^{\alpha}## is arbitrary. The other parts should be very similar.
 
  • #9
Thanks guys for all the help. Unfortunately I'm still too sloppy with tensors to manipulate them confidently.
Here's what I reached using WBN's suggestion: ##x^\alpha \approx (\delta ^\alpha _\gamma +\eta ^{\alpha \beta}\epsilon _{\beta \gamma}+\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma})x^\gamma##. By looking at this equation I have a feeling that ##\gamma## must equal ##\alpha## and that what is in parentheses must be the identity. This would imply that ##\eta ^{\alpha \beta}\epsilon _{\beta \gamma}=-\epsilon ' ^{\alpha \beta}\eta _{\beta \gamma}##. And I guess that it's by working on that equation that I will get the desired result.
 
  • #10
You're pretty much there. Recall that ##\eta^{\alpha\beta}## converts covariant indices into contravariant indices, and vice versa, upon contraction (more precisely, it is what we call a musical isomorphism and serves as a map between elements of the tangent space and its dual). So ##\eta^{\alpha \beta}\epsilon_{\beta \gamma} = \epsilon^{\alpha}{}_{\gamma}## and similarly ##\epsilon'^{\alpha \beta}\eta_{\beta \gamma} = \epsilon'^{\alpha}{}_{\gamma}##.

So now you have ##\delta^{\alpha}{}_{\gamma}x^{\gamma} + \epsilon'^{\alpha}{}_{\gamma}x^{\gamma} + \epsilon^{\alpha}{}_{\gamma}x^{\gamma} + O(\epsilon^2) = x^{\alpha}##. The result should then be immediate.
 
  • #11
WannabeNewton said:
You're pretty much there. Recall that ##\eta^{\alpha\beta}## converts covariant indices into contravariant indices, and vice versa, upon contraction (more precisely, it is what we call a musical isomorphism and serves as a map between elements of the tangent space and its dual). So ##\eta^{\alpha \beta}\epsilon_{\beta \gamma} = \epsilon^{\alpha}{}_{\gamma}## and similarly ##\epsilon'^{\alpha \beta}\eta_{\beta \gamma} = \epsilon'^{\alpha}{}_{\gamma}##.

So now you have ##\delta^{\alpha}{}_{\gamma}x^{\gamma} + \epsilon'^{\alpha}{}_{\gamma}x^{\gamma} + \epsilon^{\alpha}{}_{\gamma}x^{\gamma} + O(\epsilon^2) = x^{\alpha}##. The result should then be immediate.
I see.
So I reach ##(\delta ^\alpha _\gamma +\epsilon ^\alpha _\gamma+\epsilon '^\alpha _\gamma )x^\gamma \approx x^\alpha##. The only way for both sides to be approximately equal requires that ##\gamma =\alpha## and that what is in parentheses is the identity, right?
In this case I reach ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha##. But I still don't see how to reach the final result. I guess I must contract these tensors so as to get only upper indices ##\alpha## and ##\beta##.
If I multiply the last expression by ##\eta ^{\beta \alpha}##, I reach ##\epsilon '^{\beta \alpha}=-\epsilon ^{\beta \alpha}##. Now if this tensor is symmetric then I reach the desired result, but how do I know whether it is symmetric?
I'm sure I made some error(s)...

Edit: Never mind, I think I reach the final result if I multiply both sides of ##\epsilon '^\alpha _\alpha =-\epsilon ^\alpha _\alpha## by ##\eta ^{\alpha \beta}##, but a multiplication by the RIGHT and not the left. This yields the result. Is that correct?
 
  • #12
fluidistic said:
I see.
So I reach ##(\delta^\alpha{}_\gamma +\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma \approx x^\alpha##. The only way for both sides to be approximately equal requires that ##\gamma =\alpha## and that what is in parentheses is the identity, right?
Remember that you're summing over repeated indices. So you really have
$$\sum_\gamma \delta^\alpha{}_\gamma x^\gamma + \sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = x^\alpha.$$ The first term just leaves you with ##x^\alpha## after the summation, so you end up with
$$\sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = 0.$$ Now use the fact that this has to hold for arbitrary ##x^\mu##.
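One standard way to use that arbitrariness: pick ##x^\gamma = \delta^\gamma{}_\mu## for each fixed ##\mu## in turn (i.e., the basis vectors). The sum then collapses to
$$\epsilon^\alpha{}_\mu + \epsilon'^\alpha{}_\mu = 0$$ for every ##\alpha## and ##\mu##.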
 
  • #13
vela said:
Remember that you're summing over repeated indices. So you really have
$$\sum_\gamma \delta^\alpha{}_\gamma x^\gamma + \sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = x^\alpha.$$ The first term just leaves you with ##x^\alpha## after the summation, so you end up with
$$\sum_\gamma (\epsilon^\alpha{}_\gamma + \epsilon'^\alpha{}_\gamma) x^\gamma = 0.$$ Now use the fact that this has to hold for arbitrary ##x^\mu##.

Ah right, thanks!
This leaves me with ##\epsilon ^\alpha _\gamma=-\epsilon ' ^\alpha _\gamma##. By multiplying each side from the right by ##\eta ^{\gamma \beta}##, I do reach the final result.
 
  • #14
I attempted part 2) but I get some nonsense.
As andrien pointed out, conservation of the norm is ##x^\alpha x_\alpha=x'^\alpha x'_\alpha##.
What I did:
##x'^\alpha x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta})x_\beta x'_\alpha=(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}x'^\gamma =(\eta^{\alpha \beta }+ \epsilon ^{\alpha \beta}) x_\beta \eta _{\alpha \gamma}(\eta^{\gamma \omega }+ \epsilon ^{\gamma \omega} ) x_\omega## ## = (\eta^{\alpha \beta } x_\beta + \epsilon ^{\alpha \beta} x_\beta )(\delta ^\omega _\alpha x_\omega + \epsilon ^\omega _\alpha x_\omega )=\eta ^{\alpha \beta}x_\beta \delta ^\omega _\alpha x_\omega +\eta ^{\alpha \beta}x_\beta \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega +O(\epsilon ^2)## ##\approx x^\alpha \delta ^\omega _\alpha x_\omega + x ^\alpha \epsilon ^\omega _\alpha x_\omega + \epsilon ^{\alpha \beta} x_\beta \delta ^\omega _\alpha x_\omega##.
Here on my draft I rewrote that last expression with sums (2 double sums, 1 triple sum) just to simplify the Kronecker deltas.
Then I got rid once again of the sums, and I reached that it's worth ##x^\alpha x_\alpha +x^\alpha \epsilon ^\omega _\alpha x_\omega +\epsilon ^{\alpha \beta}x_\beta x_\alpha##. Now using the conservation of the norm, I get ##x^\alpha \epsilon ^\omega _\alpha x_\omega =-\epsilon ^{\alpha \beta}x_\beta x_\alpha##.
But here I realized that this is nonsense: if ##x_\beta## is a 1x4 covector and epsilon a 4x3 matrix, then ##x _\omega## should be a 3x1 vector... but it is a 1x3 covector. Therefore the left-hand side doesn't make any sense.
I don't see where I went wrong though.
 
  • #15
##\epsilon^{\alpha}{}_{\beta}## has a 4x4 matrix representation, not 4x3.
 
  • #16
WannabeNewton said:
##\epsilon^{\alpha}{}_{\beta}## has a 4x4 matrix representation, not 4x3.
Oh, you're right.
But I still have the same problem: the left-hand side of the last equation would be (1x4)x(4x4)x(1x4), and the last factor is where things don't match. I seem to have a covector multiplied by a covector instead of a covector multiplied by a vector.
 
  • #17
What you have is (1x4)x(4x4)x(4x1) = 1x1 on the left-hand side, which is fine. The right-hand side is the same thing once you raise the index of one of the ##x_{\alpha}## and lower the corresponding index on ##\epsilon^{\alpha\beta}##. If you want to represent ##\epsilon^{\alpha\beta}## as a matrix then it has to be in the form ##\epsilon^{\alpha}{}_{\beta}##.
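If it helps, here is a quick numerical sanity check of the shape claim and of the antisymmetry, sketched in Python with NumPy (the boost parameter ##v## and the test vector are arbitrary illustrative choices, not part of Jackson's problem):

```python
import numpy as np

# Minkowski metric, signature (+,-,-,-); note that eta is its own inverse.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# An antisymmetric epsilon with lower indices: infinitesimal boost along x.
v = 1e-3
eps_lower = np.zeros((4, 4))
eps_lower[0, 1], eps_lower[1, 0] = v, -v        # eps_{01} = -eps_{10}

# Mixed form eps^alpha_beta = eta^{alpha mu} eps_{mu beta}: a genuine 4x4 matrix.
eps_mixed = eta @ eps_lower

x_up = np.array([1.0, 0.2, 0.0, 0.0])           # contravariant components x^alpha
x_down = eta @ x_up                             # covariant components x_alpha

# (1x4) @ (4x4) @ (4x1) -> scalar: x_alpha eps^alpha_omega x^omega.
s = x_down @ eps_mixed @ x_up
print(s)   # 0.0 (up to rounding), because eps_{mu nu} is antisymmetric

# Norm preservation to first order: Lambda = identity + eps^alpha_beta.
Lam = np.eye(4) + eps_mixed
x_prime = Lam @ x_up
print(x_prime @ eta @ x_prime - x_up @ eta @ x_up)   # O(v^2), here about -1e-6
```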
 
  • #18
WannabeNewton said:
What you have is (1x4)x(4x4)x(4x1) = 1x1 on the left-hand side, which is fine. The right-hand side is the same thing once you raise the index of one of the ##x_{\alpha}## and lower the corresponding index on ##\epsilon^{\alpha\beta}##. If you want to represent ##\epsilon^{\alpha\beta}## as a matrix then it has to be in the form ##\epsilon^{\alpha}{}_{\beta}##.

I see, thank you. I'll need to digest this; I'll come back to it tomorrow.
 
  • #19
I don't understand the trouble you're having with this problem. Just expand
[tex]\eta_{\mu \nu} \left (\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho} \right) \left (\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma} \right) \stackrel{!}{=}\eta_{\rho \sigma}.[/tex]
up to first order in [itex]\epsilon[/itex], and you'll find that [itex]\epsilon_{\mu \nu}=-\epsilon_{\nu \mu}[/itex].
 
  • #20
vanhees71 said:
I don't understand the trouble you're having with this problem. Just expand
[tex]\eta_{\mu \nu} \left (\delta^{\mu}_{\rho} + {\epsilon^{\mu}}_{\rho} \right) \left (\delta^{\nu}_{\sigma} + {\epsilon^{\nu}}_{\sigma} \right) \stackrel{!}{=}\eta_{\rho \sigma}.[/tex]
up to first order in [itex]\epsilon[/itex], and you'll find that [itex]\epsilon_{\mu \nu}=-\epsilon_{\nu \mu}[/itex].

Hmm, but I wouldn't be using the conservation of the norm by doing so, or am I wrong?
 
  • #21
But this is the conservation of the "norm"!
 
  • #22
Conservation of the norm says
$$\eta_{\mu\nu} x^\mu x^\nu = \eta_{\rho\sigma}x'^\rho x'^\sigma.$$ Again, write x' in terms of x to get
$$\eta_{\mu\nu} x^\mu x^\nu = \eta_{\rho\sigma}[(\delta^\rho_\mu + \epsilon^\rho{}_\mu) x^\mu] [(\delta^\sigma_\nu + \epsilon^\sigma{}_\nu)x^\nu] = [\eta_{\rho\sigma}(\delta^\rho_\mu + \epsilon^\rho{}_\mu) (\delta^\sigma_\nu + \epsilon^\sigma{}_\nu)] x^\mu x^\nu.$$ Comparing the two sides of the equation, you should be able to see you must have
$$\eta_{\mu\nu} = \eta_{\rho\sigma}(\delta^\rho_\mu + \epsilon^\rho{}_\mu) (\delta^\sigma_\nu + \epsilon^\sigma{}_\nu).$$ Without the x's cluttering things up, it's a little easier to see where you're headed.
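Carrying out the last step explicitly: expanding the product and dropping the ##\epsilon\epsilon## term,
$$\eta_{\rho\sigma}(\delta^\rho_\mu + \epsilon^\rho{}_\mu)(\delta^\sigma_\nu + \epsilon^\sigma{}_\nu) = \eta_{\mu\nu} + \eta_{\rho\nu}\epsilon^\rho{}_\mu + \eta_{\mu\sigma}\epsilon^\sigma{}_\nu + O(\epsilon^2) = \eta_{\mu\nu} + \epsilon_{\nu\mu} + \epsilon_{\mu\nu} + O(\epsilon^2),$$ so comparing with the left-hand side ##\eta_{\mu\nu}## forces ##\epsilon_{\mu\nu} + \epsilon_{\nu\mu} = 0##, and raising both indices with ##\eta## gives ##\epsilon^{\alpha\beta} = -\epsilon^{\beta\alpha}##, which is part 2.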
 

1. What is an infinitesimal Lorentz transform?

An infinitesimal Lorentz transform is a Lorentz transformation that differs from the identity only by infinitesimal terms, as in ##x'^{\alpha}=(\eta^{\alpha\beta}+\epsilon^{\alpha\beta})x_{\beta}## above. Like any Lorentz transformation, it describes how coordinates and physical quantities change between frames of reference moving at constant velocity relative to each other, and it is a linear transformation that preserves the spacetime interval (and hence the constancy of the speed of light).

2. What is the inverse of an infinitesimal Lorentz transform?

The inverse of an infinitesimal Lorentz transform is the transformation that undoes the original one, converting coordinates and physical quantities back between the two frames. To first order in the infinitesimals, it is obtained simply by flipping the sign of the infinitesimal part, ##\epsilon'^{\alpha\beta}=-\epsilon^{\alpha\beta}##, which is what part 1 of this problem asks you to show.

3. What are tensors in the context of infinitesimal Lorentz transforms?

Tensors are mathematical objects whose components transform in a definite, linear way under changes of coordinates. In the context of infinitesimal Lorentz transforms, quantities such as position, velocity, and the infinitesimal generators ##\epsilon^{\alpha\beta}## themselves are handled as tensors, with the metric ##\eta_{\alpha\beta}## used to raise and lower their indices.

4. How are infinitesimal Lorentz transforms and tensors used in physics?

Infinitesimal Lorentz transforms and tensors are essential tools in the theory of relativity, a crucial part of modern physics. They are used to describe the behavior of physical quantities in different frames of reference and to express the requirement that the laws of physics take the same form in all reference frames.

5. Can infinitesimal Lorentz transforms and tensors be applied to any physical system?

Yes, infinitesimal Lorentz transforms and tensors are general mathematical tools that apply to any physical system treated relativistically. They are widely used in special and general relativity, electromagnetism, and relativistic quantum mechanics.
