I Lorentz transformation of derivative and vector field

  • Thread starter doggydan42
Summary
Why is the inverse transformation used for the derivative and the position, while for a vector field the normal transformation is used?
I'm currently watching lecture videos on QFT by David Tong. He is going over Lorentz invariance and classical field theory. In his lecture notes he has,

$$(\partial_\mu\phi)(x) \rightarrow (\Lambda^{-1})^\nu_\mu(\partial_\nu \phi)(y)$$, where ##y = \Lambda^{-1}x##.

He mentions he uses an active transformation. So I understand the inverse in ##y##. For the derivative I tried

$$\partial'_\mu \phi=(\partial_\nu \phi )\partial'_\mu (x^\nu)$$

So then I need to show that ##\partial'_\mu (x^\nu) = (\Lambda^{-1})^\nu_\mu##.
$$\partial'_\mu (x^\nu)=\partial'_\mu (\Lambda^\nu_\sigma x'^\sigma) = \Lambda^\nu_\sigma \delta^\sigma_\mu = \Lambda^\nu_\mu$$.
This is how I thought to proceed, but clearly there should be a ##\Lambda^{-1}##. I thought it would be that ##x' = \Lambda^{-1} x##. Why is it that, even though it is an active transformation, ##x' = \Lambda x## instead? Did I make a mistake somewhere else? And if the transformation should not be the inverse, why does ##x## transform to ##y##, which involves the inverse?

Also, for a vector field he makes the claim that $$A^\mu \rightarrow A'^\mu = \Lambda^\mu_\nu A^\nu(\Lambda^{-1}x).$$ Why is the inverse transformation not applied to ##A##, while the argument ##x## does get the inverse?

Thank you in advance.
 

vanhees71

First of all, it is of the utmost importance to write all indices not only in strict vertical order (which defines whether an index denotes a co- or a contravariant tensor component) but also in strict horizontal order.

The logic is as follows: The contravariant time-space four vector components transform under Lorentz transformations as
$$x^{\prime \mu} = {\Lambda^{\mu}}_{\nu} x^{\nu}.$$
The same holds for ##\mathrm{d} x^{\nu}## of course.

The Lorentz transformation obeys
$$\eta_{\mu \nu} {\Lambda^{\mu}}_{\rho} {\Lambda^{\nu}}_{\sigma}=\eta_{\rho \sigma}$$
with ##\eta_{\mu \nu}=\mathrm{diag}(1,-1,-1,-1)## (in the west-coast convention).
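This defining relation is easy to verify numerically. A small sketch in Python (the boost rapidity ##\chi = 0.7## is an arbitrary choice, purely for illustration):

```python
import numpy as np

# West-coast metric eta = diag(1, -1, -1, -1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A boost along the x-axis with arbitrary (hypothetical) rapidity chi
chi = 0.7
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(chi)
L[0, 1] = L[1, 0] = np.sinh(chi)

# eta_{mu nu} Lambda^mu_rho Lambda^nu_sigma = eta_{rho sigma}
# reads Lambda^T @ eta @ Lambda = eta in matrix form
print(np.allclose(L.T @ eta @ L, eta))  # True
```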

This can be used to derive an important property of the inverse Lorentz transformation. Contracting the above equation with ##{(\Lambda^{-1})^{\sigma}}_{\alpha}## gives
$$\eta_{\mu \alpha} {\Lambda^{\mu}}_{\rho} = \eta_{\rho \sigma} {(\Lambda^{-1})^{\sigma}}_{\alpha},$$
and contracting this equation with ##\eta^{\rho \beta}## gives
$$\eta_{\mu \alpha} \eta^{\rho \beta} {\Lambda^{\mu}}_{\rho} ={(\Lambda^{-1})^{\beta}}_{\alpha}. \qquad (1)$$
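In matrix form, Eq. (1) says ##\Lambda^{-1} = \eta \Lambda^{T} \eta## (using that ##\eta## is its own inverse). This can also be checked numerically; a sketch with an arbitrary boost:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Arbitrary boost along x (hypothetical rapidity)
chi = 0.7
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(chi)
L[0, 1] = L[1, 0] = np.sinh(chi)

# Eq. (1): (Lambda^{-1})^beta_alpha = eta_{mu alpha} eta^{rho beta} Lambda^mu_rho,
# i.e. Lambda^{-1} = eta @ Lambda.T @ eta in matrix form
Linv = eta @ L.T @ eta
print(np.allclose(Linv, np.linalg.inv(L)))  # True
```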

Covariant vector components transform as
$$\mathrm{d} x_{\mu}' = \eta_{\mu \rho} \mathrm{d} x^{\prime \rho} = \eta_{\mu \rho} {\Lambda^{\rho}}_{\nu} \mathrm{d} x^{\nu} = \eta_{\mu \rho} \eta^{\nu \sigma} {\Lambda^{\rho}}_{\nu} \mathrm{d} x_{\sigma} \stackrel{(1)}{=} {(\Lambda^{-1})^{\sigma}}_{\mu} \mathrm{d} x_{\sigma}.$$
It's also easy to see that a partial derivative with respect to ##x^{\mu}## leads to a covariant index.

To see this, take a vector field, which by definition transforms as
$$A^{\prime \mu}(x')={\Lambda^{\mu}}_{\rho} A^{\rho}(x).$$
For the derivative it follows that
$$\frac{\partial}{\partial x^{\prime \nu}} A^{\prime \mu}(x') = {\Lambda^{\mu}}_{\rho} \frac{\partial x^{\sigma}}{\partial x^{\prime \nu}} \frac{\partial}{\partial x^{\sigma}} A^{\rho}(x) ={\Lambda^{\mu}}_{\rho} {(\Lambda^{-1})^{\sigma}}_{\nu} \frac{\partial}{\partial x^{\sigma}} A^{\rho}(x),$$
which is indeed precisely the transformation rule for covariant tensor components. Thus, to get a nice mnemonic notation, one defines
$$\partial_{\mu} = \frac{\partial}{\partial x^{\mu}}$$
with a lower, i.e., covariant, index.
 
First of all, it is of the utmost importance to write all indices not only in strict vertical order (which defines whether an index denotes a co- or a contravariant tensor component) but also in strict horizontal order.
Sorry for not writing them out in horizontal order; I wasn't aware I could do that with LaTeX.

I looked at another set of lecture notes, and it seems the notation is ##(\partial_\nu \phi)(y) = \partial'_\nu \phi(y)##, where ##y = \Lambda^{-1}x##, because it is an active transformation.

So I now see where the ##\Lambda^{-1}## comes from: the derivatives themselves are not transformed, just rewritten using the chain rule, which pulls out a ##\Lambda^{-1}##. Do the derivatives not transform because an active transformation is being used? Why would the derivatives not transform if the coordinates do?

For the derivative it follows that


$$\frac{\partial}{\partial x^{\prime \nu}} A^{\prime \mu}(x') = {\Lambda^{\mu}}_{\rho} \frac{\partial x^{\sigma}}{\partial x^{\prime \nu}} \frac{\partial}{\partial x^{\sigma}} A^{\rho}(x) ={\Lambda^{\mu}}_{\rho} {(\Lambda^{-1})^{\sigma}}_{\nu} \frac{\partial}{\partial x^{\sigma}} A^{\rho}(x)$$

which is indeed precisely the transformation rule for covariant tensor components.

Does that transformation rule change if an active transformation is used instead of a passive one?
 

vanhees71

I never understood the distinction between active and passive transformations. Mathematically it boils down to interchanging the transformation in one interpretation with its inverse in the other ;-)).

The mathematics is clear and simple. The vector-field components transform as given in my previous posting
$$A^{\prime \mu}(x')={\Lambda^{\mu}}_{\rho} A^{\rho}(x)={\Lambda^{\mu}}_{\rho} A^{\rho}(\hat{\Lambda}^{-1}x').$$
Analogously this holds true for tensor fields of arbitrary rank (with a transformation matrix for each index of the field components).

The tensor fields themselves are invariant under Lorentz transformations.
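This invariance can be illustrated numerically: evaluating the transformed components at the transformed point leaves Lorentz scalars built from the field unchanged. A sketch (the field ##A^\mu##, the point, and the boost are arbitrary choices, purely for illustration):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Arbitrary boost along x (hypothetical rapidity)
chi = 0.7
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(chi)
L[0, 1] = L[1, 0] = np.sinh(chi)
Linv = eta @ L.T @ eta  # inverse Lorentz transformation

# An arbitrary vector field A^mu(x), chosen only for illustration
def A(x):
    return np.array([x[0] ** 2, x[1] * x[2], np.sin(x[3]), x[0] + x[2]])

# Transformed components: A'^mu(x') = Lambda^mu_rho A^rho(Lambda^{-1} x')
def A_prime(xp):
    return L @ A(Linv @ xp)

x = np.array([0.3, -1.2, 0.5, 2.0])
xp = L @ x  # the corresponding transformed point

# The Minkowski square A_mu A^mu agrees at corresponding points
print(np.isclose(A(x) @ eta @ A(x), A_prime(xp) @ eta @ A_prime(xp)))  # True
```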
 
The mathematics is clear and simple. The vector-field components transform as given in my previous posting
$$A^{\prime \mu}(x')={\Lambda^{\mu}}_{\rho} A^{\rho}(x)={\Lambda^{\mu}}_{\rho} A^{\rho}(\hat{\Lambda}^{-1}x').$$
Analogously this holds true for tensor fields of arbitrary rank (with a transformation matrix for each index of the field components).

The tensor fields themselves are invariant under Lorentz transformations.
I think I understand it a bit better now. Thank you!
 

haushofer

I never understood the distinction between active and passive transformations. Mathematically it boils down to interchanging the transformation in one interpretation with its inverse in the other ;-)).

The mathematics is clear and simple. The vector-field components transform as given in my previous posting
$$A^{\prime \mu}(x')={\Lambda^{\mu}}_{\rho} A^{\rho}(x)={\Lambda^{\mu}}_{\rho} A^{\rho}(\hat{\Lambda}^{-1}x').$$
Analogously this holds true for tensor fields of arbitrary rank (with a transformation matrix for each index of the field components).

The tensor fields themselves are invariant under Lorentz transformations.
A passive transformation is a diffeomorphism in the coordinate space, and an active transformation is a diffeomorphism on the manifold.
 

vanhees71

Which is by definition (at least locally) equivalent.
 
Which is by definition (at least locally) equivalent.
So locally, would the transformation of a vector field be the same whether the transformation is passive or active?
 
