
Lorentz covariance of differential operator

  1. Jul 4, 2011 #1
    Hi,

    Can anyone show me how to prove that the differential operator, i.e. [itex]\partial_{\mu}[/itex], is Lorentz covariant? In other words, that [itex]\partial /\partial x'_{\nu}=\Lambda^{\nu}_{\mu}\partial /\partial x^{\mu}[/itex].

    And once this is done, how can I show that the D'Alembert operator [itex]\partial_{\mu}\partial^{\mu}[/itex] is invariant, i.e. the same in all frames, the way the magnitude of a four-vector is? I understand that all four-vectors have a constant magnitude, but I am not sure how to apply this when dealing with differential operators.

    Thank you
     
  3. Jul 4, 2011 #2

    George Jones

    Your free indices don't match. Use the chain rule.
     
  4. Jul 4, 2011 #3
    Sorry about that. Fixed now.
     
  5. Jul 4, 2011 #4

    George Jones

    Chain rule:
    [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex]
     
  6. Jul 4, 2011 #5

    Fredrik

    When the [itex]\partial_\mu[/itex] are the partial derivative operators associated with a coordinate system on a manifold, you should use the definition in post #3 here, and do the calculation the way I did it in #5. (It's still just the chain rule).
     
  7. Jul 4, 2011 #6

    George Jones

    I suspect that McLaren Rulez is working with Lorentz transformations between inertial coordinate systems in special relativity, i.e., global coordinate systems on R^4, and with introductory multivariable calculus.
     
  8. Jul 4, 2011 #7

    Fredrik

    Yes, I got that impression too. What I said may still be useful, since it explains what partial derivative operators (of this kind) have to do with coordinate systems. If he already understands the connection between (the standard kind of) partial derivative operators and coordinate systems, he will find your answer significantly easier to understand.
     
  9. Jul 4, 2011 #8
    Sorry, I don't know the connection between partial derivative operators and coordinate systems that you're referring to. I'm just studying some QM on my own, and this came up while dealing with the Klein-Gordon equation, where I see a lot of derivative operators treated as four-vectors.

    Anyway, my question is: If we have [itex]x'^{\nu}=\Lambda^{\nu}_{\mu} x^{\mu}[/itex] then we get [itex]\partial x'^{\nu}=\Lambda^{\nu}_{\mu} \partial x^{\mu}[/itex].

    But when we use the chain rule,

    [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex]

    and express [itex]\partial x'^{\nu}=\Lambda^{\nu}_{\mu} \partial x^{\mu}[/itex]

    then we get [itex]\Lambda^{\nu}_{\mu}\partial /\partial x'_{\nu}=\partial /\partial x^{\mu}[/itex]

    instead of [itex]\partial /\partial x'_{\nu}=\Lambda^{\nu}_{\mu}\partial /\partial x^{\mu}[/itex]

    So what is my error? Thank you for the replies!
     
  10. Jul 4, 2011 #9

    Fredrik

    You know what, just forget what I said before. That saves us both some time. :smile:

    Let's focus on the problem at hand. You got this part right: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex] The next step is [tex]=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
    I can't explain what exactly you're doing wrong, because I don't really understand what you're doing.

    A few comments about the notation: Don't write [itex]\partial x'^\mu[/itex] when what you have in mind is a differential. The standard notation is [itex]dx'^\mu[/itex]. If you intend to raise and lower indices using the metric (you're already doing that in post #1), you shouldn't use the notation [itex]\Lambda^\mu_\nu[/itex]. You will have to distinguish between [itex]\Lambda^\mu{}_\nu[/itex] and [itex]\Lambda_\nu{}^\mu[/itex]. The latter is defined to mean row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\Lambda^{-1}=\eta\Lambda^T\eta[/itex].
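
    If a concrete check helps, here is a quick numerical sketch in Python/NumPy (the boost along x and the value [itex]\beta = 0.6[/itex] are just assumed for illustration) confirming that [itex]\eta\Lambda^T\eta[/itex] really is [itex]\Lambda^{-1}[/itex], while [itex]\Lambda^T[/itex] on its own is not:

[code]
# Sketch: check Lambda^{-1} = eta Lambda^T eta for a boost along x (beta = 0.6 assumed).
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)

# Minkowski metric, here with signature (+,-,-,-); the identity holds for either sign convention.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# L[mu, nu] = Lambda^mu_nu (row mu, column nu), a boost along x.
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

L_inv = eta @ L.T @ eta                        # eta Lambda^T eta, i.e. the matrix Lambda_nu^mu
print(np.allclose(L_inv, np.linalg.inv(L)))    # True: this really is the inverse transformation
print(np.allclose(L.T,   np.linalg.inv(L)))    # False: a boost is not an orthogonal matrix
[/code]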
     
  11. Jul 4, 2011 #10

    George Jones

    Because [itex]\Lambda[/itex] is not symmetric, indices should not be written directly above/below each other. Consequently, write something like [itex]x'^\nu = \Lambda^\nu {}_\mu x^\mu[/itex].

    There is a lot of index gymnastics that you should know before doing calculations like this. For example:

    [tex]\eta^{\alpha \beta} \eta_{\beta \nu} = \delta^\alpha {}_\nu[/tex]
    [tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]
    [tex]\Lambda_\alpha {}^\nu = \eta_{\alpha \beta} \eta^{\mu \nu} \Lambda^\beta {}_\mu[/tex]
    The second equality is just the definition of a Lorentz transformation. Do you understand the third equality?

    Now, in order to write the unprimed coordinates in terms of the primed ones, calculate
    [tex]\Lambda_\nu {}^\alpha x'^\nu = \Lambda_\nu {}^\alpha \Lambda^\nu {}_\mu x^\mu[/tex]

    [edit]While I was writing and calculating, Fredrik made a more elegant post.[/edit]
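
    For anyone who wants to verify the index gymnastics numerically, here is a sketch in Python/NumPy (the boost along x and [itex]\beta = 0.6[/itex] are assumptions chosen only for illustration); np.einsum does the contractions:

[code]
# Sketch: verify the defining property of a Lorentz transformation and the "third equality".
import numpy as np

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])      # eta_{mu nu}; numerically eta^{mu nu} has the same entries
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])    # L[mu, nu] = Lambda^mu_nu

# eta_{ab} Lambda^b_mu Lambda^a_nu = eta_{mu nu}   (definition of a Lorentz transformation)
print(np.allclose(np.einsum('ab,bm,an->mn', eta, L, L), eta))          # True

# Lambda_a^nu = eta_{ab} eta^{mu nu} Lambda^b_mu   (the "third equality")
L_low_up = np.einsum('ab,mn,bm->an', eta, eta, L)                      # entry [a, nu] = Lambda_a^nu

# Lambda_nu^a Lambda^nu_mu = delta^a_mu, so Lambda_nu^a undoes the transformation
print(np.allclose(np.einsum('na,nm->am', L_low_up, L), np.eye(4)))     # True
[/code]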
     
  12. Jul 5, 2011 #11
    You're both right; I'm not yet comfortable with the index notation. I thought that an index on the left meant row and one on the right meant column. Is there a comprehensive guide to it? Most of my internet searches for index notation turn up things I already know, like summing over dummy indices and such.

    Also, what is the [itex]\eta[/itex] matrix you both used? Is this the metric, i.e. the diag(1, -1, -1, -1) matrix?

    Thank you so much and my apologies for the rather basic questions. I realize my background is a little bit insufficient at the moment.
     
  13. Jul 5, 2011 #12

    JDoolin

    McLaren, I'm also trying to clear up my confusion with the index notation. This is a completely different problem, but maybe it could help us both.

    I have been told that the equation for curl in Einstein notation is

    [tex]\nabla \times \vec V = \partial_\mu V_\nu - \partial_\nu V_\mu[/tex]

    Could you verify (confirm or correct) for me that this is because:

    [tex]\partial_\mu V_\nu \overset{def?}{=} \partial_x V_y \vec u_z + \partial_y V_z \vec u_x + \partial_z V_x \vec u_y[/tex]

    [tex]\partial_\nu V_\mu \overset{def?}{=} \partial_z V_y \vec u_x + \partial_y V_x \vec u_z + \partial_x V_z \vec u_y[/tex]


    This would also work if somehow:

    [tex]\partial_\mu = \begin{pmatrix} \partial_y\\ \partial_z\\ \partial_x \end{pmatrix}, V_\mu=\begin{pmatrix} V_y & V_z & V_x \end{pmatrix}[/tex]

    and

    [tex]\partial_\nu = \begin{pmatrix} \partial_z\\ \partial_x\\ \partial_y \end{pmatrix}, V_\nu=\begin{pmatrix} V_z & V_x & V_y \end{pmatrix}[/tex]

    Can anybody confirm that, or explain what I got wrong?
     
  14. Jul 5, 2011 #13
    I think it should be

    [itex](\nabla \times V)_{i} = \epsilon_{ijk}\partial_{j}V_{k}[/itex]

    Since you only need to sum over repeated indices (dummy indices), you need to let j and k run through all the possibilities. So if you label the three components of the curl vector as 1, 2, and 3, and pick i=1, you would get

    [itex](\nabla \times V)_{1}= \partial V_{3}/\partial x_{2} - \partial V_{2}/\partial x_{3}[/itex] because the [itex]\epsilon_{ijk}[/itex] gives these two non-zero terms and the minus sign.

    http://en.wikipedia.org/wiki/Levi-Civita_symbol

    Now you have the first component of the curl vector, which you could also have worked out from the usual determinant form of the cross product. Similarly, you get [itex](\nabla \times V)_{2}[/itex] and [itex](\nabla \times V)_{3}[/itex], which are the remaining components of the curl vector.
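
    In case a worked example helps, here is a small sketch in Python/NumPy of that formula (the field V = (-y, x, 0) and the helper function are my own illustration, not something from the posts above):

[code]
# Sketch: (curl V)_i = eps_{ijk} dV_k/dx_j, evaluated with an explicit Levi-Civita symbol.
import numpy as np

# Build eps_{ijk}: +1 for even permutations of (0,1,2), -1 for odd, 0 otherwise.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def curl_from_jacobian(J):
    """Return the curl given J[j, k] = dV_k/dx_j (the matrix of partial derivatives)."""
    return np.einsum('ijk,jk->i', eps, J)

# Example: V = (-y, x, 0). Its partial derivatives are constant:
J = np.array([[ 0.0, 1.0, 0.0],    # d/dx of (V_1, V_2, V_3)
              [-1.0, 0.0, 0.0],    # d/dy of (V_1, V_2, V_3)
              [ 0.0, 0.0, 0.0]])   # d/dz of (V_1, V_2, V_3)

print(curl_from_jacobian(J))       # [0. 0. 2.], the familiar curl of (-y, x, 0)
[/code]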
     
  15. Jul 5, 2011 #14

    JDoolin

    That works! Thanks!
     
  16. Jul 5, 2011 #15

    JDoolin

    Okay, a long, long time ago,

    https://www.physicsforums.com/showthread.php?t=430956

    I was looking at different ways of defining a Lorentz Transformation, but it didn't get into tensor notation.

    Can anyone tell me how
    [tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]

    translates into the Lorentz Transformation?
     
  17. Jul 7, 2011 #16

    JDoolin

    Okay, it's been a couple of days, so let me ask a simpler question:

    What are the names of the symbols:

    [tex]\eta_{\alpha \beta}, \Lambda^\beta {}_\mu ,g_{ij},\delta_i^j[/tex] so that I can look them up?

    Can these things in any way be expressed in matrix form (even in a multi-dimensional matrix form)?

    I have from http://www.mathpages.com/rr/appendix/appendix.htm (Section 4), for instance that

    [tex]u_i \cdot u^j = \delta_i^j , u_i \cdot u_j = g_{ij},u^i \cdot u^j = g^{ij}[/tex]

    Can I, without loss of generality, set the orthogonal unit spanning vectors [itex]u_i , u_j , u_k[/itex] equal to the vectors (0,0,1), (0,1,0), (1,0,0) and say something to the effect that

    [tex]\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} = 1[/tex]

    and

    [tex]\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}[/tex]

    Is this last equation in any way related to either the covariant [itex]g_{ij}[/itex] or contravariant [itex]g^{ij}[/itex] metric tensors?
     
  18. Jul 7, 2011 #17

    JDoolin

    Ah, no, I can't. What I've just said limits me to a Cartesian Coordinate System.
     
  19. Jul 7, 2011 #18

    Fredrik

    A (homogeneous) Lorentz transformation can be defined as a 4×4 matrix [itex]\Lambda[/itex] such that [itex]\Lambda^T\eta\Lambda=\eta[/itex]. This condition is equivalent to the requirement that [itex]x^T\eta x[/itex] is preserved by [itex]\Lambda[/itex], i.e. that [tex]x^T\eta x=(\Lambda x)^T\eta(\Lambda x)=x^T\Lambda^T\eta\Lambda x.[/tex] Yes, [itex]\eta[/itex] is what you guessed, i.e. the matrix of components of the Minkowski metric in an inertial coordinate system. I define it with the opposite sign, but that's just an irrelevant convention.

    Recall that the definition of matrix multiplication is [tex](AB)^i{}_j=A^i{}_k B^k{}_j.[/tex] The component on row [itex]\mu[/itex] and column [itex]\nu[/itex] of [itex]\Lambda[/itex] is usually (in the context of SR) written as [itex]\Lambda^\mu{}_\nu[/itex]. You should consider this the "default" convention for 4×4 matrices (in SR), but the convention is different for [itex]\eta[/itex] and [itex]\eta^{-1}[/itex]. Row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta[/itex] is written as [itex]\eta_{\mu\nu}[/itex], and row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta^{-1}[/itex] is written as [itex]\eta^{\mu\nu}[/itex]. These conventions, the definitions of matrix multiplication, and the identity [itex]\eta=\Lambda^T\eta\Lambda[/itex] tell us that [tex]\eta_{\mu\nu}=(\Lambda^T\eta\Lambda)_{\mu\nu} =\Lambda^\rho{}_\mu\eta_{\rho\sigma}\Lambda^\sigma{}_\nu.[/tex] and [tex](\Lambda^{-1})^\mu{}_\nu=(\eta^{-1}\Lambda^T\eta)^\mu{}_\nu =\eta^{\mu\rho} \Lambda^\sigma{}_\rho\eta_{\sigma\nu}=\Lambda_\nu{}^\mu[/tex]
    [itex]\eta[/itex] and its inverse are used to raise and lower indices. The last step above is an example of that. For example, if T is a tensor whose components are written as [itex]T_\mu{}^\nu[/itex], we have [tex]\eta^{\rho\mu}T_\mu{}^\nu=T^{\rho\nu}[/tex].
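
    A short numerical illustration of the defining condition may be useful (a sketch in Python/NumPy; the boost and the sample event are assumed examples): [itex]\Lambda^T\eta\Lambda=\eta[/itex] is exactly the statement that [itex]x^T\eta x[/itex] is unchanged.

[code]
# Sketch: the condition Lambda^T eta Lambda = eta is equivalent to preservation of x^T eta x.
import numpy as np

beta = 0.8                                     # assumed boost speed, for illustration
gamma = 1.0 / np.sqrt(1.0 - beta**2)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])           # the opposite-sign convention mentioned above
L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
              [-gamma*beta,  gamma,      0.0, 0.0],
              [ 0.0,         0.0,        1.0, 0.0],
              [ 0.0,         0.0,        0.0, 1.0]])

print(np.allclose(L.T @ eta @ L, eta))         # True: Lambda is a Lorentz transformation

x = np.array([2.0, 1.0, -0.5, 3.0])            # an arbitrary event
xp = L @ x                                     # x' = Lambda x
print(np.isclose(x @ eta @ x, xp @ eta @ xp))  # True: the interval x^T eta x is preserved
[/code]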
     
  20. Jul 7, 2011 #19

    JDoolin

    I'm not sure if I actually guessed what it was, but now I will:

    [tex]\eta=\begin{pmatrix} -1 & 0 & 0 & 0\\ 0& 1 & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

    (right?)

    With the rotation, the transpose is identical to the inverse.

    [tex]\Lambda^T =\begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}=\Lambda^{-1} =\begin{pmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{pmatrix}[/tex]

    whereas with a hyperbolic rotation, the transpose is the same as the original LT

    [tex]\Lambda = \Lambda^T = \begin{pmatrix} \cosh(\theta) & -\sinh(\theta) \\ -\sinh(\theta) & \cosh(\theta) \end{pmatrix}[/tex]

    whereas the inverse is,

    [tex]\Lambda^{-1} = \begin{pmatrix} \cosh(-\theta) & -\sinh(-\theta) \\ -\sinh(-\theta) & \cosh(-\theta) \end{pmatrix}= \begin{pmatrix} \cosh(\theta) & \sinh(\theta) \\ \sinh(\theta) & \cosh(\theta) \end{pmatrix}[/tex]
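
    A quick check of these transpose/inverse relations (a sketch in Python/NumPy; the value of [itex]\theta[/itex] is just an assumed example):

[code]
# Sketch: rotation vs hyperbolic rotation -- which of transpose and inverse coincide.
import numpy as np

th = 0.7                                               # assumed angle / rapidity

R = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])              # ordinary rotation
B = np.array([[ np.cosh(th), -np.sinh(th)],
              [-np.sinh(th),  np.cosh(th)]])           # hyperbolic rotation (boost)

print(np.allclose(R.T, np.linalg.inv(R)))              # True: for a rotation, transpose = inverse
print(np.allclose(B, B.T))                             # True: the boost equals its own transpose
B_minus = np.array([[np.cosh(th), np.sinh(th)],
                    [np.sinh(th), np.cosh(th)]])       # the boost with theta -> -theta
print(np.allclose(np.linalg.inv(B), B_minus))          # True: inverse = boost with opposite rapidity

eta2 = np.diag([1.0, -1.0])
print(np.allclose(B.T @ eta2 @ B, eta2))               # True: the boost preserves eta, not the identity
[/code]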

    I wonder about the reasoning behind taking the LT and the transpose of the LT, sticking [itex]\eta[/itex] in between, and deciding to look for the transformations that preserve [itex]\eta[/itex], instead of using, for instance, the LT and the inverse of the LT. I mean, aside from the fact that it gives you the right answer. Why did... who was it, Poincare?... decide that was what was necessary, or an interesting problem? Or more to the point, what problem exactly was he working on when he figured out those steps were necessary?
     
  21. Jul 8, 2011 #20
    Thank you Fredrik, that was very helpful. Now, if I can bring up my original question again, what I don't understand is this.

    We have [itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex] but also [tex]\frac{\partial}{\partial x'^\nu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
    Isn't this inconsistent? When we make a Lorentz transformation, we should be using the same transformation matrix for all four-vectors, right? But [itex]\Lambda^{\nu}_{\ \ \mu}\neq\Lambda^{\ \ \mu}_{\nu}[/itex] unless [itex]\Lambda^{-1}=\Lambda^{T}[/itex].

    But this condition is not true, right? We can consider the matrix


    [tex]\begin{pmatrix} \gamma & -\gamma\beta & 0 & 0\\ -\gamma\beta& \gamma & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

    Thank you for your help.
     