Lorentz covariance of differential operator

In summary: the simplest way to do it is the chain rule, [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex]
  • #1
McLaren Rulez
Hi,

Can anyone show me how to prove that the differential operator, i.e. [itex]\partial_{\mu}[/itex], is Lorentz covariant. In other words, [itex]\partial /\partial x'_{\nu}=\Lambda^{\nu}_{\mu}\partial /\partial x^{\mu}[/itex].

And once this is done, how can I show that the d'Alembert operator [itex]\partial_{\mu}\partial^{\mu}[/itex] is invariant, in the sense that it is the same in all frames? I understand that all four-vectors have a frame-independent magnitude, but I am not sure how to apply this when dealing with differential operators.

Thank you
 
  • #2
McLaren Rulez said:
Can anyone show me how to prove that the differential operator, i.e. [itex]\partial_{\mu}[/itex], is Lorentz covariant. In other words, [itex]\partial /\partial x'_{\mu}=\Lambda^{\nu}_{\mu}\partial /\partial x^{\mu}[/itex]

Your free indices don't match. Use the chain rule.
 
  • #3
George Jones said:
Your free indices don't match. Use the chain rule.

Sorry about that. Fixed now.
 
  • #4
Chain rule:
[tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex]
 
  • #5
When the [itex]\partial_\mu[/itex] are the partial derivative operators associated with a coordinate system on a manifold, you should use the definition in post #3 here, and do the calculation the way I did it in #5. (It's still just the chain rule).
 
  • #6
Fredrik said:
When the [itex]\partial_\mu[/itex] are the partial derivative operators associated with a coordinate system on a manifold, you should use the definition in post #3 here, and do the calculation the way I did it in #5. (It's still just the chain rule).

I suspect that McLaren Rulez is working with Lorentz transformations between inertial coordinate systems in special relativity, i.e., global coordinate systems on R^4 and introductory multivariable calculus.
 
  • #7
Yes, I got that impression too. What I said may still be useful since it explains what (this kind of) partial derivative operators have to do with coordinate systems. If he already understands the connection between (the standard kind of) partial derivative operators and coordinate systems, he will find your answer significantly easier to understand.
 
  • #8
Fredrik said:
Yes, I got that impression too. What I said may still be useful since it explains what (this kind of) partial derivative operators have to do with coordinate systems. If he already understands the connection between (the standard kind of) partial derivative operators and coordinate systems, he will find your answer significantly easier to understand.

Sorry, I don't know the connection between partial derivative operators and coordinate systems that you're referring to. I'm just studying some QM on my own, and this came up while dealing with the Klein-Gordon equation, where I see a lot of derivative operators treated as four-vectors.

Anyway, my question is: If we have [itex]x'^{\nu}=\Lambda^{\nu}_{\mu} x^{\mu}[/itex] then we get [itex]\partial x'^{\nu}=\Lambda^{\nu}_{\mu} \partial x^{\mu}[/itex].

But when we use the chain rule,

[tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex]

and express [itex]\partial x'^{\nu}=\Lambda^{\nu}_{\mu} \partial x^{\mu}[/itex]

then we get [itex]\Lambda^{\nu}_{\mu}\partial /\partial x'_{\nu}=\partial /\partial x^{\mu}[/itex]

instead of [itex]\partial /\partial x'_{\nu}=\Lambda^{\nu}_{\mu}\partial /\partial x^{\mu}[/itex]

So what is my error? Thank you for the replies!
 
  • #9
You know what, just forget what I said before. That saves us both some time. :smile:

Let's focus on the problem at hand. You got this part right: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}[/tex] The next step is [tex]=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
I can't explain what exactly you're doing wrong, because I don't really understand what you're doing.

A few comments about the notation: Don't write [itex]\partial x'^\mu[/itex] when what you have in mind is a differential. The standard notation is [itex]dx'^\mu[/itex]. If you intend to raise and lower indices using the metric (you're already doing that in post #1), you shouldn't use the notation [itex]\Lambda^\mu_\nu[/itex]. You will have to distinguish between [itex]\Lambda^\mu{}_\nu[/itex] and [itex]\Lambda_\nu{}^\mu[/itex]. The latter is defined to mean row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\Lambda^{-1}=\eta\Lambda^T\eta[/itex].
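
(If you want to watch this chain-rule step happen concretely, here is a minimal sympy sketch in 1+1 dimensions with c = 1; the sample scalar field and the symbol names are arbitrary choices of mine, not standard notation.)

[code]
# sympy sketch (sympy assumed): d/dx'^nu = (Lambda^{-1})^mu_nu d/dx^mu
import sympy as sp

t, x, v = sp.symbols('t x v', real=True)
tp, xp = sp.symbols('tp xp', real=True)      # the primed coordinates t', x'
g = 1/sp.sqrt(1 - v**2)                      # gamma

Lam = sp.Matrix([[g, -g*v], [-g*v, g]])      # x' = Lambda x
Lam_inv = sp.simplify(Lam.inv())             # Lambda^{-1}

# Unprimed coordinates as functions of the primed ones: x = Lambda^{-1} x'
t_of, x_of = Lam_inv @ sp.Matrix([tp, xp])

phi = sp.sin(t)*sp.exp(x)                    # any scalar field phi(t, x)
phi_p = phi.subs({t: t_of, x: x_of})         # the same field in primed coordinates

for nu, Xp in enumerate([tp, xp]):
    lhs = sp.diff(phi_p, Xp)                 # d phi / d x'^nu
    rhs = sum(Lam_inv[mu, nu]*sp.diff(phi, X).subs({t: t_of, x: x_of})
              for mu, X in enumerate([t, x]))
    print(sp.simplify(lhs - rhs))            # 0, both times
[/code]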
 
  • #10
McLaren Rulez said:
If we have [itex]x'^{\nu}=\Lambda^{\nu}_{\mu} x^{\mu}[/itex]

Because [itex]\Lambda[/itex] is not symmetric, indices should not be written directly above/below each other. Instead, write something like [itex]x'^\nu = \Lambda^\nu {}_\mu x^\mu[/itex].

There is a lot of index gymnastic stuff that you should know before doing calculations like this. For example:

[tex]\eta^{\alpha \beta} \eta_{\beta \nu} = \delta^\alpha {}_\nu[/tex]
[tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]
[tex]\Lambda_\alpha {}^\nu = \eta_{\alpha \beta} \eta^{\mu \nu} \Lambda^\beta {}_\mu[/tex]
The second equality is just the definition of a Lorentz transformation. Do you understand the third equality?

Now, in order to write the x s in terms of the x' s, calculate
[tex]\Lambda_\nu {}^\alpha x'^\nu = \Lambda_\nu {}^\alpha \Lambda^\nu {}_\mu x^\mu[/tex]

[edit]While I was writing and calculating, Fredrik made a more elegant post.[/edit]
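
(These identities are easy to check numerically. A NumPy sketch, for a boost along x with c = 1; beta = 0.6 is an arbitrary choice, and numpy is assumed.)

[code]
import numpy as np

beta = 0.6
gamma = 1/np.sqrt(1 - beta**2)

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # eta_{mu nu}
eta_inv = np.linalg.inv(eta)             # eta^{mu nu}
Lam = np.array([[ gamma, -gamma*beta, 0, 0],
                [-gamma*beta, gamma,  0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])           # Lambda^mu_nu

# eta^{ab} eta_{bn} = delta^a_n
print(np.allclose(eta_inv @ eta, np.eye(4)))                          # True

# eta_{ab} Lambda^b_m Lambda^a_n = eta_{mn}: the definition of a LT
print(np.allclose(np.einsum('ab,bm,an->mn', eta, Lam, Lam), eta))     # True

# Lambda_a^n = eta_{ab} eta^{mn} Lambda^b_m ...
Lam_low = np.einsum('ab,mn,bm->an', eta, eta_inv, Lam)
# ... and contracting it with Lambda gives a Kronecker delta,
# which is what lets you solve x' = Lambda x for x:
print(np.allclose(Lam_low.T @ Lam, np.eye(4)))                        # True
[/code]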
 
  • #11
You're both right, I'm not yet comfortable with the index notation. I thought that an index on the left meant row and right meant column. Is there a comprehensive guide to it? Most of my internet searches for index notation throw up stuff I already know like summing over dummy indices and such.

Also, what is the [itex]\eta[/itex] matrix you both used? Is this the metric, i.e. the diag(1, -1, -1, -1) matrix?

Thank you so much and my apologies for the rather basic questions. I realize my background is a little bit insufficient at the moment.
 
  • #12
McLaren, I'm also trying to clear up my confusion with the index notation. This is a completely different problem, but maybe it could help us both.

I have been told that the equation for curl in Einstein notation is

[tex]\nabla \times \vec V = \partial_\mu V_\nu - \partial_\nu V_\mu[/tex]

Could you verify (confirm or correct) for me that this is because:

[tex]\partial_\mu V_\nu \overset{def?}{=} \partial_x V_y \vec u_z + \partial_y V_z \vec u_x + \partial_z V_x \vec u_y[/tex]

[tex]\partial_\nu V_\mu \overset{def?}{=} \partial_z V_y \vec u_x + \partial_y V_x \vec u_z + \partial_x V_z \vec u_y[/tex]


This would also work if somehow:

[tex]\partial_\mu = \begin{pmatrix} \partial_y\\ \partial_z\\ \partial_x \end{pmatrix}, V_\mu=\begin{pmatrix} V_y & V_z & V_x \end{pmatrix}[/tex]

and

[tex]\partial_\nu = \begin{pmatrix} \partial_z\\ \partial_x\\ \partial_y \end{pmatrix}, V_\nu=\begin{pmatrix} V_z & V_x & V_y \end{pmatrix}[/tex]

Can anybody confirm that, or explain what I got wrong?
 
  • #13
I think it should be

[itex](\nabla \times V)_{i} = \epsilon_{ijk}\partial_{j}V_{k}[/itex]

Since you sum over repeated (dummy) indices, you let j and k run through all the possibilities. So if you label the three components of the curl vector as 1, 2 and 3, and pick i=1, you get

[itex](\nabla \times V)_{1}= \partial V_{3}/\partial x_{2} - \partial V_{2}/\partial x_{3}[/itex] because the [itex]\epsilon_{ijk}[/itex] gives these two nonzero terms and the minus sign.

http://en.wikipedia.org/wiki/Levi-Civita_symbol

Now you have the first component of the curl vector, which you could also have worked out from the cross-product determinant. Similarly, you get [itex](\nabla \times V)_{2}[/itex] and [itex](\nabla \times V)_{3}[/itex], the remaining components of the curl vector.
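
(If you want to check this symbolically, here is a small sympy sketch; the vector field is an arbitrary choice of mine.)

[code]
# sympy sketch (sympy assumed): (curl V)_i = epsilon_{ijk} d_j V_k,
# checked against the textbook components for an arbitrary field.
import sympy as sp
from sympy import LeviCivita

x, y, z = sp.symbols('x y z')
X = [x, y, z]
V = [y*z, x**2, sp.sin(x)*z]                 # V_k, an arbitrary choice

curl = [sum(LeviCivita(i, j, k)*sp.diff(V[k], X[j])
            for j in range(3) for k in range(3))
        for i in range(3)]

direct = [sp.diff(V[2], y) - sp.diff(V[1], z),
          sp.diff(V[0], z) - sp.diff(V[2], x),
          sp.diff(V[1], x) - sp.diff(V[0], y)]

print([sp.simplify(a - b) for a, b in zip(curl, direct)])   # [0, 0, 0]
[/code]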
 
  • #14
That works! Thanks!
 
  • #15
George Jones said:
Because [itex]\Lambda[/itex] is not symmetric, indices should not be written directly above/below each other. Instead, write something like [itex]x'^\nu = \Lambda^\nu {}_\mu x^\mu[/itex].

There is a lot of index gymnastic stuff that you should know before doing calculations like this. For example:

[tex]\eta^{\alpha \beta} \eta_{\beta \nu} = \delta^\alpha {}_\nu[/tex]
[tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]
[tex]\Lambda_\alpha {}^\nu = \eta_{\alpha \beta} \eta^{\mu \nu} \Lambda^\beta {}_\mu[/tex]
The second equality is just the definition of a Lorentz transformation. Do you understand the third equality?

Now, in order to write the x s in terms of the x' s, calculate
[tex]\Lambda_\nu {}^\alpha x'^\nu = \Lambda_\nu {}^\alpha \Lambda^\nu {}_\mu x^\mu[/tex]

[edit]While I was writing and calculating, Fredrik made a more elegant post.[/edit]

Okay, a long, long time ago,

https://www.physicsforums.com/showthread.php?t=430956

I was looking at different ways of defining a Lorentz Transformation, but it didn't get into tensor notation.

Can anyone tell me how
[tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]

translates into the Lorentz Transformation?
 
  • #16
JDoolin said:
Okay, a long, long time ago,

https://www.physicsforums.com/showthread.php?t=430956

I was looking at different ways of defining a Lorentz Transformation, but it didn't get into tensor notation.

Can anyone tell me how
[tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]

translates into the Lorentz Transformation?

Okay, it's been a couple of days, so let me ask a simpler question:

What are the names of the symbols:

[tex]\eta_{\alpha \beta}, \Lambda^\beta {}_\mu ,g_{ij},\delta_i^j[/tex] so that I can look them up?

Can these things in any way be expressed in matrix form (even in a multi-dimensional matrix form)?

I have from http://www.mathpages.com/rr/appendix/appendix.htm (Section 4), for instance that

[tex]u_i \cdot u^j = \delta_i^j , u_i \cdot u_j = g_{ij},u^i \cdot u^j = g^{ij}[/tex]

Can I, without loss of generality, set the orthogonal unit spanning vectors [itex]u_i , u_j , u_k[/itex] to equal the vectors (0,0,1), (0,1,0), (1,0,0) and say something to the effect that

[tex]\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix} = 1[/tex]

and

[tex]\begin{pmatrix} 0\\ 0\\ 1 \end{pmatrix}\begin{pmatrix} 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix}[/tex]

Is this last equation in any way related to either the covariant [itex]g_{ij}[/itex] or contravariant [itex]g^{ij}[/itex] metric tensors?
 
  • #17
JDoolin said:
Can I, without loss of generality, set the orthogonal unit spanning vectors [itex]u_i , u_j , u_k[/itex] to equal the vectors (0,0,1), (0,1,0), (1,0,0)

Ah, no, I can't. What I've just said limits me to a Cartesian coordinate system.
 
  • #18
A (homogeneous) Lorentz transformation can be defined as a 4×4 matrix [itex]\Lambda[/itex] such that [itex]\Lambda^T\eta\Lambda=\eta[/itex]. This condition is equivalent to the requirement that [itex]x^T\eta x[/itex] is preserved by [itex]\Lambda[/itex], i.e. that [tex]x^T\eta x=(\Lambda x)^T\eta(\Lambda x)=x^T\Lambda^T\eta\Lambda x.[/tex] Yes, [itex]\eta[/itex] is what you guessed, i.e. the matrix of components of the Minkowski metric in an inertial coordinate system. I define it with the opposite sign, but that's just an irrelevant convention.

Recall that the definition of matrix multiplication is [tex](AB)^i{}_j=A^i{}_k B^k{}_j.[/tex] The component on row [itex]\mu[/itex] and column [itex]\nu[/itex] of [itex]\Lambda[/itex] is usually (in the context of SR) written as [itex]\Lambda^\mu{}_\nu[/itex]. You should consider this the "default" convention for 4×4 matrices (in SR), but the convention is different for [itex]\eta[/itex] and [itex]\eta^{-1}[/itex]. Row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta[/itex] is written as [itex]\eta_{\mu\nu}[/itex], and row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta^{-1}[/itex] is written as [itex]\eta^{\mu\nu}[/itex]. These conventions, the definitions of matrix multiplication, and the identity [itex]\eta=\Lambda^T\eta\Lambda[/itex] tell us that [tex]\eta_{\mu\nu}=(\Lambda^T\eta\Lambda)_{\mu\nu} =\Lambda^\rho{}_\mu\eta_{\rho\sigma}\Lambda^\sigma{}_\nu.[/tex] and [tex](\Lambda^{-1})^\mu{}_\nu=(\eta^{-1}\Lambda^T\eta)^\mu{}_\nu =\eta^{\mu\rho} \Lambda^\sigma{}_\rho\eta_{\sigma\nu}=\Lambda_\nu{}^\mu[/tex]
[itex]\eta[/itex] and its inverse are used to raise and lower indices. The last step above is an example of that. For example, if T is a tensor whose components are written as [itex]T_\mu{}^\nu[/itex], we have [tex]\eta^{\rho\mu}T_\mu{}^\nu=T^{\rho\nu}[/tex].
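
(All of these conventions can be verified numerically. A NumPy sketch, with an arbitrary boost speed; the tensor T is just stand-in data, and numpy is assumed.)

[code]
import numpy as np

beta = 0.6
gamma = 1/np.sqrt(1 - beta**2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])   # one common sign convention
eta_inv = np.linalg.inv(eta)

Lam = np.array([[ gamma, -gamma*beta, 0, 0],
                [-gamma*beta, gamma,  0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

# Lambda^T eta Lambda = eta, so Lam really is a Lorentz transformation
print(np.allclose(Lam.T @ eta @ Lam, eta))                       # True

# (Lambda^{-1})^mu_nu = eta^{mu rho} Lambda^sigma_rho eta_{sigma nu}
print(np.allclose(np.linalg.inv(Lam),
                  np.einsum('mr,sr,sn->mn', eta_inv, Lam, eta))) # True

# Raising an index: eta^{rho mu} T_mu^nu = T^{rho nu}
T = np.arange(16.0).reshape(4, 4)        # stand-in components T_mu^nu
T_up = np.einsum('rm,mn->rn', eta_inv, T)
print(T_up[0])                           # time row unchanged
print(T_up[1])                           # spatial rows flip sign
[/code]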
 
  • #19
Fredrik said:
A (homogeneous) Lorentz transformation can be defined as a 4×4 matrix [itex]\Lambda[/itex] such that [itex]\Lambda^T\eta\Lambda=\eta[/itex]. This condition is equivalent to the requirement that [itex]x^T\eta x[/itex] is preserved by [itex]\Lambda[/itex], i.e. that [tex]x^T\eta x=(\Lambda x)^T\eta(\Lambda x)=x^T\Lambda^T\eta\Lambda x.[/tex] Yes, [itex]\eta[/itex] is what you guessed, i.e. the matrix of components of the Minkowski metric in an inertial coordinate system. I define it with the opposite sign, but that's just an irrelevant convention.

Recall that the definition of matrix multiplication is [tex](AB)^i{}_j=A^i{}_k B^k{}_j.[/tex] The component on row [itex]\mu[/itex] and column [itex]\nu[/itex] of [itex]\Lambda[/itex] is usually (in the context of SR) written as [itex]\Lambda^\mu{}_\nu[/itex]. You should consider this the "default" convention for 4×4 matrices (in SR), but the convention is different for [itex]\eta[/itex] and [itex]\eta^{-1}[/itex]. Row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta[/itex] is written as [itex]\eta_{\mu\nu}[/itex], and row [itex]\mu[/itex], column [itex]\nu[/itex] of [itex]\eta^{-1}[/itex] is written as [itex]\eta^{\mu\nu}[/itex]. These conventions, the definitions of matrix multiplication, and the identity [itex]\eta=\Lambda^T\eta\Lambda[/itex] tell us that [tex]\eta_{\mu\nu}=(\Lambda^T\eta\Lambda)_{\mu\nu} =\Lambda^\rho{}_\mu\eta_{\rho\sigma}\Lambda^\sigma{}_\nu.[/tex] and [tex](\Lambda^{-1})^\mu{}_\nu=(\eta^{-1}\Lambda^T\eta)^\mu{}_\nu =\eta^{\mu\rho} \Lambda^\sigma{}_\rho\eta_{\sigma\nu}=\Lambda_\nu{}^\mu[/tex]
[itex]\eta[/itex] and its inverse are used to raise and lower indices. The last step above is an example of that. For example, if T is a tensor whose components are written as [itex]T_\mu{}^\nu[/itex], we have [tex]\eta^{\rho\mu}T_\mu{}^\nu=T^{\rho\nu}[/tex].

I'm not sure if I actually guessed what it was, but now I will:

[tex]\eta=\begin{pmatrix} -1 & 0 & 0 & 0\\ 0& 1 & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

(right?)

With a rotation, the transpose is identical to the inverse.

[tex]\Lambda^T =\begin{pmatrix} \cos(\theta) & \sin(\theta) \\ -\sin(\theta) & \cos(\theta) \end{pmatrix}=\Lambda^{-1} =\begin{pmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{pmatrix}[/tex]

whereas with a hyperbolic rotation, the transpose is the same as the original LT

[tex]\Lambda = \Lambda^T = \begin{pmatrix} \cosh(\theta) & -\sinh(\theta) \\ -\sinh(\theta) & \cosh(\theta) \end{pmatrix}[/tex]

whereas the inverse is,

[tex]\Lambda^{-1} = \begin{pmatrix} \cosh(-\theta) & -\sinh(-\theta) \\ -\sinh(-\theta) & \cosh(-\theta) \end{pmatrix}= \begin{pmatrix} \cosh(\theta) & \sinh(\theta) \\ \sinh(\theta) & \cosh(\theta) \end{pmatrix}[/tex]

I wonder about the reasoning behind taking the LT and the transpose of the LT, sticking [itex]\eta[/itex] in between, and deciding to look for the mathematical entities that preserve [itex]\eta[/itex], instead of using, for instance, the LT and the inverse of the LT. I mean, aside from the fact that it gives you the right answer. Why did... who was it, Poincare?... decide that was what was necessary, or an interesting problem? Or more to the point, what problem exactly was he working on when he figured out those steps were necessary?
 
  • #20
Thank you Fredrik, that was very helpful. Now, if I can bring up my original question again, what I don't understand is this.

We have [itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex] but also [tex]\frac{\partial}{\partial x'^\nu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
Isn't this inconsistent? When we make a Lorentz transformation, we should be using the same transformation matrix for all four-vectors, right? But [itex]\Lambda^{\nu}_{\ \ \mu}\neq\Lambda^{\ \ \mu}_{\nu}[/itex] unless [itex]\Lambda^{-1}=\Lambda^{T}[/itex].

But this condition is not true, right? We can consider the matrix [tex]\begin{pmatrix} \gamma & -\gamma\beta & 0 & 0\\ -\gamma\beta& \gamma & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

Thank you for your help.
 
  • #21
McLaren Rulez said:
Thank you Fredrik, that was very helpful. Now, if I can bring up my original question again, what I don't understand is this.

We have [itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex] but also [tex]\frac{\partial}{\partial x'^\nu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
Isn't this inconsistent? When we make a Lorentz transformation, we should be using the same transformation matrix for all four-vectors, right? But [itex]\Lambda^{\nu}_{\ \ \mu}\neq\Lambda^{\ \ \mu}_{\nu}[/itex] unless [itex]\Lambda^{-1}=\Lambda^{T}[/itex].

But this condition is not true, right? We can consider the matrix


[tex]\begin{pmatrix} \gamma & -\gamma\beta & 0 & 0\\ -\gamma\beta& \gamma & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

Thank you for your help.


[itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex]

would resemble:

[tex]\begin{pmatrix} t'\\ x'\\ y'\\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma \beta & 0 & 0\\ - \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} t\\ x\\ y\\ z \end{pmatrix}[/tex]

while
[tex]\frac{\partial}{\partial x'^\nu}\overset ? = \Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]

Would represent, expanded out:
[tex]\begin{pmatrix} \frac{\partial }{\partial t'}\\ \frac{\partial }{\partial x'}\\ \frac{\partial }{\partial y'}\\ \frac{\partial }{\partial z'} \end{pmatrix} V(t,x,y,z) \overset ? = \begin{pmatrix} \gamma & -\gamma \beta & 0 & 0\\ - \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{\partial }{\partial t}\\ \frac{\partial }{\partial x}\\ \frac{\partial }{\partial y}\\ \frac{\partial }{\partial z} \end{pmatrix}V(t,x,y,z)[/tex]

This looks to me like a gradient of a potential or density field. I have to think about this some more, but vaguely, this equation represents [itex]d V = \vec u^i dV_i[/itex]

When it might be better to say [itex]d V = \vec u^i dV_i + A_i d \vec u^i[/itex]

However, I'm not entirely sure how to discuss [itex] d \vec u^i[/itex] explicitly or physically. It might be zero?
 
  • #22
McLaren Rulez said:
We can consider the matrix


[tex]\begin{pmatrix} \gamma & -\gamma\beta & 0 & 0\\ -\gamma\beta& \gamma & 0 & 0\\ 0 & 0 & 1 &0 \\ 0&0 & 0& 1 \end{pmatrix}[/tex]

Thank you for your help.

You probably already know this, but you can freely replace [itex]\begin{matrix} \gamma \beta = \sinh(\theta)\\ \gamma = \cosh(\theta) \end{matrix}[/itex] when you make the replacement [itex]\beta = \tanh(\theta)[/itex]

Proof (at least a part of it):
[tex]\begin{align*}
&\text{If } \cosh(\theta)=\gamma \text{, then } \beta = \tanh(\theta).\\
&\frac{e^\theta + e^{-\theta}}{2} = \frac{1}{\sqrt{1 - \beta^2}} \quad \text{(given)}\\
&\text{Solve for } \beta^2:\\
&\beta^2 = 1 - \frac{4}{(e^\theta + e^{-\theta})^2} \\
&\quad{}=\frac{(e^\theta + e^{-\theta})^2 - 4}{(e^\theta + e^{-\theta})^2}\\
&\quad{}=\frac{e^{2\theta} - 2 + e^{-2\theta}}{(e^\theta + e^{-\theta})^2}\\
&\quad{}=\frac{(e^\theta - e^{-\theta})^2}{(e^\theta + e^{-\theta})^2}
=\tanh^2(\theta)
\end{align*}[/tex]

If I'm not mistaken, the rapidity θ is twice the area swept out by following a path on a unit hyperbola, just as, in regular trigonometry, the angle θ is twice the area swept out by following a path on a unit circle.
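
(A quick numerical check of the substitution, in plain Python; the rapidity values are arbitrary.)

[code]
import math

theta = 0.8                                           # rapidity
beta = math.tanh(theta)
gamma = 1/math.sqrt(1 - beta**2)

print(math.isclose(gamma, math.cosh(theta)))          # True
print(math.isclose(gamma*beta, math.sinh(theta)))     # True

# Rapidities simply add under composition of collinear boosts,
# which is part of what makes the hyperbolic parametrization so convenient:
b1, b2 = math.tanh(0.3), math.tanh(0.5)
added = (b1 + b2)/(1 + b1*b2)                         # relativistic velocity addition
print(math.isclose(added, math.tanh(0.3 + 0.5)))      # True
[/code]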
 
  • #23
JDoolin said:
(right?)
Yes.

JDoolin said:
I wonder about the reasoning behind taking the LT and the transpose of the LT, sticking [itex]\eta[/itex] in between, and deciding to look for the mathematical entities that preserve [itex]\eta[/itex], instead of using, for instance, the LT and the inverse of the LT. I mean, aside from the fact that it gives you the right answer.
I'm not sure I understand the question. I would say that we are using [itex]\Lambda[/itex] and [itex]\Lambda^{-1}[/itex].

McLaren Rulez said:
We have [itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex] but also [tex]\frac{\partial}{\partial x'^\nu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]
Isn't this inconsistent?
No, the first equality implies the second: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]

(Don't forget that [itex]x'=\Lambda x\Leftrightarrow x=\Lambda^{-1}x'[/itex]).

McLaren Rulez said:
unless [itex]\Lambda^{-1}=\Lambda^{T}[/itex]
This implies v=0. [tex]\Lambda=\gamma\begin{pmatrix}1 & -v\\ -v & 1\end{pmatrix}[/tex]

I don't have time to elaborate much right now.
 
  • #24
Fredrik said:
Yes.


I'm not sure I understand the question. I would say that we are using [itex]\Lambda[/itex] and [itex]\Lambda^{-1}[/itex].

It appears to me that this equality [itex]\Lambda^T \eta \Lambda =\eta[/itex] is an implicit definition of the Lorentz transformations (so long as you know what [itex]\eta[/itex] is). But within that definition, you don't see the inverse, just the transpose.

[tex]\begin{align*} \Lambda^T \eta \Lambda &= \begin{pmatrix} \cosh & \sinh\\ \sinh & \cosh \end{pmatrix} \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} \cosh & \sinh\\ \sinh & \cosh \end{pmatrix}\\ &= \begin{pmatrix} -\cosh & \sinh\\ -\sinh & \cosh \end{pmatrix} \begin{pmatrix} \cosh & \sinh\\ \sinh & \cosh \end{pmatrix} \\ &= \begin{pmatrix} \sinh^2-\cosh^2 & \cosh\sinh - \cosh\sinh \\ \cosh\sinh - \cosh\sinh & \cosh^2 - \sinh^2 \end{pmatrix} \\ &= \begin{pmatrix} -1 &0 \\ 0& 1 \end{pmatrix} \end{align*}[/tex]




Fredrik said:
No, the first equality implies the second: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]


Ahh, I think that is the Einstein Notation for divergence that I was looking for.

https://www.physicsforums.com/showthread.php?t=511811 Post #8
 
  • #25
JDoolin said:
[itex]x'^{\nu} = \Lambda^{\nu}_{\ \ \mu}x^{\mu}[/itex]

would resemble:

[tex]\begin{pmatrix} t'\\ x'\\ y'\\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma \beta & 0 & 0\\ - \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} t\\ x\\ y\\ z \end{pmatrix}[/tex]

while
[tex]\frac{\partial}{\partial x'^\nu}\overset ? = \Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]

Would represent, expanded out:
[tex]\begin{pmatrix} \frac{\partial }{\partial t'}\\ \frac{\partial }{\partial x'}\\ \frac{\partial }{\partial y'}\\ \frac{\partial }{\partial z'} \end{pmatrix} V(t,x,y,z) \overset ? = \begin{pmatrix} \gamma & -\gamma \beta & 0 & 0\\ - \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{\partial }{\partial t}\\ \frac{\partial }{\partial x}\\ \frac{\partial }{\partial y}\\ \frac{\partial }{\partial z} \end{pmatrix}V(t,x,y,z)[/tex]

Hang on. Let's go from [itex]\Lambda^{\nu}_{\ \ \mu}[/itex] to [itex]\Lambda^{\ \ \mu}_{\nu}[/itex]

So we have [itex]\Lambda^{\nu}_{\ \ \mu}= (\Lambda^{-1})^{\ \ \ \ \nu}_{\mu}= ((\Lambda^{-1})^{T})^{\ \ \ \mu}_{\nu}=((\Lambda^{T})^{-1})^{\ \ \ \mu}_{\nu}[/itex]

Which means that we should get this instead (the minus signs are gone):

[tex]\begin{pmatrix} \frac{\partial }{\partial t'}\\ \frac{\partial }{\partial x'}\\ \frac{\partial }{\partial y'}\\ \frac{\partial }{\partial z'} \end{pmatrix} V(t,x,y,z) \overset ? = \begin{pmatrix} \gamma & \gamma \beta & 0 & 0\\ \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \frac{\partial }{\partial t}\\ \frac{\partial }{\partial x}\\ \frac{\partial }{\partial y}\\ \frac{\partial }{\partial z} \end{pmatrix}V(t,x,y,z)[/tex]

Isn't this so?

Fredrik said:
No, the first equality implies the second: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]

I get that the first equality implies the second, but therein lies my problem. If we have any four-vector, the transformation matrix should be the same, correct? Now to transform the four-vector [itex](x_{0}, x_{1}, x_{2}, x_{3})[/itex] to [itex](x'_{0}, x'_{1}, x'_{2}, x'_{3})[/itex] we are using one matrix, but to transform the four-vector [itex](\partial x_{0}, \partial x_{1}, \partial x_{2}, \partial x_{3})[/itex] to [itex](\partial x'_{0}, \partial x'_{1}, \partial x'_{2}, \partial x'_{3})[/itex] we are using another matrix, namely [itex]\begin{pmatrix} \gamma & -\gamma \beta & 0 & 0\\ - \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix}[/itex] for the former and [itex]\begin{pmatrix} \gamma & \gamma \beta & 0 & 0\\ \gamma \beta & \gamma & 0 & 0\\ 0 &0 &1 &0 \\ 0 & 0 & 0 & 1 \end{pmatrix}[/itex] for the latter. The two are equal only if v=0, which is weird. My question basically is: why do we have two different matrices to transform different four-vectors? I thought that the transformation matrix was universal for all four-vectors.

Fredrik said:
I don't have time to elaborate much right now.

I understand completely. I appreciate the fact that you're making such an effort to help me with this. Please take your time, I am in no hurry since this is just holiday reading with no deadlines :)
 
  • #26
JDoolin said:
Okay, a Long, long time ago,

https://www.physicsforums.com/showthread.php?t=430956

I was looking at different ways of defining a Lorentz Transformation, but it didn't get into tensor notation.

Can anyone tell me how
[tex]\eta_{\alpha \beta} \Lambda^\beta {}_\mu \Lambda^\alpha {}_\nu = \eta_{\mu \nu}[/tex]

translates into the Lorentz Transformation?

If you have MTW, try reading around pg 66.

The Lorentz transform is the transform of a vector. You're asking about how a matrix transforms.

So, let's go back and compare how a matrix transforms the old way, by which I mean using the old engineering "matrix" notation, and using tensor notation.

[tex]\vec{x'} = L \vec{x}[/tex] in vector notation

[tex]x^{a'} = \Lambda^{a'}{}_{a} x^{a}[/tex] in tensor notation.

Note that both L here and [itex]\Lambda[/itex] act just like a transformation matrix does. They are a linear map from a vector to a vector.

In the old notation we had column vectors and row vectors. The row vectors were duals, though they weren't called that, at least not usually. The product of a vector (column vector) and its dual (row vector) is a scalar; that's what makes them duals.

There isn't any direct equivalent for a rank two tensor of the form [itex]g_{ab}[/itex] in the old notation. [itex]g_{ab}[/itex] is a map from two vectors to a scalar, or a map from a column vector to a row vector; there isn't any matrix notation for that (though there is tensor notation for it).

The closest thing we have in the old matrix notation is [tex]g^{a}{}_{b}[/tex], a map from a vector to a vector.

The reverse transforms look like
[tex]\vec{x} = L^{-1} \vec{x'}[/tex]
[tex]X^{a} = \Lambda^{a}{}_{a'} X^{a'}[/tex]

So note that [tex] \Lambda^{a'}{}_{b} \Lambda^{b}{}_{b'} = \delta^{a'}{}_{b'} [/tex]; the product of the two is an identity transformation. In the old notation, a matrix transformed as

[itex] g' = L^{-1} g L [/itex]

This is the most similar to transforming

[tex]g^{a'}{}_{b'} = g^{a}{}_{b} \Lambda^{a'}{}_{a} \Lambda^{b}{}_{b'}[/tex]

If you write this out component by component, you'll see it's identical.
 
  • #27
JDoolin said:
It appears to me that this equality [itex]\Lambda^T \eta \Lambda =\eta[/itex] is an implicit definition of the Lorentz transformations (so long as you know what [itex]\eta[/itex] is). But within that definition, you don't see the inverse, just the transpose.
Yes, it's one of several different ways to define the term "Lorentz transformation". Multiply that equation by [itex]\eta^{-1}[/itex] from the left and you'll see that [itex]\Lambda^{-1}=\eta^{-1}\Lambda^T\eta[/itex].

JDoolin said:
Ahh, I think that is the Einstein Notation for divergence that I was looking for.

https://www.physicsforums.com/showthread.php?t=511811 Post #8
I'd go with what Mentz told you in #2. Actually, if we're only dealing with [itex]\mathbb R^n[/itex], there's no reason not to write all indices downstairs. With this notation, we have [tex]\begin{align}\nabla\cdot V &=\partial_i V_i\\ \nabla\times V &=e_i \varepsilon_{ijk}\partial_j V_k\end{align}[/tex] That epsilon-thingy is called the Levi-Civita symbol. See Wikipedia, or this thread. The [itex]e_i[/itex] are the basis vectors for [itex]\mathbb R^3[/itex].
 
  • #28
McLaren Rulez said:
Hang on. Let's go from [itex]\Lambda^{\nu}_{\ \ \mu}[/itex] to [itex]\Lambda^{\ \ \mu}_{\nu}[/itex]

So we have [itex]\Lambda^{\nu}_{\ \ \mu}= (\Lambda^{-1})^{\ \ \ \ \nu}_{\mu}= ((\Lambda^{-1})^{T})^{\ \ \ \mu}_{\nu}=((\Lambda^{T})^{-1})^{\ \ \ \mu}_{\nu}[/itex]
You left out a couple of etas from the equality [itex]\Lambda^{-1}=\eta^{-1}\Lambda^T\eta[/itex].

McLaren Rulez said:
I get that the first equality implies the second but therein lies my problem. If we have any four vector, the transformation matrix should be the same, correct? Now to transform the four vector [itex](x_{0}, x_{1}, x_{2}, x_{3})[/itex] to [itex](x'_{0}, x'_{1}, x'_{2}, x'_{3})[/itex] we are using one matrix but to transform the four vector [itex](\partial x_{0}, \partial x_{1}, \partial x_{2}, \partial x_{3})[/itex] to [itex](\partial x'_{0}, \partial x'_{1}, \partial x'_{2}, \partial x'_{3})[/itex], we are using another matrix,
[...]
why are we having two different matrices to transform different four vectors? I thought that the transformation matrix was universal for all four vectors.
[itex](\partial x_{0}, \partial x_{1}, \partial x_{2}, \partial x_{3})[/itex] isn't a four-vector. One of the standard definitions of "four-vector" would say that the reason is precisely that it doesn't transform the way a four-vector should.

Edit: When I wrote this, I didn't even notice that you were using the confusing notation [itex]\partial x_0[/itex]. I don't know if you mean [itex]\partial_0[/itex] or [itex]dx^0[/itex], but you should figure that out and use the appropriate notation. [itex](dx^0,dx^1,dx^2,dx^3)[/itex] actually does transform as a four-vector, while [itex](\partial_0,\partial_1,\partial_2,\partial_3)[/itex] transforms the "opposite" way (i.e. using [itex]\Lambda^{-1}[/itex] rather than [itex]\Lambda[/itex]).
 
  • #29
McLaren Rulez said:
My question basically is: why are we having two different matrices to transform different four vectors? I thought that the transformation matrix was universal for all four vectors.

You've just discovered something that's pretty important.

I think something that needs to be mentioned is that there are two "kinds" of 4-vector. Ones that transform like the coordinates, [itex] x' = \Lambda x [/itex]: in physics we call these contravariant vectors, or just vectors. We generally write them in index notation as a beast with an upper index like [itex] x^i[/itex]; in this notation we have [itex] x'{}^{i} = \Lambda^i{}_{j} x^j [/itex].

The second kind are like the derivatives and transform with the inverse matrix: [itex] p' = \Lambda^{-1} p[/itex]. In physics we call these covariant vectors, or co-vectors. We generally write them in index notation as something with a lower index like [itex]p_i[/itex]; in this notation we have [itex] p'_{i} = \left( \Lambda^{-1} \right)_i{}^{j} p_j [/itex].

When we have a metric [itex]\eta[/itex] we have an association of vectors and co-vectors: the vector [itex] x^i [/itex] is paired with the covector [itex] x_i = \eta_{ij}x^j[/itex], and using [itex]\eta^{-1}[/itex] we can pair a vector to each covector, [itex]x^i = \eta^{ij}x_j[/itex].

In Euclidean space the metric is trivial and we forget about the distinction between vectors and covectors, but in Minkowski space there is enough of a change that it is best to think of them as separate kinds of things entirely.

Aside: in the setting of geometry, what we call vectors are the tangent vectors to the manifold, while the co-vectors are called cotangent vectors or dual vectors.
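
(Here is a short NumPy sketch of this bookkeeping: a vector transforming with [itex]\Lambda[/itex], a covector transforming with the inverse, and their contraction coming out frame independent. The component values are arbitrary, and numpy is assumed.)

[code]
import numpy as np

beta = 0.6
gamma = 1/np.sqrt(1 - beta**2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
Lam = np.array([[ gamma, -gamma*beta, 0, 0],
                [-gamma*beta, gamma,  0, 0],
                [0, 0, 1, 0],
                [0, 0, 0, 1]])

x_up = np.array([2.0, 1.0, 0.5, -0.3])    # contravariant components x^i
p_low = np.array([1.0, 0.2, 0.0, 0.7])    # covariant components p_i

x_up_new = Lam @ x_up                     # vectors transform with Lambda
p_low_new = np.linalg.inv(Lam).T @ p_low  # covectors with the inverse (transposed
                                          # so the index placement comes out right)

# The contraction p_i x^i is frame independent, as a scalar must be:
print(np.isclose(p_low @ x_up, p_low_new @ x_up_new))   # True

# The metric pairs x^i with x_i = eta_{ij} x^j:
print(eta @ x_up)                                       # [ 2.  -1.  -0.5  0.3]
[/code]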
 
  • #30
gbert, I think that was my mistake: I didn't realize covariant and contravariant vectors transformed differently. Now things are making much more sense, though I probably need to work at it a bit more before I'm comfortable with it.

Thank you to everyone who helped so much. I'm very grateful to you all.
 
  • #31
Fredrik:
No, the first equality implies the second: [tex]\frac{\partial}{\partial x'^\nu} = \frac{\partial x^\mu}{\partial x'^\nu} \frac{\partial}{\partial x^\mu}=(\Lambda^{-1})^\mu{}_\nu\frac{\partial}{\partial x^\mu}=\Lambda_\nu{}^\mu\frac{\partial}{\partial x^\mu}[/tex]​
JDoolin:
Ahh, I think that is the Einstein Notation for [STRIKE]divergence[/STRIKE] that I was looking for.

https://www.physicsforums.com/showthread.php?t=511811 Post #8​

Fredrik:
I'd go with what Mentz told you in #2.​

My clumsy vocabulary. I meant Einstein Notation for gradient.

[tex]\nabla u = \begin{pmatrix}
\frac{1}{h_1} \frac{\partial }{\partial x_1}\\
\frac{1}{h_2} \frac{\partial }{\partial x_2}\\
\frac{1}{h_3} \frac{\partial }{\partial x_3}
\end{pmatrix}u[/tex]

I'll try to work it out in more detail in the other thread.

Edit: Actually, I can put the gist of the question here.

[tex]\frac{\partial x^\mu}{\partial x'^\nu}\cdot \frac{\partial }{\partial x^\mu} \overset ? = \begin{pmatrix} \frac{\partial ||\vec r||}{\partial ||\vec r\,'||} \frac{\partial }{\partial r}\\ \frac{\partial ||\vec r||}{\partial ||\vec \theta\,'||} \frac{\partial }{\partial \theta}\\ \frac{\partial ||\vec r||}{\partial ||\vec \phi\,'||} \frac{\partial }{\partial \phi} \end{pmatrix} =\begin{pmatrix} \frac{\partial }{\partial r}\\ \frac{1}{r} \frac{\partial }{\partial \theta}\\ \frac{1}{r \sin\theta} \frac{\partial }{\partial \phi} \end{pmatrix}[/tex]
 
  • #32
OK, that question doesn't really have anything to do with notation. If you just want a notation with indices for (the components of) [itex]\nabla u[/itex], the answer is of course [itex]\partial_i u[/itex], since [itex]\nabla u=e_i(\nabla u)_i=e_i\partial_i u[/itex]. You seem to want the [itex]\nabla[/itex] operator in spherical coordinates. That takes a bit more work. I'm sure this is covered in a lot of books, and probably in some forum posts as well, so I won't do that exercise here. Maybe someone else can link to a proof of this, e.g. at Google Books.
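
(In lieu of a link: the 1/h factors in post #31 can be extracted mechanically from the Jacobian. A minimal sympy sketch; for orthogonal coordinates, [itex]J^TJ[/itex] is diagonal with the squared scale factors on the diagonal.)

[code]
# sympy sketch (sympy assumed): scale factors for spherical coordinates.
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)

# Cartesian coordinates in terms of (r, theta, phi)
cart = sp.Matrix([r*sp.sin(th)*sp.cos(ph),
                  r*sp.sin(th)*sp.sin(ph),
                  r*sp.cos(th)])

J = cart.jacobian([r, th, ph])            # J = d(x, y, z) / d(r, theta, phi)
print(sp.simplify(J.T @ J))               # diag(1, r**2, r**2*sin(theta)**2)
# so h_r = 1, h_theta = r, h_phi = r*sin(theta), matching the gradient in #31
[/code]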
 
  • #33
Fredrik said:
OK, that question doesn't really have anything to do with notation. If you just want a notation with indices for (the components of) [itex]\nabla u[/itex], the answer is of course [itex]\partial_i u[/itex], since [itex]\nabla u=e_i(\nabla u)_i=e_i\partial_i u[/itex]. You seem to want the [itex]\nabla[/itex] operator in spherical coordinates. That takes a bit more work. I'm sure this is covered in a lot of books, and probably in some forum posts as well, so I won't do that exercise here. Maybe someone else can link to a proof of this, e.g. at Google Books.



Okay, you're right. That's a bad example. See if this is a better question.
I find in http://www.mathpages.com/rr/s6-06/6-06.htm the equation:

[tex]g=\begin{pmatrix} 1 & 0 & 0 & 0\\ 0 & -1 &0 &0 \\ 0 & 0 & -1 &0 \\ 0 & 0 & 0 & -1 \end{pmatrix} - \frac{2 m}{r}\begin{pmatrix} 1 &0 &0 &0 \\ 0& \kappa x x & \kappa x y &\kappa x z \\ 0& \kappa y x &\kappa y y &\kappa y z \\ 0& \kappa z x &\kappa z y &\kappa z z \end{pmatrix}[/tex]

where

[tex]\kappa = \frac{1}{r^2(1-2m/r)}[/tex]


I know that
[tex]g_{00}= \left (\frac{\partial \tau}{\partial t} \right )^2=1-\frac{2m}{r}[/tex]

...but can the other elements of the matrix be expressed in the form of two differentials multiplied together? Or more to the point, what is Einstein notation for g? Of course having finally asked such an unambiguous question, I must try looking it up (pause).

Okay, from http://en.wikipedia.org/wiki/Metric_tensor_(general_relativity)

[tex]g_{\bar \mu \bar \nu} = \frac{\partial x^\rho}{\partial x^{\bar \mu}}\frac{\partial x^\sigma}{\partial x^{\bar \nu}} g_{\rho\sigma} = \Lambda^\rho {}_{\bar \mu} \, \Lambda^\sigma {}_{\bar \nu} \, g_{\rho \sigma}[/tex]

...which is probably what I wanted to know. (Once I figure out how to decode the notation, of course)
 
  • #34
JDoolin said:
Okay, from http://en.wikipedia.org/wiki/Metric_tensor_(general_relativity)

[tex]g_{\bar \mu \bar \nu} = \frac{\partial x^\rho}{\partial x^{\bar \mu}}\frac{\partial x^\sigma}{\partial x^{\bar \nu}} g_{\rho\sigma} = \Lambda^\rho {}_{\bar \mu} \, \Lambda^\sigma {}_{\bar \nu} \, g_{\rho \sigma}[/tex]

...which is probably what I wanted to know. (Once I figure out how to decode the notation, of course)
Posts #3 and #5 in this thread might help. Note by the way that what you've got there is a matrix equation in component form: [tex]g'=\Lambda^T g\Lambda[/tex]
 
  • #35
Fredrik said:
Posts #3 and #5 in this thread might help. Note by the way that what you've got there is a matrix equation in component form: [tex]g'=\Lambda^T g\Lambda[/tex]

Thank you for giving me that extra bit of information, Fredrik. I can see now that if

[tex]\Lambda=\begin{pmatrix}
\frac{\partial t}{\partial \tau} &
\frac{\partial t}{\partial \bar x} &
\frac{\partial t}{\partial \bar y} &
\frac{\partial t}{\partial \bar z} \\

\frac{\partial x}{\partial \tau} &
\frac{\partial x}{\partial \bar x} &
\frac{\partial x}{\partial \bar y} &
\frac{\partial x}{\partial \bar z} \\
\frac{\partial y}{\partial \tau} &
\frac{\partial y}{\partial \bar x} &
\frac{\partial y}{\partial \bar y} &
\frac{\partial y}{\partial \bar z} \\
\frac{\partial z}{\partial \tau} &
\frac{\partial z}{\partial \bar x} &
\frac{\partial z}{\partial \bar y} &
\frac{\partial z}{\partial \bar z} \\
\end{pmatrix}[/tex]

then the equations:

[tex]g_{\bar \mu \bar \nu} = \frac{\partial x^\rho}{\partial x^{\bar \mu}}\frac{\partial x^\sigma}{\partial x^{\bar \nu}} g_{\rho\sigma} [/tex]

and
[tex]g'=\Lambda^T g\Lambda[/tex]

are equivalent.

This is kind of a EUREKA moment for me, because I've never seen the Lorentz transformation written this way. I have one little thing I want to mention: wouldn't this technically be the "inverse" Lorentz transformation? Because:

[tex]\begin{pmatrix}
dt\\
dx\\
dy\\
dz
\end{pmatrix} = \begin{pmatrix}
\frac{\partial t}{\partial \tau} &
\frac{\partial t}{\partial \bar x} &
\frac{\partial t}{\partial \bar y} &
\frac{\partial t}{\partial \bar z} \\
\frac{\partial x}{\partial \tau} &
\frac{\partial x}{\partial \bar x} &
\frac{\partial x}{\partial \bar y} &
\frac{\partial x}{\partial \bar z} \\
\frac{\partial y}{\partial \tau} &
\frac{\partial y}{\partial \bar x} &
\frac{\partial y}{\partial \bar y} &
\frac{\partial y}{\partial \bar z} \\
\frac{\partial z}{\partial \tau} &
\frac{\partial z}{\partial \bar x} &
\frac{\partial z}{\partial \bar y} &
\frac{\partial z}{\partial \bar z} \\
\end{pmatrix}
\begin{pmatrix}
d\tau\\
d \bar x\\
d \bar y\\
d \bar z
\end{pmatrix}[/tex]

...or could it be a problem with Wikipedia's equation? Maybe they need to flip their differentials...
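
(One way to settle which way the differentials go is to run the transformation law on a small example. A sympy sketch, with 2D Euclidean-to-polar coordinates standing in for the Lorentz case; here [itex]\Lambda[/itex] is read as the Jacobian [itex]\partial x^\rho/\partial x'^\mu[/itex], old over new, exactly as in the Wikipedia formula.)

[code]
# sympy sketch (sympy assumed): g' = Lambda^T g Lambda with Lambda the
# Jacobian of old (Cartesian) with respect to new (polar) coordinates.
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x = r*sp.cos(th)
y = r*sp.sin(th)

g = sp.eye(2)                                        # Euclidean metric in (x, y)
Lam = sp.Matrix([[sp.diff(x, r), sp.diff(x, th)],
                 [sp.diff(y, r), sp.diff(y, th)]])   # d x^rho / d x'^mu

print(sp.simplify(Lam.T @ g @ Lam))                  # Matrix([[1, 0], [0, r**2]])
# which is the correct polar-coordinate metric, so "old over new" is right:
# the formula pulls the metric back to the new coordinates.
[/code]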
 
