Tensor Confusion: Lambda & Partial Derivatives

  • Context: Graduate
  • Thread starter: barnflakes
  • Tags: Confusion, Tensor
Discussion Overview

The discussion revolves around the properties and relationships of tensors, specifically the lambda tensor (\Lambda) and its derivatives, in the context of Lorentz transformations and Minkowski space. Participants explore various mathematical expressions, index manipulations, and the implications of these relationships in theoretical physics.

Discussion Character

  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • Some participants question whether \Lambda^{\mu}_{\hspace{3 mm}\nu} being equal to \partial_{\nu}x'^{\mu} implies that \Lambda_{\mu}^{\hspace{3 mm}\nu} equals \partial^{\nu}x'_{\mu}.
  • There is a discussion about whether \Lambda_{\mu}^{\hspace{3 mm}\nu} equals \Lambda^{\nu}_{\hspace{3 mm}\mu}, with conflicting responses.
  • One participant suggests that \Lambda_{\mu}^{\hspace{3 mm}\nu} is the inverse of \Lambda^{\nu}_{\hspace{3 mm}\mu}, leading to further exploration of conditions under which products of lambda tensors yield the Kronecker delta.
  • Concerns are raised about the interpretation of matrix products involving the metric tensor \eta and whether certain expressions can be simplified to identity matrices.
  • Some participants discuss the implications of working in Minkowski space and how it affects the properties of the metric tensor.
  • There is an exploration of index manipulation and its implications for the inversion of Lorentz transformations, with questions about the generality of these rules beyond Lorentz transformations.

Areas of Agreement / Disagreement

Participants express differing views on the relationships between the various tensor components and their implications, indicating that multiple competing views remain without a clear consensus.

Contextual Notes

Participants note that the definitions and properties discussed depend on the context of Minkowski space and the specific conventions used for tensor notation and matrix multiplication. There are unresolved assumptions regarding the conditions under which certain identities hold.

barnflakes
If [itex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/itex]

does that mean [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}[/itex] ?
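A quick numerical spot-check supports the affirmative answer. The following NumPy sketch is an editorial illustration, not part of the thread; the boost matrix and sample point are assumptions. It verifies that for a linear map [itex]x'^{\mu} = \Lambda^{\mu}{}_{\nu}x^{\nu}[/itex], the Jacobian of the lowered components [itex]\partial x'_{\mu}/\partial x_{\nu}[/itex] equals [itex](\eta\Lambda\eta^{-1})_{\mu}{}^{\nu}[/itex], i.e. the matrix with both indices shifted.

```python
import numpy as np

# Illustrative check (not from the thread): an x-boost with rapidity phi.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])       # Minkowski metric, signature (-,+,+,+)
eta_inv = np.linalg.inv(eta)
phi = 0.3
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = np.sinh(phi)           # Lambda^mu_nu as a matrix

def xprime_low(x_low):
    """Map covariant components x_nu to x'_mu: raise index, boost, lower index."""
    return eta @ (L @ (eta_inv @ x_low))

# Finite-difference Jacobian d x'_mu / d x_nu; the map is linear, so this is
# exact up to floating-point error.
x0 = np.array([1.0, 2.0, -1.0, 0.5])
h = 1e-6
Jnum = np.array([[(xprime_low(x0 + h * np.eye(4)[n]) - xprime_low(x0))[m] / h
                  for n in range(4)] for m in range(4)])
assert np.allclose(Jnum, eta @ L @ eta_inv, atol=1e-4)
```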
 
Doesn't

[tex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/tex]

mean that

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}?[/tex]
 
Yes George that's what I meant to write, sorry about that. Is that correct?

Does that also mean that [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \Lambda^{\nu}_{\hspace{3 mm} \mu}?[/tex]
 
barnflakes said:
Yes George that's what I meant to write, sorry about that. Is that correct?

I think so.
barnflakes said:
Does that also mean that [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \Lambda^{\nu}_{\hspace{3 mm} \mu}?[/tex]
No.

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]
 
So [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = (\Lambda^{-1})^\nu{}_\mu[/itex]? I.e. [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu}[/itex] is the inverse of [itex]\Lambda^{\nu}_{\hspace{3 mm} \mu}[/itex]?

So if I wanted to multiply two lambdas together, it's only in certain cases that we get the Kronecker delta?

For instance: [tex]\Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\nu}^{\hspace{3 mm}\alpha}= \delta^\mu_\nu[/tex]

and [tex]\Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\mu}^{\hspace{3 mm}\nu}= \delta^\nu_\alpha[/tex]

Is that right?
 
That's right. See this post for a little bit more.
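The Kronecker-delta identities discussed above can be verified numerically. This NumPy sketch is an added illustration (the boost matrix is an assumption, not from the thread): it builds [itex]\Lambda_{\mu}{}^{\nu} = \eta_{\mu\alpha}\eta^{\beta\nu}\Lambda^{\alpha}{}_{\beta}[/itex] and contracts it against [itex]\Lambda^{\mu}{}_{\nu}[/itex] both ways.

```python
import numpy as np

# Illustrative check with an x-boost of rapidity phi (assumed example).
phi = 0.7
eta = np.diag([-1.0, 1.0, 1.0, 1.0])          # Minkowski metric, signature (-,+,+,+)
eta_inv = np.linalg.inv(eta)
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(phi)
L[0, 1] = L[1, 0] = np.sinh(phi)              # Lambda^mu_nu as a matrix

# Lambda_mu^nu = eta_{mu alpha} eta^{beta nu} Lambda^alpha_beta
L_low_up = np.einsum('ma,bn,ab->mn', eta, eta_inv, L)

# Lambda^mu_alpha Lambda_nu^alpha = delta^mu_nu
check1 = np.einsum('ma,na->mn', L, L_low_up)
assert np.allclose(check1, np.eye(4))

# Lambda^mu_alpha Lambda_mu^nu = delta^nu_alpha
check2 = np.einsum('ma,mn->na', L, L_low_up)
assert np.allclose(check2, np.eye(4))
```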
 
George Jones said:
I think so. No.

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

OK some more conceptual problems I'm having:

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

But matrix multiplication is associative, so [tex](\eta_{\mu \alpha} \eta^{\beta \nu}) \Lambda^\alpha{}_\beta = \eta_{\mu \alpha} (\eta^{\beta \nu} \Lambda^\alpha{}_\beta)[/tex] but surely [tex](\eta_{\mu \alpha} \eta^{\beta \nu})[/tex] is equal to the identity matrix?
 
In

[tex] \eta_{\mu \alpha} \eta^{\beta \nu}[/tex]

you basically take the tensor product of an n-dimensional covariant metric and a contravariant metric, and you end up with a type (2,2) tensor. Normally this is not represented by an n×n matrix (or you have to take a convention in which you build a 16×16 matrix by multiplying every element of the first matrix by the second matrix, but that's not what is useful here).

If you would take a contraction then of course you could write things down in terms of simple matrix multiplication. But you don't.
 
barnflakes said:
...but surely [tex](\eta_{\mu \alpha} \eta^{\beta \nu})[/tex] is equal to the identity matrix?
It's not. It's the product of one component of [itex]\eta[/itex] and one component of [itex]\eta^{-1}[/itex]. Recall that the definition of matrix multiplication is [itex](AB)^i_j=A^i_k B^k_j[/itex] and that the right-hand side actually means [itex]\sum_k A^i_k B^k_j[/itex]. There is no summation in [tex]\eta_{\mu \alpha} \eta^{\beta \nu}[/tex].

Did you understand my calculation of the components of [itex]\Lambda^{-1}[/itex] in the thread I linked to?
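Fredrik's point, that without a contraction [itex]\eta_{\mu\alpha}\eta^{\beta\nu}[/itex] is just a product of two components, can be made concrete in NumPy. This sketch is an added illustration, not from the thread: the uncontracted product is a four-index array, and only contracting a pair of indices produces the identity matrix.

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# The uncontracted product eta_{mu alpha} eta^{beta nu} is a type (2,2)
# object: a 4x4x4x4 array, not a matrix product.
outer = np.einsum('ma,bn->mabn', eta, eta_inv)
assert outer.shape == (4, 4, 4, 4)

# Only when beta = alpha is summed over do we recover the identity:
contracted = np.einsum('ma,an->mn', eta, eta_inv)
assert np.allclose(contracted, np.eye(4))
```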
 
  • #10
Fredrik said:
It's not. It's the product of one component of [itex]\eta[/itex] and one component of [itex]\eta^{-1}[/itex]. Recall that the definition of matrix multiplication is [itex](AB)^i_j=A^i_k B^k_j[/itex] and that the right-hand side actually means [itex]\sum_k A^i_k B^k_j[/itex]. There is no summation in [tex]\eta_{\mu \alpha} \eta^{\beta \nu}[/tex].

And the other reason why it's not an identity matrix is because we're working in Minkowski space with signature (-,+,+,+), and therefore [itex]\eta[/itex]'s are not identity matrices.
 
  • #11
hamster143 said:
And the other reason why it's not an identity matrix is because we're working in Minkowski space with signature (-,+,+,+), and therefore [itex]\eta[/itex]'s are not identity matrices.
You're right that they're not, but the result would still be (the components of) an identity matrix if the indices had matched. See the post I linked to.
 
  • #12
George Jones said:
[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

Could we write this in matrix notation as

[tex]\left [ \Lambda_{\mu}^{\enspace \nu} \right ] = \eta \left [ \Lambda^{\mu}_{\enspace \nu} \right ] \eta^{-1} = \eta \Lambda \eta = \left [ \Lambda^{\mu}_{\enspace \nu} \right ]^{-1}[/tex]

And am I right in thinking this equation only applies to boosts? The more general equation including boosts and rotations being:

[tex]\eta \Lambda^{T} \eta = \Lambda^{-1}[/tex]
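The distinction between the two formulas can be spot-checked numerically. The sketch below is an added illustration (the rotation matrix is an assumption, not from the thread): for a pure spatial rotation, [itex]\eta\Lambda\eta[/itex] is not the inverse, but the general formula [itex]\eta\Lambda^T\eta = \Lambda^{-1}[/itex] still holds. (With signature (-,+,+,+), [itex]\eta^{-1}[/itex] and [itex]\eta[/itex] are numerically equal, which is why [itex]\eta[/itex] appears on both sides.)

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # note: numerically eta^{-1} == eta here
th = 0.4
R = np.eye(4)
R[1, 1] = R[2, 2] = np.cos(th)
R[1, 2] = -np.sin(th)
R[2, 1] = np.sin(th)                     # rotation in the x-y plane

# Without the transpose the formula fails for a rotation...
assert not np.allclose(eta @ R @ eta, np.linalg.inv(R))
# ...but the general formula eta Lambda^T eta = Lambda^{-1} holds:
assert np.allclose(eta @ R.T @ eta, np.linalg.inv(R))
```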
 
  • #13
George's equations (one for each value of the indices) are just the components of a matrix equation that holds for all Lorentz transformations. See the post I linked to in #6.
 
  • #14
How's this for index juggling?

[tex]\Lambda^{\mu}_{\enspace\rho} \left ( \Lambda^{-1} \right )^{\rho}_{\enspace\nu} = \delta^{\mu}_{\nu}[/tex]

And substituting your equation for the components of [tex]\Lambda^{-1}[/tex], from post #2 of the thread you linked to:

[tex]\Lambda^{\mu}_{\enspace\rho} \eta^{\thinspace \rho\tau} \Lambda^{\sigma}_{\enspace\tau} \eta_{\sigma\nu} = \delta^{\mu}_{\nu}[/tex]

[tex]\Lambda^{\mu}_{\enspace\rho} \Lambda_{\nu}^{\enspace\rho} = \delta^{\mu}_{\nu}[/tex]

Or in matrix format:

[tex]\Lambda \eta^{-1} \Lambda^{T} \eta = I \Leftrightarrow \Lambda^{-1} = \eta^{-1} \Lambda^{T} \eta[/tex]

I suppose what this shows is that the rules of index manipulation imply the convention that when there's a pair of indices--one up, and one down--then swapping their horizontal order (moving the leftmost index to the right, and the rightmost index to the left) inverts a Lorentz transformation. Does swapping the horizontal order of indices indicate inversion in general, or does this only work for a Lorentz transformation?

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\nu}^{\enspace\mu} \right ][/tex]

And if so, since the indices are arbitrary:

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex] ?

In #18 of this thread https://www.physicsforums.com/showthread.php?t=353536&page=2 Haushofer concludes with a formula similar to George's. In fact, I think it's equivalent to George's, except that [tex]T[/tex] is used instead of [tex]\Lambda[/tex]. If I was going to try to write this in matrix notation, I'd write:

[tex]\left [ T^{\mu}_{\enspace\nu} \right ] = \eta^{-1} \left [ T_{\alpha}^{\enspace\beta} \right ]^{T} \eta[/tex]

Is that correct? Then if [tex]T[/tex] was a Lorentz transformation, I guess we'd know that [tex]\left [ T^{\mu}_{\enspace\nu} \right ][/tex] is the inverse of [tex]\left [ T_{\alpha}^{\enspace\beta} \right ][/tex]. But since not everything is a Lorentz transformation, I'm guessing maybe it's not true in general that

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex]
 
  • #15
Rasalhague said:
How's this for index juggling?
It's all good.

Rasalhague said:
[tex]\Lambda \eta^{-1} \Lambda^{T} \eta = I \Leftrightarrow \Lambda^{-1} = \eta^{-1} \Lambda^{T} \eta[/tex]
That's right. I like to take [itex]\Lambda^T\eta\Lambda=\eta[/itex] as the definition of a Lorentz transformation. If we multiply this with [itex]\eta^{-1}[/itex] from the left, we get [itex]\eta^{-1}\Lambda^T\eta\Lambda=I[/itex], which implies that [itex]\Lambda^{-1}[/itex] is what you said.

To go from any of the nice and simple matrix equations to the corresponding result with lots of mostly pointless and annoying indices, you simply use the definition of matrix multiplication stated above, the summation convention, and the notational convention described in the other thread.

Rasalhague said:
I suppose what this shows is that the rules of index manipulation imply the convention that when there's a pair of indices--one up, and one down--then swapping their horizontal order (moving the leftmost index to the right, and the rightmost index to the left) inverts a Lorentz transformation. Does swapping the horizontal order of indices indicate inversion in general, or does this only work for a Lorentz transformation?
Only for Lorentz transformations, because it follows from the formula for [itex]\Lambda^{-1}[/itex] that you found (which only holds when [itex]\Lambda[/itex] is a Lorentz transformation), and those other things I just mentioned.

Rasalhague said:
[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex] ?
What you need to understand is that while [itex]T^\alpha{}_\beta[/itex] is defined as the component of T on row [itex]\alpha[/itex], column [itex]\beta[/itex], [itex]T_\alpha{}^\beta[/itex] is defined as the component on row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]\eta T\eta^{-1}[/itex]. (This is just the convention to use the metric to raise and lower indices). So your equation says that [itex]T^{-1}=\eta T\eta^{-1}[/itex], or maybe that [itex]T^{-1}=(\eta T\eta^{-1})^T=\eta^{-1}T^T\eta[/itex]. The second alternative makes more sense (since it would be true for Lorentz transformations), so that suggests that if we use that bracket notation to indicate "the matrix with these components", we should actually interpret it as "the transpose of the matrix with these components" when the indices are "first one downstairs, second one upstairs". (A better option is probably to avoid that notation when you can).

Rasalhague said:
In #18 of this thread https://www.physicsforums.com/showthread.php?t=353536&page=2 Haushofer concludes with a formula similar to George's.
His formula is just an equivalent way to define what we mean by [itex]T_\alpha{}^\beta[/itex].
 
  • #16
Fredrik said:
It's all good.

Phew! Thanks, that's a relief to know.

Fredrik said:
What you need to understand is that while [itex]T^\alpha{}_\beta[/itex] is defined as the component of T on row [itex]\alpha[/itex], column [itex]\beta[/itex], [itex]T_\alpha{}^\beta[/itex] is defined as the component on row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]\eta T\eta^{-1}[/itex]. (This is just the convention to use the metric to raise and lower indices).

Ah, another source of confusion... This differs from the convention explained by Ruslan Sharipov in his Quick Introduction to Tensor Analysis, which I'd assumed was the rule everyone followed:

"For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number."

I gather that some people follow a convention whereby upper indices are always written first, [tex]T^{\alpha}_{\enspace\beta}[/tex], or where an arbitrary type-(1,1) tensor is written [tex]T^{\alpha}_{\beta}[/tex] (this is what Sharipov does), and only the order of indices on the same level as each other is significant, whereas others use a convention whereby changing the horizontal order of a pair of indices on a type-(1,1) tensor does make a difference (indicating inversion of a Lorentz transformation, and I don't know what--if anything--it indicates more generally). So maybe Sharipov's rule shouldn't be applied to the system of index manipulation in which [tex]T^{\alpha}_{\enspace\beta}[/tex] doesn't necessarily equal [tex]T_{\beta}^{\enspace\alpha}[/tex].
 
  • #17
Rasalhague said:
I gather that some people follow a convention whereby upper indices are always written first, [tex]T^{\alpha}_{\enspace\beta}[/tex], or where an arbitrary type-(1,1) tensor is written [tex]T^{\alpha}_{\beta}[/tex] (this is what Sharipov does), and only the order of indices on the same level as each other is significant, whereas others use a convention whereby changing the horizontal order of a pair of indices on a type-(1,1) tensor does make a difference (indicating inversion of a Lorentz transformation, and I don't know what--if anything--it indicates more generally). So maybe Sharipov's rule shouldn't be applied to the system of index manipulation in which [tex]T^{\alpha}_{\enspace\beta}[/tex] doesn't necessarily equal [tex]T_{\beta}^{\enspace\alpha}[/tex].

In GR, if you raise and lower indices, then the ordering, including leaving appropriate spaces for upstairs and downstairs indices matters. In SR, there are some tricks where you don't have to keep track of this, because of the fixed background, and restriction to Lorentz inertial frames, but I don't remember the rules off the top of my head.
 
  • #18
Rasalhague said:
Ah, another source of confusion... This differs from the convention explained by Ruslan Sharipov in his Quick Introduction to Tensor Analysis, which I'd assumed was the rule everyone followed:

"For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number."
I don't know what "everyone" is using, but his convention does make sense. It seems that if we use his convention, we can use the [] notation consistently regardless of whether the left or the right index is upstairs. Either way we have [tex]T_\alpha{}^\beta=\eta_{\alpha\gamma} T^\gamma{}_\delta \eta^{\delta\beta}[/tex]. The question is, do we want to interpret that as the components of [itex]\eta T\eta^{-1}[/itex] or as the components of the transpose of that?

Adding to what atyy said, [itex]T_\alpha{}^\beta[/itex] would be the result of having a tensor [itex]T:V\times V^*\rightarrow\mathbb R[/itex] act on basis vectors, and [itex]S^\alpha{}_\beta[/itex] would be the result of having a tensor [itex]S:V^*\times V\rightarrow\mathbb R[/itex] act on basis vectors. Here V is a vector space (usually the tangent space at some point of a manifold) and V* is its dual space. So the positions of the indices determine what type of tensor we're dealing with.

In SR, there's no reason to even think about tensors (at least not in a situation I can think of right now), so I would really prefer to just write components of matrices as [itex]A^\mu{}_\nu[/itex] or [itex]A_{\mu\nu}[/itex]. The notational convention we've been discussing in this thread just makes everything more complicated without having any significant benefits. It ensures that we never have to write [itex]^{-1}[/itex] or [itex]^T[/itex] on a Lorentz transformation matrix, but I think that's it.

I really don't get why so many (all?) authors choose to write equations like [itex]\Lambda^T\eta\Lambda=\eta[/itex] in component form.
 
  • #19
Sean Carroll, in his GR lecture notes, ch. 1, p. 10, writes, breaking another of Sharipov's rules (that indices should match at the same height on opposite sides of an equation),

We will [...] introduce a somewhat subtle notation by using the same symbol for both matrices [a Lorentz transformation and its inverse], just with primed and unprimed indices adjusted. That is,

[tex]\left(\Lambda^{-1} \right)^{\nu'}_{\enspace \mu} = \Lambda_{\nu'}^{\enspace \mu}[/tex]

or

[tex]\Lambda_{\nu'}^{\enspace\mu} \Lambda^{\sigma'}_{\enspace\mu} = \delta^{\sigma'}_{\nu'} \qquad \Lambda_{\nu'}^{\enspace\mu} \Lambda^{\nu'}_{\enspace\rho} = \delta^{\mu}_{\rho}[/tex]

(Note that Schutz uses a different convention, always arranging the two indices northwest/southeast; the important thing is where the primes go.)

http://preposterousuniverse.com/grnotes/

I haven't seen Schutz's First Course in General Relativity, so I don't know any more about that, but in Blandford and Thorne's Applications of Classical Physics, 1.7.2, where they introduce Lorentz transformations, they write

[tex]L^{\overline{\mu}}_{\enspace \alpha} L^{\alpha}_{\enspace \overline{\nu}} = \delta^{\overline{\mu}}_{\enspace \overline{\nu}} \qquad L^{\alpha}_{\enspace \overline{\mu}} L^{\overline{\nu}}_{\enspace \beta} = \delta^{\alpha}_{\enspace \beta}[/tex]

Notice the up/down placement of indices on the elements of the transformation matrices: the first index is always up, and the second is always down.

Perhaps this is similar to Schutz's notation. Is the role of the left-right ordering, in other people's notation, fulfilled here in Blandford and Thorne's notation by the position of the bar, or would left-right ordering still be needed for a more general treatment?

In other sources I've looked at, such as Bowen and Wang's Introduction to Vectors and Tensors, tensors are just said to be of type, or valency, (p,q), p-times contravariant and q-times covariant, requiring p up indices and q down indices:

[tex]T : V^{*}_{1} \times ... \times V^{*}_{p} \times V_{1} \times ... \times V_{q} \to \mathbb{R}[/tex]

...with up and down indices ordered separately. So apparently it's more complicated than I realized. Is there any way of explaining or hinting at why leaving spaces for up and down indices becomes important in GR to someone just starting out and battling with the basics of terminology, definitions and notational conventions?
 
  • #20
Rasalhague said:
In other sources I've looked at, such as Bowen and Wang's Introduction to Vectors and Tensors, tensors are just said to be of type, or valency, (p,q), p-times contravariant and q-times covariant, requiring p up indices and q down indices:

[tex]T : V^{*}_{1} \times ... \times V^{*}_{p} \times V_{1} \times ... \times V_{q} \to \mathbb{R}[/tex]

...with up and down indices ordered separately. So apparently it's more complicated than I realized. Is there any way of explaining or hinting at why leaving spaces for up and down indices becomes important in GR to someone just starting out and battling with the basics of terminology, definitions and notational conventions?

A tensor eats a bunch of one forms and tangent vectors and spits out a number. As you can see from the above definition, it matters which one forms and vectors go into which mouth of a tensor, so the order of the up and down indices matter. It is a good idea to keep in mind that one forms and vectors are separate objects.

However, when there is a metric tensor, each one form can be associated with a vector as follows. A one form eats a vector and spits out a number. The metric tensor eats two vectors and spits out a number. So if a metric tensor eats one vector, it's still hungry and can eat another vector, so the half-full metric tensor is a one form. This is defined as the one form associated with the vector that the half-full metric tensor has just eaten, and is denoted by the same symbol as that vector, but with its index lowered or raised (I don't remember which one). So the combined requirement of keeping track of which mouth of a tensor eats what, and the ability to raise or lower indices means that we have to keep track of the combined order of up and down indices.
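The "half-full metric tensor" picture above translates directly into a computation: lowering an index. The NumPy sketch below is an added illustration (the sample vectors are assumptions, not from the post): feeding the metric one vector leaves a one-form, and that one-form acting on a second vector reproduces [itex]\eta_{\mu\nu}v^\mu u^\nu[/itex].

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v = np.array([2.0, 1.0, 0.0, 3.0])       # a vector v^mu (assumed sample values)

# "Feeding" the metric the vector v lowers its index: v_mu = eta_{mu nu} v^nu.
v_low = eta @ v

# The half-full metric (the one-form v_mu) eating another vector u gives the
# same number as the metric eating both vectors at once:
u = np.array([1.0, -1.0, 2.0, 0.5])
assert np.isclose(v_low @ u, np.einsum('mn,m,n->', eta, v, u))
```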
 
  • #21
Aha, so, if I've understood this, it's actually the existence of a metric tensor, which is symmetric and lets us raise and lower indices like this

[tex]g_{\alpha \beta} V^{\alpha} U^{\beta} = V_{\beta} U^{\beta} = V^{\alpha} U_{\alpha} = g_{\beta \alpha} V^{\alpha} U^{\beta},[/tex]

that makes it necessary to keep track of the order of upper indices relative to lower indices because, for example

[tex]g_{\alpha \beta} g_{\gamma \delta} T^{\beta}{}_{\epsilon}{}^{\gamma}{}_{\zeta} = T_{\alpha \epsilon \delta \zeta}[/tex]

won't in general be equal to

[tex]g_{\alpha \beta} g_{\gamma \delta} T^{\beta \gamma}{}_{\epsilon \zeta} = T_{\alpha \delta \epsilon \zeta}.[/tex]
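That slot-order point can be seen numerically. The sketch below is an added illustration with an assumed random tensor, not from the post: after lowering all indices, reading the slots as (α, ε, δ, ζ) versus (α, δ, ε, ζ) amounts to transposing two slots, and a generic tensor is not symmetric under that swap, so the horizontal order of indices genuinely matters.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
T = rng.normal(size=(4, 4, 4, 4))        # generic components T^beta_eps^gamma_zeta

# Lower beta -> alpha and gamma -> delta, keeping the slot order (alpha, eps, delta, zeta):
T_aedz = np.einsum('ab,dc,becz->aedz', eta, eta, T)

# The same fully lowered components with the middle two slots swapped,
# i.e. read in the order (alpha, delta, eps, zeta):
T_adez = T_aedz.transpose(0, 2, 1, 3)

# For a generic (non-symmetric) tensor these differ:
assert not np.allclose(T_aedz, T_adez)
```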
 
