Tensor Confusion: Lambda & Partial Derivatives

In summary, the conversation discussed the equation \Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}} and its implications for the equation \Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}, as well as its relation to the inverse of \Lambda^{\nu}_{\hspace{3 mm} \mu} and the Kronecker delta. The conversation also explored index manipulation and what swapping the horizontal order of indices implies.
  • #1
barnflakes
If [itex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/itex]

does that mean [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}[/itex] ?
 
  • #2
Doesn't

[tex]\Lambda^{\mu}_{\hspace{3 mm}\nu} = \partial_{\nu}x'^{\mu} = \frac{\partial x'^{\mu}}{\partial x^{\nu}}[/tex]

mean that

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \partial^{\nu}x'_{\mu} = \frac{\partial x'_{\mu}}{\partial x_{\nu}}?[/tex]
 
  • #3
Yes George that's what I meant to write, sorry about that. Is that correct?

Does that also mean that [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \Lambda^{\nu}_{\hspace{3 mm} \mu} [/tex] ?
 
  • #4
barnflakes said:
Yes George that's what I meant to write, sorry about that. Is that correct?

I think so.
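A quick way to see it (a sketch, using the metric to raise and lower indices and the fact that the components of [itex]\eta[/itex] are constant):

[tex]\partial^{\nu} x'_{\mu} = \eta^{\nu\beta}\,\partial_{\beta}\left(\eta_{\mu\alpha}\, x'^{\alpha}\right) = \eta_{\mu\alpha}\,\eta^{\nu\beta}\,\partial_{\beta} x'^{\alpha} = \eta_{\mu\alpha}\,\eta^{\nu\beta}\,\Lambda^{\alpha}{}_{\beta} = \Lambda_{\mu}^{\hspace{3 mm}\nu}[/tex]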
barnflakes said:
Does that also mean that [tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \Lambda^{\nu}_{\hspace{3 mm} \mu} [/tex] ?

No.

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]
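As a concrete check (a sketch, taking the standard boost along x with speed [itex]\beta[/itex], for which [itex]\Lambda^{0}_{\hspace{3 mm}1} = \Lambda^{1}_{\hspace{3 mm}0} = -\gamma\beta[/itex], and the signature (-,+,+,+)):

[tex]\Lambda_{0}^{\hspace{3 mm}1} = \eta_{00}\,\eta^{11}\,\Lambda^{0}_{\hspace{3 mm}1} = (-1)(+1)(-\gamma\beta) = +\gamma\beta \neq \Lambda^{1}_{\hspace{3 mm}0}[/tex]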
 
  • #5
So [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = (\Lambda^{-1})^\nu{}_\mu[/itex]? I.e., [itex]\Lambda_{\mu}^{\hspace{3 mm}\nu}[/itex] is the inverse of [itex]\Lambda^{\nu}_{\hspace{3 mm} \mu}[/itex]?


So if I wanted to multiply two Lambdas together, it's only in certain cases that we get the Kronecker delta?

For instance: [tex] \Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\nu}^{\hspace{3 mm}\alpha}= \delta^\mu_\nu[/tex]

and [tex] \Lambda^{\mu}_{\hspace{3 mm} \alpha} \Lambda_{\mu}^{\hspace{3 mm}\nu}= \delta^\nu_\alpha[/tex]

Is that right?
 
  • #6
That's right. See this post for a little bit more.
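A quick numerical sanity check of those two contractions (just a sketch, assuming numpy and a sample boost along x with beta = 0.6):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric, signature (-,+,+,+)
eta_inv = np.linalg.inv(eta)           # eta^{mu nu}; numerically equal to eta

# Sample boost along x; L[mu, nu] holds Lambda^mu_nu
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# Lambda_mu^nu = eta_{mu alpha} eta^{beta nu} Lambda^alpha_beta (the formula from post #4)
L_lower_upper = np.einsum('ma,bn,ab->mn', eta, eta_inv, L)

# Lambda^mu_alpha Lambda_nu^alpha = delta^mu_nu
print(np.allclose(np.einsum('ma,na->mn', L, L_lower_upper), np.eye(4)))  # True

# Lambda^mu_alpha Lambda_mu^nu = delta^nu_alpha
print(np.allclose(np.einsum('ma,mn->na', L, L_lower_upper), np.eye(4)))  # True
[/code]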
 
  • #7
George Jones said:
I think so. No.

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

OK some more conceptual problems I'm having:

[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

But matrix multiplication is associative, so [tex] (\eta_{\mu \alpha} \eta^{\beta \nu}) \Lambda^\alpha{}_\beta = \eta_{\mu \alpha} (\eta^{\beta \nu} \Lambda^\alpha{}_\beta)[/tex], but surely [tex] (\eta_{\mu \alpha} \eta^{\beta \nu})[/tex] is equal to the identity matrix?
 
  • #8
In

[tex]
\eta_{\mu \alpha} \eta^{\beta \nu}
[/tex]

you basically take the tensor product of an n-dimensional covariant metric and a contravariant metric and end up with a type (2,2) tensor. Normally this is not represented by an n×n matrix (or you would have to adopt a convention in which you build an n²×n² matrix, 16×16 in four dimensions, by multiplying every element of the first matrix by the second, but that's not what is useful here).

If you took a contraction, then of course you could write things down in terms of simple matrix multiplication. But here you don't.
 
  • #9
barnflakes said:
...but surely [tex] (\eta_{\mu \alpha} \eta^{\beta \nu})[/tex] is equal to the identity matrix?
It's not. It's the product of one component of [itex]\eta[/itex] and one component of [itex]\eta^{-1}[/itex]. Recall that the definition of matrix multiplication is [itex](AB)^i_j=A^i_k B^k_j[/itex] and that the right-hand side actually means [itex]\sum_k A^i_k B^k_j[/itex]. There is no summation in [tex]\eta_{\mu \alpha} \eta^{\beta \nu}[/tex].
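To make the "no summation" point concrete, here's a small numpy sketch (a sketch only, assuming the (-,+,+,+) metric):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric eta_{mu nu}
eta_inv = np.linalg.inv(eta)           # inverse metric eta^{mu nu}

# No repeated index, so no sum: the result is a rank-4 array of products of components
outer = np.einsum('ma,bn->mabn', eta, eta_inv)
print(outer.shape)  # (4, 4, 4, 4)

# Only an actual contraction reduces to matrix multiplication,
# e.g. eta_{mu alpha} eta^{alpha nu} = delta_mu^nu
contracted = np.einsum('ma,an->mn', eta, eta_inv)
print(np.allclose(contracted, np.eye(4)))  # True
[/code]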

Did you understand my calculation of the components of [itex]\Lambda^{-1}[/itex] in the thread I linked to?
 
  • #10
Fredrik said:
It's not. It's the product of one component of [itex]\eta[/itex] and one component of [itex]\eta^{-1}[/itex]. Recall that the definition of matrix multiplication is [itex](AB)^i_j=A^i_k B^k_j[/itex] and that the right-hand side actually means [itex]\sum_k A^i_k B^k_j[/itex]. There is no summation in [tex]\eta_{\mu \alpha} \eta^{\beta \nu}[/tex].

And the other reason why it's not an identity matrix is because we're working in Minkowski space with signature (-,+,+,+), and therefore [itex]\eta[/itex]'s are not identity matrices.
 
  • #11
hamster143 said:
And the other reason why it's not an identity matrix is because we're working in Minkowski space with signature (-,+,+,+), and therefore [itex]\eta[/itex]'s are not identity matrices.
You're right that they're not, but the result would still be (the components of) an identity matrix if the indices had matched. See the post I linked to.
 
  • #12
George Jones said:
[tex]\Lambda_{\mu}^{\hspace{3 mm}\nu} = \eta_{\mu \alpha} \Lambda^{\alpha \nu} = \eta_{\mu \alpha} \eta^{\beta \nu} \Lambda^\alpha{}_\beta[/tex]

Could we write this in matrix notation as

[tex]\left [ \Lambda_{\mu}^{\enspace \nu} \right ] = \eta \left [ \Lambda^{\mu}_{\enspace \nu} \right ] \eta^{-1} = \eta \Lambda \eta = \left [ \Lambda^{\mu}_{\enspace \nu} \right ]^{-1}[/tex]

And am I right in thinking this equation only applies to boosts? The more general equation including boosts and rotations being:

[tex]\eta \Lambda^{T} \eta = \Lambda^{-1}[/tex]
 
  • #13
George's equations (one for each value of the indices) are just the components of a matrix equation that holds for all Lorentz transformations. See the post I linked to in #6.
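A numerical sketch of that claim (assuming numpy, a sample boost, a sample rotation, and their product; note [itex]\eta^{-1}[/itex] has the same entries as [itex]\eta[/itex] here, so [itex]\eta\Lambda^T\eta[/itex] and [itex]\eta^{-1}\Lambda^T\eta[/itex] coincide numerically):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # signature (-,+,+,+)

def inverse_from_eta(L):
    """Compute eta^{-1} Lambda^T eta, which should equal Lambda^{-1} for any Lorentz matrix."""
    return np.linalg.inv(eta) @ L.T @ eta

beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost = np.array([[gamma, -gamma * beta, 0, 0],
                  [-gamma * beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])

theta = np.pi / 6
rot = np.array([[1, 0, 0, 0],
                [0, np.cos(theta), -np.sin(theta), 0],
                [0, np.sin(theta),  np.cos(theta), 0],
                [0, 0, 0, 1]])

for L in (boost, rot, boost @ rot):
    print(np.allclose(inverse_from_eta(L), np.linalg.inv(L)))  # True in all three cases
[/code]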
 
  • #14
How's this for index juggling?

[tex]\Lambda^{\mu}_{\enspace\rho} \left ( \Lambda^{-1} \right )^{\rho}_{\enspace\nu} = \delta^{\mu}_{\nu}[/tex]

And substituting your equation for the components of [tex]\Lambda^{-1}[/tex], from post #2 of the thread you linked to:

[tex]\Lambda^{\mu}_{\enspace\rho} \eta^{\thinspace \rho\tau} \Lambda^{\sigma}_{\enspace\tau} \eta_{\sigma\nu} = \delta^{\mu}_{\nu}[/tex]

[tex]\Lambda^{\mu}_{\enspace\rho} \Lambda_{\nu}^{\enspace\rho} = \delta^{\mu}_{\nu}[/tex]

Or in matrix format:

[tex]\Lambda \eta^{-1} \Lambda^{T} \eta = I \Leftrightarrow \Lambda^{-1} = \eta^{-1} \Lambda^{T} \eta[/tex]

I suppose what this shows is that the rules of index manipulation imply the convention that when there's a pair of indices--one up, and one down--then swapping their horizontal order (moving the leftmost index to the right, and the rightmost index to the left) inverts a Lorentz transformation. Does swapping the horizontal order of indices indicate inversion in general, or does this only work for a Lorentz transformation?

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\nu}^{\enspace\mu} \right ][/tex]

And if so, since the indices are arbitrary:

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex] ?

In #18 of this thread https://www.physicsforums.com/showthread.php?t=353536&page=2 Haushofer concludes with a formula similar to George's. In fact, I think it's equivalent to George's, except that [tex]T[/tex] is used instead of [tex]\Lambda[/tex]. If I was going to try to write this in matrix notation, I'd write:

[tex]\left [ T^{\mu}_{\enspace\nu} \right ] = \eta^{-1} \left [ T_{\alpha}^{\enspace\beta} \right ]^{T} \eta[/tex]

Is that correct? Then if [tex]T[/tex] was a Lorentz transformation, I guess we'd know that [tex]\left [ T^{\mu}_{\enspace\nu} \right ][/tex] is the inverse of [tex]\left [ T_{\alpha}^{\enspace\beta} \right ][/tex]. But since not everything is a Lorentz transformation, I'm guessing maybe it's not true in general that

[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex]
 
  • #15
Rasalhague said:
How's this for index juggling?
It's all good.

Rasalhague said:
[tex]\Lambda \eta^{-1} \Lambda^{T} \eta = I \Leftrightarrow \Lambda^{-1} = \eta^{-1} \Lambda^{T} \eta[/tex]
That's right. I like to take [itex]\Lambda^T\eta\Lambda=\eta[/itex] as the definition of a Lorentz transformation. If we multiply this with [itex]\eta^{-1}[/itex] from the left, we get [itex]\eta^{-1}\Lambda^T\eta\Lambda=I[/itex], which implies that [itex]\Lambda^{-1}[/itex] is what you said.

To go from any of the nice and simple matrix equations to the corresponding result with lots of mostly pointless and annoying indices, you simply use the definition of matrix multiplication stated above, the summation convention, and the notational convention described in the other thread.
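As a quick illustration of that recipe (a sketch, using the convention that the entry of [itex]\Lambda[/itex] on row [itex]\mu[/itex], column [itex]\nu[/itex] is [itex]\Lambda^\mu{}_\nu[/itex], so the entry of [itex]\Lambda^T[/itex] on row [itex]\mu[/itex], column [itex]\rho[/itex] is [itex]\Lambda^\rho{}_\mu[/itex]): the [itex]\mu\nu[/itex] component of [itex]\Lambda^T\eta\Lambda=\eta[/itex] becomes

[tex]\Lambda^{\rho}{}_{\mu}\, \eta_{\rho\sigma}\, \Lambda^{\sigma}{}_{\nu} = \eta_{\mu\nu},[/tex]

with the sums over [itex]\rho[/itex] and [itex]\sigma[/itex] left implicit by the summation convention.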

Rasalhague said:
I suppose what this shows is that the rules of index manipulation imply the convention that when there's a pair of indices--one up, and one down--then swapping their horizontal order (moving the leftmost index to the right, and the rightmost index to the left) inverts a Lorentz transformation. Does swapping the horizontal order of indices indicate inversion in general, or does this only work for a Lorentz transformation?
Only for Lorentz transformations, because it follows from the formula for [itex]\Lambda^{-1}[/itex] that you found (which only holds when [itex]\Lambda[/itex] is a Lorentz transformation), and those other things I just mentioned.

Rasalhague said:
[tex]\left [ T^{\mu}_{\enspace\nu} \right ]^{-1} = \left [ T_{\alpha}^{\enspace\beta} \right ][/tex] ?
What you need to understand is that while [itex]T^\alpha{}_\beta[/itex] is defined as the component of T on row [itex]\alpha[/itex], column [itex]\beta[/itex], [itex]T_\alpha{}^\beta[/itex] is defined as the component on row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]\eta T\eta^{-1}[/itex]. (This is just the convention to use the metric to raise and lower indices). So your equation says that [itex]T^{-1}=\eta T\eta^{-1}[/itex], or maybe that [itex]T^{-1}=(\eta T\eta^{-1})^T=\eta^{-1}T^T\eta[/itex]. The second alternative makes more sense (since it would be true for Lorentz transformations), so that suggests that if we use that bracket notation to indicate "the matrix with these components", we should actually interpret it as "the transpose of the matrix with these components" when the indices are "first one downstairs, second one upstairs". (A better option is probably to avoid that notation when you can).
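A small numerical sketch of the two readings (assuming numpy and a sample Lorentz matrix that isn't symmetric, here a boost composed with a rotation):

[code]
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# A non-symmetric Lorentz transformation: boost along x composed with a rotation about z
beta, theta = 0.6, np.pi / 6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
boost = np.array([[gamma, -gamma * beta, 0, 0],
                  [-gamma * beta, gamma, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]])
rot = np.array([[1, 0, 0, 0],
                [0, np.cos(theta), -np.sin(theta), 0],
                [0, np.sin(theta),  np.cos(theta), 0],
                [0, 0, 0, 1]])
L = boost @ rot

# Array of components Lambda_alpha^beta = eta_{alpha gamma} Lambda^gamma_delta eta^{delta beta},
# stored with the first (lower) index as the row index, as described above
L_lower_upper = eta @ L @ eta_inv

print(np.allclose(L_lower_upper, np.linalg.inv(L).T))  # True: as a matrix it is (Lambda^{-1})^T
print(np.allclose(L_lower_upper, np.linalg.inv(L)))    # False: it is not Lambda^{-1} itself
[/code]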

Rasalhague said:
In #18 of this thread https://www.physicsforums.com/showthread.php?t=353536&page=2 Haushofer concludes with a formula similar to George's.
His formula is just an equivalent way to define what we mean by [itex]T_\alpha{}^\beta[/itex].
 
Last edited:
  • #16
Fredrik said:
It's all good.

Phew! Thanks, that's a relief to know.

Fredrik said:
What you need to understand is that while [itex]T^\alpha{}_\beta[/itex] is defined as the component of T on row [itex]\alpha[/itex], column [itex]\beta[/itex], [itex]T_\alpha{}^\beta[/itex] is defined as the component on row [itex]\alpha[/itex], column [itex]\beta[/itex] of [itex]\eta T\eta^{-1}[/itex]. (This is just the convention to use the metric to raise and lower indices).

Ah, another source of confusion... This differs from the convention explained by Ruslan Sharipov in his Quick Introduction to Tensor Analysis, which I'd assumed was the rule everyone followed:

"For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number."

I gather that some people follow a convention whereby upper indices are always written first, [tex]T^{\alpha}_{\enspace\beta}[/tex], or where an arbitrary type-(1,1) tensor is written [tex]T^{\alpha}_{\beta}[/tex] (this is what Sharipov does), and only the order of indices on the same level as each other is significant, whereas others use a convention whereby changing the horizontal order of a pair of indices on a type-(1,1) tensor does make a difference (indicating inversion of a Lorentz transformation, and I don't know what--if anything--it indicates more generally). So maybe Sharipov's rule shouldn't be applied to the system of index manipulation in which [tex]T^{\alpha}_{\enspace\beta}[/tex] doesn't necessarily equal [tex]T_{\beta}^{\enspace\alpha}[/tex].
 
  • #17
Rasalhague said:
I gather that some people follow a convention whereby upper indices are always written first, [tex]T^{\alpha}_{\enspace\beta}[/tex], or where an arbitrary type-(1,1) tensor is written [tex]T^{\alpha}_{\beta}[/tex] (this is what Sharipov does), and only the order of indices on the same level as each other is significant, whereas others use a convention whereby changing the horizontal order of a pair of indices on a type-(1,1) tensor does make a difference (indicating inversion of a Lorentz transformation, and I don't know what--if anything--it indicates more generally). So maybe Sharipov's rule shouldn't be applied to the system of index manipulation in which [tex]T^{\alpha}_{\enspace\beta}[/tex] doesn't necessarily equal [tex]T_{\beta}^{\enspace\alpha}[/tex].

In GR, if you raise and lower indices, then the ordering, including leaving appropriate spaces for upstairs and downstairs indices, matters. In SR, there are some tricks where you don't have to keep track of this, because of the fixed background and the restriction to Lorentz inertial frames, but I don't remember the rules off the top of my head.
 
  • #18
Rasalhague said:
Ah, another source of confusion... This differs from the convention explained by Ruslan Sharipov in his Quick Introduction to Tensor Analysis, which I'd assumed was the rule everyone followed:

"For any double indexed array with indices on the same level (both upper or both lower) the first index is a row number, while the second index is a column number. If indices are on different levels (one upper and one lower), then the upper index is a row number, while lower one is a column number."
I don't know what "everyone" is using, but his convention does make sense. It seems that if we use his convention, we can use the [] notation consistently regardless of whether the left or the right index is upstairs. Either way we have [tex]T_\alpha{}^\beta=\eta_{\alpha\gamma}\, T^\gamma{}_\delta\,\eta^{\delta\beta}[/tex]. The question is, do we want to interpret that as the components of [itex]\eta T\eta^{-1}[/itex] or as the components of the transpose of that?

Adding to what atyy said, [itex]T_\alpha{}^\beta[/itex] would be the result of having a tensor [itex]T:V\times V^*\rightarrow\mathbb R[/itex] act on basis vectors, and [itex]S^\alpha{}_\beta[/itex] would be the result of having a tensor [itex]S:V^*\times V\rightarrow\mathbb R[/itex] act on basis vectors. Here V is a vector space (usually the tangent space at some point of a manifold) and V* is its dual space. So the positions of the indices determine what type of tensor we're dealing with.
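Written out against a basis (a sketch, with [itex]\{e_\alpha\}[/itex] a basis of V and [itex]\{e^\alpha\}[/itex] the dual basis satisfying [itex]e^\alpha(e_\beta)=\delta^\alpha_\beta[/itex]):

[tex]T_{\alpha}{}^{\beta} = T(e_{\alpha}, e^{\beta}), \qquad S^{\alpha}{}_{\beta} = S(e^{\alpha}, e_{\beta}).[/tex]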

In SR, there's no reason to even think about tensors (at least not in a situation I can think of right now), so I would really prefer to just write components of matrices as [itex]A^\mu_\nu[/itex] or [itex]A_{\mu\nu}[/itex]. The notational convention we've been discussing in this thread just makes everything more complicated without having any significant benefits. It ensures that we never have to write [itex]^{-1}[/itex] or [itex]^T[/itex] on a Lorentz transformation matrix, but I think that's it.

I really don't get why so many (all?) authors choose to write equations like [tex]\Lambda^T\eta\Lambda=\eta[/tex] in component form.
 
  • #19
Sean Carroll, in his GR lecture notes, ch. 1, p. 10, breaking another of Sharipov's rules (that indices should match at the same height on opposite sides of an equation), writes:

We will [...] introduce a somewhat subtle notation by using the same symbol for both matrices [a Lorentz transformation and its inverse], just with primed and unprimed indices adjusted. That is,

[tex]\left(\Lambda^{-1} \right)^{\nu'}_{\enspace \mu} = \Lambda_{\nu'}^{\enspace \mu}[/tex]

or

[tex]\Lambda_{\nu'}^{\enspace\mu} \Lambda^{\sigma'}_{\enspace\mu} = \delta^{\sigma'}_{\nu'} \qquad \Lambda_{\nu'}^{\enspace\mu} \Lambda^{\nu'}_{\enspace\rho} = \delta^{\mu}_{\rho}[/tex]

(Note that Schutz uses a different convention, always arranging the two indices northwest/southeast; the important thing is where the primes go.)

http://preposterousuniverse.com/grnotes/

I haven't seen Schutz's First Course in General Relativity, so I don't know any more about that, but in Blandford and Thorne's Applications of Classical Physics, 1.7.2, where they introduce Lorentz transformations, they write

[tex]L^{\overline{\mu}}_{\enspace \alpha} L^{\alpha}_{\enspace \overline{\nu}} = \delta^{\overline{\mu}}_{\enspace \overline{\nu}} \qquad L^{\alpha}_{\enspace \overline{\mu}} L^{\overline{\nu}}_{\enspace \beta} = \delta^{\alpha}_{\enspace \beta} [/tex]

Notice the up/down placement of indices on the elements of the transformation matrices: the first index is always up, and the second is always down.

Perhaps this is similar to Schutz's notation. Is the role of the left-right ordering, in other people's notation, fulfilled here in Blandford and Thorne's notation by the position of the bar, or would left-right ordering still be needed for a more general treatment?

In other sources I've looked at, such as Bowen and Wang's Introduction to Vectors and Tensors, tensors are just said to be of type, or valency, (p,q), p-times contravariant and q-times covariant, requiring p up indices and q down indices:

[tex]T : V^{*}_{1} \times ... \times V^{*}_{p} \times V_{1} \times ... \times V_{q} \to \mathbb{R}[/tex]

...with up and down indices ordered separately. So apparently it's more complicated than I realized. Is there any way of explaining or hinting at why leaving spaces for up and down indices becomes important in GR to someone just starting out and battling with the basics of terminology, definitions and notational conventions?
 
  • #20
Rasalhague said:
In other sources I've looked at, such as Bowen and Wang's Introduction to Vectors and Tensors, tensors are just said to be of type, or valency, (p,q), p-times contravariant and q-times covariant, requiring p up indices and q down indices:

[tex]T : V^{*}_{1} \times ... \times V^{*}_{p} \times V_{1} \times ... \times V_{q} \to \mathbb{R}[/tex]

...with up and down indices ordered separately. So apparently it's more complicated than I realized. Is there any way of explaining or hinting at why leaving spaces for up and down indices becomes important in GR to someone just starting out and battling with the basics of terminology, definitions and notational conventions?

A tensor eats a bunch of one forms and tangent vectors and spits out a number. As you can see from the above definition, it matters which one forms and vectors go into which mouth of a tensor, so the order of the up and down indices matter. It is a good idea to keep in mind that one forms and vectors are separate objects.

However, when there is a metric tensor, each one form can be associated with a vector as follows. A one form eats a vector and spits out a number. The metric tensor eats two vectors and spits out a number. So if a metric tensor eats one vector, it's still hungry and can eat another vector, so the half-full metric tensor is a one form. This is defined as the one form associated with the vector that the half-full metric tensor has just eaten, and is denoted by the same symbol as that vector, but with its index lowered or raised (I don't remember which one). So the combined requirement of keeping track of which mouth of a tensor eats what, and the ability to raise or lower indices means that we have to keep track of the combined order of up and down indices.
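In symbols (a sketch, using the (-,+,+,+) metric of this thread): the one-form associated with a vector [itex]V[/itex] is the "half-full" metric tensor [itex]g(V,\cdot\,)[/itex], and its index ends up lowered,

[tex]V_{\beta} = g_{\beta\alpha} V^{\alpha}, \qquad \text{so in Minkowski coordinates } (V_0, V_1, V_2, V_3) = (-V^0, V^1, V^2, V^3).[/tex]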
 
  • #21
Aha, so, if I've understood this, it's actually the existence of a metric tensor, which is symmetric and lets us raise and lower indices like this

[tex]g_{\alpha \beta} V^{\alpha} U^{\beta} = V_{\beta} U^{\beta} = V^{\alpha} U_{\alpha} = g_{\beta \alpha} V^{\alpha} U^{\beta},[/tex]

that makes it necessary to keep track of the order of upper indices relative to lower indices because, for example

[tex]g_{\alpha \beta} g_{\gamma \delta} T^{\beta}{}_{\epsilon}{}^{\gamma}{}_{\zeta} = T_{\alpha \epsilon \delta \zeta}[/tex]

won't in general be equal to

[tex]g_{\alpha \beta} g_{\gamma \delta} T^{\beta \gamma}_{\enspace \enspace \epsilon \zeta} = T_{\alpha \delta \epsilon \zeta}.[/tex]
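A numerical sketch of that last point (a sketch only, assuming numpy and a generic array of type-(2,2) components): lowering the upper indices in the slots where they actually sit, versus lowering them as if both upper indices came first, gives arrays that only agree up to a swap of two slots, so reading them with the same index order gives different numbers.

[code]
import numpy as np

rng = np.random.default_rng(0)
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # metric, signature (-,+,+,+)

# Generic components T^beta_eps^gamma_zeta, stored in slot order (up, down, up, down)
T = rng.standard_normal((4, 4, 4, 4))

# g_{alpha beta} g_{gamma delta} T^beta_eps^gamma_zeta = T_{alpha eps delta zeta}
T_low_1 = np.einsum('ab,gd,begz->aedz', g, g, T)

# Pretend the horizontal order doesn't matter and move both upper slots to the front,
# then lower them there: g_{alpha beta} g_{gamma delta} T^{beta gamma}_{eps zeta} = T_{alpha delta eps zeta}
T_as_up_up = T.transpose(0, 2, 1, 3)
T_low_2 = np.einsum('ab,gd,bgez->adez', g, g, T_as_up_up)

print(np.allclose(T_low_1, T_low_2))                        # False for a generic T
print(np.allclose(T_low_1, T_low_2.transpose(0, 2, 1, 3)))  # True: they differ by a slot swap
[/code]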
 

