Raising/Lowering indices and matrix multiplication

In summary, the thread distinguishes between [tex]T^{\mu}{}_{\nu}[/tex] and [tex]T_{\mu}{}^{\nu}[/tex], which are obtained from [tex]T^{\mu\nu}[/tex] by lowering different indices with the metric tensor. It examines how the component contractions correspond to the matrix products gT and Tg, how to lower each index of T using matrices, and why the order in which the metric is applied matters.
  • #1
homology
Please read the following carefully. The point of the following is to distinguish between [tex]T^{\mu}{}_{\nu}[/tex] and [tex]T_{\mu}{}^{\nu}[/tex], which clearly involves a metric tensor. But when you want to go from component manipulation to matrix operations you have to be careful. Components are scalars and their products are commutative, but the matrix products they represent are not.

Setup: let M be a manifold and p a point on the manifold. Let V be the tangent space at p and V* be the cotangent space at p. Let [tex]T:V^*\times V^*\to R[/tex] be a (2,0) tensor. Choosing a basis for V we can talk about the components of T: [tex]T^{\mu\nu}[/tex]. Let g be the metric tensor so [tex]g:V\times V\to R[/tex] and the inverse [tex]g^{-1}:V^*\times V^*\to R[/tex].

If we only "fill" one of the slots of T we can create maps like [tex]T:V^*\to V[/tex] and we can also do the same for g (think Riesz rep. theorem) [tex]g:V\to V^*[/tex].

So in that sense we can create some new maps: [tex]gT:V^*\to V^*[/tex] and [tex]Tg:V\to V[/tex].

My Claim:

the components of gT are [tex]g_{\mu\nu}T^{\nu\lambda}=T_{\mu}{}^{\lambda}[/tex], while the components of Tg are [tex]T^{\nu\lambda}g_{\lambda\mu}=T^{\nu}{}_{\mu}[/tex]. Moreover, I claim that to work out the matrix products you use gT and Tg as guidelines. A professor disagrees, however; he says he's never seen the metric act from the right (and usually in component form it is written on the left).

How, using matrices, would you lower the first or second index on [tex]T^{\mu\nu}[/tex], and does the order matter? Looking at the above in a coordinate-free manner, with composition of functions, it does seem to matter.
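
For concreteness, here is a minimal numpy sketch of the two products I have in mind; the numerical components of g and T below are invented purely for illustration.

[code]
import numpy as np

# Invented components: a symmetric metric g_{mu nu} and a non-symmetric (2,0) tensor T^{mu nu}.
g = np.array([[1.0, 0.3],
              [0.3, 2.0]])
T = np.array([[4.0, 1.0],
              [2.0, 5.0]])

# Lowering the FIRST index: T_mu^lambda = g_{mu nu} T^{nu lambda}  <->  the matrix product g @ T.
T_first_lowered = np.einsum('mn,nl->ml', g, T)
assert np.allclose(T_first_lowered, g @ T)

# Lowering the SECOND index: T^nu_mu = T^{nu lambda} g_{lambda mu}  <->  the matrix product T @ g.
T_second_lowered = np.einsum('nl,lm->nm', T, g)
assert np.allclose(T_second_lowered, T @ g)

# T_mu^lambda and T^lambda_mu agree only if T is symmetric; here T is not, so they differ.
print(np.allclose(T_first_lowered, T_second_lowered.T))  # False
[/code]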

Thank you in advance.
 
  • #2
Not sure what you mean by the "components of Tg", since it's not a tensor. (It doesn't have codomain ℝ). Of course, Tg can be used to define a (1,1) tensor, which you might also want to denote by Tg:
$$Tg(u,\omega)=\omega(Tg(u)).$$ Is this the one you want to find the components of? We have ##(Tg)_i{}^j=Tg(e_i,e^j)=e^j(Tg(e_i))##.
 
  • #3
I believe:

[tex]gT\rightarrow ( g_{\mu \nu}T^{\nu \lambda })[/tex]
[tex]Tg\rightarrow ( g_{\lambda \mu}T^{\nu \lambda })[/tex]

where gT and Tg represent the matrix multiplication of the components of T and g.
 
Last edited:
  • #4
elfmotat said:
I believe:

[tex]gT\rightarrow ( g_{\mu \nu}T^{\nu \lambda })[/tex]
[tex]Tg\rightarrow ( g_{\mu \lambda}T^{\nu \lambda })[/tex]

where gT and Tg represent the matrix multiplication of the components of T and g.

it looks like you're multiplying by 'g' first each time, but the sum doesn't make it look like matrix multiplication.
 
  • #5
homology said:
it looks like you're multiplying by 'g' first each time, but the sum doesn't make it look like matrix multiplication.

Maybe this will make more sense:

[tex]g_{\mu \nu}T^{\nu \lambda }=

\begin{bmatrix}
g_{11} & g_{12}\\
g_{21} & g_{22}
\end{bmatrix}

\begin{bmatrix}
T^{11} & T^{12}\\
T^{21} & T^{22}
\end{bmatrix}
[/tex]

[tex]g_{\lambda \mu }T^{\nu \lambda }=

\begin{bmatrix}
T^{11} & T^{12}\\
T^{21} & T^{22}
\end{bmatrix}

\begin{bmatrix}
g_{11} & g_{12}\\
g_{21} & g_{22}
\end{bmatrix}[/tex]
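
Writing out one entry of each product makes the summed index explicit, e.g.

[tex](gT)_{1}{}^{2}=g_{11}T^{12}+g_{12}T^{22},\qquad (Tg)^{1}{}_{2}=T^{11}g_{12}+T^{12}g_{22}.[/tex]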

Unless that's not what you were asking.
 
Last edited:
  • #6
Elfmotat,

That is what I'm asking. So you agree that 'matrix-wise' you need to 'operate' on different sides to lower different indices? Do you have a reference? The instructor is somewhat out of date and responds better to references.
 
  • #7
In the case of the first one (##g_{\mu\nu}T^{\nu\lambda}##) you're summing over the second index in g and the first index in T. Multiplying the corresponding matrices like I did above allows this to happen, by definition of matrix multiplication. I.e. you get ##g_{\mu 1}T^{1\lambda}+g_{\mu 2}T^{2\lambda}##.
 
  • #8
elfmotat said:
In the case of the first one (##g_{\mu\nu}T^{\nu\lambda}##) you're summing over the second index in g and the first index in T. Multiplying the corresponding matrices like I did above allows this to happen, by definition of matrix multiplication.

I.e. you get ##g_{\mu 1}T^{1\lambda}+g_{\mu 2}T^{2\lambda}##.

Oh, I totally agree. It seems motivated both from a matrix-multiplication standpoint and from a composition-of-functions standpoint, but I lost points on this one. And he's stubborn, so it almost doesn't matter if I'm right. It's pretty frustrating.
 
  • #9
Fredrik said:
Not sure what you mean by the "components of Tg", since it's not a tensor. (It doesn't have codomain ℝ). Of course, Tg can be used to define a (1,1) tensor, which you might also want to denote by Tg:
$$Tg(u,\omega)=\omega(Tg(u)).$$ Is this the one you want to find the components of? We have ##(Tg)_i{}^j=Tg(e_i,e^j)=e^j(Tg(e_i))##.


Fair enough, I have been a bit sloppy with my meaning. Though would you agree that Tg can be thought of as a linear transformation on V? Let me try to explain.

I'm going to use Roman indices to make the TeXing easier.

Let [tex]e_i[/tex] be a basis for V and [tex]a^i[/tex] the corresponding dual basis in V*. Then [tex]v\in V[/tex] can be expressed as [tex]v=v^i e_i[/tex] and the metric tensor as [tex]g=g_{ij}a^i\otimes a^j[/tex] and T as [tex]T=T^{ij}e_i\otimes e_j[/tex].

So I imagine feeding v to g, g(_,v) which gives [tex]g_{ij}v^j a^i[/tex] which is an element of the cotangent space V*. Then I put this into the second slot of T,

[tex]T(\mbox{_},g(\mbox{_},v))=T(\mbox{_},g_{ij}v^j a^i)=g_{ij}v^jT^{mn}e_m\delta_n^i=g_{ij}v^jT^{mi}e_m[/tex]

So the object T(_,g(_,v)) has components [tex]T^{mi}g_{ij}[/tex] which implies a matrix product Tg to produce a new function (not a tensor?) but a linear transformation on V.

To summarize: To create a linear transformation on V, out of the tensor T, using the metric tensor g, requires a product Tg, which can be computed with their matrix representations, but it requires g acting from the right. Am I off track Fredrik?
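
As a quick numerical sanity check of that claim (the components below are made up purely for illustration), contracting v into g and then into the second slot of T reproduces the matrix product Tg applied to v:

[code]
import numpy as np

# Made-up components: metric g_{ij}, (2,0) tensor T^{ij}, vector v^i.
g = np.array([[1.0, 0.3],
              [0.3, 2.0]])
T = np.array([[4.0, 1.0],
              [2.0, 5.0]])
v = np.array([1.0, -2.0])

# Step 1: the covector g(_, v), with components g_{ij} v^j.
omega = np.einsum('ij,j->i', g, v)

# Step 2: feed it into the second slot of T: components T^{mi} omega_i = T^{mi} g_{ij} v^j.
result = np.einsum('mi,i->m', T, omega)

# This is exactly the matrix product (T g) v, i.e. the metric acting from the right of T.
assert np.allclose(result, T @ g @ v)
[/code]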
 
  • #10
homology said:
Though would you agree that Tg can be thought of as a linear transformation on V?
Yes. I understood your definition of Tg. If A and B respectively denote the maps ##\omega\mapsto T(\cdot,\omega)## and ##v\mapsto g(\cdot,v)##, then Tg is defined as ##A\circ B##. So
$$Tg(v)=A(B(v))=A(g(\cdot,v))=T(\cdot,g(\cdot,v)).$$ In the abstract index notation, ##g(\cdot,v)## is written as ##g_{ab}v^b## and ##T(\cdot,\omega)## as ##T^{ab}\omega_b##. So Tg(v) is written as ##T^{ab}g_{bc}v^c##. By the usual magic of the abstract index notation, this expression can be interpreted in several different ways. Since it has a free index upstairs, it can be interpreted as a member of V, or as the corresponding member of V**. If we choose the latter interpretation, we can write ##Tg(v)(\omega)=T^{ab}g_{bc}v^c\omega_a##. The function that takes ##(\omega,v)## to that number is the (1,1) tensor I mentioned in my previous post. If it's denoted by Tg, its components are ##(Tg)^i{}_j=T^{ik}g_{kj}##.

homology said:
So the object T(_,g(_,v)) has components [tex]T^{mi}g_{ij}[/tex] which implies a matrix product Tg to produce a new function (not a tensor?) but a linear transformation on V.
Isn't the only reason you end up with g on the right that you chose to fill the second slot of T and g instead of the first?
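
Spelling that out with the same basis conventions as above: filling the first slot of g with v gives the covector [tex]g(v,\cdot)=g_{ij}v^{i}a^{j}[/tex], and feeding that into the first slot of T gives

[tex]T(g(v,\cdot),\cdot)=T^{jn}g_{ij}v^{i}e_{n}=(g_{ij}T^{jn})\,v^{i}e_{n},[/tex]

so with this choice the contraction is [tex]g_{ij}T^{jn}[/tex], i.e. the matrix product gT, with the metric on the left.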
 

What is raising/lowering indices in matrix multiplication?

Lowering an index means contracting a tensor with the metric tensor [tex]g_{\mu\nu}[/tex], and raising an index means contracting with the inverse metric [tex]g^{\mu\nu}[/tex]. When the components of a (2,0) tensor [tex]T^{\mu\nu}[/tex] are arranged as a matrix, lowering the first index corresponds to multiplying by the matrix of g on the left, and lowering the second index corresponds to multiplying by the matrix of g on the right.

Why is it important to raise/lower indices in matrix multiplication?

Raising and lowering indices converts between the different objects built from the same tensor, such as [tex]T^{\mu\nu}[/tex], [tex]T_{\mu}{}^{\nu}[/tex], [tex]T^{\mu}{}_{\nu}[/tex] and [tex]T_{\mu\nu}[/tex]. Keeping track of which side the metric multiplies on ensures that the matrix computation reproduces the intended index contraction.

How do you raise/lower indices in matrix multiplication?

To lower the first index of [tex]T^{\mu\nu}[/tex] you form [tex]g_{\mu\nu}T^{\nu\lambda}=T_{\mu}{}^{\lambda}[/tex], which is the matrix product gT; to lower the second index you form [tex]T^{\nu\lambda}g_{\lambda\mu}=T^{\nu}{}_{\mu}[/tex], which is the matrix product Tg. Raising an index works the same way, with the inverse metric [tex]g^{\mu\nu}[/tex] multiplying on the corresponding side.

What are the rules for raising/lowering indices in matrix multiplication?

The summed index of the metric must be adjacent to the index of T being lowered: summing against the first index of T places g on the left of the matrix product, and summing against the second index of T places g on the right. The same rule, with [tex]g^{-1}[/tex] in place of g, applies to raising indices.

Can raising/lowering indices change the outcome of matrix multiplication?

Yes, the order matters. Lowering the first index (gT) and lowering the second index (Tg) give different component arrays whenever T is not symmetric, so [tex]T_{\mu}{}^{\lambda}[/tex] and [tex]T^{\lambda}{}_{\mu}[/tex] must be distinguished. Raising the index again with [tex]g^{-1}[/tex] on the same side recovers the original [tex]T^{\mu\nu}[/tex].
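
As a minimal numerical illustration of the last two answers (the components are invented for the example), lowering an index with g and then raising it again with [tex]g^{-1}[/tex] on the same side returns the original tensor:

[code]
import numpy as np

# Invented components: a symmetric metric g and a non-symmetric (2,0) tensor T.
g = np.array([[1.0, 0.3],
              [0.3, 2.0]])
g_inv = np.linalg.inv(g)          # components of the inverse metric g^{mu nu}
T = np.array([[4.0, 1.0],
              [2.0, 5.0]])

# Lower the first index (g on the left), then raise it again (g^{-1} on the left).
T_lowered_first = g @ T           # T_mu^lambda = g_{mu nu} T^{nu lambda}
assert np.allclose(g_inv @ T_lowered_first, T)

# Lower the second index (g on the right), then raise it again (g^{-1} on the right).
T_lowered_second = T @ g          # T^nu_mu = T^{nu lambda} g_{lambda mu}
assert np.allclose(T_lowered_second @ g_inv, T)
[/code]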
