Transformation of Tensor Components

In summary, the conversation discusses how tensor components transform when the coordinate system is changed. The first question asks what the point of rearranging the indicial form is and whether the rearrangement violates the non-commutativity of matrix multiplication. The second asks what the rearranged form means and why the matrix is transposed. The expert explains that the rearrangement is not necessary but may make things clearer, and that the final factor is the transformation matrix with its rows and columns swapped, i.e. its transpose, so the whole product is ##\mathbf{A T A^{T}}##.
  • #1
FluidStu
In the transformation of tensor components when changing the co-ordinate system, can someone explain the following:

[Attached image: the tensor transformation law in indicial form, ##T'_{ij} = a_{ik}a_{jl}T_{kl} = a_{ik}T_{kl}a_{jl}##, alongside its matrix form ##\mathbf{T'} = \mathbf{A T A^{T}}##]


Firstly, what is the point in re-writing the indicial form (on the left) as ##a_{ik}T_{kl}a_{jl}##? Since we're representing the components in a matrix, and the transformation matrix is also a matrix, aren't we violating the non-commutativity of matrix multiplication (AB ≠ BA)?

Secondly, how does this mean ##\mathbf{A T A^{T}}## in matrix form? Why are we transposing the matrix?

Thanks in advance
 
  • #2
FluidStu said:
Firstly, what is the point in re-writing the indicial form (on the left) as ##a_{ik}T_{kl}a_{jl}##? Since we're representing the components in a matrix, and the transformation matrix is also a matrix, aren't we violating the non-commutativity of matrix multiplication (AB ≠ BA)?
## \mathcal A##, etc. refer to the matrices themselves, but ##a_{ik}##, etc. refer to the elements of those matrices. So the ##a_{ik}##s are just numbers and commute with each other.
Actually there is no need to rearrange them in that way, but the author seems to think it makes things clearer.
FluidStu said:
Secondly, how does this mean ## \mathbf{A T A^{T}} ## in matrix form? Why are we transposing the matrix?
Remember the rule for matrix multiplication? It reads ## (AB)_{ij}=\sum_k A_{ik} B_{kj} ##, which, if we adopt the convention that repeated indices are summed over, becomes ## (AB)_{ij}=A_{ik} B_{kj} ##.
So it's clear that ## a_{ik}T_{kl} ## is actually ##C=\mathcal{AT} ##. Now we have ##c_{il} a_{jl} ##. If I define ##d_{lj}=a_{jl} ##, the result becomes ## c_{il} d_{lj} ##, which is clearly the product of two matrices. The first is ## \mathcal{AT} ##, and the second was defined as the matrix ##\mathcal{A}## with its rows and columns swapped (because I swapped l and j in ##a_{jl}## to get ## d_{lj} ##). So it's clear that the second matrix is ## \mathcal{A}^T##, and the whole product is ## \mathcal{ATA}^T ##.
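To make the index bookkeeping concrete, here is a small numerical check of that identity (an editor's sketch, not part of the original post), using NumPy's einsum to carry out the sum over the repeated indices k and l:

```python
# Sketch (editor's illustration): verify that the index sum a_ik T_kl a_jl
# reproduces the matrix product A T A^T.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # transformation matrix, entries a_ik
T = rng.standard_normal((3, 3))   # tensor components in the old frame

# Index form: T'_ij = a_ik T_kl a_jl (sum over repeated k and l)
T_index = np.einsum('ik,kl,jl->ij', A, T, A)

# Matrix form: T' = A T A^T
T_matrix = A @ T @ A.T

print(np.allclose(T_index, T_matrix))  # True
```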
 
  • #3
Got it! Thanks :)
 

1. What is the transformation of tensor components?

The transformation of tensor components refers to the process of converting tensor components from one coordinate system to another. This is necessary when working with tensors in different reference frames or coordinate systems.

2. Why is the transformation of tensor components important?

The transformation of tensor components is important because it allows for the analysis and manipulation of tensors in different reference frames. It is necessary for solving problems in areas such as physics, engineering, and mathematics.

3. How is the transformation of tensor components performed?

The transformation of tensor components is performed using transformation rules that depend on the order of the tensor and on the transformation matrix relating the two coordinate systems. For a second-order tensor the rule is ##T'_{ij} = a_{ik}a_{jl}T_{kl}##, or in matrix form ##\mathbf{T'} = \mathbf{A T A^{T}}##, as sketched below.
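As a minimal illustration of this rule (an assumed example with made-up numbers, not taken from the thread), the following sketch transforms the components of a second-order tensor under a rotation of Cartesian axes:

```python
# Sketch (assumed example): components of a second-order tensor after
# rotating the Cartesian axes by 30 degrees.
import numpy as np

theta = np.deg2rad(30.0)
# Rows of A are the new basis vectors written in the old basis, so A[i, k] = a_ik.
A = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

T = np.array([[2.0, 1.0],    # tensor components in the old frame
              [1.0, 3.0]])

T_new = A @ T @ A.T          # T'_ij = a_ik a_jl T_kl
print(T_new)
```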

4. What are some common examples of tensor transformations?

Some common examples of tensor transformations include expressing Cartesian components in a polar (or spherical) basis, transforming tensors from a local reference frame to a global reference frame, and rotating the axes of a Cartesian frame; the first of these is sketched below for a vector.
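For the Cartesian-to-polar case, a minimal sketch (editor's illustration with made-up numbers) of the first-order rule ##v'_i = a_{ik} v_k##, expressing a vector's Cartesian components in the local polar basis at a point:

```python
# Sketch (assumed example): Cartesian vector components (v_x, v_y) expressed
# in the local polar basis (e_r, e_theta) at the point (x, y).
import numpy as np

x, y = 1.0, 1.0
v = np.array([2.0, 0.5])     # components in the Cartesian basis

theta = np.arctan2(y, x)
# Rows are e_r and e_theta written in the Cartesian basis, so A[i, k] = a_ik.
A = np.array([[ np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])

v_polar = A @ v              # v'_i = a_ik v_k
print(v_polar)               # [v_r, v_theta]
```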

5. What are the challenges of performing tensor transformations?

The challenges of performing tensor transformations include understanding and applying the correct transformation rules, dealing with complex equations and calculations, and keeping track of the different coordinate systems and reference frames involved.
