Tensor summation and components.

In summary: The components of a tensor change in a specific way when you change from one coordinate system to another. The tensor itself does not change.
  • #1
peripatein
Hello,

I would very much like someone to please clarify the following points concerning tensor summation to me. Suppose the components of a tensor ##A^{ij}## are ##A^{12} = A^{21} = A## (or, in general, ##A^{xy} = A^{yx} = A##), whereas all the other components are 0. Is this a symmetric tensor then? How may ##A^{ij}## be written in the form of a matrix? Furthermore, suppose I then have the following sum:
##R^i{}_l R^j{}_m A^{lm}##
Do ##l## and ##m## run from 1 to 3? How may I actually carry out this summation, considering the above-mentioned properties of A?

Thanks!
 
  • #2
Yes, if ##A^{ij}=A^{ji}## for all i,j, then A is symmetric.

Recall that the definition of matrix multiplication is ##(AB)_{ij}=A_{ik} B_{kj}## (with a summation over k). So if you want to write the sum you asked about as a product of matrices, it will be ##RAR^T##, where R is the matrix with ##R^i{}_j## on row i, column j, and A is the matrix with ##A^{ij}## on row i, column j.

The fact that A is symmetric doesn't simplify the problem much, unless you have used it to choose R such that ##RAR^T## is diagonal. Then that simplifies the problem by allowing you to ignore all matrix elements with i≠j (because they're all zero).
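To make this concrete, here is a minimal numpy sketch (not part of the original thread) that builds the A from post #1 and checks that the index sum agrees with the matrix product ##RAR^T##. The particular R used is an arbitrary example.

Code:
import numpy as np

# The tensor from post #1: A^{12} = A^{21} = a, all other components zero.
a = 2.0
A = np.zeros((3, 3))
A[0, 1] = A[1, 0] = a   # symmetric by construction

# An arbitrary 3x3 transformation matrix, just for the check.
R = np.array([[ 0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0]])

# The sum R^i_l R^j_m A^{lm}, with l and m each running over all three values.
B_sum = np.einsum('il,jm,lm->ij', R, R, A)

# The same sum written as a matrix product.
B_mat = R @ A @ R.T

assert np.allclose(B_sum, B_mat)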
 
  • #3
Would your answer still be valid if A and R were tensors? (which they are!)
 
  • #4
Sure, why wouldn't it?
 
  • #5
OK, your answer then brings me back to a few elementary questions (if I may):
1) Does it matter whether the indices are contravariant or covariant (with regard to the matrix multiplication formula you wrote in your first reply)? In other words, will it have any effect on the formula?
2) What was the rationale behind writing the three tensors in your answer in the order in which they appear?
 
  • #6
1. It influences the definition of the matrices, but not much else. For example, let's define ##B^{jl}=R^j{}_m A^{lm}##. What you wrote can now be written as ##R^i{}_l B^{jl}##. Suppose that we want to interpret this as row i, column j of a matrix RB, obtained by multiplying a matrix R with a matrix B.

Now look at the definition of matrix multiplication. The indices that are being summed over are the column index of the matrix on the left and the row index of the matrix on the right. So we must interpret ##l## as a column index of R, and as a row index of B. That means we must define R as the matrix with ##R^i{}_j## on row i, column j, and B as the matrix with ##B^{ji}## on row i, column j. That last one looks really weird, since we're used to having the row index first.

So we should do something about it. The obvious solution is to abandon the original plan to interpret what you wrote as ##(RB)^{ij}##, and instead define B as the matrix with ##B^{ij}## on row i, column j, so that we can interpret what you wrote as the row i, column j component of ##RB^T##.
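A short numpy check of the index bookkeeping above (again not from the thread; the arrays are arbitrary):

Code:
import numpy as np

rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))
A = A + A.T   # make A symmetric, as in the thread

# The intermediate tensor B^{jl} = R^j_m A^{lm}; note the index order.
B = np.einsum('jm,lm->jl', R, A)

# R^i_l B^{jl}: l is a column index of R but the second index of B,
# so as a matrix product this is R B^T, not R B.
result = np.einsum('il,jl->ij', R, B)
assert np.allclose(result, R @ B.T)

# And the whole expression still collapses to R A R^T.
assert np.allclose(result, R @ A @ R.T)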

2. It follows from the definition of matrix multiplication, as in the answer to 1 above.

I won't have time for followup questions for the next 10 hours or so. But maybe someone else does.
 
  • #7
That all makes perfect sense now :-). Thank you very much for your kindness and insightful assistance, Fredrik!
 
  • #8
A tensor can always be represented as a matrix in a given coordinate system. The distinction between a "matrix" and a "tensor" is that a tensor changes in a specific way when you change from one coordinate system to another.
 
  • #9
HallsofIvy said:
A tensor can always be represented as a matrix in a given coordinate system. The distinction between a "matrix" and a "tensor" is that a tensor changes in a specific way when you change from one coordinate system to another.
Well, if ##V## is a finite-dimensional vector space and ##L## is a linear operator on ##V##, then it indeed has a coordinate representation as a matrix. But more generally, if ##T## is a tensor associated with ##V##, then, very loosely put, its "coordinate representation" is what is known as a hypermatrix: http://galton.uchicago.edu/~lekheng/work/hla.pdf
 
  • #10
HallsofIvy said:
A tensor can always be represented as a matrix in a given coordinate system. The distinction between a "matrix" and a "tensor" is that a tensor changes in a specific way when you change from one coordinate system to another.

This is true for linear and bilinear maps, but not for n-linear maps with n > 2 (those require arrays with more than two indices).
 
  • #11
HallsofIvy said:
A tensor can always be represented as a matrix in a given coordinate system. The distinction between a "matrix" and a "tensor" is that a tensor changes in a specific way when you change from one coordinate system to another.
I think this response should be made more precise. The components of a tensor change in a specific way when you change from one coordinate system to another. The tensor itself is independent of coordinate system.
 
  • #12
Chestermiller said:
I think this response should be made more precise. The components of a tensor change in a specific way when you change from one coordinate system to another. The tensor itself is independent of coordinate system.
Yes, thank you.
 

1. What is tensor summation?

Tensor summation is the process of adding or combining tensors, which are mathematical objects that represent multilinear relationships between sets of vectors. In the simplest case it works like vector addition: tensors of the same type are added component by component. The term is also used, as in the thread above, for summing over repeated indices (the Einstein summation convention).

2. How do you perform tensor summation?

To perform tensor summation, you first need to ensure that the tensors you are adding have the same dimensions. Then, you simply add the corresponding components of each tensor together. For example, if you have two tensors A and B, the sum of these tensors would be A + B = [a1 + b1, a2 + b2, a3 + b3, ...], where a1, a2, a3, etc. are the components of tensor A and b1, b2, b3, etc. are the components of tensor B.
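For instance, a minimal numpy illustration of component-wise addition (the array values are arbitrary):

Code:
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[10.0, 20.0],
              [30.0, 40.0]])

# Component-wise sum: C[i, j] = A[i, j] + B[i, j].
C = A + B
print(C)   # [[11. 22.] [33. 44.]]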

3. What is the difference between tensor summation and tensor multiplication?

The main difference is what happens to the components. In tensor summation, corresponding components are simply added together, and the result has the same shape as the inputs. In tensor multiplication, the components are multiplied together according to specific rules, typically with a sum over a shared index (a contraction), which can change the shape of the result.
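A small numpy sketch of the contrast, using arbitrary arrays:

Code:
import numpy as np

A = np.arange(9.0).reshape(3, 3)
B = np.eye(3)

# Summation: components combine one-to-one; the shape is unchanged.
S = A + B

# Multiplication with a contraction: components are multiplied and then
# summed over a shared index, e.g. (AB)_{ij} = A_{ik} B_{kj}.
P = np.einsum('ik,kj->ij', A, B)
assert np.allclose(P, A @ B)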

4. What are tensor components?

Tensor components are the numbers used to represent a tensor in a given basis or coordinate system. How many there are depends on both the rank of the tensor (the number of indices) and the dimension of the space. For example, a rank-2 tensor in 3 dimensions has components that form a 3x3 matrix, while a rank-3 tensor in 4 dimensions has components that form a 4x4x4 array.
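In numpy terms (an illustrative sketch):

Code:
import numpy as np

# Rank-2 tensor in 3 dimensions: 3 x 3 = 9 components.
T2 = np.zeros((3, 3))

# Rank-3 tensor in 4 dimensions: 4 x 4 x 4 = 64 components.
T3 = np.zeros((4, 4, 4))

print(T2.shape, T2.size)   # (3, 3) 9
print(T3.shape, T3.size)   # (4, 4, 4) 64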

5. Why is tensor summation important in scientific research?

Tensor summation is important in scientific research because it allows for the manipulation and analysis of data in multiple dimensions. This is particularly useful in fields such as physics and engineering, where many phenomena are multidimensional and complex. Tensor summation also enables the use of powerful mathematical tools, such as tensor calculus, which can be applied to solve complex problems in various scientific disciplines.
