What are the differences between matrices and tensors?

Sorcerer
I have not really finished studying linear algebra, I have to admit. The furthest I have gotten is manipulating matrices a little bit (although I have used this in differential equations to calculate a Wronskian to see if two solutions are linearly independent, but again, a determinant is pretty basic). But I tried looking at tensors, and I am having a hard time distinguishing between a matrix and a tensor. What are the differences other than the Einstein summation convention? I know I just have to hit the grindstone and finish learning the basics of linear algebra, but hopefully someone can enlighten me a bit about the differences.
 
The components of a 2-index tensor with respect to a basis can be described by a matrix. But a tensor is a geometric object, independent of any chosen basis. A matrix is just a way to describe it with respect to some basis.

Likewise, a vector is not just a list of numbers, e.g. {1,2,3}. That list, at most, describes the components of a vector with respect to some basis.

So the lesson is that 2-index tensors have more structure than the matrices one uses to describe their components.

For tensors with more than two indices, you cannot use matrices at all to depict their components.
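A minimal numerical sketch of this point (the vector and second basis here are made up purely for illustration): the same geometric vector has different component lists in different bases, yet either list reconstructs the same vector.

```python
# A geometric vector v, expressed in two different bases of R^2.
# The vector itself is one object; only its component lists change.

# Components of v in the standard basis e1 = (1,0), e2 = (0,1):
v_std = [3.0, 1.0]

# A second (hypothetical) basis: b1 = (1,1), b2 = (-1,1).
# Components in the new basis solve c1*b1 + c2*b2 = v.
# Solving that 2x2 system gives c1 = (x + y)/2, c2 = (y - x)/2.
x, y = v_std
v_new = [(x + y) / 2, (y - x) / 2]

# Reconstruct the same geometric vector from the new components:
c1, c2 = v_new
reconstructed = [c1 * 1 + c2 * (-1), c1 * 1 + c2 * 1]

print(v_new)          # different numbers: [2.0, -1.0]
print(reconstructed)  # same vector: [3.0, 1.0]
```

The two component lists look nothing alike, but they describe one and the same arrow; that distinction is exactly the one being made above.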
 
In addition to what has been said above: before you worry about tensors, you need to fully grasp

a) the difference between a vector and the representation of a vector as three numbers in a given basis

b) the difference between a linear operator and the representation of a linear operator as a matrix in a particular basis

For physics, you also need to grasp the difference between these as physical things and as mathematical objects.
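As a small sketch of point (b) above (the operator and the second basis are invented for illustration): the same linear operator is represented by different matrices in different bases, related by the standard change-of-basis formula A' = P⁻¹AP, where the columns of P are the new basis vectors written in the old basis.

```python
# One linear operator T, two matrix representations (2x2, pure Python).

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Invert a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Matrix of T in the standard basis (a made-up example operator):
A = [[2.0, 1.0],
     [0.0, 3.0]]

# Change-of-basis matrix to the new basis b1 = (1,0), b2 = (1,1):
P = [[1.0, 1.0],
     [0.0, 1.0]]

# Representation of the SAME operator in the new basis:
A_new = matmul(inv2(P), matmul(A, P))
print(A_new)  # [[2.0, 0.0], [0.0, 3.0]]
```

In the new basis the matrix happens to come out diagonal (the new basis vectors are eigenvectors of this particular operator), which underlines the point: the operator did not change, only its representation did.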
 
If you define a tensor and represent it by its elements in two different bases, then the two representations are related by the tensor transformation rules. That is what makes it a physical or geometric entity rather than just a mathematical one.

An example of a mathematical entity that is not a tensor: define the ordered pair (1,2) to mean the same numbers regardless of basis, so that it "corresponds to" (1,2) no matter what the basis is. That entity does not transform correctly to be a tensor; in fact, it does not transform at all. Similarly, people often write the unit basis vectors as (1,0) and (0,1) in any basis. That concept is not intrinsically a tensor and does not have a fixed physical or geometric meaning. But it can be used to represent an associated vector (a rank-1 tensor) that does follow the tensor transformation rules.
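A small sketch of that failure (the change of basis chosen here is arbitrary): under a change of basis, a genuine vector's components must transform as v' = P⁻¹v, while a pair that is declared to be (1,2) in every basis simply does not transform at all.

```python
# A pair "(1, 2) in every basis" fails the tensor transformation rule;
# a real vector's components change as v' = P^{-1} v.

def apply2(M, v):
    """Apply a 2x2 matrix to a 2-component vector."""
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

# Hypothetical change of basis with P = [[1, -1], [1, 1]];
# its inverse (det = 2) is:
P_inv = [[0.5, 0.5],
         [-0.5, 0.5]]

v_old = [1.0, 2.0]
v_transformed = apply2(P_inv, v_old)  # what a real vector's components do
v_declared = [1.0, 2.0]               # the "same in every basis" pair

print(v_transformed)  # [1.5, 0.5] -- correct tensor behaviour
print(v_declared)     # [1.0, 2.0] -- unchanged, fails the rule
```

The declared pair and the properly transformed components disagree in the new basis, which is exactly why the fixed pair carries no basis-independent geometric meaning.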
 
Hey Sorcerer

Building on what haushofer has mentioned about being independent of basis: a tensor product is constructed from spaces that are "dual" to one another [i.e. independent], so that they share nothing in common.

The "overlap" between the spaces is literally nothing [i.e. only the zero vector]. The construction is like a Cartesian product of two sets, except that you need to preserve the geometry of the spaces [which means preserving the axioms of the vector space, including a metric and an inner product if necessary] and synthesize a new space from the building blocks of the original ones.

When the spaces do have some overlap, you first need to make them dual by removing the part common to both, and then take the tensor product.

It's a bit like P(A or B) = P(A) + P(B) - P(A and B) in probability: in this analogy, the overlap term is P(A and B), and provided it is zero, you get a similar result, where the tensor product directly gives the new space.

If the overlap is non-zero [i.e. non-zero vectors exist in both spaces], then you have to "transform" one space to remove the overlap and then take the tensor product.

The reason tensors are used is that you can build more complicated spaces out of simpler ones, and the mathematical results are guaranteed to hold as long as the spaces are independent of one another [in terms of the information they contain]. This means physicists, engineers, and applied mathematicians can use these results knowing they work, much as they use results from calculus or linear algebra.
 