What are the differences between matrices and tensors?

Discussion Overview

The discussion focuses on the differences between matrices and tensors, exploring theoretical distinctions, representations, and the implications of these differences in various contexts. Participants share their understanding of linear algebra, tensor properties, and the geometric interpretation of tensors compared to matrices.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants note that a matrix is a specific type of tensor, but not all tensors can be represented as matrices.
  • One participant emphasizes that tensors are geometric objects independent of a chosen basis, while matrices are representations of these objects in a specific basis.
  • Another participant highlights the importance of understanding the distinction between a vector and its representation as a set of numbers in a basis, as well as between linear operators and their matrix representations.
  • It is mentioned that the transformation rules for tensors relate different representations of a tensor across bases, underscoring their geometric nature.
  • A participant introduces the concept of dual spaces and the tensor product, explaining how tensors can be constructed from independent spaces and the significance of preserving geometry in this process.
  • There is a discussion about the implications of overlap in spaces when forming tensors, drawing an analogy to probability theory to illustrate the concept of independence in tensor construction.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and agreement on the distinctions between matrices and tensors. While some points are clarified, multiple competing views and interpretations remain unresolved.

Contextual Notes

Some participants acknowledge their ongoing learning in linear algebra, which may affect their understanding of the concepts discussed. There are also references to specific mathematical definitions and transformation rules that may not be fully explored in the discussion.

Sorcerer
I have not really finished studying linear algebra, I have to admit. The furthest I have gotten is manipulating matrices a little bit (although I have used this in differential equations to calculate a Wronskian to see if two solutions are linearly independent, but again, a determinant is pretty basic). But I tried looking at tensors, and I am having a hard time distinguishing between a matrix and a tensor. What are the differences other than Einstein summation convention? I know I have to just hit the grindstone and finish learning the basics of linear algebra, but hopefully someone can enlighten me a bit about the differences.
 
The components of a 2-index tensor with respect to a basis can be described by a matrix. But a tensor is a geometric object, independent of any chosen basis. A matrix is just a way of describing it with respect to some basis.

Likewise, a vector is not just a set of numbers, e.g. {1,2,3}. This set, at most, describes the components of a vector with respect to some basis.

So the lesson is that 2-index tensors have more structure than the matrices one uses to describe their components.

For tensors with more than two indices, you cannot use a matrix anymore to depict the components.
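A minimal NumPy sketch of the point above (my own illustration, not from the thread): a bilinear form is one geometric object, but its component matrix depends on the basis. Changing basis with a matrix P sends the components B to P^T B P, yet the scalar the tensor produces from a fixed pair of vectors is unchanged.

```python
import numpy as np

# Components of a (0,2)-tensor (a bilinear form) in the standard basis.
B = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# Change-of-basis matrix P: its columns are the new basis vectors
# expressed in the old basis.
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Components of the SAME tensor in the new basis: B' = P^T B P.
B_new = P.T @ B @ P

# The tensor itself is unchanged: evaluating it on a fixed pair of
# vectors gives the same scalar in either description.
u_old = np.array([1.0, 2.0])          # components of u in the old basis
v_old = np.array([3.0, 1.0])          # components of v in the old basis
u_new = np.linalg.solve(P, u_old)     # same vector u, new-basis components
v_new = np.linalg.solve(P, v_old)     # same vector v, new-basis components

assert np.isclose(u_old @ B @ v_old, u_new @ B_new @ v_new)
```

The two matrices B and B_new look different, but they describe one and the same tensor; the invariant scalar is what the basis-free object computes.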
 
In addition to what has been said above: before you worry about tensors, you need to fully grasp

a) the difference between a vector and the representation of a vector as three numbers in a given basis

b) the difference between a linear operator and the representation of a linear operator as a matrix in a particular basis

For physics, you also need to grasp the difference between these as physical things and mathematical objects.
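Points a) and b) can be made concrete with a small NumPy sketch (my own illustration, hypothetical numbers): the same geometric vector has different component tuples in different bases, and the same linear operator has different matrix representations, related by a similarity transform.

```python
import numpy as np

# a) One geometric vector, two representations.
# P's columns are the new basis vectors written in the standard basis.
P = np.array([[0.0, -1.0],
              [1.0,  0.0]])
v_old = np.array([3.0, 4.0])           # components in the standard basis
v_new = np.linalg.solve(P, v_old)      # components of the SAME vector, new basis

# b) One linear operator, two matrix representations: A' = P^-1 A P.
A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
A_new = np.linalg.solve(P, A @ P)

# Consistency check: applying A in the old basis and converting the
# result agrees with applying A' directly in the new basis.
assert np.allclose(np.linalg.solve(P, A @ v_old), A_new @ v_new)
```

Neither (3, 4) nor v_new "is" the vector; each is a representation of it, and the similarity transform is what ties the two matrix representations of the operator together.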
 
If you define a tensor and represent it in two different bases by its components, then the two representations are related by the tensor transformation rules. That is what makes it a physical or geometric entity rather than just a mathematical entity.

A mathematical entity that is not a tensor: define the ordered pair (1,2), regardless of what it means in any basis, so that it corresponds to the components (1,2) no matter what the basis is. That is a mathematical entity that does not transform correctly to be a tensor; in fact, it does not transform at all. People often define the unit basis vectors (0,1) and (1,0) in any basis. That concept is not intrinsically a tensor and does not have a fixed physical or geometric meaning. But it can be used to represent an associated tensor (vector) that does follow the tensor transformation rules.
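A NumPy sketch of this contrast (my own illustration, with a hypothetical basis change): genuine vector components transform so that the underlying geometric object stays fixed, while a tuple declared to be (1, 2) in every basis points at a different geometric object in each basis.

```python
import numpy as np

# New basis vectors as the columns of P (here: stretch the first axis).
P = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# A genuine vector: its components transform as v' = P^-1 v, so the
# geometric object it describes is the same in both bases.
v = np.array([1.0, 2.0])
v_new = np.linalg.solve(P, v)

# The "fixed tuple" (1, 2): declared to be (1, 2) in EVERY basis.
# Reconstruct the geometric vector it names in each basis:
geo_old = np.eye(2) @ np.array([1.0, 2.0])   # old basis: the vector (1, 2)
geo_new = P @ np.array([1.0, 2.0])           # new basis: a different vector

# Different geometric objects in different bases -> not a tensor.
assert not np.allclose(geo_old, geo_new)
```

The untransformed tuple fails the test precisely because it ignores the transformation rules; the components v_new pass it because they compensate for the basis change.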
 
Hey Sorcerer

Building on what haushofer has mentioned about being independent of basis, a tensor is constructed from spaces that are "dual" to one another [i.e. independent], so that they share nothing in common.

The "overlap" between the spaces is literally nothing [i.e. just the zero vector]. The construction is like a Cartesian product of two sets, except that you need to preserve the geometry of the spaces [which means preserving the vector space axioms, including a metric and inner product if necessary] and synthesize a new space from the building blocks of the original ones.

When they have some overlap between them, then you need to make them dual by removing the stuff common to both spaces and then take the tensor product.

It's a bit like P(A or B) = P(A) + P(B) - P(A and B) in probability theory. In this analogy the overlap term is P(A and B), and provided it is zero, you get the analogous situation with tensors, where the tensor product gives you the new space.

If the overlap term is non-zero [i.e. non-zero vectors exist in both spaces] then you have to "transform" one space so that you remove the overlap and then do the tensor product.

The reason they are used is that the theory of tensors lets you build more complicated spaces, and the mathematical results are guaranteed to hold when the spaces are independent of one another [in terms of the information they contain]. This means physicists, engineers, and applied mathematicians can simply use these results, knowing they work when the spaces are independent and carry the right geometry, much as they use results from calculus or linear algebra.
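The duality/overlap discussion above is heuristic, but the basic mechanics of the tensor product are easy to sketch in NumPy (my own illustration, not from the thread): combining a 2-dimensional space V with a 3-dimensional space W gives a 6-dimensional product space, and the product is bilinear in each factor.

```python
import numpy as np

# A vector in V (dim 2) and a vector in W (dim 3).
v = np.array([1.0, 2.0])
w = np.array([1.0, 0.0, -1.0])

# The simple tensor v (x) w: components form a 2x3 array, so the
# product space V (x) W has dimension dim(V) * dim(W) = 6.
T = np.outer(v, w)
assert T.shape == (2, 3)

# Bilinearity: scaling either factor scales the tensor.
assert np.allclose(np.outer(3 * v, w), 3 * T)
assert np.allclose(np.outer(v, 2 * w), np.outer(2 * v, w))

# A general element of V (x) W is a SUM of simple tensors and need
# not factor as a single outer product.
general = T + np.outer(np.array([0.0, 1.0]),
                       np.array([1.0, 1.0, 0.0]))
assert general.shape == (2, 3)
```

Note the contrast with the direct sum, which would have dimension 2 + 3 = 5: the tensor product multiplies dimensions, which is why it generates the larger spaces the post refers to.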
 