Tensor Rank: Scalar, Vector, Matrix, & More

  • Context: Graduate
  • Thread starter: KFC
  • Tags: Matrices, Tensor

Discussion Overview

The discussion revolves around the concept of tensor rank, specifically the definitions and representations of tensors of various ranks, including scalars, vectors, and higher rank tensors. Participants explore the implications of tensor rank in terms of dimensionality and representation in mathematical form, including the use of indices and matrices.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested

Main Points Raised

  • One participant states that a zero-rank tensor is a scalar, a rank one tensor is a vector, and a rank two tensor is a 3x3 matrix, but questions how higher rank tensors can be represented.
  • Another participant clarifies that four indices correspond to a hypercube rather than a square matrix, emphasizing the need for four dimensions to represent all elements.
  • A third participant introduces the concept of tensors as multilinear functions and defines a tensor of type (0,4) in relation to a vector space and its dual space.
  • It is noted that the components of a tensor can change with different bases, but the tensor itself remains independent of these bases.
  • One participant points out that a rank 2 tensor is not limited to a 3x3 matrix, providing examples such as the Kronecker delta and the Minkowski metric.
  • Another participant adds that the Kronecker delta can be represented by an nxn matrix, where n is the dimension of the manifold.

Areas of Agreement / Disagreement

Participants express differing views on the representation of tensors, particularly regarding the dimensionality and forms of higher rank tensors. There is no consensus on the implications of these representations.

Contextual Notes

Some statements rely on specific definitions of tensors and their ranks, which may vary across different contexts. The discussion includes assumptions about the dimensionality of spaces and the nature of tensor components.

KFC
Hi there,
I have a question about tensor rank. As we know, a zero-rank tensor is a scalar, a rank-one tensor is a vector, and a rank-two tensor (in three dimensions) is a 3x3 matrix. Moreover, a scalar and a vector can also be written in matrix form. However, a higher-rank tensor, say rank 4, has by definition 3^4 = 81 entries, and many textbooks write a rank-4 tensor in index form as

T_{\alpha\beta\gamma\delta}

i.e., there are four indices. So can we also write a higher-rank tensor (rank 4 or above) as a square matrix? If so, what would each index mean?
 
Four indices means a hypercube, not a really big square matrix. Think about it: each element has four "coordinates" in the array, so obviously you need four dimensions to display them all.
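For readers who want to poke at this concretely, here is a minimal NumPy sketch (the shape and the entry set below are purely illustrative choices):

```python
import numpy as np

# A rank-4 tensor in three dimensions: a 3x3x3x3 array with 3**4 = 81 entries.
T = np.zeros((3, 3, 3, 3))

# Each entry is addressed by four "coordinates" T[a, b, c, d], so the natural
# picture is a 4-dimensional hypercube of numbers, not one big square matrix.
T[0, 1, 2, 0] = 5.0   # setting one of the 81 entries

assert T.ndim == 4 and T.size == 81
```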
 
(I won't be writing any summation sigmas in this post, because we always sum over those indices that appear twice, and only those. This is the Einstein summation convention).

If V is a vector space, you can define the dual space V* as the set of all linear functions from V into the real numbers. A tensor of type (n,m) is a multilinear (linear in all variables) function

T:\underbrace{V^*\times\cdots\times V^*}_{\mbox{n factors}}\times\underbrace{V\times\cdots\times V}_{\mbox{m factors}}\rightarrow\mathbb R

What you call a tensor of rank 4 is a tensor of type (0,4).

Given a basis \{\vec e_i\} of V, you can define a basis \{\tilde e^i\} of V* by

\footnotesize\tilde e^i(\vec e_j)=\delta^i_j

where the right-hand side is the Kronecker delta (i.e. it's =1 when i=j and zero otherwise). This basis is called the dual basis of \{\vec e_i\}.
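The dual-basis relation is easy to check numerically: if the columns of a matrix E hold the components of the basis vectors \vec e_j, then the rows of E^{-1} hold the components of the dual covectors \tilde e^i. A small NumPy sketch (the particular basis is just an illustrative choice):

```python
import numpy as np

# Columns of E are the components of an arbitrary (invertible) basis {e_j}:
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Row i of E^{-1} gives the components of the dual covector e~^i:
E_dual = np.linalg.inv(E)

# pairing[i, j] = e~^i(e_j), which should be the Kronecker delta:
pairing = E_dual @ E
assert np.allclose(pairing, np.eye(3))
```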

The set of all tensors of type (0,4) also has a natural vector space structure, and we can use any basis of V* to construct a basis for it. For example, the one constructed from \{\tilde e^i\} is \{\tilde e^i\otimes\tilde e^j\otimes\tilde e^k\otimes\tilde e^l\}. The \otimes symbol has a simple definition. I'll just give an example: If \tilde\alpha and \tilde\beta are members of V*, we have

\tilde\alpha\otimes\tilde\beta(\vec u,\vec v)=\tilde\alpha(\vec u)\tilde\beta(\vec v)

for all \vec u and \vec v in V.
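As a sanity check, this defining identity can be verified with component arrays; the particular values below are arbitrary illustrations:

```python
import numpy as np

# Components of two covectors alpha, beta (in the dual basis) and two
# vectors u, v (in {e_i}).  Values chosen purely for illustration.
alpha = np.array([1.0, 2.0, 3.0])
beta  = np.array([0.0, 1.0, -1.0])
u = np.array([2.0, 0.0, 1.0])
v = np.array([1.0, 1.0, 0.0])

# The tensor product has components (alpha (x) beta)_{ij} = alpha_i beta_j:
ab = np.outer(alpha, beta)

# Feeding in (u, v) contracts both slots, and the defining identity
# alpha(x)beta(u, v) = alpha(u) * beta(v) holds:
lhs = u @ ab @ v
rhs = (alpha @ u) * (beta @ v)
assert np.isclose(lhs, rhs)
```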

OK, here's the definition of your T with four indices: the T_{ijkl} are the components of T when we express it in a basis:

T=T_{ijkl}\tilde e^i\otimes\tilde e^j\otimes\tilde e^k\otimes\tilde e^l

It's easy to show that

T_{ijkl}=T(\vec e_i,\vec e_j,\vec e_k,\vec e_l)

Note that the tensor itself is completely independent of any basis. It's the components of the tensor that change when you decide to use another basis.
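Here is a small NumPy sketch of that last point for a (0,2) tensor: the components transform under a change of basis, but the number the tensor assigns to a pair of vectors does not. (The random seed and the dimension are illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A (0,2) tensor given by its components T_{ij} in a basis {e_i}:
T = rng.standard_normal((3, 3))
u = rng.standard_normal(3)   # components of vectors u, v in {e_i}
v = rng.standard_normal(3)

# Change of basis e'_j = A_{ij} e_i (columns of A = new basis vectors):
A = rng.standard_normal((3, 3))

# Components of a (0,2) tensor transform as T'_{ij} = A_{ki} A_{lj} T_{kl},
# i.e. T' = A^T T A, while vector components transform with A^{-1}:
T_new = A.T @ T @ A
u_new = np.linalg.solve(A, u)
v_new = np.linalg.solve(A, v)

# The number T(u, v) is basis-independent:
assert np.isclose(u @ T @ v, u_new @ T_new @ v_new)
```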

If you're wondering what any of this has to do with changing coordinate systems, the answer is that the vector space V is usually a tangent space of a manifold (there's one at each point), and you can use a coordinate system to construct a basis for the tangent space at any point where the coordinate system is defined.
 
Note that a rank 2 tensor is not necessarily a 3x3 matrix. For example, the Kronecker delta is represented by a 2x2 matrix, and the Minkowski metric is represented by a 4x4 matrix.

A rank 3 tensor could be represented by a matrix stretched out into a third dimension, i.e. a stack of matrices. See http://en.wikipedia.org/wiki/Levi-Civita_symbol for such an image.
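That three-dimensional stack can be built directly as a 3x3x3 array; a short NumPy sketch of the Levi-Civita symbol, together with its familiar cross-product contraction:

```python
import numpy as np

# The Levi-Civita symbol as a 3x3x3 array: +1 on even permutations of
# (0, 1, 2), -1 on odd permutations, 0 whenever two indices coincide.
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# It encodes the cross product: (a x b)_i = eps_{ijk} a_j b_k.
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
assert np.allclose(np.einsum('ijk,j,k->i', eps, a, b), np.cross(a, b))
```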
 
nicksauce said:
For example the Kronecker delta is represented by a 2x2 matrix,
Actually it's an n x n matrix, where n is the dimension of the manifold. :smile:
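In other words, in components the Kronecker delta is just the n x n identity matrix, for whatever dimension n the manifold happens to have; a quick check in NumPy:

```python
import numpy as np

# The Kronecker delta delta^i_j in n dimensions is the n x n identity
# matrix -- n is not fixed at 2, it is the dimension of the manifold:
for n in (2, 3, 4):
    delta = np.eye(n)
    assert delta.shape == (n, n)
    assert np.allclose(delta, delta.T)   # symmetric
    assert np.trace(delta) == n          # contraction delta^i_i = n
```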
 
