How Are Tensors Represented and Why Are Their Notations Significant?

  • Context: Graduate
  • Thread starter: BWV
  • Tags: Tensor
SUMMARY

This discussion clarifies the representation and notation of tensors, focusing on mixed, covariant, and contravariant tensors of rank two. A mixed (1,1)-tensor can be represented as a square matrix, while a (0,2)-tensor is typically shown as a square array of numbers. The Kronecker delta should always have one upper and one lower index, as using two lower indices is misleading. The transformation properties of tensors are crucial: contravariant vectors are represented as column vectors and covariant vectors as row vectors, which underscores the importance of matrix notation in tensor algebra.

PREREQUISITES
  • Understanding of tensor algebra concepts
  • Familiarity with linear algebra and matrix notation
  • Knowledge of covariant and contravariant transformations
  • Basic grasp of the Kronecker delta and metric tensors
NEXT STEPS
  • Study the properties of mixed tensors in detail
  • Learn about the implications of the metric tensor in relativity
  • Explore the transformation rules for contravariant and covariant vectors
  • Investigate advanced tensor notation and its applications in physics
USEFUL FOR

Students and professionals in mathematics, physics, and engineering who are looking to deepen their understanding of tensor algebra and its applications in various fields, including relativity and advanced mechanics.

BWV
Trying to teach myself basic tensor algebra and I'm confused by some of the notation. Either a (1,1) mixed tensor or a covariant or contravariant tensor of rank two can be represented by a square matrix, correct?

Is it just a convention that the Kronecker delta is represented as a mixed tensor while the metric tensor (which in relativity is a signed identity matrix) is represented as a covariant tensor?

Relative to doing this math with plain linear algebra, is the advantage of tensor notation and the concepts of covariance and contravariance that they convey the specific transformation properties, rather than writing out all the dx's in matrix form?
 
BWV said:
Trying to teach myself basic tensor algebra and I'm confused by some of the notation. Either a (1,1) mixed tensor or a covariant or contravariant tensor of rank two can be represented by a square matrix, correct?

You can think of a contravariant vector as a column vector, and a covariant vector as a row vector. Hence, upper indices number rows, and lower indices number columns. Therefore, a mixed (1,1)-tensor can be thought of quite literally as a square matrix.

A (0,2)-tensor should technically be thought of as a row vector whose elements are themselves row vectors; however, such a tensor is usually exhibited as a square array of numbers.

But it is important to remember the distinction, because it is only mixed (1,1)-tensors that behave precisely like square matrices.
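To make the distinction concrete, here is a minimal numpy sketch; the matrices and vectors are illustrative numbers of my own, not anything from the thread:

```python
import numpy as np

# A (1,1)-tensor T^a_b acts on a contravariant (column) vector
# exactly like a matrix: upper index = rows, lower index = columns.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 2.0])      # contravariant components v^b

w = T @ v                     # w^a = T^a_b v^b, another contravariant vector

# A (0,2)-tensor g_ab instead takes TWO contravariant vectors and
# returns a number (a bilinear form), not a new vector.
g = np.array([[1.0, 0.0],
              [0.0, 1.0]])
u = np.array([3.0, 4.0])

s = u @ g @ v                 # scalar g_ab u^a v^b
print(w)                      # [4. 6.]
print(s)                      # 11.0
```

Both objects are stored as 2x2 arrays, but only the (1,1)-tensor composes with other matrices the way matrix multiplication does; the (0,2)-tensor needs two vectors to produce a scalar.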

Is it just a convention that the Kronecker delta is represented as a mixed tensor while the metric tensor (which in relativity is a signed identity matrix) is represented as a covariant tensor?

The Kronecker delta is sometimes written with two lower or two upper indices, in which case it means a two-index object with 1's on the diagonal and 0 elsewhere. However, this notation is, strictly speaking, incorrect. The Kronecker delta should always have one upper and one lower index. Consider what happens if you use the metric to raise or lower an index:

$$g_{ac}\,\delta^c{}_b = g_{ab} \neq \delta_{ab}$$

and so the latter notation (delta with two lower indices) is a bit misleading, as it is inconsistent with what we usually mean when we write a symbol with its indices moved around:

$$A_{\mu}{}^{\nu\lambda} = g_{\mu\sigma} A^{\sigma\nu\lambda}$$
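This index bookkeeping can be checked numerically. A small numpy sketch, using the Minkowski metric as an illustrative choice of metric:

```python
import numpy as np

# Minkowski metric, signature (-,+,+,+) assumed here for illustration.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])    # g_ab
delta = np.eye(4)                        # Kronecker delta^c_b

# Lowering the upper index: g_ac delta^c_b = g_ab,
# which is the metric itself, not an identity array.
lowered = np.einsum('ac,cb->ab', eta, delta)

print(np.array_equal(lowered, eta))      # True: we recover the metric
print(np.array_equal(lowered, delta))    # False: "delta with two lower
                                         # indices" would be inconsistent
```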

Relative to doing this math with plain linear algebra, is the advantage of tensor notation and the concepts of covariance and contravariance that they convey the specific transformation properties, rather than writing out all the dx's in matrix form?

No, physicists always give confusing explanations of this.

Contravariant vectors are column vectors. Covariant vectors are row vectors. That is really all it means.

So let's write it in matrix notation. Suppose a column vector $\mathbf v^{cont}$ transforms as

$$\mathbf v^{cont} \rightarrow \mathbf A \mathbf v^{cont}$$

Now, define $\mathbf v_{cov}$ to be a row vector such that

$$\mathbf v_{cov} \mathbf v^{cont} = |v|^2$$

The squared length of v must be invariant under any transformation, and therefore to be consistent, we must have

$$\mathbf v_{cov} \rightarrow \mathbf v_{cov} \mathbf A^{-1}$$

because this gives

$$\mathbf v_{cov} \mathbf v^{cont} \rightarrow \mathbf v_{cov} \mathbf A^{-1} \mathbf A \mathbf v^{cont} = \mathbf v_{cov} \mathbf v^{cont} = |v|^2$$

So, the covariant vector corresponding to v must transform using the inverse transformation matrix.

The metric, in turn, gives a way to define $\mathbf v_{cov}$:

$$|v|^2 = (\mathbf v^{cont})^T \mathbf G \mathbf v^{cont} \implies \mathbf v_{cov} = (\mathbf v^{cont})^T \mathbf G$$
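The whole argument can be verified numerically. A minimal sketch in 1+1 dimensions, where the metric, the boost velocity, and the components of $\mathbf v$ are illustrative choices:

```python
import numpy as np

# Metric G = eta = diag(-1, 1) and a Lorentz boost Lambda;
# beta and the components of v are arbitrary illustrative values.
eta = np.diag([-1.0, 1.0])
beta = 0.6
gamma = 1.0 / np.sqrt(1.0 - beta**2)
L = np.array([[gamma,        gamma * beta],
              [gamma * beta, gamma       ]])

v = np.array([2.0, 1.0])          # contravariant (column) components
v_cov = v @ eta                   # covariant (row) components: v^T G

s_before = v_cov @ v              # |v|^2 = -t^2 + x^2

# Contravariant transforms with Lambda, covariant with Lambda^{-1}:
v_new = L @ v
v_cov_new = v_cov @ np.linalg.inv(L)

s_after = v_cov_new @ v_new
print(np.isclose(s_before, s_after))        # True: |v|^2 is invariant
print(np.allclose(v_cov_new, v_new @ eta))  # True: matches the v^T G rule
```

The second check works only because a Lorentz boost preserves $\eta$ (that is, $\mathbf \Lambda^T \boldsymbol\eta \mathbf \Lambda = \boldsymbol\eta$); for a general invertible $\mathbf A$, the components of the metric would transform as well.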
 
Thanks, this is very helpful
 