How Are Tensors Represented and Why Are Their Notations Significant?

• BWV
In summary, a contravariant vector is a column vector, and a covariant vector is a row vector. Mixed (1,1)-tensors can be thought of as square matrices, and the metric tensor is represented as a covariant tensor.
BWV
Trying to teach myself basic tensor algebra and confused by some of the notation. Either a (1,1) mixed tensor or a covariant or contravariant tensor of rank two can be represented by a square matrix, correct?

Is it just a convention that the Kronecker delta is represented as a mixed tensor while the metric tensor (which in relativity is a signed identity matrix) is represented as a covariant tensor?

Relative to doing this math with ordinary linear algebra, is the advantage of tensor notation and the concepts of covariance and contravariance that the indices carry the specific transformation properties, rather than writing out all the dx's in matrix form?

BWV said:
Trying to teach myself basic tensor algebra and confused by some of the notation. Either a (1,1) mixed tensor or a covariant or contravariant tensor of rank two can be represented by a square matrix, correct?

You can think of a contravariant vector as being a column vector, and a covariant vector is a row vector. Hence, upper indices number rows, and lower indices number columns. Therefore, a mixed (1,1)-tensor can be thought of quite literally as a square matrix.

A (0,2)-tensor should technically be thought of as a row vector whose elements are themselves row vectors; however, such a tensor is usually exhibited as a square array of numbers.

But it is important to remember the distinction, because it is only mixed (1,1)-tensors that behave precisely like square matrices.
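The distinction above can be checked numerically. Here is a minimal NumPy sketch (the transformation rules are standard; the variable names are mine): under a change of basis $\mathbf A$, a contravariant vector transforms as $\mathbf A \mathbf v$, a mixed (1,1)-tensor by matrix conjugation $\mathbf A \mathbf T \mathbf A^{-1}$, and a (0,2)-tensor picks up an inverse on both slots. Only the (1,1) rule is ordinary matrix conjugation, which is why only mixed tensors behave exactly like square matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))          # invertible change-of-basis matrix
A_inv = np.linalg.inv(A)

v = rng.normal(size=3)               # contravariant vector (column)
T = rng.normal(size=(3, 3))          # mixed (1,1)-tensor
g = rng.normal(size=(3, 3))          # (0,2)-tensor

v_new = A @ v
T_new = A @ T @ A_inv                # (1,1): transforms by matrix conjugation
g_new = A_inv.T @ g @ A_inv          # (0,2): both indices pick up an inverse

# T v is again a contravariant vector, so it must transform as A (T v):
assert np.allclose(T_new @ v_new, A @ (T @ v))

# g(v, v) is a scalar, so it must be invariant under the change of basis:
assert np.allclose(v_new @ g_new @ v_new, v @ g @ v)
```

If you tried to transform the (0,2)-tensor like a matrix ($\mathbf A \mathbf g \mathbf A^{-1}$), the second assertion would fail.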

Is it just a convention that the Kronecker delta is represented as a mixed tensor while the metric tensor (which in relativity is a signed identity matrix) is represented as a covariant tensor?

The Kronecker delta is sometimes written with two lower or two upper indices, in which case it means a two-index object with 1's on the diagonal and 0 elsewhere. However, this notation is, strictly speaking, incorrect. The Kronecker delta should always have one upper and one lower index. Consider what happens if you use the metric to raise or lower an index:

$$g_{ac}\delta^c{}_b = g_{ab} \neq \delta_{ab}$$

and so the latter notation (delta with two lower indices) is a bit misleading, as it is inconsistent with what we usually mean when we write a symbol with its indices moved around:

$$A_{\mu}{}^{\nu\lambda} = g_{\mu\sigma} A^{\sigma\nu\lambda}$$
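The index-lowering computation above is easy to verify directly. A short sketch (my own illustration, using the Minkowski metric $\mathrm{diag}(-1,1,1,1)$): lowering an index of the Kronecker delta with the metric yields the metric itself, not a two-lower-index "delta" with 1's on the diagonal.

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_{ac}, the Minkowski metric
delta = np.eye(4)                     # delta^c_b, the Kronecker delta

# g_{ac} delta^c_b = g_{ab}: contract the upper index of delta with g
lowered = np.einsum('ac,cb->ab', g, delta)

assert np.allclose(lowered, g)               # equals the metric ...
assert not np.allclose(lowered, np.eye(4))   # ... not diag(1, 1, 1, 1)
```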

Relative to doing this math with linear algebra, the advantage of tensor notation and the concepts of covariance and contravariance is to give information as to the specific transformation properties rather than writing out all the dx's in matrix form?

No, physicists always give confusing explanations of this.

Contravariant vectors are column vectors. Covariant vectors are row vectors. That is really all it means.

So let's write it in matrix notation. Suppose a column vector $\mathbf v^{cont}$ transforms as

$$\mathbf v^{cont} \rightarrow \mathbf A \mathbf v^{cont}$$

Now, define $\mathbf v_{cov}$ to be a row vector such that

$$\mathbf v_{cov} \mathbf v^{cont} = |v|^2$$

The squared length of v must be invariant under any transformation, and therefore to be consistent, we must have

$$\mathbf v_{cov} \rightarrow \mathbf v_{cov} \mathbf A^{-1}$$

because this gives

$$\mathbf v_{cov} \mathbf v^{cont} \rightarrow \mathbf v_{cov} \mathbf A^{-1} \mathbf A \mathbf v^{cont} = \mathbf v_{cov} \mathbf v^{cont} = |v|^2$$

So, the covariant vector corresponding to v must transform using the inverse transformation matrix.

The metric, in turn, gives a way to define $\mathbf v_{cov}$:

$$|v|^2 = (\mathbf v^{cont})^T \mathbf G \mathbf v^{cont} \implies \mathbf v_{cov} = (\mathbf v^{cont})^T \mathbf G$$
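The whole derivation can be run as a quick numerical check. In this sketch (my own, with illustrative numbers), $\mathbf G$ is the Minkowski metric and $\mathbf A$ is an arbitrary invertible matrix; a genuine Lorentz transformation would additionally preserve $\mathbf G$ itself, but invariance of $\mathbf v_{cov}\mathbf v^{cont}$ only needs the inverse-matrix rule derived above.

```python
import numpy as np

G = np.diag([-1.0, 1.0, 1.0, 1.0])    # the metric
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))           # arbitrary invertible transformation

v_cont = np.array([2.0, 1.0, 0.0, 3.0])
v_cov = v_cont @ G                    # row vector (v^cont)^T G

length_sq = v_cov @ v_cont            # |v|^2 = v^T G v

v_cont_new = A @ v_cont               # contravariant: A on the left
v_cov_new = v_cov @ np.linalg.inv(A)  # covariant: A^{-1} on the right

# The A^{-1} and A cancel between the two factors, so |v|^2 is invariant:
assert np.allclose(v_cov_new @ v_cont_new, length_sq)
```

Transforming $\mathbf v_{cov}$ with $\mathbf A$ instead of $\mathbf A^{-1}$ would break the assertion for a generic $\mathbf A$, which is exactly the point of the derivation.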

1. What is a rudimentary tensor?

A rudimentary tensor is a mathematical object that represents the linear relationship between two vector spaces. It is a generalization of a scalar (which represents a single value) and a vector (which represents both magnitude and direction).

2. How is a rudimentary tensor different from a regular tensor?

A rudimentary tensor is a tensor of low rank (one or two indices), while tensors in general can carry any number of indices. This means that rudimentary tensors represent linear relationships between a small number of vector spaces, while higher-rank tensors can represent much more complex multilinear relationships.

3. What are some real-world applications of rudimentary tensors?

Rudimentary tensors are commonly used in physics and engineering to model physical systems with linear relationships. Some examples include stress and strain tensors in materials science, and electromagnetic field tensors in electromagnetism.

4. How are rudimentary tensors used in machine learning?

Rudimentary tensors are used in machine learning to represent data and relationships between features. They are particularly useful for supervised learning tasks such as regression and classification, where the goal is to predict a continuous or categorical output based on a set of input features.

5. Are there any limitations to using rudimentary tensors?

One limitation of rudimentary tensors is that they can only represent linear relationships, which may not accurately model more complex systems. In addition, they only relate a small number of vector spaces, so they may not be suitable for problems involving many interacting variables. In these cases, higher-rank tensors may be more appropriate.
