BWV said:
Trying to teach myself basic tensor algebra and confused by some of the notation. Either a [1,1] mixed tensor or a covariant or contravariant tensor of rank two can be represented by a square matrix, correct?
You can think of a contravariant vector as being a column vector, and a covariant vector as being a row vector. Hence, upper indices number rows, and lower indices number columns. Therefore, a mixed (1,1)-tensor can be thought of quite literally as a square matrix.
A (0,2)-tensor should technically be thought of as a row vector whose elements are themselves row vectors; however, such a tensor is usually exhibited as a square array of numbers.
But it is important to remember the distinction, because it is only mixed (1,1)-tensors that behave precisely like square matrices.
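To make the distinction concrete, here is a minimal numpy sketch (the arrays T, B, v, w are my own example values, not anything standard): a (1,1)-tensor maps column vectors to column vectors by ordinary matrix multiplication, while a (0,2)-tensor, though stored as the same kind of square array, acts as a bilinear form.

```python
import numpy as np

# A (1,1)-tensor: the upper index numbers rows, the lower index columns.
# It acts on a column vector by ordinary matrix multiplication.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
v = np.array([1.0, 2.0])   # a contravariant (column) vector
print(T @ v)               # -> [4. 6.], another column vector

# A (0,2)-tensor is stored as the same kind of square array, but it
# acts as a bilinear form: it takes TWO column vectors to a scalar.
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])
w = np.array([3.0, 1.0])
print(v @ B @ w)           # -> 5.0, a number, not a vector
```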
BWV said:
Is it just a convention that the Kronecker delta is represented as a mixed tensor while the metric tensor (which in relativity is a signed identity matrix) is represented as a covariant tensor?
The Kronecker delta is sometimes written with two lower or two upper indices, in which case it means a two-index object with 1's on the diagonal and 0 elsewhere. However, this notation is, strictly speaking, incorrect. The Kronecker delta should always have one upper and one lower index. Consider what happens if you use the metric to lower an index:
g_{ac}\delta^c{}_b = g_{ab} \neq \delta_{ab}
and so the latter notation (delta with two lower indices) is a bit misleading, as it is inconsistent with what we usually mean when we write a symbol with its indices moved around:
A_{\mu}{}^{\nu\lambda} = g_{\mu\sigma} A^{\sigma\nu\lambda}
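You can check this numerically. A small sketch, taking the Minkowski metric with signature (-, +, +, +) as my example of a metric that is not the identity:

```python
import numpy as np

# Example metric g_{ab}: Minkowski with signature (-, +, +, +).
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# The (1,1) Kronecker delta is the identity array.
delta = np.eye(4)

# Lowering the upper index: g_{ac} delta^c_b = g_{ab} ...
lowered = g @ delta
print(np.array_equal(lowered, g))      # True: you get the metric back
print(np.array_equal(lowered, delta))  # False: NOT the identity array
```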
BWV said:
So, relative to doing this math with plain linear algebra, the advantage of tensor notation and the concepts of covariance and contravariance is that the indices carry the specific transformation properties, rather than writing out all the dx's in matrix form?
No, physicists always give confusing explanations of this.
Contravariant vectors are column vectors. Covariant vectors are row vectors. That is really all it means.
So let's write it in matrix notation. Suppose a column vector \mathbf v^{cont} transforms as
\mathbf v^{cont} \rightarrow \mathbf A \mathbf v^{cont}
Now, define \mathbf v_{cov} to be a row vector such that
\mathbf v_{cov} \mathbf v^{cont} = |v|^2
The squared length of v must be invariant under any transformation, and therefore to be consistent, we must have
\mathbf v_{cov} \rightarrow \mathbf v_{cov} \mathbf A^{-1}
because this gives
\mathbf v_{cov} \mathbf v^{cont} \rightarrow \mathbf v_{cov} \mathbf A^{-1} \mathbf A \mathbf v^{cont} = \mathbf v_{cov} \mathbf v^{cont} = |v|^2
So, the covariant vector corresponding to v must transform using the inverse transformation matrix.
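As a quick sanity check in numpy (my own example numbers, and taking \mathbf v_{cov} = (\mathbf v^{cont})^T for now, i.e. an implicit identity metric):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])        # any invertible transformation

v_cont = np.array([1.0, 3.0])    # column vector
v_cov = v_cont.copy()            # row vector with v_cov @ v_cont = |v|^2
length_sq = v_cov @ v_cont       # 10.0

# Column vectors transform with A, row vectors with A^{-1} on the right:
v_cont_new = A @ v_cont
v_cov_new = v_cov @ np.linalg.inv(A)

print(np.isclose(v_cov_new @ v_cont_new, length_sq))  # True: invariant
```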
The metric, in turn, gives a way to define \mathbf v_{cov}:

|v|^2 = (\mathbf v^{cont})^T \mathbf G \mathbf v^{cont} \implies \mathbf v_{cov} = (\mathbf v^{cont})^T \mathbf G
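And the same thing with a non-trivial metric, again with my own illustrative numbers:

```python
import numpy as np

G = np.diag([-1.0, 1.0])      # an example metric
v_cont = np.array([2.0, 3.0])

# The metric produces the covariant (row) vector:
v_cov = v_cont @ G            # (v^cont)^T G  ->  [-2.  3.]

# Contracting it with v_cont gives the squared length:
print(v_cov @ v_cont)         # -4 + 9 = 5.0
```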