dx said:
Two vector spaces of the same type and dimension are clearly isomorphic, and you can set up a one-to-one correspondence between their elements. But there are many ways to do this, and depending on the context, we may find one way of doing it more useful and natural. In general relativity, for example, if you have a vector V, then you can get a corresponding 1-form g(_, V), where g is the metric tensor. The components of the 1-form g(_, V) are called the covariant components of the contravariant vector V. In component (or abstract index) notation, this is $V_{a} = g_{ba}V^{b}$.
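Just to fix notation for my questions below: if I've understood this index-lowering correctly, it amounts to the following numerical recipe (using the Minkowski metric purely as an illustrative assumption, and my own variable names):

```python
import numpy as np

# Assumed metric for illustration: Minkowski, signature (-,+,+,+)
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# Contravariant components V^b (made-up numbers)
V = np.array([2.0, 1.0, 0.0, 3.0])

# Covariant components: V_a = g_{ba} V^b (g is symmetric, so g_{ba} = g_{ab})
V_lower = g @ V

print(V_lower)  # [-2.  1.  0.  3.]
```

So only the time component flips sign in this particular metric, which is what I'd expect.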
Is the following right?
g_{ij} = \lbrace\alpha_{i}\beta_{j}\rbrace = \mathbf{g}\left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}} \otimes \mathbf{\tilde{\beta}} \left(\mathbf{\underline{a}},\mathbf{\underline{b}} \right) = \mathbf{\tilde{\alpha}}\left(\mathbf{\underline{a}}\right)\mathbf{\tilde{\beta}}\left(\mathbf{\underline{b}}\right)
Bold, underlined Roman letters stand for vectors, and bold Greek letters with tildes for 1-forms. By analogy with Bernard Schutz's example of a (2 0) tensor, can we say that the metric tensor is a linear, 1-form-valued function of vectors, a (0 2) tensor? Or does the metric tensor need to be described as a sum of tensor products between pairs of 1-forms, or is it what Schutz calls a "simple tensor"?
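To make the question concrete, here is how I currently picture the general expansion (my paraphrase of Schutz's basis-1-form notation, so the $\tilde{\boldsymbol{\omega}}^{i}$ symbols are my assumption, not a quote):

```latex
\mathbf{g} = g_{ij}\, \tilde{\boldsymbol{\omega}}^{i} \otimes \tilde{\boldsymbol{\omega}}^{j}
```

On this reading, a "simple tensor" would be the special case where the whole sum collapses to a single product $\mathbf{\tilde{\alpha}} \otimes \mathbf{\tilde{\beta}}$. Is that the right way to understand the distinction?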
Are the components of the "metric tensor" numbers? Are they the components of the 1-forms whose tensor product the metric tensor is (if it is a tensor product of 1-forms)? Can they be represented as a matrix, and can the operation of the metric tensor on two vectors be represented as matrix multiplication, e.g.
\mathbf{a}^{T} \; g \; \mathbf{b}
If so, how does this relate to the convention that vectors, in some sense, are "column vectors" and covectors are "row vectors"? When Schutz says "one is used to switching between vectors and 1-forms, associating a given vector with its 'conjugate' or 'transpose', which is a 1-form", does transpose mean the same thing as the transpose of a matrix (as in turning a column vector into a row vector)? Does he mean that conjugate and transpose are synonyms in this context? Or is he referring to two different methods of associating a given vector with the same 1-form, or to two different 1-forms that a given vector could be associated with (one its conjugate, one its transpose)? I'm guessing he means them synonymously, but at this stage I'm so confused that I don't want to take any chances!
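Here is the matrix picture I have in mind, as a numerical check (again assuming a Minkowski metric and made-up components, just for illustration):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])  # assumed metric, for illustration only
a = np.array([1.0, 2.0, 0.0, 1.0])  # contravariant components a^i
b = np.array([3.0, 1.0, 1.0, 0.0])  # contravariant components b^j

# Matrix form: row vector times matrix times column vector, a^T g b
matrix_form = a @ g @ b

# Index form: g_{ij} a^i b^j, summed explicitly
index_form = sum(g[i, j] * a[i] * b[j] for i in range(4) for j in range(4))

print(matrix_form, index_form)  # -1.0 -1.0
```

The two agree, which is what makes me suspect the matrix-multiplication reading of $\mathbf{a}^{T} \, g \, \mathbf{b}$ is the intended one.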
You say that "the components of the 1-form g(_, V) are called the covariant components of the contravariant vector V." In one of his examples, Schutz describes a (1 1) tensor - once contravariant, once covariant:
\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{v}} \right)
which he calls a "real number". But "for fixed omega," he says
\mathbf{T}\left(\mathbf{\tilde{\omega}};\mathbf{\underline{\quad}} \right)
is a 1-form, since it needs a vector argument to give a real number. What makes this a 1-form: is the rule that a tensor with one empty slot requiring a vector argument is a 1-form, and a tensor with one empty slot requiring a 1-form argument is a vector? Thus, in the simplest case, the argument of a vector is a 1-form, and vice versa. So the components of this 1-form would be the covariant components of some vector. You put this vector in the second slot: is this significant in the case of the metric tensor (I gather the order of arguments is significant with some tensors)? Is g(V, _) another 1-form? The same 1-form? Does it also give the covariant components of V? How would this be written in index notation: would we just swap the a and b subscripts on g in your example? Could this be written as a matrix multiplication, and what would that look like?
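Numerically, at least, the two slots seem to give the same answer when the metric's component matrix is symmetric; here is the check I did (same assumed Minkowski metric and made-up components as before):

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])  # assumed symmetric metric, for illustration
V = np.array([2.0, 1.0, 0.0, 3.0])

# Components of g(_, V): fill the second slot, i.e. g_{ab} V^b
slot2_filled = g @ V

# Components of g(V, _): fill the first slot, i.e. V^a g_{ab}
slot1_filled = V @ g

print(np.array_equal(slot1_filled, slot2_filled))  # True
```

Of course this only shows that the components coincide for a symmetric matrix; whether that is the right way to think about the order of the metric's arguments is exactly what I'm asking.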
Griffel seems to be suggesting that in one way of looking at the situation, there are two kinds of entities, one called a vector, the other a covector (linear functional, 1-form); and that in another way of looking at the situation (a way associated with physics), each vector can be associated with a certain 1-form. What would in the former conception be called simply "the components of the vector" are, in the physics conception (according to Griffel), called the "covariant components of the vector"; and this vector is associated with a 1-form, whose components (in the former conception) can be regarded (in the physics way) as the "contravariant components of the vector". This "physics way" matches the way I've seen the concept described everywhere else I've looked so far, EXCEPT that the "components of the vector" (when not otherwise specified more precisely) are everywhere else (Wikipedia, Schutz, Borisenko & Tarapov, Sharipov, ...) called its contravariant components; this in contrast to the "components of the 1-form", which are in other words called the "covariant components of the vector". Does Griffel's description of the physics way of dealing with tensors go against the usual convention, or have I misunderstood something? Are the statements in my attempted paraphrase of Griffel correct in substance, and have I got the terminology right?
At least everyone seems to agree that indices should be up on anything contravariant, and down on anything covariant, with the exception that Griffel puts them down on 1-forms, including basis 1-forms.