BobbyFluffyPric said:
I wonder if you can recommend some text that explains this in detail, although I suppose that as an engineering student I shall have little need to delve further into it. However, I am curious about this distinction between contravariant and covariant vectors. (I actually read in some maths text that although the terms "contravariant vector" and "covariant vector" were being used, strictly speaking these terms should be applied to the components of a vector, and thus I took it for granted that this was the case in all texts that referred to vectors as covariant or contravariant.) It's hard for me not to think of a vector, which is a geometrical entity that one can represent via an arrow (eg force, velocity etc), as being the same physical entity regardless of what type of components are used to describe it (ie regardless of the basis it is expressed in).
Yes, as I said, some books tell you that there is one vector and two "types" of components. The problem with this way of thinking is that it is okay at one level (i.e. you can do correct calculations if you follow the rules) but confusing at another. If you only need engineering applications, like computing tensors in flat space, it's fine; but if you are going to study things like differential forms, it's probably not okay.
Let me try to avoid talking about different vector spaces and just concentrate on a consistent definition of "components". For example, you just replied that
vector [is] ... the same physical entity regardless of what type of components are used to describe it (ie regardless of the basis it is expressed in).

Yes, I also prefer to view vectors primarily as physical entities ("directed magnitudes" or "directed velocities" or simply just "vectors"), defined regardless of any components or any basis. Components of a vector are secondary: they are defined with respect to a basis. Now, what is a basis? A basis is a set of vectors, i.e. a set of physical entities: a linearly independent set through which every other vector can be expressed as a linear combination. So there cannot be a "covariant" or a "contravariant" basis; there is only one kind of basis. A vector \mathbf{X} has components X^\mu in the basis \mathbf{E}_\mu if \mathbf{X}=\sum_\mu X^\mu \mathbf{E}_\mu, and that's it. The components X^\mu are the ones called "contravariant" in your terminology. These are the only kind of components one can define using a basis alone.
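To make this concrete, here is a small Python sketch (with hypothetical numbers, and Cartesian coordinates assumed only as bookkeeping). It finds the contravariant components of a vector in a deliberately non-orthogonal basis by solving \mathbf{X}=\sum_\mu X^\mu \mathbf{E}_\mu; note that no scalar product appears anywhere.

```python
# Contravariant components are just the expansion coefficients in
# X = X^1 * E1 + X^2 * E2. No metric is needed to define them.
# The basis and vector below are made-up example values.

E1 = (1.0, 0.0)   # a (deliberately non-orthogonal) basis of R^2
E2 = (1.0, 1.0)

X = (3.0, 2.0)    # the vector, written out in Cartesian numbers

# Solve X = a*E1 + b*E2 for (a, b) by Cramer's rule
det = E1[0] * E2[1] - E2[0] * E1[1]
a = (X[0] * E2[1] - E2[0] * X[1]) / det
b = (E1[0] * X[1] - X[0] * E1[1]) / det

print(a, b)  # the contravariant components X^1, X^2 -> 1.0 2.0
```

Here X = 1*E1 + 2*E2 = (1,0) + (2,2) = (3,2), so the contravariant components are (1, 2): pure linear algebra, no scalar product involved.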
There is no consistent way to define "covariant" components using just a basis, because there is no such thing as a "covariant" basis. Covariant components are instead defined using a metric (i.e. a scalar product) and a basis. (I use the words "metric" and "scalar product" interchangeably.) Here is the definition: if g(\mathbf{X},\mathbf{Y}) is the scalar product of vectors \mathbf{X} and \mathbf{Y}, then the covariant components of a vector \mathbf{X} are defined as X_\mu = g(\mathbf{X},\mathbf{E}_\mu). You see that these "covariant components" are quite another beast: they express the relationship between a vector, a basis, and a scalar product. No scalar product, no covariant components. I prefer to think of "covariant components" as something auxiliary that helps when you need lots of calculations with the scalar product. "Contravariant components", on the other hand, are honest components of a vector in a basis.
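The definition X_\mu = g(\mathbf{X},\mathbf{E}_\mu) can be sketched in the same made-up setup as before (Euclidean scalar product assumed, non-orthogonal basis):

```python
# Covariant components require a scalar product g in addition to a basis:
# X_mu = g(X, E_mu). Same example vector and basis as above (made-up values).

def g(u, v):
    """The 'metric': here, the standard Euclidean scalar product."""
    return sum(ui * vi for ui, vi in zip(u, v))

E = [(1.0, 0.0), (1.0, 1.0)]   # non-orthogonal basis
X = (3.0, 2.0)

X_cov = [g(X, E_mu) for E_mu in E]
print(X_cov)  # covariant components [3.0, 5.0]
```

Note that in this non-orthogonal basis the covariant components (3, 5) differ from the contravariant components (1, 2) of the very same vector: the difference is entirely due to the basis and the scalar product, not to the vector itself.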
In Euclidean space there is a standard scalar product g, and in an orthonormal basis the metric g is given by the unit matrix, g(\mathbf{E}_\mu, \mathbf{E}_\nu)=\delta_{\mu\nu}. Then the covariant components of any vector are numerically equal to its contravariant components. In books where the only conceivable case is Euclidean, the metric g is always present and always the same, so it is perhaps okay to say that every vector has a set of contravariant components and a set of covariant components.

But even then, if you choose a non-orthogonal basis (which you are certainly allowed to do), the metric g will not be given by the unit matrix, and the covariant components of vectors will not equal the contravariant components. Also, if you define a different metric, the covariant components will be completely different (because they depend on the metric as well). For instance, in special relativity the metric in the standard basis is g=diag(1,-1,-1,-1), and covariant components always have some signs flipped compared with contravariant components. This makes the "two kinds of components" picture more confusing than it's worth. Eventually one finds that these different components don't help with calculations, and one abandons components altogether. http://www.theorie.physik.uni-muenchen.de/~serge/T7/ is a book-in-progress on advanced general relativity that does all vector and tensor calculations without using any components at all. (I already posted this link in a different thread.)
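The special-relativity sign flip is easy to see numerically. In the standard basis, which is orthonormal with respect to the Minkowski metric, X_\mu = g(\mathbf{X},\mathbf{E}_\mu) reduces to multiplying each component by the corresponding diagonal entry of g (the component values below are arbitrary):

```python
# With the Minkowski metric g = diag(1, -1, -1, -1) in the standard basis,
# lowering an index just flips the signs of the spatial components.
# The components of X here are arbitrary example numbers.

eta = (1.0, -1.0, -1.0, -1.0)          # diagonal of the metric

X_contra = (2.0, 3.0, 4.0, 5.0)        # contravariant components X^mu
X_cov = tuple(eta[m] * X_contra[m] for m in range(4))

print(X_cov)  # covariant components X_mu: (2.0, -3.0, -4.0, -5.0)
```

Same vector, same basis, but a different metric than the Euclidean one, and the covariant components come out different: exactly the dependence on g described above.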
As for books, I don't really know what to suggest. You can look here:
http://www.cs.wisc.edu/~deboor/ma443.html
That page also lists some suggested books.