Covariant and Contravariant Components of a Vector

Summary: The discussion centers on the definitions and significance of contravariant and covariant components of vectors as presented in Borisenko's book on vector and tensor analysis. The contravariant components ##A^i## are the expansion coefficients with respect to the original basis, while the covariant components ##A_i## are the coefficients with respect to the reciprocal basis. The transformation properties of these components under coordinate changes highlight their distinct roles in linear algebra and geometry. The terminology used in the book is considered outdated, as modern treatments favor the terms vector and covector. Understanding these concepts is crucial for grasping how vectors and covectors interact in various mathematical contexts.
LawdyLawdy
I think this may be a simple yes or no question. I am currently reading the book Vector and Tensor Analysis by Borisenko. In it he introduces a reciprocal basis ##\vec{e}^{\,i}## (where ##i## is an index, not an exponent) for a basis ##\vec{e}_i## (##i = 1, 2, 3##) that need not be orthogonal or normalized, as long as its vectors are not coplanar. With this in mind he shows that, using the properties

##\vec{A} \cdot \vec{e}_i = A_i##
##\vec{A} \cdot \vec{e}^{\,i} = A^i##

a vector ##\vec{A}## may be written in the following ways:

##\vec{A} = A^1\,\vec{e}_1 + A^2\,\vec{e}_2 + A^3\,\vec{e}_3##
##\vec{A} = A_1\,\vec{e}^{\,1} + A_2\,\vec{e}^{\,2} + A_3\,\vec{e}^{\,3}##
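As a quick numerical sanity check of these formulas, here is a minimal numpy sketch (my own; the basis values are an arbitrary non-orthogonal example, not from the book):

```python
import numpy as np

# An arbitrary non-orthogonal, non-normalized (but non-coplanar) basis
# for R^3; row i is the basis vector e_i.
e = np.array([[1.0, 0.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

# Reciprocal basis: defined by e^i . e_j = delta^i_j, which makes its
# rows the rows of the inverse transpose of the basis matrix.
e_rec = np.linalg.inv(e).T

A = np.array([2.0, -1.0, 4.0])

A_cov = e @ A        # covariant components:     A_i = A . e_i
A_con = e_rec @ A    # contravariant components: A^i = A . e^i

# Both expansions reconstruct the same vector A.
assert np.allclose(A_con @ e, A)       # A = sum_i A^i e_i
assert np.allclose(A_cov @ e_rec, A)   # A = sum_i A_i e^i
```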

I understand this so far. However, he then calls the components ##A^i## the contravariant components and ##A_i## the covariant components without giving any particular reason why each name goes with each set. What significance does one have over the other that determines which is which?

Are the contravariant components just defined as the projections of ##\vec{A}## onto the reciprocal basis? In that case, does the decision as to which components are contravariant and which are covariant depend on which basis was the "original" one, in this case ##\vec{e}_i##?

From my searching through the threads on this site, I understand that contravariant and covariant vectors go into some rather different territory involving manifolds and how vectors behave under certain transformations. Are those applications using the same meaning that the book is using?

Thanks for the help.
 
Did the book talk about the dual space of a vector space ##V## (i.e., the linear maps ##V \rightarrow \mathbb{R}##)? I certainly hope so. Otherwise you need to buy a better book.
 
First off, awesome user name. Second, "contravariant and covariant vectors" is horribly outdated terminology, but for some reason the old tensor analysis books keep using it.

These objects are first seen in linear algebra. Since your case of interest seems to be finite-dimensional real vector spaces, let ##V## be a finite-dimensional vector space over ##\mathbb{R}## and let ##\left\{e_i\right\}_i## be a basis for ##V##. We know from elementary linear algebra that, by the definition of a basis, any ##v \in V## can be written as a unique linear combination of these basis vectors, i.e. ##v = \sum_i v^i e_i##. We also know that there exists a dual space ##V^*##, which is the set of all linear maps ##l : V \rightarrow \mathbb{R}## (also called linear functionals). There is a natural basis for ##V^*##, called the dual basis: the set of linear functionals ##\left\{e^i\right\}_i## on ##V## defined by ##e^i(e_j) = \delta^i_j## (here ##\delta^i_j## is the Kronecker delta). We can of course write any ##l \in V^*## as a unique linear combination of the dual basis vectors, ##l = \sum_i l_i e^i##. So we see that ##l(v) = \sum_{i,j} l_i v^j e^i(e_j) = \sum_{i,j} l_i v^j \delta^i_j = \sum_i l_i v^i##. In this context ##l## is often called a covector.
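To make the pairing concrete, here is a small numpy sketch (my own illustration, not from the post above): identify ##V = \mathbb{R}^3## with column vectors and ##V^*## with row vectors, so that ##l(v) = \sum_i l_i v^i## is just a matrix product.

```python
import numpy as np

# In the standard basis of R^3, the dual basis vectors e^i are the rows
# of the identity matrix, so e^i(e_j) = delta^i_j automatically.
E = np.eye(3)          # rows are e_1, e_2, e_3
E_dual = np.eye(3)     # rows are e^1, e^2, e^3
assert np.allclose(E_dual @ E.T, np.eye(3))   # e^i(e_j) = delta^i_j

v = np.array([[1.0], [2.0], [3.0]])   # v = sum_j v^j e_j, a column
l = np.array([[4.0, 5.0, 6.0]])       # l = sum_i l_i e^i, a row

# The pairing l(v) = sum_i l_i v^i is a (1x3)(3x1) product.
print((l @ v).item())   # 4*1 + 5*2 + 6*3 = 32.0
```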

Let's jump ahead to geometry. Say we have coordinates ##(x^1, x^2, x^3)## defined on some open subset ##U \subseteq \mathbb{R}^3## and we make a coordinate transformation ##x^i \rightarrow x'^i##. Under this transformation, the components ##l_i## of a covector ##l## with respect to some coordinate basis (like the usual standard basis on Euclidean space) transform as ##l'_i = \sum_j \frac{\partial x^j}{\partial x'^i} l_j##, whereas the components ##v^i## of a vector ##v## with respect to that basis transform as ##v'^i = \sum_j \frac{\partial x'^i}{\partial x^j} v^j## (I'm being VERY handwavy here because I haven't mentioned tangent spaces etc., but for Euclidean space it doesn't really matter). Your book uses these transformation properties to call ##l## a "covariant" vector and ##v## a "contravariant" vector. These are VERY outdated terminologies from the old days of classical tensor analysis; the modern notions of covector and vector stem from the linear algebra formulation of these objects.
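Here is a sketch of those transformation rules in numpy (my own; I assume a linear change of coordinates ##x' = Mx## so the Jacobian is a constant matrix), checking that the scalar ##\sum_i l_i v^i## is coordinate-independent:

```python
import numpy as np

# Linear coordinate change x'^i = sum_j M[i, j] x^j, so the Jacobian
# dx'^i/dx^j = M[i, j] and dx^j/dx'^i = inv(M)[j, i] are constant.
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])
Minv = np.linalg.inv(M)

v = np.array([1.0, -2.0, 3.0])   # contravariant components v^j
l = np.array([4.0, 0.0, -1.0])   # covariant components l_j

v_new = M @ v         # v'^i = sum_j (dx'^i/dx^j) v^j
l_new = Minv.T @ l    # l'_i = sum_j (dx^j/dx'^i) l_j

# The pairing l(v) = sum_i l_i v^i is invariant under the change.
assert np.isclose(l @ v, l_new @ v_new)
```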
 
LawdyLawdy said:
I understand this so far. However, he then calls the components ##A^i## the contravariant components and ##A_i## the covariant components without giving any particular reason why each name goes with each set. What significance does one have over the other that determines which is which?

Are the contravariant components just defined as the projections of ##\vec{A}## onto the reciprocal basis? In that case, does the decision as to which components are contravariant and which are covariant depend on which basis was the "original" one, in this case ##\vec{e}_i##?
You're right. The answer to this part is simply "yes". The contravariant components of ##\vec{A}## with respect to one basis would be the covariant components of ##\vec{A}## with respect to the reciprocal basis.

I have never seen this approach before, including the definition of the reciprocal basis (I did take a quick look inside that book). The standard definitions go like this: Let ##V## be a finite-dimensional vector space over ##\mathbb{R}##. Let ##V^*## be the set of linear maps from ##V## into ##\mathbb{R}##. For each ##f, g \in V^*##, define ##f+g## by ##(f+g)(v) = f(v) + g(v)## for all ##v \in V##. For each ##f \in V^*## and each ##a \in \mathbb{R}##, define ##af## by ##(af)(v) = a\,f(v)## for all ##v \in V##. These definitions turn ##V^*## into a vector space over ##\mathbb{R}##; it's called the dual space of ##V##. For each basis ##\{e_i\}## of ##V##, we define its dual basis ##\{e^i\}## by ##e^i(e_j) = \delta^i_j## for all ##i, j##.
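These definitions translate almost directly into code. Here is a toy sketch (my own, with made-up helper names) that models covectors as Python callables and checks the vector space operations on ##V^*##:

```python
import numpy as np

def covector(coeffs):
    """Return the linear functional v |-> sum_i coeffs[i] * v[i]."""
    c = np.asarray(coeffs, dtype=float)
    return lambda v: float(c @ np.asarray(v, dtype=float))

# Pointwise operations, exactly as in the definitions above:
# (f + g)(v) = f(v) + g(v) and (a f)(v) = a * f(v).
def add(f, g):
    return lambda v: f(v) + g(v)

def scale(a, f):
    return lambda v: a * f(v)

f = covector([1, 0, 2])
g = covector([0, 3, -1])
v = [1.0, 1.0, 1.0]
assert add(f, g)(v) == f(v) + g(v)
assert scale(2.0, f)(v) == 2.0 * f(v)

# The dual basis: e^i is the covector built from the i-th standard row,
# so e^i(e_j) = delta^i_j.
e_dual = [covector(row) for row in np.eye(3)]
assert e_dual[0]([1, 0, 0]) == 1.0
assert e_dual[0]([0, 1, 0]) == 0.0
```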

Edit: ...but I see that WannabeNewton has already said this.
 
In linear algebra, "vector" and "covector" are just naming conventions; which is which depends on which space came first.

But in a more general setting, some things are more naturally vectors than covectors. If you have a particle moving through space, the velocity, which is tangent to the curve drawn by the particle, is more naturally a vector.

In contrast, covectors are naturally associated with scalar functions, i.e., things which assign a number to every point of the space; the differential of such a function is the prototypical covector.
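To put numbers on this, here is a sketch (my own, using Cartesian and polar coordinates on the plane) showing that velocity components transform contravariantly while the components of a function's differential transform covariantly, with the directional derivative coming out coordinate-independent:

```python
import numpy as np

# A point in the plane, in Cartesian (x, y) and polar (r, theta) coordinates.
x, y = 3.0, 4.0
r = np.hypot(x, y)

# Jacobian d(r, theta)/d(x, y) at this point.
J = np.array([[x / r,      y / r],
              [-y / r**2,  x / r**2]])

# A velocity (tangent) vector: components transform with J (contravariant).
v_cart = np.array([1.0, 2.0])   # (dx/dt, dy/dt)
v_polar = J @ v_cart            # (dr/dt, dtheta/dt)

# The differential of the scalar f(x, y) = x*y: its components
# (df/dx, df/dy) transform with the inverse transpose of J (covariant).
df_cart = np.array([y, x])
df_polar = np.linalg.inv(J).T @ df_cart

# The directional derivative df(v) is the same in both coordinate systems.
assert np.isclose(df_cart @ v_cart, df_polar @ v_polar)
```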

There are also objects called "forms", which live on the covector side. These are the most natural things to integrate along a path, and they can provide a notion of "area" even when there is no metric.

In these settings, linear algebra still holds good, except that one has different vector (tangent) and covector (cotangent) spaces at every point.
 
