Covariant and Contravariant Components of a Vector

In summary, the thread discusses contravariant and covariant components of vectors and covectors in linear algebra and differential geometry. The book introduces a reciprocal basis for a basis that need not be orthogonal or normalized, and shows how the two kinds of components transform under coordinate transformations. Which components are called contravariant and which covariant depends on which basis is taken as the original. The thread also covers the outdated terminology of "contravariant and covariant vectors", explains the modern notion of covectors, and touches on forms, which are associated with scalar functions, can be integrated along a path, and provide a notion of "area" even in the absence of a metric.
  • #1
LawdyLawdy
I think this may be a simple yes or no question. I am currently reading the book Vector and Tensor Analysis by Borisenko. In it he introduces a reciprocal basis [itex]\vec{e_{i}}[/itex] (where i=1,2,3) for a basis [itex]\vec{e^{i}}[/itex] (where i is an index, not an exponent) that need not be orthogonal or normalized (though, being a basis, its vectors are necessarily non-coplanar). With this in mind he shows that a vector [itex]\vec{A}[/itex], using the properties:

[itex]\vec{A} \cdot \vec{e_{i}} = A_{i}[/itex]
[itex]\vec{A} \cdot \vec{e^{i}} = A^{i}[/itex]

may be written in the following ways...

[itex]\vec{A} = A^{1} \vec{e_{1}}+A^{2}\vec{e_{2}}+A^{3}\vec{e_{3}}[/itex]
[itex]\vec{A}=A_{1}\vec{e^{1}}+A_{2}\vec{e^{2}}+A_{3}\vec{e^{3}}[/itex]
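These two expansions can be sanity-checked numerically. The NumPy sketch below uses an arbitrary non-orthogonal basis and vector (illustrative choices, not the book's example), builds the reciprocal basis, and verifies both reconstructions:

```python
import numpy as np

# Illustrative non-orthogonal basis: columns of E are the basis vectors e_1, e_2, e_3.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

# Reciprocal basis: rows of inv(E) are e^1, e^2, e^3, since e^i . e_j = delta^i_j.
R = np.linalg.inv(E)

A = np.array([2.0, -1.0, 3.0])   # an arbitrary vector

A_contra = R @ A     # contravariant components A^i = A . e^i
A_co = E.T @ A       # covariant components     A_i = A . e_i

# Both expansions reconstruct the same vector A:
assert np.allclose(E @ A_contra, A)   # A = sum_i A^i e_i   (e_i are columns of E)
assert np.allclose(R.T @ A_co, A)     # A = sum_i A_i e^i   (e^i are rows of R)
```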

I understand this so far; however, he then seems to call the components [itex]A^{i}[/itex] the contravariant components and [itex]A_{i}[/itex] the covariant components without giving any particular reason as to which is which. What significance does one have over the other that singles out which is which?

Are the contravariant components just defined as [itex]\vec{A}[/itex] projected onto the reciprocal basis? In that case, does the decision as to which are the contravariant components and which are the covariant components depend on which basis was the "original" (in this case [itex]\vec{e^{i}}[/itex])?

From my searching through the threads on this site, I understand contravariant and covariant vectors go into some rather different territory regarding manifolds and how vectors behave under certain transformations. Are these applications using the same meaning that the book is using?

Thanks for the help.
 
  • #2
Did the book talk about the dual vector space of a space ##V## (= the linear functions ##V\rightarrow \mathbb{R}##)? I certainly hope so. Otherwise you need to buy a better book.
 
  • #3
First off, awesome user name. Secondly, the notion of contravariant and covariant vectors is horribly outdated terminology but for some reason the old tensor analysis books seem to keep using it.

These objects are first seen in linear algebra. Since it seems your case of interest is finite-dimensional real vector spaces, let ##V## be a finite-dimensional vector space over ##\mathbb{R}## and ##\left \{e_i \right \}_i## be a basis for ##V##. We know from elementary linear algebra that, by definition of a basis, any ##v\in V## can be written as a unique linear combination of these basis vectors, i.e. ##v = \sum_{i}v^ie_i##.

We also know that there exists a dual space ##V^*##, which is the set of all linear maps ##l:V\rightarrow \mathbb{R}## (also called linear functionals). There is a natural basis for ##V^*##, called the dual basis: the set of linear functionals ##\left \{ e^{i} \right \}_{i}##, ##e^{j}\in V^*##, defined by ##e^i(e_j) = \delta ^{i}_j## (here ##\delta^{i}_j## is the Kronecker delta). We can of course write any ##l\in V^*## as a unique linear combination of the dual basis vectors, ##l = \sum_{i}l_{i}e^i##. So we see that ##l(v) = \sum_{i,j}l_iv^je^{i}(e_j) = \sum_{i,j}l_iv^j\delta ^{i}_j = \sum_{i}l_iv^i##. ##l## is often called a covector in this context.
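As a tiny numeric illustration of this pairing (the component values below are arbitrary): once ##l## and ##v## are expressed in dual bases, ##l(v) = \sum_i l_iv^i## is just a dot product of component arrays.

```python
import numpy as np

# Components of a covector l (lower indices) and a vector v (upper indices)
# with respect to a basis and its dual basis; the numbers are arbitrary.
l = np.array([1.0, 0.0, 2.0])   # l_i
v = np.array([3.0, 5.0, -1.0])  # v^i

# l(v) = sum_i l_i v^i, which NumPy computes as a dot product.
value = l @ v
assert value == 1.0  # 1*3 + 0*5 + 2*(-1)
```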

Let's jump ahead to geometry. Say we have coordinates ##(x^1,x^2,x^3)## defined on some open subset ##U\subseteq \mathbb{R}^{3}## and we make a coordinate transformation ##x^i\rightarrow x'^i##. Under this transformation, the components ##l_i## of a covector ##l##, with respect to some coordinate basis (like the usual standard basis on euclidean space), transform as ##l'_i = \sum_{j}\frac{\partial x^j}{\partial x'^i}l_j##, whereas the components ##v^i## of a vector ##v##, with respect to that basis, transform as ##v'^i = \sum_{j}\frac{\partial x'^i}{\partial x^j}v^j## (I'm being VERY handwavy here because I haven't mentioned tangent spaces etc., but for euclidean space it doesn't really matter). Your book uses these transformation properties to call ##l## a "covariant" vector and ##v## a "contravariant" vector, which is VERY outdated terminology from the old days of classical tensor analysis; the modern notions of covector and vector stem from the linear algebra formulation of these objects.
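For a linear change of coordinates these rules are easy to check numerically. In the sketch below the matrix plays the role of the Jacobian ##\frac{\partial x'^i}{\partial x^j}## (an arbitrary invertible choice, as are the component values), and the key point is that the scalar ##l(v)## comes out the same in both coordinate systems:

```python
import numpy as np

# Jacobian of a linear coordinate change x' = M x (arbitrary invertible matrix).
M = np.array([[2.0, 1.0],
              [0.0, 3.0]])

v = np.array([1.0, 2.0])    # contravariant components v^i
l = np.array([4.0, -1.0])   # covariant components   l_i

v_new = M @ v                    # v'^i = (dx'^i/dx^j) v^j
l_new = np.linalg.inv(M).T @ l   # l'_i = (dx^j/dx'^i) l_j

# The pairing l(v) is coordinate-independent:
assert np.isclose(l_new @ v_new, l @ v)
```

The opposite (inverse-transpose) transformation of the covariant components is exactly what makes the contraction invariant.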
 
  • #4
LawdyLawdy said:
I understand this so far...however he then seems to call the components [itex]A^{i}[/itex] the contravariant components and [itex]A_{i}[/itex] the covariant components without any particular reason as to why which is which. What significance does one have over the other in order to point out which is which?

Are the contravariant components just defined as [itex]\vec{A}[/itex] projected onto the reciprocal basis? In which case the decision as to which is the contravariant components and which is the covariant components depend on which basis was the "original?" in this case [itex]\vec{e^{i}}[/itex]?
You're right. The answer to this part is simply "yes". The contravariant components of A with respect to one basis would be the covariant components of A with respect to the reciprocal basis.

I have never seen this approach before, including the definition of the reciprocal basis. (I did take a quick look inside that book). The standard definitions go like this: Let V be a finite-dimensional vector space over ℝ. Let V* be the set of linear maps from V into ℝ. For each f,g in V*, define f+g by (f+g)v=fv+gv for all v in V. For each f in V* and each a in ℝ, define af by (af)v=a(fv) for all v in V. These definitions turn V* into a vector space over ℝ. It's called the dual space of V. For each basis ##\{e_i\}## of V, we define its dual basis ##\{e^i\}## by ##e^i e_j=\delta^i_j## for all i,j.

Edit: ...but I see that WannabeNewton has already said this.
 
  • #5
In linear algebra, "vector" and "covector" are just arbitrary naming conventions depending on which space is taken as the original.

But in a more general setting, some things are more naturally vectors than covectors. If you have a particle moving through space, the velocity, which is tangent to the curve drawn by the particle, is more naturally a vector.

In contrast, covectors are naturally associated with scalar functions, i.e. things which assign a number to every point of the space (the gradient of such a function is the prototypical covector).

There are also things called "forms", which are on the covector side of things. These are the most natural things to integrate along a path, and can provide a notion of "area" even though there is no metric.

In these settings, linear algebra still holds good, except that one has different vector (tangent) and covector (cotangent) spaces at every point.
 

What are covariant and contravariant components of a vector?

Covariant and contravariant components of a vector are two different ways of representing the same vector relative to a chosen basis. The contravariant components are the expansion coefficients of the vector in the original basis (equivalently, its projections onto the reciprocal basis), while the covariant components are its projections onto the original basis vectors (equivalently, its expansion coefficients in the reciprocal basis).

What is the difference between covariant and contravariant components?

The main difference between covariant and contravariant components is their transformation behaviour under a change of basis. Covariant components transform in the same way as the basis vectors, while contravariant components transform in the inverse way. Both sets of components change when the coordinate axes are rotated or scaled, but they change oppositely, so that the vector they describe stays the same: if every basis vector is doubled, for example, the covariant components double while the contravariant components halve.
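A toy check of this opposite behaviour (assumed setup: start from the standard basis, rescale every basis vector by 2, and recompute both kinds of components of a fixed vector):

```python
import numpy as np

A = np.array([1.0, 4.0, -2.0])  # a fixed vector

E = np.eye(3)    # original basis (columns are the basis vectors)
E2 = 2.0 * E     # every basis vector doubled

# Contravariant components solve A = sum_i A^i e_i, so they halve.
assert np.allclose(np.linalg.solve(E2, A), 0.5 * np.linalg.solve(E, A))

# Covariant components A_i = A . e_i, so they double.
assert np.allclose(E2.T @ A, 2.0 * (E.T @ A))
```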

How are covariant and contravariant components related?

Covariant and contravariant components are related by the metric tensor, whose components g_ij = e_i · e_j encode the relationship between the basis vectors: A_i = Σ_j g_ij A^j lowers an index, and the inverse metric raises it back. The metric tensor thus allows us to convert between covariant and contravariant components, and is essential for performing vector operations in different coordinate systems.
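This index lowering can be sketched numerically. With the basis vectors as the columns of a matrix E (an arbitrary non-orthogonal choice below), the metric is g = E.T @ E, and it maps contravariant components to covariant ones:

```python
import numpy as np

# Arbitrary non-orthogonal basis; columns of E are the basis vectors e_i.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

g = E.T @ E   # metric tensor g_ij = e_i . e_j

A = np.array([2.0, -1.0, 3.0])
A_contra = np.linalg.solve(E, A)  # A^i from A = sum_i A^i e_i
A_co = E.T @ A                    # A_i = A . e_i

# The metric lowers the index, and its inverse raises it back:
assert np.allclose(g @ A_contra, A_co)
assert np.allclose(np.linalg.solve(g, A_co), A_contra)
```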

Why are covariant and contravariant components important in physics?

In physics, covariant and contravariant components are important because they allow us to express physical quantities in different coordinate systems without changing their meaning. This is especially useful in relativity, where the laws of physics must hold in all reference frames, and in differential geometry, where vector fields are defined on curved spaces.

What are some examples of covariant and contravariant components in physics?

An example of contravariant components in physics is the displacement (or velocity) vector: its components are the coefficients of the coordinate basis vectors, and they transform with the inverse Jacobian of a coordinate change. An example of covariant components is the gradient of a scalar field, whose components ∂φ/∂x^i transform with the Jacobian itself; the canonical momentum p_i = ∂L/∂ẋ^i is likewise naturally covariant.
