# Tensor analysis in curvilinear coordinates

I'm taking a course in continuum mechanics and have some questions that I'm sure are pretty basic, but I'm just not getting them.

We just started curvilinear coordinates, and I was curious whether someone could explain in slightly simpler language what the superscripts and subscripts mean.

Or if you happen to know of a site that explains it, that would be swell too.
I found references to this paper (http://arxiv.org/abs/math.HO/0403252) several times while searching, but it doesn't seem to help me (the important stuff starts on page 38).

Thanks for any help.

First things first... See if your library has a copy of *A Brief on Tensor Analysis* by James Simmonds. The author's background is in continuum mechanics, if I recall correctly.

Here's a start on the upper- and lower-index business, following Simmonds' approach. In three-dimensional Euclidean space, one usually chooses the standard orthonormal basis vectors

$$\mathbf{e_1} = (1, 0, 0)$$
$$\mathbf{e_2} = (0, 1, 0)$$
$$\mathbf{e_3} = (0, 0, 1)$$

which point along the x-, y-, and z-axes (Cartesian coordinates). Sometimes it is useful to represent vectors in terms of a more general basis $\{\mathbf{g_1}, \ \mathbf{g_2}, \ \mathbf{g_3}\}$, in which the basis vectors are not necessarily of unit length and not necessarily orthogonal. For example,

$$\mathbf{g_1} = (1, 0, 0)$$
$$\mathbf{g_2} = (1, 1, 0)$$
$$\mathbf{g_3} = (1, 1, 1)$$

is such a basis: the three vectors are linearly independent and span 3-space. Having chosen a basis, you can represent any vector in terms of it:

$$\mathbf{a} = a^1\mathbf{g_1}+a^2\mathbf{g_2}+a^3\mathbf{g_3}$$
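To make this concrete, here is a quick numerical sketch (assuming NumPy; the vector $\mathbf{v}$ and its values are made up for illustration). Finding the components $a^i$ of a given Cartesian vector in the example basis above is just solving a small linear system:

```python
import numpy as np

# Columns are the example basis vectors g_1, g_2, g_3 from above
G = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 1.0])  # an arbitrary vector, in Cartesian components

# a^1 g_1 + a^2 g_2 + a^3 g_3 = v  is the linear system  G a = v
a = np.linalg.solve(G, v)
print(a)  # components a^1, a^2, a^3 in the g-basis: [-1.  2.  1.]
```

Multiplying back, $-1\,\mathbf{g_1} + 2\,\mathbf{g_2} + 1\,\mathbf{g_3} = (2, 3, 1)$, as it should.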

Now suppose you want to find the dot product of this vector with another vector $\mathbf{b}$, whose components in this basis are $b^1$, $b^2$, and $b^3$. You can't just write $\mathbf{a} \cdot \mathbf{b} = a^1b^1 + a^2b^2 + a^3b^3$ as you can for the usual Cartesian basis. This is because, in general,

$$\mathbf{g_i} \cdot \mathbf{g_j} \neq 0$$

for $i \neq j$. The full expansion is

$$\begin{aligned}\mathbf{a} \cdot \mathbf{b} = {}& a^1b^1\,\mathbf{g_1} \cdot \mathbf{g_1} + a^1b^2\,\mathbf{g_1}\cdot\mathbf{g_2} + a^1b^3\,\mathbf{g_1}\cdot\mathbf{g_3} \\ &+ a^2b^1\,\mathbf{g_2}\cdot\mathbf{g_1} + a^2b^2\,\mathbf{g_2}\cdot\mathbf{g_2} + a^2b^3\,\mathbf{g_2}\cdot\mathbf{g_3} \\ &+ a^3b^1\,\mathbf{g_3}\cdot\mathbf{g_1} + a^3b^2\,\mathbf{g_3}\cdot\mathbf{g_2} + a^3b^3\,\mathbf{g_3}\cdot\mathbf{g_3}\end{aligned}$$
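As a sanity check, the nine-term expansion above can be evaluated numerically and compared with the ordinary Cartesian dot product (a sketch assuming NumPy; the component values are made up):

```python
import numpy as np

G = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])  # columns are g_1, g_2, g_3

metric = G.T @ G  # entry (i, j) is g_i . g_j; the off-diagonals are nonzero

a_up = np.array([1.0, 2.0, 3.0])  # components a^i (made up)
b_up = np.array([4.0, 5.0, 6.0])  # components b^j (made up)

# The nine-term double sum  a.b = sum_{i,j} a^i b^j (g_i . g_j)
dot_expanded = a_up @ metric @ b_up

# The same dot product computed from Cartesian components
dot_cartesian = (G @ a_up) @ (G @ b_up)

print(dot_expanded, dot_cartesian)  # both 163.0
```

The two agree, which is the point: the double sum is correct but clumsy.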

This is obviously not a very convenient way to write the dot product of two vectors. The solution is to introduce another set of three basis vectors $\{\mathbf{g^1}, \ \mathbf{g^2}, \ \mathbf{g^3}\}$, called the "dual basis vectors", that satisfy the relations

$$\mathbf{g^i} \cdot \mathbf{g_j} = 1$$

when $i = j$, and 0 otherwise. Since the dual basis vectors themselves form a basis, you can write any vector as a linear combination of them. For example, we can rewrite the vector $\mathbf{b}$:

$$\mathbf{b} = b_1\mathbf{g^1}+b_2\mathbf{g^2}+b_3\mathbf{g^3}$$

This allows a much simpler way to write the dot product of a and b:

$$(a^1\mathbf{g_1}+a^2\mathbf{g_2}+a^3\mathbf{g_3}) \cdot (b_1\mathbf{g^1}+b_2\mathbf{g^2}+b_3\mathbf{g^3}) = a^1b_1 + a^2b_2 + a^3b_3$$

So the upper and lower indices distinguish the components of a vector in a given basis from its components in the corresponding dual basis. Given a particular set of basis vectors, there is a standard procedure for finding the corresponding dual basis: construct a matrix whose columns are the Cartesian components of your basis vectors, then invert it. The Cartesian components of the dual basis vectors are the rows of this inverse matrix.
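That recipe is easy to check numerically (a sketch assuming NumPy, reusing the example basis from above; the vectors $\mathbf{a}$ and $\mathbf{b}$ are made up):

```python
import numpy as np

G = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])  # columns are g_1, g_2, g_3

# Rows of the inverse are the Cartesian components of g^1, g^2, g^3
dual = np.linalg.inv(G)

# Defining relation: g^i . g_j = 1 if i == j, else 0
print(dual @ G)  # identity matrix

# The simple dot-product formula  a.b = a^1 b_1 + a^2 b_2 + a^3 b_3
a_up = np.array([-1.0, 2.0, 1.0])   # a^i in the g-basis (made up)
b_cart = np.array([1.0, 0.0, 2.0])  # b in Cartesian components (made up)
b_down = G.T @ b_cart               # b_i = b . g_i, the dual components of b
a_cart = G @ a_up
print(a_up @ b_down, a_cart @ b_cart)  # both 4.0
```

Note how the dual components $b_i$ come out of simple dot products $\mathbf{b} \cdot \mathbf{g_i}$, which is another way to see why the mixed sum $a^i b_i$ collapses so neatly.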

Addendum: if the basis vectors happen to be orthonormal, as in the case of the standard Cartesian basis, they are identical to their corresponding dual basis vectors. In books where orthonormal bases are used exclusively, no distinction is usually made between upper and lower indices, because both sets of components are the same.

After I posted, I searched and found a nice thread on this subject (link) that has been helpful.

Your explanation makes sense, though. Thank you for the help. Hopefully I can get a hold of this material before I fall behind.