Tensor analysis in curvilinear coordinates

In summary: A Brief on Tensor Analysis by James Simmonds is a good resource for understanding superscripts and subscripts in three-dimensional Euclidean space. When the basis is not orthonormal, the dot product of two vectors is written most conveniently by expanding one of them in the dual basis. The dual basis can be found by forming the matrix whose columns are the Cartesian components of the basis vectors; the rows of its inverse are the Cartesian components of the dual basis vectors.
  • #1
hbar314
I'm taking a course in continuum mechanics and have some questions that I'm sure are pretty basic, but I'm just not getting them.

We just started curvilinear coordinates and I was curious if someone could explain in slightly simpler language what the superscripts and subscripts mean.

Or if you happen to know of a site that explains it that would be swell too.
I found references to this paper (http://arxiv.org/abs/math.HO/0403252) several times while searching, but it doesn't seem to help me (the important stuff starts on page 38).

Thanks for any help.
 
  • #2
hbar314 said:
I'm taking a course in continuum mechanics and have some questions that I'm sure are pretty basic, but I'm just not getting them.

We just started curvilinear coordinates and I was curious if someone could explain in slightly simpler language what the superscripts and subscripts mean.

Or if you happen to know of a site that explains it that would be swell too.
I found references to this paper (http://arxiv.org/abs/math.HO/0403252) several times while searching, but it doesn't seem to help me (the important stuff starts on page 38).

Thanks for any help.

First things first... See if your library has a copy of A Brief on Tensor Analysis by James Simmonds. The author's background is in continuum mechanics, if I recall correctly.

Here's a start on the upper and lower indices stuff, following Simmonds' approach. In three-dimensional Euclidean space, one usually chooses the standard orthonormal basis vectors

[tex]\mathbf{e_1} = (1, 0, 0)[/tex]
[tex]\mathbf{e_2} = (0, 1, 0)[/tex]
[tex]\mathbf{e_3} = (0, 0, 1)[/tex]

which point along the x-, y-, and z-axes (Cartesian coordinates). Sometimes it is useful to represent vectors in terms of a more general basis [itex]{\mathbf{g_1}, \ \mathbf{g_2}, \ \mathbf{g_3}}[/itex] in which the basis vectors aren't necessarily of unit length and aren't necessarily orthogonal. For example,

[tex]\mathbf{g_1} = (1, 0, 0)[/tex]
[tex]\mathbf{g_2} = (1, 1, 0)[/tex]
[tex]\mathbf{g_3} = (1, 1, 1)[/tex]

is such a basis: they are linearly independent and span the 3-space. Upon choosing a basis, you can represent any vector in terms of this basis:

[tex]\mathbf{a} = a^1\mathbf{g_1}+a^2\mathbf{g_2}+a^3\mathbf{g_3}[/tex]
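
If it helps to see this concretely, here's a minimal Python/NumPy sketch (the component values [itex]a^1, a^2, a^3[/itex] are made up for illustration) that assembles a from the example basis above:

[code]
import numpy as np

# Cartesian components of the example basis g_1, g_2, g_3 from above.
g1 = np.array([1.0, 0.0, 0.0])
g2 = np.array([1.0, 1.0, 0.0])
g3 = np.array([1.0, 1.0, 1.0])

# Check that the three vectors really form a basis (nonzero determinant).
G = np.column_stack([g1, g2, g3])
assert abs(np.linalg.det(G)) > 1e-12

# Made-up contravariant components a^1, a^2, a^3, and the vector
# a = a^1 g_1 + a^2 g_2 + a^3 g_3 expressed back in Cartesian components.
a_upper = np.array([2.0, -1.0, 3.0])
a_cartesian = a_upper[0] * g1 + a_upper[1] * g2 + a_upper[2] * g3
print(a_cartesian)  # [4. 2. 3.]
[/code]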

Now suppose you want to find the dot product of this vector with another vector b, whose components in this basis are [itex]b^1[/itex], [itex]b^2[/itex], and [itex]b^3[/itex]. You can't just say [itex]\mathbf{a} \cdot \mathbf{b} = a^1b^1 + a^2b^2 + a^3b^3[/itex] as you can for the usual Cartesian basis. This is because, in general,

[tex]\mathbf{g_i} \cdot \mathbf{g_j} \neq 0[/tex]

for i not equal to j. The full expansion is

[tex]\mathbf{a} \cdot \mathbf{b} = a^1b^1\mathbf{g_1} \cdot \mathbf{g_1} + a^1b^2\mathbf{g_1}\cdot\mathbf{g_2} + a^1b^3\mathbf{g_1}\cdot\mathbf{g_3}+
a^2b^1\mathbf{g_2}\cdot\mathbf{g_1} + a^2b^2\mathbf{g_2}\cdot\mathbf{g_2} + a^2b^3\mathbf{g_2}\cdot\mathbf{g_3}+ a^3b^1\mathbf{g_3}\cdot\mathbf{g_1} + a^3b^2\mathbf{g_3}\cdot\mathbf{g_2} + a^3b^3\mathbf{g_3}\cdot\mathbf{g_3}[/tex]
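
For a concrete check, here's a minimal Python/NumPy sketch (again with made-up components) that evaluates this expansion by building the matrix of dot products [itex]\mathbf{g_i} \cdot \mathbf{g_j}[/itex] and comparing against the ordinary Cartesian dot product:

[code]
import numpy as np

# Columns of G are the Cartesian components of the example basis g_1, g_2, g_3.
G = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

# Matrix of dot products: metric[i, j] = g_i . g_j  (3x3, symmetric).
metric = G.T @ G

# Made-up contravariant components of a and b in this basis.
a_upper = np.array([2.0, -1.0, 3.0])
b_upper = np.array([1.0, 0.0, 2.0])

# The nine-term expansion above is just a^i (g_i . g_j) b^j:
dot_via_expansion = a_upper @ metric @ b_upper

# Cross-check against the ordinary Cartesian dot product.
dot_cartesian = (G @ a_upper) @ (G @ b_upper)
print(dot_via_expansion, dot_cartesian)  # 22.0 22.0, both the same number
[/code]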

The nine-term expansion is obviously not a very convenient way to write out the dot product of two vectors. The solution is to introduce another set of three basis vectors [itex]{\mathbf{g^1}, \ \mathbf{g^2}, \ \mathbf{g^3}}[/itex], called the "dual basis vectors", that satisfy these relations:

[tex]\mathbf{g^i} \cdot \mathbf{g_j} = 1[/tex]

when i = j, and 0 otherwise. Since the "dual basis" vectors form a basis, you can write any vector as a linear combination of them. For example, we can rewrite the vector b:

[tex]\mathbf{b} = b_1\mathbf{g^1}+b_2\mathbf{g^2}+b_3\mathbf{g^3}[/tex]

This allows a much simpler way to write the dot product of a and b:

[tex](a^1\mathbf{g_1}+a^2\mathbf{g_2}+a^3\mathbf{g_3}) \cdot (b_1\mathbf{g^1}+b_2\mathbf{g^2}+b_3\mathbf{g^3}) = a^1b_1 + a^2b_2 + a^3b_3[/tex]

So the upper and lower indices are used to distinguish between the components of a vector in a given basis and its components in the corresponding dual basis. Given a particular set of basis vectors, there is a standard way of determining the corresponding dual basis. Construct the matrix whose columns are the Cartesian components of your basis vectors, and find its inverse. The Cartesian components of the dual basis vectors are given by the rows of this inverse matrix.
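
Here's a small Python/NumPy sketch of that recipe for the example basis above (the vectors a and b are made up for illustration):

[code]
import numpy as np

# Columns of G are the Cartesian components of the example basis g_1, g_2, g_3.
G = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [1.0, 1.0, 1.0]])

# Rows of the inverse are the Cartesian components of the dual basis g^1, g^2, g^3.
G_inv = np.linalg.inv(G)
dual = [G_inv[i, :] for i in range(3)]

# Check the defining relation: g^i . g_j = 1 if i == j, else 0.
delta = np.array([[dual[i] @ G[:, j] for j in range(3)] for i in range(3)])
print(np.round(delta, 12))  # identity matrix

# Made-up components: a in the original basis (upper indices); b given as a
# Cartesian vector, then re-expressed in the dual basis (lower indices).
a_upper = np.array([2.0, -1.0, 3.0])
b_cartesian = np.array([3.0, 2.0, 2.0])
b_lower = np.array([b_cartesian @ G[:, j] for j in range(3)])  # b_j = b . g_j

# The dot product collapses to a^i b_i, matching the Cartesian result.
a_cartesian = G @ a_upper
print(a_upper @ b_lower, a_cartesian @ b_cartesian)  # 22.0 22.0
[/code]

The last line shows the simplified formula [itex]a^1b_1 + a^2b_2 + a^3b_3[/itex] giving the same number as the ordinary Cartesian dot product.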

add: If the basis vectors happen to be orthonormal, as in the case of the standard Cartesian basis, the basis vectors are identical to their corresponding dual basis vectors. In books where orthonormal bases are used exclusively, no distinction is usually made between upper and lower indices, because both sets of components are the same.
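
And a quick numerical check of that statement, using a rotated but still orthonormal basis (the rotation angle is an arbitrary choice for illustration):

[code]
import numpy as np

# Rotate the standard basis by 30 degrees about the z-axis; the columns of R
# are still an orthonormal basis.
theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# For an orthonormal basis the inverse is the transpose, so row i of the
# inverse equals column i of R: each dual vector g^i equals g_i.
print(np.allclose(np.linalg.inv(R), R.T))  # True
[/code]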
 
  • #3
After I posted I searched and found a nice thread on this subject (link) that has been helpful.

Your explanation makes sense, though. Thank you for the help. Hopefully I can get a hold of this material before I fall behind.
 

1. What is tensor analysis in curvilinear coordinates?

Tensor analysis in curvilinear coordinates is a mathematical tool used to study and manipulate tensors (multidimensional arrays of numbers) in a coordinate system where the basis vectors vary from point to point. This is in contrast to tensor analysis in Cartesian coordinates, where the basis vectors are constant.

2. Why is tensor analysis in curvilinear coordinates useful?

Tensor analysis in curvilinear coordinates is useful because it allows for the analysis of tensors in non-Cartesian coordinate systems, which are often more relevant and natural for certain physical systems. This approach also simplifies the equations and calculations involved in working with tensors in curved spaces.

3. What are some applications of tensor analysis in curvilinear coordinates?

Tensor analysis in curvilinear coordinates has a wide range of applications in physics, engineering, and mathematics. Some examples include general relativity, fluid dynamics, electromagnetism, and elasticity theory. It also has applications in computer science and machine learning, such as in image processing and pattern recognition.

4. How is tensor analysis in curvilinear coordinates different from tensor analysis in Cartesian coordinates?

The main difference between tensor analysis in curvilinear coordinates and Cartesian coordinates is that the basis vectors are not constant in curvilinear systems. This means that the components of a tensor will vary with position, making the equations and calculations more complex. However, working in a coordinate system that is more appropriate for the problem at hand can often simplify the analysis.

5. What mathematical tools are used in tensor analysis in curvilinear coordinates?

Tensor analysis in curvilinear coordinates relies heavily on vector calculus, including concepts such as gradient, divergence, and curl. It also utilizes differential geometry, specifically tensor calculus in manifolds. Other mathematical tools that are often used include index notation, coordinate transformations, and the tensor product.
