# I Metric Tensor as Simplest Math Object for Describing Space

1. Jun 15, 2017

### NaiveBayesian

I've been reading Fleisch's "A Student's Guide to Vectors and Tensors" as a self-study, and watched a helpful video, also by Fleisch. Suddenly co-vectors and one-forms make more sense than they did when I tried to learn them from Schutz's GR book many years ago.

Especially in the video, Fleisch describes one way of viewing a second-rank tensor: as an object whose individual components relate the input and output components with respect to two sets of basis vectors.

In Chapter 4 of his book, Fleisch draws careful attention to two different ways that second rank tensors can be used to relate vectors; they can be used to change a vector, or to change the coordinate representation of a vector.

So, in a classical rigid-body problem, a second-rank tensor is used to create a new vector. The moment-of-inertia tensor operates on an angular velocity vector expressed in terms of a set of basis vectors, and returns an angular momentum vector, which is a different vector that is expressed in terms of the input basis-vectors.
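As a purely illustrative sketch in NumPy (the tensor components here are made up, not from any particular rigid body):

```python
import numpy as np

# Hypothetical moment-of-inertia tensor of a rigid body, with components
# expressed in a chosen orthonormal body-fixed basis (numbers are made up).
I = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])

omega = np.array([1.0, 2.0, 0.0])  # angular velocity components, same basis

L = I @ omega  # angular momentum: a different vector, expressed in the same basis
print(L)       # not parallel to omega unless omega is an eigenvector of I
```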

In a change of co-ordinates problem, on the other hand, a second rank tensor operates on a vector expressed in terms of a set of basis vectors, and returns the same vector expressed in terms of a different set of basis vectors.

Even though different things are happening in the two problem types, it is obvious in both cases that the fundamental definition of the problem requires two sets of basis vectors, one used to describe the input and one used to describe the output.

Now, in section 5.5 of his book, Fleisch describes the metric tensor, and describes its function as "allow[ing] you to define fundamental quantities such as lengths and angles in a consistent manner at different locations".

My question is this: Why does this require two sets of basis vectors? If I just want to measure a distance, not change the vector that describes that distance, and not change coordinates, why do I need to involve the second set of basis vectors that a tensor implies? Or, to put the question more concretely, What does the "second set" of basis vectors represent, for example in the Cartesian metric in 3-dimensions?

2. Jun 16, 2017

### Orodruin

Staff Emeritus
I strongly doubt this, because it is wrong. The transformation coefficients are not tensors.

The metric defines the inner product between two vectors. The inner product is used to define vector lengths and angles.
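In components, this is a minimal sketch, assuming an orthonormal Cartesian basis so that the metric components are just the identity matrix:

```python
import numpy as np

g = np.eye(3)                 # Euclidean metric components in an orthonormal basis
u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])

inner = u @ g @ v                                    # the inner product g(u, v)
length_v = np.sqrt(v @ g @ v)                        # length of v from the metric
cos_angle = inner / (np.sqrt(u @ g @ u) * length_v)  # angle between u and v
print(inner, length_v, cos_angle)
```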

3. Jun 16, 2017

### pervect

Staff Emeritus
I don't have the book, but I'm assuming this is a discussion of what's usually called active and passive transformations. See for instance this <<wiki link>>.

I would agree that a linear map from a vector to a vector is one example of a second-rank tensor. But I wouldn't regard it as a good definition of a second-rank tensor. Second-rank tensors can take other forms. One of these other forms is that of a linear map from a pair of vectors to a scalar. A scalar is basically just a number that doesn't depend on one's choice of coordinates or basis vectors.

The metric tensor falls most easily into the category of a linear map from a pair of vectors to a scalar. The metric tensor can accept two vectors in either order (this is not true of all second-rank tensors, but the metric tensor is symmetric, which gives it this property). The scalar output of the metric tensor given two vector inputs is the inner product. This can be regarded as the length of one vector multiplied by the length of the projection of the other vector onto it, using the projection technique Fleisch used in the early part of his video (which I watched part of, though not all the way through).

The inner product gives information on the angle between the two vectors - for instance, if the inner product of two vectors is zero, the vectors are at right angles to each other - they are orthogonal. Thus the inner product (and hence the metric tensor) can be used in this manner to define the angle between two vectors. If one applies the same vector to both inputs of the metric tensor, the result is the square of the length of that vector. Thus the inner product can also be used to define the length of a vector.
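These properties are easy to check numerically; a small sketch (again assuming an orthonormal basis, so the metric components are the identity):

```python
import numpy as np

g = np.eye(2)                  # Euclidean metric components, orthonormal basis
u = np.array([1.0, 0.0])
v = np.array([0.0, 2.0])

assert u @ g @ v == v @ g @ u  # symmetric: the two inputs can come in either order
print(u @ g @ v)               # 0.0: the inner product vanishes, so u and v are orthogonal
print(u @ g @ u)               # 1.0: the squared length of u
```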

To go much further than this, one would need to introduce the concept of dual vectors. I'm not sure that would be a good idea at this point, so I'll stop here.

4. Jun 16, 2017

### Orodruin

Staff Emeritus
I would say these are equivalent. A linear map $T$ from a pair of vectors to scalars also defines a map from vectors to vectors (or more precisely from vectors to dual vectors, but let us stop there as you say and live in a space with a metric), for any vector $X$, this map would be given by $X \to f$ such that $f(Y) = T(Y,X)$. In the same way, a map $S$ from vectors to vectors will be equivalent to a map from a pair of vectors to scalars through $X\cdot S(Y)$. (In the case of the metric, it can be equivalently seen either as a map from a pair of tangent vectors to scalars or as a map from the tangent space to the dual space.)
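In components this equivalence is just matrix algebra; a quick numerical sketch (random components, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # components of a bilinear map T(Y, X)
X = rng.standard_normal(3)
Y = rng.standard_normal(3)

# View 1: a map from a pair of vectors to a scalar
scalar = Y @ T @ X               # T(Y, X)

# View 2: feeding in X alone yields a (dual) vector f = T(., X),
# which then eats Y to give the same scalar
f = T @ X
assert np.isclose(scalar, f @ Y)
```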

While an actual change of the vectors by a linear transformation would be a tensor, this is not what chapter 4 talks about. It only talks about coordinate transformations (i.e., passive transformations), as far as I can tell. I believe the OP has mistaken the transformation coefficients $\partial x^i/\partial y^j$ for tensors.

5. Jun 17, 2017

### NaiveBayesian

I re-read chapter 4 of Fleisch, and the replies are correct: I was wrong, and Fleisch hasn't introduced tensors at that point.

Given that, allow me to re-do the second part of my question:

In a classical rigid-body problem, a second-rank tensor is used to create (calculate? generate?) a new vector. The moment-of-inertia tensor operates on an angular velocity vector expressed in terms of a set of basis vectors, and returns an angular momentum vector expressed in terms of the basis-vectors used for the input.

It is also possible to use tensors to create (calculate? generate?) vectors in a basis that is different from the input basis vector set, i.e. tensor calculations are not limited to calculations where input and output basis vectors are identical.

My question, then, is still: Why does the simplest mathematical form that can give meaning to concepts such as distances and angles require two sets of basis vectors? Is the answer simply that, when operating on a vector with something that returns another vector, you need to specify the input and output basis vectors as part of the set-up of the problem?

6. Jun 17, 2017

### Orodruin

Staff Emeritus
The basis that you choose to work in is not actually relevant. Vectors and tensors are not basis dependent; only their components in a particular coordinate system are. The direction to the Moon does not change depending on what basis you use, although the components in a particular basis will.

The metric tensor carries additional information beyond the vector space structure: it defines an inner product, and we can use the same concepts that we are used to from regular Euclidean spaces to define distances and angles from that inner product.
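This basis independence can be checked directly: under a change of basis (chosen arbitrarily below) the vector components and the metric components both change, but the inner product does not. A sketch:

```python
import numpy as np

# Components of two vectors and the metric in an orthonormal Cartesian basis
g = np.eye(2)
u = np.array([3.0, 0.0])
v = np.array([1.0, 1.0])

# An invertible change-of-basis matrix (columns are the new basis vectors,
# chosen arbitrarily here). Vector components transform with the inverse;
# the metric components pick up one factor of A on each slot.
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])
A_inv = np.linalg.inv(A)
u_new = A_inv @ u
v_new = A_inv @ v
g_new = A.T @ g @ A

# The inner product is a scalar: the same number in either basis
assert np.isclose(u @ g @ v, u_new @ g_new @ v_new)
```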

7. Jun 17, 2017

### pervect

Staff Emeritus
If you want your mathematical concept of distance to be the same as the notion of distance in plane Euclidean geometry, you can use the following argument. If you use a pair of orthogonal basis vectors of unit length, $\vec{A}$ and $\vec{B}$, you can write a general vector $\vec{C}$ in that basis as $\vec{C} = m\vec{A}+n\vec{B}$. Because the basis vectors are orthogonal, $m\vec{A}$ and $n\vec{B}$ will form a right triangle. Because $\vec{A}$ and $\vec{B}$ are unit vectors, the lengths of the sides of this right triangle will be $m$ and $n$. The length of the general vector $\vec{C} = m\vec{A}+n\vec{B}$ will then be the length of the hypotenuse of the right triangle. By the Pythagorean theorem, this must be length $= \sqrt{m^2 + n^2}$, or length$^2 = m^2 + n^2$. So we see that the square of the length of a vector in an orthonormal basis must be a quadratic form.

Tensors are linear operators, and taking the square of something is not a linear operation; the quadratic form is, however, bilinear when viewed as a function of two vector arguments (feed the same vector into both slots and you get the square). This means that a rank-2 tensor can represent a quadratic form such as $m^2 + n^2$, but a rank-1 tensor cannot, because a quadratic form is quadratic rather than linear.
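As a sketch of this last point: with the identity matrix as the metric components in an orthonormal basis, feeding the same vector into both slots of the rank-2 tensor reproduces exactly the Pythagorean quadratic form:

```python
import numpy as np

g = np.eye(2)             # metric components in an orthonormal basis
C = np.array([3.0, 4.0])  # C = m*A + n*B with components m = 3, n = 4

length_sq = C @ g @ C     # the quadratic form m^2 + n^2
print(length_sq)          # 25.0
assert np.isclose(np.sqrt(length_sq), 5.0)
```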