# Metric Tensor as Simplest Math Object for Describing Space

In summary, the conversation discusses the concept of second rank tensors and their use in relating vectors and changing coordinate representations. The book "A Student's Guide to Vectors and Tensors" by Fleisch and a helpful video by the same author are mentioned. The conversation also touches on the role of two sets of basis vectors in defining the inner product and measuring distances in different locations. The concept of active and passive transformations is also briefly mentioned. The question of why two sets of basis vectors are needed in the metric tensor is posed.
NaiveBayesian
I've been reading Fleisch's "A Student's Guide to Vectors and Tensors" as a self-study, and watched this helpful video also by Fleisch: Suddenly co-vectors and one-forms make more sense than they did when I tried to learn them from Schutz's GR book many years ago.

Especially in the video, Fleisch describes one way of viewing a second-rank tensor: as an object whose individual components relate the input and output components of two sets of basis vectors.

In Chapter 4 of his book, Fleisch draws careful attention to two different ways that second rank tensors can be used to relate vectors; they can be used to change a vector, or to change the coordinate representation of a vector.

So, in a classical rigid-body problem, a second-rank tensor is used to create a new vector. The moment-of-inertia tensor operates on an angular velocity vector expressed in terms of a set of basis vectors, and returns an angular momentum vector, a different vector expressed in terms of the same input basis vectors.

In a change of co-ordinates problem, on the other hand, a second rank tensor operates on a vector expressed in terms of a set of basis vectors, and returns the same vector expressed in terms of a different set of basis vectors.

Even though different things are happening in the two problem types, it is obvious in both cases that the fundamental definition of the problem requires two sets of basis vectors, one used to describe the input and one used to describe the output.

Now, in section 5.5 of his book, Fleisch describes the metric tensor, and describes its function as "allow[ing] you to define fundamental quantities such as lengths and angles in a consistent manner at different locations".

My question is this: Why does this require two sets of basis vectors? If I just want to measure a distance, not change the vector that describes that distance, and not change coordinates, why do I need to involve the second set of basis vectors that a tensor implies? Or, to put the question more concretely, What does the "second set" of basis vectors represent, for example in the Cartesian metric in 3-dimensions?

NaiveBayesian said:
In Chapter 4 of his book, Fleisch draws careful attention to two different ways that second rank tensors can be used to relate vectors; they can be used to change a vector, or to change the coordinate representation of a vector.
I strongly doubt this, because it is wrong. The transformation coefficients are not tensors.

NaiveBayesian said:
Why does this require two sets of basis vectors? If I just want to measure a distance, not change the vector that describes that distance, and not change coordinates, why do I need to involve the second set of basis vectors that a tensor implies? Or, to put the question more concretely, What does the "second set" of basis vectors represent, for example in the Cartesian metric in 3-dimensions?
The metric defines the inner product between two vectors. The inner product is used to define vector lengths and angles.
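As a concrete sketch (my own illustration, not from the thread): in 3-D Cartesian coordinates the metric components form the identity matrix, and lengths and angles both fall out of the single formula ##g(\vec{u}, \vec{v}) = g_{ij} u^i v^j##.

```python
import numpy as np

# Cartesian metric in 3-D: the components g_ij form the identity matrix
g = np.eye(3)

def inner(u, v):
    """Inner product g(u, v) = g_ij u^i v^j."""
    return u @ g @ v

u = np.array([3.0, 0.0, 0.0])
v = np.array([0.0, 4.0, 0.0])

# length comes from feeding the same vector into both slots: |u| = 3.0
length_u = np.sqrt(inner(u, u))

# the angle comes from the inner product of two different vectors;
# u and v are orthogonal here, so the cosine is 0.0
cos_angle = inner(u, v) / (np.sqrt(inner(u, u)) * np.sqrt(inner(v, v)))
```

So the "two slots" of the metric are not two different coordinate systems; they are two vector arguments, which may or may not be the same vector.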

NaiveBayesian said:
I've been reading Fleisch's "A Student's Guide to Vectors and Tensors" as a self-study, and watched this helpful video also by Fleisch: Suddenly co-vectors and one-forms make more sense than they did when I tried to learn them from Schutz's GR book many years ago.

Especially in the video, Fleisch describes one way of viewing a second-rank tensor: as an object whose individual components relate the input and output components of two sets of basis vectors.

In Chapter 4 of his book, Fleisch draws careful attention to two different ways that second rank tensors can be used to relate vectors; they can be used to change a vector, or to change the coordinate representation of a vector.

I don't have the book, but I'm assuming this is a discussion of what's usually called active and passive transformations. See for instance this <<wiki link>>.

Now, in section 5.5 of his book, Fleisch describes the metric tensor, and describes its function as "allow[ing] you to define fundamental quantities such as lengths and angles in a consistent manner at different locations".

My question is this: Why does this require two sets of basis vectors? If I just want to measure a distance, not change the vector that describes that distance, and not change coordinates, why do I need to involve the second set of basis vectors that a tensor implies? Or, to put the question more concretely, What does the "second set" of basis vectors represent, for example in the Cartesian metric in 3-dimensions?

I would agree that a linear map from a vector to a vector is one example of a second rank tensor. But I wouldn't regard it as a good definition of a second rank tensor. Second rank tensors can take other forms. One of these other forms is that of a linear map from a pair of vectors to a scalar. A scalar is basically just a number that doesn't depend on one's choice of coordinates or basis vectors.

The metric tensor falls most easily into the category of a linear map from a pair of vectors to a scalar. The metric tensor can accept two vectors in either order (not all second rank tensors allow this, but the metric tensor is symmetric, which gives it this property). The scalar output of the metric tensor, given two vector inputs, is the inner product. This can be regarded as the length of one vector times the length of the projection of the other vector onto it, using the projection technique Fleisch used in the early part of his video (which I watched part of, though not all the way through).

The inner product gives information on the angle between the two vectors - for instance, if the inner product of two vectors is zero, the vectors are at right angles to each other - they are orthogonal. Thus the inner product (and hence the metric tensor) can be used in this manner to define the angle between two vectors. If one applies the same vector to both inputs of the metric tensor, the result is the square of the length of that vector. Thus the inner product can also be used to define the length of a vector.
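Fleisch's phrase about measuring lengths "at different locations" can be made concrete with a non-Cartesian example (my own illustration, not from the thread): in plane polar coordinates the metric components are diag(1, r²), so the same coordinate displacement corresponds to different physical lengths at different radii.

```python
import numpy as np

def polar_metric(r):
    """Metric components g_ij in plane polar coordinates (r, theta)."""
    return np.diag([1.0, r**2])

def length(disp, r):
    """Physical length of a small coordinate displacement at radius r."""
    g = polar_metric(r)
    return np.sqrt(disp @ g @ disp)

# the same coordinate displacement dtheta = 0.1 ...
disp = np.array([0.0, 0.1])

# ... has different physical lengths at different locations:
l_near = length(disp, r=1.0)   # arc length 0.1 at r = 1
l_far  = length(disp, r=10.0)  # arc length 1.0 at r = 10
```

This is why the metric has to be specified point by point: the coordinate components of a displacement alone don't carry length information until the metric at that location converts them.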

To go much further than this, one would need to introduce the concept of dual vectors. I'm not sure that would be a good idea at this point, so I'll stop here.

pervect said:
But I wouldn't regard it as a good definition of a second rank tensor. Second rank tensors can take other forms. One of these other forms is that of a linear map from a pair of vectors to a scalar. A scalar is basically just a number that doesn't depend on one's choice of coordinates or basis vectors.
I would say these are equivalent. A linear map ##T## from a pair of vectors to scalars also defines a map from vectors to vectors (or more precisely from vectors to dual vectors, but let us stop there as you say and live in a space with a metric), for any vector ##X##, this map would be given by ##X \to f## such that ##f(Y) = T(Y,X)##. In the same way, a map ##S## from vectors to vectors will be equivalent to a map from a pair of vectors to scalars through ##X\cdot S(Y)##. (In the case of the metric, it can be equivalently seen either as a map from a pair of tangent vectors to scalars or as a map from the tangent space to the dual space.)
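The equivalence described here can be checked numerically (my own sketch, not from the thread): contracting the bilinear map ##T## with a vector ##X## in one slot leaves a map acting on the remaining slot.

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))    # components of a bilinear map T(Y, X)
X = rng.standard_normal(3)
Y = rng.standard_normal(3)

# T as a map from a pair of vectors to a scalar:
scalar = Y @ T @ X                 # T(Y, X) = T_ij Y^i X^j

# feeding X into the second slot leaves a map f of one vector, f(Y) = T(Y, X):
f = T @ X                          # components f_i = T_ij X^j

assert np.isclose(f @ Y, scalar)   # f(Y) agrees with T(Y, X)
```

The same contraction with the metric tensor is exactly the "index lowering" that turns a tangent vector into a dual vector.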

pervect said:
I don't have the book, but I'm assuming this is a discussion of what's usually called active and passive transformations. See for instance this <<wiki link>>.

While an actual change of the vectors by a linear transformation would be a tensor, this is not what chapter 4 talks about. It talks only about coordinate transformations (i.e., passive transformations), as far as I can tell. I believe the OP has mistaken the transformation coefficients ##\partial x^i/\partial y^j## for tensors.

I re-read chapter 4 of Fleisch, and the replies are correct and I was wrong; Fleisch hasn't introduced tensors at that point.

Given that, allow me to re-do the second part of my question:

In a classical rigid-body problem, a second-rank tensor is used to create (calculate? generate?) a new vector. The moment-of-inertia tensor operates on an angular velocity vector expressed in terms of a set of basis vectors, and returns an angular momentum vector expressed in terms of the basis-vectors used for the input.

It is also possible to use tensors to create (calculate? generate?) vectors in a basis that is different from the input basis vector set, i.e. tensor calculations are not limited to calculations where input and output basis vectors are identical.

My question, then, is still: Why does the simplest mathematical form that can give meaning to concepts such as distances and angles require two sets of basis vectors? Is the answer simply that, when operating on a vector with something that returns another vector, you need to specify the input and output basis vectors as part of the set-up of the problem?

NaiveBayesian said:
It is also possible to use tensors to create (calculate? generate?) vectors in a basis that is different from the input basis vector set, i.e. tensor calculations are not limited to calculations where input and output basis vectors are identical.
The basis that you choose to work in is not actually relevant. Vectors and tensors are not basis dependent; only their components in a particular coordinate system are. The direction to the Moon does not change depending on what basis you use, although the components in a particular basis will.
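This basis independence can be checked directly (my own sketch, not from the thread): rotating the basis changes a vector's components but not any metric-derived quantity such as its length.

```python
import numpy as np

v = np.array([3.0, 4.0])           # components in the original basis

theta = 0.7                        # rotate the basis by an arbitrary angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v_new = R @ v                      # components of the SAME vector in the new basis

# the components differ, but the length (a basis-independent quantity) does not:
assert not np.allclose(v, v_new)
assert np.isclose(np.linalg.norm(v), np.linalg.norm(v_new))  # both are 5.0
```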

NaiveBayesian said:
Why does the simplest mathematical form that can give meaning to concepts such as distances and angles require two sets of basis vectors? Is the answer as simple that it is because when operating on a vector with something that returns another vector, you need to specify the input and output basis vectors as part of the set-up of the problem?
The metric tensor includes more information. It defines an inner product and we can use the same concepts that we are used to from regular Euclidean spaces to define distances and angles from the inner product.

NaiveBayesian said:
I re-read chapter 4 of Fleisch, and the replies are correct and I was wrong; Fleisch hasn't introduced tensors at that point.

Given that, allow me to re-do the second part of my question:

In a classical rigid-body problem, a second-rank tensor is used to create (calculate? generate?) a new vector. The moment-of-inertia tensor operates on an angular velocity vector expressed in terms of a set of basis vectors, and returns an angular momentum vector expressed in terms of the basis-vectors used for the input.

It is also possible to use tensors to create (calculate? generate?) vectors in a basis that is different from the input basis vector set, i.e. tensor calculations are not limited to calculations where input and output basis vectors are identical.

My question, then, is still: Why does the simplest mathematical form that can give meaning to concepts such as distances and angles require two sets of basis vectors? Is the answer simply that, when operating on a vector with something that returns another vector, you need to specify the input and output basis vectors as part of the set-up of the problem?

If you want your mathematical concept of distance to be the same as the notion of distance in plane Euclidean geometry, you can use the following argument. If you use a pair of orthogonal basis vectors of unit length, ##\vec{A}## and ##\vec{B}##, you can write a general vector ##\vec{C}## in that basis as ##\vec{C} = m\vec{A}+n\vec{B}##. Because the basis vectors are orthogonal, ##m\vec{A}## and ##n\vec{B}## will form a right triangle. Because ##\vec{A}## and ##\vec{B}## are unit vectors, the lengths of the sides of this right triangle will be ##m## and ##n##. The length of the general vector ##\vec{C} = m\vec{A}+n\vec{B}## will then be the length of the hypotenuse of the right triangle. By the Pythagorean theorem, this must be length ##= \sqrt{m^2 + n^2}##, or length##^2 = m^2 + n^2##. So we see that the square of the length of a vector in an orthonormal basis must be a quadratic form.

Tensors are multilinear operators. A rank 2 tensor is bilinear: it is linear in each of its two vector arguments, so feeding the same vector into both slots produces a quadratic form such as ##m^2 + n^2##. A rank 1 tensor is linear in its single argument and therefore cannot represent a quadratic form, because a quadratic form is quadratic rather than linear.
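The rank-counting argument above can be sketched numerically (my own illustration, not from the thread): under a rescaling of the input vector, a rank-1 tensor responds linearly, while the quadratic form built from a rank-2 tensor responds quadratically, so one slot is not enough to encode length squared.

```python
import numpy as np

g = np.eye(2)                      # metric: a rank-2 tensor (two vector slots)
w = np.array([1.0, 1.0])           # a rank-1 tensor (one vector slot)

C = np.array([3.0, 4.0])           # the vector C = m*A + n*B, with m=3, n=4

# scale the input vector by 2 and compare the responses:
lin  = w @ (2 * C)                 # = 2 * (w @ C): linear response
quad = (2 * C) @ g @ (2 * C)       # = 4 * (C @ g @ C): quadratic response

assert np.isclose(lin, 2 * (w @ C))
assert np.isclose(quad, 4 * (C @ g @ C))   # length**2 needs both slots
```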

## 1. What is a metric tensor?

A metric tensor is a mathematical object used to describe the geometry of a space. It defines the distance between points in that space, as well as the angles between vectors and the shapes of geometric objects.

## 2. How is a metric tensor used to describe space?

A metric tensor assigns an inner product to the vectors at each point of a space, which makes it possible to measure distances and angles within that space in any coordinate system. It also underlies the calculation of geometric properties such as curvature and volume.

## 3. What makes a metric tensor the simplest math object for describing space?

A metric tensor is considered the simplest mathematical object for describing space because it is the minimal structure needed to define lengths and angles, from which the rest of a space's geometry follows. It is also the foundation for many mathematical models used in physics and engineering.

## 4. How is a metric tensor related to general relativity?

In general relativity, the metric tensor is used to describe the curvature of space and time caused by the presence of mass and energy. It is a fundamental concept in the theory and is essential for understanding the behavior of gravity.

## 5. Can a metric tensor be used to describe non-Euclidean spaces?

Yes, a metric tensor can be used to describe non-Euclidean spaces, which are spaces that do not follow the rules of Euclidean geometry. This is because the metric tensor is a general concept that can be applied to various mathematical models and does not rely on the specific properties of Euclidean space.
