Beginner question about tensor index manipulation (undergrad level)

The discussion centers on the manipulation of tensor indices and the relationship between vectors and their duals using a metric tensor. Participants clarify how to properly apply the metric to lower indices and the importance of distinguishing between upper and lower indices in tensor calculus. There is confusion regarding the notation and the explicit inclusion of basis vectors, which some find helpful while others see it as complicating the understanding of tensor operations. The conversation emphasizes that the contraction of indices is a key operation in tensor algebra, allowing for the transformation between vectors and covectors. Ultimately, understanding the underlying principles of duality and the role of the metric is essential for mastering tensor manipulation.
Data Base Erased
TL;DR
I learned about transforming a vector to a covector using the metric tensor, but I feel I lack the basic understanding behind this.
For instance, using the vector ##A^\alpha e_\alpha##:

##g_{\mu \nu} e^\mu \otimes e^\nu (A^\alpha e_\alpha) = g_{\mu \nu}\, e^\mu(A^\alpha e_\alpha)\, e^\nu##

##g_{\mu \nu} e^\mu \otimes e^\nu (A^\alpha e_\alpha) = A^\alpha g_{\mu \nu} \delta_\alpha^\mu e^\nu = A^\mu g_{\mu \nu} e^\nu = A_\nu e^\nu ##.

In this case the intuition is that I had a tensor that could take two vectors and give back a number. Since I fed it only one vector, I am left with a map that takes one vector to a number.
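
To check this numerically I wrote a small sketch (the metric ##\mathrm{diag}(-1,1,1,1)## and the components are just made up by me for illustration):

```python
import numpy as np

# a metric to lower indices with; diag(-1, 1, 1, 1) is just my assumption,
# any non-degenerate symmetric matrix would work the same way
g = np.diag([-1.0, 1.0, 1.0, 1.0])

# contravariant components A^mu (made-up numbers)
A_up = np.array([2.0, 1.0, 0.0, 3.0])

# A_nu = g_{mu nu} A^mu: contract (sum) over the repeated index mu
A_down = np.einsum('mn,m->n', g, A_up)

print(A_down)  # [-2.  1.  0.  3.]
```

Only the time component flips sign with this metric, matching ##A_0 = -A^0##.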

When Carroll talks about this subject in his book he writes

##\eta^{\mu \gamma} T^{\alpha \beta}_{\gamma \delta} = T^{\alpha \beta \mu}_{\delta}## (the upper indexes come first). Again, sorry for the bad typing; I have no idea how to order these indexes here.

In this example, one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it. How do I get his result?
 
I don't understand the notation in the first part because, as you say, a 2nd-rank tensor needs two vectors as input, not only one. Which book is this from?

The horizontal ordering of indices in LaTeX is possible with curly braces {}, e.g., {T^{\alpha \beta}}_{\gamma \delta} gives ##{T^{\alpha \beta}}_{\gamma \delta}##. Then the equation reads
$$\eta^{\mu \gamma} {T^{\alpha \beta}}_{\gamma \delta}={T^{\alpha \beta \mu}}_{\delta}.$$
That's the rule for raising a lower (covariant) index to an upper (contravariant) index. It's understood that you sum over the repeated index pair; this operation is also called a contraction. Note that in a contraction one of the paired indices must always be a lower index and the other an upper index. Summing over two upper or two lower indices is a mistake and must never occur in this general Ricci calculus, where you must distinguish between upper and lower indices.
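
If you like to see this contraction as an explicit sum, here is a minimal numerical sketch (my own illustration; the flat metric ##\eta=\mathrm{diag}(-1,1,1,1)## and the components of ##T## are just made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# inverse Minkowski metric eta^{mu gamma} (an assumption for illustration)
eta_inv = np.diag([-1.0, 1.0, 1.0, 1.0])

# components T^{alpha beta}_{gamma delta}, random numbers for illustration,
# stored in the index order (alpha, beta, gamma, delta)
T = rng.normal(size=(4, 4, 4, 4))

# T^{alpha beta mu}_delta = eta^{mu gamma} T^{alpha beta}_{gamma delta}:
# only the paired index gamma is summed; all other indices are spectators
T_raised = np.einsum('mg,abgd->abmd', eta_inv, T)

# the same contraction written out by hand for one set of free indices
a, b, m, d = 0, 1, 2, 3
by_hand = sum(eta_inv[m, gam] * T[a, b, gam, d] for gam in range(4))
assert np.isclose(T_raised[a, b, m, d], by_hand)
```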
 
It's from a YouTube playlist on tensors. He treats them very algebraically so he leaves the basis vectors explicit.
 
Data Base Erased said:
He treats them very algebraically so he leaves the basis vectors explicit.

Which is actually very, very confusing. See below.

Data Base Erased said:
In this example, one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it.

The same is true in your first example, once you've hacked your way through all the underbrush created by explicitly including all the basis vectors. Notice that what you end up with is the equation ##A^\mu g_{\mu \nu} e^\nu = A_\nu e^\nu##. Leaving out the basis vector, this is just ##A^\mu g_{\mu \nu} = A_\nu##. And that is exactly how Carroll would write it! Notice the whole thing now is just one easy step, and the index on the vector matches one of the indexes on the metric--which is exactly how you lower an index.
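
For instance, with the two-dimensional metric ##g_{\mu\nu} = \mathrm{diag}(-1, 1)## and components ##A^\mu = (2, 3)## (numbers picked purely for illustration), that one easy step works out to
$$A_0 = g_{\mu 0} A^\mu = g_{00} A^0 = -2, \qquad A_1 = g_{\mu 1} A^\mu = g_{11} A^1 = 3.$$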
 
Well, looking at the YouTube video it's not so bad. It's just defining tensors in a "canonical" way (i.e., not dependent on the choice of any basis and the corresponding dual basis). I only find the notation pretty confusing.
 
vanhees71 said:
Well, looking at the YouTube video it's not so bad. It's just defining tensors in a "canonical" way (i.e., not dependent on the choice of any basis and the corresponding dual basis). I only find the notation pretty confusing.

I find it to be fairly standard, but maybe this is because this is how I learned it all those decades ago.

PeterDonis said:
this is just ##A^\mu g_{\mu \nu} = A_\nu##. And that is exactly how Carroll would write it! Notice the whole thing now is just one easy step, and the index on the vector matches one of the indexes on the metric--which is exactly how you lower an index.

But, what do these symbols mean, and from where does the relation come?
 
George Jones said:
what do these symbols mean, and from where does the relation come?

I agree that, to fully answer that question, one needs to understand where the basis vectors come in. But one would have to do the same thing with the second expression the OP asks about. And when one did, one would find that this...

Data Base Erased said:
one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it

...was no longer true of the full expression with all the basis vectors explicitly included, just as is the case for the first example the OP gave. And, conversely, it is true for the simplified expression in the OP's first example with the basis vectors not explicitly included, the one I gave.

That is why I think that explicitly including the basis vectors has confused the OP rather than helped him: it has led him to think there is a difference between his two examples that isn't actually there. The only difference between his two examples is the number of indexes on the tensor (and whether an index is raised or lowered--the latter in the first example, the former in the second). The process of raising or lowering one index is exactly the same in both cases.
 
Data Base Erased said:
Summary: I learned about transforming a vector to a covector using the metric tensor, but I feel I lack the basic understanding behind this.

For instance, using the vector ##A^\alpha e_\alpha##:

I'm also puzzled by this notation; I additionally suspect it's wrong.

Usually, I don't write out the basis vectors explicitly, as others have also commented, so I'd write

$$A_\beta == g_{\alpha \beta} A^\alpha$$

I'm using == to indicate "equivalent to" - more on this later.

I believe this is different from what you wrote.

When we put in the omitted basis vectors, we get instead

$$A_\beta \vec{e}^\beta == A^\alpha \vec{e}_\alpha$$

But I'm not sure we actually need to do this.

I couldn't find an explicit textbook reference, so the following is somewhat from memory - which unfortunately isn't as good as it used to be.

The point is that we have a vector space, with an inner product, and the inner product is defined as
$$\vec{A} \cdot \vec{A} = g_{\alpha\beta} A^\alpha A^\beta $$

But this notion of an inner product on a metric space must be compatible with the more primitive notion that a dual vector is a map from a vector to a scalar. We require that the unique dual of a vector, when applied to the vector itself, gives us the squared length of the vector, i.e. the dot product we defined in our metric space.

I'm a bit unclear as to where the uniqueness of the dual vector in a metric space really comes from. I believe it's because a map from two vectors to a scalar is equivalent to a map from a vector to a dual vector, because a dual vector is a map from a vector to a scalar. But I might be mis-remembering something here.

When I write the == sign, I'm indicating that there is a unique equivalence between a vector and its dual. I tend to think of them as actually being different representations of the "same thing", which would justify the equality, but it seems more prudent to use the == notation to simply say that there is a unique correspondence between vectors and their duals in a metric space. Without a metric, there is no such unique relationship, by the way.

Given that the dual is unique, though, we know that

$$\vec{A} \cdot \vec{A} = A^\alpha A_\alpha = g_{\alpha \beta} A^\alpha A^\beta$$

The fact that these two notions are equivalent requires that

$$A^{\alpha}A_{\alpha} = g_{\alpha \beta} A^\alpha A^{\beta}.$$

We can see this is satisfied in general if

$$A_{\alpha} = g_{\alpha \beta} A^{\beta}$$

And the uniqueness means it's the only solution.
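
As a sanity check of that last relation, here's a minimal numerical sketch (my own; the metric below is an arbitrary symmetric invertible matrix, nothing physical):

```python
import numpy as np

rng = np.random.default_rng(1)

# build an arbitrary symmetric, invertible metric g_{alpha beta}
# (M M^T is just a convenient way to guarantee symmetry)
M = rng.normal(size=(4, 4))
g = M @ M.T

# contravariant components A^alpha, random for illustration
A_up = rng.normal(size=4)

# lower the index: A_alpha = g_{alpha beta} A^beta
A_down = g @ A_up

# the two expressions for the dot product agree:
# A^alpha A_alpha == g_{alpha beta} A^alpha A^beta
assert np.isclose(A_up @ A_down, A_up @ g @ A_up)
```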

It might be additionally helpful to know the properties and definitions of the dual basis, even though the above approach didn't require us to know that.

From https://en.wikipedia.org/w/index.php?title=Dual_basis&oldid=973680445

the relation between the basis vectors and their duals is:

$$\vec{e}_\alpha \cdot \vec{e}^{\,\beta} = \delta_\alpha^\beta$$

But I'm not sure that it's really necessary to know this part.
 
It's a perfectly valid notation for a vector space with a fundamental non-degenerate symmetric bilinear form ##g##, once you have identified vectors and covectors via the corresponding canonical isomorphism wrt. this bilinear form.

For a basis ##\vec{e}_{\alpha}## of the vector space you have
$$g_{\alpha \beta}=g(\vec{e}_{\alpha},\vec{e}_{\beta}).$$
Now for an arbitrary linear form ##\underline{L}## you can find a unique vector ##\vec{L}## such that
$$\forall \vec{x} \in V: \underline{L}(\vec{x})=g(\vec{L},\vec{x}).$$
Like any vector, it can be written as
$$\vec{L}=L^{\alpha} \vec{e}_{\alpha}.$$
The dual basis ##\underline{e}^{\alpha}## is then mapped to vectors ##\vec{e}^{\alpha}##, which you can decompose as
$$\vec{e}^{\alpha}=E^{\alpha \beta} \vec{e}_{\beta}.$$
Now the ##E^{\alpha \beta}## are found by
$$\delta_{\beta}^{\alpha} = \underline{e}^{\alpha}(\vec{e}_{\beta}) = g(\vec{e}^{\alpha},\vec{e}_{\beta}) = E^{\alpha \gamma} g(\vec{e}_{\gamma},\vec{e}_{\beta})=E^{\alpha \gamma} g_{\gamma \beta}.$$
From this you find
$$E^{\alpha \gamma} = g^{\alpha \gamma},$$
where ##(g^{\alpha \gamma})## is by definition the inverse of the matrix ##(g_{\alpha \beta})##, i.e.,
$$g^{\alpha \gamma} g_{\gamma \beta}=\delta_{\beta}^{\alpha}.$$
So finally you have
$$\vec{e}^{\alpha}=E^{\alpha \beta} \vec{e}_{\beta}=g^{\alpha \beta} \vec{e}_{\beta}$$
and
$$\vec{e}_{\alpha}=g_{\alpha \beta} \vec{e}^{\beta}.$$
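These relations between the two bases are easy to verify numerically. Here is a minimal sketch (my own illustration, taking an arbitrary basis of ##\mathbb{R}^4## with the standard dot product playing the role of ##g##):

```python
import numpy as np

rng = np.random.default_rng(2)

# column alpha of E is the basis vector e_alpha (an arbitrary invertible
# choice; the standard dot product plays the role of g)
E = rng.normal(size=(4, 4))

# g_{alpha beta} = g(e_alpha, e_beta), and g^{alpha beta} is its inverse
g_lower = E.T @ E
g_upper = np.linalg.inv(g_lower)

# dual basis vectors: e^alpha = g^{alpha beta} e_beta (column alpha of E_dual)
E_dual = np.einsum('ab,ib->ia', g_upper, E)

# defining property of the dual basis: e^alpha . e_beta = delta^alpha_beta
assert np.allclose(E_dual.T @ E, np.eye(4))
```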
From this you get the following, using this canonical identification of vectors and dual vectors wrt. the fundamental form ##g##:
$$\vec{A}=A^{\alpha} \vec{e}_{\alpha} =A^{\alpha} g_{\alpha \beta} \vec{e}^{\beta}=A_{\beta} \vec{e}^{\beta}$$
and thus
$$A_{\beta}=g_{\alpha \beta} A^{\alpha}.$$
The other way goes like this:
$$\vec{A}=A_{\alpha} \vec{e}^{\alpha} =A_{\alpha} g^{\alpha \beta} \vec{e}_{\beta}=A^{\beta} \vec{e}_{\beta}$$
and thus
$$A^{\beta}=g^{\alpha \beta} A_{\alpha}.$$
By writing all equations in invariant terms, i.e., in vectors and tensors, you have a much easier time getting the correct transformation properties of the various components and basis vectors under basis transformations.
 
