Beginner question about tensor index manipulation

In summary, the conversation discusses tensors and the process of raising and lowering indices with the metric. The first example shows how a tensor that takes two vectors and gives back a number can be fed a single vector to produce a covector, while the second example concerns raising an index of a higher-rank tensor in Carroll's notation. The notation used may be confusing, but the underlying operation is the same in both examples.
  • #1
Data Base Erased
TL;DR Summary
I learned about transforming a vector to a covector using the metric tensor, but I feel I lack the basic understanding behind this.
For instance, using the vector ##A^\alpha e_\alpha##:

##g_{\mu \nu} e^\mu \otimes e^\nu (A^\alpha e_\alpha) = g_{\mu \nu} (e^\mu, A^\alpha e_\alpha) e^\nu ##

##g_{\mu \nu} e^\mu \otimes e^\nu (A^\alpha e_\alpha) = A^\alpha g_{\mu \nu} \delta_\alpha^\mu e^\nu = A^\mu g_{\mu \nu} e^\nu = A_\nu e^\nu ##.

In this case the intuition is that I had a tensor that could take two vectors and give back a number. Since I fed it only one vector, I still have a map that now takes one vector to a number.

When Carroll talks about this subject in his book he writes

##\eta^{\mu \gamma} T^{\alpha \beta}_{\gamma \delta} = T^{\alpha \beta \mu}_{\delta}## (the upper indexes come first). Sorry for the bad typing, I have no idea how to order these indexes here.

In this example, one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it. How do I get his result?
 
  • #2
I don't understand the notation in the first part, because, as you say, a 2nd-rank tensor needs two vectors as input, not only one. Which book is this from?

The horizontal ordering of indices in LaTeX is possible with "set brackets" {}, e.g., {T^{\alpha \beta}}_{\gamma \delta} gives ##{T^{\alpha \beta}}_{\gamma \delta}##. Then the equation reads
$$\eta^{\mu \gamma} {T^{\alpha \beta}}_{\gamma \delta}={T^{\alpha \beta \mu}}_{\delta}.$$
That's the rule for raising a lower (covariant) index to an upper (contravariant) index. It's understood here that you sum over the repeated index pair; this is also called a contraction. Note that in a contraction one of the indices must always be a lower index and the other an upper index. Summing over two upper or two lower indices is a mistake and must never occur in this general Ricci calculus, where you must distinguish between upper and lower indices.
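If it helps to see that contraction numerically, here is a minimal sketch; the components of ##T## are just random numbers and the metric is taken to be Minkowski, so this only illustrates the index bookkeeping, not any particular physics. np.einsum does the sum over the repeated index pair:

Code:
import numpy as np

# Minkowski metric eta_{mu nu} and its inverse eta^{mu nu}
# (numerically they coincide for this metric, but conceptually the inverse is what raises indices)
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)

# Made-up components T^{alpha beta}_{gamma delta}, axis order (alpha, beta, gamma, delta)
rng = np.random.default_rng(0)
T = rng.normal(size=(4, 4, 4, 4))

# eta^{mu gamma} T^{alpha beta}_{gamma delta} = T^{alpha beta mu}_{delta}
# sum over gamma; the result's axes are ordered (alpha, beta, mu, delta)
T_raised = np.einsum('mg,abgd->abmd', eta_inv, T)

print(T_raised.shape)   # (4, 4, 4, 4)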
 
  • #3
It's from a YouTube playlist on tensors. He treats them very algebraically so he leaves the basis vectors explicit.
 
  • #4
Data Base Erased said:
He treats them very algebraically so he leaves the basis vectors explicit.

Which is actually very, very confusing. See below.

Data Base Erased said:
In this example, one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it.

The same is true in your first example, once you've hacked your way through all the underbrush created by explicitly including all the basis vectors. Notice that what you end up with is the equation ##A^\mu g_{\mu \nu} e^\nu = A_\nu e^\nu##. Leaving out the basis vector, this is just ##A^\mu g_{\mu \nu} = A_\nu##. And that is exactly how Carroll would write it! Notice the whole thing now is just one easy step, and the index on the vector matches one of the indexes on the metric--which is exactly how you lower an index.
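As a quick numerical sanity check of that one step, here is a minimal sketch; the Minkowski metric and the vector components are chosen arbitrarily, just for illustration:

Code:
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])      # metric components g_{mu nu} (Minkowski, for concreteness)
A_up = np.array([2.0, 1.0, 0.0, 3.0])   # contravariant components A^mu

# A_nu = A^mu g_{mu nu}: contract the vector's index against the metric's first index
A_down = np.einsum('m,mn->n', A_up, g)

print(A_down)   # [-2.  1.  0.  3.]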
 
  • #5
Well, looking at the YouTube video it's not so bad. It's just defining tensors in a "canonical" way (i.e., not dependent on the choice of any basis and the corresponding dual basis). I only find the notation pretty confusing.
 
  • #6
vanhees71 said:
Well, looking at the YouTube video it's not so bad. It's just defining tensors in a "canonical" way (i.e., not dependent on the choice of any basis and the corresponding dual basis). I only find the notation pretty confusing.

I find it to be fairly standard, but maybe this is because this is how I learned it all those decades ago.

PeterDonis said:
this is just ##A^\mu g_{\mu \nu} = A_\nu##. And that is exactly how Carroll would write it! Notice the whole thing now is just one easy step, and the index on the vector matches one of the indexes on the metric--which is exactly how you lower an index.

But what do these symbols mean, and from where does the relation come?
 
  • #7
George Jones said:
what do these symbols mean, and from where does the relation come?

I agree that, to fully answer that question, one needs to understand where the basis vectors come in. But one would have to do the same thing with the second expression the OP asks about. And when one did, one would find that this...

Data Base Erased said:
one of the indexes of the metric is the same as one of the indexes of the tensor that's feeding it

...was no longer true of the full expression with all the basis vectors explicitly included, just as is the case for the first example the OP gave. And, conversely, it is true for the simplified expression in the OP's first example with the basis vectors not explicitly included, the one I gave.

That is why I think that explicitly including the basis vectors has confused the OP rather than helped him: it has led him to think there is a difference between his two examples that isn't actually there. The only difference between his two examples is the number of indexes on the tensor (and whether an index is raised or lowered--the latter in the first example, the former in the second). The process of raising or lowering one index is exactly the same in both cases.
 
  • #8
Data Base Erased said:
Summary:: I learned about transforming a vector to a covector using the metric tensor, but I feel I lack the basic understanding behind this.

For instance, using the vector ##A^\alpha e_\alpha##:

I'm also puzzled by this notation, and I additionally suspect it's wrong.

Usually, I don't write out the basis vectors explicitly, as others have also commented, so I'd write

$$A_\beta == g_{\alpha \beta} A^\alpha$$

I'm using == to indicate "equivalent to" - more on this later.

I believe this is different from what you wrote.

When we put in the omitted basis vectors, we get instead

$$A_\beta \vec{e}^\beta == g_{\alpha \beta} A^\alpha \vec{e}_\alpha$$

But I'm not sure we actually need to do this.

I couldn't find an explicit textbook reference, so the following is somewhat from memory - which unfortunately isn't as good as it used to be.

The point is that we have a vector space, with an inner product, and the inner product is defined as
$$\vec{A} \cdot \vec{A} = g_{\alpha\beta} A^\alpha A^\beta $$

But this notion of an inner product on a metric space must be compatible with the more primitive notion that a dual vector is a map from a vector to a scalar: we require that the unique dual of a vector, when applied to that vector, gives us the squared length of the vector, i.e. the dot product we defined in our metric space.

I'm a bit unclear as to where the uniqueness of the dual vector in a metric space really comes from. I believe it's because a map from two vectors to a scalar is equivalent to a map from a vector to a dual vector, since a dual vector is itself a map from a vector to a scalar. But I might be mis-remembering something here.

When I write the == sign, I'm indicating that there is a unique equivalence between a vector and its dual. I tend to think of them as actually being different representations of the "same thing", which would justify the equality, but it seems more prudent to use the == notation to simply say that there is a unique correspondence from vectors to their duals in a metric space. Without a metric, there is no such unique relationship, by the way.

Given that the dual is unique, though, we know that

$$\vec{A} \cdot \vec{A} = A^\alpha A_\alpha = g_{\alpha \beta} A^\alpha A^\beta$$

The fact that these two notions are equivalent requires that

$$A^{\alpha}A_{\alpha} = g_{\alpha \beta} A^\alpha A^{\beta};$$

we can see this is satisfied in general if

$$A_{\alpha} = g_{\alpha \beta} A^{\beta}$$

And the uniqueness means it's the only solution.
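Here is a small numeric check of that compatibility requirement; the metric and components are invented purely for illustration:

Code:
import numpy as np

g = np.array([[-1.0, 0.0],
              [ 0.0, 1.0]])        # a 2D metric, invented for the example
A_up = np.array([3.0, 2.0])        # A^alpha

A_down = g @ A_up                  # A_alpha = g_{alpha beta} A^beta

lhs = A_up @ A_down                # A^alpha A_alpha
rhs = A_up @ g @ A_up              # g_{alpha beta} A^alpha A^beta
print(lhs, rhs)                    # both give -5.0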

It might additionally be helpful to know the properties & definitions of the dual basis, even though the above approach didn't require us to know that.

From https://en.wikipedia.org/w/index.php?title=Dual_basis&oldid=973680445

the relation between the basis vectors and their duals is:

$$\vec{e}_\alpha \cdot \vec{e}^\beta = \delta_\alpha^\beta$$

But I'm not sure that it's really necessary to know this part.
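For what it's worth, here is a small sketch, using the ordinary Euclidean dot product and a made-up non-orthonormal basis of the plane, that computes the dual basis and checks that relation:

Code:
import numpy as np

# Rows of E are the basis vectors e_alpha (a non-orthonormal basis, made up for the example)
E = np.array([[1.0, 0.0],
              [1.0, 1.0]])

# g_{alpha beta} = e_alpha . e_beta with the Euclidean dot product
g = E @ E.T
g_inv = np.linalg.inv(g)

# Dual basis vectors e^beta = g^{beta alpha} e_alpha (rows of E_dual)
E_dual = g_inv @ E

# Check e_alpha . e^beta = delta_alpha^beta
print(np.round(E @ E_dual.T, 12))   # identity matrix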
 
  • #9
It's a perfectly valid notation for a vector space with a fundamental non-degenerate symmetric bilinear form ##g##, once you have identified vectors and covectors via the corresponding canonical isomorphism w.r.t. this bilinear form.

For a basis ##\vec{e}_{\alpha}## of the vector space you have
$$g_{\alpha \beta}=g(\vec{e}_{\alpha},\vec{e}_{\beta})$$
Now for an arbitrary linear form ##\underline{L}## you can find a unique vector ##\vec{L}## such that
$$\forall \vec{x} \in V: \underline{L}(\vec{x})=g(\vec{L},\vec{x}).$$
As with any vector, you can write
$$\vec{L}=L^{\alpha} \vec{e}_{\alpha}.$$
The dual basis ##\underline{e}^{\alpha}## is then mapped to the vectors ##\vec{e}^{\alpha}##, and you can decompose these as
$$\vec{e}^{\alpha}=E^{\alpha \beta} \vec{e}_{\beta}.$$
Now the ##E^{\alpha \beta}## are found by
$$\delta_{\beta}^{\alpha} = \underline{e}^{\alpha}(\vec{e}_{\beta}) = g(\vec{e}^{\alpha},\vec{e}_{\beta}) = E^{\alpha \gamma} g(\vec{e}_{\gamma},\vec{e}_{\beta})=E^{\alpha \gamma} g_{\gamma \beta}.$$
From this you find
$$E^{\alpha \gamma} = g^{\alpha \gamma},$$
where ##(g^{\alpha \gamma})## is by definition the inverse of the matrix ##(g_{\alpha \beta})##, i.e.,
$$g^{\alpha \gamma} g_{\gamma \beta}=\delta_{\beta}^{\alpha}.$$
So finally you have
$$\vec{e}^{\alpha}=E^{\alpha \beta} \vec{e}_{\beta}=g^{\alpha \beta} \vec{e}_{\beta}$$
and
$$\vec{e}_{\alpha}=g_{\alpha \beta} \vec{e}^{\beta}.$$
From this you get the following, using this canonical identification of vectors and dual vectors w.r.t. the fundamental form ##g##:
$$\vec{A}=A^{\alpha} \vec{e}_{\alpha} =A^{\alpha} g_{\alpha \beta} \vec{e}^{\beta}=A_{\beta} \vec{e}^{\beta}$$
and thus
$$A_{\beta}=g_{\alpha \beta} A^{\alpha}.$$
The other way goes like this:
$$\vec{A}=A_{\alpha} \vec{e}^{\alpha} =A_{\alpha} g^{\alpha \beta} \vec{e}_{\beta}=A^{\beta} \vec{e}_{\beta}$$
and thus
$$A^{\beta}=g^{\alpha \beta} A_{\alpha}.$$
By writing all equations in invariant terms, i.e., in vectors and tensors, you have a much easier time getting the correct transformation properties of the various components and basis vectors under basis transformations.
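If a concrete check helps, here is a small sketch, with a metric and components invented for the example, verifying that lowering components with ##g_{\alpha \beta}## and raising them again with the inverse ##g^{\alpha \beta}## undo each other:

Code:
import numpy as np

# An invented symmetric, non-degenerate metric g_{alpha beta}
g = np.array([[-1.0, 0.2, 0.0, 0.0],
              [ 0.2, 1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0, 0.0],
              [ 0.0, 0.0, 0.0, 1.0]])
g_inv = np.linalg.inv(g)                 # g^{alpha beta}

A_up = np.array([1.0, 2.0, 3.0, 4.0])    # arbitrary A^alpha

A_down = g @ A_up          # A_beta = g_{alpha beta} A^alpha (g is symmetric)
A_back = g_inv @ A_down    # A^beta = g^{alpha beta} A_alpha

print(np.allclose(A_back, A_up))   # True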
 

What is a tensor index?

A tensor index is a label or symbol used to represent the components of a tensor. It is typically represented by a subscript or superscript and indicates the position of the component within the tensor.

How do I manipulate tensor indices?

Tensor indices can be manipulated using various operations such as contraction, raising and lowering, and index permutation. These operations follow specific rules and can be performed using mathematical equations or tensor notation.
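As an illustration only (the array values below are random and the tensor has no physical meaning), contraction and index permutation can be written compactly with np.einsum; raising and lowering with a metric are shown in the posts above:

Code:
import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 4, 4))       # made-up components T^{alpha beta}_{gamma}

# Index permutation: swap the two upper indices, giving T^{beta alpha}_{gamma}
T_perm = np.einsum('abg->bag', T)

# Contraction: sum an upper index against the lower one, S^{alpha} = T^{alpha beta}_{beta}
S = np.einsum('abb->a', T)

print(T_perm.shape, S.shape)   # (4, 4, 4) (4,)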

What is the importance of tensor index manipulation?

Tensor index manipulation is important because it allows for the simplification and transformation of tensor equations, making them easier to solve and understand. It also allows for the identification of symmetries and patterns within tensors, which can provide valuable insights in various scientific fields.

What are some common mistakes made when manipulating tensor indices?

Some common mistakes include forgetting to account for the indices in each term of an equation, incorrect use of index notation, and forgetting to properly raise or lower indices when necessary. It is important to carefully follow the rules and double-check calculations to avoid errors.

Are there any resources available to learn more about tensor index manipulation?

Yes, there are various textbooks, online tutorials, and courses available for learning about tensor index manipulation. It is also helpful to practice with examples and seek guidance from experienced researchers or mathematicians when needed.
