What are the Confusions Surrounding Tensors in General Relativity?

  • Context: Graduate
  • Thread starter: gabeeisenstei
  • Tags: Tensor
SUMMARY

This discussion centers on the complexities surrounding the use of tensors in General Relativity (GR), particularly focusing on contraction, dual spaces, and the relationship between vectors and covectors. Key points include the mechanics of contraction, which simplifies tensors but often results in the loss of information, and the understanding of dual spaces, which are inversely related to their original spaces. Participants express confusion about when contraction is appropriate and how it affects the resulting tensor's information. The discussion emphasizes the importance of grasping these concepts to fully understand tensor operations in GR.

PREREQUISITES
  • Understanding of tensor mechanics, including contraction and transformation rules
  • Familiarity with the concepts of vectors and covectors in the context of General Relativity
  • Knowledge of dual spaces and their dimensional characteristics
  • Basic grasp of linear functionals and their representation in tensor notation
NEXT STEPS
  • Study the conditions under which tensor contraction can be performed and its implications on information retention
  • Explore the relationship between dual spaces and their dimensional characteristics in depth
  • Investigate the concept of invariants in tensor operations, particularly the pairing of vectors and covectors
  • Examine the definitions of linear functionals and their connection to covectors in various physical contexts
USEFUL FOR

Students and researchers in theoretical physics, particularly those focusing on General Relativity, tensor calculus, and mathematical physics. This discussion is beneficial for anyone seeking to deepen their understanding of tensor operations and their implications in GR.

gabeeisenstei
I've been studying tensors and GR for a while now, and I've read a lot of tensor discussions in this forum, but I still have some gaping blind spots. I know about vectors and covectors and their transformation rules, about the inner and outer products, "raising"/"lowering" with metrics, and contraction. That is, I know the mechanics of these things; but I often don't understand their meaning--what makes an operation (especially contraction) appropriate in context.

1. I understand HOW contraction works, in terms of eliminating a pair of upper/lower indices or reducing a rank-2 tensor to a scalar. What I don't understand is WHEN you can do it, what you lose and what you gain. In particular, under what conditions can you take two indices a and b and "set them equal" in order to perform contraction? When you set b = a, did you throw away the information from b?
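[Editor's note: a minimal NumPy sketch of the mechanic being asked about, added for concreteness; it is my own illustration, not part of the thread. For a mixed rank-(1,1) tensor, "setting b = a" and summing is, in matrix language, the trace.]

```python
import numpy as np

# Contracting a mixed rank-(1,1) tensor T^a_b: set b = a and sum over
# the repeated index. In matrix language this is the trace.
T = np.array([[1.0, 2.0],
              [3.0, 4.0]])

contracted = np.einsum('aa->', T)          # sum of T^a_a over a
assert contracted == np.trace(T) == 5.0    # 1 + 4
```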

2. When you have a tensor with rank>2 and you contract two indices, what happens to the scalar value resulting from their inner product? Does it get retained as a multiplicative factor on the remaining tensor, or is all the information from those two elements lost?
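[Editor's note: an illustrative sketch (mine, not from the thread) of what contraction of a higher-rank tensor produces. The diagonal sums are absorbed into the surviving components; no separate scalar factor is retained.]

```python
import numpy as np

# Contracting a rank-3 tensor T^a_{bc} over a and b leaves a rank-1
# object V_c = T^a_{ac}. Each component of V is a diagonal sum; there
# is no leftover multiplicative scalar.
T = np.arange(8.0).reshape(2, 2, 2)   # components T[a, b, c]
V = np.einsum('aac->c', T)            # sum over a = b

assert V.shape == (2,)
# Check one component by hand: V_0 = T[0,0,0] + T[1,1,0]
assert V[0] == T[0, 0, 0] + T[1, 1, 0]
```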

3. I understand (in a loose sense of the term) that the pairing of vectors and covectors gives invariants, but I don't understand why. (If I have a contravariant vector, can I just "lower" it to covariant form, then multiply these two together to get something invariant?)
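[Editor's note: a numeric check of exactly the recipe proposed in the question, added as my own illustration. Lowering a vector's index with the Minkowski metric and contracting does give a scalar that is unchanged by a Lorentz boost, even though the components themselves change.]

```python
import numpy as np

# Lower an index with the Minkowski metric, then contract: v^a v_a.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric, signature (-,+,+,+)
v = np.array([2.0, 1.0, 0.0, 0.0])     # contravariant components v^a

v_lower = eta @ v                       # v_a = eta_{ab} v^b
scalar = v @ v_lower                    # v^a v_a

# Boost with velocity beta along x; the components change...
beta = 0.6
gamma = 1.0 / np.sqrt(1 - beta**2)
L = np.array([[gamma, -gamma * beta, 0, 0],
              [-gamma * beta, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
v_boost = L @ v
scalar_boost = v_boost @ (eta @ v_boost)

# ...but the contracted scalar does not.
assert np.isclose(scalar, scalar_boost)
```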

4. I "understand" that the dual space V* has basis vectors that are the inverse of those in V, in the sense that their product (using a metric) is the Kronecker delta. I also "understand" that the dual space can be pictured as a (n-m)-dimensional sub-space or hypersurface of the manifold in question (although I only really understand the example of a gradient of a 2D surface in 3D). I do not understand how the "inverse" character of the dual space relates to its (n-m)-dimensional character. (I have a sense that if I understood this, I'd be on my way to understanding #3.)
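[Editor's note: a small sketch (my own, not from the thread) of the "inverse" character of the dual basis. For a non-orthonormal basis written as the columns of a matrix, the dual covectors are the rows of the matrix inverse, which is exactly the Kronecker-delta condition mentioned above.]

```python
import numpy as np

# Columns of E are basis vectors e_1, e_2 of V; rows of E^{-1} are the
# dual basis covectors e^1, e^2, satisfying e^b(e_a) = delta^b_a.
E = np.array([[2.0, 1.0],
              [0.0, 1.0]])
E_dual = np.linalg.inv(E)

assert np.allclose(E_dual @ E, np.eye(2))   # Kronecker delta
```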

5. I have a hard time with the "linear functional" or "form" definitions of a covector when I try to connect them with the numeric components. In the only example I've seen (the electromagnetic field tensor), values were plugged into the covariant slots by taking the components Ex, Ey, Ez and using the inverse Lorentz metric to flip their signs. How do "covectors" constructed in this way represent linear functionals?
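[Editor's note: an illustrative sketch, my own addition. A covector with components w_a is already a linear functional: it acts on a vector by the component sum w(v) = w_a v^a, and linearity is automatic from that sum. Lowering with the metric only fixes which signed components go into the slots.]

```python
import numpy as np

# Covariant components w_a (signs as if lowered with a Minkowski metric).
w = np.array([0.0, -1.0, -2.0, -3.0])

def functional(v):
    """The covector acting as a linear functional: w(v) = w_a v^a."""
    return w @ v

u = np.array([1.0, 0.0, 1.0, 0.0])
v = np.array([0.0, 2.0, 0.0, 1.0])

# Linearity: w(3u + v) = 3 w(u) + w(v).
assert np.isclose(functional(3 * u + v), 3 * functional(u) + functional(v))
```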

Thanks a lot.
 
Take a matrix and then take its trace. This is exactly what you're gaining/losing by performing contraction. You gain simplification, but at the cost of eliminating most of the information contained in the matrix.
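[Editor's note: a numeric illustration of the information loss described in this reply, added by the editor. Many different matrices share the same trace, so the contraction cannot be undone.]

```python
import numpy as np

# Two matrices that differ everywhere off the diagonal (and even on it)
# contract to the same scalar: the trace forgets the rest of the matrix.
A = np.array([[1.0, 5.0],
              [0.0, 2.0]])
B = np.array([[2.0, -3.0],
              [8.0, 1.0]])

assert np.trace(A) == np.trace(B) == 3.0
assert not np.array_equal(A, B)   # yet the matrices differ
```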
 
