What are the Confusions Surrounding Tensors in General Relativity?

  • Thread starter gabeeisenstei
  • Tags: Tensor
In summary: Contraction is appropriate when you want to reduce a higher-order tensor to a lower-order one, or to simplify a calculation by eliminating a pair of indices. When contracting two indices, you "set them equal" and sum over the repeated index; the summed components are absorbed into the result rather than retained as a separate multiplicative factor. The pairing of a vector with a covector gives an invariant because the resulting scalar is independent of the basis chosen for them. The basis covectors of the dual space are "inverse" to the basis vectors of the original space in the sense that they pair to give the Kronecker delta, and the dual space has the same dimension as the original space.
  • #1
gabeeisenstei
I've been studying tensors and GR for a while now, and I've read a lot of tensor discussions in this forum, but I still have some gaping blind spots. I know about vectors and covectors and their transformation rules, about the inner and outer products, "raising"/"lowering" with metrics, and contraction. That is, I know the mechanics of these things; but I often don't understand their meaning--what makes an operation (especially contraction) appropriate in context.

1. I understand HOW contraction works, in terms of eliminating a pair of upper/lower indices or reducing a rank-2 tensor to a scalar. What I don't understand is WHEN you can do it, what you lose and what you gain. In particular, under what conditions can you take two indices a and b and "set them equal" in order to perform contraction? When you set b = a, did you throw away the information in b?

2. When you have a tensor with rank>2 and you contract two indices, what happens to the scalar value resulting from their inner product? Does it get retained as a multiplicative factor on the remaining tensor, or is all the information from those two elements lost?
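For what it's worth, the mechanics behind question 2 can be checked directly. A minimal numpy sketch (the array T is made up purely for illustration): contracting two indices sums over the repeated index, and no separate scalar factor is left over.

```python
import numpy as np

# A made-up rank-3 tensor with one upper and two lower indices, T^a_{bc},
# stored as a 2x2x2 array (index order a, b, c).
T = np.arange(8.0).reshape(2, 2, 2)

# Contracting a with b means summing the "diagonal" terms T^a_{ac}:
# the result is a rank-1 object, not a scalar times the leftover tensor.
contracted = np.einsum('aac->c', T)

# The same contraction written as an explicit sum over the repeated index.
manual = sum(T[a, a, :] for a in range(2))

assert np.allclose(contracted, manual)
```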

3. I understand (in a loose sense of the term) that the pairing of vectors and covectors gives invariants, but I don't understand why. (If I have a contravariant vector, can I just "lower" it to covariant form, then multiply these two together to get something invariant?)
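The invariance asked about in question 3 can at least be verified numerically. A minimal sketch, assuming the flat-space Minkowski metric with signature (+, -, -, -); the four-vector components and boost speed are made up for illustration:

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -).
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# A made-up contravariant four-vector and a boost along x with speed v = 0.6c.
v = np.array([2.0, 1.0, 0.0, 0.0])
gamma = 1.0 / np.sqrt(1 - 0.6**2)
L = np.array([[gamma, -gamma * 0.6, 0, 0],
              [-gamma * 0.6, gamma, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

# Lower the index (v_mu = eta_{mu nu} v^nu), then contract with the original.
s = eta @ v @ v          # v_mu v^mu in the original frame

# Boost the vector, lower the boosted version, contract again.
v2 = L @ v
s2 = eta @ v2 @ v2

# The contraction of a vector with its own lowered version is frame-independent.
assert np.isclose(s, s2)
```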

4. I "understand" that the dual space V* has basis vectors that are the inverse of those in V, in the sense that their product (using a metric) is the Kronecker delta. I also "understand" that the dual space can be pictured as an (n-m)-dimensional sub-space or hypersurface of the manifold in question (although I only really understand the example of a gradient of a 2D surface in 3D). I do not understand how the "inverse" character of the dual space relates to its (n-m)-dimensional character. (I have a sense that if I understood this, I'd be on my way to understanding #3.)

5. I have a hard time with the "linear functional" or "form" definitions of a covector when I try to connect them with the numeric components. In the only example I've seen (the electromagnetic field tensor), values were plugged into the covariant slots by taking the components Ex, Ey, Ez and using the inverse Lorentz metric to flip their signs. How do "covectors" constructed in this way represent linear functionals?
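A minimal numerical sketch of the setup in question 5 (all component values are made up): lowering a vector with the Minkowski metric flips its spatial signs, and the resulting list of components then acts as a linear functional on vectors.

```python
import numpy as np

# Minkowski metric, signature (+, -, -, -); the components below are made up.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# "Lowering" a contravariant vector flips the signs of its spatial parts,
# just as the E-field components get sign-flipped in the EM tensor example.
A_up = np.array([3.0, 1.0, 2.0, 4.0])
A_down = eta @ A_up                    # covector components A_mu

# The covector acts as a linear functional: feed it a vector, get a number,
# and the result is linear in its argument.
u = np.array([1.0, 0.0, 1.0, 0.0])
w = np.array([0.0, 2.0, 0.0, 1.0])

assert np.isclose(A_down @ (2 * u + w), 2 * (A_down @ u) + (A_down @ w))
```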

Thanks a lot.
 
  • #2
Take a matrix and then take its trace. That is exactly what you gain and lose by performing a contraction: you gain simplification, but at the cost of discarding most of the information in the matrix (every off-diagonal entry is thrown away, and the diagonal entries are merged into a single sum).
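In numpy terms (the matrix entries are made up):

```python
import numpy as np

# A made-up 2x2 matrix; contracting its two indices is just taking the trace.
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Four numbers go in, one comes out: only the sum of the diagonal survives,
# and the off-diagonal entries are discarded entirely.
tr = np.einsum('aa->', M)

assert tr == np.trace(M)
```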
 

1. What are tensors?

Tensors are mathematical objects that are used to represent and manipulate multidimensional data in a coordinate-independent manner.

2. What are the basic properties of tensors?

The basic properties of tensors include rank, shape, and components. Rank refers to the number of indices needed to describe the tensor, shape refers to the size of each dimension, and components refer to the values in each index position.
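A small numpy illustration of these three properties (the array and its values are made up):

```python
import numpy as np

# A made-up rank-2 tensor: two indices, each running over three values.
T = np.zeros((3, 3))
T[0, 2] = 1.5            # the component in index position (0, 2)

assert T.ndim == 2       # rank: the number of indices
assert T.shape == (3, 3) # shape: the size of each dimension
assert T[0, 2] == 1.5    # components: the value at each index position
```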

3. What is the difference between a tensor and a matrix?

While both tensors and matrices can represent multidimensional data, tensors are more general and can have any number of dimensions, while matrices are restricted to two dimensions.
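For instance, using numpy arrays as a stand-in for component arrays (the name riemann_like is just a made-up label hinting at the four-index shape of the Riemann curvature tensor):

```python
import numpy as np

# A matrix is limited to two indices; a tensor can carry any number.
matrix = np.ones((2, 2))               # two indices
riemann_like = np.ones((4, 4, 4, 4))   # four indices

assert matrix.ndim == 2
assert riemann_like.ndim == 4
```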

4. How are tensors used in physics and engineering?

Tensors are used in physics and engineering to represent physical quantities, such as forces and stresses, that have direction and magnitude in three-dimensional space. They are also used in the study of general relativity and other areas of mathematics and science.
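As a small illustration of the stress example, a (made-up) stress tensor maps a surface normal to the traction vector, the force per unit area, on that surface:

```python
import numpy as np

# A made-up symmetric stress tensor (units of pressure) and a surface normal.
sigma = np.array([[10.0,  2.0, 0.0],
                  [ 2.0,  5.0, 0.0],
                  [ 0.0,  0.0, 1.0]])
n = np.array([1.0, 0.0, 0.0])   # unit normal to the surface

# Traction on the surface: t_i = sigma_ij n_j.
t = sigma @ n

assert np.allclose(t, [10.0, 2.0, 0.0])
```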

5. What are some common sources of confusion when working with tensors?

Some common sources of confusion when working with tensors include understanding index notation, the use of Einstein's summation convention, and the distinction between covariant and contravariant tensors. It is also important to have a solid understanding of vector and matrix operations when working with tensors.
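A short numpy sketch of the summation convention (the metric and vector components are made up): a repeated index is summed over, which np.einsum makes explicit, and the same machinery performs the lowering that turns contravariant components into covariant ones.

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # metric g_{mu nu}
v = np.array([1.0, 2.0, 0.0, 0.0])     # contravariant components v^mu

v_low = np.einsum('mn,n->m', g, v)     # lowering: v_mu = g_{mu nu} v^nu
s = np.einsum('m,m->', v_low, v)       # contraction: v_mu v^mu

# The repeated index (n above, then m) is summed over automatically.
assert np.isclose(s, 1.0 - 4.0)
```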
