Tensor rank: One number or two?

  • Context: Undergrad
  • Thread starter: George Keeling
  • Tags: rank, tensor

Discussion Overview

The discussion revolves around the concept of tensor rank, specifically whether it should be represented as a single number or as two separate numbers indicating the number of up and down indices. Participants explore the implications of this representation in both mathematical and physical contexts, addressing the nature of tensors, their components, and the effects of raising and lowering indices.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that the rank of a tensor should be understood as the total number of indices, represented as ##m+n##, while others argue that the distinction between up and down indices is crucial for understanding the tensor's properties.
  • One participant emphasizes that raising and lowering indices with the metric tensor results in different tensors that belong to different tensor spaces, suggesting that the original tensor and its transformed version are fundamentally different objects.
  • Another viewpoint highlights that from a mathematical perspective, the distinction between vector and covector arguments may not be significant, but from a physical standpoint, it is essential to understand how these components interact to produce scalars.
  • There is a discussion about the order of vector and covector arguments, with some participants questioning whether changing the order affects the resulting tensor.
  • Concerns are raised regarding the geometric interpretation of lowering indices, particularly in relation to the basis vectors and their components.
  • Participants note that in general relativity, the availability of a metric allows for the raising and lowering of indices, which may simplify calculations but requires careful tracking of indices to ensure legal operations.
  • Some participants express uncertainty about the implications of working with tensors in spaces without metrics, acknowledging a gap in their understanding.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether the rank of a tensor should be treated as a single number or two separate numbers. Multiple competing views remain regarding the significance of the distinction between vector and covector indices and the implications of raising and lowering indices.

Contextual Notes

Participants acknowledge that the discussion involves complex mathematical concepts and may depend on specific definitions and contexts, particularly in relation to the physical interpretation of tensors and the presence of metrics in certain spaces.

George Keeling
TL;DR
Is it necessary to have two numbers to specify the rank of a tensor?
When I started learning about tensors the idea of rank was drilled into me: "A rank ##\left(m,n\right)## tensor has ##m## up indices and ##n## down indices." So is a rank ##\left(1,1\right)## tensor written ##A_\nu^\mu##, or ##A_{\ \ \nu}^\mu##, or is that ##A_\nu^{\ \ \mu}##? Tensor coefficients change when the indices move up or down but surely the tensor itself stays the same. Can I forget about the ##\left(m,n\right)## business and just need the rank of a tensor which is ##m+n## = the total number of indices?
 
You shouldn't ignore it, because the vertical positioning tells you how many vector and covector arguments the tensor takes.
George Keeling said:
Tensor coefficients change when the indices move up or down but surely the tensor itself stays the same.
Hmm I don't think that's right. When you 'raise and lower indices' with the metric tensor, you do end up with a completely different tensor which lives in a different tensor space. For example, take a vector ##A = A^k e_k##, and now take the tensor product with the metric tensor and contract over the last two slots:$$\tilde{A} = \mathscr{C}(2,3) [g \otimes A] = \mathscr{C}(2,3) [ g_{ij} A^k e^i \otimes e^j \otimes e_k] = g_{ij} A^k e^i \delta^j_k = g_{ij} A^j e^i \equiv \tilde{A}_i e^i$$We see that ##\tilde{A}_i = g_{ij} A^j##, however note that ##A \in V## whilst ##\tilde{A} \in V^*##. So ##A## and ##\tilde{A}## are completely different objects, although they are put into one-to-one correspondence via the metric tensor ##g## [i.e. we can define an isomorphism, as above]. In that way, we can abuse notation a little bit and simply write ##A_i = g_{ij} A^j##. And you can extend the exact same reasoning to a general ##(m,n)## tensor, which is put into correspondence with ##2^{m+n}-1## other tensors via the metric.

(N.B. This is not the same as performing a change of basis of a given tensor space, which would indeed give a different representation of the same tensor.)
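The index-lowering in the post above, ##\tilde{A}_i = g_{ij} A^j##, can be checked numerically. This is a minimal sketch (not from the thread), using NumPy and an assumed Minkowski metric in an orthonormal basis:

```python
import numpy as np

# Assumed example metric: Minkowski, signature (-,+,+,+), orthonormal basis
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # components g_ij

A_up = np.array([2.0, 1.0, 0.0, 3.0])   # components A^j of the vector A = A^j e_j

# tilde{A}_i = g_ij A^j : contract the metric with the vector over j
A_down = np.einsum('ij,j->i', g, A_up)

print(A_down)  # [-2.  1.  0.  3.]
```

The sign flip on the time component is the familiar effect of lowering an index with a Lorentzian metric; the arrays `A_up` and `A_down` are components of two different objects (one in ##V##, one in ##V^*##), exactly as the post describes.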
 
George Keeling said:
Can I forget about the ##\left(m,n\right)## business and just need the rank of a tensor which is ##m+n## = the total number of indices?
That depends on what you want to do. A tensor is composed of vectors and linear forms: ##V\otimes\ldots\otimes V\otimes V^*\otimes\ldots\otimes V^*##. The linear forms are linear functions, i.e. they can be fed with vectors and spit out a scalar. ##\operatorname{rank} (m,n)## says how many copies of ##V## resp. ##V^*## there are. From a purely mathematical point of view they are all vector spaces, and even of the same dimension, so it doesn't make a lot of sense to distinguish them if you only consider vectors. But ignoring the distinction is completely wrong from a physical point of view, since vectors set up the space while covectors (= linear forms) combine into a function which eats vectors and creates a scalar, a number. Cf.

https://www.physicsforums.com/insights/what-is-a-tensor/
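The point above, that a covector is a function which eats a vector and produces a scalar, can be sketched in a few lines. The component values here are made up purely for illustration:

```python
import numpy as np

omega = np.array([1.0, -2.0, 0.5])   # components omega_i of a covector (linear form)
v = np.array([4.0, 1.0, 2.0])        # components v^i of a vector

# Feeding the vector to the linear form: omega(v) = omega_i v^i, a single scalar
scalar = np.einsum('i,i->', omega, v)

print(scalar)  # 3.0
```

Both objects are stored as length-3 arrays, which is exactly the mathematical point: as bare vector spaces ##V## and ##V^*## look the same. The physical distinction is in the roles they play in the contraction.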
 
etotheipi said:
You shouldn't ignore it, because the vertical positioning tells you how many vector and covector arguments the tensor takes.
surely then we need to know the order of the vector and covector arguments. Wouldn't fresh_42's ##V\otimes V\otimes V^*\otimes V^*## be different if it was ##V\otimes V^*\otimes V\otimes V^*##?
And this might be amusing:
Another thing that has been drilled into me is that in the equation for the line element $$
{ds}^2=g_{ij}{dx}^idx^j
$$##{dx}^i,dx^j## are basis dual vectors, or covectors (and that ##{dx}^idx^j\neq dx^j{dx}^i##). (Apologies for ##dx## rather than ##\mathbf{d}x##.) I suppose it must also be true that $$
{ds}^2=g^{kl}{dx}_k{dx}_l
$$From the post ##g_{ij}A^je^i\equiv{\widetilde{A}}_ie^i## it looks like I could lower the ##i## on the LHS to get $$
A^je_j={\widetilde{A}}_ie^i
$$Changing the ##e## 's to ##dx##'s that gives me$$
{\widetilde{A}}_i=\frac{{dx}_j}{{dx}^i}A^j
$$which looks like the tensor transformation law for a change of basis. No wonder I think, so wrongly, that covector / vector bases are just like different coordinate bases.:eek:
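One part of the post above does hold up: the line element really can be written with either index placement, ##ds^2 = g_{ij}\,dx^i dx^j = g^{kl}\,dx_k dx_l##. A numerical check (not from the thread; the Minkowski metric and displacement components are assumed for illustration):

```python
import numpy as np

g_dn = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_ij, assumed Minkowski example
g_up = np.linalg.inv(g_dn)              # g^kl, the inverse metric

dx_up = np.array([1.0, 0.5, 0.0, 2.0])  # components dx^i
dx_dn = g_dn @ dx_up                    # lowered components dx_k = g_ki dx^i

ds2_a = np.einsum('ij,i,j->', g_dn, dx_up, dx_up)   # g_ij dx^i dx^j
ds2_b = np.einsum('kl,k,l->', g_up, dx_dn, dx_dn)   # g^kl dx_k dx_l

print(ds2_a, ds2_b)  # both 3.25
```

The two expressions agree because lowering both displacement indices and raising both metric indices are inverse operations, not because covector bases are "just like" coordinate bases.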
 
George Keeling said:
surely then we need to know the order of the vector and covector arguments.
Yes. This seems to be conventional - for example in the case of the Riemann tensor ##R^a{}_{bcd}## you just need to know that the ##b## index is associated with the vector you are planning to transport around a loop defined by infinitesimal vectors with indices ##c## and ##d##.
George Keeling said:
that ##{dx}^idx^j\neq dx^j{dx}^i##
I don't think this is quite right. Using distinct vectors, ##U^iV^j=V^jU^i## - i.e., the order you write the tensors doesn't matter (unlike matrices). However, in general ##U^iV^j\neq U^jV^i##, because the ##i,j##th component of one tensor is the ##j,i##th component of the other. But in your example, ##U=V=dx##, so ##dx^idx^j=dx^jdx^i##. And also if you contract ##U^iV^j## or ##U^jV^i## with a rank-2 tensor the result may be the same - it only matters how you match up the indices. So if ##T_{ij}## is an arbitrary tensor, ##T_{ij}U^iV^j=T_{ji}U^jV^i\neq T_{ij}U^jV^i##.
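The contraction identities in the paragraph above are easy to verify numerically. A sketch (not from the thread), using a random nonsymmetric tensor so the inequality shows up generically:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3))   # an arbitrary (generically nonsymmetric) tensor T_ij
U = rng.normal(size=3)        # components U^i
V = rng.normal(size=3)        # components V^j

a = np.einsum('ij,i,j->', T, U, V)   # T_ij U^i V^j
b = np.einsum('ji,j,i->', T, U, V)   # T_ji U^j V^i : same index pairing, relabelled
c = np.einsum('ij,j,i->', T, U, V)   # T_ij U^j V^i : different pairing

assert np.isclose(a, b)         # relabelling dummy indices changes nothing
assert not np.isclose(a, c)     # swapping which slot each vector fills does
```

Only the pairing of indices matters, not the order the factors are written in, which is exactly the point being made.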
George Keeling said:
I suppose it must also be true that $$
{ds}^2=g^{kl}{dx}_k{dx}_l
$$From the post ##g_{ij}A^je^i\equiv{\widetilde{A}}_ie^i## it looks like I could lower the ##i## on the LHS to get $$
A^je_j={\widetilde{A}}_ie^i$$
Yes.
George Keeling said:
Changing the ##e## 's to ##dx##'s that gives me$$
{\widetilde{A}}_i=\frac{{dx}_j}{{dx}^i}A^j$$
Remember that these are sums. Writing it out explicitly, ##A^je_j={\widetilde{A}}_ie^i## means ##\sum_jA^je_j=\sum_i{\widetilde{A}}_ie^i##, so you cannot divide both sides by ##e^i## as you did.

Getting back to your original question, in general relativity (which I think is your main interest here) you always have a metric available on any manifold of interest. That means that you can always raise or lower an index, so there isn't really any extra information in a vector that isn't encoded in its dual. So which you use is primarily a matter of computational convenience. You need to keep track of the raised and lowered indices and their orders so that you know that the calculation is legal, but if you have a raised index and it would be more convenient to work with a lowered index, you just lower it. As I understand it, that isn't generally the case in differential geometry (you can have manifolds without metrics), but it works for differential geometry as applied to GR.
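The bookkeeping described above, that with a metric available no information is lost by raising or lowering, can be sketched numerically: lowering an index and raising it again recovers the original components, because ##g^{ik}g_{kj}=\delta^i_j##. The metric below is an assumed example, not anything specific from the thread:

```python
import numpy as np

g_dn = np.diag([-1.0, 1.0, 1.0, 1.0])   # g_ij, assumed Minkowski example
g_up = np.linalg.inv(g_dn)              # g^ik, the inverse metric

# g^ik g_kj = delta^i_j : raising undoes lowering
assert np.allclose(np.einsum('ik,kj->ij', g_up, g_dn), np.eye(4))

A_up = np.array([3.0, -1.0, 2.0, 0.5])       # original components A^j
A_dn = np.einsum('ij,j->i', g_dn, A_up)      # lower the index
A_back = np.einsum('ij,j->i', g_up, A_dn)    # raise it again

assert np.allclose(A_back, A_up)             # round trip recovers A^j
```

This is why, in GR, which index position you work with is mostly a matter of computational convenience, provided the index positions are tracked so every contraction is legal.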

Edit: Had a bit of a LaTeX nightmare with this one. I think I've fixed everything, but if I'm misquoting you somewhere it's because I didn't do the fixing right.
 
@Ibix, from the equation ##g_{ij} A^j e^i = \tilde{A}_i e^i##, does it make sense to lower the index of the ##e^i##? I'm uncertain whether that makes sense geometrically. For one thing, ##A^j e_j \in V## whilst ##\tilde{A}_i e^i \in V^*##, so these last two things cannot be equal.
 
etotheipi said:
@Ibix, from the equation ##g_{ij} A^j e^i = \tilde{A}_i e^i##, does it make sense to lower the index of the ##e^i##? I'm uncertain whether that makes sense geometrically. For one thing, ##A^j e_j \in V## whilst ##\tilde{A}_i e^i \in V^*##, so these last two things cannot be equal.
Good point - I was thinking of ##e^i## as vector components, but if it's actually meant to be the ##i##th basis then there's another reason you can't divide by it.
 
Ibix said:
Good point - I was thinking of ##e^i## as vector components, but if it's actually meant to be the ##i##th basis then there's another reason you can't divide by it.

Ah yeah, sorry, I should really have typeset it in bold or included a little hat or something because now that I look at it, it does look a bit like a vector component 🤦‍♂️
 
You explicitly stated it by noting that ##A=A^ke_k##. But I read your reply this morning and George's this afternoon and I forgot the context. o:) I should read more carefully.
 
Ibix said:
you can have manifolds without metrics
eek! I suspected that there might be such things but I had never seen it written in black and white. It explains a lot that has seemed mysterious. My zone of ignorance has expanded. o:) Egg on my face too for dividing by ##e^i##.
Thanks very much all for your help.
 
