Undergrad Tensor rank: One number or two?

Summary
The discussion centers on the concept of tensor rank, specifically whether it should be represented as a pair of numbers (m,n) indicating the number of up and down indices, or simply as a single number representing the total number of indices. Participants clarify that while tensor coefficients change when indices are raised or lowered, the tensors themselves occupy different spaces and should not be conflated. The importance of maintaining the distinction between vectors and covectors is emphasized, as it affects how tensors interact with each other in mathematical operations. Additionally, the conversation touches on the implications of working with metrics in general relativity, which allows for the manipulation of indices, highlighting the need for careful tracking of their positions. Ultimately, understanding tensor rank and the roles of indices is crucial for proper tensor analysis in both mathematics and physics.
George Keeling
TL;DR
Is it necessary to have two numbers to specify the rank of a tensor?
When I started learning about tensors, the tensor rank was drilled into me: "A rank ##\left(m,n\right)## tensor has ##m## up indices and ##n## down indices." So is a rank (1,1) tensor written ##A_\nu^\mu##, ##A_{\ \ \nu}^\mu##, or is it ##A_\nu^{\ \ \ \mu}##? Tensor coefficients change when the indices move up or down, but surely the tensor itself stays the same. Can I forget about the ##\left(m,n\right)## business and just use the rank of a tensor, i.e. ##m+n## = the total number of indices?
 
You shouldn't ignore it, because the vertical positioning tells you how many vector and covector arguments the tensor takes.
George Keeling said:
Tensor coefficients change when the indices move up or down but surely the tensor itself stays the same.
Hmm I don't think that's right. When you 'raise and lower indices' with the metric tensor, you do end up with a completely different tensor which lives in a different tensor space. For example, take a vector ##A = A^k e_k##, and now take the tensor product with the metric tensor and contract over the last two slots:$$\tilde{A} = \mathscr{C}(2,3) [g \otimes A] = \mathscr{C}(2,3) [ g_{ij} A^k e^i \otimes e^j \otimes e_k] = g_{ij} A^k e^i \delta^j_k = g_{ij} A^j e^i \equiv \tilde{A}_i e^i$$We see that ##\tilde{A}_i = g_{ij} A^j##, however note that ##A \in V## whilst ##\tilde{A} \in V^*##. So ##A## and ##\tilde{A}## are completely different objects, although they are put into one-to-one correspondence via the metric tensor ##g## [i.e. we can define an isomorphism, as above]. In that way, we can abuse notation a little bit and simply write ##A_i = g_{ij} A^j##. And you can extend the exact same reasoning to a general ##(m,n)## tensor, which is put into correspondence with ##2^{m+n}-1## other tensors via the metric.

(N.B. This is not the same as performing a change of basis of a given tensor space, which would indeed give a different representation of the same tensor.)
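The contraction above can be checked numerically. Here is a minimal NumPy sketch of lowering an index (the metric and the components are my own illustrative choices, not from the thread):

```python
import numpy as np

# Lower the index of a vector with the Minkowski metric (signature -,+,+,+),
# i.e. compute the covector components A~_i = g_ij A^j.
g = np.diag([-1.0, 1.0, 1.0, 1.0])      # metric components g_ij
A_up = np.array([2.0, 1.0, 0.0, 3.0])   # components A^j of A in V

A_down = np.einsum('ij,j->i', g, A_up)  # components of A-tilde in V*
print(A_down)  # → [-2.  1.  0.  3.]
```

Note how the time component flips sign: the components of ##\tilde{A} \in V^*## really are different numbers from those of ##A \in V##.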
 
George Keeling said:
Can I forget about the ##\left(m,n\right)## business and just need the rank of a tensor which is ##m+n## = the total number of indices?
That depends on what you want to do. A tensor is composed of vectors and linear forms: ##V\otimes\ldots\otimes V\otimes V^*\otimes\ldots\otimes V^*##. The linear forms are linear functions, i.e. they can be fed vectors and spit out a scalar. ##\operatorname{rank} (m,n)## says how many copies of ##V## resp. ##V^*## there are. From a purely mathematical point of view they are all vector spaces, and even of the same dimension, so it doesn't make a lot of sense to distinguish them if you only consider vectors. From a physical point of view, however, dropping the distinction is completely wrong: vectors set up the space, while covectors (= linear forms) combine into a function which eats vectors and produces a scalar, a number. Compare

https://www.physicsforums.com/insights/what-is-a-tensor/
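fresh_42's "function which eats vectors" picture can be illustrated numerically. A sketch with made-up components (all names and values here are mine, purely for illustration): a (1,1) tensor takes one covector and one vector and returns a scalar.

```python
import numpy as np

# A (1,1) tensor T^i_j takes one covector w and one vector v and
# returns the scalar w_i T^i_j v^j. Components are made up.
T = np.arange(9.0).reshape(3, 3)  # components T^i_j
v = np.array([1.0, 0.0, 2.0])     # a vector (element of V)
w = np.array([0.0, 1.0, 0.0])     # a covector (element of V*)

scalar = np.einsum('i,ij,j->', w, T, v)  # w_i T^i_j v^j
print(scalar)  # → 13.0
```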
 
etotheipi said:
You shouldn't ignore it, because the vertical positioning tells you how many vector and covector arguments the tensor takes.
surely then we need to know the order of the vector and covector arguments. Wouldn't fresh_42's ##V\otimes V\otimes V^*\otimes V^*## be different if it was ##V\otimes V^*\otimes V\otimes V^*##?
And this might be amusing:
Another thing that has been drilled into me is that in the equation for the line element $$
{ds}^2=g_{ij}{dx}^idx^j
$$##{dx}^i,dx^j## are basis dual vectors or covectors (and ##{dx}^idx^j\neq dx^j{dx}^i##). (Apologies for any sloppy ##dx## typesetting.) I suppose it must also be true that $$
{ds}^2=g^{kl}{dx}_k{dx}_l
$$From the post ##g_{ij}A^je^i\equiv{\widetilde{A}}_ie^i## it looks like I could lower the ##i## on the LHS to get $$
A^je_j={\widetilde{A}}_ie^i
$$Changing the ##e## 's to ##dx##'s that gives me$$
{\widetilde{A}}_i=\frac{{dx}_j}{{dx}^i}A^j
$$which looks like the tensor transformation law for a change of basis. No wonder I think, so wrongly, that covector / vector bases are just like different coordinate bases.:eek:
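The all-lower-index form of the line element can at least be sanity-checked numerically: with ##{dx}_k = g_{kj}{dx}^j## and ##g^{kl}## the inverse metric, both expressions give the same number. A quick sketch with an arbitrarily chosen metric (my own example values):

```python
import numpy as np

# Check that g_ij dx^i dx^j = g^kl dx_k dx_l, where dx_k = g_kj dx^j and
# g^kl is the inverse metric. The metric is an arbitrary symmetric
# invertible matrix chosen just for the demonstration.
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])             # g_ij
g_inv = np.linalg.inv(g)               # g^kl
dx_up = np.array([0.1, -0.2])          # dx^i
dx_down = g @ dx_up                    # dx_k = g_kj dx^j

ds2_upper = dx_up @ g @ dx_up          # g_ij dx^i dx^j
ds2_lower = dx_down @ g_inv @ dx_down  # g^kl dx_k dx_l
print(np.isclose(ds2_upper, ds2_lower))  # → True
```

This works because ##(g_{kj}dx^j)\,g^{kl}\,(g_{lm}dx^m) = g_{jm}dx^j dx^m##: the inverse metric undoes one of the two lowerings.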
 
George Keeling said:
surely then we need to know the order of the vector and covector arguments.
Yes. This seems to be conventional - for example in the case of the Riemann tensor ##R^a{}_{bcd}## you just need to know that the ##b## index is associated with the vector you are planning to transport around a loop defined by infinitesimal vectors with indices ##c## and ##d##.
George Keeling said:
that ##{dx}^idx^j\neq dx^j{dx}^i##
I don't think this is quite right. Using distinct vectors, ##U^iV^j=V^jU^i## - i.e., the order in which you write the tensors doesn't matter (unlike matrices). However, in general ##U^iV^j\neq U^jV^i##, because the ##i,j##th component of one tensor is the ##j,i##th component of the other. But in your example ##U=V=dx##, so ##dx^idx^j=dx^jdx^i##. Also, if you contract ##U^iV^j## or ##U^jV^i## with a rank-2 tensor the result can be the same - all that matters is how you match up the indices. So if ##T_{ij}## is an arbitrary tensor, ##T_{ij}U^iV^j=T_{ji}U^jV^i\neq T_{ij}U^jV^i##.
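These distinctions can be seen directly in components. A small NumPy sketch with arbitrary made-up values:

```python
import numpy as np

# U^i V^j vs U^j V^i in components, with arbitrary example values.
U = np.array([1.0, 2.0])
V = np.array([3.0, 5.0])
T = np.array([[1.0, 0.0],
              [2.0, 4.0]])  # an arbitrary rank-2 tensor T_ij

UiVj = np.outer(U, V)  # components U^i V^j
UjVi = np.outer(V, U)  # components U^j V^i
print(np.array_equal(UiVj, UjVi.T))  # → True: one is the transpose of the other

# T_ij U^i V^j and T_ji U^j V^i are the same sum (relabelled dummy indices),
# but T_ij U^j V^i matches the indices differently and generally differs.
a = np.einsum('ij,i,j->', T, U, V)  # T_ij U^i V^j
b = np.einsum('ji,j,i->', T, U, V)  # T_ji U^j V^i
c = np.einsum('ij,j,i->', T, U, V)  # T_ij U^j V^i
print(a == b, a == c)  # → True False
```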
George Keeling said:
I suppose it must also be true that $$
{ds}^2=g^{kl}{dx}_k{dx}_l
$$From the post ##g_{ij}A^je^i\equiv{\widetilde{A}}_ie^i## it looks like I could lower the ##i## on the LHS to get $$
A^je_j={\widetilde{A}}_ie^i$$
Yes.
George Keeling said:
Changing the ##e## 's to ##dx##'s that gives me$$
{\widetilde{A}}_i=\frac{{dx}_j}{{dx}^i}A^j$$
Remember that these are sums. Written out explicitly, ##A^je_j={\widetilde{A}}_ie^i## means ##\sum_j A^je_j=\sum_i{\widetilde{A}}_ie^i##, so you cannot divide both sides by ##e^i## as you did.

Getting back to your original question, in general relativity (which I think is your main interest here) you always have a metric available on any manifold of interest. That means that you can always raise or lower an index, so there isn't really any extra information in a vector that isn't encoded in its dual. So which you use is primarily a matter of computational convenience. You need to keep track of the raised and lowered indices and their orders so that you know that the calculation is legal, but if you have a raised index and it would be more convenient to work with a lowered index, you just lower it. As I understand it, that isn't generally the case in differential geometry (you can have manifolds without metrics), but it works for differential geometry as applied to GR.
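The "no extra information" point can be made concrete: because the metric is invertible, lowering an index and then raising it again recovers the original components. A minimal sketch (Minkowski metric chosen as an example):

```python
import numpy as np

# Lower then raise an index: with an invertible metric, nothing is lost.
g = np.diag([-1.0, 1.0, 1.0, 1.0])  # metric g_ij
g_inv = np.linalg.inv(g)            # inverse metric g^ij
A_up = np.array([1.0, 2.0, 3.0, 4.0])

A_down = np.einsum('ij,j->i', g, A_up)        # A_i = g_ij A^j
A_back = np.einsum('ij,j->i', g_inv, A_down)  # A^i = g^ij A_j
print(np.allclose(A_back, A_up))  # → True
```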

Edit: Had a bit of a LaTeX nightmare with this one. I think I've fixed everything, but if I'm misquoting you somewhere it's because I didn't do the fixing right.
 
@Ibix, from the equation ##g_{ij} A^j e^i = \tilde{A}_i e^i##, does it make sense to lower the index of the ##e^i##? I'm uncertain whether that makes sense geometrically. For one thing, ##A^j e_j \in V## whilst ##\tilde{A}_i e^i \in V^*##, so these last two things cannot be equal.
 
etotheipi said:
@Ibix, from the equation ##g_{ij} A^j e^i = \tilde{A}_i e^i##, does it make sense to lower the index of the ##e^i##? I'm uncertain whether that makes sense geometrically. For one thing, ##A^j e_j \in V## whilst ##\tilde{A}_i e^i \in V^*##, so these last two things cannot be equal.
Good point - I was thinking of ##e^i## as vector components, but if it's actually meant to be the ##i##th basis then there's another reason you can't divide by it.
 
Ibix said:
Good point - I was thinking of ##e^i## as vector components, but if it's actually meant to be the ##i##th basis then there's another reason you can't divide by it.

Ah yeah, sorry, I should really have typeset it in bold or included a little hat or something because now that I look at it, it does look a bit like a vector component 🤦‍♂️
 
You explicitly stated it by noting that ##A=A^ke_k##. But I read your reply this morning and George's this afternoon and I forgot the context. o:) I should read more carefully.
 
Ibix said:
you can have manifolds without metrics
eek! I suspected that there might be such things but I had never seen it written in black and white. It explains a lot that has seemed mysterious. My zone of ignorance has expanded. o:) Egg on my face too for dividing by ##e^i##.
Thanks very much all for your help.
 
