Is There an Upper Bound on Elementary Tensors in Tensor Products?


Discussion Overview

The discussion revolves around the representation of elements in the tensor product of vector spaces, specifically examining whether there exists an upper bound on the number of elementary tensors required to express any tensor in the product space. The scope includes theoretical considerations and mathematical reasoning related to tensor algebra.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant claims that any element in the tensor product space V can be represented as a sum of at most R elementary tensors, where R = (∑j dj) − max{dj}.
  • Another participant notes that the dimension of V is given by the product d1*d2*...*dn, suggesting that this number of vectors is necessary to express an arbitrary vector in V.
  • A further elaboration provides a concrete example with n=2, where a 2x3 matrix is discussed, indicating that not all matrices can be represented as tensor products of elements from the respective vector spaces, and that maximal-rank matrices require exactly two elementary tensors.
  • The same participant argues that non-maximal rank matrices can be represented with fewer elementary tensors, emphasizing the relationship between the rank of the matrix and the number of elementary tensors needed.
  • Another participant questions the validity of the claim for n=2 and suggests using induction to explore the claim further.

Areas of Agreement / Disagreement

Participants express differing views on the necessity and sufficiency of the number of elementary tensors required for representation, with some supporting the initial claim and others questioning its validity, particularly for cases with n=2. The discussion remains unresolved with competing perspectives on the upper bound of elementary tensors.

Contextual Notes

The discussion highlights the complexity of representing tensors in product spaces, with specific examples illustrating the nuances involved in rank and representation. The assumptions regarding the nature of elementary tensors and their relationship to the dimensions of the vector spaces are not fully resolved.

klackity
Tell me if this is true:

We are given vector spaces V1, V2, ..., Vn of dimensions d1, d2, ..., dn respectively.

Let [tex]V = V_1 \otimes V_2 \otimes \cdots \otimes V_n[/tex]

Claim: Any element [tex]v \in V[/tex] can be represented in the following form:

[tex]v = \sum_{i=1}^{R} v_{1,i} \otimes v_{2,i} \otimes \cdots \otimes v_{n,i}[/tex]

Where [tex]R = \left( \sum_{j=1}^{n} d_j \right) - \max_j \{d_j\}[/tex]

And where [tex]v_{j,i} \in V_j[/tex].

In other words, there is an upper bound of R on the number of "elementary" tensors wi needed to represent a particular tensor v in V (where an "elementary" tensor wi is one which can be written in the form [tex]w_i = v_1 \otimes v_2 \otimes \cdots \otimes v_n[/tex], with [tex]v_j \in V_j[/tex]). The upper bound R is simply the sum of all the dj except for the maximal one.

Is this true?
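For reference, the proposed bound is easy to compute from the dimensions alone. A trivial sketch (not from the thread; the function name is my own):

```python
def proposed_bound(dims):
    """R = (sum of all the d_j) minus the maximal d_j, as defined above."""
    return sum(dims) - max(dims)

print(proposed_bound([2, 3]))      # 2
print(proposed_bound([2, 2, 2]))   # 4
print(proposed_bound([4, 5, 6]))   # 9
```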
 
Well, you know that the dimension of V is [tex]d_1d_2...d_n[/tex]. So I guess that you probably need this many vectors to express an arbitrary vector in V...
 
micromass said:
Well, you know that the dimension of V is [tex]d_1d_2...d_n[/tex]. So I guess that you probably need this many vectors to express an arbitrary vector in V...

Well, yes, you need d1*d2*...*dn vectors to form a basis for V.

But I am asking something slightly different. I'll give a concrete example:

Suppose n=2 and V1 = [tex]\mathbb{R}^2[/tex] and V2 = [tex](\mathbb{R}^3)^*[/tex] (the space of linear functionals on [tex]\mathbb{R}^3[/tex])

Then V = V1 [tex]\otimes[/tex] V2 [tex]\cong \mathbb{R}^6[/tex].

Any tensor in V is a 2x3 matrix:

M = [tex]\left[ \begin{array}{ccc} a & b & c \\ d & e & f \end{array} \right][/tex]

Now, the problem is that not every such matrix is the tensor product of a single pair of elements v1 and v2 of V1 and V2 respectively. Any maximal-rank matrix can be represented as the sum of exactly two (and no fewer) such "elementary" tensors, where an "elementary" tensor is the tensor product of two elements v1 and v2 of V1 and V2 respectively (they need not be basis vectors). A non-maximal rank matrix (i.e., one with rank strictly less than 2 = min{2,3}) can be represented as a sum of fewer "elementary" tensors (e.g., a rank-1 matrix needs just one).
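For n=2 this is exactly the rank decomposition of a matrix, so it can be checked numerically. A small sketch (not part of the thread; the matrix values are arbitrary) using NumPy's SVD, which writes M as a sum of rank(M) elementary tensors:

```python
import numpy as np

# A maximal-rank (rank-2) 2x3 matrix: it is NOT a single outer product,
# but it IS the sum of exactly rank(M) = 2 elementary tensors.
M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

U, s, Vt = np.linalg.svd(M)      # M = U @ diag(s) @ Vt
r = np.linalg.matrix_rank(M)     # 2 for this M

# Rebuild M as a sum of r rank-1 ("elementary") tensors u_i (x) v_i.
M_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))

print(r)                          # 2
print(np.allclose(M, M_rebuilt))  # True
```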

An element M of V can act on either side: put a vector x (a column vector) to the right of M, and you get another vector back as a result of that "multiplication". Put a linear functional (row vector) to the left of it, and you get back another linear functional (row vector). Put both a vector to the right and a linear functional to the left, and you get back a real number.

The matrix M itself can be thought of as two linear functionals on R3 (row vectors) attached to two particular vectors (column vectors) in R2. The topmost row of M is a linear functional attached to the vector (1,0) (in column form), and the bottommost row of M is a linear functional attached to the vector (0,1) (in column form). When you put a vector (in R3) to the right of M, you are using those two linear functionals to determine coefficients a1, a2 (for the top and bottom rows, respectively) which will factor into the sum a1(1,0) + a2(0,1).

You can also paint a similar picture for multiplying a linear functional (row vector) on the left of M. In this case, M can be thought of as consisting of three vectors (in R2), each attached to one of the row vectors (1,0,0), (0,1,0), and (0,0,1). However, since these three attached vectors live in R2, they cannot all be linearly independent. So you really only need two vectors attached (respectively) to two row vectors, although this time the row vectors may not be basis vectors.
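The "both sides" behaviour described above is just ordinary matrix multiplication. A quick sketch (values chosen arbitrarily for illustration):

```python
import numpy as np

M = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

x = np.array([1.0, 0.0, 1.0])   # a (column) vector in R^3
f = np.array([2.0, -1.0])       # a linear functional (row vector) on R^2

print(M @ x)      # a vector in R^2:          [ 4. 10.]
print(f @ M)      # a linear functional on R^3: [-2. -1.  0.]
print(f @ M @ x)  # a real number:            -2.0
```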

So either way you think about it, you see that you really only need two bits of "elementary" information to represent any particular matrix M in V, even though there are 6 basis elements to V.

The "elementary" bits of information can be represented as a pair (x,x') in V1xV2 (modulo an equivalence relation).

For matrices, the number of "elementary" bits of information you need is just the maximal number of linearly independent rows (or columns) of M. For higher order tensors, I postulate that it is something a bit more complicated, but that it has an upper bound (which is achieved) of R (where R is defined above).
 
well it looks true for n=2. can you use induction?
 