Confused by this result for the tensor product of two vectors

In summary, the paper defines the tensor product as an operation that maps a pair of vectors with dimensions ##m## and ##n## to a single vector of dimension ##mn##, rather than to an ##m \times n## matrix. The column form may be more convenient for typesetting than it is mathematically, but the Kronecker product is commonly defined this way. There is a nice free sample chapter and walkthrough of Kronecker products in Laub's "Matrix Analysis for Scientists & Engineers" that addresses this question directly.
  • #1
Prez Cannady
Given two probability distributions ##p \in R^{m}_{+}## and ##q \in R^{n}_{+}## (the "+" subscript simply indicates non-negative elements), this paper (page 4) writes down the tensor product as

$$p \otimes q := \begin{pmatrix}
p(1)q(1) \\
p(1)q(2) \\
\vdots \\
p(1)q(n) \\
\vdots \\
p(m)q(n)
\end{pmatrix}, $$

yet I expected to see

$$p \otimes q := \begin{pmatrix}
p(1)q(1) & p(1)q(2) & \cdots & p(1)q(n-1) & p(1)q(n) \\
p(2)q(1) & \ddots & & & p(2)q(n) \\
\vdots & & \ddots & & \vdots \\
p(m-1)q(1) & & & \ddots & p(m-1)q(n) \\
p(m)q(1) & p(m)q(2) & \cdots & p(m)q(n-1) & p(m)q(n)
\end{pmatrix}, $$

Am I missing something?
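
For concreteness, here is a minimal numerical sketch of the two layouts (made-up example vectors, not from the paper; numpy's `kron` and `outer` are used only for illustration):

```python
import numpy as np

# Made-up example distributions (m = 2, n = 3); not from the paper.
p = np.array([0.25, 0.75])       # p in R^m_+
q = np.array([0.2, 0.3, 0.5])    # q in R^n_+

# The paper's convention: one vector of dimension m*n, ordered
# p(1)q(1), p(1)q(2), ..., p(1)q(n), ..., p(m)q(n).
stacked = np.kron(p, q)          # shape (6,)

# The matrix I expected: entry (i, j) equal to p(i)q(j).
matrix = np.outer(p, q)          # shape (2, 3)

# Same numbers either way: flattening the matrix row by row
# recovers the stacked column.
assert np.allclose(stacked, matrix.reshape(-1))
```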
 
  • #2
Well, it's their definition, so they call the shots. At least they explain clearly what they mean:
Above, we have introduced the notation ⊗ to denote the tensor product, which in general maps a pair of vectors with dimensions ##m, n## to a single vector with dimension ##mn##
so all the elements you expected to see are present.
 
  • #3
I'd also prefer the matrix notation, because the tensor product then becomes an ordinary matrix multiplication: column times row. But who says you can't write a matrix as a column? I think the column form is more convenient for typesetting than it is mathematically, but in the end it depends on what you want to do with it.
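
A quick sketch of that "column times row" reading (made-up vectors, shown only for illustration):

```python
import numpy as np

# Made-up vectors; writing p as an m x 1 column and q as a 1 x n row,
# ordinary matrix multiplication "column times row" gives the m x n
# matrix with entries p(i)q(j).
p = np.array([0.25, 0.75])
q = np.array([0.2, 0.3, 0.5])

column_times_row = p[:, None] @ q[None, :]    # shape (2, 3)
assert np.allclose(column_times_row, np.outer(p, q))
```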
 
  • #4
Prez Cannady said:
Given two probability distributions ##p \in R^{m}_{+}## and ##q \in R^{n}_{+}## (the "+" subscript simply indicates non-negative elements), this paper (page 4) writes down the tensor product as

$$p \otimes q := \begin{pmatrix}
p(1)q(1) \\
p(1)q(2) \\
\vdots \\
p(1)q(n) \\
\vdots \\
p(m)q(n)
\end{pmatrix}, $$

yet I expected to see

$$p \otimes q := \begin{pmatrix}
p(1)q(1) & p(1)q(2) & \cdots & p(1)q(n-1) & p(1)q(n) \\
p(2)q(1) & \ddots & & & p(2)q(n) \\
\vdots & & \ddots & & \vdots \\
p(m-1)q(1) & & & \ddots & p(m-1)q(n) \\
p(m)q(1) & p(m)q(2) & \cdots & p(m)q(n-1) & p(m)q(n)
\end{pmatrix}, $$

Am I missing something?

For the Kronecker product this is actually a very common definition. In fact, if you use the standard definition of the Kronecker product,

##
\mathbf X \otimes \mathbf Y = \begin{bmatrix}
x_{1,1}\mathbf Y & \cdots & x_{1,n}\mathbf Y\\
\vdots & \ddots & \vdots \\
x_{m,1}\mathbf Y & \cdots & x_{m,n}\mathbf Y
\end{bmatrix}##

where ##\mathbf X## is ##m \times n##,
- - - -
and you then constrain ##\mathbf X## and ##\mathbf Y## to be column vectors, you really have no choice but to have

##\mathbf {xy}^* = \mathbf x \otimes \mathbf y^* ##

and
##\mathbf x^* \otimes \mathbf y = \mathbf y \mathbf x^* ##

but

##\mathbf x \otimes \mathbf y##

is a column vector.

Notation and definitions can be tweaked slightly to get very different-looking results, which is, unfortunately, confusing.
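
A quick numerical sanity check of those three identities (a sketch with made-up complex vectors; `*` here means conjugate transpose, and numpy's `kron` implements the block definition above):

```python
import numpy as np

# Made-up complex column vectors (m = 2, n = 3), just to exercise the
# conjugate transpose; any sizes would do.
x = np.array([[1.0 + 2.0j], [3.0 - 1.0j]])           # m x 1
y = np.array([[2.0 - 1.0j], [0.5j], [4.0 + 0.0j]])   # n x 1

# x y* = x (kron) y*  :  an m x n matrix
assert np.allclose(x @ y.conj().T, np.kron(x, y.conj().T))

# x* (kron) y = y x*  :  an n x m matrix
assert np.allclose(np.kron(x.conj().T, y), y @ x.conj().T)

# x (kron) y is just a column vector of dimension m*n
print(np.kron(x, y).shape)   # (6, 1)
```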

- - - -
There's a nice free 12-page sample chapter and walkthrough of Kronecker products in Laub's "Matrix Analysis for Scientists & Engineers" -- I gave an indirect link here:

https://www.physicsforums.com/threa...vector-subspace-of-r-2-2.929266/#post-5868072

(page 2 of said sample chapter addresses your question directly)
 

What is the tensor product of two vectors?

The tensor product of two vectors is an operation that combines a vector of dimension ##m## and a vector of dimension ##n## into a single object with ##mn## entries, which can be written either as a length-##mn## column vector or as an ##m \times n## matrix. It is commonly used in linear algebra and physics to relate two vector spaces.

How is the tensor product calculated?

The tensor product is calculated by multiplying every element of one vector by every element of the other, so that the entry indexed by ##(i, j)## is ##p(i)q(j)##. The resulting ##mn## products can then be arranged either as an ##m \times n## matrix or, as in the paper discussed above, stacked into a single column vector.
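
For example, with ##m = n = 2## (a made-up worked case, following the stacked-column convention from the thread above):

$$\begin{pmatrix} p(1) \\ p(2) \end{pmatrix} \otimes \begin{pmatrix} q(1) \\ q(2) \end{pmatrix} = \begin{pmatrix} p(1)q(1) \\ p(1)q(2) \\ p(2)q(1) \\ p(2)q(2) \end{pmatrix}.$$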

What does it mean if the result of the tensor product is confusing?

Usually it means the source is using a different convention for the same object: the ##mn## products ##p(i)q(j)## can be laid out as a matrix (the outer product) or flattened into a single column vector (the Kronecker product of two column vectors), as discussed in the thread above. It is worth checking which convention a paper defines before comparing results.

What are some practical applications of the tensor product?

The tensor product has many practical applications in mathematics, physics, and engineering. It is used in quantum mechanics, signal processing, and computer graphics, among other fields. It is also used to represent physical quantities such as force and angular momentum in a mathematical form.

Are there any limitations to the tensor product?

Yes, there are some limitations. The two vectors must belong to vector spaces over the same field, and the tensor product is not commutative: the order of the vectors matters, and ##p \otimes q## is in general different from ##q \otimes p## (the two are related by a permutation of the entries).
