Notation for vectors in tensor product space

In summary, the conversation discusses the correct notation for constructing composite states of a system of two (possibly interacting) spin-1/2 particles. It is agreed that the tensor product symbol ##\otimes## can be used to construct these states from the ##\mathbb{C}^2## vectors representing the individual spins: ##|s_1\rangle |s_2\rangle## is short for ##|s_1\rangle\otimes |s_2\rangle##, a vector in ##H\otimes H##, while the outer product ##|s_1\rangle \langle s_2|## is an operator, an element of ##H\otimes H^*##.
  • #1
hilbert2
TL;DR Summary
About writing the state vector of a composite system
Suppose I have a system of two (possibly interacting) spin-1/2 particles. Then the state of each separate spin can be written as a ##\mathbb{C}^2## vector, and the spin operators are built from Pauli matrices. For instance, the matrices

##\sigma_z \otimes \hat{1}## and ##\hat{1} \otimes \sigma_z##,

which are tensor products of the z-direction spin matrix acting on one spin and a unit matrix acting on the other, correspond to the spin-z operators of the individual spins.

Now, in the bra-ket notation it is easy to write the product form states of the composite system as something like ##|s_1 \rangle |s_2 \rangle##. Is it also a correct notation to use the tensor product symbol ##\otimes## for constructing these from the ##\mathbb{C}^2## vectors:

##|s_1 \rangle |s_2 \rangle = \begin{bmatrix}a_1 \\ a_2 \end{bmatrix}\otimes\begin{bmatrix}b_1 \\ b_2 \end{bmatrix}##,

with ##a_1 ,a_2 ,b_1## and ##b_2## being the complex number components of ##|s_1 \rangle## and ##|s_2 \rangle## ?
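
For concreteness, here is roughly what I have in mind numerically. This is only a sketch (in R, with kronecker() standing in for ##\otimes##), and the component values are made-up examples:

```r
## Pauli z matrix and the 2x2 unit matrix
sigma_z <- matrix(c(1,  0,
                    0, -1), nrow = 2, byrow = TRUE)
id2 <- diag(2)

## spin-z operators of the individual spins, acting on the composite space
Sz1 <- kronecker(sigma_z, id2)   # sigma_z (x) 1, acts on the first spin
Sz2 <- kronecker(id2, sigma_z)   # 1 (x) sigma_z, acts on the second spin

## the single-spin states |s1> and |s2> as C^2 vectors (arbitrary example components)
s1 <- c(1 + 0i, 2 + 1i)          # (a1, a2)
s2 <- c(0 + 1i, 3 - 2i)          # (b1, b2)

## the product state |s1>|s2> as a C^4 vector
s12 <- kronecker(s1, s2)         # components (a1*b1, a1*b2, a2*b1, a2*b2)
```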
 
  • #2
I don't really understand the Dirac notation, but I would say it is
##|s_1\rangle \langle s_2| = \begin{bmatrix}a_1\\a_2\end{bmatrix} \otimes \begin{bmatrix}b_1\\b_2\end{bmatrix}## since ##|s_1\rangle## is a column vector and ##\langle s_2|## a row vector as I understood it.
 
  • #3
fresh_42 said:
I don't really understand the Dirac notation, but I would say it is
##|s_1\rangle \langle s_2| = \begin{bmatrix}a_1\\a_2\end{bmatrix} \otimes \begin{bmatrix}b_1\\b_2\end{bmatrix}## since ##|s_1\rangle## is a column vector and ##\langle s_2|## a row vector as I understood it.
But on the right you have two column vectors. I'd say it is as written. This ##|s_1\rangle |s_2\rangle## is short for ##|s_1\rangle\otimes |s_2\rangle##.
 
  • #4
martinbn said:
But on the right you have two column vectors.
In the tensor notation it is irrelevant how you write the vectors, because the tensor product completely determines the result. By the way, you made use of this fact in your suggested abbreviation. Written as matrix multiplication it is ##s_1s_2^\tau = s_1\otimes s_2## where the vectors are columns. A dyad is column times row. If ##|s_1\rangle |s_2\rangle## is short for ##|s_1\rangle \otimes |s_2\rangle##, what is ##|s_1\rangle \langle s_2|## then?
 
  • #5
My first thought was that the operator ##|s_1 \rangle \langle s_2 |## would just be the product

##\begin{bmatrix}a_1 \\ a_2 \end{bmatrix}\begin{bmatrix}b_1 & b_2\end{bmatrix} = \begin{bmatrix}a_1 b_1 & a_1 b_2 \\ a_2 b_1 & a_2 b_2 \end{bmatrix}##

but then it wouldn't have the whole 4-dimensional state space as its domain... You could probably write it that way if the ##|s_1 \rangle## and ##|s_2 \rangle## are, for instance, the ##s_z = 1/2## and ##s_z = -1/2## eigenstates of a single spin. Or the ##s_x = 1/2## and ##s_y = 1/2## eigenstates.

Thanks for the comments, anyway.
 
  • #6
hilbert2 said:
but then it wouldn't have the whole 4-dimensional state space as its domain...
But linear combinations of them do. You can never span a four-dimensional space with two vectors, however you define their binary operation.
 
  • #7
I remember having used the "kronecker()" function of R language when constructing matrix representations of operators that act on a quantum system composed from two non-interacting subsystems. It did produce results that were the same as calculated on pen and paper, and I think it is the same product that is meant with the ##\otimes## symbol.
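Roughly this kind of check, for example (just a sketch; the hand-computed results here are ##\sigma_z \otimes \hat{1} = \mathrm{diag}(1,1,-1,-1)## and ##\hat{1} \otimes \sigma_z = \mathrm{diag}(1,-1,1,-1)##):

```r
sigma_z <- matrix(c(1,  0,
                    0, -1), nrow = 2, byrow = TRUE)

Sz1 <- kronecker(sigma_z, diag(2))   # sigma_z (x) 1
Sz2 <- kronecker(diag(2), sigma_z)   # 1 (x) sigma_z

## compare with the pen-and-paper matrices
all.equal(Sz1, diag(c(1, 1, -1, -1)))   # TRUE
all.equal(Sz2, diag(c(1, -1, 1, -1)))   # TRUE
```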
 
  • #8
hilbert2 said:
I remember having used the "kronecker()" function of R language when constructing matrix representations of operators that act on a quantum system composed from two non-interacting subsystems. It did produce results that were the same as calculated on pen and paper, and I think it is the same product that is meant with the ##\otimes## symbol.
Sure. But the question was about the notation, not about the fact that the Kronecker product and the tensor product are the same.
 
  • #9
fresh_42 said:
In the tensor notation it is irrelevant how you write the vectors, because the tensor product completely determines the result. By the way, you made use of this fact in your suggested abbreviation. Written as matrix multiplication it is ##s_1s_2^\tau = s_1\otimes s_2## where the vectors are columns. A dyad is column times row. If ##|s_1\rangle |s_2\rangle## is short for ##|s_1\rangle \otimes |s_2\rangle##, what is ##|s_1\rangle \langle s_2|## then?
The standard physicist's notation (the way I understand it) is that things like ##|a\rangle, |b\rangle, |c\rangle## belong to some Hilbert space ##H##, while things like ##\langle a|, \langle b|, \langle c|## belong to the dual ##H^*##, and ##\langle a| (v) = \left(|a\rangle,v\right)_H##.

So ##|s_1\rangle |s_2\rangle\in H\otimes H##, while ##|s_1\rangle \langle s_2|\in H\otimes H^*##.
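
In a chosen basis the difference is easy to see numerically. A rough sketch (the components are arbitrary example values, and the bra ##\langle s_2|## is identified with the conjugate-transposed column vector):

```r
s1 <- matrix(c(1 + 2i, 3 - 1i), ncol = 1)   # |s1> as a column vector (example values)
s2 <- matrix(c(0 + 1i, 2 + 0i), ncol = 1)   # |s2>

ket_ket <- kronecker(s1, s2)      # |s1> (x) |s2> in H (x) H: a 4-component column
ket_bra <- s1 %*% Conj(t(s2))     # |s1><s2| in H (x) H^*: a 2x2 matrix, i.e. an operator on H
braket  <- Conj(t(s2)) %*% s1     # <s2|s1>: a 1x1 matrix holding the inner product
```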
 
  • #10
##|a\rangle \langle b|## is the outer product, if my memory of Dirac notation is correct. (Hopefully I got the LaTeX right.) For a column vector and a conjugated row vector it should be
##|a\rangle \langle b| = \begin{bmatrix}a_1 \\ a_2\end{bmatrix}\begin{bmatrix}b_1^* & b_2^*\end{bmatrix}##
 
  • #11
I had to double-check this, but the scalar product of two state vectors is written ##\langle\psi|\phi\rangle##, with the standard property ##\langle\psi|\phi\rangle=\langle \phi|\psi\rangle^*##.

The tensor product of two Hilbert spaces is often denoted
##\mathcal{H}=\mathcal{H}_1\otimes\mathcal{H}_2##
 
  • #12
Yes
hilbert2 said:
Summary: About writing the state vector of a composite system

Suppose I have a system of two (possibly interacting) spin-1/2 particles. Then the state of each separate spin can be written as a ##\mathbb{C}^2## vector, and the spin operators are built from Pauli matrices. For instance, the matrices

##\sigma_z \otimes \hat{1}## and ##\hat{1} \otimes \sigma_z##,

which are tensor products of the z-direction spin matrix acting on one spin and a unit matrix acting on the other, correspond to the spin-z operators of the individual spins.

Now, in the bra-ket notation it is easy to write the product form states of the composite system as something like ##|s_1 \rangle |s_2 \rangle##. Is it also a correct notation to use the tensor product symbol ##\otimes## for constructing these from the ##\mathbb{C}^2## vectors:

##|s_1 \rangle |s_2 \rangle = \begin{bmatrix}a_1 \\ a_2 \end{bmatrix}\otimes\begin{bmatrix}b_1 \\ b_2 \end{bmatrix}##,

with ##a_1 ,a_2 ,b_1## and ##b_2## being the complex number components of ##|s_1 \rangle## and ##|s_2 \rangle## ?
As far as I am familiar with it, the above notation is accurate for the tensor product:
##|s_1 \rangle |s_2 \rangle = \begin{bmatrix}a_1 \\ a_2 \end{bmatrix}\otimes\begin{bmatrix}b_1 \\ b_2 \end{bmatrix}##,

Though I am not familiar enough with tensor products in Dirac notation to answer with full confidence, I would surmise that the answer to both of your questions is yes.

Lol, I took too long a break from physics.
 
  • #13
hilbert2 said:
Is it also a correct notation to use the tensor product symbol ##\otimes## for constructing these from the ##\mathbb{C}^2## vectors:

##|s_1 \rangle |s_2 \rangle = \begin{bmatrix}a_1 \\ a_2 \end{bmatrix}\otimes\begin{bmatrix}b_1 \\ b_2 \end{bmatrix}##,

with ##a_1 ,a_2 ,b_1## and ##b_2## being the complex number components of ##|s_1 \rangle## and ##|s_2 \rangle##?
The idea is correct: it is a vector in the four-dimensional tensor product space of the Hilbert space with itself. But "the" complex number components of ##|s_1 \rangle## and ##|s_2 \rangle## don't exist. Components only come into play after a basis has been chosen. The LHS of your equation is an abstract vector; the RHS implies that a basis has been chosen.
 
  • #14
Don't mix representations, i.e., the matrix-vector notation with components of the various objects of linear algebra with respect to a basis, with the basis-independent objects themselves. This almost always leads to confusion. The great thing about Dirac's notation is that it uses only the basis-independent objects and easily lets you derive anything in terms of a representation (i.e., using some specific basis) if necessary.

Now let's look at the different products which have occurred in this thread so far. Let's start with one Hilbert space ##\mathcal{H}##. By definition you have a scalar product, i.e., a sesquilinear form mapping two vectors to a complex number; the notation is ##\langle a|b \rangle## with ##|a \rangle## and ##|b \rangle## arbitrary vectors in ##\mathcal{H}##.

Then there's the construct ##|b \rangle \langle a|##. As the notation suggests, that's a linear map taking any vector ##|c \rangle \in \mathcal{H}## to another vector, namely ##|b \rangle \langle a|c \rangle \in \mathcal{H}##.

Now there are also direct products of two Hilbert spaces ##\mathcal{H}_1## and ##\mathcal{H}_2##, leading to a new Hilbert space ##\mathcal{H}=\mathcal{H}_1 \otimes \mathcal{H}_2##. The new Hilbert space is spanned by the direct products of vectors from the two spaces. These special vectors in ##\mathcal{H}## are written as ##|a \rangle \otimes |b \rangle## with ##|a \rangle \in \mathcal{H}_1## and ##|b \rangle \in \mathcal{H}_2##. The tensor product is by definition linear in both arguments, and from these product vectors you can construct any vector in ##\mathcal{H}## by linear combinations. Furthermore, one defines the scalar product on ##\mathcal{H}## through the product states and the usual sesquilinearity of the scalar product by
$$(\langle a_1| \otimes \langle b_1|)(|a_2 \rangle \otimes |b_2 \rangle)=\langle a_1 | a_2 \rangle \langle b_1|b_2 \rangle.$$
Some authors simply write ##|a \rangle |b \rangle## or even ##|ab \rangle## for product states. The meaning is always the same.
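
A quick numerical check of the factorization property above, as a sketch with arbitrary example vectors; the helper function inner() here is just shorthand for the standard scalar product with the bra entry conjugated:

```r
inner <- function(x, y) sum(Conj(x) * y)    # <x|y>, conjugate-linear in the first argument

a1 <- c(1 + 1i, 2 - 1i); a2 <- c(0 + 2i, 1 + 0i)   # example vectors in H_1
b1 <- c(3 - 2i, 1 + 1i); b2 <- c(1 + 0i, 1 - 1i)   # example vectors in H_2

lhs <- inner(kronecker(a1, b1), kronecker(a2, b2))  # (<a1| (x) <b1|)(|a2> (x) |b2>)
rhs <- inner(a1, a2) * inner(b1, b2)                # <a1|a2> <b1|b2>
all.equal(lhs, rhs)   # TRUE
```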
 
  • #15
Thanks Vanhees71. Even though this isn't my thread, the reminder helped me with details I had forgotten. It's been far too long since I last worked with the notation (though it's extremely useful).

Going to have to sit down and study; thankfully I still have my textbook collection...
 
  • #16
Hopefully it isn't considered a hijack to ask a quick related question on the notation; if so, I apologize in advance.

Is this correct: ##|i\rangle\langle i|\psi\rangle##, with the ket-bra ##|i\rangle\langle i|## being the projection operator acting on the state ##|\psi\rangle##?
 
  • #17
Sure, it's a valid expression. If ##|i \rangle## is normalized to 1, i.e., ##\langle i|i \rangle=1##, then ##\hat{P}=|i \rangle \langle i|## is indeed the projection operator onto the direction of ##|i \rangle##.
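
As a concrete illustration, a sketch with made-up vectors (##|i\rangle## is normalized so that ##\hat{P}## is idempotent):

```r
i_ket <- c(1, 1i) / sqrt(2)        # a normalized example state |i>, <i|i> = 1
P <- i_ket %*% Conj(t(i_ket))      # P = |i><i| as a 2x2 matrix

psi <- c(2 - 1i, 0 + 3i)           # an arbitrary state |psi>
P %*% psi                          # |i><i|psi>: the projection of |psi> onto |i>
all.equal(P %*% P, P)              # TRUE: P is idempotent, as a projector should be
```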
 
  • #18
Thanks again
 

1. What is a vector in tensor product space?

A vector in a tensor product space ##\mathcal{H}_1 \otimes \mathcal{H}_2## is a linear combination of product vectors ##|a\rangle \otimes |b\rangle## with ##|a\rangle \in \mathcal{H}_1## and ##|b\rangle \in \mathcal{H}_2##. Products of basis vectors of the factor spaces form a basis of the product space, so for two spin-1/2 systems the product space is four-dimensional.

2. How is a vector in tensor product space notated?

In Dirac notation a product vector is written ##|a\rangle \otimes |b\rangle##, often abbreviated to ##|a\rangle|b\rangle## or ##|ab\rangle##. Once a basis is chosen, it can also be written as the column vector obtained from the Kronecker product of the component vectors of the factors.

3. What is the significance of the order of the factors in the notation?

The order matters because each slot refers to a particular subsystem: in ##|a\rangle \otimes |b\rangle## the first factor describes the first spin and the second factor the second spin, so in general ##|a\rangle \otimes |b\rangle \neq |b\rangle \otimes |a\rangle##.

4. How do you perform operations on vectors in tensor product space?

Addition and scalar multiplication work componentwise, as in any vector space. An operator acting on a single subsystem is extended with the identity on the other factor, e.g. ##\sigma_z \otimes \hat{1}##, and the scalar product of product vectors factorizes: ##(\langle a_1|\otimes\langle b_1|)(|a_2\rangle\otimes|b_2\rangle)=\langle a_1|a_2\rangle\langle b_1|b_2\rangle##.

5. Can vectors in tensor product space be visualized?

Only to a limited extent. The product space of two spin-1/2 systems is already four-dimensional and complex, so arrows or coordinate pictures are at best schematic and should be used with caution.
