# Probabilities with a Bell state

If we consider the singlet state: $$(|+-\rangle-|-+\rangle)/\sqrt{2}$$

And operators $$A=B=\textrm{diag}(1,-1)$$

I saw in a lecture that we can consider $$A\otimes B$$; it has the degenerate eigenvalues 1 and -1 (each with multiplicity 2).
It was then said: we can choose orthonormal basis vectors in each eigenspace.

Hence in this case $$p(+-)$$ and $$p(-+)$$ could be chosen as different.

But then I thought: whereas if we diagonalize before the tensor product, the eigenvalues are not degenerate and we cannot choose the vectors, so we always get equiprobability: $$p(+-)=p(-+)$$ and $$p(++)=p(--)$$.

So which order is the right one?
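As a concrete check (my own sketch, not part of the original question), one can build ##A\otimes B## explicitly with NumPy and verify both the degeneracy of its eigenvalues and the outcome probabilities for the singlet state:

```python
# Sketch (not from the thread): A⊗B for A = B = diag(1, -1), its
# degenerate eigenvalues, and the joint-outcome probabilities for the
# singlet state, in the product basis {++, +-, -+, --}.
import numpy as np

A = np.diag([1.0, -1.0])
B = np.diag([1.0, -1.0])
AB = np.kron(A, B)                     # A⊗B on the 4D system space

# Eigenvalues: +1 (from ++ and --) and -1 (from +- and -+), each twice
print(np.linalg.eigvalsh(AB))          # [-1. -1.  1.  1.]

# Singlet (|+-> - |-+>)/sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)

# p(ab) = |<ab|psi>|^2 is fixed by the state, not by any basis choice
for label, idx in [("++", 0), ("+-", 1), ("-+", 2), ("--", 3)]:
    e = np.zeros(4)
    e[idx] = 1.0                       # product-basis vector |ab>
    print(label, abs(e @ psi) ** 2)    # ≈ 0, 0.5, 0.5, 0
```

The probabilities come out ##0, 1/2, 1/2, 0## regardless of how one picks basis vectors inside the degenerate eigenspaces.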

PeterDonis
Mentor
And operators

What Hilbert space are these operators on? I suspect you are thinking of them as operating on each subsystem individually; but that means each one operates as the identity on the other subsystem. You have to take that into account.

It was then said: we can choose orthonormal basis vectors in each eigenspace.

Yes, but note that that is really a way to construct an orthonormal set of basis vectors on the Hilbert space of the overall system, which is a (symmetric or antisymmetric) tensor product of the two subsystem Hilbert spaces. The four states ##++##, ##+-##, ##-+##, ##--## (with appropriate normalization factors) are a set of such basis vectors for the system Hilbert space, which has four dimensions (two for each subsystem).

Whereas if we diagonalize before the tensor product

What do you mean by this? Diagonalizing is something you do to operators once you've already defined the Hilbert space they operate on. If you haven't yet done the tensor product, you can't diagonalize any operator on the full Hilbert space; and your definitions of the operators ##A## and ##B## already have them diagonalized on the subsystem Hilbert spaces. So I don't understand what you are trying to do differently here.

PeterDonis
Mentor
in this case

$$p(+-)$$

and

$$p(-+)$$

could be chosen as different.

What do these ##p##'s refer to?

##p(+-)## is the probability that the measurement of A gives + and B gives -, for example.

So I have to consider that the end state is the eigenvector corresponding to the eigenvalue + for the operator ##A\otimes\mathbb{1}## and - for ##\mathbb{1}\otimes B##?

PeterDonis
Mentor
##p(+-)## is the probability that the measurement of A gives + and B gives -, for example.

Then these cannot depend on how you choose basis vectors; measurement probabilities are basis independent, since they're actual observables.

So I have to consider that the end state is the eigenvector corresponding to the eigenvalue + for the operator ##A\otimes\mathbb{1}## and - for ##\mathbb{1}\otimes B##?

For the result ##+-##? No; the end state is the eigenvector ##| +- \rangle## for the operator ##A \otimes B##. The operators ##A \otimes \mathbb{1}## and ##\mathbb{1} \otimes B## only measure one particle (A or B, respectively), not both. Technically, you could model the measurement of both particles as two separate measurements, each using one of the latter two operators; but the end result is the same anyway.
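To illustrate this point numerically (a sketch of mine, not from the post): the two one-particle measurements are described by commuting projectors, so performing them in either order reproduces the joint probability ##p(+-)##.

```python
# Sketch: the joint measurement modeled as two commuting one-particle
# measurements, via projectors on the full 4D Hilbert space.
import numpy as np

plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])
I2 = np.eye(2)

P_Aplus = np.kron(np.outer(plus, plus), I2)     # (|+><+|) ⊗ 1 : "A gives +"
P_Bminus = np.kron(I2, np.outer(minus, minus))  # 1 ⊗ (|-><-|) : "B gives -"

psi = (np.kron(plus, minus) - np.kron(minus, plus)) / np.sqrt(2)  # singlet

# The projectors commute, so the order of the two measurements is
# irrelevant; the joint probability is <psi| P_A+ P_B- |psi>.
assert np.allclose(P_Aplus @ P_Bminus, P_Bminus @ P_Aplus)
p = psi @ (P_Aplus @ P_Bminus) @ psi
print(p)   # ≈ 0.5
```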

So, returning to the original question: should we consider $$p(+-)=|\langle +-|\Psi\rangle|^2$$

with ##|+-\rangle=|+\rangle|-\rangle## where ##A|+\rangle=|+\rangle##,

or should I write ##A\otimes B|+-\rangle=|+-\rangle##, where there is an extra parameter for 'choosing' the eigenvector, like ##|+-\rangle=\cos(\alpha)|+\rangle|-\rangle+\sin(\alpha)|-\rangle|+\rangle## ?

The latter sounds a bit wrong, or like cheating, to me.

PeterDonis
Mentor
should we consider

$$p(+-)=|\langle +-|\Psi\rangle|^2$$

with ##|+-\rangle=|+\rangle|-\rangle## where ##A|+\rangle=|+\rangle##,

You're still mixing up the two Hilbert spaces. The equation ##p(+-)=|\langle +-|\Psi\rangle|^2## requires that ##| \Psi \rangle## is a vector in the full system's Hilbert space. (It's correct given that clarification.) But ##A|+\rangle=|+\rangle## is only talking about the one-particle Hilbert space. In the full Hilbert space you would have to write ##A \otimes \mathbb{1} |+ s\rangle=|+ s\rangle##, where ##s## can be any one-particle spin (not necessarily ##+## or ##-##).

or should I write ##A\otimes B|+-\rangle=|+-\rangle##, where there is an extra parameter for 'choosing' the eigenvector

I don't know what you mean by "a parameter for choosing the eigenvector". ##A \otimes B## is an operator on the system Hilbert space; ##| +- \rangle## is an eigenvector of that operator. There is no parameter anywhere.

PeterDonis
Mentor
should I write ##A\otimes B|+-\rangle=|+-\rangle##

Given your definition of eigenvalues of the operators ##A## and ##B##, the correct expression would be ##A\otimes B|+-\rangle = -|+-\rangle##. The ##A## part of the operator contributes a ##+1## eigenvalue, and the ##B## part contributes a ##-1## eigenvalue; the overall eigenvalue (i.e., the factor that multiplies the ket on the RHS) is the product of the two.
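A quick numerical confirmation of this (my own check) that the eigenvalue is the product of the two one-particle eigenvalues:

```python
# Check: A⊗B |+-> = (+1)(-1) |+-> = -|+->
import numpy as np

A = B = np.diag([1.0, -1.0])
plus = np.array([1.0, 0.0])
minus = np.array([0.0, 1.0])
pm = np.kron(plus, minus)                    # |+->

assert np.allclose(np.kron(A, B) @ pm, -pm)  # eigenvalue -1, as stated
```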

In fact, what I wanted to say is:

Given ##A\otimes B##

We know the eigenvectors are ##|a\rangle\otimes |b\rangle##
But we can write ##A\otimes B=(A\otimes\mathbb{1})\cdot (\mathbb{1}\otimes B)##

But then the eigenvector is, as far as I know, not a tensor product; hence ##|a\rangle \,?\, |b\rangle##, with ##|a\rangle=\mathrm{Proj}(E_a,\Psi)##.

Hence the end state depends explicitly on the initial state? For the unknown operation, I thought: could it be the elementwise (Hadamard) product (which, as far as I know, has no applications)?

PeterDonis
Mentor
we can write ##A\otimes B=(A\otimes\mathbb{1})\cdot (\mathbb{1}\otimes B)##

Sort of, yes.

But then the eigenvector is afaik not a tensor product

Why not? It's the same operator as before, you just factored it; so it has the same eigenvectors.

$$\underbrace{A\otimes B}_{2\times 1\,\otimes\, 2\times 1}=\underbrace{(A\otimes\mathbb{1})\cdot (\mathbb{1}\otimes B)}_{4\times 1\;?\;4\times 1}$$

The tensor product on the LHS gives a ##4\times 1## vector, whereas the ? operation on the RHS is supposed to combine two ##4\times 1## vectors into a single ##4\times 1## vector, so my question is: what is this operation?

vanhees71
Gold Member
2021 Award
I think you start with the (uniquely defined) 2D Hilbert space (your notation indicates it's the space for helicity states of a single photon) and then work in the eigenbasis of the operator ##\hat{h}##, which has eigenvalues ##\pm 1##. I.e., there's an orthonormal basis ##\{|1 \rangle, |-1 \rangle\}## fulfilling ##\hat{h} |\pm 1 \rangle=\pm |\pm 1 \rangle##.

Then you consider the helicity basis for the helicity of two photons, which is described by the tensor product of the single-photon-helicity space with itself. A basis for this space is ##|\lambda_1 \lambda_2 \rangle =|\lambda_1 \rangle \otimes |\lambda_2 \rangle##. The resulting Hilbert space is clearly four-dimensional. To understand the tensor product of operators, it's sufficient to know how they act on product states: given the linearity of the operators, you then know how they act on all linear combinations of product states, and these span the entire Hilbert space:
$$\hat{A} \otimes \hat{B} \, |\lambda_1 \rangle \otimes |\lambda_2 \rangle=(\hat{A} |\lambda_1 \rangle) \otimes (\hat{B} |\lambda_2 \rangle).$$
The operator ##\hat{A} \otimes \hat{B}##, provided it's self-adjoint (which is the case if ##\hat{A}## and ##\hat{B}## are self-adjoint), represents the observable that ##A## is measured on subsystem 1 (i.e., the helicity of one of the photons) and simultaneously ##B## is measured on subsystem 2 (i.e., the helicity of the other photon).

Now you prepared the helicities of your photons in the state
$$|\Psi \rangle = \frac{1}{\sqrt{2}} (|1,-1 \rangle - |-1,1 \rangle).$$
This implies that neither of the single photons has a determined helicity. Only the total helicity
$$\hat{H}=\hat{h}_1 + \hat{h}_2 :=\hat{h} \otimes \mathbb{1} + \mathbb{1} \otimes \hat{h}$$
has the determined value 0. ##\hat{H}## means to measure the helicity of photon 1 and the helicity of photon 2 and adding the two values. It's not the operator describing the joint measurement of the two helicities, which would be ##\hat{h} \otimes \hat{h}##.

The joint measurement of the two helicities, represented by ##\hat{h} \otimes \hat{h}##, does not have a determined pair of outcomes given the preparation in this state: the probability is ##1/2## for finding helicity +1 for photon 1 and (then necessarily) -1 for the other one, and ##1/2## for the reverse. The other two possible joint helicity results, (1,1) and (-1,-1), do not occur at all when the measurement is done on the prepared state.
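These statements can be checked directly (a NumPy sketch of mine, taking ##\hat h = \mathrm{diag}(1,-1)## in the helicity basis, as in the post):

```python
# Sketch: total helicity H = h⊗1 + 1⊗h is sharply 0 on the singlet,
# while the joint outcomes (1,-1) and (-1,1) each occur with prob. 1/2.
import numpy as np

h = np.diag([1.0, -1.0])
I2 = np.eye(2)
H = np.kron(h, I2) + np.kron(I2, h)   # h1 + h2 on the product space

e1 = np.array([1.0, 0.0])             # |1>
em1 = np.array([0.0, 1.0])            # |-1>
psi = (np.kron(e1, em1) - np.kron(em1, e1)) / np.sqrt(2)

print(H @ psi)                        # zero vector: total helicity is 0

# Joint-outcome probabilities |<lam1 lam2|psi>|^2
for label, v in [("(1,1)", np.kron(e1, e1)), ("(1,-1)", np.kron(e1, em1)),
                 ("(-1,1)", np.kron(em1, e1)), ("(-1,-1)", np.kron(em1, em1))]:
    print(label, abs(v @ psi) ** 2)   # ≈ 0, 0.5, 0.5, 0
```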

PeterDonis
Mentor
what is this operation ?

It's the composition of two operators on the 2-qubit Hilbert space. The composition of two operators is another operator on the same Hilbert space. You appear to be thinking of the ##\cdot## as being the same kind of thing as the ##\otimes##. It isn't.

Yes, I was a bit unclear: on the right we have the usual matrix product ##\cdot##.

The underbraces indicate the size of the eigenvectors.

PeterDonis
Mentor
on the right we have a usual matrix product

Which, in operator language, is just a composition of two operators. It's not the same as a tensor product, which is what ##\otimes## represents; ##\otimes## enlarges the Hilbert space, while ##\cdot## only makes sense to begin with for two operators on the same Hilbert space, and doesn't change the Hilbert space at all.

The underbraces indicate the size of the eigenvectors.

But on the RHS as you wrote it, they don't. That's my point. The eigenvectors of the RHS are ##2 \times 1 \otimes 2 \times 1##, just like the LHS. The ##\cdot## operation does not increase the size of the Hilbert space or the dimension of the eigenvectors; if we have two operators ##X## and ##Y##, both on the same Hilbert space, then the eigenvectors of ##X \cdot Y## are of the same "size" as the eigenvectors of ##X## and ##Y## by themselves. That's why I said ##\cdot## doesn't work the way ##\otimes## does.
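A minimal numerical illustration of this distinction (my sketch): composing ##A\otimes\mathbb{1}## and ##\mathbb{1}\otimes B## with the ordinary matrix product reproduces ##A\otimes B## exactly, and everything stays on the same 4D space.

```python
# Sketch: (A⊗1)·(1⊗B) equals A⊗B; the · is ordinary matrix
# multiplication on the 4D space and does not enlarge it.
import numpy as np

A = B = np.diag([1.0, -1.0])
I2 = np.eye(2)

lhs = np.kron(A, B)                    # tensor product: 2D ⊗ 2D -> 4D
rhs = np.kron(A, I2) @ np.kron(I2, B)  # composition, already on the 4D space

assert np.allclose(lhs, rhs)
print(lhs.shape, rhs.shape)            # (4, 4) (4, 4): same Hilbert space
```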

vanhees71
Gold Member
2021 Award
No, the ##\otimes## symbol indicates a tensor product. If you have two vector spaces ##V_1## and ##V_2##, their tensor product ##V_1 \otimes V_2## is another vector space, consisting of pairs of vectors, written as ##|v_1 \rangle \otimes |v_2 \rangle## with ##|v_1 \rangle \in V_1## and ##|v_2 \rangle \in V_2##, and linear combinations thereof. The vector space ##V_1 \otimes V_2## "inherits" its linear structure in the natural way (if needed, I can give the details, but it's pretty obvious). It's easy to see that if ##|u_{i} \rangle## and ##|v_{j} \rangle## are bases of ##V_1## and ##V_2##, respectively, then ##|u_{i} \rangle \otimes |v_{j} \rangle## is a basis of ##V_1 \otimes V_2##. If the dimensions of the original spaces are finite, then ##\text{dim}(V_1 \otimes V_2)=\text{dim} V_1 \, \text{dim} V_2##. If ##V_1## and ##V_2## have defined scalar products, the product space also inherits a scalar product in the obvious way:
$$\big(\langle v_1| \otimes \langle v_2|\big)\big(|w_1 \rangle \otimes |w_2 \rangle\big)=\langle v_1|w_1 \rangle \langle v_2 | w_2 \rangle.$$

Also, the operators ##\hat{A}_1## and ##\hat{A}_2##, which are linear mappings of ##V_1## and ##V_2##, respectively, onto themselves, are combined into an operator ##\hat{A}_1 \otimes \hat{A}_2## acting as a linear operator on ##V_1 \otimes V_2##:
$$\hat{A}_1 \otimes \hat{A}_2 |v_1 \rangle \otimes |v_2 \rangle=(\hat{A}_1 |v_1 \rangle) \otimes (\hat{A}_2 |v_2 \rangle).$$
In this way you have defined the operator on the entire product space ##V_1 \otimes V_2##.
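The defining relation above can be verified numerically even for random operators and vectors (my own sketch; NumPy's `kron` is the matrix representation of ##\otimes## in the product basis):

```python
# Check of (A1⊗A2)(|v1>⊗|v2>) = (A1|v1>)⊗(A2|v2>) for random inputs.
import numpy as np

rng = np.random.default_rng(0)
A1 = rng.normal(size=(2, 2))
A2 = rng.normal(size=(2, 2))
v1 = rng.normal(size=2)
v2 = rng.normal(size=2)

lhs = np.kron(A1, A2) @ np.kron(v1, v2)
rhs = np.kron(A1 @ v1, A2 @ v2)
assert np.allclose(lhs, rhs)   # the mixed-product property of ⊗
```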

It may look a bit dull to memorize all this formalism, but it's necessary to understand the very important topic of how to treat composite systems, and it's pretty obvious too. One just has to get used to it!

PeterDonis
Mentor
Yes that's what I wanted to say

No, it isn't. Go back and read my post #15 again, carefully.

the eigenvector of ##A\otimes \mathbb{1}## is ##4\times 1##, the one of ##\mathbb{1}\otimes B## is ##4\times 1##

No, each of these has a ##2 \times 1 \otimes 2 \times 1## eigenvector. Therefore their composition does too. Exactly as on the LHS of your equation. As I've already said in post #15.

vanhees71
Gold Member
2021 Award
I've never ever seen this notation. My point is that you have to properly define what you mean with your notation. What does the expression ##2 \times 1 \otimes 2 \times 1## refer to?

PeterDonis
Mentor
What does the expression ##2 \times 1 \otimes 2 \times 1## refer to?

I was just trying to match the OP's notation in post #11. I agree it's not really proper notation.

vanhees71
Gold Member
2021 Award
Ok, then I have to ask the OP what he means by this notation, which I've never encountered before.

I use ##n\times 1## to say it is a vector of this size, instead of writing ##\in\mathbb{R}^n##.

vanhees71