Commutativity of the Born Rule

  • Thread starter the_pulp
  • Tags
    Born rule
  • #1
the_pulp
The Born rule, written in the following way:

P(ψ|[itex]\varphi[/itex]) = |<ψ|[itex]\varphi[/itex]>|^2

has the consequence that

P(ψ|[itex]\varphi[/itex]) = P([itex]\varphi[/itex]|ψ)

I don't see it as an obvious fact from real life, do you? Why does it happen? Is there any intuitive reasoning / experience behind this commutativity of states?

Thanks!
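The symmetry the OP is asking about is easy to verify numerically. A minimal sketch (Python/NumPy, with randomly chosen example states; not part of the original post):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state(dim):
    """A random normalized complex vector, standing in for a pure state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

psi = random_state(4)
phi = random_state(4)

# Born rule in both directions; |<psi|phi>| = |<phi|psi>| by construction
p_psi_given_phi = abs(np.vdot(psi, phi)) ** 2
p_phi_given_psi = abs(np.vdot(phi, psi)) ** 2

assert np.isclose(p_psi_given_phi, p_phi_given_psi)
```

Note that `np.vdot` conjugates its first argument, so it computes the Hermitian inner product ⟨ψ|φ⟩.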
 
  • #2
the_pulp said:
I don't see it as an obvious fact from real life, do you? Why does it happen? Is there any intuitive reasoning / experience behind this commutativity of states?
It's not "commutativity". More like time reversal invariance, which usually appears in theories based on Galilean/Poincaré symmetry unless specific techniques are employed to avoid it.
 
  • #3
For unit vectors, ##\left|\langle x,y\rangle\right|## is the cosine of the angle between the lines through 0 (i.e. 1-dimensional subspaces) that contain x and y.

The Cauchy-Schwarz inequality says that for any two vectors x, y, we have ##\left|\langle x,y\rangle\right|\leq\|x\|\|y\|##. If we had been dealing with a real vector space, this would have implied that
$$-1\leq\frac{\langle x,y\rangle}{\|x\|\|y\|}\leq 1,$$ and this would have allowed us to define the angle θ between the two vectors by
$$\cos\theta=\frac{\langle x,y\rangle}{\|x\|\|y\|}.$$ Since we're dealing with a complex vector space, this doesn't quite work. So we can't define the angle between the vectors themselves, but we can define the angle θ between the 1-dimensional subspaces that contain x and y by
$$\cos\theta=\frac{\left|\langle x,y\rangle\right|}{\|x\|\|y\|}.$$ If x and y are unit vectors, i.e. if ##\|x\|=\|y\|=1##, the right-hand side of course reduces to ##\left|\langle x,y\rangle\right|##.
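A minimal numerical sketch of this subspace angle (Python/NumPy; the helper name `subspace_cos` is just illustrative), showing that it is insensitive to normalization and overall phases:

```python
import numpy as np

rng = np.random.default_rng(1)

def subspace_cos(x, y):
    """Cosine of the angle between the 1-dimensional subspaces Cx and Cy."""
    return abs(np.vdot(x, y)) / (np.linalg.norm(x) * np.linalg.norm(y))

x = rng.normal(size=3) + 1j * rng.normal(size=3)
y = rng.normal(size=3) + 1j * rng.normal(size=3)

# Rescaling either vector by any nonzero complex number (e.g. a phase)
# leaves the subspace angle unchanged...
assert np.isclose(subspace_cos(x, y), subspace_cos(2j * x, -3 * y))
# ...and Cauchy-Schwarz keeps the value in [0, 1], as a cosine should be.
assert 0.0 <= subspace_cos(x, y) <= 1.0
```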
 
Last edited:
  • #4
It's not "commutativity". More like time reversal invariance, which usually appears in theories based on Galilean/Poincaré symmetry unless specific techniques are employed to avoid it

For unit vectors, |⟨x,y⟩| is the cosine of the angle between the lines through 0 (i.e. 1-dimensional subspaces) that contain x and y.

Thanks for your answers, but I still can't see it. What I'm trying to say is that, with this axiom, it will always happen that, given two experiments E1 and E2, with eigenstates ψ1i and ψ2j, the two following experiments:

1. prepare the system to be in the state ψ1i, perform the experiment E2 and see how likely ψ2j is to happen

2. prepare the system to be in the state ψ2j, perform the experiment E1 and see how likely ψ1i is to happen

have the same probability.

I mean, it is an implicit axiom of the theory, but, before I started reading about QM and QFT, I was not expecting this to be obvious. So, is it an experimental fact? Was it obvious in 1920 that the laws of physics should imply this? Is it right to say "it's obvious that the probabilities of situation 1 and situation 2 should be the same"?

Thanks
 
  • #5
I don't know a reason to think that P(x|y)=P(y|x) should hold in all theories. It is however pretty obvious that it must hold in QM, since P(x|y) is just the angle between the subspaces that contain x and y.
 
  • #6
the_pulp said:
1. prepare the system to be in the state ψ1i, perform the experiment E2 and see how likely ψ2j is to happen

2. prepare the system to be in the state ψ2j, perform the experiment E1 and see how likely ψ1i is to happen

have the same probability.
Aren't these just time-reversed versions of each other?

Early in Weinberg vol 1, he uses assumptions of time-reversal invariance to derive the anti-linear nature of the time-reversal operator.
 
  • #7
strangerep said:
Aren't these just time-reversed versions of each other?
I find it hard to answer that due to the non-deterministic nature of measurements. But at least now I see what you had in mind.

I think Weinberg's approach makes it clear that a quantum theory possesses a certain type of invariance if and only if it contains an operator corresponding to that invariance. For example a theory is invariant under translations in the x direction of space if and only if it contains an "x component of momentum" operator. So a quantum theory should be time-reversal invariant if and only if it contains a time-reversal operator. But the result discussed in this thread holds even if there's no time-reversal operator in the theory.
 
  • #8
Fredrik said:
But the result discussed in this thread holds even if there's no time-reversal operator in the theory.
Maybe it's simpler to appeal to Bayes' theorem, aka the principle of "inverse probability".
Cf. Ballentine, p31.
$$
P(B|A\&C) ~=~ \frac{P(A|B\&C) \; P(B|C)}{P(A|C)}
$$
(and specialize to the case where ##C## is a certainty).

Of course, this is a consequence of assuming ##A\&B = B\& A## .

Edit: Actually this is a bit fuzzy so I probably need to go revise some (quantum) propositional logic...
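Bayes' theorem in the specialized form above can be checked numerically on a toy joint distribution (Python/NumPy; the specific numbers are just an example):

```python
import numpy as np

# Toy joint distribution P(A & B) over two binary events
joint = np.array([[0.1, 0.2],
                  [0.3, 0.4]])  # rows: A = 0, 1; columns: B = 0, 1

p_a = joint.sum(axis=1)  # marginal P(A)
p_b = joint.sum(axis=0)  # marginal P(B)

a, b = 1, 0
p_b_given_a = joint[a, b] / p_a[a]  # P(B|A) = P(A & B) / P(A)
p_a_given_b = joint[a, b] / p_b[b]  # P(A|B) = P(A & B) / P(B)

# Bayes' theorem: P(B|A) = P(A|B) P(B) / P(A), a consequence of A&B = B&A
assert np.isclose(p_b_given_a, p_a_given_b * p_b[b] / p_a[a])
```

Note that this classical identity relates P(B|A) and P(A|B) through the marginals; it does not by itself make them equal, which is what is special about the Born rule.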
 
Last edited:
  • #9
I'd love it if the answer were "it's because nature respects time reversal", but doesn't the weak interaction violate time reversal? So, what do we do?

Thanks for all the replies!
 
  • #10
the_pulp said:
I'd love it if the answer were "it's because nature respects time reversal", but doesn't the weak interaction violate time reversal? So, what do we do?
Use CPT invariance instead? :biggrin:
 
  • #11
Use CPT invariance instead?

Is that the real reason, or is it just a guess? It sounds promising. Can you expand a little, or give some reference that shows the link between the two? (I have the intuition that it may be the answer, but I'm not sure.)
 
  • #12
the_pulp said:
I'd love it if the answer were "it's because nature respects time reversal", but doesn't the weak interaction violate time reversal? So, what do we do?
What makes you think that something needs to be done? The theory doesn't need to be time-reversal invariant for ##\left|\langle x,y\rangle\right|=\left|\langle y,x\rangle\right|## to hold.
 
  • #13
Fredrik said:
What makes you think that something needs to be done? The theory doesn't need to be time-reversal invariant for ##\left|\langle x,y\rangle\right|=\left|\langle y,x\rangle\right|## to hold.


[itex]\vert \langle x \vert y \rangle \vert = \vert \langle y \vert x \rangle \vert [/itex]

makes perfect sense as a claim about vectors, but as the original poster said, it's weird as a statement about probabilities. As a statement about vectors, if you start with a vector [itex]\vert y \rangle[/itex] and project it along a different vector [itex]\vert x \rangle[/itex], you don't get a probabilistic result of 0 or 1, you get a deterministic result, that the projection has length [itex]\dfrac{\vert \langle x \vert y \rangle \vert}{\sqrt{\langle x \vert x \rangle}}[/itex].
 
  • #14
stevendaryl said:
[itex]\vert \langle x \vert y \rangle \vert = \vert \langle y \vert x \rangle \vert [/itex]

makes perfect sense as a claim about vectors, but as the original poster said, it's weird as a statement about probabilities. As a statement about vectors, if you start with a vector [itex]\vert y \rangle[/itex] and project it along a different vector [itex]\vert x \rangle[/itex], you don't get a probabilistic result of 0 or 1, you get a deterministic result, that the projection has length [itex]\dfrac{\vert \langle x \vert y \rangle \vert}{\sqrt{\langle x \vert x \rangle}}[/itex].
Right, but if x is a unit vector, the denominator is 1. If we for some reason choose to not work with unit vectors the way we usually do, the probability formula is
$$P(x|y)=\frac{\left|\langle x,y\rangle\right|^2}{\|x\|^2\|y\|^2}$$ so we still have ##P(x|y)=P(y|x)##, which is what he asked about. The intuitive way of looking at this is that ##P(x|y)## is the squared cosine of the angle between the 1-dimensional subspaces ##\mathbb Cx## and ##\mathbb Cy##.
 
Last edited:
  • #15
Fredrik said:
Right, but if x is a unit vector, the denominator is 1. If we for some reason choose to not work with unit vectors the way we usually do, the probability formula is
$$P(x|y)=\frac{\left|\langle x,y\rangle\right|^2}{\|x\|^2\|y\|^2}$$ so we still have ##P(x|y)=P(y|x)##, which is what he asked about. The intuitive way of looking at this is that ##P(x|y)## is the squared cosine of the angle between the 1-dimensional subspaces ##\mathbb Cx## and ##\mathbb Cy##.

My point is not about the vector space result. As I said, it's the interpretation of the result as a probability that is strange.
 
  • #16
I'm already puzzled by the question, because it doesn't make any sense to me, and I'm even more puzzled by the answers given so far. So, here are my 2 cents:

First of all, writing something like
[tex]P(\psi|\phi)=|\langle \psi|\phi \rangle|^2[/tex]
doesn't make sense, unless I misinterpret the symbols:

Usually the left-hand side denotes a conditional probability that an "event [itex]\psi[/itex]" occurs, given some "condition [itex]\phi[/itex]". The right-hand side takes two Hilbert-space vectors and the squared modulus of their scalar product (tacitly assuming that the vectors are normalized to 1, as I'll do also in the following). This vaguely resembles Born's rule, but it's not really Born's rule!

One has to remember the very basic postulates of quantum theory to carefully analyze the statement of Born's rule, which is partially mixed up with a lot of philosophical ballast known as "interpretation". Here I follow the minimal statistical interpretation, which is as free as possible from this philosophical confusion.

(1) A completely determined state of a quantum system is given by a ray in Hilbert space, represented by an arbitrary representative [itex]|\psi \rangle[/itex], or equivalently by the projection operator [itex]\hat{R}_{\psi}=|\psi \rangle \langle \psi |[/itex] as the statistical operator. Such a state is linked to a system by a preparation procedure that determines a complete set of compatible observables (a notion which is explainable only with the following other postulates).

(2) Any observable [itex]A[/itex] is represented by a self-adjoint operator [itex]\hat{A}[/itex]. The possible values this observable can take are given by the spectral values [itex]a[/itex] of this operator. Further, let [itex]|a \rangle[/itex] denote the corresponding (generalized) eigenvectors (I don't want to make this posting mathematically rigorous by formally introducing the spectral decomposition à la von Neumann, or the more modern but equivalent formalism of the "rigged Hilbert space").

(3) Two observables [itex]A[/itex] and [itex]B[/itex] are compatible if the corresponding operators have a complete set of (generalized) common eigenvectors [itex]|a,b \rangle[/itex]. One can show that the operators then necessarily commute: [itex][\hat{A},\hat{B}]=0.[/itex]

(4) A set of compatible independent observables [itex]\{A_{i} \}_{i=1,\ldots n}[/itex] is complete if any common (generalized) eigenvector [itex]|a_1,a_2,\ldots,a_n \rangle[/itex] is determined up to a (phase) factor, and none of these observables can be expressed as a function of the others. If a system is prepared such that such a complete set of independent compatible observables takes determined values, then the system is prepared in the corresponding state.

In the following, I assume that, in case some of the observables have a continuous part in their spectrum, the corresponding eigenvectors are smeared with a square-integrable weight such that one has a normalizable true Hilbert-space vector [itex]|\psi \rangle[/itex]. This is a mathematical but pretty important detail, which we can discuss later if necessary.

(5) If the system is prepared in this sense in a state, represented by the normalized vector [itex]|\psi \rangle[/itex], then the probability (density) to find the common values [itex](b_1,\ldots,b_n)[/itex] of (the same or another) complete set of compatible observables [itex]\{B_i \}_{i=1,\ldots,n}[/itex] is given by Born's rule,
[tex]P(b_1,\ldots,b_n|\psi)=|\langle b_1,\ldots,b_n|\psi \rangle|^2.[/tex]

Now the whole construct makes sense, because on the left-hand side we have the probability to measure certain values for a complete set of compatible independent observables, under the constraint that the system has been prepared in the state represented by [itex]\psi[/itex].

BTW, note that it's the modulus squared, not the modulus, which is also very important, because otherwise one wouldn't get a well-defined probability distribution in the sense of Kolmogorov's axioms!
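This form of Born's rule can be sketched numerically for a spin-1/2 system (Python/NumPy; the choice of preparing spin-up along z and measuring along x is just an example):

```python
import numpy as np

# Pauli matrices for the two observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Preparation: the system is in the +1 eigenstate of sigma_z ("spin up")
psi = np.array([1, 0], dtype=complex)
assert np.allclose(sz @ psi, psi)  # check: psi is the +1 eigenvector of sigma_z

# Measurement of sigma_x: Born probabilities from its eigenvectors
vals, vx = np.linalg.eigh(sx)
probs = np.abs(vx.conj().T @ psi) ** 2

assert np.isclose(probs.sum(), 1.0)    # a proper Kolmogorov distribution
assert np.allclose(probs, [0.5, 0.5])  # spin-z up measured along x
```

The modulus squared is what makes the probabilities sum to 1 here; the moduli alone would sum to √2.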

It is important to note that we have in some sense an asymmetry here: the one vector in Born's rule, [itex]|\psi \rangle[/itex], represents the state of a system, linked to the system by a preparation procedure that brings it into this state, while the other vector is the (generalized) common eigenvector of a complete set of compatible independent observables, referring to the measurement of this set of observables.

One should note that the clear distinction between the two vectors in Born's rule is crucial for the whole formalism to work, also mathematically. In particular, the dynamics of these vectors is described differently. The mathematical time dependence of both kinds of vectors is determined only up to a time-dependent unitary transformation, a freedom that is often made use of in practical calculations.

Usually, in non-relativistic quantum mechanics one starts with the Schrödinger picture, where the state kets carry the full time evolution, according to the equation
[tex]\mathrm{i} \hbar \partial_t |\psi,t \rangle = \hat{H} |\psi,t \rangle,[/tex]
where [itex]\hat{H}[/itex] is the Hamilton operator of the system. The operators and generalized eigenvectors of observables that are not explicitly time dependent are independent of time.

The other extreme is the Heisenberg picture, where the state ket is time-independent, but the operators representing observables, and thus their eigenvectors, carry the full time dependence through the equation of motion,
[tex]\frac{\mathrm{d} \hat{A}(t)}{\mathrm{d} t}=\frac{1}{\mathrm{i} \hbar} [\hat{A},\hat{H} ].[/tex]

The most general case is the Dirac picture, where both the state vector and the observables are time dependent, according to the equations of motion,
[tex]\mathrm{i} \hbar \partial_t |\psi,t \rangle = \hat{H}_1 |\psi,t \rangle[/tex]
[tex]\frac{\mathrm{d} \hat{A}(t)}{\mathrm{d} t}=\frac{1}{\mathrm{i} \hbar} [\hat{A},\hat{H}_2][/tex]
with an arbitrary decomposition of the Hamiltonian as
[tex]\hat{H}=\hat{H}_1+\hat{H}_2.[/tex]
A commonly used Dirac picture is the "interaction picture", where the observable operators evolve as for free particles, i.e., [itex]\hat{H}_2[/itex] is the kinetic energy only, and the states evolve according to the interaction part [itex]\hat{H}_1[/itex] of the Hamiltonian.

Of course, at the end all these descriptions must lead to the same physical results. Most importantly, all probabilities must be independent of the picture of time evolution, i.e., independent of the split of the Hamiltonian into the two parts within the Dirac picture (of which the Schrödinger and Heisenberg pictures are only special cases). As one can easily check, this works out only when the distinction between state vectors and the generalized eigenvectors of the observable operators is made clearly!
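That picture-independence can be sketched numerically (Python/NumPy; the Hamiltonian and state here are arbitrary examples): in the Schrödinger picture the state evolves while the eigenvector |a⟩ is fixed, in the Heisenberg picture the eigenvector evolves instead, and the Born probability comes out the same.

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary Hermitian Hamiltonian on a 3-dimensional space (hbar = 1)
h = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (h + h.conj().T) / 2

# Time-evolution operator U = exp(-i H t), built from the eigendecomposition
t = 0.7
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi0 = rng.normal(size=3) + 1j * rng.normal(size=3)
psi0 /= np.linalg.norm(psi0)
a = np.array([1, 0, 0], dtype=complex)  # eigenvector of some observable

# Schroedinger picture: the state evolves, the eigenvector is fixed
p_schrodinger = abs(np.vdot(a, U @ psi0)) ** 2
# Heisenberg picture: the eigenvector evolves as U^dagger |a>, the state is fixed
p_heisenberg = abs(np.vdot(U.conj().T @ a, psi0)) ** 2

assert np.isclose(p_schrodinger, p_heisenberg)
```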

So there is an asymmetry in the two vectors involved in Born's rule, and it's a physically very meaningful asymmetry!
 
  • #17
Sorry vanhees71, I am reading (4) and (5) from your post, but I don't see the difference from my formula. In fact, if you set b1, b2, ..., bn = phi, then you have my formula, right? Then you can interchange phi and psi and you get the same probability, and I still don't see a clear reason why it should be like this...

Sorry to insist, but I really don't see why what you wrote invalidates my question.

Nevertheless, I appreciate your answers very much
 
  • #18
Fredrik, as stevendaryl says, my question is not about the math (I understand it). My question is whether there is any reason that can be stated in a few words (I don't want to use the phrase "physical reason", but perhaps it helps you find what I am asking) to explain why the probability to go from [itex]\psi[/itex] to [itex]\varphi[/itex] and the probability to go the other way round are the same.

Thanks all the same
 
  • #19
I'm not sure this is right, but isn't one vector an eigenvector of an observable, while the other is an arbitrary vector?
 
  • #20
I'm not sure this is right, but isn't one vector an eigenvector of an observable, while the other is an arbitrary vector?

As I see it, both of them can be regarded as eigenvectors of an observable (or of a pair of observables, to be more general)
 
  • #21
the_pulp said:
As I see it, both of them can be regarded as eigenvectors of an observable (or of a pair of observables, to be more general)

What if I prepare a state which is a sum of eigenvectors?
 
  • #22
What if I prepare a state which is a sum of eigenvectors?
You mean a mixed state, right? Well, in that case it's different. If you start with a pure state and you finish with a pure state, then there is no difference, right? (This thread was intended for that case.)
 
  • #23
the_pulp said:
You mean a mixed state, right? Well, in that case it's different. If you start with a pure state and you finish with a pure state, then there is no difference, right? (This thread was intended for that case.)

No, I mean a pure state. I suppose my question is whether every state is an eigenvector of some observable.
 
  • #24
I don't see why that should be the case. Anyway, let's please stick to the easy example, because it seems we are going off topic. I repeat my first question: we have two states [itex]\varphi[/itex] and [itex]\psi[/itex] (let's suppose they describe the same system in two different situations, and there are observables associated with these states that allow us to detect exactly whether we are in one of them).

Then:

P(ψ|φ) = P(φ|ψ)

Besides the fact that the math implies this relation in an easy way, the question is: what is the physical reason that forces this to happen?

I will paste below another way in which I wrote it, perhaps it helps you to see my doubt:


What I'm trying to say is that, with this axiom, it will always happen that, given two experiments E1 and E2, with eigenstates ψ1i and ψ2j, the two following experiments:

1. prepare the system to be in the state ψ1i, perform the experiment E2 and see how likely ψ2j is to happen

2. prepare the system to be in the state ψ2j, perform the experiment E1 and see how likely ψ1i is to happen

have the same probability.

I mean, it is an implicit axiom of the theory, but, before I started reading about QM and QFT, I was not expecting this to be obvious. So, is it an experimental fact? Was it obvious in 1920 that the laws of physics should imply this? Is it right to say "it's obvious that the probabilities of situation 1 and situation 2 should be the same"?

Thanks
 
  • #25
Would your question be equivalent to asking why projective measurements are projective?
 
  • #26
the_pulp said:
Fredrik, as stevendaryl says, my question is not about the math (I understand it). My question is whether there is any reason that can be stated in a few words (I don't want to use the phrase "physical reason", but perhaps it helps you find what I am asking) to explain why the probability to go from [itex]\psi[/itex] to [itex]\varphi[/itex] and the probability to go the other way round are the same.
That follows immediately from the Born rule, so I guess you're asking why the Born rule holds. The Born rule is part of the definition of QM, so what you're really asking is why QM is a good theory (or rather, a good framework in which we can define theories of matter and interactions). The only thing that can answer that is a better theory.
 
  • #27
Fredrik said:
That follows immediately from the Born rule, so I guess you're asking why the Born rule holds. The Born rule is part of the definition of QM, so what you're really asking is why QM is a good theory (or rather, a good framework in which we can define theories of matter and interactions). The only thing that can answer that is a better theory.

That too is my understanding of what is being asked. I suppose the proposals for deriving the Born rule are:

1) Bohmian mechanics - works for non-relativistic cases, no consensus on relativistic cases

2) Many-worlds - widely believed to work, but I don't understand this derivation

3) Gleason's theorem - uncontroversial, but mystifying

4) Zurek's envariance - pretty new attempt, and no consensus yet
 
  • #28
atyy said:
I'm not sure this is right, but isn't one vector an eigenvector of an observable, while the other is an arbitrary vector?
This is true, but even the arbitrary vector is an eigenvector of some observable.
 
  • #29
Fredrik said:
This is true, but even the arbitrary vector is an eigenvector of some observable.

Is there a short proof, or can you point me to a reference? (Anyway, I do think it's not very relevant to the question, since I know this is true in many cases.)
 
  • #30
atyy said:
Is there a short proof, or can you point me to a reference? (Anyway, I do think it's not very relevant to the question, since I know this is true in many cases.)

A nonzero vector ##|u\rangle## is an eigenvector of ##|u\rangle\langle u|##.
 
  • #31
I don't want you to explain the interpretation of the Born rule, just a little part of it: the apparent symmetry between the initial and final state in the calculation of probabilities. I thought it might have a clear analogy in real life, but perhaps I was wrong and it is just as obscure as all the rest of the Born rule.

Thanks all the same for your usual help!

PS: I know that the ultimate justification of QM is its predictive power. I thought that this particular part of QM might have a proper interpretation.
 
  • #32
vanhees71 said:
I'm already puzzled by the question, because it doesn't make any sense to me, and I'm even more puzzled by the answers given so far. So, here are my 2 cents:

First of all, writing something like
[tex]P(\psi|\phi)=|\langle \psi|\phi \rangle|^2[/tex]
doesn't make sense, unless I misinterpret the symbols: Usually the left-hand side denotes a conditional probability that an "event [itex]\psi[/itex]" occurs, given some "condition [itex]\phi[/itex]". The right-hand side takes two Hilbert-space vectors and the squared modulus of their scalar product (tacitly assuming that the vectors are normalized to 1, as I'll do also in the following). This vaguely resembles Born's rule, but it's not really Born's rule!
The OP spoke of "the Born rule, written in the following way", so I assumed that the notation P(x|y) should be interpreted as "the probability that the measurement will have the result that leaves the system in the state x, given that the state before the measurement is y". This makes sense if we're talking about a measuring device that's represented by an operator such that x is one of its eigenvectors, and the eigenspace corresponding to the eigenvalue of x is the 1-dimensional subspace that contains x. So I assumed that we are talking about a measurement with such a measuring device.
vanhees71 said:
BTW, note that it's the modulus squared not the modulus
Oops. I see that I forgot the square in post #14. I just fixed that in an edit. Also, in post #5 I said that P(x|y) is the angle between the 1-dimensional subspaces ##\mathbb Cx## and ##\mathbb Cy##. I can't edit that anymore, so I'll just say here that it's the squared cosine of the angle, not the angle itself.

vanhees71 said:
It is important to note that we have in some sense an asymmetry here:
I don't follow this argument, and I don't think that there is an asymmetry, at least not one that's severe enough to cause any problems. A measurement that doesn't destroy the system is a preparation procedure, because it leaves the system in a known state. So both the "before" and "after" states have been produced by preparation procedures.

I suppose you could nitpick that statement by insisting that only a procedure that leaves the particles in the same state every time should be considered a preparation procedure, but that is only a matter of discarding the particle every time we get the "wrong" result.
 
  • #33
micromass said:
A nonzero vector ##|u\rangle## is an eigenvector of ##|u\rangle\langle u|##.

Are all eigenvalues of ##|u\rangle\langle u|## real?
 
  • #34
the_pulp said:
I don't want you to explain the interpretation of the Born rule, just a little part of it: the apparent symmetry between the initial and final state in the calculation of probabilities. I thought it might have a clear analogy in real life, but perhaps I was wrong and it is just as obscure as all the rest of the Born rule.

Thanks all the same for your usual help!

PS: I know that the ultimate justification of QM is its predictive power. I thought that this particular part of QM might have a proper interpretation.

But as Fredrik has been saying, this seems to follow from the idea that when you measure, you make a projection. So your question is like asking: why a projection?
 
  • #35
atyy said:
Are all eigenvalues of ##|u\rangle\langle u|## real?
That's a projection operator (for a unit vector ##u##), so it only has eigenvalues 0 and 1.

Note that if ##P^2=P## and ##Px=\lambda x## where ##\lambda## is a complex number and ##\|x\|\neq 0##, we have ##P^2x=P(Px)=P(\lambda x)=\lambda Px=\lambda^2x##, and also ##P^2x=Px=\lambda x##. So ##\lambda(\lambda-1)x=0##, and this implies that ##\lambda## is 0 or 1.
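A minimal numerical illustration of both points (Python/NumPy; the vector is an arbitrary example): a nonzero ##|u\rangle## is an eigenvector of ##|u\rangle\langle u|## with real eigenvalue ##\langle u|u\rangle##, and after normalization the operator is a projection with eigenvalues 0 and 1.

```python
import numpy as np

u = np.array([1 + 2j, 3 - 1j])     # deliberately not normalized
P = np.outer(u, u.conj())          # the operator |u><u|

# |u> is an eigenvector of |u><u|, with (real) eigenvalue <u|u> = ||u||^2
assert np.allclose(P @ u, np.vdot(u, u).real * u)

# After normalizing u, |u><u| is a projection: P^2 = P, eigenvalues 0 and 1
un = u / np.linalg.norm(u)
Pn = np.outer(un, un.conj())
assert np.allclose(Pn @ Pn, Pn)
assert np.allclose(np.sort(np.linalg.eigvalsh(Pn)), [0.0, 1.0])
```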
 
Last edited:
