# Quantum Amplitudes, Probabilities and EPR


This is a little note about quantum amplitudes. Even though quantum probabilities seem very mysterious, with weird interference and seemingly nonlocal effects, the mathematics of quantum amplitudes is completely straightforward. (The squared magnitude of the amplitude gives the probability.) As a matter of fact, the rules for computing amplitudes are almost exactly the same as the classical rules for computing probabilities for a memoryless stochastic process. (Memoryless means that future probabilities depend only on the current state, not on how the system got to that state.)

## Probabilities for stochastic processes

If you have a stochastic process such as Brownian motion, then probabilities work this way:

Let $P(i,t|j,t')$ be the probability that the system winds up in state $i$ at time $t$, given that it is in state $j$ at time $t'$.

Then these transition probabilities combine as follows (assume $t' < t'' < t$):

$P(i,t|j,t') = \sum_k P(i,t|k,t'') P(k,t''|j,t')$

where the sum is over all possible intermediate states $k$.

There are two principles at work here:

1. In computing the probability for going from state $j$ to state $k$ to state $i$, you multiply the probabilities for each “leg” of the path.
2. In computing the probability for going from state $j$ to state $i$ via an intermediate state, you add the probabilities for each alternative intermediate state.

These are exactly the same two rules for computing transition amplitudes using Feynman path integrals. So there is an analogy: amplitudes are to quantum mechanics as probabilities are to classical stochastic processes.
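To make the analogy concrete, here is a small numerical sketch (my addition; the matrices and angle are made up for illustration). Composing transition probabilities is a product of stochastic matrices; composing amplitudes is a product of unitary matrices, with squaring only at the end:

```python
import numpy as np

# --- Classical stochastic process ---
# One-step transition matrix for a made-up 3-state memoryless process:
# entry [j, i] is P(i, t''|j, t'), the probability of going from j to i,
# so each row sums to 1.
P_step = np.array([
    [0.8, 0.2, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.3, 0.7],
])

# Rule 1 (multiply the legs) and rule 2 (sum over intermediate states k)
# together are exactly a matrix product:
P_two_step = P_step @ P_step
i, j = 0, 2
assert np.isclose(P_two_step[j, i],
                  sum(P_step[j, k] * P_step[k, i] for k in range(3)))

# --- Quantum amplitudes ---
# The amplitude version replaces the stochastic matrix with a unitary one
# (here a rotation by an arbitrary angle) and composes by the same rules.
theta = np.pi / 5
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)
U2 = U @ U

# Probabilities are squared magnitudes, taken only at the end; unitarity
# keeps them summing to 1 for each initial state.
assert np.isclose((np.abs(U2[:, 0])**2).sum(), 1.0)

# Squaring after composing differs from composing the squared legs;
# that gap is interference.
assert not np.allclose(np.abs(U2)**2, (np.abs(U)**2) @ (np.abs(U)**2))
```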

Continuing with the analogy, we can ask whether there is a local hidden-variables theory for quantum amplitudes. The answer is YES.

## Local “hidden-variables” model for EPR amplitudes

Here’s a “hidden-variables” theory for the amplitudes for the EPR experiment.

First, a refresher on the probabilities for the spin-1/2 anti-correlated EPR experiment, and what a “hidden-variables” explanation for those probabilities would be:

In the EPR experiment, there is a source for anti-correlated electron-positron pairs. One particle of each pair is sent to Alice, and the other is sent to Bob. They each measure the spin relative to some axis that they choose independently.

Assume Alice chooses her axis at angle $\alpha$ relative to the x-axis in the x-y plane, and Bob chooses his at angle $\beta$ (let's confine the detector orientations to the x-y plane, so that an orientation can be given by a single real number, an angle). Then quantum mechanics predicts that the probability that Alice gets result $A$ (+1 for spin-up relative to her detector orientation, -1 for spin-down) and Bob gets result $B$ is:

$P(A, B | \alpha, \beta) = \frac{1}{2} \sin^2(\frac{\beta-\alpha}{2})$ if $A = B$
$P(A, B | \alpha, \beta) = \frac{1}{2} \cos^2(\frac{\beta-\alpha}{2})$ if $A \neq B$

A “local hidden variables” explanation for this result would be given by a probability distribution $P(\lambda)$ on values of some hidden variable $\lambda$, together with probability distributions

$P_A(A | \alpha, \lambda)$
$P_B(B | \beta, \lambda)$

such that

$P(A, B | \alpha, \beta) = \sum_\lambda P(\lambda) P_A(A|\alpha, \lambda) P_B(B|\beta, \lambda)$

(where the sum is over all possible values of $\lambda$; if $\lambda$ is continuous, the sum should be replaced by $\int d\lambda$.)

The fact that the QM predictions violate Bell’s inequality proves that there is no such hidden-variables explanation of this sort.
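As a quick numerical sanity check (a sketch I'm adding, using one standard choice of CHSH angles): the probabilities above imply the correlation $E(\alpha,\beta) = \sin^2(\frac{\beta-\alpha}{2}) - \cos^2(\frac{\beta-\alpha}{2}) = -\cos(\beta-\alpha)$, which violates the CHSH form of Bell's inequality ($|S| \le 2$ for any local hidden-variables model):

```python
import numpy as np

def E(alpha, beta):
    """Correlation E[AB] implied by the EPR probabilities:
    P(same) = sin^2((b-a)/2) and P(different) = cos^2((b-a)/2)."""
    return np.sin((beta - alpha) / 2)**2 - np.cos((beta - alpha) / 2)**2

# One standard set of maximally violating detector angles (an assumption;
# other choices also violate the bound).
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

# Any local hidden-variables model satisfies |S| <= 2 (CHSH).
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
assert abs(S) > 2  # the quantum value is 2*sqrt(2)
```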

But now, let’s go through the same exercise in terms of amplitudes instead of probabilities. The amplitude for Alice and Bob to get their respective results is basically the square root of the probability (up to a phase). So let’s consider the amplitude:

$\psi(A, B|\alpha, \beta) \sim \frac{1}{\sqrt{2}} \sin(\frac{\beta - \alpha}{2})$ if $A = B$, and
$\psi(A, B|\alpha, \beta) \sim \frac{1}{\sqrt{2}} \cos(\frac{\beta - \alpha}{2})$ if $A \neq B$.

(I’m using the symbol $\sim$ to mean “equal up to a phase”; I’ll figure out a convenient phase as I go).

In analogy with the case for probabilities, let’s say a “hidden variables” explanation for these amplitudes will be a parameter $\lambda$ with associated functions $\psi(\lambda)$, $\psi_A(A|\lambda, \alpha)$, and $\psi_B(B|\lambda, \beta)$ such that:

$\psi(A, B|\alpha, \beta) = \sum_\lambda \psi(\lambda) \psi_A(A | \alpha, \lambda) \psi_B(B | \beta, \lambda)$

where the sum ranges over all possible values for the hidden variable $\lambda$.
I’m not going to bore you (any more than you already are) by deriving such a model; I will just present it:

1. The parameter $\lambda$ ranges over the two-element set, $\{ +1, -1 \}$
2. The amplitudes associated with these are: $\psi(\lambda) = \frac{\lambda}{\sqrt{2}} = \pm \frac{1}{\sqrt{2}}$
3. When $\lambda = +1$, $\psi_A(A | \alpha, \lambda) = A \frac{1}{\sqrt{2}} e^{i \alpha/2}$ and $\psi_B(B | \beta, \lambda) = \frac{1}{\sqrt{2}} e^{-i \beta/2}$
4. When $\lambda = -1$, $\psi_A(A | \alpha, \lambda) = \frac{1}{\sqrt{2}} e^{-i \alpha/2}$ and $\psi_B(B | \beta, \lambda) = B \frac{1}{\sqrt{2}} e^{i \beta/2}$

Check:
$\sum_\lambda \psi(\lambda) \psi_A(A|\alpha, \lambda) \psi_B(B|\beta, \lambda) = \frac{1}{\sqrt{2}} \left(A \frac{1}{\sqrt{2}} e^{i \alpha/2}\frac{1}{\sqrt{2}} e^{-i \beta/2} - \frac{1}{\sqrt{2}} e^{-i \alpha/2} B \frac{1}{\sqrt{2}} e^{+i \beta/2}\right)$

If $A = B = \pm 1$, then this becomes (using $\sin(\theta) = \frac{e^{i \theta} - e^{-i \theta}}{2i}$):

$= \pm \frac{i}{\sqrt{2}} \sin(\frac{\alpha - \beta}{2})$

If $A = -B = \pm 1$, then this becomes (using $\cos(\theta) = \frac{e^{i \theta} + e^{-i \theta}}{2}$):

$= \pm \frac{1}{\sqrt{2}} \cos(\frac{\alpha - \beta}{2})$

So we have successfully reproduced the quantum predictions for amplitudes (up to the phase $\pm 1$).
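The same check can be run numerically for arbitrary angles. This small sketch (my addition; the test angles are arbitrary) sums the model over $\lambda$ and compares squared magnitudes against the EPR probabilities:

```python
import numpy as np

def psi_model(A, B, alpha, beta):
    """Sum the hidden-variable amplitude model over lambda in {+1, -1}."""
    total = 0j
    for lam in (+1, -1):
        weight = lam / np.sqrt(2)                           # psi(lambda)
        if lam == +1:
            psi_a = A * np.exp(1j * alpha / 2) / np.sqrt(2)  # psi_A
            psi_b = np.exp(-1j * beta / 2) / np.sqrt(2)      # psi_B
        else:
            psi_a = np.exp(-1j * alpha / 2) / np.sqrt(2)
            psi_b = B * np.exp(1j * beta / 2) / np.sqrt(2)
        total += weight * psi_a * psi_b
    return total

# |psi|^2 should reproduce the EPR probabilities for any pair of angles.
alpha, beta = 0.3, 1.1   # arbitrary test angles
for A in (+1, -1):
    for B in (+1, -1):
        target = (0.5 * np.sin((beta - alpha) / 2)**2 if A == B
                  else 0.5 * np.cos((beta - alpha) / 2)**2)
        assert np.isclose(abs(psi_model(A, B, alpha, beta))**2, target)
```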

## What does it mean?

In a certain sense, what this suggests is that quantum mechanics is a sort of “stochastic process”, but one where the “measure” of the possible outcomes of a transition is not a real-valued probability but a complex-valued probability amplitude. When we look at things purely in terms of amplitudes, everything works out the same as it does classically, and the weird correlations that we see in experiments such as EPR are easily explained by local hidden variables, just as Einstein, Podolsky and Rosen hoped. But in actually testing the predictions of quantum mechanics, we can’t directly measure amplitudes; instead we compile statistics, which give us probabilities, which are the squared magnitudes of the amplitudes. The squaring process is in some sense responsible for the weirdness of QM correlations.

Do these observations contribute anything to our understanding of QM? Beats me. But they are interesting.

81 replies
1. secur says:

It's how nature works, assuming we have "this typical Bell-type experiment" – i.e., the particular experiment that @stevendaryl proposed. The two particles are entangled with opposite spins – sometimes called a "Bell state". Therefore "when their detector angles are equal, they will always detect the opposite".

The term "Bell-type" is vague. I don't think there's any official definition. To me it does not necessarily mean entanglement with opposite spins, although that's most common, and that's how Bell originally did it. They could instead be in the "twin state" so that they must have the same spin for the same angles. I even use that term sometimes when I'm not talking about Bell's inequality at all, but something similar like CHSH inequality. Almost any experiment that demonstrates the conclusion Bell came up with (ruling out realist, local hidden-variables model) might be referred to, loosely, as "Bell-type". The meaning should be clear from context.

If others disagree with my use of this term "Bell-type", I won't argue, maybe they're right.

2. RockyMarciano says:

> Rocky, my understanding is that the maths is just a convenience. Complex numbers have the ability to reduce two real solutions to one complex one.

In general you are right that math is just a convenient tool to describe the physics, but I'm not questioning this when I try to analyze the role of the complex structure of amplitudes in the context of classical probabilities versus EPR.

I think that if we are invited to think about and draw conclusions from the clear setup in the OP, we have to consider the role of the complex structure in this particular case, not necessarily to derive anything about nature but about the mathematical meaning of the variables involved here, and therefore which conclusions are valid to draw, if any.

To be specific, the probability amplitudes used to obtain probability densities are different from the amplitudes up to sign obtained from the square root of the probabilities. Namely, only the former have a complex phase, so it seems it is this complex phase rather than their squaring that is responsible for the differences between classical and quantum correlations.

It would be interesting to know if somebody disagrees with this or thinks it is irrelevant and if so why.

3. stevendaryl says:

> To be specific, the probability amplitudes used to obtain probability densities are different from the amplitudes up to sign obtained from the square root of the probabilities. Namely, only the former have a complex phase, so it seems it is this complex phase rather than their squaring that is responsible for the differences between classical and quantum correlations.

Well, it's the combination of nonpositive amplitudes and squaring that leads to interference effects. (You don't need complex amplitudes for that, just negative ones).

4. RockyMarciano says:

> Well, it's the combination of nonpositive amplitudes and squaring that leads to interference effects. (You don't need complex amplitudes for that, just negative ones).

True, but it is in the context of complex numbers that you can integrate those nonpositive amplitudes in a coherent mathematical way.

I think we basically agree that all the weirdness is due to using complex numbers instead of reals as inputs (as commented by Lavinia in a previous post, this is nothing new), so maybe my point is just nitpicking that might seem pedantic, but mathematically I think it is important to remark that the difference between classical and EPR correlations is not just the squaring, but, as you say, the squaring combined with the other ingredients that make up the complex structure of QM.

5. stevendaryl says:

> True, but it is in the context of complex numbers that you can integrate those nonpositive amplitudes in a coherent mathematical way.
>
> I think we basically agree that all the weirdness is due to using complex numbers instead of reals as inputs (as commented by Lavinia in a previous post, this is nothing new), so maybe my point is just nitpicking that might seem pedantic, but mathematically I think it is important to remark that the difference between classical and EPR correlations is not just the squaring, but, as you say, the squaring combined with the other ingredients that make up the complex structure of QM.

Well, it's more dramatic with complex amplitudes, but interference effects would show up even if all amplitudes are positive real numbers.

Suppose you do a double slit experiment with positive real amplitudes. A photon can either go through the left slit, with probability $p$, or the other slit, with probability $1-p$. If it goes through the left slit, say that it has a probability of $q_L$ of triggering a particular photon detector. If it goes through the right slit, say that it has a probability of $q_R$ of triggering that detector. Then the amplitude for triggering the detector, when you don't observe which slit it goes through, is:

$\psi = \sqrt{p} \sqrt{q_L} + \sqrt{1-p}\sqrt{q_R}$

$P = |\psi|^2 = p q_L + (1-p) q_R + 2 \sqrt{p(1-p)q_L q_R}$

That last term is the interference term, and it seems nonlocal, in the sense that it depends on details of both paths (and so in picturesque terms, the photon seems to have taken both paths). Without negative numbers, the interference term is always positive, so you don't have the stark pattern of zero-intensity bands that come from cancellations, but you still have a similar appearance of nonlocality.
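A quick numerical sketch of this point (the slit and detector probabilities here are made-up numbers for illustration):

```python
import numpy as np

# Positive-real-amplitude double slit: the photon takes the left slit with
# probability p, the right slit with 1-p; q_L and q_R are the detector
# probabilities given each slit. All numbers are made up for illustration.
p, q_L, q_R = 0.5, 0.3, 0.2

# Amplitude for triggering the detector when the slit is not observed:
psi = np.sqrt(p) * np.sqrt(q_L) + np.sqrt(1 - p) * np.sqrt(q_R)
P = psi**2

# Classical (observed-slit) probability, plus the cross term.
P_classical = p * q_L + (1 - p) * q_R
cross_term = 2 * np.sqrt(p * (1 - p) * q_L * q_R)

# With positive amplitudes the cross term never cancels, but it is still
# there: P depends jointly on details of both paths.
assert np.isclose(P, P_classical + cross_term)
assert cross_term > 0
```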

6. RockyMarciano says:

> Well, it's more dramatic with complex amplitudes, but interference effects would show up even if all amplitudes are positive real numbers.
>
> Suppose you do a double slit experiment with positive real amplitudes. A photon can either go through the left slit, with probability $p$, or the other slit, with probability $1-p$. If it goes through the left slit, say that it has a probability of $q_L$ of triggering a particular photon detector. If it goes through the right slit, say that it has a probability of $q_R$ of triggering that detector. Then the amplitude for triggering the detector, when you don't observe which slit it goes through, is:
>
> $\psi = \sqrt{p} \sqrt{q_L} + \sqrt{1-p}\sqrt{q_R}$
>
> $P = |\psi|^2 = p q_L + (1-p) q_R + 2 \sqrt{p(1-p)q_L q_R}$
>
> That last term is the interference term, and it seems nonlocal, in the sense that it depends on details of both paths (and so in picturesque terms, the photon seems to have taken both paths). Without negative numbers, the interference term is always positive, so you don't have the stark pattern of zero-intensity bands that come from cancellations, but you still have a similar appearance of nonlocality.

You obviously mean that a "nonlocal term" with dependence on the two paths is indeed there (though it is no longer really an "interference term", since, as you wrote, the pattern is lost without cancellations). This observation is of course true, but one should wonder where this term comes from to begin with. The only reason is that a 2-norm is being used to compute the probabilities: if we just used the 1-norm of real-valued probabilities, the probabilities from each path (without a cross-term) would sum to 1, as all probabilities must. It is only because the quadratic 2-norm of a complex line (Argand plane) is being used that an additional term involving both paths appears, and it is the squares that sum to 1.

So I'm afraid you can't get rid of complex numbers as they are needed to explain the appearance of a cross-term in the first place.

7. Stephen Tashi says:

> Rocky, my understanding is that the maths is just a convenience. Complex numbers have the ability to reduce two real solutions to one complex one.

It's tempting to think that real number probabilities are the "real" (in the sense of genuine) type of probability.

However, it's worthwhile remembering that the standard formulation of probability theory in terms of real numbers is also just a convenience. It is convenient because real-valued probabilities resemble observed frequencies, and there are analogies between computations involving probabilities and computations involving observed frequencies.

Even people who have studied advanced probability theory tend to confuse observed frequencies with probabilities. However, standard probability theory gives no theorems about observed frequencies except those that talk about the probability of an observed frequency. So probability theory is exclusively about probability. It is circular in that sense.

In trying to apply probability theory to observations, the various statistical methods that are used likewise are computations whose results give the probabilities of the observations or parameters that cause them.

Furthermore, in mathematical probability theory (i.e. the Kolmogorov approach) there is no formal definition of an "observation", in the sense of an event that "actually happens". There isn't even an axiom that says it is possible to take random samples. The closest one gets to the concept of a "possibility" that "actually happens" is in the definition of conditional probability, and that definition merely defines a "conditional distribution" as a quotient and uses the terminology that an event is "given". The definition of conditional probability doesn't define "given" as a concept by itself. (This is analogous to the fact that the concept of "approaches" has no technical definition within the definition of ##\lim_{x \rightarrow a} f(x)##, even though the word "approaches" appears when we verbalize the notation.)

The intuitive problem with using complex numbers as a basis for probability theory seems (to me) to revolve around the interpretation of conditional (complex) probabilities. They involve a concept of "given" that is different from the conventional concept of "given". This is a contrast between intuitions, not a contrast between an intuition and a precisely defined mathematical concept, because conventional probability theory has no precisely defined concept of "given", even though it's usually crystal clear how we want to define "given" when we apply that theory to a specific problem.

8. edguy99 says:

Nice Insight @stevendaryl!

Thanks for bumping this. I forgot how truly great this insight is where @stevendaryl asks us to consider a model of the photon, where the probability amplitude of detecting a photon at a particular angle outside of its basis vectors (vertical or horizontal) is slightly random and non-linear, specifically:

> $\psi(A,B|\alpha,\beta) \sim \frac{1}{\sqrt{2}} \sin(\frac{\beta-\alpha}{2})$ if $A=B$, and
>
> $\psi(A,B|\alpha,\beta) \sim \frac{1}{\sqrt{2}} \cos(\frac{\beta-\alpha}{2})$ if $A \neq B$.

Think of this as a photon coming straight at you that has a wobble. It's been prepared vertical (90º), except that it wobbles back and forth a bit. If you measure it vertically, it will always be vertical. If you measure it horizontally, it will never be horizontal. If you measure it at 45º, it will randomly be 50% vertical, 50% horizontal. But if you measure it at 60º, it will have MORE than a 66% chance of being vertical.

Whether using probabilities or amplitudes, as @secur has pointed out, this is a non-Bell model, since "Another way to put it, your scheme doesn't guarantee that if α = β then their results will definitely be opposite."

In the experiment here, the @stevendaryl model works since Dehlinger and Mitchell consider all mismatched photons (when α = β) to be noise and throw them out. As far as I can see, most other experiments consider mismatched photons as noise; does anyone have a counter-example?

9. DrChinese says:

> In the experiment here, the @stevendaryl model works since Dehlinger and Mitchell consider all mismatched photons (when α = β) to be noise and throw them out. As far as I can see, most other experiments consider mismatched photons as noise; does anyone have a counter-example?

Those cannot be thrown out (for being a mismatch) in an actual experiment. That would defeat the purpose. They can be analyzed for tuning purposes. Sometimes they help determine the proper time window for matching. Photons that arrive too far apart (per expectation) are much less likely to be entangled.