
Learn About Quantum Amplitudes, Probabilities and EPR

This is a little note about quantum amplitudes. Even though quantum probabilities seem very mysterious, with weird interference effects and seemingly nonlocal effects, the mathematics of quantum amplitudes is completely straightforward. (The squared magnitude of the amplitude gives the probability.) As a matter of fact, the rules for computing amplitudes are almost the same as the classical rules for computing probabilities for a memoryless stochastic process. (Memoryless means that future probabilities depend only on the current state, not on how the system got to that state.)

Probabilities for stochastic processes:

If you have a stochastic process such as Brownian motion, then probabilities work this way:

Let [itex]P(i,t|j,t')[/itex] be the probability that the system winds up in state [itex]i[/itex] at time [itex]t[/itex], given that it is in state [itex]j[/itex] at time [itex]t'[/itex].

Then these transition probabilities combine as follows (assume [itex]t' < t'' < t[/itex]):

[itex]P(i,t|j,t') = \sum_k P(i,t|k,t'') P(k,t''|j,t')[/itex]

where the sum is over all possible intermediate states [itex]k[/itex].

There are two principles at work here:

  1. In computing the probability for going from state [itex]j[/itex] to state [itex]k[/itex] to state [itex]i[/itex], you multiply the probabilities for each “leg” of the path.
  2. In computing the probability for going from state [itex]j[/itex] to state [itex]i[/itex] via an intermediate state, you add the probabilities for each alternative intermediate state.

These are exactly the same two rules for computing transition amplitudes using Feynman path integrals. So there is an analogy: amplitudes are to quantum mechanics as probabilities are to classical stochastic processes.
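To make these two rules concrete, here is a small numerical sketch in Python (mine, not from the original note; the 3-state transition matrices are made up) showing that "multiply along each leg, then sum over intermediate states" is just matrix multiplication of the one-step transition matrices:

[code]
import numpy as np

# Hypothetical one-step transition matrices for a 3-state memoryless process.
# Entry [i, j] of P_step1 is P(i, t''|j, t'); entry [i, j] of P_step2 is P(i, t|j, t'').
P_step1 = np.array([[0.7, 0.2, 0.1],
                    [0.2, 0.5, 0.3],
                    [0.1, 0.3, 0.6]])
P_step2 = np.array([[0.6, 0.3, 0.2],
                    [0.3, 0.4, 0.3],
                    [0.1, 0.3, 0.5]])

# With this convention each column is a probability distribution, so columns sum to 1.
assert np.allclose(P_step1.sum(axis=0), 1.0)
assert np.allclose(P_step2.sum(axis=0), 1.0)

# Rule 1 (multiply along each leg) and rule 2 (sum over intermediate states):
# P(i,t|j,t') = sum_k P(i,t|k,t'') P(k,t''|j,t')
P_total = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        P_total[i, j] = sum(P_step2[i, k] * P_step1[k, j] for k in range(3))

# Which is exactly matrix multiplication of the one-step matrices.
assert np.allclose(P_total, P_step2 @ P_step1)
[/code]

The same bookkeeping carries over to the path-integral picture, with complex amplitudes in place of the real probabilities.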

Continuing with the analogy, we can ask whether there is a local hidden-variables theory for quantum amplitudes. The answer is YES.

Local “hidden-variables” model for EPR amplitudes

Here’s a “hidden-variables” theory for the amplitudes for the EPR experiment.

First, a refresher on the probabilities for the spin-1/2 anti-correlated EPR experiment, and what a “hidden-variables” explanation for those probabilities would be:

In the EPR experiment, there is a source of anti-correlated electron-positron pairs. One particle of each pair is sent to Alice, and the other is sent to Bob. They each measure the spin relative to an axis that they choose independently.

Assume Alice chooses her axis at angle [itex]\alpha[/itex] relative to the x-axis in the x-y plane, and Bob chooses his at angle [itex]\beta[/itex] (let's confine the detector orientations to the x-y plane, so that each orientation is given by a single real number, an angle). Then the prediction of quantum mechanics is that the probability that Alice will get the result [itex]A[/itex] (+1 for spin-up relative to her detector orientation, -1 for spin-down) and Bob will get the result [itex]B[/itex] is:

[itex]P(A, B | \alpha, \beta) = \frac{1}{2} \sin^2(\frac{\beta-\alpha}{2})[/itex] if [itex]A = B[/itex]
[itex]P(A, B | \alpha, \beta) = \frac{1}{2} \cos^2(\frac{\beta-\alpha}{2})[/itex] if [itex]A \neq B[/itex]
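As a quick sanity check on these formulas, here is a short Python sketch (mine, not part of the original article) verifying that the four joint outcomes have probabilities summing to 1, and that aligned detectors give perfectly anti-correlated results:

[code]
import numpy as np

def prob(A, B, alpha, beta):
    """QM prediction for the anti-correlated spin-1/2 EPR pair."""
    if A == B:
        return 0.5 * np.sin((beta - alpha) / 2) ** 2
    else:
        return 0.5 * np.cos((beta - alpha) / 2) ** 2

alpha, beta = 0.3, 1.1  # arbitrary detector angles in radians

# The four outcomes exhaust all possibilities, so their probabilities sum to 1.
total = sum(prob(A, B, alpha, beta) for A in (+1, -1) for B in (+1, -1))
assert np.isclose(total, 1.0)

# Aligned detectors (alpha == beta) always give opposite results.
assert np.isclose(prob(+1, +1, 0.5, 0.5), 0.0)
assert np.isclose(prob(+1, -1, 0.5, 0.5), 0.5)
[/code]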

A “local hidden variables” explanation for this result would be given by a probability distribution [itex]P(\lambda)[/itex] on values of some hidden variable [itex]\lambda[/itex], together with probability distributions

[itex]P_A(A | \alpha, \lambda)[/itex]
[itex]P_B(B | \beta, \lambda)[/itex]

such that

[itex]P(A, B | \alpha, \beta) = \sum P(\lambda) P_A(A|\alpha, \lambda) P_B(B|\beta, \lambda)[/itex]

(where the sum is over all possible values of [itex]\lambda[/itex]; if [itex]\lambda[/itex] is continuous, the sum should be replaced by [itex]\int d\lambda[/itex].)

The fact that the QM predictions violate Bell’s inequality proves that there is no such hidden-variables explanation of this sort.
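For readers who want to see the violation concretely: from the probabilities above, the correlation is [itex]E(\alpha, \beta) = \sum_{A,B} A B \, P(A,B|\alpha,\beta) = -\cos(\beta - \alpha)[/itex]. Plugging this into one standard form of Bell's inequality (the CHSH version, which the article does not spell out; any local hidden-variables model of the kind described must satisfy [itex]|S| \le 2[/itex]) gives a value of about 2.83. A minimal Python sketch:

[code]
import numpy as np

def correlation(alpha, beta):
    """E(alpha, beta) = sum over outcomes of A*B*P(A,B|alpha,beta) = -cos(beta - alpha)."""
    return -np.cos(beta - alpha)

# Standard CHSH angle choices (in radians) that maximize the quantum violation.
a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4

S = (correlation(a, b) - correlation(a, b_prime)
     + correlation(a_prime, b) + correlation(a_prime, b_prime))

print(abs(S))          # 2*sqrt(2), about 2.83
assert abs(S) > 2      # any local hidden-variables model obeys |S| <= 2
[/code]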

But now, let’s go through the same exercise in terms of amplitudes, instead of probabilities. The amplitude for Alice and Bob to get their respective results is basically the square-root of the probability (up to a phase). So let’s consider the amplitude:

[itex]\psi(A, B|\alpha, \beta) \sim \frac{1}{\sqrt{2}} \sin(\frac{\beta - \alpha}{2})[/itex] if [itex]A = B[/itex], and
[itex]\psi(A, B|\alpha, \beta) \sim \frac{1}{\sqrt{2}} \cos(\frac{\beta - \alpha}{2})[/itex] if [itex]A \neq B[/itex].

(I’m using the symbol [itex]\sim[/itex] to mean “equal up to a phase”; I’ll figure out a convenient phase as I go).

In analogy with the case for probabilities, let’s say a “hidden variables” explanation for these amplitudes will be a parameter [itex]\lambda[/itex] with associated functions [itex]\psi(\lambda)[/itex], [itex]\psi_A(A|\lambda, \alpha)[/itex], and [itex]\psi_B(B|\lambda, \beta)[/itex] such that:

[itex]\psi(A, B|\alpha, \beta) = \sum \psi(\lambda) \psi_A(A | \alpha, \lambda) \psi_B(B | \beta, \lambda)[/itex]

where the sum ranges over all possible values for the hidden variable [itex]\lambda[/itex].
I’m not going to bore you (any more than you are already) by deriving such a model, but I will just present it:

  1. The parameter [itex]\lambda[/itex] ranges over the two-element set, [itex]\{ +1, -1 \}[/itex]
  2. The amplitudes associated with these are: [itex]\psi(\lambda) = \frac{\lambda}{\sqrt{2}} = \pm \frac{1}{\sqrt{2}}[/itex]
  3. When [itex]\lambda = +1[/itex], [itex]\psi_A(A | \alpha, \lambda) = A \frac{1}{\sqrt{2}} e^{i \alpha/2}[/itex] and [itex]\psi_B(B | \beta, \lambda) = \frac{1}{\sqrt{2}} e^{-i \beta/2}[/itex]
  4. When [itex]\lambda = -1[/itex], [itex]\psi_A(A | \alpha, \lambda) = \frac{1}{\sqrt{2}} e^{-i \alpha/2}[/itex] and [itex]\psi_B(B | \beta, \lambda) = B \frac{1}{\sqrt{2}} e^{i \beta/2}[/itex]

Check:
[itex]\sum \psi(\lambda) \psi_A(A|\alpha, \lambda) \psi_B(B|\beta, \lambda) = \frac{1}{\sqrt{2}} (A \frac{1}{\sqrt{2}} e^{i \alpha/2} \frac{1}{\sqrt{2}} e^{-i \beta/2} - \frac{1}{\sqrt{2}} e^{-i \alpha/2} B \frac{1}{\sqrt{2}} e^{+i \beta/2})[/itex]

If [itex]A = B = \pm 1[/itex], then this becomes (using [itex]\sin(\theta) = \frac{e^{i \theta} - e^{-i \theta}}{2i}[/itex]):

[itex] = \pm 1 \cdot \frac{i}{\sqrt{2}} \sin(\frac{\alpha - \beta}{2})[/itex]

If [itex]A = -B = \pm 1[/itex], then this becomes (using [itex]\cos(\theta) = \frac{e^{i \theta} + e^{-i \theta}}{2}[/itex]):

[itex] = \pm 1 \cdot \frac{1}{\sqrt{2}} \cos(\frac{\alpha - \beta}{2})[/itex]

So we have successfully reproduced the quantum predictions for amplitudes (up to the phase [itex]\pm 1[/itex]).
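As a numerical cross-check on the algebra above, here is a short Python sketch (mine, not part of the original article) that performs the sum over [itex]\lambda \in \{+1, -1\}[/itex] and confirms that the model amplitude agrees in magnitude with the EPR amplitude for every combination of outcomes, i.e., reproduces it up to a phase:

[code]
import numpy as np

def psi_target(A, B, alpha, beta):
    """The EPR amplitude, up to a phase: the square root of the joint probability."""
    if A == B:
        return np.sin((beta - alpha) / 2) / np.sqrt(2)
    else:
        return np.cos((beta - alpha) / 2) / np.sqrt(2)

def psi_lambda(lam):
    # psi(lambda) = lambda / sqrt(2)
    return lam / np.sqrt(2)

def psi_A(A, alpha, lam):
    if lam == +1:
        return A * np.exp(1j * alpha / 2) / np.sqrt(2)
    else:
        return np.exp(-1j * alpha / 2) / np.sqrt(2)

def psi_B(B, beta, lam):
    if lam == +1:
        return np.exp(-1j * beta / 2) / np.sqrt(2)
    else:
        return B * np.exp(1j * beta / 2) / np.sqrt(2)

alpha, beta = 0.7, 2.1  # arbitrary detector angles in radians
for A in (+1, -1):
    for B in (+1, -1):
        model = sum(psi_lambda(lam) * psi_A(A, alpha, lam) * psi_B(B, beta, lam)
                    for lam in (+1, -1))
        # The model reproduces the EPR amplitude up to a phase,
        # so the magnitudes (and hence the probabilities) agree.
        assert np.isclose(abs(model), abs(psi_target(A, B, alpha, beta)))
[/code]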

What does it mean?

In a certain sense, what this suggests is that quantum mechanics is a sort of “stochastic process”, but one where the “measure” of the possible outcomes of a transition is not a real-valued probability but a complex-valued probability amplitude. When we look just at amplitudes, everything works out the same as it does classically, and the weird correlations that we see in experiments such as EPR are easily explained by local hidden variables, just as Einstein, Podolsky, and Rosen hoped. But in actually testing the predictions of quantum mechanics, we can’t directly measure amplitudes; instead we compile statistics that give us probabilities, which are the squared magnitudes of the amplitudes. The squaring process is, in some sense, responsible for the weirdness of QM correlations.

Do these observations contribute anything to our understanding of QM? Beats me. But they are interesting.
