Amplitudes, Probabilities and EPR

In summary, the conversation contrasts two concepts of probability in quantum mechanics: the empirical one obtained by frequency counting, and the theoretical one encoded by the probability amplitude. The amplitude is viewed as the primary encoding of theoretical probability, permitting probabilistic predictions about quantum states before any events are detected. The conversation also touches on independent and identically distributed (i.i.d.) random variables and on how amplitudes relate to hidden-variable models.
  • #2
Nice job.
 
  • #3
@stevendaryl

I like that a lot. Gourmet food for thought. There is something in the two-valuedness of amplitude that is elusive.
[later]
I deleted this bit. Still thinking about it.
 
  • #4
stevendaryl said:
Do these observations contribute anything to our understanding of QM?
I think yes. There are two diverse concepts of probability. The primary one that we think of all the time is that which corresponds to frequency counting. It can be determined only asymptotically as the number of possible events approaches infinity. But there is also a theoretical concept of probability which we use all the time before even a single event is detected. Historically we try to predict probability as that asymptotic limit itself. But, any quantity from which the asymptotic limit is uniquely computable is an encoding of theoretical probability. In QM the amplitude is such an encoding. As such a probability encoding it enables the probabilistic prediction of quantum states without detecting any events. It has all the requirements of a theoretical probability encoding while at the same time describing the physical reality of a single system. Frequency counting, on the other hand, has meaning only for large numbers of systems/events. We can think of QM and the probability amplitude, therefore, as the primary encoding of theoretical probability and frequency counting as secondary.
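A minimal numerical sketch of the distinction drawn above (my own illustration, not from the thread): the amplitude encodes a theoretical probability that is available before any event is detected, while frequency counting only approaches that probability asymptotically, over many events.

```python
import random

def born_probability(amplitude: complex) -> float:
    """Theoretical probability encoded by a single amplitude (Born rule)."""
    return abs(amplitude) ** 2

def frequency_estimate(p: float, n_trials: int, seed: int = 0) -> float:
    """Empirical frequency after n_trials detections -- meaningful only
    for large ensembles, unlike the amplitude itself."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials) if rng.random() < p)
    return hits / n_trials

amp = (1 + 1j) / 2                   # example amplitude; |amp|^2 = 0.5
p = born_probability(amp)            # known before a single event occurs
f = frequency_estimate(p, 100_000)   # converges to p only asymptotically
```

The point of the sketch: `p` exists as a prediction about a single system, while `f` has meaning only for the ensemble.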
 
  • Like
Likes Jilang
  • #5
stevendaryl said:
Do these observations contribute anything to our understanding of QM? Beats me. But they are interesting.
We can talk about probabilities when the assumption that the ensemble is i.i.d. holds. When this assumption holds, the probabilities of individual events contribute independently to the total probability. Amplitudes obviously are not independent, as two opposite amplitudes can cancel out.

It is interesting that in your model the final amplitude is calculated by adding/subtracting amplitudes from two different subsets of pairs (with ##\lambda## +1 and -1). So each separate pair does not produce the correct amplitude. And they add up not just statistically but in a way that suggests some interdependence at the level of the ensemble of pairs.
 
  • #6
zonde said:
We can talk about probabilities when the assumption that the ensemble is i.i.d. holds. When this assumption holds, the probabilities of individual events contribute independently to the total probability. Amplitudes obviously are not independent, as two opposite amplitudes can cancel out.

I'm not sure I understand your point. Amplitudes add in the same way that probabilities do. The reason that some amplitudes cancel others is because they aren't guaranteed to be positive.

It is interesting that in your model the final amplitude is calculated by adding/subtracting amplitudes from two different subsets of pairs (with ##\lambda## +1 and -1). So each separate pair does not produce the correct amplitude. And they add up not just statistically but in a way that suggests some interdependence at the level of the ensemble of pairs.

As I said, I think it's the same issue as with classical probabilities (except for the big difference that classical probabilities are positive real numbers, while quantum amplitudes are complex numbers).

Let me give a made-up classical probability problem to illustrate. Suppose I'm trying to figure out the probability that a certain newborn baby will grow up to be left-handed. This is not true, but I'm going to pretend that there is a "left-handed gene" such that if you have this gene, there is a 90% chance that you will be left-handed, and if you lack this gene, then there is a 90% chance that you will be right-handed. Let's furthermore assume that the mother lacks this gene, but the father received the gene from one parent but not the other. Then genetics would say that the father has a 50% chance of passing on the gene to the baby. So letting [itex]\lambda = +1[/itex] to indicate having the gene, and [itex]\lambda = -1[/itex] to indicate not having the gene, we can compute:

[itex]P(LH) = \sum_\lambda P(\lambda) P(LH | \lambda) = P(+1) P(LH | +1) + P(-1) P(LH | -1) = .50 \cdot .90 + .50 \cdot .10 = .50[/itex]

So the baby has a 50% chance of being left-handed.

That involves summing over different values of [itex]\lambda[/itex] in exactly the same way as the amplitudes were computed in my EPR example.
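The arithmetic of the gene example, spelled out as the sum over [itex]\lambda[/itex]:

```python
# P(lambda): the father passes the gene on or not, 50/50.
P_lambda = {+1: 0.50, -1: 0.50}
# P(LH | lambda): 90% left-handed with the gene, 10% without it.
P_LH_given = {+1: 0.90, -1: 0.10}

# P(LH) = sum over lambda of P(lambda) * P(LH | lambda)
P_LH = sum(P_lambda[lam] * P_LH_given[lam] for lam in (+1, -1))
print(P_LH)  # 0.5
```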
 
  • #7
zonde said:
We can talk about probabilities when the assumption that the ensemble is i.i.d. holds.

I skipped over this first line without asking: what does "i.i.d." stand for?
 
  • #8
stevendaryl said:
But in actually testing the predictions of quantum mechanics, we can't directly measure amplitudes, but instead compile statistics which give us probabilities, which are the squares of the amplitudes. The squaring process is in some sense responsible for the weirdness of QM correlations.
Yes, that's true! A similar conclusion is also drawn in
https://arxiv.org/abs/0707.2319
 
  • Like
Likes YAHA and Mentz114
  • #10
stevendaryl said:
Amplitudes add in the same way that probabilities do. The reason that some amplitudes cancel others is because they aren't guaranteed to be positive.

Great analysis for the original post!
 
  • #11
Sorry, this is not a hidden-variable model as understood in Bell experiments. The problem is that the A and B results must be [itex]\pm 1[/itex]. When you calculate

[itex]\psi(A, B|\alpha, \beta) = \sum \psi(\lambda) \psi_A(A | \alpha, \lambda) \psi_B(B | \beta, \lambda)[/itex]

the numbers that appear for [itex] \psi_A(A | \alpha, \lambda)[/itex] and [itex]\psi_B(B | \beta, \lambda)[/itex] can't be complex; they can't even be arbitrary real numbers, only [itex]\pm 1[/itex]. That's because this calculation must be performed on the actual results, after the experiment is concluded.

Another way to put it: your scheme doesn't guarantee that if [itex]\alpha = \beta[/itex] then the results will definitely be opposite. A and B must "flip a coin" based on their amplitudes and record a definite +1 or -1. In general, with these probability amplitudes they will often get the same result even though the angles are equal.
 
  • Like
Likes edguy99 and DrClaude
  • #12
Secur, why can't the wavefunctions be complex?
 
  • #13
@Jilang, the wavefunctions can, in fact, be complex. I said that in the calculation of [itex]\psi(A, B|\alpha, \beta)[/itex], "the numbers that appear for [itex] \psi_A(A | \alpha, \lambda)[/itex] and [itex]\psi_B(B | \beta, \lambda)[/itex] can't be complex". Realizing the point might not be too clear, I went on to "Another way to put it", which is, I think, straightforward.

Consider what happens in the Bell-type experiments. Alice and Bob independently measure +1 or -1 for the spin states of their respective particles. Those measurements are based, probabilistically, on complex wavefunctions. After their experimental data is complete, we bring the results together in a calculation where A and B ([itex]\pm 1[/itex]) are multiplied together, similar to @stevendaryl's [itex]\psi(A, B|\alpha, \beta)[/itex]. The key point: when those results are multiplied together they must already have been reduced to [itex]\pm 1[/itex]; they can't still be the underlying wavefunction numbers.

Note that if A and B could record the underlying spinors (as quaternions), we could easily get the right correlations for the actual QM probabilities, never mind amplitudes.

I'm not very happy with this explanation; hopefully with further discussion it will get better. But consider my simpler second point, "Another way to put it". That's symptomatic of the same problem and clearly shows there's something wrong here.
 
  • #14
secur said:
Sorry, this is not a hidden-variable model as understood in Bell experiments. The problem is that the A and B results must be [itex]\pm 1[/itex]. When you calculate

[itex]\psi(A, B|\alpha, \beta) = \sum \psi(\lambda) \psi_A(A | \alpha, \lambda) \psi_B(B | \beta, \lambda)[/itex]

the numbers that appear for [itex] \psi_A(A | \alpha, \lambda)[/itex] and [itex]\psi_B(B | \beta, \lambda)[/itex] can't be complex; they can't even be arbitrary real numbers, only [itex]\pm 1[/itex]. That's because this calculation must be performed on the actual results, after the experiment is concluded.

I'm afraid you're completely misunderstanding what is being claimed, and in particular, what the meanings of the various [itex]\psi[/itex]s are. I tried to explain it in the first post, but if it was unclear, let me try again.

We have experimenters Alice and Bob that are performing measurements at a spacelike separation. Assume that their results are described by a joint probability distribution [itex]P(A, B|\alpha, \beta)[/itex], which gives the probability that Alice gets result [itex]A[/itex] (assumed to be [itex]\pm 1[/itex]) and Bob gets [itex]B[/itex] (also [itex]\pm 1[/itex]), given that Alice chooses detector setting [itex]\alpha[/itex] and Bob chooses detector setting [itex]\beta[/itex].

A nondeterministic local hidden-variables theory explanation for [itex]P(A, B|\alpha, \beta)[/itex] would consist of:
  1. A set of values for a parameter, [itex]\lambda[/itex]
  2. A probability distribution [itex]P(\lambda)[/itex] on the values of [itex]\lambda[/itex]
  3. Probability distributions [itex]P_A(A|\alpha, \lambda)[/itex] and [itex]P_B(B|\beta, \lambda)[/itex]
such that

[itex]P(A, B| \alpha, \beta) = \sum_\lambda P(\lambda) P_A(A|\alpha, \lambda) P_B(B|\beta, \lambda)[/itex]

A deterministic local hidden-variables theory explanation makes a stronger assumption, that [itex]A[/itex] and [itex]B[/itex] are determined by the parameters [itex]\alpha, \beta, \lambda[/itex]. That is, it assumes that there are deterministic functions [itex]F_A(\alpha, \lambda)[/itex] and [itex]F_B(\beta, \lambda)[/itex] such that whenever Alice chooses setting [itex]\alpha[/itex] and the hidden variable has value [itex]\lambda[/itex], then Alice will deterministically get the result [itex]F_A(\alpha, \lambda)[/itex]. Similarly, whenever Bob chooses setting [itex]\beta[/itex] and the hidden variable has value [itex]\lambda[/itex], then Bob will deterministically get the result [itex]F_B(\beta, \lambda)[/itex]. Such a deterministic model would reproduce the joint probability distribution, provided that:

[itex]P(A, B | \alpha, \beta) = \sum'_\lambda P(\lambda)[/itex]

where [itex]\sum'[/itex] means that the sum is only over those values of [itex]\lambda[/itex] such that
[itex]F_A(\alpha, \lambda) = A[/itex] and [itex]F_B(\beta, \lambda) = B[/itex].
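As a concrete illustration of the deterministic case (a toy model of my own, not from the thread): let [itex]\lambda[/itex] be a random direction on a circle, with [itex]F_A[/itex] the sign of [itex]cos(\lambda - \alpha)[/itex] and [itex]F_B(\beta, \lambda) = -F_A(\beta, \lambda)[/itex]. The restricted sum [itex]\sum'[/itex] is then computable directly:

```python
import math

# Toy deterministic local hidden-variables model: lambda is a uniformly
# random direction on the circle (discretized into N values).
N = 10_000
lambdas = [2 * math.pi * k / N for k in range(N)]
P_lam = 1.0 / N  # uniform P(lambda)

def F_A(alpha, lam):
    """Alice's deterministic result for setting alpha and hidden value lam."""
    return +1 if math.cos(lam - alpha) >= 0 else -1

def F_B(beta, lam):
    """Bob's deterministic result: opposite to F_A at the same setting."""
    return -F_A(beta, lam)

def joint(A, B, alpha, beta):
    """P(A,B|alpha,beta) = restricted sum of P(lambda) over the lambdas
    with F_A(alpha, lam) = A and F_B(beta, lam) = B."""
    return sum(P_lam for lam in lambdas
               if F_A(alpha, lam) == A and F_B(beta, lam) == B)

# Equal settings give perfect anticorrelation: P(A = B) = 0.
p_same = joint(+1, +1, 0.7, 0.7) + joint(-1, -1, 0.7, 0.7)
```

This toy model reproduces perfect anticorrelation at equal settings, but, as Bell showed, no model of this form reproduces the full QM correlations.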

Do you understand the distinction?

The distinction was not important for Bell, because it's easy to show that if there is a nondeterministic local hidden variables theory, then there is also a deterministic local hidden variables theory. So if he disproved the existence of a deterministic local hidden variables theory, that also proved that there was no nondeterministic local hidden variables theory.

But when I made my amplitude analogy, I was making an analogy to the nondeterministic theory, not the deterministic theory.

The functions [itex]\psi(\lambda)[/itex], [itex]\psi_A(A|\alpha, \lambda)[/itex], [itex]\psi_B(B|\beta, \lambda)[/itex] are the amplitude analogues of the probability distributions [itex]P(\lambda)[/itex], [itex]P_A(A|\alpha, \lambda)[/itex], [itex]P_B(B|\beta, \lambda)[/itex]. They are not analogues to the deterministic functions [itex]F_A(\alpha, \lambda)[/itex] and [itex]F_B(\beta, \lambda)[/itex].
 
  • #15
secur said:
@Jilang. the wavefunctions can, in fact, be complex. I said that in the calculation of [itex]\psi(A, B|\alpha, \beta)[/itex] "the numbers that appear for [itex] \psi_A(A | \alpha, \lambda)[/itex] and [itex]\psi_B(B | \beta, \lambda)[/itex] can't be complex".

You are misunderstanding what the functions [itex]\psi_A(A|\alpha, \lambda)[/itex] and [itex]\psi_B(B|\beta, \lambda)[/itex] are.

[itex]\psi_A(A|\alpha, \lambda) = [/itex] the amplitude for Alice getting result [itex]A[/itex], given that she chooses setting [itex]\alpha[/itex] and that the initial twin-particle state was described by variable [itex]\lambda[/itex].

[itex]\psi_B(B|\beta, \lambda) = [/itex] the amplitude for Bob getting result [itex]B[/itex], given that he chooses setting [itex]\beta[/itex] and that the initial twin-particle state was described by variable [itex]\lambda[/itex].

These are amplitudes. They are not measurement results. Amplitudes are complex numbers whose squares give probabilities for transitions.
 
  • #16
Seems there's some confusion. I think the best way to straighten it out is, please address the other point I made, which is very simple.

In this typical Bell-type experiment, QM says A and B must always be opposite (product is -1) when their detector angles are equal. A valid hidden-variable model must reproduce that behavior. But that's not the case with your model:

secur said:
Sorry, this is not a hidden-variable model as understood in Bell experiments. The problem is that the A and B results must be [itex]\pm 1[/itex] ...

... your scheme doesn't guarantee that if [itex]\alpha = \beta[/itex] then the results will definitely be opposite. A and B must "flip a coin" based on their amplitudes and record a definite +1 or -1. In general, with these probability amplitudes they will often get the same result even though the angles are equal.
 
  • #17
Another way to look at it: you don't specify the procedure whereby Alice and Bob generate their two results, A and B, which = +1 or -1. Presumably they're generated randomly based on their (known, given) amplitudes. I referred to it above hand-wavingly as "flipping a coin". Can you specify that procedure, showing how it guarantees the correct result for equal angles, namely, A * B = -1 ?
 
  • #18
secur said:
Seems there's some confusion. I think the best way to straighten it out is, please address the other point I made, which is very simple.

In this typical Bell-type experiment, QM says A and B must always be opposite (product is -1) when their detector angles are equal. A valid hidden-variable model must reproduce that behavior. But that's not the case with your model:

It certainly is. The resulting probability amplitude is (the very first post):
  • If [itex]A=B=\pm 1[/itex], then [itex]\psi(A, B|\alpha, \beta) = \pm \frac{i}{\sqrt{2}} sin(\frac{\beta-\alpha}{2})[/itex]. This means that the probability amplitude that Alice and Bob both get the same result is proportional to [itex]sin(\frac{\beta-\alpha}{2})[/itex], which means it is zero when [itex]\alpha = \beta[/itex].
  • If [itex]A=-B=\pm 1[/itex], then [itex]\psi(A, B|\alpha, \beta) = \pm \frac{1}{\sqrt{2}} cos(\frac{\beta-\alpha}{2})[/itex]. This means that the probability amplitude that Alice and Bob get opposite results is proportional to [itex]cos(\frac{\beta - \alpha}{2})[/itex], which means that it's 0 when [itex]\alpha - \beta = \pi[/itex].
Look, the whole point of the first post was to reproduce the EPR spin-1/2 joint probability function.
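The two quoted amplitudes are easy to check numerically (a quick verification sketch, not part of the original post): squaring them gives a normalized joint distribution, and the same-result probability vanishes when [itex]\alpha = \beta[/itex].

```python
import math

def psi(A, B, alpha, beta):
    """The quoted amplitudes; the overall +- sign follows A."""
    d = (beta - alpha) / 2
    if A == B:
        return A * (1j / math.sqrt(2)) * math.sin(d)  # +- (i/sqrt 2) sin
    return A * (1 / math.sqrt(2)) * math.cos(d)       # +- (1/sqrt 2) cos

def prob(A, B, alpha, beta):
    """Born rule: probability = |amplitude|^2."""
    return abs(psi(A, B, alpha, beta)) ** 2

alpha, beta = 0.7, 1.9
# The four outcome probabilities sum to 1 for any settings ...
total = sum(prob(A, B, alpha, beta) for A in (+1, -1) for B in (+1, -1))
# ... and same results are impossible at equal settings.
same_at_equal = prob(+1, +1, 1.2, 1.2) + prob(-1, -1, 1.2, 1.2)
```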
 
  • #19
Ok, I thought that answer would remove my confusion. This is NOT a hidden-variables model of the type addressed by Bell's theorem.

stevendaryl said:
It certainly is. The resulting probability amplitude is (the very first post):
•If [itex]A=B=\pm 1[/itex], then [itex]\psi(A, B|\alpha, \beta) = \pm \frac{i}{\sqrt{2}} sin(\frac{\beta-\alpha}{2})[/itex]. This means that the probability amplitude that Alice and Bob both get the same result is proportional to [itex]sin(\frac{\beta-\alpha}{2})[/itex], which means it is zero when [itex]\alpha = \beta[/itex].

•If [itex]A=-B=\pm 1[/itex], then [itex]\psi(A, B|\alpha, \beta) = \pm \frac{1}{\sqrt{2}} cos(\frac{\beta-\alpha}{2})[/itex]. This means that the probability amplitude that Alice and Bob get opposite results is proportional to [itex]cos(\frac{\beta - \alpha}{2})[/itex], which means that it's 0 when [itex]\alpha - \beta = \pi[/itex].

These are the amplitudes which, in the usual Bell HV attempt, need to be achieved. One comes up with a scheme to generate A's and B's - two sequences of [itex]\pm 1[/itex]'s - which will result, ultimately, in these amplitudes (actually, the correlation function is usually aimed at). Alice's results can depend on her detector setting [itex]\alpha[/itex], the hidden variable(s), and almost anything else, except Bob's detector setting [itex]\beta[/itex]. And similarly for Bob.

But that's not the type of model you're doing. You present amplitudes - NOT sequences of detector results - which multiply (and, sum over the [itex]\lambda[/itex]'s) to give the correct joint probability (or, amplitude). The two amplitudes, for Alice and Bob, don't depend on the other's detector setting - that's good. But you don't present correct results (indeed, any results) for each individual detection, nor (a fortiori) do you use them to get to the final joint distribution (or, correlation).

stevendaryl said:
Look, the whole point of the first post was to reproduce the EPR spin-1/2 joint probability function.

Exactly. NOT the individual results for each pair of entangled photons, as in all other attempted HV models.

That's fine. On its own terms, your model works. The only problem, however, is your assumption that a similar model can't reproduce the true QM probabilities, with squares of sin and cos.

stevendaryl said:
The fact that the QM predictions violate Bell's inequality proves that there is no such hidden-variables explanation of this sort.

No. It's been well proven that there's no (local, realistic) HV model of the usual sort giving the right QM predictions. But none of that work has addressed the different type of HV model you're presenting. Your model doesn't have to worry about individual photon-by-photon results, and the two factors from Alice and Bob are represented by complex numbers, not just [itex]\pm 1[/itex]'s. It may well be that a model like that can reproduce the right QM predictions. At least no one's shown otherwise. Without such a demonstration this conclusion is not justified:

stevendaryl said:
The squaring process is in some sense responsible for the weirdness of QM correlations.
 
  • Like
Likes edguy99 and DrClaude
  • #20
secur said:
Ok, I thought that answer would remove my confusion. This is NOT a hidden-variables model of the type addressed by Bell's theorem.

Of course not. Bell proved that there was no such thing.

The point, which I made in the very first post, is that
  1. We can formulate certain mathematical rules for how we think that probability ought to work, in a local realistic model.
  2. We can prove that QM probabilities don't work that way.
  3. However, the analogous rules for QM amplitudes do work that way.
Amplitudes work for QM in the way that we would expect probabilities to work in a local hidden variables model of the sort Bell investigated. As you say, and as I said in the very first post, amplitudes don't correspond directly to anything we can measure, unlike probabilities, so it's unclear what relevance this observation has. I just thought it was interesting.
 
  • Like
Likes PeterDonis
  • #21
That the situation is local at some level could remove an element of the spookiness (of which far too much is made).
 
  • #22
stevendaryl said:
amplitudes don't correspond directly to anything we can measure
Because of an arbitrary global phase. But relative amplitudes are physically significant. Spin-statistics is an obvious example.
 
  • #23
mikeyork said:
Because of an arbitrary global phase. But relative amplitudes are physically significant. Spin-statistics is an obvious example.
It looks like you are talking about phases, not amplitudes.
 
  • #24
stevendaryl said:
The point, which I made in the very first post, is that
1.We can formulate certain mathematical rules for how we think that probability ought to work, in a local realistic model.
2.We can prove that QM probabilities don't work that way.
3.However, the analogous rules for QM amplitudes do work that way.

Amplitudes work for QM in the way that we would expect probabilities to work in a local hidden variables model of the sort Bell investigated. As you say, and as I said in the very first post, amplitudes don't correspond directly to anything we can measure, unlike probabilities, so it's unclear what relevance this observation has. I just thought it was interesting.

Yes it's interesting, and now it's a lot clearer to me, as delineated in my ...
secur said:
previous post.

Initially I thought you were talking about the standard Bell experiment hidden-variables situation. You're actually doing something a bit different, and unique, AFAIK. That's a problem with an original idea: many people will mistake it for the "same old thing" they've heard before. Reviewing the OP, the misunderstanding seems pretty natural. On the plus side, we've been able to bring out some of the subtleties of the standard Bell HV model by contrast. Also, no doubt others thought the same as I did, so it was worthwhile to straighten that out. Thanks for the stimulating thread!
 
  • Like
Likes Jilang
  • #25
mfb said:
It looks like you are talking about phases, not amplitudes.
Yes, of course; the magnitude is already physically significant in giving the probability.
 
  • #26
Just to expand a little bit about the analogy between classical probabilities and quantum amplitudes:

Here is the (false, as shown by tests of Bell's inequality) classical nondeterministic local hidden-variables story for correlated measurements:
  1. There is a source of twin particles. Each twin particle is associated with a hidden parameter, [itex]\lambda[/itex]. The source randomly chooses a value of [itex]\lambda[/itex] according to some probability distribution [itex]P(\lambda)[/itex].
  2. Alice chooses a detector setting [itex]\alpha[/itex] for her measurement. Her detector randomly chooses a value for the output, [itex]A[/itex], according to a probability distribution [itex]P_A(A|\alpha, \lambda)[/itex], which depends on both [itex]\lambda[/itex] and [itex]\alpha[/itex].
  3. Bob chooses a detector setting [itex]\beta[/itex] for his measurement. His detector randomly chooses a value for the output, [itex]B[/itex], according to a probability distribution [itex]P_B(B|\beta, \lambda)[/itex], which depends on both [itex]\lambda[/itex] and [itex]\beta[/itex].
  4. Steps 2&3 are independent, so the joint probability is just a product of the individual probabilities: [itex]P(A,B|\alpha, \beta, \lambda) = P_A(A|\alpha, \lambda) P_B(B|\beta, \lambda)[/itex].
  5. Since Alice and Bob don't know [itex]\lambda[/itex], we average over all possible values, weighted by the probability [itex]P(\lambda)[/itex], to get a correlated joint probability distribution: [itex]P(A,B|\alpha, \beta) = \sum_\lambda P(\lambda) P(A,B|\alpha, \beta, \lambda)[/itex]
So this story would explain the correlation in Alice's and Bob's measurements as being due to a common (though unknown) hidden variable, [itex]\lambda[/itex]. That's what a local hidden variable theory would do, if there were one. Note that even though this model is nondeterministic, all choices being made---which value of [itex]\lambda[/itex], which value of [itex]A[/itex], which value of [itex]B[/itex]---are made using only local information.

Here's the analogous story for amplitudes:
  1. There is a source of twin particles. Each twin particle is associated with a hidden parameter, [itex]\lambda[/itex]. The source randomly chooses a value of [itex]\lambda[/itex] according to some amplitude [itex]\psi(\lambda)[/itex].
  2. Alice chooses a detector setting [itex]\alpha[/itex] for her measurement. Her detector randomly chooses a value for the output, [itex]A[/itex], according to an amplitude [itex]\psi_A(A|\alpha, \lambda)[/itex], which depends on both [itex]\lambda[/itex] and [itex]\alpha[/itex].
  3. Bob chooses a detector setting [itex]\beta[/itex] for his measurement. His detector randomly chooses a value for the output, [itex]B[/itex], according to an amplitude [itex]\psi_B(B|\beta, \lambda)[/itex], which depends on both [itex]\lambda[/itex] and [itex]\beta[/itex].
  4. Steps 2&3 are independent, so the joint amplitude is just a product of the individual amplitudes: [itex]\psi(A,B|\alpha, \beta, \lambda) = \psi_A(A|\alpha, \lambda) \psi_B(B|\beta, \lambda)[/itex].
  5. Since Alice and Bob don't know [itex]\lambda[/itex], we average over all possible values, weighted by the amplitude [itex]\psi(\lambda)[/itex], to get a correlated joint amplitude: [itex]\psi(A,B|\alpha, \beta) = \sum_\lambda \psi(\lambda) \psi(A,B|\alpha, \beta, \lambda)[/itex]
The amplitude story has one final step:

6. We compute a probability from the amplitude, according to the rule [itex]P(A,B|\alpha, \beta) = |\psi(A,B|\alpha,\beta)|^2[/itex]​
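Steps 1-6 can be made concrete with one hypothetical choice of [itex]\lambda[/itex], [itex]\psi(\lambda)[/itex], [itex]\psi_A[/itex] and [itex]\psi_B[/itex] (my own reconstruction of a singlet-state decomposition, with [itex]\lambda \in \{+1, -1\}[/itex] a hidden "up/down" label; this is not necessarily the original post's model, and phases are chosen so all amplitudes come out real, whereas the post's same-result amplitudes carry a factor of i that drops out on squaring):

```python
import math

SQ2 = math.sqrt(2)

def psi_lam(lam):
    """Step 1: psi(lambda) = +-1/sqrt(2) for lam = +-1."""
    return lam / SQ2

def psi_A(A, alpha, lam):
    """Step 2: Alice's local amplitude psi_A(A|alpha, lambda)."""
    if lam == +1:
        return math.cos(alpha / 2) if A == +1 else -math.sin(alpha / 2)
    return math.sin(alpha / 2) if A == +1 else math.cos(alpha / 2)

def psi_B(B, beta, lam):
    """Step 3: Bob's local amplitude psi_B(B|beta, lambda)."""
    if lam == +1:
        return math.sin(beta / 2) if B == +1 else math.cos(beta / 2)
    return math.cos(beta / 2) if B == +1 else -math.sin(beta / 2)

def psi_joint(A, B, alpha, beta):
    """Steps 4-5: multiply the local amplitudes, then sum over lambda."""
    return sum(psi_lam(lam) * psi_A(A, alpha, lam) * psi_B(B, beta, lam)
               for lam in (+1, -1))

def prob(A, B, alpha, beta):
    """Step 6: Born rule (these amplitudes are real, so |psi|^2 = psi^2)."""
    return psi_joint(A, B, alpha, beta) ** 2
```

Squaring recovers the singlet statistics, [itex]P(A=B) \propto sin^2(\frac{\beta-\alpha}{2})[/itex] and [itex]P(A=-B) \propto cos^2(\frac{\beta-\alpha}{2})[/itex], even though each factor inside the sum uses only local information.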

The hidden-variables amplitude story sounds as local as the hidden-variables probability story. And as a matter of fact, when people give rigorous mathematical proofs of the locality of quantum mechanics or quantum field theory, they are really showing that amplitudes behave locally, even if probabilities do not.

The screwy thing about the amplitude story is that we have an intuitive idea about what it means to choose a value according to a certain probability distribution (rolling dice, for instance), but we don't have an intuitive idea about what it means to choose a value according to a certain amplitude.
 
  • #27
stevendaryl said:
And as a matter of fact, when people give rigorous mathematical proofs of the locality of quantum mechanics or quantum field theory, they are really showing that amplitudes behave locally, even if probabilities do not.
I don't think that this is how people argue for locality of QM. The argument for locality is that a hidden parameter is not the only possible explanation for the correlations, because mathematically, the assumption of a hidden parameter is a non-trivial restriction on the set of models (i.e. hidden variable models aren't the most general models). In order for a particular model to be local, that model just needs to offer an explanation for how the correlations can come about without invoking interactions over space-like distances.
 
  • #28
stevendaryl said:
Amplitudes add in the same way that probabilities do. The reason that some amplitudes cancel others is because they aren't guaranteed to be positive.
Let's say I am giving you apples. Every time I give you apples, we describe this event with a positive (or at least non-negative) integer. Every such event can be viewed as independent, because it's different apples every time. But now let's say that the event of me giving you apples can be described by any integer (positive, negative or zero). If I give you a negative number of apples, it actually means I am taking apples from you. Obviously, the event of taking away apples is not independent of the event of giving you apples, as the same apples participate in both events.
But how would you model a "negative" click in a detector?
 
  • #29
rubi said:
I don't think that this is how people argue for locality of QM. The argument for locality is that a hidden parameter is not the only possible explanation for the correlations, because mathematically, the assumption of a hidden parameter is a non-trivial restriction on the set of models (i.e. hidden variable models aren't the most general models). In order for a particular model to be local, that model just needs to offer an explanation for how the correlations can come about without invoking interactions over space-like distances.
Without hidden variables or interactions over space-like distances, what is another explanation?
 
  • #30
Jilang said:
Without hidden variables or interactions over space-like distances, what is another explanation?
That depends on the model. There are several manifestly local quantum mechanical models. One example would be consistent histories. A careful analysis of the EPR paradox is done in the following paper:
http://scitation.aip.org/content/aapt/journal/ajp/55/1/10.1119/1.14965

[Mentor's note: This post has been edited to remove a reply to a deleted post]
 
  • #31
rubi said:
That depends on the model. There are several manifestly local quantum mechanical models. One example would be consistent histories. A careful analysis of the EPR paradox is done in the following paper:
http://scitation.aip.org/content/aapt/journal/ajp/55/1/10.1119/1.14965
Sorry, I don't have a registration with that provider. Can the third alternative (fourth, sorry Mike) be summarised here?
 
  • #32
zonde said:
Let's say I am giving you apples. Every time I give you apples, we describe this event with a positive (or at least non-negative) integer. Every such event can be viewed as independent, because it's different apples every time. But now let's say that the event of me giving you apples can be described by any integer (positive, negative or zero). If I give you a negative number of apples, it actually means I am taking apples from you. Obviously, the event of taking away apples is not independent of the event of giving you apples, as the same apples participate in both events.
But how would you model a "negative" click in a detector?
Zonde, SD already stressed that it is not the amplitude that gets measured. You don't need to worry about negative clicks.
 
  • #33
zonde said:
But how would you model a "negative" click in a detector?
We don't. We have negative (or even complex) amplitudes for positive or zero numbers of clicks.

Stevendaryl's point about us not having an intuition for what it means to select a result according to an amplitude, as opposed to a probability, is looking pretty good right now...
 
  • #34
@stevendaryl's approach seems to be unaffected by this issue. It still works if we treat the amplitude the normal way, when it comes to selecting an actual result, since that's not crucial in his scheme. I.e., square the amplitude (complex norm) and use that as the probability. Perhaps I'm missing something.
 
  • #35
stevendaryl said:
The screwy thing about the amplitude story is that we have an intuitive idea about what it means to choose a value according to a certain probability distribution (rolling dice, for instance), but we don't have an intuitive idea about what it means to choose a value according to a certain amplitude.

Can we make the description of the intuitive difficulty more precise?

Mathematically, it is easy to imagine choosing a value according to any sort of input variable. You just need an algorithm that maps values of the input to a value that defines a probability. You can use that probability to make your final choice.

So doesn't the intuitive problem begin in steps 4 or 5 instead of in steps 1 and 2?
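The algorithm described in the post above - map the input (here, amplitudes) to probabilities, then choose classically - can be written out directly (`sample_from_amplitudes` is a hypothetical helper of my own, not anything from the thread):

```python
import math
import random

def sample_from_amplitudes(amps, rng):
    """amps: dict mapping outcome -> complex amplitude (assumed normalized).
    Returns one outcome, chosen with probability |amplitude|^2 (Born rule)."""
    outcomes = list(amps)
    weights = [abs(amps[o]) ** 2 for o in outcomes]
    return rng.choices(outcomes, weights=weights, k=1)[0]

# Example: equal-magnitude amplitudes for spin up/down; the relative phase
# plays no role in this final sampling step.
amps = {+1: 1 / math.sqrt(2), -1: 1j / math.sqrt(2)}
rng = random.Random(0)
draws = [sample_from_amplitudes(amps, rng) for _ in range(10_000)]
freq_up = draws.count(+1) / len(draws)  # approaches 0.5 for large samples
```

Note that the amplitude only enters through its squared magnitude here, which is exactly the point: the sampling step itself is ordinary probability.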
 

1. What are amplitudes and how do they relate to probabilities?

Amplitudes are complex numbers that represent the strength and phase of a quantum state. The square of the amplitude gives the probability of measuring that state in a specific outcome. This is known as the Born rule.

2. What is the EPR (Einstein-Podolsky-Rosen) paradox?

The EPR paradox is a thought experiment that highlights the concept of quantum entanglement. It involves two particles that are entangled, meaning their properties are correlated even when they are separated by large distances. This violates the principle of local realism, which states that objects must have definite properties independent of observation.

3. How does quantum entanglement relate to the concept of non-locality?

Quantum entanglement is a phenomenon in which two or more particles become connected in such a way that the state of one particle depends on the state of the other, even when they are separated by large distances. This connection is known as non-locality, as it implies that information is being transmitted faster than the speed of light.

4. Can the EPR paradox be resolved?

There is no consensus on whether the EPR paradox can be resolved. Some interpretations of quantum mechanics, such as the Copenhagen interpretation, accept non-locality as a fundamental aspect of the universe. Others, such as the many-worlds interpretation, propose that the paradox can be resolved by considering all possible outcomes as equally real.

5. How is the concept of entanglement used in quantum computing?

Quantum computing takes advantage of the phenomenon of entanglement to perform calculations that would be impossible with classical computers. By manipulating the entangled states of quantum bits (qubits), quantum computers can perform tasks such as factoring large numbers and simulating complex quantum systems more efficiently than classical computers.
