Local Realism After Bell: Can Quantum Mechanics and Local Realism Coexist?

  • #1
DrChinese
I will split this post up to make it easier to reply to, if needed.

I. Background
=============
The EPR Paradox (1935) left the scientific community in a divided state. Some saw QM, even then a successful theory, as the likely victor in the debate over realism. But many remained unconvinced, and felt that hidden variables existed which could restore determinism in a classical fashion. But then Bell (1964) stunned the theoretical community with his paper on EPR. After, it became clear to the majority of physicists that QM could not be reconciled with the classical form of determinism, herein called local realism (LR). Bell's Theorem demonstrates that the following 3 things cannot all be true:

i) The experimental predictions of quantum mechanics (QM) are correct in all particulars
ii) Hidden variables exist (particle attributes really exist independently of observation)
iii) Locality holds (a measurement at one place does not affect a measurement result at another)

QM predicts that certain LR scenarios, if they existed, would have negative likelihood of occurrence (in defiance of common sense). Any LR theory - in which ii) and iii) above are assumed to be true - will make predictions for these scenarios that are significantly different from the QM predicted values. QM does not acknowledge the existence of these scenarios, often called hidden variables (HV or LHV), so it does not have a problem with this consequence of Bell's Theorem.

Why would Bell mean the end of LR? The reason is that by 1964, QM had scored a long series of victories. Its formalism explained things that were previously unimaginable, with ever-increasing accuracy. It made prediction after prediction in advance of experiment. When the experimental technology eventually caught up, QM would be shown to be correct yet again. It had endured no significant predictive failures. So Bell seemed like a sure bet.

Eventually, experimental technology caught up yet again, and Aspect soundly confirmed Bell in favor of QM. LR was ruled out by 9 standard deviations by 1982, and 30 by 1998. No surprise in that.

During recent years, a chorus of tenacious opponents has attempted to argue that Aspect is NOT, after all, evidence in favor of QM and against LR. They soundly reject Aspect with claims of "accidentals" and failure to achieve "fair sampling". Maybe. Let's take a look under the hood.


II. After Bell, before Aspect
=============================
Let's roll things back to before Aspect's confirming experiments. As noted above, QM was already held in such high esteem (and why not?) that few expected to see its predictions rejected in favor of LR. After all, Bell showed that the assumption of independent reality of hidden variables led to the prediction of negative probabilities for the results of certain outcomes in spin tests of photons in a correlated singlet state. This was a sure sign to most that these cases could not exist. So we already had the following:

a. QM was a star theory that could do no wrong (perhaps a slight exaggeration :)
b. QM was at odds with LR, and the two could not peacefully coexist.
c. The QM predictions matched classical optics with its cos^2 function.
d. There were no specific/consistent alternative predictions made by local realists.

I could arguably stop here, and there is already plenty of evidence for QM and against LR.


III. The Setup to be Discussed
==============================
See elsewhere for more information on Bell tests. It is enough to state that we will refer to the same setup used in the thread "Bell's Theorem and Negative Probabilities".

In the Realistic view, we could imagine that the spin answers to polarizer settings A, B and C all exist at the same time - even if we could only measure 2 at a time. Therefore, there are 8 possible outcome probabilities, cases [1..8] below, that must total to 100% (probability=1). This is "common sense" and is a requirement of LR (but not QM). The permutations are:

[1] A+ B+ C+ (and the likelihood of this is >=0)
[2] A+ B+ C- (and the likelihood of this is >=0)
[3] A+ B- C+ (and the likelihood of this is >=0)
[4] A+ B- C- (and the likelihood of this is >=0)
[5] A- B+ C+ (and the likelihood of this is >=0)
[6] A- B+ C- (and the likelihood of this is >=0)
[7] A- B- C+ (and the likelihood of this is >=0)
[8] A- B- C- (and the likelihood of this is >=0)

With "relative" settings for A, B and C of 0, 45 and 67.5 degrees respectively, we solve for 2 special cases [SC] - essentially pointed out by Bell:

SC = [3] + [6]

= (X + Y - Z) / 2

where

X = correlations between measurements at A and C, a relative difference of 67.5 degrees
Y = non-correlations between measurements at A and B, a relative difference of 45 degrees
Z = correlations between measurements at B and C, a relative difference of 22.5 degrees

and leading to (where "QM." is prefixed to the Quantum Mechanical prediction, and "LR." is prefixed to the Local Realistic prediction):

[QM.SC] = -.1036 (prediction per QM, if the cases existed)

[LR.SC] >= 0 (prediction per LR, simply by assuming the cases existed)
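The identity SC = [3] + [6] = (X + Y - Z) / 2 is pure bookkeeping over the 8 cases, and is easy to check numerically. A minimal sketch (the case-to-term mapping below is my own reconstruction from the definitions of X, Y and Z above):

```python
import random

def check_identity():
    # Draw a random LR probability assignment over the 8 cases [1..8].
    p = [random.random() for _ in range(8)]
    total = sum(p)
    p = [x / total for x in p]  # the 8 cases must total probability 1
    p1, p2, p3, p4, p5, p6, p7, p8 = p

    # X: A and C give the same sign -> cases [1], [3], [6], [8]
    X = p1 + p3 + p6 + p8
    # Y: A and B give different signs -> cases [3], [4], [5], [6]
    Y = p3 + p4 + p5 + p6
    # Z: B and C give the same sign -> cases [1], [4], [5], [8]
    Z = p1 + p4 + p5 + p8

    SC = p3 + p6  # the two special cases [3] and [6]
    assert abs(SC - (X + Y - Z) / 2) < 1e-12
    return SC

# SC is automatically >= 0 for any LR assignment -- the constraint QM violates.
for _ in range(1000):
    assert check_identity() >= 0
```

Every random trial satisfies the identity, and SC >= 0 falls out for free, since it is a sum of two probabilities.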

There ended up being significant discrepancies between the predicted values of X, Y and Z that led to [QM.SC] and [LR.SC].

QM predicts QM.X = .1464 while LR predicts* LR.X > .2500 (big difference)
QM predicts QM.Y = .5000 while LR predicts* LR.Y = .5000 (no difference)
QM predicts QM.Z = .8536 while LR predicts* LR.Z < .7500 (big difference)

* Though not specifically required, the assumptions used here are that LR.X + LR.Z = 1 and QM.Y = LR.Y; strictly, any values of LR.X, LR.Y and LR.Z are acceptable IF LR.SC = (LR.X + LR.Y - LR.Z) / 2 >= 0. Let's assume these values, which come closest to the QM predictions and therefore have the smallest discrepancies, are the predictions of LR.
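For the record, the QM numbers above come straight from the cos^2 rule at the stated relative angles; a quick sketch:

```python
import math

def corr(theta_deg):
    # QM correlation for a relative angle: the cos^2 rule from classical optics.
    return math.cos(math.radians(theta_deg)) ** 2

QM_X = corr(67.5)      # correlation A-C at 67.5 degrees
QM_Y = 1 - corr(45.0)  # NON-correlation A-B at 45 degrees
QM_Z = corr(22.5)      # correlation B-C at 22.5 degrees

QM_SC = (QM_X + QM_Y - QM_Z) / 2

print(round(QM_X, 4), round(QM_Y, 4), round(QM_Z, 4))  # 0.1464 0.5 0.8536
print(round(QM_SC, 4))  # -0.1036
```

The negative value is exactly what no assignment of 8 non-negative case probabilities can reproduce.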

I note also that Caroline Thompson's LR model, the "Chaotic Ball" makes reference to a roughly linear function which matches the above values closely (see her page 3, below formula 1 and her figure 14).


IV. Tests of X, Y and Z
=======================
So all we need to do is measure X, Y and Z and we will solve the puzzle. Or do we? The QM predictions match a known optics formula: cos^2(angle). LR matches no otherwise known spin statistics. In other words, QM is working from a mathematical formalism while LR has no specific values to offer us. QM is an otherwise successful theory; LR has long been abandoned as a working tool. Why bother with any tests at all?

Scientists, being a thorough lot, tested away - of course. Through the years, Aspect and others performed tests of X, Y and Z in various forms and combinations. The results consistently showed substantial, ever-increasing deviations from the LR expectation values that had the smallest variance with QM.

I could arguably stop here, and there is already plenty of evidence for QM and against LR.
 
  • #2
V. Loopholes in the Aspect and other tests
==========================================
The supporters of LR have cried foul, claiming there are loopholes in the Aspect tests. Even if so, that leaves them little to fall back on. A loophole in Aspect provides NO support for the LR predictions! A Pyrrhic victory, if the loopholes are valid. But let's ask a few questions first.

Fair sampling loophole: The tested photons are not a fair sample of the universe, the photons that get counted are ones that just happen to be tilted towards the QM camp. Guess what, this has no positive implications for LR anyway. You would need to show first that a fairer sample leads to a result compatible with LR. There is no such evidence! The loophole is more theoretical - the sample could be biased, and if biased, the results could be compatible with LR (that is the argument, anyway). But it is difficult to believe that the photons know to be biased against LR in the first place.

Subtraction of accidentals loophole: Detections that should be included in the sample are ignored - essentially the reverse of the fair sampling loophole. This has no positive implications for LR either. You would again need to show first that a fairer sample leads to a result compatible with LR. There is no such evidence! The loophole is also theoretical - the sample could be biased, and if biased, the results could be compatible with LR (that is the argument, anyway). But it is difficult to believe that the photons know to be biased against LR in the first place.

And this is the coup de grace for the loopholes: Why, with loopholes present that are alleged to be statistically significant, do we end up with the following relationships:

LR.X + LR.LoopholeAdjustment.X = QM.X = .1464
LR.Y + LR.LoopholeAdjustment.Y = QM.Y = .5000
LR.Z + LR.LoopholeAdjustment.Z = QM.Z = .8536

or more strictly:

LR.SC + LR.LoopholeAdjustment.SC = QM.SC = -.1036

which if we minimize the LR.LoopholeAdjustment.SC becomes:

LR.LoopholeAdjustment.SC = QM.SC = -.1036

...with ever increasing sampling precision, now up to 30 standard deviations! And the loophole is the QM value in its entirety!
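To spell out the arithmetic of that claim, here is a quick sketch, using the boundary LR values from section III (the ones that minimize the needed adjustment):

```python
# Predicted values from sections III-IV, and the most QM-friendly LR values
# still allowed by the constraint LR.SC >= 0:
QM = {"X": 0.1464, "Y": 0.5000, "Z": 0.8536, "SC": -0.1036}
LR = {"X": 0.2500, "Y": 0.5000, "Z": 0.7500, "SC": 0.0}

# The "loophole adjustment" each LR value needs in order to match the data:
adjustment = {k: round(QM[k] - LR[k], 4) for k in QM}
print(adjustment)  # {'X': -0.1036, 'Y': 0.0, 'Z': 0.1036, 'SC': -0.1036}

# With LR.SC minimized to 0, the adjustment IS the QM value in its entirety.
assert abs(adjustment["SC"] - QM["SC"]) < 1e-9
```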

The LoopholeAdjustments above create a lot more problems for LR than they are intended to solve. Rather than an argument for LR, this is actually an argument against LR. We accept that loopholes are possible, and could even be significant, but none has been demonstrated to have an actual specific value (merely a range of possible values). And the adjustment would need to have exactly the value above to match LR anyway. Rejecting QM certainly does not imply LR is valid, as you can see from the above. The LR side actually requires an extremely tight correlation between the loopholes and the QM predictions.

It is as if one argued that the proton actually had less mass than an electron - but that every experiment showing otherwise had a loophole that brought the result back to exactly the observed value. That is not science.

So I could arguably stop here, and there is still plenty of evidence for QM and against LR.


VI. Conclusion
=============
We all understand the idea that there may be experimental problems that prevent measured values from getting arbitrarily close to predictions. But no matter how far you choose to go in evaluating the evidence - stopping anywhere above, for example - it is easy to see that QM has a lot more in its favor than LR.

In my own opinion, the mere existence of the Bell Theorem is probably enough to exclude LR, and Aspect is definitely icing on the cake.

-DrChinese
 
  • #3
For Caroline Thompson

Caroline,

A. Please tell me what probability values you predict for correlations at 22.5, 45 and 67.5 degrees, where 0 = perfectly correlated. I estimate that you give these values (which I label X, Y and Z), per your figure 14 of The Chaotic Ball, as:

X = .7500
Y = .5000
Z = .2500

respectively, is that correct? If so, in my opinion, these would conform to the requirements of LR per Bell's Theorem. Do you agree?
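For reference: those values correspond to a correlation falling linearly from 1 at 0 degrees to 0 at 90 degrees, which sits exactly on the LR boundary. A quick sketch, keeping the section III labeling (X is the correlation at 67.5 degrees, Y the non-correlation at 45 degrees, Z the correlation at 22.5 degrees):

```python
def linear_corr(theta_deg):
    # Linear LR model: correlation falls from 1 at 0 degrees to 0 at 90 degrees.
    return 1.0 - theta_deg / 90.0

LR_X = linear_corr(67.5)      # 0.25, correlation at 67.5 degrees
LR_Y = 1 - linear_corr(45.0)  # 0.50, NON-correlation at 45 degrees
LR_Z = linear_corr(22.5)      # 0.75, correlation at 22.5 degrees

LR_SC = (LR_X + LR_Y - LR_Z) / 2
print(LR_SC)  # 0.0 -- exactly on the LR boundary LR.SC >= 0
```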


B. Please list all loopholes that apply in the measurement process described above a la Aspect, and the values of their error at the above angles.


C. Please describe which currently hidden variables account for the differences between your figures 14-17 of the Chaotic Ball, and how a physical experiment a la Aspect might confirm or deny the existence of such currently unknown variables - I believe you refer to them as "bands" and "imperfections".

(Presumably, B and C combined will fulfill the requirements of my LR.LoopholeAdjustment.SC in section V of the post above.)


D. Please explain why the same loopholes and Chaotic Ball model you describe above (B. and C.) do not apply to other experimental tests of QM, if indeed that is the case.


Thanks, and feel free to answer in pieces as your time allows,

-DrC
 
  • #4
Some reflections on EPR experiments and their opponents

I would like to throw in a few reflections on EPR experiments and the main stream attitude vs. the "local realist" defenders.

A point to make (I think in accordance with what DrChinese says) concerns a misunderstanding of how science works. Science doesn't work "by mathematical proof" which, once and for all, establishes undeniable truths; it works more like a court case, where evidence is provided pro and contra to make different theories stand out. In the end, personal and social judgement play a role.
In a way, the local realist defenders are right when people claim, erroneously, that it has now been proved, once and for all, that local realism is absolutely impossible for the rest of the future of science; that is an overstatement which is not scientific. Nothing is ever established "once and for all". But pointing out this rhetoric is by itself not very interesting, and what LR defenders forget to point out is that the case IS massively made in favor of the QM predictions.

There is no point in trying to establish particular LR models which have identical raw data predictions for a certain class of EPR experiments IF THESE MODELS ARE MADE UP TO DO EXACTLY THAT. The only thing you establish that way is that the case is not 100% mathematically closed. But we knew that already. In order to be fruitful, you have to give a global physical theory from which you can derive these models, and from which you can also derive all the other, verified, predictions of quantum theory.
In that context, things like "stochastic electrodynamics" have a higher merit, but before using them to tackle EPR setups, they first have to survive more modest checks: atomic physics, particle physics, solid state physics.
This work can be interesting, and for the following reason: if an equivalence for these matters IS established, maybe it gives a different view on certain problems and opens up different strategies to solve problems which, actually, turn out to be too hard. But given the overwhelming case in favor of QM, there's a really big chance that it won't work out.

Nevertheless, ongoing investigation of these EPR situations IS important. I think the main reason is that it indicates that one way to implement the measurement process, namely a small non-linearity which gives rise to collapse on the mesoscopic scale, now has a strong argument against it. Due to decoherence, it will be extremely difficult to maintain superposition in a small system beyond the mesoscopic scale, so "local quantum theory" doesn't allow you to distinguish decohered linear QM predictions from "true collapse models"; EPR is the way out. Indeed, two entangled photons, 50 km apart, cannot be considered to be part of a mesoscopic system!
As such, EPR is not so much an indication against local realism, but is more an indication against a physical mesoscopic collapse of the wavefunction.
 
  • #5
DrChinese said:
I. Background
... Eventually, experimental technology caught up yet again, and Aspect soundly confirmed Bell in favor of QM.
Aspect in his PhD thesis did not claim this! He kept a fairly open mind back in 1983, immediately after his experiments.

DrChinese said:
LR was ruled out by 9 standard deviations by 1982, and 30 by 1998. No surprise in that.
Again and again we hear this kind of fact, but surely even someone with no statistical training can see that if the experiment is biased a statement of its accuracy -- the precision of its estimates, made on the basis of the statistical spread of the observation -- is meaningless.

DrChinese said:
During recent years, a chorus of tenacious opponents has attempted to argue that Aspect is NOT, after all, evidence in favor of QM and against LR. They soundly reject Aspect with claims of "accidentals" and failure to achieve "fair sampling". Maybe. Let's take a look under the hood.
My analysis of data from Aspect's PhD thesis shows that the subtraction of accidentals produces very severe bias, sufficient to account for the violation of the Bell tests in two of his experiments (the first and last). Though the data is available only from the first, the fact that there were more accidentals in the last means that I am pretty confident this was even more biased than the one I could analyse. See http://arXiv.org/abs/quant-ph/9903066 .

This data unfortunately was not recognised as important until I came on the scene, in about 1998. Marshall et al in 1983 had realized that the subtraction was illegal but had not looked into the figures.

I have no other quarrel with your description of the status quo until I come to
DrChinese said:
c. The QM predictions matched classical optics with its cos^2 function.
Since when was a cos^2 function the same as a (cos^2 + constant) one? When you carry out the local realist calculation (for the perfect case), the usual classical assumption re Malus' Law leads to a curve that has a constant added.

DrChinese said:
d. There were no specific/consistent alternative predictions made by local realists.
Why would you expect there to be? Every actual experiment is different. The detailed local realist predictions depend (as I explained elsewhere in this forum earlier today) on the actual relationship that replaces the ideal "Malus' Law" one.

DrChinese said:
III. The Setup to be Discussed
==============================
See elsewhere for more information on Bell tests. It is enough to state that we will refer to the same setup used in the thread "Bell's Theorem and Negative Probabilities" ...
I'm afraid I have no time and/or interest in checking your algebra re negative probabilities. The original Bell argument, supplemented by Clauser and Horne's reasoning in Physical Review D, 10, 526-35 (1974), is enough for me.

DrChinese said:
In the Realistic view, we could imagine that the spin answers to polarizer settings A, B and C all exist at the same time - even if we could only measure 2 at a time. Therefore, there are 8 possible outcome probabilities, cases [1..8] below, that must total to 100% (probability=1).
Aha! Even without looking at the details, this is probably where you are in error. In no actual experiment have we had 100% efficiency. There have always (other than in the Rowe et al experiment, but that does not count since we did not have sufficiently separated particles) been lots of non-detections. See my Chaotic Ball model for why this matters. (http://arxiv.org/abs/quant-ph/0210150 )

DrChinese said:
I note also that Caroline Thompson's LR model, the "Chaotic Ball" makes reference to a roughly linear function which matches the above values closely (see her page 3, below formula 1 and her figure 14).
Yes indeed. This is the standard LR prediction. It applies, though, only to the 100% efficiency case.

DrChinese said:
The QM predictions match a known optics formula: cos^2(angle).
Here we go again, with this claim that it matches known optics. The known optics formula you are thinking of applies to two polarisers placed one after the other, "in series". What the Bell test is concerned with is two polarisers "in parallel". The "coincidence rates" that are measured are not expected, under classical theory, to obey the same logic as when the polarisers are in series.

DrChinese said:
Scientists, being a thorough lot, tested away - of course. Through the years, Aspect and others performed tests of X, Y and Z in various forms and combinations. The results consistently showed substantial, ever-increasing deviations from the LR expectation values that had the smallest variance with QM.
Small variance with large bias nullifies any claim of significance! Yes, the results are reproducible, but what I claim is that the bias is being reproduced. The source of bias is not the same in every Bell test experiment, but I claim that one or more of the loopholes are operational in every one.

I have left myself no time for responding to the other recent messages in this thread. I can answer one of your questions, though.

You ask if the loopholes apply in other areas of QM, and the answer is "Yes". Something that is effectively the fair sampling loophole applies in many experiments where coincidence rates are analysed. The subtraction of accidentals, though harmless in many contexts, might equally be important when coincidences are analysed. In many experiments it is assumed that the local realist model to be refuted involves the assumption that the detection rate after a polariser is proportional to cos^2(angle). This is not a necessary assumption. Relax it and the options for local realism are substantially increased.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #6
double slit

I don't understand why you make such an effort to prove that experiment has not really disproved local realism. These experiments that try to disprove local realism via the Bell inequality are, it seems, very complicated and difficult to do. And it might well be that mistakes have been made in some of them. I don't know. But as I understand things, QM has to be wrong for local realism to be right. QM doesn't seem to be wrong.

Also, if we are to have local realism, how could such a theory explain the double slit experiment with single electrons? If local realism is right, then the electron always has a well defined position, which implies it travels through only one of the two slits; and further, since we only allow local influences, it would have to be unaffected by whether the other slit is open or closed. We all know that the interference pattern disappears when one of the slits is closed. As I understand things now, the double slit experiment alone is enough to show me that local realism is impossible.

The other thing I think about is the passage of particles through Stern-Gerlach machines. If the SG machine is open, not registering which way the particle went, then the wave function of the particle is unaffected. This seems to imply that the particle didn't pass through any one of the open paths. For if we knew it did, then the wavefunction would be collapsed.

If there exists one fundamental theory that everything can be derived from, one truth, then that theory can't have an explanation; for if it did, its explanation would have to be that theory itself.

Buddhists seem to be aware of this, since they feel that the universe must have existed before God. Thus the universe can't have an explanation, since we call the explanation God. That's why I feel that non-realism isn't that bad. :cool:
 
  • #7
DrChinese said:
The EPR Paradox (1935) left the scientific community in a divided state. Some saw QM, even then a successful theory, as the likely victor in the debate over realism. But many remained unconvinced, and felt that hidden variables existed which could restore determinism in a classical fashion. But then Bell (1964) stunned the theoretical community with his paper on EPR. After, it became clear to the majority of physicists that QM could not be reconciled with the classical form of determinism, herein called local realism (LR). Bell's Theorem demonstrates that the following 3 things cannot all be true:

i) The experimental predictions of quantum mechanics (QM) are correct in all particulars
ii) Hidden variables exist (particle attributes really exist independently of observation)
iii) Locality holds (a measurement at one place does not affect a measurement result at another)

As I understand it, this is a correct statement of Bell's theorem. And since I agree with the others here who have argued that the experiments are fairly conclusive in demonstrating violation of Bell's inequalities, I will simply assume premise (i). That leaves (ii) and (iii) as possibly to blame for the violation: either there are no hidden variables, or the Bell locality condition fails.

But this statement of Bell's result is also misleading, because it leaves out half of what Bell himself considered to be the full argument for his conclusion. Specifically, it leaves out the EPR argument, which showed that orthodox QM itself is either incomplete or nonlocal. That is, the EPR argument shows that either some hidden variable theory is true (i.e., QM is incomplete) or QM is nonlocal. This part of the argument is so straightforward, it's shocking that Bohr and others couldn't understand it. Two particles fly off back to back; when you measure some property on one of the particles, you immediately learn about some property on the other distant particle; if you didn't affect that distant particle by measuring the nearby particle, then it must have had that property all along; if you did affect it, that constitutes a nonlocal interaction. So there's the EPR dilemma.

But notice what happens when you combine the EPR argument with Bell's theorem. EPR says either "completeness" or "locality" is false. Bell says either "incompleteness" or "locality" is false. It might be clarifying to rephrase the two claims like this: EPR says that the completeness assumption forces us to concede that QM is nonlocal: completeness --> nonlocality. Bell on the other hand shows that the assumption that QM is incomplete leads inevitably to the conclusion of nonlocality: incompleteness --> nonlocality.

Obviously, when combined, these two arguments yield the inescapable conclusion of nonlocality. So it is really misleading to claim that what the Bell theorem/tests prove is the non-viability of something called "local realism". Whatever "realism" is supposed to mean here, it apparently has nothing to do with it. The EPR argument + Bell's theorem + experiment show the non-viability of locality, period. That is, any theory which is going to be consistent with experiment (i.e., with the predictions of quantum theory) is going to have to be nonlocal.
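The two-horned structure of this argument can even be checked mechanically as a truth table over the two propositions in play; a small sketch:

```python
from itertools import product

# Two propositions about nature: is QM complete? is nature local?
survivors = []
for complete, local in product([True, False], repeat=2):
    epr_holds = (not complete) or (not local)   # EPR: completeness -> nonlocality
    bell_holds = complete or (not local)        # Bell: incompleteness -> nonlocality
    if epr_holds and bell_holds:
        survivors.append((complete, local))

# Both surviving worlds -- complete or not -- are nonlocal.
print(survivors)  # [(True, False), (False, False)]
assert all(not local for complete, local in survivors)
```

Whichever horn you take on completeness, locality is eliminated.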

People seem to know this about hidden variable theories -- e.g., the only way Bohm's theory is able to predict the right answers for these kinds of experiments is because of its nonlocality. But people seem unwilling to let themselves recognize that orthodox QM is equally non-local. Indeed, I think Dr. Chinese said elsewhere in this thread that he considers himself a "local non-realist." Well, evidently that means he doesn't agree with quantum mechanics, because that theory is non-local. It's right there in the collapse of the wave function, and especially obvious for EPR/Bell states: when you measure a property of one of the particles, the state attributed to the other distant particle changes. The probabilities for various outcomes of various possible measurements on that distant particle are different after the nearby measurement than they were before it. But that is *precisely* the kind of thing Bell's locality condition forbids.

Put slightly differently, what I am saying is that QM itself violates "Bell locality" (the factorizability condition Bell used in deriving the inequalities). And it is simply inconsistent, therefore, to dismiss hidden variable theories on the grounds that they must violate this condition, unless one is also willing to dismiss orthodox QM on the same grounds, for it also violates the condition. Of course, dismissing both of those options leaves one in a pickle -- it would then be impossible to find *any* theory to account for the results. So the correct option is to recognize this, and to acknowledge that one's theory will have to include nonlocality because nature is nonlocal. And then it becomes far less clear whether one should prefer standard QM or something like Bohm's theory.

Of course, I agree with DrChinese that the evidence against local hidden variable theories is overwhelming, if not 100% airtight. So what I am objecting to here is specifically the implication that we learn something about hidden variable theories more generally (or "realism" or whatever) from this whole issue. We don't. We learn that nature is non-local, and that's it.

I hope that is clarifying or at least provocative enough to generate some discussion. =)

ttn
 
  • #8
trosten said:
Also say we are to have local realism how could such a theory explain the double slit experiment with single electrons? If local realism is to be right then the electron will always have a well defined position that would imply it travels through only one of the two slits and further since we only allow local influences it would have to be unaffected of whether the other slit was open or closed! We all know that the interference patterns disappears when one of the slits was closed. As I understand things now the double slit experiment is enough for me to show that local realism is impossible.

That's easy. Bohm's theory for a single electron is perfectly local, and has no trouble explaining the 2 slit experiment.

ttn
 
  • #9
ttn said:
We learn that nature is non-local, and that's it.

The part which is obviously non-local in quantum theory is the collapse of an entangled state because that influences space-like separated subsystems.
But you don't need to collapse that state, if you can accept something which is close to "many-worlds".
Imagine the entangled state to be:
|+>|-> - |->|+>

Now consider that Alice measures the first subsystem, and Bob the second, 2 miles away. If we say that their "measurement" is just an entanglement, we get our Alice and Bob in an entangled state:
|A+>|B-> - |A->|B+>

But there's no way to find out yet!
They have to travel to my place, and then I can measure their states (which correspond to their measurement results). I can only look at both of their results when they are "local" to me. The result of this measurement comes from the interference of the different |A>|B> terms, locally.

So EPR situations can be considered local, if you accept that measurements can only occur in one single place (yours!), and that "far away done measurements" are not collapsing measurements but only entanglements of the subsystems with the measurement result carrier.
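One feature of this picture is easy to verify directly from the state written above: each side's local statistics are 50/50 regardless of the far setting, so nothing observable changes at Alice's site when Bob measures. A small sketch (modelling |+> and |-> as the two components of a 2-level system measured in a rotated analyser basis, an illustrative assumption on my part):

```python
import math

def basis(theta):
    # Analyser basis at angle theta: the (pass, block) unit vectors.
    c, s = math.cos(theta), math.sin(theta)
    return [(c, s), (-s, c)]

# Amplitudes of |+>|-> - |->|+> (normalised) over the four product states,
# with |+> as component 0 and |-> as component 1 on each side.
psi = {(0, 1): 1 / math.sqrt(2), (1, 0): -1 / math.sqrt(2),
       (0, 0): 0.0, (1, 1): 0.0}

def joint_probs(a, b):
    # P(outcome_A, outcome_B) for analyser settings a (Alice) and b (Bob).
    probs = {}
    for ia, va in enumerate(basis(a)):
        for ib, vb in enumerate(basis(b)):
            amp = sum(va[i] * vb[j] * psi[(i, j)] for i in (0, 1) for j in (0, 1))
            probs[(ia, ib)] = amp ** 2
    return probs

# Alice's marginal is 50/50 whatever Bob's setting is: the entanglement
# lives entirely in the correlations, so no faster-than-light telephone.
for b in (0.0, 0.3, 1.1):
    p = joint_probs(0.5, b)
    assert abs(p[(0, 0)] + p[(0, 1)] - 0.5) < 1e-12
```

This is consistent with the point that the correlations only become observable when the two results are brought together.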
 
  • #10
vanesch said:
The part which is obviously non-local in quantum theory is the collapse of an entangled state because that influences space-like separated subsystems.
But you don't need to collapse that state, if you can accept something which is close to "many-worlds".

Yes, that's true. You can avoid the conclusion that nature is nonlocal by denying that experiments (including, by the way, all of the experiments which made us believe in quantum mechanics in the first place) have definite outcomes (including the specific outcomes we erroneously thought they had which made us erroneously believe in quantum mechanics).

MWI is not just a clever way to avoid having a nonlocal theory; it has implications which radically undermine virtually everything that humans have ever believed. I'm happy to see it brought into the discussion, though: it demonstrates the lengths one must go to to avoid the conclusion I argued for earlier. Personally, I think this strengthens my case: if the next best option (after admitting that nature is nonlocal) is to believe that nature is nothing like what we have always thought and to believe that literally everything you believe (including this??) is a delusion, it starts to make it seem pretty reasonable (which of course it is) to just admit nonlocality and get on with life.

ttn
 
  • #11
ttn said:
Yes, that's true. You can avoid the conclusion that nature is nonlocal by denying that experiments (including, by the way, all of the experiments which made us believe in quantum mechanics in the first place) have definite outcomes (including the specific outcomes we erroneously thought they had which made us erroneously believe in quantum mechanics).

That's too cheap a remark :smile:
I'm convinced that MWI as it stands is wrong. However, I'm saying that you have to apply the projection postulate at the very end of the processing chain (which is always your conscious observation). So I do not deny that experiments have definite outcomes, once I have personally observed them. I only consider the possibility that the outcomes of far away experiments can still exist in QM superpositions. I agree that the solipsist part of this explanation is uncomfortable, but philosophically not impossible.

it demonstrates the lengths one must go to in order to avoid the conclusion I argued for earlier. Personally, I think this strengthens my case: if the next best option (after admitting that nature is nonlocal) is to believe that nature is nothing like what we have always thought, and that literally everything you believe (including this??) is a delusion, it starts to seem pretty reasonable (which of course it is) to just admit nonlocality and get on with life.

I don't have profound conceptual problems with non-locality. The problem I have with it is that nature seems to be only half-baked nonlocal, in that this non-locality can never be used to transmit information. If nature were truly non-local, then why is it impossible in principle to make a faster-than-light telephone ?

If you accept EPR results, then there's one thing which is for sure: entangled superpositions of particles can exist over huge distances. So we should take the superposition principle very seriously. There's still a struggle, however, with what exactly is this projection postulate. Decoherence theory shows us that in most cases, it doesn't matter where we put it, as long as we put it between the system at hand and "observation". But what IS the ultimate observation ? I say that in EPR experiments, the ultimate observation is the observation of the correlations. If the price to pay is that machines, people and so on walk around in superposed states, then so be it. It is undetectable... except in EPR experiments !

Although this explanation has obvious "common sense" problems, it has one big advantage: apart from being fully compatible with everything QM has ever predicted, it keeps locality (not for locality's sake) together with EPR, and as such, indicates you why you will not be able to make a faster-than-light telephone.
It is nothing else but a daring macroscopic measurement apparatus version of the 2-slit experiment, where you need "nonlocality" to explain the "correlation" (the interference pattern observed by the detector) if you want to have the particle go through slit 1 and slit 2 ; but where you don't have that problem if you consider the "correlation" to be the result of interference of the two partial waves coming together again.
In the EPR case, we don't let partial waves of a particle interfere, we let "measurement result signals" interfere in the device that calculates the correlations.

As I said, this is my pet-interpretation of EPR, but 1) I like it and 2) it is in no contradiction with any result.
 
  • #12
Patrick, as you say this gets you into "entangled cat" paradoxes on the macroscopic level, and I am not sure decoherence can get you out. Here's a question: say we have two entangled states that are each 1 kilometer long; can they (without observation or projection) become entangled and make a state 2 kilometers long?

I guess what I am asking is, is entanglement transitive at the quantum state level.
 
  • #13
ttn said:
That's easy. Bohm's theory for a single electron is perfectly local, and has no trouble explaining the 2 slit experiment.

ttn

Is it the de Broglie-Bohm theory you refer to? If it is, I got the impression when reading a little about it that it contains a quantum potential which is in turn all-knowing, which implies the theory isn't local. Please correct me if I am wrong.
 
  • #14
trosten said:
Is it the de Broglie-Bohm theory you refer to? If it is, I got the impression when reading a little about it that it contains a quantum potential which is in turn all-knowing, which implies the theory isn't local. Please correct me if I am wrong.

Yes, the de Broglie - Bohm (dBB) theory, aka "Bohm's theory" or "Bohmian Mechanics" or whatever.

It's true, in the general N-particle case, that the quantum potential (if you choose to even formulate the theory that way, which you don't actually have to) "is all knowing", i.e., the force that potential exerts on a given particle depends on the instantaneous positions of all the other particles in the N-particle system. No doubt that means the theory is nonlocal.

But in the one particle case, the force just depends on the quantum potential at the point of the particle in question, just like the classical (say) electric force on a particle depends on the electric field at the particle's current location. So it's not nonlocal in that case.

Check out S. Goldstein's article on Bohm's theory for more information:

http://plato.stanford.edu/entries/qm-bohm/


By the way, I don't want to make a big deal over whether or not Bohm's theory is local in the one-particle case. Who really cares, after all? There's no denying its nonlocality in the general case, so it's probably just fine to say, as you did, that "the theory isn't local" and leave it at that. The point I am more concerned to make is that this cannot be used as an argument against the theory. *Any* theory that reproduces the quantum predictions, QM itself included, has to be nonlocal. That's what the combined EPR-Bell analysis proves. And, paraphrasing Bell, it is to the credit of the Bohmian theory for helping to bring this out.

ttn
 
  • #15
selfAdjoint said:
Here's a question, say we have two entangled states that are each 1 kilometer long, can they (without observation or projection) become entangled and make a state 2 kilometers long?

"Decoherence is the theory of universal entanglement" in the words of Erich Joos, one of the theory's pioneers. So I guess the entire universe is the limit to what can become entangled.
 
  • #16
Einstein Was Right

The Sept 04 issue of Scientific American is a special issue titled "Beyond Einstein". One of the articles is "Was Einstein Right?" which deals with that question in relation to quantum mechanics. The following is from that article:

"Was Einstein Right? by George Musser

Unlike nearly all his contemporaries, Albert Einstein thought quantum mechanics would give way to a classical theory. Some researchers nowadays are inclined to agree

Einstein has become such an icon that it sounds sacrilegious to suggest he was wrong. Even his notorious "biggest blunder" merely reinforces his aura of infallibility: the supposed mistake turns out to explain astronomical observations quite nicely [see "A Cosmic Conundrum," by Lawrence M. Krauss and Michael S. Turner]. But if most laypeople are scandalized by claims that Einstein may have been wrong, most theoretical physicists would be much more startled if he had been right.

Although no one doubts the man's greatness, physicists wonder what happened to him during the quantum revolution of the 1920s and 1930s. Textbooks and biographies depict him as the quantum's deadbeat dad. In 1905 he helped to bring the basic concepts into the world, but as quantum mechanics matured, all he seemed to do was wag his finger. He made little effort to build up the theory and much to tear it down. A reactionary mysticism--embodied in his famous pronouncement, "I shall never believe that God plays dice with the world"--appeared to eclipse his scientific rationality.

Estranged from the quantum mainstream, Einstein spent his final decades in quixotic pursuit of a unified theory of physics. String theorists and others who later took up that pursuit vowed not to walk down the same road. Their assumption has been that when the general theory of relativity (which describes gravity) meets quantum mechanics (which handles everything else), it is relativity that must give way. Einstein's masterpiece, though not strictly "wrong," will ultimately be exposed as mere approximation.

Collapsing Theories
In recent years, though, as physicists have redoubled their efforts to grok quantum theory, a growing number have come to admire Einstein's position. "This guy saw more deeply and more quickly into the central issues of quantum mechanics than many give him credit for," says Christopher Fuchs of Bell Labs. Some even agree with Einstein that the quantum must eventually yield to a more fundamental theory. "We shouldn't just assume quantum mechanics is going to make it through unaltered," says Raphael Bousso of the University of California at Berkeley.

Those are strong words, because quantum mechanics is the most successful theoretical framework in the history of science. It has superseded all the classical theories that preceded it, except for general relativity, and most physicists think its total victory is just a matter of time. After all, relativity is riddled with holes--black holes. It predicts that stars can collapse to infinitesimal points but fails to explain what happens then. Clearly, the theory is incomplete. A natural way to overcome its limitations would be to subsume it in a quantum theory of gravity, such as string theory.

Still, something is rotten in the state of quantumland, too. As Einstein was among the first to realize, quantum mechanics, too, is incomplete. It offers no reason for why individual quantum events happen, provides no way to get at an object's intrinsic properties and has no compelling conceptual foundation. Moreover, quantum theory turns the clock back to a pre-Einsteinian conception of space and time. It says, for example, that an eight-liter bucket can hold eight times as much as a one-liter bucket. That is true for everyday life, but relativity cautions that an eight-liter bucket can ultimately hold only four times as much--that is, the true capacity of buckets goes up in proportion to their surface area rather than their volume. This restriction is known as the holographic limit. When the contents of the buckets are dense enough, exceeding the limit triggers a collapse to a black hole. Black holes may thus signal the breakdown not only of relativity but also of quantum theory (not to mention buckets).

The obvious response to an incomplete theory is to try to complete it. Since the 1920s, several researchers have proposed rounding out quantum mechanics with 'hidden variables.'
The idea is that quantum mechanics actually derives from classical mechanics rather than the other way around. Particles have definite positions and velocities and obey Newton's laws (or their relativistic extension). They appear to behave in funky quantum ways simply because we don't, or can't, see this underlying order. 'In these models, the randomness of quantum mechanics is like a coin toss,' says Carsten van de Bruck of the University of Sheffield in England. 'It looks random, but it is not really random. You could write down a deterministic equation.'
[...]
Musser continues on to explain 'hidden variables' and other classical models. He then concludes with the following:

"All that said, most physicists still regard hidden variables as a long shot. Quantum mechanics is such a rain forest of a theory, filled with indescribably weird animals and endlessly explorable backwaters, that seeking to reduce it to classical physics seems like trying to grow the Amazon from a rock garden. Instead of presuming to reconstruct the theory from scratch, why not take it apart and find out what makes it tick. That is the approach of Fuchs and others in the mainstream of studying the foundations of quantum mechanics.

They have discovered that much of the theory is subjective: it does not describe the objective properties of a physical system but rather the state of knowledge of the observer who probes it. Einstein reached much the same conclusion when he critiqued the concept of quantum entanglement--the "spooky" connection between two far-flung particles. What looks like a physical connection is actually an intertwining of the observer's knowledge about the particles. After all, if there really were a connection, engineers should be able to use it to send faster than light signals, and they can't. Similarly, physicists had long assumed that measuring a quantum system causes it to "collapse" from a range of possibilities into a single actuality. Fuchs argues that it is just our uncertainty about the system that collapses.

The trick is to strip away the subjective aspects of the theory to expose the objective reality. Uncertainty about a quantum system is very different from uncertainty about a classical one, and this difference is a clue to what is really going on. Consider Schroedinger's famous cat. Classically, the cat is either alive or dead; uncertainty means that you do not know until you look. Quantum-mechanically, the cat is neither alive nor dead; when you look, you force it to be one or the other, with a 50-50 chance. That struck Einstein as arbitrary. Hidden variables would eliminate that arbitrariness.

Or would they? The classical universe is no less arbitrary than the quantum one. The difference is where the arbitrariness comes in. In classical physics, it goes back to the dawn of time; once the universe was created, it played itself out as a set piece. In quantum mechanics, the universe makes things up as it goes along, partly through the intervention of observers. Fuchs calls this idea the 'sexual interpretation of quantum mechanics.' He has written: 'There is no one way the world is because the world is still in creation, still being hammered out.' The same thing can clearly be said of our understanding of quantum reality."


Christopher A. Fuchs Webpage ==>
http://netlib.bell-labs.com/who/cafuchs/

"Physics Today" Article "Quantum Theory Needs No 'Interpretation'"==>
http://chaos.swarthmore.edu/courses/phys134/papers/Fuchs2000a.pdf

Comments on "Physics Today" Article "Quantum Theory Needs No 'Interpretation'"==>
http://www.aip.org/pt/vol-53/iss-9/p14.html

"I've said it before, I'll say it again:
Can a dog collapse a state vector?
Dogs don't use state vectors.
I myself didn't collapse a state
vector until I was 20 years old."
- Christopher A. Fuchs

All the best
John B.
 
Last edited by a moderator:
  • #17
DrChinese said:
Caroline,

A. Please tell me what probability values you predict for correlations at 22.5, 45, and 67.5 degrees, where 0 = perfectly correlated. I estimate that you give these values (which I label X, Y and Z), per your figure 14 of The Chaotic Ball, as:

X = .7500
Y = .5000
Z = .2500

respectively, is that correct?

The point of my Chaotic Ball model is to illustrate the principle behind the detection loophole -- to demonstrate how easy it is to produce local realist models that violate the CHSH test when the statistic is estimated in the accepted way. None of the numerical predictions have any value, since nobody seriously thinks that any real system would behave in that way, with sharp boundaries between regions of hidden variable space where all particles are detected and others where none are.

The model shows the difference between the two ways of estimating that test statistic. If we estimate each of the terms by expressions of the form:
(1) (N++ + N-- - N+- - N-+)/ (N++ + N-- + N+- + N-+)​
we may quite likely violate the CHSH inequality. If, however, we estimate them by expressions of the form:
(2) (N++ + N-- - N+- - N-+)/ N​
where N is the total number of emitted pairs, we cannot ever violate it.
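The difference between the two estimators is easy to see numerically. Here is a minimal Python sketch (the counts are hypothetical, chosen only to illustrate the principle, not drawn from any real experiment) comparing a CHSH value built from estimator (1), normalised over detected coincidences only, with the same value built from estimator (2), normalised over all emitted pairs:

```python
def e_detected(npp, nmm, npm, nmp):
    # Estimator (1): normalise by the detected coincidences only.
    return (npp + nmm - npm - nmp) / (npp + nmm + npm + nmp)

def e_emitted(npp, nmm, npm, nmp, n_emitted):
    # Estimator (2): normalise by the total number of emitted pairs.
    return (npp + nmm - npm - nmp) / n_emitted

# Hypothetical counts (N++, N--, N+-, N-+) per setting pair:
# 1000 pairs emitted per setting, but only 500 detected in coincidence,
# with strong correlation among the detected subsample.
settings = {
    ("a", "b"):   (213, 213, 37, 37),   # E_detected = +0.704
    ("a", "bp"):  (37, 37, 213, 213),   # E_detected = -0.704
    ("ap", "b"):  (213, 213, 37, 37),
    ("ap", "bp"): (213, 213, 37, 37),
}

# CHSH combination S = E(a,b) - E(a,b') + E(a',b) + E(a',b')
s1 = sum((-1 if k == ("a", "bp") else 1) * e_detected(*v)
         for k, v in settings.items())
s2 = sum((-1 if k == ("a", "bp") else 1) * e_emitted(*v, 1000)
         for k, v in settings.items())

print(s1)  # ≈ 2.816: exceeds the bound of 2 when only detected pairs count
print(s2)  # ≈ 1.408: no violation when every emitted pair counts
```

The same raw counts "violate" the CHSH bound under normalisation (1) but not under normalisation (2); the inflation factor is just N divided by the number of detected coincidences.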

I wonder, incidentally, if you have looked at the following paper?

R. García-Patrón Sánchez, J. Fiurášek, N. J. Cerf, J. Wenger, R. Tualle-Brouri, and Ph. Grangier, "Proposal for a Loophole-Free Bell Test Using Homodyne Detection", Phys. Rev. Lett. 93, 130409 (2004)
http://arxiv.org/abs/quant-ph/0403191

In this experiment they propose to use the "event-ready detectors" that Bell himself recommended as leading to a valid version of his test, thus effectively using the second of the above two methods. The proposal looks quite feasible and I can see no potential loopholes. Unfortunately, though, I think the theory they put forward to test for the presence of "non-classical" light is seriously flawed. The experiment is doomed to be a flop, producing no Bell test violations but, equally, no sign of non-classical light. Once they have realized that they have only got classical light, I expect the quantum theorists and local realists to agree that no Bell test violation should be predicted.

DrChinese said:
B. Please list all loopholes that apply in the measurement process described above a la Aspect, and the values of their error at the above angles.
I don't know what you mean here. It is not the purpose of the model to provide error estimates, any more than it is to give numerical predictions for any real experiment.

DrChinese said:
C. Please describe what currently hidden variables account for the differences between your figures 14-17 of the Chaotic Ball and how a physical experiment a la Aspect might confirm or deny the existence of such currently unknown variables - I believe you refer to them as "bands" and "imperfections".

Now I'm definitely lost, since my figure numbers end at number 15! However, I shall try and answer what I think is your question. If we're looking at real experiments such as Aspect's, we have a slightly different basic model, one in which the 0/1 probabilities are replaced by the cos^2 curves of Malus' Law (see Appendix C of http://arXiv.org/abs/quant-ph/9903066). The equivalent of my Chaotic Ball "missing bands" is broad troughs in the functions that replace Malus' Law cos^2 curves. I expect these broad troughs to be found when detector efficiencies are very low, or, alternatively, when beam intensities are very low.

One reason for mentioning the Sanchez et al "loophole-free" experiment is that this would, I think, present a very good opportunity to check out the local realist ideas. It would be easy to take different measurements -- use the raw detector output voltages instead of the proposed "digitised voltage differences" -- and explore what happens with different efficiencies etc..

DrChinese said:
D. Please explain why the same Loopholes and Chaotic Ball ball you describe above (B. and C.) do not apply to other experimental tests of QM, if indeed that is the case.

If you look at my expressions (1) and (2) above, you will see that they involve counts such as N++ and N+-. They assume that both '+' and '-' events are counted. Not all experiments do this. The test that is designed for use when only '+' results are counted is the CH74 test, which does not have the same risk of bias. [Some experiments in which only '+' events are counted nevertheless, by doing extra experimental runs, manage to use the CHSH test. This is most unfortunate!]

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #18
Bell's Theorem demonstrates that the following 3 things cannot all be true:

i) The experimental predictions of quantum mechanics (QM) are correct in all particulars
ii) Hidden variables exist (particle attributes really exist independently of observation)
iii) Locality holds (a measurement at one place does not affect a measurement result at another)
A Bell-type inequality of the form

E(2θ) ≤ 2E(θ)

(where E(θ) is the probability of anti-correlation for an angle θ between the axes of the two polarizers) can be derived on the basis of the following five assumptions:

1) probability E of anti-correlation depends only on relative angle θ between polarizers' axes ;

2) E(0) = 0 ;

3) free will of experimenters ;

4) contrafactual definiteness of measurement results ;

5) locality .


But Quantum Mechanics predicts

6) E(θ) = sin²θ ,

which is not consistent with the said inequality.
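The inconsistency is quick to check numerically. A minimal Python sketch (θ = 22.5° is just a convenient choice of angle) pits prediction (6) against the inequality:

```python
import math

def E(theta_deg):
    # QM prediction (6): anti-correlation probability E(θ) = sin²θ
    return math.sin(math.radians(theta_deg)) ** 2

theta = 22.5
lhs = E(2 * theta)   # E(45°) = 0.5
rhs = 2 * E(theta)   # 2·sin²(22.5°) ≈ 0.293

# The Bell-type inequality demands lhs ≤ rhs, but QM gives lhs > rhs.
print(lhs > rhs)  # True: QM violates E(2θ) ≤ 2E(θ) at this angle
```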


Which of 1-6 do people wish to negate? (... Or would you rather say I did not correctly identify the assumptions?)
 
  • #19
Caroline Thompson said:
1. The point of my Chaotic Ball model is to illustrate the principle behind the detection loophole -- to demonstrate how easy it is to produce local realist models that violate the CHSH test when the statistic is estimated in the accepted way. None of the numerical predictions have any value, since nobody seriously thinks that any real system would behave in that way, with sharp boundaries between regions of hidden variable space where all particles are detected and others where none are.

...

I don't know what you mean here. It is not the purpose of the model to provide error estimates, any more than it is to give numerical predictions for any real experiment.

2. Now I'm definitely lost, since my figure numbers end at number 15!
1. I think that is my point. The Chaotic Ball is 100% speculation. So what if LR theories can disguise themselves experimentally when testing conditions are not perfect? That does not change the fact that LR is not compatible with the predictions of QM, and vice versa.

There has never been a statistically significant variance in the measured results a la Aspect - they all agree with QM to the penny within N standard deviations where N is increasing. Yet your Chaotic Ball theory - by your own admission above - yields an entire range of results both higher and lower than the QM value AND should vary in observed value according to the hypothetical variable(s) that affect it (the ones that have yet to be discovered). Yet there is NO VARIATION outside the QM observed range and your model makes absolutely no provision for this.

In other words:

.7500 = LR maximum for correlation at 22.5 degrees
.8536 = QM predicted value for 22.5 degrees, same as observed
.7501, .7502, .7503, ... .9999 = set of possible observed values per Chaotic Ball which violate the CHSH tests.

If you expect to be taken seriously as a scientist, you must acknowledge this serious deficiency and correct it. Otherwise, you could use similar logic to attack every experiment ever run, without exception. What would be the utility (a word you obviously despise) of that? After all, let's face it: the Chaotic Ball is nothing more than an artifact of your imagination; even you don't pretend it is "real" (and you are the realist).

2. Sorry, I meant figures 11-14.
 
  • #20
JohnBarchak said:
The Sept 04 issue of Scientific American is a special issue titled "Beyond Einstein". One of the articles is "Was Einstein Right?" which deals with that question in relation to quantum mechanics. The following is from that article:

"Was Einstein Right? by George Musser

Unlike nearly all his contemporaries, Albert Einstein thought quantum mechanics would give way to a classical theory. Some researchers nowadays are inclined to agree [...]

This must be one of the reasons I gave up my subscription to Sci. Am. :devil:
More and more they have these watered-down smoky-bar discussions, which make those who know what it's about smile, and confuse those who don't.
"Will quantum theory remain with us forever?" No, of course not. I hope that physics, 2000 years from now, will have evolved! But does that mean that Einstein's viewpoint was right? No! Did he point out the hard parts in QM? Yes. Did he solve them? No.

But all this is blahblah in the air.
 
  • #21
selfAdjoint said:
Patrick, as you say this gets you into "entangled cat" paradoxes on the macroscopic level, and I am not sure decoherence can get you out. Here's a question, say we have two entangled states that are each 1 kilometer long, can they (without observation or projection) become entangled and make a state 2 kilometers long?

I guess what I am asking is, is entanglement transitive at the quantum state level.

I'm afraid I don't understand a word of what you write here, Dick ??

Maybe this has nothing to do with your remark, but the way I see it, in the framework of decoherence, is the following:

Normally, when states interact with the environment, they get so complicately "entangled" with the environment, that there is no way to observe interferences between components of robust states, because they are entangled with essentially orthogonal "environment states".
However, I see EPR experiments as an exception to this: because the entanglements with the environments of the respective measurement systems Bob and Alice compose exactly as the original entangled pair (A+B- - A-B+), you DO get quantum interference of the entire "environment" partial states of Alice and Bob when you bring them together to produce the correlation results!
EPR "result carriers" (Bob and Alice) are one of the few macroscopic systems, entangled with their environment, which still have exact phase relationships between them. This is the result of the linearity of QM and the original entanglement of the photons. It is one of the few occasions where you can observe superpositions of people interfere :smile:
Should I go and see a doctor now ? :biggrin:
 
  • #22
DrChinese said:
1. I think that is my point. The Chaotic Ball is 100% speculation.
It is 100% illustration of a point of principle!

DrChinese said:
So what if LR theories can disguise themselves experimentally when testing conditions are not perfect? That does not change the fact that LR is not compatible with the predictions of QM, and vice versa.
I don't dispute this. It was proved by EPR in 1935 and confirmed by Bell in 1964.

DrChinese said:
There has never been a statistically significant variance in the measured results a la Aspect - they all agree with QM to the penny within N standard deviations where N is increasing. Yet your Chaotic Ball theory - by your own admission above - yields an entire range of results both higher and lower than the QM value AND should vary in observed value according to the hypothetical variable(s) that affect it (the ones that have yet to be discovered). Yet there is NO VARIATION outside the QM observed range and your model makes absolutely no provision for this.

Hmmm ... but the community has continued to ignore my suggestions as to how the experiments could be modified so as to settle the matter once and for all! Why is this? Why don't they simply conduct careful series of experiments, each one identical except for changes in one parameter -- the efficiency of the detectors, for instance? Are they perchance frightened of what they might find?

I have frequently heard it suggested that if proving QM wrong were that easy it would have been done long ago and the experimenter awarded a Nobel prize. This does not seem to be the way human nature works! I've recently been reading Aczel's book, "Entanglement" (Four Walls Eight Windows, New York, 2001), which, though totally wrong in its statements about the experimental proofs, is very interesting regarding the biographical information and social interactions among the people concerned. He tells how Feynman thought Clauser mad for attempting Bell tests, and my notes include:
P186: When Aspect went to CERN to discuss his proposed experiments, Bell asked “Are you tenured?”. When Aspect said he was only a graduate student Bell answered: “You must be a very courageous graduate student.”​
Who is prepared to supervise a student or fund research when the declared aim is to give local realism half a chance of proving itself?

DrChinese said:
If you expect to be taken seriously as a scientist, you must acknowledge this serious deficiency and correct it.
I make no apology for not attempting the impossible! All actual experiments are different and need different assumptions. The logic of the local realist model is the same in all (though sometimes needs adaptation if, for example, there are factors that might influence the detection times and hence the probabilities of a coincidence being recognised as such). There is no point in making numerical predictions since this can (as in QM!) only be done for the ideal case.

DrChinese said:
Otherwise, you could use similar logic to attack all experiments ever run previously and without exception. What would be the utility (a word you obviously despise) to that? After all, and let's face it, the Chaotic Ball is nothing more than an artifact of your imagination, even you don't pretend it is "real" (and you are the realist).
The Chaotic Ball model, though invented independently, can be regarded as a simplified and user-friendly version of Philip Pearle's model of 1970. The latter is accepted as valid. If it were more widely understood I don't think we'd have seen the CHSH inequality or similar coming into use. The community would have stuck with the CH74 one. [They don't always call it this, but if you look at what they actually test you'll find that they use either the CH74 test or, if they can assume rotational invariance, the Freedman test, which can be derived from it.]

Hmmm ... I hope you will be looking at the talk page for http://en.wikipedia.org/wiki/Clauser_and_Horne's_1974_Bell_test . I wonder, incidentally, if you've read Clauser and Shimony's 1978 report. That 1974 paper was important!

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #23
Caroline Thompson said:
1. It is 100% illustration of a point of principle!

2. Hmmm ... but the community has continued to ignore my suggestions as to how the experiments could be modified so as to settle the matter once and for all! Why is this?

3. Why don't they simply conduct careful series of experiments, each one identical except for changes in one parameter -- the efficiency of the detectors, for instance? Are they perchance frightened of what they might find?

4. There is no point in making numerical predictions since this can (as in QM!) only be done for the ideal case.

5. Hmmm ... I hope you will be looking at the talk page for http://en.wikipedia.org/wiki/Clauser_and_Horne's_1974_Bell_test . I wonder, incidentally, if you've read Clauser and Shimony's 1978 report. That 1974 paper was important!

Caroline
http://freespace.virgin.net/ch.thompson1/

1. Long on principle, I guess, but short on substance. Where's the beef? Speculation is a dime a dozen.

2. The scientific community ignores your suggestions because there are plenty of more useful pursuits out there.

3. Why don't you do the experiment yourself? I think your answer will explain even better why others won't bother.

4. Now this is one of the strangest comments I have ever heard from a scientist. I.e. "Can't have predictions where there is imperfection." The pattern below summarizes things for the rest of us (all values for correlations at 22.5 degrees):

.7500 : Highest LR predicted value compatible with Bell
.7501 : A value not observed experimentally, compatible with the Chaotic Ball model
.7502 : Another value not observed experimentally, compatible with the Chaotic Ball model
.7503 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
...
.8001 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
.8002 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
.8003 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
...

.8536 : Experimentally observed value, as predicted by QM, also compatible with the Chaotic Ball model

...
.9001 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
.9002 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
.9003 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
etc...

QM is useful, Chaotic Ball is useless, as can be seen above.
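For readers wondering where these two key figures come from: the .8536 is cos² of the 22.5° angle, and (on the usual reading of this kind of table) the .7500 Bell-local bound is the straight-line value 1 − θ/90° that a local realist correlation cannot exceed at this angle. A two-line Python check (the linear-bound formula is my gloss on the table, not DrChinese's own derivation):

```python
import math

theta = 22.5  # degrees

qm = math.cos(math.radians(theta)) ** 2  # ≈ 0.8536, the QM/observed value
lr_max = 1 - theta / 90                  # = 0.7500, linear local-realist bound

print(round(qm, 4), lr_max)  # 0.8536 0.75
```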

5. I have left a comment there.

For those who are not already aware, I am in the process of removing much of Caroline's contributions to Wikipedia on the subject of Bell's Theorem and related articles. If you want to know why, you can go there and see on the talk page, at the bottom. If anyone wishes to volunteer time to assist me in the process, it would be appreciated.
 
  • #24
DrChinese said:
1. Long on principle, I guess, but short on substance. Where's the beef? Speculation is a dime a dozen.
As I've explained recently, my Chaotic Ball model is essentially identical to Philip Pearle's of 1970, and both are logical deductions from one "speculation" that is, in my view, undeniably reasonable. The speculation consists in the idea that if you have a piece of apparatus such as a Stern-Gerlach one and some of your particles fail to reach a detector because they went straight on instead of being deflected, they did so because their spin did not have a sufficiently large component in the direction determined by the magnet orientations.

I'm not entirely happy with the above, since it talks of S-G magnets and, as you know, no actual Bell test has ever used these. All the ones that matter have used light.

Now, with light, we (a) have a different geometry and (b) run immediately into a higher level of controversy. Classical theory says that when you pass light through a polariser you get some output from the '+' channel, say, even if the polarisation direction of your light was almost at 90 deg to the '+' axis. It follows that classically you get some light output from both channels most of the time (for all input polarisations except exactly 0 and 90 deg). QM assumes that you only get one or the other, and some experiments have been done that seem to (partially) support this. I say "partially", though, because experiments also suggest this is an illusion: a well-known (?) one by Aspect's team (using the same source as in his Bell test experiments) showed that even the apparently zero output was in fact there, since it was capable of interfering with the other output when recombined.

[See Grangier, P, G Roger and A Aspect, “Experimental evidence for a photon anticorrelation effect on a beam splitter: a new light on single-photon interferences”, Europhysics Letters 1, 173-179 (1986)]

The long and the short of it is that QM disagrees rather seriously with classical theory here, since it says (though, in my view, the above experiment disproves this) that you never get both '+' and '-' outputs at once if you input a single photon to a 2-channel polariser. [Has this ever been tested within a Bell test experiment, I wonder?] QM is thus saying, effectively, that you only get a '+' output if your input was within 45 deg of the '+' axis, as against classical theory that says the output intensity is a continuous variable, of magnitude (under perfect conditions) given by a cosine^2 law.

My Chaotic Ball assumption here is that, though the output intensity is a continuous variable, it obeys (effectively, allowing also for the nonlinearity of the detector response) a law that is rather closer to the QM idea than to conventional classical theory. The output intensity (and hence probability of detection) is appreciable only when the input polarisation angle is close to one or other of the polariser axes. Taking an extreme case, we might find that only outputs corresponding to angles less than 30 deg are strong enough to detect. The Chaotic Ball logic would then apply unchanged, apart from trivial geometrical differences, with the "missing band" consisting of any angle between 30 deg and 60 deg.

In reality, the ball model needs rather more radical revision than this, since the probabilities of detection will not be quite zero anywhere and the '+' and '-' zones of hidden variable space will overlap. It's probably best to ignore the model and just look at the algebra.
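The kind of angle-dependent-detection model described above can be sketched numerically. This is not Caroline's or Pearle's actual model, just a toy version with every parameter invented for illustration: each pair carries a shared hidden polarisation angle, and a detection is registered only when that angle falls within `band_deg` of one of the polariser's axes.

```python
import random

def fold(delta_deg):
    # Angular distance of delta_deg from 0, folded into [0, 90]
    # (polarisation angles are defined modulo 180 degrees).
    return abs((delta_deg + 90.0) % 180.0 - 90.0)

def outcome(lam, setting, band_deg):
    # Detection outcome for hidden polarisation lam at a polariser
    # oriented at `setting` degrees.
    d = fold(lam - setting)
    if d <= band_deg:            # close to the '+' axis: detected as '+'
        return +1
    if d >= 90.0 - band_deg:     # close to the '-' axis: detected as '-'
        return -1
    return None                  # in the "missing band": no detection

def simulate(n=200_000, band_deg=30.0, angle_a=0.0, angle_b=22.5, seed=0):
    rng = random.Random(seed)
    matches = coincidences = 0
    for _ in range(n):
        lam = rng.uniform(0.0, 180.0)   # shared local hidden variable
        a = outcome(lam, angle_a, band_deg)
        b = outcome(lam, angle_b, band_deg)
        if a is not None and b is not None:
            coincidences += 1
            matches += (a == b)
    return matches / coincidences, coincidences / n

normalised, pair_fraction = simulate()
print(f"match rate among coincidences: {normalised:.3f}")
print(f"fraction of pairs detected on both sides: {pair_fraction:.3f}")
```

With these invented parameters every coincident pair agrees, so the normalised match rate exceeds the 0.75 Bell bound even though the model is purely local; the price is that well under half the pairs produce a double detection. That trade-off is exactly the detection-efficiency issue being argued in this thread.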

DrChinese said:
3. Why don't you do the experiment yourself? I think your answer will explain even better why others won't bother.
I can't do experiments myself! I have never been an experimental physicist, for one thing, and am retired now for another. I should be only too happy to be involved in the experiments, though.

DrChinese said:
4. Now this is one of the strangest comments I have ever heard from a scientist. I.e. "Can't have predictions where there is imperfection." The pattern below summarizes things for the rest of us (all values for correlations at 22.5 degrees):

.7500 : Highest LR predicted value compatible with Bell
.7501 : A value not observed experimentally, compatible with the Chaotic Ball
...
...
.8002 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
.8003 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
...
.8536 : Experimentally observed value, as predicted by QM, also compatible with the Chaotic Ball model
...
.9003 : Yet another value not observed experimentally, compatible with the Chaotic Ball model
etc...

QM is useful, Chaotic Ball is useless, as can be seen above.

Not one of the above values has actually been observed! The figures you quote are after "normalisation". What we want is actual relative frequencies -- ratios of observed counts to pairs of "photons" produced. It is only if you have 100% efficiency that the normalised and actual probabilities coincide.

QM is misleading; the Chaotic Ball at least tells you what the real logic is!
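The distinction being drawn here (normalised versus absolute rates) is simple arithmetic. A hedged sketch, with the efficiency figure purely invented and detections on the two sides assumed independent for illustration:

```python
import math

pairs_emitted = 1_000_000
eta = 0.2  # single-detector efficiency; an invented figure for illustration

# If detections on the two sides were independent, both photons of a
# pair would be caught with probability eta**2.
coincidences = pairs_emitted * eta**2

qm_match = math.cos(math.radians(22.5)) ** 2  # QM match probability at 22.5 deg
matched = coincidences * qm_match

normalised_rate = matched / coincidences  # what is usually reported
absolute_rate = matched / pairs_emitted   # ratio of matched counts to pairs produced

print(f"normalised: {normalised_rate:.4f}")  # equals qm_match
print(f"absolute:   {absolute_rate:.4f}")    # smaller by the factor eta**2
```

The two rates coincide only when eta = 1, which is the point being made: the quoted .8536 is a normalised figure, not a raw relative frequency.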

DrChinese said:
5. I have left a comment there.

For those who are not already aware, I am in the process of removing much of Caroline's contributions to Wikipedia on the subject of Bell's Theorem and related articles. If you want to know why, you can go there and see on the talk page, at the bottom. If anyone wishes to volunteer time to assist me in the process, it would be appreciated.
See the discussion pages associated with:
http://en.wikipedia.org/wiki/Clauser_and_Horne's_1974_Bell_test
http://en.wikipedia.org/wiki/Bell's_Theorem
and now
http://en.wikipedia.org/wiki/Interpretation_of_quantum_mechanics.

You say you are "removing" some of my contributions, but Wikipedia is run democratically. As I understand it, the decision is not entirely down to you. If you remove things and I disagree, I am free to replace them and, of course, will do so unless the matter has been properly discussed and it really is shown that the majority agree with you. I am well aware of the fact that the powers that be can, if they see fit, ban me, so it is not in my interests to make too much of a nuisance of myself.

Caroline
http://freespace.virgin.net/ch.thompson1/
 
  • #25
DrChinese
I've just read your post on 'Bell's theorem and negative probabilities'. In classical optics, a cos^2 term will arise only if the detectors are in series. For QM, can you tell me how the cos^2 is arrived at (maybe I'm asking a stupid question) -- do you average over all possible polarization directions?
Coming back to your set-up and classical optics, what exactly are you saying: that using classical optics the probabilities for the 8 possibilities don't add to one (unless a couple of them add to a -ve probability)?
Using QM, (3) and (6) are ruled out anyway -- so do the rest add to one?
 

1. What is Local Realism After Bell?

Local Realism After Bell is a scientific concept that explores the relationship between quantum mechanics and local realism. It is based on the theorem proved by physicist John Stewart Bell in 1964, which showed that local realism cannot fully reproduce the predicted behavior of quantum particles.

2. How does Bell's inequality relate to Local Realism After Bell?

Bell's inequality is a mathematical expression that sets limits on the correlations between different measurements of quantum particles. Local Realism After Bell examines these correlations and how they challenge the principles of local realism.

3. What is the difference between local realism and quantum mechanics?

Local realism is the idea that physical properties of objects exist independently of observation and can be predicted based on their local interactions. Quantum mechanics, on the other hand, describes the behavior of particles at a subatomic level, where properties are not always definite and can be influenced by observation.

4. How does Local Realism After Bell affect our understanding of the universe?

Local Realism After Bell challenges our traditional understanding of the universe and the laws of physics that govern it. It suggests that there may be a deeper underlying reality beyond our current understanding and that the behavior of quantum particles may not be fully explained by local realism.

5. What are the implications of Local Realism After Bell for future research?

Local Realism After Bell opens up new avenues for research in the field of quantum mechanics and the fundamental laws of the universe. It challenges scientists to further explore the relationship between local realism and quantum mechanics and how it may impact our understanding of the physical world.
