Questioning assumptions behind Bell's and related theorems

  • Thread starter: bohm2
  • Tags: Assumptions
Summary
The discussion centers on questioning the mathematical assumptions underlying Bell's theorem, particularly the requirement that all random variables be defined on a single probability space. Authors propose a "chameleon model" to argue that Bell's inequality fails even before considering potential loopholes, emphasizing that physicists often obscure this assumption in their notation. They assert that without the assumption of a single probability space, proving Bell's inequality becomes impossible, and they challenge the contextuality basis of the Kochen-Specker theorem as well. The conversation also touches on the implications of rejecting foundational assumptions like locality and realism, which are critical to the validity of Bell's results. Overall, the debate highlights the complexity and contentious nature of interpretations surrounding Bell's theorem and its implications for local realism.
  • #31
zonde said:
How could you possibly find out the number of particles which have interacted with the apparatus?

That's an important question. In a simulation you can; in real-world experiments you can't. That is why they say in section 5 of this paper (http://ics.org.ru/doc?pdf=855&dir=e, page 113) that:

Moreover, and this is a possible difference between the classical and the quantum
case, the very notion of "total number of pairs emitted by the source" is a totally
platonic and in principle unobservable quantity in the quantum case (under the
assumption of a neat space separation between the two apparata).
In some, but not all, classical situations this number might be observable, but in
a quantum context, where you cannot follow the trajectory of single particles without
altering it, this number is quite unobservable.​


Are you sure about that last statement - "All the particles emitted were detected."? Hmm, maybe I misunderstood Accardi's model. My impression was that some particles go astray. But if you say that a coincidence does not happen because the particle is not yet detected, that would be a different model. Can you find a quote that illustrates your point?
From the same paper, page 106 they say:
In experiments with photons the term "simultaneous" has to be meant in the sense of a very narrow time window. But our experiment can also reproduce the ideal situation in which all apparata involved are 100% efficient. Exactly as in the experiment for photons the statistics is conditioned on coincidences (these topics are further discussed in Sec. 5). We do not know the
mechanism of coincidences for individual photons because quantum mechanics does not predict the space-time trajectories of microscopic particles. In our model this mechanism is:
(i) deterministic, i.e. uniquely pre-determined by the hidden parameters;
(ii) entirely local.​
Also look at point (2) on page 114
 
  • #32
DrChinese said:
You are welcome to your opinion. Mine is that your comments are overly speculative and go against the mainstream view. This is not the place to discuss non-standard science, that should be done elsewhere.

Clarification on mainstream view please: Is it the mainstream view (assumption) that particles once and for all have a determined spin, or that the balls have a determined color before measurement? (Reference: page 4.) And is the following not mainstream and speculative? "Measurement of S is the result of a dynamical process of interactions of a system and a measurement device." Also reference page 8 (17).
 
  • #33
morrobay said:
Clarification on mainstream view please

The working definition given by the rules of these forums is rather clear:
"Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature."

Peer-reviewed scientific literature is defined as follows:
"Usually, we accept references from journals that are listed here:

http://ip-science.thomsonreuters.com/mjl/

Use the search feature to search for journals by words in their titles. If you have problems with the search feature, you can view the entire list here:

http://ip-science.thomsonreuters.com...cgi?PC=MASTER

In recent years, there has been an increasing number of "fringe" and Internet-only journals that appear to have lax reviewing standards. We do not generally accept references from such journals. Note that some of these fringe journals are listed in Thomson Reuters. Just because a journal is listed in Thomson Reuters does not mean it is acceptable."
 
  • #34
morrobay said:
Is it the mainstream view (assumption) that particles once and for all have a determined spin, or that the balls have a determined color before measurement?

It is mainstream that either entangled particle spin is not predetermined, or there are non-local factors affecting it. The assumptions for Bell's Theorem are generally taken to be EPR locality and EPR realism. Some may say there are other assumptions as well, but generally those (for example free will, no conspiracy, etc) have nothing whatsoever to do with Bell and apply to ALL scientific setups equally.

It is mainstream that Bell's Theorem has survived all challenges. Bell is so widely accepted that any new QM interpretation must devote a section to explaining how it reconciles with Bell. That is, if it is to be taken seriously. See for example this new interpretation published this week:

http://arxiv.org/abs/1312.3427

The Emergent Copenhagen Interpretation of Quantum Mechanics
Timothy J. Hollowood
(Submitted on 12 Dec 2013)

We introduce a new and conceptually simple interpretation of quantum mechanics based on reduced density matrices of sub-systems from which the standard Copenhagen interpretation emerges as an effective description of macroscopically large systems. Wave function collapse is seen to be a useful but fundamentally unnecessary piece of prudent book keeping which is only valid for macro-systems. The new interpretation lies in a class of modal interpretations in that it applies to quantum systems that interact with a much larger environment. However, we show that it does not suffer from the problems that have plagued similar modal interpretations like macroscopic superpositions and rapid flipping between macroscopically distinct states. We describe how the interpretation fits neatly together with fully quantum formulations of statistical mechanics and that a measurement process can be viewed as a process of ergodicity breaking analogous to a phase transition. The key feature of the new interpretation is that joint probabilities for the ergodic subsets of states of disjoint macro-systems only arise as emergent quantities. Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel. The final conclusion is that the Copenhagen interpretation gives a completely satisfactory phenomenology of macro-systems interacting with micro-systems.
 
  • #35
billschnieder said:
Why should the apparatus be expected to produce an outcome at a given instant in time, when there is no particle inside?

Because it's a scenario that Bell's analysis can justifiably be applied to. There's no point in doing an experiment that hasn't been shown to be able to detect a difference between locality and nonlocality, particularly when we have known counterexamples.

In a properly performed loophole-free Bell-type test, the detection time windows (when the two parties are going to perform measurements and record outcomes) should normally be decided in advance, or at least before the choices of measurements are made. The usual way to handle "noise" events (e.g. non-detection when a detection was expected) is simply to map them to particular outputs (for instance, the experimenters adopt the convention of recording all non-detections as '+1' events). That's the type of scenario that Bell's theorem readily applies to; any application to something more sophisticated than that would need to be supplemented with a careful justification of how and why it can be done.
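
For concreteness, here is a minimal simulation sketch of that convention (my own toy illustration with made-up numbers, not any actual experiment): every pre-agreed time window yields an outcome, non-detections are recorded as +1, and the CHSH value is computed over all windows.

```python
import random

def chsh_with_fixed_windows(n_trials=100_000, efficiency=0.8):
    """Toy CHSH estimate where every pre-agreed time window yields an
    outcome; a non-detection is simply recorded as +1, so no windows
    are discarded."""
    settings = [(0, 0), (0, 1), (1, 0), (1, 1)]   # Alice/Bob setting pairs
    sums = {s: 0.0 for s in settings}
    counts = {s: 0 for s in settings}
    for _ in range(n_trials):
        s = random.choice(settings)
        # Placeholder local model: a real one would compute +/-1 from
        # shared hidden variables; here we just flip independent coins.
        a = random.choice([+1, -1])
        b = random.choice([+1, -1])
        # Independent detection failures, mapped to +1 by the convention above.
        if random.random() > efficiency:
            a = +1
        if random.random() > efficiency:
            b = +1
        sums[s] += a * b
        counts[s] += 1
    E = {s: sums[s] / counts[s] for s in settings}
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Any local model scored this way obeys |S| <= 2; this toy one gives S close to 0.
print(chsh_with_fixed_windows())
```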
 
  • #36
DrChinese said:
The Emergent Copenhagen Interpretation of Quantum Mechanics
[...]
Finally we give an account of the EPR-Bohm thought experiment and show that the interpretation implies the violation of the Bell inequality characteristic of quantum mechanics but in a way that is rather novel.

http://arxiv.org/abs/1312.3427 said:
The key point is that the interaction between A and 1 changes the ontic states of 1 and 1 + 2 but not 2.

Hi DrC, is this really correct? If we are talking perfect (anti)correlations, then there’s no choice for 2; after the interaction between A and 1, the state for 2 is determined, right...??
 
  • #37
DevilsAvocado said:
It might seem a little bit 'rude', but to me it looks like Prof. Accardi is building a 'classical card castle' around the theoretical chameleons – with no or very little substance – and then to prove the whole thing he exploits the detection loophole (which is embarrassingly easy to do for Mermin’s 'counterfactual argument' in Bell’s theorem).
His argument has little to do with the detection loophole. You might want to look at these threads and particularly the debate between DrChinese and billschnieder:

https://www.physicsforums.com/showthread.php?t=496839&page=6
https://www.physicsforums.com/showthread.php?t=499002

I tried reading the Hans De Raedt articles but I could not follow the math; nevertheless, the discussion/debate between DrChinese and billschnieder was very informative.
 
  • #38
bohm2 said:
His argument has little to do with the detection loophole.

I don’t agree. He spends several pages on the “Difference between coincidences and efficiency of the detectors”. Why? If you have a mathematical theory that is correct and in accordance with the true nature of the world – why on Earth would you mix in human shortcomings in the measuring apparatus??

Answer: He builds the whole thing on tuning the efficiency to fit his theory.

http://arxiv.org/abs/quant-ph/0112067 said:
Suppose that a detector is 100% efficient. Then, if a source emits 100 photons, all photons are detected in absence of polarizer. Suppose moreover that, when the polarizer is inserted, only 90 photons and not 100 are detected. Therefore, if as done in [18], the efficiency is calibrated with the ratio of the number of particles detected by the detector with polarizer and without polarizer, we should conclude that our polarizer is 90% efficient.
However, if the loss of these 10 photons is due to the chameleon effect, then by repeating many times the experiment (and postulating a situation of stationarity of the source) one should always detect 90 photons.
On the contrary, if the loss of photons is due to accidental causes, then the number of detected photons should fluctuate and an analysis of these fluctuations should, in principle, allow to distinguish between an 100% efficient detector in presence of the chameleon effect and an 100% efficient detector in presence of a 90% efficient polarizer.
In real physical situations the two effects are most likely combined and their distinction, although clear in principle, might be a very hard challenge both for theoreticians and experimentalists. However we are convinced that a satisfactory theory of measurement should take into account both these effects.
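
As an aside, the fluctuation test described in that passage is easy to picture with a toy simulation; the sketch below is my own illustration with arbitrary numbers, not Accardi's model. Deterministic "chameleon-style" loss gives the same count on every run, while accidental loss fluctuates binomially, and the spread is what tells them apart.

```python
import random
import statistics

N_EMITTED = 100   # photons per run (toy number)
N_RUNS = 1000

def chameleon_run():
    """Deterministic loss: the same 10 photons are 'lost' on every run."""
    return 90

def accidental_run(polarizer_eff=0.9):
    """Accidental loss: each photon independently passes with probability 0.9."""
    return sum(1 for _ in range(N_EMITTED) if random.random() < polarizer_eff)

chameleon_counts = [chameleon_run() for _ in range(N_RUNS)]
accidental_counts = [accidental_run() for _ in range(N_RUNS)]

# Both cases average ~90 detected photons, but only the accidental case
# fluctuates from run to run; the standard deviation is the distinguishing signal.
print(statistics.mean(chameleon_counts), statistics.pstdev(chameleon_counts))    # 90.0, 0.0
print(statistics.mean(accidental_counts), statistics.pstdev(accidental_counts))  # ~90, ~3
```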

I would say that if one is to construct something as peculiar as a “Theory of Measurement”, then one should not be too restricted in type of equipment... otherwise there’s quite a risk that someone will run you over with a new type of experiment that you just missed... :wink:

http://www.nature.com/nature/journal/v409/n6822/full/409791a0.html said:
Experimental violation of a Bell's inequality with efficient detection

Here we have measured correlations in the classical properties of massive entangled particles (9Be+ ions): these correlations violate a form of Bell's inequality. Our measured value of the appropriate Bell's ‘signal’ is 2.25 ± 0.03, whereas a value of 2 is the maximum allowed by local realistic theories of nature. In contrast to previous measurements with massive particles, this violation of Bell's inequality was obtained by use of a complete set of measurements. Moreover, the high detection efficiency of our apparatus eliminates the so-called ‘detection’ loophole.


bohm2 said:
You might want to look at these threads and particularly the debate between DrChinese and billschnieder:

OMG... I get the creeps just thinking about these “do-nuts” and “French doctors” performing cruel allergy tests on patients... :smile:

Okay, I’ll give it a (hopefully) last try...

It seems very popular among the “LR-gang” to come down hard on Bell for characterizing Local Realism inaccurately (maybe because it’s their last hope). The truth is that Einstein-causality (locality) of course comes from Einstein, and so does realism. In correspondence with Max Born (March 1948), he writes:

Albert Einstein said:
“That which really exists in B should …not depend on what kind of measurement is carried out in part of space A; it should also be independent of whether or not any measurement at all is carried out in space A. If one adheres to this program, one can hardly consider the quantum-theoretical description as a complete representation of the physically real. If one tries to do so in spite of this, one has to assume that the physically real in B suffers a sudden change as a result of a measurement in A. My instinct for physics bristles at this.”

I don’t know... (maybe I’m unfair), but I get a slight feeling that this 'confusion' (on PF) regarding definite properties somehow emanates from DrChinese’s “Bell's Theorem with Easy Math”... (truly sorry DrC if this is wrong!), and the mathematical abstraction he introduces to make things a little bit easier. We have all seen billschnieder go do-nuts over 3 vs 1 measured property, and some are fatally convinced that it’s a “magician's trick”, like trying to fit a circle onto a square... impossible from the beginning and hence unfounded as a theorem.

I think this unfortunate situation is due to the fact that DrChinese (and I’m writing this to assist, not oppose DrC) in his example uses only the combinations [AB] [BC] [AC] and never [AA] [BB] [CC]. The latter prove (without any discussion on the validity of unmeasured properties) that the LHV has to be prepared for perfect correlations, i.e. they have to agree – at the source – on what to do in case of perfect correlations. Here the only options are ++ or -- and there’s no statistical dependency in this case. One pair has to agree on this value beforehand; it cannot differ in any single measurement.

This means that the LHV must have a predefined property/value/function/chameleon, or whatever, for all 3 cases of [AA] [BB] [CC] since there is no way to know in advance what will happen at the measuring apparatus (unless you have a non-local function finding this out for you).

This shows, without doubt and with empirical verifications, that statements like this:

"The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs."

is catastrophically wrong. Perfect correlations in LHV demand three predetermined values, for three different settings, in a single measurement/experiment. Period.

If we take one step back to Einstein/EPR, things hopefully become even clearer. They used definite position and definite momentum as criteria instead of definite photon polarization. I guess everybody understands how hilarious things become if one simultaneously claims that local realism is indeed still alive, but that it’s terribly wrong to state that a particle in this world will have a definite position and a definite momentum at all times!

For God’s sake – where is the particle in this new weird interpretation of local (sur)realism!? :bugeye:

This is nothing but the extended version of One Flew Over the Cuckoo's Nest II (Director's cut). :smile:

Of course we can also translate the perfect correlations to billschnieder’s infamous “do-nuts” setup:

Bill and his twin brother Buffalo are the main characters in this experiment. Buffalo is stationed in L.A. and Bill in NYC. They hired a baker in Omaha NE to send them each three black boxes of goodies, every morning, with the LR-express.

The three black boxes contain “do-nuts”, “fruitcake” and “half-baked potatoes”, separately.

Buffalo & Bill eagerly want to demonstrate how easy it is to mimic the dreadful perfect correlations in QM, in a completely local and very realistic way. So they create a set of rules:

1) They can only open one box a day.
2) If current date is even, they shall eat what’s inside the box.
3) If current date is odd, they shall give what’s inside the box to their fat cat.
4) They must throw the unopened boxes in the garbage every night.
5) They are not allowed to communicate, in case of “black box confusion”.

This very interesting experiment goes on for a month and then Buffalo travels to NYC to compare the data. Amazingly enough the data matches perfect correlations in QM, if we only consider the days when Buffalo & Bill happened to get the same kind of goodies!

From this little fairytale we can draw the conclusion that if the baker in Omaha had put vacuum in the black boxes, or dough to be baked on site, this experiment would have failed completely.

I surely hope billschnieder realizes this as well...

And from this we can very easily see that if Buffalo & Bill want to go further and also mimic the rest of the statistics of QM, they run their heads into the solid concrete wall of elementary mathematics, which DrChinese has demonstrated so well in his example.

It’s mathematically impossible!
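
To make that concrete wall explicit, here is a minimal sketch (my own illustration in the spirit of DrChinese's easy-math page, not a copy of it). It enumerates every possible predetermined triple of ±1 outcomes for the settings A, B, C (which, per the [AA] [BB] [CC] argument above, the pair must carry) and shows that the match rate averaged over the pairs [AB], [BC], [AC] can never fall below 1/3, while QM at 120° separations predicts 1/4.

```python
from itertools import product

# The three setting pairs used in DrChinese-style easy-math examples.
pairs = [("A", "B"), ("B", "C"), ("A", "C")]

worst_match_rate = 1.0
for a, b, c in product([+1, -1], repeat=3):
    outcomes = {"A": a, "B": b, "C": c}
    # Both particles carry the same predetermined triple (forced by the
    # perfect [AA]/[BB]/[CC] correlations), so a setting pair "matches"
    # when the triple assigns both settings the same value.
    match_rate = sum(outcomes[x] == outcomes[y] for x, y in pairs) / 3
    worst_match_rate = min(worst_match_rate, match_rate)

print(worst_match_rate)   # 0.333... : no predetermined triple can average below 1/3
print(0.25)               # QM prediction, cos^2(120 deg), for the same setting pairs
```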

bohm2 said:
I tried reading the Hans De Raedt articles but I could not follow the math;

Neither can I, but I do understand some parts of his software and it does not convince me in any way.
 
  • #39
DevilsAvocado said:
This means that the LHV must have a predefined property/value/function/chameleon, or whatever, for all 3 cases of [AA] [BB] [CC] since there is no way to know in advance what will happen at the measuring apparatus (unless you have a non-local function finding this out for you).

This shows, without doubt and with empirical verifications, that statements like this:

"The mistake here is that Bell and followers insist from the start that the same element of reality occurs for the three different experiments with three different setting pairs."

is catastrophically wrong. Perfect correlations in LHV demand three predetermined values, for three different settings, in a single measurement/experiment. Period.
There are no dedicated tests of perfect correlations. It still remains an untested prediction of QM, with some experimental observations suggesting that it might be true - I am commenting on your claim about empirical verifications.

Even though we have practically 100% efficient photon detectors, they are used for attempts at falsification of "local realism" rather than for attempts at falsification of QM.
 
  • #40
DrChinese said:
It is mainstream that either entangled particle spin is not predetermined, or there are non-local factors affecting it.
I would challenge that "either" part of your statement.
How can a local measurement of particle spin that is not predetermined produce non-local correlations? There should be sources explaining this if it's the mainstream view.
 
  • #41
DrChinese said:
It is mainstream that either entangled particle spin is not predetermined, or there are non-local factors affecting it.

Can the interpretation of the above be that it is mainstream that entangled particle spins are not predetermined and non-local factors affect the second measurement? I.e., a measurement of an entangled particle at detector A that is in a superposition of spin up and spin down is made on the z axis. It is spin up. The second measurement is made at the space-like separated detector B on the same axis and is anti-correlated, spin down.
A non-local effect.
 
  • #42
morrobay said:
Can the interpretation of the above be that it is mainstream that entangled particle spins are not predetermined and non-local factors affect the second measurement? I.e., a measurement of an entangled particle at detector A that is in a superposition of spin up and spin down is made on the z axis. It is spin up. The second measurement is made at the space-like separated detector B on the same axis and is anti-correlated, spin down.
A non-local effect.

If the two measurements are space-like separated, then there is no "first" or "second" measurement - their ordering is different for observers moving at different relative speeds because of the relativity of simultaneity.
 
  • #43
Nugatory said:
If the two measurements are space-like separated, then there is no "first" or "second" measurement - their ordering is different for observers moving at different relative speeds because of the relativity of simultaneity.

Space-like meaning that the measurement at detector A, a distance d from detector B, is made at t1 and the measurement at detector B is made at t2 such that t2 - t1 < d/c.
 
  • #44
morrobay said:
Space-like meaning that the measurement at detector A, a distance d from detector B, is made at t1 and the measurement at detector B is made at t2 such that t2 - t1 < d/c.

If ##\Delta{t}<\frac{d}{c}## for any observer, then for some observers ##t_1 < t_2## while for other observers ##t_2 < t_1## - so which one is "first"?
 
  • #45
morrobay said:
Can the interpretation of the above be that it is mainstream that entangled particle spins are not predetermined and non-local factors affect the second measurement? I.e., a measurement of an entangled particle at detector A that is in a superposition of spin up and spin down is made on the z axis. It is spin up. The second measurement is made at the space-like separated detector B on the same axis and is anti-correlated, spin down.
A non-local effect.

Besides Nugatory’s correct objection, it will not work because A & B are only perfectly (anti)correlated on (anti)parallel settings (i.e. the old EPR picture). In all other cases there are statistical correlations ranging from 0.01 to 0.99. This is Bell's ingenious contribution to EPR that finally settled the Bohr–Einstein debates.

Bell’s theorem stipulates that QM violates at least one of these three assumptions:
  • Realism
  • Locality
  • Free will
 
  • #46
zonde said:
I would challenge that "either" part of your statement.
How can a local measurement of particle spin that is not predetermined produce non-local correlations? There should be sources explaining this if it's the mainstream view.

Entanglement and the shared wave function, described by Erwin Schrödinger in 1935:


http://www.tuhh.de/rzt/rzt/it/QM/cat.html said:
THE PRESENT SITUATION IN QUANTUM MECHANICS

The remarkable theory of measurement, the apparent jumping around of the psi-function, and finally the "antinomies of entanglement", all derive from the simple manner in which the calculation methods of quantum mechanics allow two separated systems conceptually to be combined together into a single one; for which the methods seem plainly predestined. When two systems interact, their psi-functions, as we have seen, do not come into interaction but rather they immediately cease to exist and a single one, for the combined system, takes their place.
 
  • #47
DrChinese said:
It is mainstream that either entangled particle spin is not predetermined, or there are non-local factors affecting it.

That statement to me is weaker than it needs to be. That way of phrasing things would seem to leave open the possibility of a local, non-deterministic hidden-variables theory (where spin results are not predetermined but are instead the result of a stochastic process). But there is no such local stochastic model of EPR-type correlations.
 
  • #48
stevendaryl said:
That statement to me is weaker than it needs to be. That way of phrasing things would seem to leave open the possibility of a local, non-deterministic hidden-variables theory (where spin results are not predetermined but are instead the result of a stochastic process). But there is no such local stochastic model of EPR-type correlations.

Perhaps. I would say that there can be no such local stochastic model because of perfect correlations. Such a model would inevitably feature something which ensures a specific outcome (since the settings/detectors themselves cannot introduce any element of randomness). Ergo it must be predetermined by hidden variables.
 
  • #49
DevilsAvocado said:
Entanglement and the shared wave function, described by Erwin Schrödinger 1935:
This quote does not answer my question. My question was how you do away with spooky action at a distance by assuming that measurements are just random.

Besides it does not seem like Erwin Schrödinger is explaining solution but rather a problem in that paragraph. After all the paragraph ends with this sentence: "Best possible knowledge of a whole does not include best possible knowledge of its parts - and that is what keeps coming back to haunt us."

DevilsAvocado said:
Bell’s theorem stipulates that QM violates at least one of these three assumptions:
  • Realism
  • Locality
  • Free will
Realism and free will are very basic assumptions behind the scientific method. You could simply state that QM is either non-scientific or it violates locality IMHO.
 
  • #50
Nugatory said:
If ##\Delta{t}<\frac{d}{c}## for any observer, then for some observers ##t_1 < t_2## while for other observers ##t_2 < t_1## - so which one is "first"?

Let me restate and clarify to show that the measurement at t1 at detector A is first, and a non-local effect occurs in the second measurement, t2 at detector B: Suppose both detectors are in the same frame or in comoving frames.
The source of the entangled photons is 9/20 of the total distance (d) from A and 11/20 of the total distance (d) from B on the AB axis. The first measurement is at t1 = 9/20 d/c. The particle before measurement was in a superposition of spin up and spin down on a parallel setting. It is spin up, collapsed wave function.
The second measurement, t2 at detector B with the same setting, is at 11/20 d/c. Spin down, anti-correlated. So the second measurement t2 was 1/10 d/c after the first measurement. Then Δt < d/c (a superluminal signal between t1 and t2). Again, my interpretation of the mainstream view: non-predetermined spin values for entangled particles and non-locality.
 
  • #51
zonde said:
This quote does not answer my question.

Oops sorry, bad interpretation on my side.

zonde said:
My question was how you do away with spooky action at a distance by assuming that measurements are just random.

DrC can speak for himself, but to me it looks like he is talking about the fact that the only way to have a determined entangled particle spin is through some non-local function, i.e. de Broglie-Bohm theory.

On the other hand, if we interpret it your way: what happens in EPR-Bell if we keep locality and exclude realism (in the “Three Amigos” above)?

That’s a good question, and the most important answer is that Bell’s theorem is a no-go theorem – i.e. it states that a particular situation is not physically possible. In Bell’s theorem this impossible situation is an LHV theory compatible with the predictions of QM.

This means that to refute Bell’s theorem you must also prove QM wrong.

This would, without doubt, be the biggest task in the history of science (certainly not compatible with the rules of this forum), and if it (against all odds in the observable universe) ever happens, your computer and every other electronic gadget on this planet will stop working in a fraction of a second, and we will be thrown back to the dark ages of the nineteenth century, using telegraphs and steam engines.

Back to your question: are there any attempts to explain how locality & non-realism would work?

Yes there are, where non-realism is implemented by means of nonseparability, in for example a relational blockworld.

zonde said:
Realism and free will are very basic assumptions behind the scientific method. You could simply state that QM is either non-scientific or it violates locality IMHO.

I think Lawrence Krauss said – "the universe isn't designed for us" – and QM has proven that this is at least true regarding the human brain.

You could easily replace QM with “My computer”, and say: “My computer is either non-scientific or it violates locality IMHO”.

What fits best in the mouth, I leave to you and any other reader out there... :wink:
 
  • #52
zonde said:
This quote does not answer my question. My question was how you do away with spooky action at a distance by assuming that measurements are just random.

Besides it does not seem like Erwin Schrödinger is explaining solution but rather a problem in that paragraph. After all the paragraph ends with this sentence: "Best possible knowledge of a whole does not include best possible knowledge of its parts - and that is what keeps coming back to haunt us."

Realism and free will are very basic assumptions behind the scientific method. You could simply state that QM is either non-scientific or it violates locality IMHO.

Personally, I find the terms "free will" and "realism" to be too fuzzy to reason about. I think it's more useful to think in terms of possible models that are ruled out by invoking "free will" or "realism".

As for "free will", a sort of model that is local, deterministic and compatible with the QM prediction is a "superdeterministic" model. In the reasoning that leads up to Bell's inequality, it is assumed that the choice of the hidden variable is independent of the choice of settings of distant measurement devices. That might not be the case. If the world is deterministic, then the settings of detectors is determined long in the past, and so it is possible to choose the hidden variable in a way that takes into account the future settings. (Actually, there's an interesting--to me--question about whether superdeterminism requires that twin-pair sources and detectors have an overlap in their backward lightcones. In some GR cosmologies, that might not be the case for really distant detectors.) I think it would be really difficult to reason about such a superdeterministic theory, but not impossible.

I don't really know what people mean by "realism". What is an example of a non-realistic theory? Well, I suppose one could explain the EPR correlations by assuming that the world is just a dream.
 
  • #53
DevilsAvocado said:
DrC can speak for himself, but to me it looks like he is talking about the fact that the only way to have a determined entangled particle spin is through some non-local function, i.e. de Broglie-Bohm theory.

The problem I have with Dr. C's either/or is that I don't see how a nondeterministic local realistic theory can reproduce the predictions of QM, either. So what's ruled out is local realistic models, and determinism is irrelevant.
 
  • #54
morrobay said:
Let me restate and clarify to show that the measurement at t1 at detector A is first, and a non-local effect occurs in the second measurement, t2 at detector B: Suppose both detectors are in the same frame or in comoving frames.
The source of the entangled photons is 9/20 of the total distance (d) from A and 11/20 of the total distance (d) from B on the AB axis.

And therefore you, at rest relative to the source and both detectors, are quite clear that ##t_1 < t_2## and that the detection at A happens before the detection at B. I, however, am watching from a spaceship moving along the AB axis at a speed of .1c relative to your lab and I will observe that ##t_1 = t_2## so the two detections were simultaneous. At any greater speed, I would find that ##t_1 > t_2## so the detection at B came first.

(Someone check my math, please - I did the algebra in my head so .1c may not be right for this particular combination of distances).

This is a typical example of the relativity of simultaneity at work; there is no way of deciding which of two spacelike-separated events "really" happened first. It's also why the apparent faster-than-light propagation of entanglement effects is so perplexing.
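
For what it's worth, a quick check of that algebra (a sketch using the obvious lab-frame coordinates: A at ##x=0##, B at ##x=d##, source at ##x=9d/20##, photons emitted at ##t=0##): the detections happen at

$$t_1 = \frac{9d}{20c} \ \text{(at } x=0\text{)}, \qquad t_2 = \frac{11d}{20c} \ \text{(at } x=d\text{)}.$$

For a boost of speed ##v## along the AB axis, ##t' = \gamma\,(t - vx/c^2)##, so simultaneity ##t_1' = t_2'## requires

$$\frac{9d}{20c} = \frac{11d}{20c} - \frac{v\,d}{c^2} \quad\Longrightarrow\quad v = \frac{c}{10},$$

so .1c does come out right for this combination of distances: at exactly that speed the two detections are simultaneous, and at any greater speed their order is reversed.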
 
  • #55
stevendaryl said:
The problem I have with Dr. C's either/or is that I don't see how a nondeterministic local realistic theory can reproduce the predictions of QM, either. So what's ruled out is local realistic models, and determinism is irrelevant.

Interesting, I have never thought of this... nondeterministic local realistic theory... is that even possible? Let's see:

  1. The entangled particles are in a superposition of correlated random outcomes.
  2. Non-locality is not present to assist.
  3. Realism is a fact.
But... afaik, realism requires that “the Moon is there even when nobody looks”... which means the particles making up the Moon must have definite states all the time, i.e. superposition is out of the question. If this also excludes “classical randomness”, I don’t know... what would a “stochastic Moon” look like?

AFAIK, Einstein spent a lot of time debating randomness vs realism, and I don't dare to enter these deep waters, but this setup will have serious problems just combining 1 & 2, which means it's not compatible with the predictions of QM.

I think many do put an equal sign between determinism and realism so when realism goes, so does determinism. We can at least be sure that if we exclude non-locality, determinism is the only way to get perfect correlations in EPR-Bell, which then is refuted by the full predictions of QM (i.e. all the other correlations/settings).
 
  • #56
stevendaryl said:
The problem I have with Dr. C's either/or is that I don't see how a nondeterministic local realistic theory can reproduce the predictions of QM, either. So what's ruled out is local realistic models, and determinism is irrelevant.

If one of Bell's assumptions (such as deterministic hidden variables) is wrong, then there could be a nondeterministic local model that can reproduce the predictions of QM. I don't claim it to be realistic, however, so I don't follow that part.

You are asking "how is that possible physically?" or similar (what is the mechanism etc). I can't say I know any more than I can answer a lot of "how" questions. I choose (and this is consistent with Bell) to think of such a model in terms of time symmetry. The full experimental setup includes future variables (which include information about Alice and Bob's choice of measurement). I wouldn't speculate as to whether there is a root cause to the outcome of a quantum spin flip. And since there is still an element of randomness in the outcome, this is a non-deterministic model. It is also local because nothing is happening faster than c.
 
  • #57
DevilsAvocado said:
I think many do put an equal sign between determinism and realism so when realism goes, so does determinism.

That is how I see it too. I know there are others who draw a distinction between determinism and realism. That there is no distinction is best seen simply by referring to the EPR definition, which is what was used by Bell:

a) An element of reality exists if a prediction can be made with certainty.
b) Elements of reality do not need to be simultaneously demonstrable.

If you then define both realism and determinism around that, they must be the same thing for purposes of the EPR/Bell/Aspect line of reasoning.
 
  • #58
And in my post above, I would add that requirement b) goes directly against QM's HUP. Therefore it is the weak link. No b) means Bell's realism assumption is invalid. And Bell's Theorem is satisfied because: No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics. You could also say: No physical theory featuring local pre-determination can ever reproduce all of the predictions of quantum mechanics. Or: No EPR-like physical theory can ever reproduce all of the predictions of quantum mechanics.
 
  • #59
DrChinese said:
If one of Bell's assumptions (such as deterministic hidden variables) is wrong, then there could be a nondeterministic local model that can reproduce the predictions of QM.

What I'm saying is that Bell's argument goes through perfectly well without assuming determinism. The assumption of deterministic hidden variables is that there is a hidden variable ##\lambda## that determines the outcomes at both detectors:

##F(\hat{A}, \lambda)## = the outcome at Alice's detector when the hidden variable has value ##\lambda## and Alice's detector is at orientation ##\hat{A}##

##G(\hat{B}, \lambda)## = the outcome at Bob's detector when the hidden variable has value ##\lambda## and Bob's detector is at orientation ##\hat{B}##

But if you started with a more general assumption (not assuming determinism), then there would be additional probabilities involved:

##P_A(\hat{A}, \lambda)## = the probability that Alice will measure spin-up at orientation ##\hat{A}## when the hidden variable has value ##\lambda##

##P_B(\hat{B}, \lambda)## = the probability that Bob will measure spin-up at orientation ##\hat{B}## when the hidden variable has value ##\lambda##

The perfect correlations that occur when ##\hat{A} = \pm \hat{B}## imply that
##P_A(\hat{A}, \lambda) = 0## or ##1##
##P_B(\hat{B}, \lambda) = 0## or ##1##

So determinism is a consequence of the assumption of local realism, not an additional assumption.
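
That last step can be spelled out (a sketch of the standard argument, using the factorized probabilities that locality provides, for the anticorrelated case ##\hat{A} = \hat{B} = \hat{a}##). Perfect anticorrelation means the probability of getting the same outcome on both sides is zero:

$$\int d\lambda\, \rho(\lambda)\Big[P_A(\hat{a},\lambda)\,P_B(\hat{a},\lambda) + \big(1 - P_A(\hat{a},\lambda)\big)\big(1 - P_B(\hat{a},\lambda)\big)\Big] = 0.$$

The integrand is non-negative, so it must vanish for (almost) every ##\lambda##; both products can only vanish at once if ##P_A(\hat{a},\lambda)## and ##P_B(\hat{a},\lambda)## are each exactly 0 or 1 (with ##P_B = 1 - P_A##).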
 
  • #60
stevendaryl said:
Personally, I find the terms "free will" and "realism" to be too fuzzy to reason about. I think it's more useful to think in terms of possible models that are ruled out by invoking "free will" or "realism".


I don't really know what people mean by "realism". What is an example of a non-realistic theory? Well, I suppose one could explain the EPR correlations by assuming that the world is just a dream.


Counterfactual definiteness: the absence of definite values of quantum objects or processes.
Just that, but some wish for more, i.e. no existence, the absence of anything.

But how can someone do science without anything?
 
