Can we violate Bell inequalities by giving up CFD?

In summary: I find it easier to think of a choice between locality and singular outcomes. The issue is that, under certain assumptions, singular outcomes imply a choice between reality and locality, and the violation of Bell's inequality implies singular outcomes. If you don't want to give up reality, then you must give up locality. That is what Bell's theorem tells us. The fact that some people say it the other way around doesn't make it so. Note that the issue is not that two measurement events are connected. That is completely fine. The issue is that they are connected in a way that would require either superluminal communication or a violation of local causality. But both of those are ruled out by special relativity. This is what we mean by nonlocality.
  • #71
rubi said:
I pretty much completely agree with the operational view. I might just have a different standard for what I consider a possible explanation. For me, a theory that describes every aspect of a phenomenon accurately, is already a possible explanation. You seem to additionally require an explanation to be philosophically pleasing. I also prefer philosophically pleasing models, but for me it is not a necessary condition for an explanation.

I'm not requiring anything pleasing. In fact, I think dBB is very ugly and Copenhagen is very beautiful. What I'm saying is that in the ordinary use of the word, an explanation or a cause must be something real. So if one considers the wave function to be an explanation or a cause, then one is considering it to be real. Almost everyone agrees that if the wave function is real, then there is manifest violation of Lorentz invariance, which can be particularly clearly seen by the wave function collapse.

rubi said:
I'm pretty sure that if you are not going to collapse the state anyway, i.e. you are just using it as a tool that encodes available information, you can just apply a Lorentz transform to it to get an equivalent description in any other inertial frame. The unitarity of the transformation ensures that all predictions must be equivalent.
But my point really wasn't about a no-collapse interpretation. What I'm saying is that even in plain Copenhagen with collapse, the probabilities that lead to a Bell inequality violation are calculated using only the pre-collapsed state, so it is really the entanglement and not the collapse, which causes the violation.

Yes, I understood that. I was just making a minor side point.

rubi said:
This is not what I meant to imply. I agree that it is uncommon to regard the preparation procedure as the cause of the correlations. What I'm saying is that I'm fairly sure that the majority of physicists don't know Reichenbach's principle and will reject it as soon as you tell them that it implies Lorentz violation, the exception being the rather negligible group of Bohmians.

The more usual way to say it in physics, which I don't think is controversial, is:
(1) QM has a measurement problem (of course one can deny this, but many do not, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, all Everettians etc)
(2) The measurement problem can potentially be solved by hidden variables
(3) Bell's theorem says that any hidden variable solution of the measurement problem will be nonlocal.

rubi said:
Ok, but Valentini's version seems to be a version that actually dares to make experimental predictions that contradict conventional quantum mechanics. I happily encourage this kind of research, since it may actually lead to an expansion of our understanding.

Yes, of course, the whole point of the measurement problem is that it potentially points to new physics - Dirac explicitly says this. I'm pretty sure Ilja is thinking of Valentini's version of dBB when he says dBB; I think most people do.

rubi said:
I explained above what my standard for an admissible explanation is. I'm not forcing anyone to adopt the same standard. However, I don't think that it is controversial to say that relativistic quantum theories maintain Lorentz invariance.

It isn't controversial that the predictions of quantum theory are Lorentz invariant, i.e. at the operational level. But once one looks beyond that and asks QM to "explain", one runs into problems with Lorentz invariance.

rubi said:
I don't think that one is forced to adopt such a point of view. After all, the wave function may just be a container for information about statistics of repeated identically prepared experiments.

Of course one is not forced to adopt such a point of view, I was just bringing up the minor side point that one can do so and save locality (EPR themselves mentioned this).
 
  • #72
rubi said:
I might just have a different standard for what I consider a possible explanation. For me, a theory that describes every aspect of a phenomenon accurately, is already a possible explanation. You seem to additionally require an explanation to be philosophically pleasing. I also prefer philosophically pleasing models, but for me it is not a necessary condition for an explanation.
In principle ok. I would make only the methodological point that having more open scientific problems is in no way harmful. On the contrary, it is even preferable, because these are more places where we can make progress.

Instead, "solving" problems by accepting what has been reached as satisfactory is not a good idea. If people had accepted classical thermodynamics simply as a field theory, as we accept the SM today, we would possibly not even know today that atoms exist.
rubi said:
What I'm saying is that I'm fairly sure that the majority of physicists don't know Reichenbach's principle and will reject it as soon as you tell them that it implies Lorentz violation, the exception being the rather negligible group of Bohmians.
Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see rational explanations. What remains is a quasi-religious belief: the ether is simply anathema.
 
  • #73
Ilja said:
Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see rational explanations. What remains is a quasi-religious belief: the ether is simply anathema.

rubi made an exception for Valentini in his comments in post #68.
 
  • #74
atyy said:
rubi made an exception for Valentini in his comments in post #68.
I have seen it. And the point is quite a reasonable one:

Ok, but Valentini's version seems to be a version that actually dares to make experimental predictions that contradict conventional quantum mechanics. I happily encourage this kind of research, since it may actually lead to an expansion of our understanding.
This is what I keep saying: the point of considering different interpretations is that they are starting points for different directions of development of the theory. It is quite natural that an interpretation leads to problems. And the way to solve problems is to modify the theory. Thus, the positivistic rejection of the consideration of interpretations is harmful for science, because it kills whole directions of possible theory development.

I have developed my ether theories following the same scheme. I started with an interpretation of GR. Then I solved a problem - that there were no Euler-Lagrange equations for the preferred coordinates - and the result was already a theory different from GR, http://arxiv.org/abs/gr-qc/0205035

Or take the standard model - initially all I wanted was to obtain the SM. The only way I managed to incorporate Dirac fermions was in pairs (interpreted as electroweak pairs) together with a scalar field. Thus, the ether model http://arxiv.org/abs/0908.0591 already comes with some CDM candidates.

And in http://arxiv.org/abs/1101.5774 I consider the Wallstrom objection against some interpretations of QM, in particular Nelsonian stochastics. And, it seems, a solution of this problem can also be found by a modification of the theory.
 
  • #75
Ilja said:
No. It corresponds nicely with de Broglie-Bohm theory. That you don't like a theory does not make it unreasonable.
I didn't say that it was a priori unreasonable. I'm just saying that it is a perfectly valid point of view to prefer its rejection over the alternatives. Reichenbach's principle may or may not be realized by nature. We must take the alternative seriously, especially if it leads to a simpler theory.

Big problem. Ok, I do not say that a fine-tuning problem is not a problem at all - it is an interesting problem worth considering, because the solution of this problem will probably give some additional insight, for example some symmetry.

But the problem in this paper does not have much to do with Lorentz symmetry - it is a general problem of a superposition: If one measures one part, the reduction of the wave function gives information about what has been measured as well as the result of the measurement - but this information remains invisible in the probabilities. Thus, the same problem appears also in non-relativistic QM if one applies the same technique.

And, it appears, dBB solves it - it is the "conspiracy" defined by quantum equilibrium, which creates a 1:1 correspondence between the probability distribution of the configuration and the wave function.

PS: Having finished reading the paper, I see that this has already been recognized in the paper itself, in the part where Valentini's variant is considered.
This is not what I meant. My point is that Bohmian mechanics predicts the existence of many additional entities and shields them from the observer in such a way that he cannot predict more about them than ordinary quantum theory can. It includes action at a distance but doesn't allow superluminal communication. It is hard to believe that nature has such a rich ontology, yet an observer cannot access any of the additional information or use the action at a distance for superluminal communication. Since there is already a theory that works without any such assumptions and is at the same time simpler to use, I find it natural to reject the Bohmian theory. I acknowledge that Valentini's theory allows for superluminal communication and deserves to be tested, but it still seems much too convoluted compared to ordinary QM for me to jump onto it before experiments disprove ordinary QM.

What would be the consequences so horrible that it is preferable to give up such an essential, fundamental concept as Reichenbach's principle?
I don't consider Reichenbach's principle essential or fundamental. It is just one possible principle that may or may not be true, and it doesn't seem like we gain much by accepting it. On the other hand, our best theories are all relativistically covariant, and it would be a big problem to explain why all our theories are relativistically covariant if nature really isn't. I would rather accept the violation of a principle that doesn't need to hold anyway than overthrow basically all of modern physics, especially if there is no evidence that anything can be gained by that.

One should indeed think about whether it is just a waste of time to have discussions with people who behave in such a way, so I have deleted the answers to the remaining points, leaving only those where I'm interested enough to find out whether you have some arguments or not.
You were the one who started being disrespectful.

stevendaryl said:
I don't think that that's the full story, for the same reason that Einstein, Rosen and Podolsky said. Consider an EPR-type twin-pair experiment where Alice and Bob decide ahead of time to choose the same orientation for their spin (or polarization) measurements. Furthermore, suppose that Alice performs her measurement slightly before Bob performs his. Then in the time between those measurements, Alice knows for certain what result Bob will get. So, if Alice were to describe the state of affairs near Bob's detector, then she would have to describe it by a density matrix, or probability distribution, or wave function, or whatever that gave 100% chance for certain outcomes and 0% chance for other outcomes. That's a different state than she would have used a moment before. So the state that she ascribes to Bob's detector/particle changes discontinuously.

That doesn't mean anything nonlocal is going on, if the state that Alice ascribes to Bob is subjective. But it isn't just subjective. Whether anyone else besides Alice knows it, it is certain what result Bob will get (unless you want to get many-worlds about it).
Alice is performing a Bayesian updating of her knowledge. It doesn't influence Bob's state as he is using his own copy of the state for the description of the system and it produces consistent statistics.

atyy said:
What I'm saying is that in the ordinary use of the word, an explanation or a cause must be something real. So if one considers the wave function to be an explanation or a cause, then one is considering it to be real. Almost everyone agrees that if the wave function is real, then there is manifest violation of Lorentz invariance, which can be particularly clearly seen by the wave function collapse.
The explanation isn't the wave function, but the preparation procedure. If I prepare a system in a state that can only produce 100% correlations (given a certain alignment of the detectors), I shouldn't be surprised if I find 100% correlations. One can still consider the wave-function as a container of information. Yes, I agree that this still violates Reichenbach's principle, but if I register 100% correlation if and only if I prepare the system in a state that can only produce 100% correlation, then it seems hard for me to deny a cause and effect relationship here, even if it is not consistent with Reichenbach's principle. It would rather give me the impression that Reichenbach's principle doesn't fully capture the notion of causality.

The more usual way to say it in physics, which I don't think is controversial, is:
(1) QM has a measurement problem (of course one can deny this, but many do not, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, all Everettians etc)
(2) The measurement problem can potentially be solved by hidden variables
(3) Bell's theorem says that any hidden variable solution of the measurement problem will be nonlocal.
I agree that this is uncontroversial, but it cannot be used to argue against locality.

Ilja said:
In principle ok. I would make only the methodological point that having more open scientific problems is in no way harmful. On the contrary, it is even preferable, because these are more places where we can make progress.

Instead, "solving" problems by accepting what has been reached as satisfactory is not a good idea. If people had accepted classical thermodynamics simply as a field theory, as we accept the SM today, we would possibly not even know today that atoms exist.
I have no problem with research in that direction. But if you demand that your views be respected by others, you should also respect other people's views.

Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see rational explanations. What remains is a quasi-religious belief: the ether is simply anathema.
There are perfectly good reasons to reject an ether theory if it doesn't yield any immediate gain while only making the theory more complicated. I'm sure that if you managed to describe some new experimental result with Bohmian mechanics, people would switch to your theory just as quickly as they rejected Reichenbach's principle.
 
  • #76
rubi said:
Alice is performing a Bayesian updating of her knowledge.

That's the subjective view that I was talking about, and I don't think that that makes sense. If it's just an updating of her subjective knowledge, then whatever she discovers to be true about Bob's situation by performing her measurement was also true (although she didn't know it) BEFORE her measurement.
 
  • #77
stevendaryl said:
I don't think that that's the full story, for the same reason that Einstein, Rosen and Podolsky said. Consider an EPR-type twin-pair experiment where Alice and Bob decide ahead of time to choose the same orientation for their spin (or polarization) measurements. Furthermore, suppose that Alice performs her measurement slightly before Bob performs his. Then in the time between those measurements, Alice knows for certain what result Bob will get. So, if Alice were to describe the state of affairs near Bob's detector, then she would have to describe it by a density matrix, or probability distribution, or wave function, or whatever that gave 100% chance for certain outcomes and 0% chance for other outcomes. That's a different state than she would have used a moment before. So the state that she ascribes to Bob's detector/particle changes discontinuously.

That doesn't mean anything nonlocal is going on, if the state that Alice ascribes to Bob is subjective. But it isn't just subjective. Whether anyone else besides Alice knows it, it is certain what result Bob will get (unless you want to get many-worlds about it).

The correlations at multiples of 45 degrees, which include "same orientation" of course, can be explained by local variables in a classical sort of way like the red and blue socks.
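To make the "socks"-style claim concrete, here is a minimal sketch of a local hidden-variable model (the variable names and the lookup-table construction are illustrative assumptions, not anything from the thread). Each pair carries a shared table of predetermined outcomes, and each outcome depends only on the local setting and that shared table, yet equal settings always yield perfect anti-correlation. Bell's theorem is precisely the statement that no such table can also reproduce the quantum correlations at all *unequal* settings simultaneously.

```python
import random

# Detector settings (degrees); multiples of 45 as in the post above.
ANGLES = [0, 45, 90, 135]

def make_pair():
    """Source emits a pair with a shared hidden variable: a random
    predetermined outcome (+1/-1) for every possible setting."""
    return {a: random.choice([+1, -1]) for a in ANGLES}

def alice(setting, lam):
    # Outcome depends only on Alice's local setting and the shared table.
    return lam[setting]

def bob(setting, lam):
    # Likewise local; anti-correlation is built into the source.
    return -lam[setting]

# With equal settings, this local model gives perfect anti-correlation:
trials = [make_pair() for _ in range(10_000)]
E_equal = sum(alice(45, lam) * bob(45, lam) for lam in trials) / len(trials)
print(E_equal)  # exactly -1.0 on every run
```

The point of the exchange that follows is that this kind of model succeeds only for the equal-settings case; it cannot match quantum mechanics once several setting pairs are combined, as in CHSH.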
 
  • #78
Derek Potter said:
The correlations at multiples of 45 degrees, which include "same orientation" of course, can be explained by local variables in a classical sort of way like the red and blue socks.

But Bell's inequality shows that that explanation isn't true.
 
  • #79
Since this thread seems to have morphed into a discussion of whether a principle proposed by Hans Reichenbach - a philosopher of science - is believed by most physicists, it seems it would now more appropriately belong in the philosophy sub-forum. Does physicsforums still have one?

Practising physicists can practise perfectly well with or without a belief in Reichenbach's 'common cause'. Since (1) science is not a democracy and (2) such a belief has no impact on the actual science, guessing about whether a majority believe in such a principle seems pretty irrelevant. One might as well ask whether a majority of physicists believe in God.

I'd also like to point out that people will expostulate for ages on 'causes' without ever pausing to think whether they really know what they mean by the word 'cause'. It is one of the most vague, misused and unnecessary words in the philosophical lexicon, and discussions become much clearer if we discard it altogether.

Here's an essay from Bertrand Russell pointing this out, from about 100 years ago:
http://www.jstor.org/stable/4543833?seq=1#page_scan_tab_contents
http://www.scribd.com/doc/269810250/Russell-On-the-Notion-of-Cause#scribd [alternate link]

And here's one I wrote a few years ago that made a similar point (I wasn't aware of Russell's essay at the time). It proposes a formal definition that I think is both well-defined and matches reasonably well the naive, folk notion of cause.
https://sageandonions.wordpress.com...-to-distil-clarity-from-a-very-muddy-concept/
 
  • #80
rubi said:
The explanation isn't the wave function, but the preparation procedure. If I prepare a system in a state that can only produce 100% correlations (given a certain alignment of the detectors), I shouldn't be surprised if I find 100% correlations. One can still consider the wave-function as a container of information. Yes, I agree that this still violates Reichenbach's principle, but if I register 100% correlation if and only if I prepare the system in a state that can only produce 100% correlation, then it seems hard for me to deny a cause and effect relationship here, even if it is not consistent with Reichenbach's principle. It would rather give me the impression that Reichenbach's principle doesn't fully capture the notion of causality.

But it is not true that you register 100% correlation if and only if you prepare the state in a certain way. The measurement settings of Alice and Bob are also needed to get the 100% correlation.

rubi said:
I agree that this is uncontroversial, but it cannot be used to argue against locality.

Why not, since you do agree that any hidden-variables approach to solving the measurement problem must be nonlocal (or retrocausal or superdeterministic, with the other usual caveats)? Unless you are using another definition of local, i.e. no superluminal signalling? That is fine, and quantum theory is certainly local by that operational definition. But in trying to solve the measurement problem, we have to go beyond operational quantum theory, in which case it is the locality of classical relativity that is important, since that is a version of relativity that does not have a measurement problem.

My point of view is that there are two notions of locality, and Bell's theorem is important for both of them. For the operational point of view locality means no superluminal signalling, and Bell's theorem guarantees that quantum mechanics is operationally random if no one can signal superluminally.

From the point of view of the measurement problem, the notion of locality is classical relativistic causality, since that is a version of special relativity without a measurement problem. Here Bell's theorem guarantees that that notion is gone, so we have to solve the measurement problem by nonlocal hidden variables, retrocausation, many-worlds, superdeterminism, or something more drastic.

So quantum theory is both local and nonlocal, according to different definitions of locality.
 
  • #81
stevendaryl said:
But Bell's inequality shows that that explanation isn't true.
The CHSH inequality is
-2 <= E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) <= +2
I don't know how you can apply the CHSH inequality when you only specify a single pair of angles - the same angle at that.
On a simple level we would take a = b = a′ = b′. Alice's observation of perfect anti-correlation then gives a value of −2, which does not violate the inequality.
If you have some other way of plugging numbers into the expression, I'd like to know what it is.
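One standard way to plug numbers into the CHSH expression is to use the quantum singlet prediction E(a, b) = −cos(a − b) (a textbook result, not specific to either poster's argument). As noted above, setting all four angles equal gives −2 and no violation; the usual choice of angles separated by 45° pushes the sum to −2√2. A minimal sketch:

```python
from math import cos, radians, sqrt

def E(a, b):
    """Quantum singlet correlation for spin-1/2 measurements
    at analyzer angles a and b, given in degrees."""
    return -cos(radians(a - b))

def chsh(a, ap, b, bp):
    """CHSH combination E(a,b) - E(a,b') + E(a',b) + E(a',b')."""
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# All settings equal: perfect anti-correlation, sum = -2, no violation.
print(chsh(0, 0, 0, 0))        # -2.0

# Standard CHSH angles (a=0, a'=90, b=45, b'=135): |S| = 2*sqrt(2) > 2.
print(chsh(0, 90, 45, 135))    # about -2.828
```

So the single-angle case indeed stays inside the bound; the violation only appears once four distinct setting pairs are combined.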
 
  • #82
andrewkirk said:
Since this thread seems to have morphed into a discussion of whether a principle proposed by Hans Reichenbach - a philosopher of science - is believed by most physicists, it seems it would now more appropriately belong in the philosophy sub-forum.

No, that is actually incidental to the discussion. The real issue is whether the only definition of locality that matters is "no superluminal signalling". No one is arguing that that is not an important sense, nor that quantum mechanics is not local by that definition. What is being argued is that that is not the only definition of locality that matters, because quantum mechanics has a measurement problem - one is certainly entitled to say that there is no measurement problem - however, many physicists, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, and all Everettians have agreed there is a problem.
 
  • #83
Derek Potter said:
The CHSH inequality is
-2 <= E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) <= +2
I don't know how you can apply the CHSH inequality when you only specify a single pair of angles - the same angle at that.

I don't understand your point. We've already proved (or Bell already proved) that EPR violates Bell's inequality. So why do I need to prove it again?
 
  • #84
stevendaryl said:
I don't understand your point. We've already proved (or Bell already proved) that EPR violates Bell's inequality. So why do I need to prove it again?

To elaborate a little more, you're right, in the case in which Alice and Bob agree ahead of time to use the same orientation, that particular experiment has a local hidden-variables explanation for its result. But we have other experiments that have shown that there are no local hidden variables involved. It doesn't make sense to explain a result in terms of some hypothetical entities (local hidden variables) which OTHER experiments have shown do not exist.
 
  • #85
andrewkirk said:
And here's one I wrote a few years ago that made a similar point (I wasn't aware of Russell's essay at the time). It proposes a formal definition that I think is both well-defined and matches reasonably well the naive, folk notion of cause.
https://sageandonions.wordpress.com...-to-distil-clarity-from-a-very-muddy-concept/

That's an interesting essay. Do you know http://www.cs.ucla.edu/~eb/r384-lnai.pdf?
 
  • #86
bhobba said:
I gave a model that specifically rejects counter-factual definiteness and predicts the violation of Bell's inequality. Obviously your assertion is wrong.

Oh - I nearly forgot to mention - I make no claim about locality because I don't believe locality applies to correlated systems. But if you do, by a suitable definition of locality, you can reject CFD and keep locality.
My requirement was that the model has to be local. As you don't like the word "locality" (or like to associate an uncommon concept with this word), let me replace "locality" with "factorizability".
 
  • #87
stevendaryl said:
To elaborate a little more, you're right, in the case in which Alice and Bob agree ahead of time to use the same orientation, that particular experiment has a local hidden-variables explanation for its result. But we have other experiments that have shown that there are no local hidden variables involved. It doesn't make sense to explain a result in terms of some hypothetical entities (local hidden variables) which OTHER experiments have shown do not exist.
OK, but the point is obscured if you illustrate non-locality with an example where non-locality is not needed :)
 
  • #88
RUTA said:
Here is an example http://www.ijqf.org/archives/2402. Also, note that it is a realist theory without CFD.
There are a number of things to say about your proposed model and the RB interpretation in general.
First, the entanglement model is not worked out. On pp. 154-155 the setup is described, and then, when it would be time to introduce a particular configuration of "spacetimesource elements" and show how one arrives at the expected result, there is some handwaving instead.
Second, there was a requirement that the model be local (factorizable, in case bhobba reads this). But as I understand it, the relations that are fundamental in this model are non-local, right?
Third, to me the AGC seems like a cheat (read: a non-scientific explanation). Is there some motivation for why it is reasonable to introduce the AGC?
And fourth, to me it seems that switching from worldlines to relations as fundamental entities is a philosophically fundamental and extremely radical change that steps outside the domain of science.
 
  • #89
rubi said:
I didn't say that it was a priori unreasonable. I'm just saying that it is a perfectly valid point of view to prefer its rejection over the alternatives. Reichenbach's principle may or may not be realized by nature. We must take the alternative seriously, especially if it leads to a simpler theory.
If the only gain in simplicity is similar to the gain thermodynamics would have reached by rejecting the atomic hypothesis and being interpreted as a field theory, I would disagree that a "simpler theory" is an advantage. It would be better to care about predictive and explanatory power. But I would not question that preferring an alternative should be allowed in science.

rubi said:
My point is that Bohmian mechanics predicts the existence of many additional entities and shields them from the observer in such a way that he cannot predict more about them than ordinary quantum theory can. It includes action at a distance but doesn't allow superluminal communication. It is hard to believe that nature has such a rich ontology, yet an observer cannot access any of the additional information or use the action at a distance for superluminal communication. Since there is already a theory that works without any such assumptions and is at the same time simpler to use, I find it natural to reject the Bohmian theory. I acknowledge that Valentini's theory allows for superluminal communication and deserves to be tested, but it still seems much too convoluted compared to ordinary QM for me to jump onto it before experiments disprove ordinary QM.
I do not think the dBB ontology is nice. But if we combine the Copenhagen interpretation with the idea that there exists a unique theory of everything, we cannot really avoid having a configuration for the quantum part as well. We have access to this part of the structure in the classical part. What we can try to get rid of is the wave function part, which may be epistemological.

rubi said:
I don't consider Reichenbach's principle essential or fundamental. It is just one possible principle that may or may not be true and it doesn't seem like we gain much by accepting it. On the other hand, our best theories are all relativistically covariant and it would be a big problem to explain why all our theories are relativistically covariant when nature really isn't relativistically covariant.
That's a minor problem, and it is already solved in http://arxiv.org/abs/gr-qc/0205035, at least for the classical part: the EEP is derived there for a non-covariant theory.

rubi said:
I would rather accept the violation of a principle that doesn't need to hold anyway than overthrow basically all of modern physics, especially if there is no evidence that anything can be gained by that.
There is IMHO nothing important which has to be overthrown, except some metaphysical prejudices against hidden variables. The ether theory of http://arxiv.org/abs/gr-qc/0205035 has the EEP and the Einstein equations as a limit, and its most serious differences with GR disappear if one chooses Y<0, which gives only four massless dark matter fields and some arbitrary small cosmological terms as the remaining difference.

The SM need not be overthrown either; http://arxiv.org/abs/0908.0591 is the only theory I know of which actually predicts the three generations of SM fermions, the SM gauge group, and its action on the fermions, which, I think, is a gain. But, of course, you may find the string theory landscape more attractive.

So, what are the things which have to be overthrown?
rubi said:
You were the one who started being disrespectful.
I have different memories, but let's forget it.
rubi said:
There are perfectly good reasons to reject an ether theory if it doesn't yield any immediate gain while only making the theory more complicated. I'm sure that if you managed to describe some new experimental result with Bohmian mechanics, people would switch to your theory just as quickly as they rejected Reichenbach's principle.
Sorry, but the gains which can be reached with the ether can be easily seen. The problems of quantizing gravity essentially disappear - we know how to quantize condensed matter theories. The problem of explaining why the SM is what it is is solved in an IMHO satisfactory way too, even if the model does not allow one to compute the masses. Bohmian mechanics does not have a measurement problem. My ether theories have been published for some years now, but are simply ignored. So, no, I no longer believe that people would switch to an ether theory if it gave some gains. (Not in a world where you need an independent income to do independent research, because you can be sure that it will be extremely hard to publish anything and you will never obtain a grant for this.)

And, of course, the question is what has been the immediate gain of the atomic hypothesis? How long has atomic theory been developed before it managed to obtain a new experimental result?
 
  • #90
zonde said:
My requirement was that model has to be local. As you don't like word "locality" (or like to associate uncommon concept with this word) let me replace "locality" with "factorizability".

It's compatible with the definition of locality, i.e. from the previously linked paper:
'Let us define a “local” theory as a one where the outcomes of an experiment on a system are independent of the actions performed on a different system which has no causal connection with the first. For example, the temperature of this room is independent on whether I choose to wear purple socks today. Einstein’s relativity provides a stringent condition for causal connections: if two events are outside their respective light cones, there cannot be any causal connection among them.'

I don't deny the existence of locality - I am simply saying it doesn't apply to correlated systems, because by the definition of correlation, if things are correlated, what is done in one system is related to what goes on in the other. Include it in locality if you like. The issue, however, is that if you want to keep counter-factual definiteness you must allow superluminal signalling, which specifically makes it non-local:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm
'But there was a price to pay for such an experimental setup: we must add a SECOND assumption. That assumption is: A measurement setting for one particle does not somehow affect the outcome at the other particle if those particles are space-like separated. This is needed because if there was something that affected Alice due to a measurement on Bob, the results would be skewed and the results could no longer be relied upon as being an independent measurement of a second attribute. This second assumption is called "Bell Locality" and results in a modification to our conclusion above. In this modified version, we conclude: the predictions of any LOCAL Hidden Variables theory are incompatible with the predictions of Quantum Mechanics. Q.E.D. '

Thanks
Bill
 
  • #91
bhobba said:
I don't deny the existence of locality - I am simply saying it doesn't apply to correlated systems, because by the definition of correlation, if things are correlated, what is done in one system is related to what goes on in the other. Include it in locality if you like. The issue, however, is that if you want to keep counter-factual definiteness you must allow superluminal signalling, which specifically makes it non-local:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm
It's the reverse - if you want to reject counter-factual definiteness (once it leads to the BI, which are violated), you must allow superluminal causal influences.

And the article does not get the main point of the EPR argument:
EPR also said that since it is "unreasonable" to believe that these particle attributes require observation to become real, therefore Hidden Variables must exist. Einstein said: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it." This second part of EPR was accepted by some, and rejected by others including Bell.
No, once we accept the EPR criterion of reality, accept also the observable 100% anti-correlation, and accept Einstein causality, we can prove that the particle has a predetermined spin in all directions, so that we do not have to rely on vague philosophical "I think" feelings.
 
  • #92
atyy said:
That's an interesting essay. Do you know http://www.cs.ucla.edu/~eb/r384-lnai.pdf?
Thank you atyy. I am not familiar with that Bayesian paper but it looks interesting. I've added it to my reading list.

Andrew
 
  • #93
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (once it leads to the BI, which are violated), you must allow superluminal causal influences.

That's wrong - Bell says you can't have both locality and counter-factual definiteness. Counter-factual definiteness is simply a more careful statement of realism - although it's slightly different.

Thanks
Bill
 
  • #94
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (once it leads to the BI, which are violated), you must allow superluminal causal influences.
That is the reverse of my understanding, and of everything I've ever read on the topic. Why do you think that?
And why do you think anybody would ever want to reject CFD if doing so doesn't solve anything and only creates more problems?
 
  • #95
andrewkirk said:
That is the reverse of my understanding, and of everything I've ever read on the topic.

It's wrong. We all make errors, and that's all it was. I do things like that all the time.

Thanks
Bill
 
  • #96
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (once it leads to the BI, which are violated), you must allow superluminal causal influences.
Not at all. Without CFD there is no definite state, so there is no need for an influence to cause it, superluminal or not. In more familiar terms, nothing has to collapse the wavefunction of the detectors if it does not, in fact, collapse.
Ilja said:
No, once we accept the EPR criterion of reality, accept also the observable 100% anti-correlation, and accept Einstein causality, we can prove that the particle has a predetermined spin in all directions, so that we do not have to rely on vague philosophical "I think" feelings.
Well, those assumptions may entail predetermined spin, but the EPR correlations violate the BI, and this proves the opposite. When pitching facts against assumptions, I tend to back the facts. One or more assumptions are wrong. Perhaps that is what you mean?

Einstein overstated the case because Heisenberg and Bohr were concocting anti-real theories, or weird ideas that observation creates reality. With the EPR correlations confirmed by experiment, Einstein would undoubtedly have continued to believe that the moon exists even when he wasn't looking at it, but he would have accepted that it might well be in a positional superposition rather than simply "there".

Of course discussing what Einstein would have thought is counter-factual reasoning too.
 
  • #97
Derek Potter said:
OK, but the point is obscured if you illustrate non-locality with an example where non-locality is not needed :)

The reasoning goes like this:
  • If at some point, Alice knows for certain what Bob's measurement's outcome will be before the measurement takes place, then that reflects a physical fact about Bob's situation.
  • Either (A) that fact was true before Alice performed her measurement (and her measurement merely revealed that fact to her), or (B) the fact became true when Alice performed her experiment.
  • Choice (A) is a hidden-variables theory, of the type ruled out by Bell's inequality.
  • Choice (B) implies that something taking place near Alice (her measurement) caused a change in the facts about Bob.
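The dichotomy above can be made quantitative. Here is a minimal Python sketch (my own illustration, not from this thread; the angles are the conventional CHSH settings) showing that choice (A), predetermined outcomes, caps the CHSH combination at 2, while the quantum singlet prediction reaches 2√2:

```python
import itertools
import math

# CHSH combination: S = E(a,b) - E(a,b') + E(a',b) + E(a',b')

# Choice (A): outcomes predetermined. Each run carries fixed values
# A(a), A(a'), B(b), B(b') in {+1, -1}; averaging over any mixture of
# such assignments cannot exceed the extreme deterministic cases.
best = max(abs(Aa * Bb - Aa * Bb2 + Aa2 * Bb + Aa2 * Bb2)
           for Aa, Aa2, Bb, Bb2 in itertools.product([1, -1], repeat=4))
print(best)  # 2 -> the CHSH bound for predetermined outcomes

# Quantum singlet prediction E(x, y) = -cos(x - y) at standard angles
E = lambda x, y: -math.cos(x - y)
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2) > 2
```

Ruling out (A) experimentally therefore forces (B), with the non-locality worry that entails.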
 
  • #98
Derek Potter said:
Not at all. Without CFD, there is no definite state so there is no need for an influence to cause it, whether superluminal or not. In more familiar terms, nothing has to collapse the wavefunction of the detectors if it does not, in fact, collapse.
What does "without CFD" mean, if CFD is derived?
Derek Potter said:
Well those assumptions may entail predetermined spin but EPR violates BI and this proves the opposite. When pitching facts against assumptions I tend to back the facts. One or more assumptions are wrong. Perhaps that is what you mean?
Of course, one of the assumptions has to be wrong. One is an observational fact, which we can take as given (let others care about loopholes). What remains is:

1.) The EPR criterion of reality: if, without in any way disturbing the system, we can predict with certainty the result of an experiment, then this result is an element of reality even without, or before, the measurement being done - that is, it is CFD.

2.) Einstein causality: The experiment done by Bob does in no way disturb the system measured by Alice.
 
  • #99
andrewkirk said:
That is the reverse of my understanding, and of everything I've ever read on the topic. Why do you think that?
And why do you think anybody would ever want to reject CFD if doing so doesn't solve anything and only creates more problems?

CFD in this particular situation is the consequence of the EPR criterion of reality and Einstein causality, together with the observable fact of the 100% anticorrelation.

We would want to reject it, because it is all we need (together with Einstein causality) to continue with the proof of Bell's inequalities. They are violated (modulo loopholes, which I ignore), and one way to solve the problem would be to reject CFD in this particular situation.

But if we want to do this, we are faced with the EPR argument, which derives CFD from the EPR criterion of reality and Einstein causality. If one rejects the idea of rejecting the EPR criterion of reality, one obtains what I have claimed, namely that the rejection of CFD requires the rejection of Einstein causality.

And, indeed, this is the reverse of the understanding of many people - all those who make the quite common error of ignoring that determinism is not assumed but derived by the EPR argument, so that they think that simply rejecting determinism would be sufficient to solve the problem.
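To see numerically that the derived CFD really does conflict with the quantum predictions, here is a minimal check (my own sketch, using the standard singlet correlation E(x, y) = -cos(x - y) and Bell's original 1964 inequality; the specific angles are a conventional choice, not anything from this thread):

```python
import math

# QM singlet correlation between spin measurements along angles x and y
E = lambda x, y: -math.cos(x - y)

# Bell's original 1964 inequality, valid if the outcomes are
# predetermined (the CFD derived via the EPR argument):
#   |E(a,b) - E(a,c)| <= 1 + E(b,c)
a, b, c = 0.0, math.pi / 4, math.pi / 2
lhs = abs(E(a, b) - E(a, c))
rhs = 1 + E(b, c)
print(round(lhs, 3), round(rhs, 3))  # 0.707 vs 0.293: violated
```

So whichever premise fails, it cannot be determinism alone, since determinism was a conclusion, not an input.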
 
  • #100
bhobba said:
Counter-factual definiteness is simply a more careful statement of realism - although its slightly different.

Not at all. In particular, de Broglie-Bohm theory is clearly realistic, even deterministic, but there is no CFD in it. The outcomes of "measurements" in dBB are essentially results of interactions, and depend on the configuration of the "measured" system as well as on that of the "measurement" device. Thus, there is no prediction for the outcomes of measurements which are not performed, because such unperformed experiments have no configuration of the "measurement" device.
 
  • #101
Ilja said:
Not at all. In particular, de Broglie-Bohm theory is clearly realistic, even deterministic, but there is no CFD in it.

Hmmm. Actually, that's an interesting case. CFD is the ability to speak meaningfully of the definiteness of the results of measurements that have not been performed. Such results are real in BM, but you can't make definite predictions because of lack of knowledge about initial conditions. In principle you can speak about it, but in practice you can't measure it. It depends on your interpretation of "speak meaningfully of the definiteness of the results of measurements that have not been performed". I side with you on that one - but I suspect others may not agree. In other words, are measurements that have not been performed measurements in principle or in practice?

Thanks
Bill
 
  • #102
atyy said:
[denial of collapse can avoid violating the inequality at spacelike separation.] [..] No, it means that Bob includes Alice as part of his classical apparatus and Alice includes Bob as part of her classical apparatus. So the measurement that is performed is the simultaneous measurement by Alice and Bob. However, using this method to avoid collapse will create a preferred frame, since it takes the frame in which Alice and Bob measure simultaneously. To avoid the preferred frame, one cannot accept the reality of measurements at spacelike intervals. [..]
Such a "preferred frame" is no more preferred than a "rest frame" in SR. In a "rest frame" (also called "inertial system of reference") certain events are simultaneous by mere definition. That doesn't contradict relativity, as any inertial frame will do.
Thus I'm interested in your first remark: can you clarify how denial of collapse can avoid violating the inequality if the two events are simultaneous? I always read the inequality as referring to (at least approximately) simultaneous events. So I find that remark puzzling... Probably you mean something other than how it sounds.
 
  • #103
bhobba said:
Hmmm. Actually that's an interesting case. Its the ability to speak meaningfully of the definiteness of the results of measurements that have not been performed. Its real for BM but you can't make definite predictions because of lack of knowledge about initial conditions. In principle you can speak about it, but in practice you can't measure it. It depends on your interpretation of 'speak meaningfully of the definiteness of the results of measurements that have not been performed'. I side with you on that one - but I suspect others may not agree. In other words are measurements that have not been performed measurements in principle or in practice.

The point which matters is not that much about "speaking about", but about what we need to prove Bell's theorem.

In dBB theory we can accept the EPR criterion of reality, and it does not follow that the result of the spin measurement has to be predetermined. It depends on the particular hidden configuration of Bob's measurement device, and thus also on additional local choices (Bob may have different measurement devices even for the same direction, and freely choose one of them). Thus, the measurement result is essentially created by the measurement. Then the effective wave function of Alice's part collapses - a well-defined physical effect which follows from putting the configuration of Bob's particle into the shared wave function to obtain the effective wave function of Alice's particle. A procedure which creates the state which Alice will measure, and which depends on what happens in Bob's experiment.

So, the EPR criterion is inapplicable - Bob's measurement distorts Alice's system - and so I cannot prove that the spin components are predefined.
 
  • #104
Ilja said:
The point which matters is not that much about "speaking about", but about what we need to prove Bell's theorem.

My concern is that this is heading down the route of semantic philosophical argument, where you simply argue about the meaning of words. Personally, I find it really obvious: it's the ability to speak about things independent of measurement. Call it CFD, realism, whatever you like, but you can't have that and an absence of superluminal influences.

Thanks
Bill
 
  • #105
bhobba said:
My concern is that this is heading down the route of semantic philosophical argument, where you simply argue about the meaning of words. Personally, I find it really obvious: it's the ability to speak about things independent of measurement. Call it CFD, realism, whatever you like, but you can't have that and an absence of superluminal influences.
Wrong words are misleading, and a point which was made by Bell in "Against Measurement" is that already the use of the term "measurement" in the quantum context is misleading. And CFD is the thesis that the results of measurements are predefined, which is very different from realism. So it is quite important that you don't name it realism, but use different words to describe these very different things.
 
