Can we violate Bell inequalities by giving up CFD?

  • #51
rubi said:
The measurement problem is only a problem to a physicist, who is convinced that there must be some theory of everything that also includes himself. I would argue that this physicist has no basis for his conviction.

Yes, only to a physicist who believes that there is some theory that also includes him or at least his measurement apparatus. One can certainly take your view, as Bohr and Heisenberg did. But as I mentioned, many others including Landau and Lifshitz, Dirac, Weinberg, Bell and Tsirelson did not.

rubi said:
There is certainly always new physics to be discovered, but I see no reason to believe that this new physics must be a classical description, so I don't agree that the violation of Bell's inequality implies that nature must be non-local. I also think that the measurement problem and the violation of Bell's inequality are not necessarily related.

Yes, the next theory beyond quantum theory may also have a measurement problem. However, in the usual sense of the word, "nature" refers to a theory without a measurement problem, and classical theories like general relativity do not have a measurement problem. So Bell's theorem says that if we use such a theory, then it is nonlocal. The measurement problem and Bell's inequality violation are related, because one way of solving the measurement problem is to introduce hidden variables, e.g. Bohmian Mechanics. Bell's theorem says that such a theory that reproduces quantum mechanics will be nonlocal.

rubi said:
I don't see how pushing the measurement to the end implies Alice and Bob measure simultaneously. What I'm saying is that textbook derivation of the violation of Bell's inequality with conventional quantum mechanics never references the collapse. All probabilities are calculated with the pre-collapsed state.

By definition, pushing the measurement to the end means Alice and Bob measure simultaneously - is there another option?
 
  • #52
rubi said:
So you finally acknowledge the fact that there exist perfectly relativistically covariant quantum theories, contrary to your initial claim? (Free QED is already enough to correctly predict the Bell tests by the way.)

Conventions differ, so one can say either that free QED is or is not manifestly relativistically covariant. Wave function collapse means that the wave function evolution is neither covariant nor invariant. However, it doesn't matter, since the predictions are relativistically invariant.
 
  • #53
Derek Potter said:
The end is not when Alice and Bob have completed their measurements but when they have shared their measurements with Charles (or each other). This final collation of results is made at time-like separation. In which case it does not matter if Alice measures ahead of Bob. There is no preferred frame, the only criterion is that measurement (collapse) must be postponed until the 4 states have been able to interfere. The fact that Alice and Bob enter Schrodinger Cat states is unfortunate but the conceptual problem for realists was anticipated with Wigner's Friend, here played by Charles.

Yes, that's a slightly more general version of what I said. In either case, there is no violation of the Bell inequalities at spacelike separation, so no implication of nonlocality via the Bell inequalities.
 
  • #54
rubi said:
It is well known that QM doesn't satisfy Bell's locality criterion ("Einstein causality"). That doesn't mean it is not causal. The cause for the correlations is of course that the quantum system has been prepared in a state that results in those correlations. If you prepare the system in a different state, you will not see the same correlations. This is a perfect cause and effect relationship. Bell's locality criterion is just too narrow and doesn't capture the meaning of the word causality adequately. So please don't force your personal definition of causality on everybody else.

Ilja's definition is the conventional definition throughout science. I think many would agree that maybe there is some other definition that makes what you say correct, but so far there are none widely agreed upon. For example, there is brief commentary by Cavalcanti and Lal on proposals like yours (considering the entangled state to be the cause), but these are not yet widely accepted. (At any rate, if the entangled state is the cause, the formalism is manifestly not relativistically invariant).

http://arxiv.org/abs/1311.6852 (p11): "Another way of dropping FP while keeping PCC would be to point out that correlations do not need to be explained in terms of a factorisability condition, but that the quantum state of the joint system in its causal past can itself be considered as the common cause of the correlations. An objection to this point of view, however, is that the precise correlations cannot be determined without knowledge of the measurements to be performed (the inputs x and y in Fig. 1), and these may be determined by factors not in the common past of the correlated events. A similar criticism may be made of the L-S approach. However, an advantage of the latter is that it does give an analogue of the factorisation condition (rather than simply dropping it), and thus could allow for a generalisation of Reichenbach’s Principle of Common Cause in understanding the implication of causal structure for probabilistic correlations, and be of potential application in areas such as causal discovery algorithms."
 
  • #55
rubi said:
So you finally acknowledge the fact that there exist perfectly relativistically covariant quantum theories, contrary to your initial claim? (Free QED is already enough to correctly predict the Bell tests by the way.)
What I question is your "perfectly". And I continue to question it, because all the conceptual issues with the measurement problem and so on are simply ignored.

rubi said:
Haags theorem is not relevant to the existence of interacting quantum field theories. It just states that they can't be unitarily equivalent to free theories, which is neither necessary nor expected. It is strongly believed that interacting 4d QFT's exist (otherwise the Clay institute wouldn't have put a million dollar bounty on it).
As if this would matter. I think this only shows that they don't exist - there are enough clever guys who would have found one, if it existed, given such a prize.

Anyway, it would be useless - gravity is at best an effective field theory, and field theory in combination with gravity will not fare better. Effective field theories may have a Lorentz symmetry in their large-distance limit, but they are conceptually not Lorentz-covariant.
rubi said:
The causal relationship I'm talking about is that whenever we prepare the system in a specific entangled state, we will see the correlations and whenever we prepare it in a different state, we don't see the correlations (or see different correlations).
Yes, fine, but this is not all that Reichenbach's principle is about. The point is that the correlation should be explained by the common cause, in a quite precise probabilistic sense: ##P(A \wedge B \mid cc) = P(A \mid cc)\,P(B \mid cc)##.
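As a quick numerical illustration of this screening-off condition (my own sketch, not part of the original argument; the only input is the textbook singlet prediction ##P(\text{same}) = \tfrac{1}{2}\sin^2(\theta/2)##), one can check directly that conditioning on the preparation alone does not factorize the singlet correlations:

```python
import math

def singlet_joint(theta):
    """Joint outcome probabilities for spin measurements on the singlet
    state along axes separated by angle theta (radians).
    Textbook QM: P(same outcome) = sin^2(theta/2)/2 for each of (+,+), (-,-);
    P(opposite) = cos^2(theta/2)/2 for each of (+,-), (-,+)."""
    same = 0.5 * math.sin(theta / 2) ** 2
    opp = 0.5 * math.cos(theta / 2) ** 2
    return {(+1, +1): same, (-1, -1): same, (+1, -1): opp, (-1, +1): opp}

def factorizes(theta, tol=1e-12):
    """Check Reichenbach's screening-off condition with the prepared state
    playing the role of the common cause cc:
    P(A, B | cc) == P(A | cc) * P(B | cc) for all outcomes."""
    p = singlet_joint(theta)
    for a in (+1, -1):
        for b in (+1, -1):
            pa = sum(p[(a, bb)] for bb in (+1, -1))  # marginal P(A = a)
            pb = sum(p[(aa, b)] for aa in (+1, -1))  # marginal P(B = b)
            if abs(p[(a, b)] - pa * pb) > tol:
                return False
    return True

# At theta = 0 the outcomes are perfectly anti-correlated, so the
# preparation alone does not screen off the correlation.
print(factorizes(0.0))          # False
print(factorizes(math.pi / 2))  # True: at 90 degrees the outcomes decouple
```

So the prepared state, taken as "the cause", fails the factorisation condition that the principle demands; this is exactly the gap between the two positions in this exchange.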
rubi said:
Therefore, we can say that the cause for the appearance of the correlations is our preparation of the state. So quantum theory explains the correlations, even if you don't like it, and this is all a scientist needs.
For those who don't like mathematics and formulas, this may be sufficient as an "explanation". Scientists usually have higher requirements. The tobacco industry would, again, be very happy if such a verbal description were all that scientists need to explain correlations.
rubi said:
And the fact that the only way to save it seems to be to introduce an ether and come up with essentially a conspiracy theory is really more than enough evidence for its rejection.
LOL. A big problem - the Lorentz ether interpretation coming back. And, no, the Lorentz ether does not need any conspiracy, this is an old fairy tale for schoolboys. The Poincare group is the symmetry group of a wave equation, and if everything follows the same wave equation, you obtain it automatically without any conspiracy.
rubi said:
The Lorentz covariance is exactly as manifest as it is in Maxwell's equations. In the Heisenberg picture, you have a non time-dependent state ##\left|\Psi\right>## and operators ##\phi(x,t)## and you get the Schrödinger picture by defining ##\left|\Psi(t)\right> = U(t) \left|\Psi\right>## and ##\phi(x) = U(t)^\dagger \phi(x,t) U(t)##. The state will satisfy the Schrödinger equation defined by the generator of ##U(t)##, as can be easily checked by applying a time-derivative. The time coordinate plays exactly the same role in both pictures. All the expectation values are the same. This is not even specific to quantum theory. You can also formulate classical relativistic theories in an initial-value formulation with a preferred time-coordinate. Even GR has such a formulation (the ADM formalism). There is nothing wrong about rewriting equations in an equivalent way. And even if it were (which it isn't), then free QED in the Heisenberg picture would still be a perfectly manifestly Lorentz covariant quantum theory, which you claim doesn't exist.
I repeat, I have somewhat sharper criteria than you for the perfection of a theory. I have not yet seen any consistent Lorentz-covariant description of the measurement process - at least none comparable in clarity with the description of the measurement process in de Broglie-Bohm theory, which is clearly non-covariant.

Given that de Broglie-Bohm theory has had a free QED variant from its start in Bohm's article, one can compare it with your Lorentz-covariant form. I think that the latter is far from perfect, except in the quite trivial sense of perfection which simply removes all imperfect things from consideration.
 
  • #56
Ilja said:
As if this would matter. I think this only shows that they don't exist - there are enough clever guys who would have found one if it exists given such a price.

Haag's theorem doesn't prevent relativistic QFTs from existing, since these have already been constructed in 1+1D and 2+1D.

Ilja said:
I repeat, I have a little bit sharper criteria than you for perfection of a theory. I have not seen yet any consistent Lorentz-covariant description of the measurement process. At least none which would be comparable in clarity with the description of the measurement process in de Broglie-Bohm theory, which is clearly non-covariant.

But Bell's theorem doesn't rule them out - i.e., is it possible for nonlocal Lorentz-covariant hidden variables to exist? Maybe http://arxiv.org/abs/1111.1425?
 
  • #57
atyy said:
But Bell's theorem doesn't rule them out - ie. is it possible for nonlocal Lorentz covariant hidden variables to exist? Maybe http://arxiv.org/abs/1111.1425?
I think that if one relies on causality (requiring Reichenbach's principle and no causal loops), covariant theories are ruled out.

The Bell inequality violation excludes a common cause, so what remains is either A->B or B->A in arbitrarily small environments of A and B - but not both, because there are no causal loops. Then A->B defines a classical causality connected with a preferred foliation.
 
  • #58
atyy said:
Yes, only to a physicist who believes that there is some theory that also includes him or at least his measurement apparatus. One can certainly take your view, as Bohr and Heisenberg did. But as I mentioned, many others including Landau and Lifshitz, Dirac, Weinberg, Bell and Tsirelson did not.
I agree that one can also take the other point of view and I accept that people do so. I just wanted to explain that one doesn't need to and quantum theory can be a very satisfactory theory if one doesn't.

atyy said:
Yes, the next theory beyond quantum theory may also have a measurement problem. However, in the usual sense of the word, "nature" refers to a theory without a measurement problem, and classical theories like general relativity do not have a measurement problem. So Bell's theorem says that if we use such a theory, then it is nonlocal. The measurement problem and Bell's inequality violation are related, because one way of solving the measurement problem is to introduce hidden variables, e.g. Bohmian Mechanics. Bell's theorem says that such a theory that reproduces quantum mechanics will be nonlocal.
Well, I would say that "nature" refers to nature and a theory is just a representation of some ideas about nature in the language of mathematics. We can't read off what nature is by looking at a mathematical theory.

atyy said:
By definition, pushing the measurement to the end means Alice and Bob measure simultaneously - is there another option?
Pushing the measurement to the end means that you are describing the situation from the outside, i.e. you have a third observer. But as a matter of fact, Alice and Bob perform measurements at spacelike separated intervals and the results are consistent with the statistics that is predicted by the pre-collapsed state.

atyy said:
Ilja's definition is the conventional definition throughout science. I think many would agree that maybe there is some other definition that makes what you say correct, but so far there are none widely agreed upon. For example, there is brief commentary by Cavalcanti and Lal on proposals like yours (considering the entangled state to be the cause), but these are not yet widely accepted. (At any rate, if the entangled state is the cause, the formalism is manifestly not relativistically invariant).
I believe that the majority of quantum physicists would agree that Reichenbach's criterion is too strong for application in quantum theory.

Ilja said:
What I question is your "perfectly". And I continue to question it, because all the conceptual issues with the measurement problem and so on are simply ignored.
Perfectly means that the theory is invariant under the Poincare group. This is the definition of a Lorentz covariant theory.

Ilja said:
As if this would matter. I think this only shows that they don't exist - there are enough clever guys who would have found one, if it existed, given such a prize.

Anyway, it would be useless - gravity is at best an effective field theory, and field theory in combination with gravity will not fare better. Effective field theories may have a Lorentz symmetry in their large-distance limit, but they are conceptually not Lorentz-covariant.
So the remaining 5 unsolved Millennium Problems are also unsolvable, since nobody has solved them yet? This is a hilarious claim. Anyway, I can only tell you that you will not find a single person working in the area of rigorous QFT who seriously believes that 4d Yang-Mills doesn't exist. It's seen as about as unlikely as P=NP turning out to be true. But of course you are invited to submit your refutation of the remaining Millennium Problems and collect the 5 million dollars. We can talk about it again when I read about it in the news.

Loop quantum gravity provides a rigorous potential theory of quantum gravity coupled to all known standard model matter. It would really be helpful if you didn't randomly mention all these subjects that you clearly don't really understand as if it would be in favour of your argument.

Ilja said:
Yes, fine, but this is not all that Reichenbach's principle is about. The point is that the correlation should be explained by the common cause, in a quite precise probabilistic sense: ##P(A \wedge B \mid cc) = P(A \mid cc)\,P(B \mid cc)##.
Well, Reichenbach's principle needs to be rejected then, if it forces us to give up a perfectly satisfactory theory. It's not as if Reichenbach's principle is something that nature must necessarily obey. Nature can behave as she may, and we have to accept that. Religious beliefs like yours have no place in science.

Ilja said:
For those who don't like mathematics and formulas, this may be sufficient as an "explanation". Scientists usually have higher requirements. The tobacco industry would, again, be very happy if such a verbal description were all that scientists need to explain correlations.
Real scientists will give up a theory if it can't be rescued in a reasonable way. And Reichenbach's principle is such a theory.

Ilja said:
LOL. A big problem - the Lorentz ether interpretation coming back. And, no, the Lorentz ether does not need any conspiracy, this is an old fairy tale for schoolboys. The Poincare group is the symmetry group of a wave equation, and if everything follows the same wave equation, you obtain it automatically without any conspiracy.
Bohmian mechanics definitely needs a conspiracy to explain why the Lorentz violation cannot be observed (arXiv:1208.4119). Even the paper you quoted earlier comes to this conclusion. And introducing an ether with all its consequences when there is really no need for it is just not reasonable.

Ilja said:
I repeat, I have somewhat sharper criteria than you for the perfection of a theory. I have not yet seen any consistent Lorentz-covariant description of the measurement process - at least none comparable in clarity with the description of the measurement process in de Broglie-Bohm theory, which is clearly non-covariant.

Given that de Broglie-Bohm theory has had a free QED variant from its start in Bohm's article, one can compare it with your Lorentz-covariant form. I think that the latter is far from perfect, except in the quite trivial sense of perfection which simply removes all imperfect things from consideration.
Every phenomenon has an equivalent description in every inertial frame, and the descriptions are connected by Lorentz transformations. This is what Lorentz covariance means. If you don't agree that this is the definition of Lorentz covariance, then I'm wasting my time here.
 
  • #59
To me, the details of QFT or QM are not as relevant to the issue of locality/causality/etc. as the bare facts of the EPR correlations. The core question is whether Bell-inequality-violating correlations are somehow nonlocal (in the sense of SR).

The perfect correlations in EPR seem to imply a strong connection between distant experiments. As a correlation, it's nonlocal. Yet it doesn't violate SR's prohibition against FTL propagation of information. Those two facts together are strange: why doesn't it?

The distinction is between something I would call "factorizability" and "signal locality". Factorizability is just the claim that the evolution of distant subsystems proceeds independently. Roughly speaking, what happens at Alice's location should depend only on conditions local to Alice, and what happens at Bob's location should depend only on conditions local to Bob. That is, facts about Bob's situation shouldn't tell us anything about Alice's future results, unless those results are somehow determined by conditions local to Alice. EPR violates the principle of factorizability. But this violation is not visible in the equations of QFT or QM. Those equations are perfectly factorizable; it is only when you interpret the amplitudes as probability distributions for measurements that non-factorizability comes in.

Signal locality is weaker than factorizability, but in a strange way (or it seems strange to me). There is a failure of factorizability if knowing facts about Bob's situation reveals something about Alice's future results (in a way that local knowledge of Alice's situation doesn't). But Bob's situation has two sort-of independent components:

  1. Choices made by Bob.
  2. Choices made by "nature"--that is, random events.
Signal locality basically amounts to factorizability for Bob's choices. If all you know is what Bob's choices are, that'll tell you nothing about what's going to happen to Alice that couldn't be predicted using Alice's local conditions. So Bob's choices can't influence Alice's results.
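To make the distinction concrete, here is a small sketch (my own illustration, using only the textbook singlet-state probabilities, not anything from this thread) showing signal locality in action: Alice's marginal statistics come out the same no matter which setting Bob chooses, even though the joint distribution fails factorizability:

```python
import math

def singlet_joint(alice_angle, bob_angle):
    """Joint outcome probabilities for singlet-state spin measurements
    along the two given axes; only the relative angle matters.
    Textbook QM: P(same) = sin^2(theta/2)/2, P(opposite) = cos^2(theta/2)/2."""
    theta = alice_angle - bob_angle
    same = 0.5 * math.sin(theta / 2) ** 2
    opp = 0.5 * math.cos(theta / 2) ** 2
    return {(+1, +1): same, (-1, -1): same, (+1, -1): opp, (-1, +1): opp}

def alice_marginal(alice_angle, bob_angle):
    """P(Alice's outcome = +1), marginalising over Bob's outcome."""
    p = singlet_joint(alice_angle, bob_angle)
    return p[(+1, +1)] + p[(+1, -1)]

# Bob's *choice* of setting leaves Alice's local statistics untouched,
# so his choice cannot be used to signal to her:
for bob_angle in (0.0, 0.4, 1.3, math.pi / 2):
    print(alice_marginal(0.0, bob_angle))  # 0.5 every time
```

The marginal is sin^2 + cos^2 over two, i.e. always 1/2, which is exactly the "factorizability for Bob's choices" described above; conditioning on Bob's outcome as well (nature's "choice") is what breaks factorizability.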

This distinction between factorizability and signal locality causes philosophical problems for me, no matter what interpretation of "locality" you're using. For some people, signal locality is all that's important, so they're perfectly happy with saying QM is local (or is not nonlocal, to make a fine distinction). But I have problems with that. What is special about "choices made by agents"? Why should physics particularly care about those sources of unpredictability?

On the other hand, saying that the violation of Bell's inequality implies that nature is nonlocal is unsatisfying for other reasons. If interactions are nonlocal at the fundamental level (as they are in the Bohmian interpretation of QM), then why can't it be used for FTL signalling? I certainly understand the proof that it can't be, but it seems very ugly and suspicious to have a fundamental fact about the universe (such as the rest frame relative to which these nonlocal interactions are instantaneous) be inherently undetectable.
 
  • #60
But there does seem to be a sense in which Bob's setting and outcome in a particular trial *do* influence Alice's result in EPR-Bell. For example, consider the Mermin device (Mermin, N.D.: Bringing home the atomic world: Quantum mysteries for anybody. American Journal of Physics 49, 940-943 (1981)) where Bob chooses setting 1 and finds an outcome of R (say). As Mermin shows, it can't be the case that conditions local to Alice already existed for outcomes in each of the three possible settings (no CFD), so knowing Bob's setting and outcome tells you something that couldn't have been known otherwise before Alice actually chooses her setting and obtains her result.
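Mermin's counting argument can be checked by brute force. The sketch below (my own illustration; the 1/4 match rate for different settings is the standard quantum prediction for the device) enumerates all eight deterministic instruction sets, i.e. all CFD assignments, and shows that every one of them makes the two detectors match on at least 1/3 of the different-setting runs, which is incompatible with the quantum 1/4:

```python
from itertools import product

# Each "instruction set" pre-assigns a colour (R or G) to each of the
# three possible switch settings -- this is the CFD assumption.
instruction_sets = list(product("RG", repeat=3))  # 8 assignments

def match_rate_different_settings(instr):
    """Fraction of runs with *different* settings on the two sides in
    which both detectors flash the same colour, assuming both particles
    carry the same instruction set."""
    pairs = [(i, j) for i in range(3) for j in range(3) if i != j]
    matches = sum(instr[i] == instr[j] for i, j in pairs)
    return matches / len(pairs)

rates = [match_rate_different_settings(s) for s in instruction_sets]
print(min(rates))  # 1/3 for e.g. RRG -- but QM predicts 1/4
print(max(rates))  # 1.0 for RRR and GGG
```

Since every CFD assignment yields a different-setting match rate of at least 1/3, no pre-existing local assignment reproduces the observed 1/4, which is the point made above.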
 
  • #61
stevendaryl said:
This distinction between factorizability and signal locality causes philosophical problems for me, no matter what interpretation of "locality" you're using. For some people, signal locality is all that's important, so they're perfectly happy with saying QM is local (or is not nonlocal, to make a fine distinction). But I have problems with that. What is special about "choices made by agents"? Why should physics particularly care about those sources of unpredictability?
I agree that it would be much less mysterious if nature just behaved classically. But unfortunately she doesn't, and at some point we just have to accept it and adopt the most reasonable explanation. After taking all possibilities into consideration, I've personally come to the conclusion that we have to accept the fact that there is a peculiar thing like quantum probability theory that we just don't quite understand yet. The reason I think so is that it applies universally to every physical theory. If quantum probabilities weren't a thing, then why can we apply quantum theory to large effective systems without even knowing the actual details of the interactions? If there were a fundamental theory, then we wouldn't expect that simplifying it would still retain its structure as a quantum theory. But as a matter of fact, the quantum framework works nicely at all levels of complexity. I can imagine that one day we might even successfully apply it to models of economics. And economics clearly isn't a theory of physics.

(Of course, everyone is allowed to have their own opinion. I just don't accept it if people like Ilja force their personal prejudices upon others, especially if there is no evidence in favour of them.)
 
  • #62
rubi said:
I agree that one can also take the other point of view and I accept that people do so. I just wanted to explain that one doesn't need to and quantum theory can be a very satisfactory theory if one doesn't.

I think one should be clear that those who take the "other point of view" are not claiming that quantum theory is not a very satisfactory theory. In the same spirit that one can take QED to be a very satisfactory theory because of the Wilsonian effective theory point of view, one can also say that very satisfactory theories can themselves indicate their incompleteness and point towards theoretical opportunities.

rubi said:
Well, I would say that "nature" refers to nature and a theory is just a representation of some ideas about nature in the language of mathematics. We can't read off what nature is by looking at a mathematical theory.

I think your language is unusual. If you would like to just say quantum theory is not a theory about what nature is, but about what we can say about nature, i.e. predict the probabilities of outcomes, then that would be fine. But going on to say that quantum theory explains the observations is controversial. Usually, in the operational view the wave function is not taken to be real, and is just a tool. If the wave function is taken to be an explanation, then it is taken to be real, and collapse is real, and relativistic invariance is manifestly violated.

rubi said:
Pushing the measurement to the end means that you are describing the situation from the outside, i.e. you have a third observer. But as a matter of fact, Alice and Bob perform measurements at spacelike separated intervals and the results are consistent with the statistics that is predicted by the pre-collapsed state.

No, it means that Bob includes Alice as part of his classical apparatus and Alice includes Bob as part of her classical apparatus. So the measurement that is performed is the simultaneous measurement by Alice and Bob. However, using this method to avoid collapse will create a preferred frame, since it takes the frame in which Alice and Bob measure simultaneously. To avoid the preferred frame, one cannot accept the reality of measurements at spacelike intervals.

rubi said:
I believe that the majority of quantum physicists would agree that Reichenbach's criterion is too strong for application in quantum theory.

Yes, perhaps the precise statement of Reichenbach's principle might not be agreed on by everyone. However, Ilja is much closer to consensus than you are - there is no widely accepted notion of cause in which quantum theory explains the correlations.

rubi said:
Bohmian mechanics definitely needs a conspiracy to explain why the Lorentz violation cannot be observed (arXiv:1208.4119). Even the paper you quoted earlier comes to this conclusion. And introducing an ether with all its consequences when there is really no need for it is just not reasonable.

The paper makes separate comments about Valentini's version of Bohmian Mechanics.
 
  • #63
rubi said:
(Of course, everyone is allowed to have their own opinion. I just don't accept it if people like Ilja force their personal prejudices upon others, especially if there is no evidence in favour of them.)

I'm pretty sure Ilja's view is the common one, or at least the one that is closer to correct. The problem with your view is that you go beyond the view that the role of quantum theory is only to predict the correlations, which is the operational view of Bohr and Heisenberg; all who believe there is a measurement problem also agree the operational view is very satisfactory. But to go beyond that and say that quantum theory "explains" or is about "causes" and can maintain relativistic invariance is very controversial.
 
  • #64
atyy said:
Yes, that's a slightly more general version of what I said. In either case, there is no violation of the Bell inequalities at spacelike separation, so no implication of nonlocality via the Bell inequalities.
I disagree. Alice and Bob are spacelike separated. The violation occurs. They are both spacelike separated from Charles when the photons are detected. The violation occurs.

I suspect that what you mean is that the observation (to fix the results) has to wait until further down Charles' world-line, where he can receive their results. But whilst that may save locality, it forces us to assume that Charles' classical observation of Alice and Bob's classical data is what collapses their wavefunction(s). So Alice and Bob's lives are put on hold until their data intersect. Good job photons are pretty nifty so it's all over in a few microseconds, but I wonder how this would work with cold electrons where Alice and Bob remain in Schrodinger Cat states for half an hour? Perhaps we should ask them what it was like... oh I forgot, their memories get wiped at the same time as the forbidden data.
 
  • #65
Derek Potter said:
I disagree. Alice and Bob are spacelike separated. The violation occurs. They are both spacelike separated from Charles when the photons are detected. The violation occurs.

To be clear, here I always use Copenhagen, so measurement is something which produces a classical result.

If Alice and Bob measure at spacelike separation, one can choose a frame in which their measurements are not simultaneous. In which case, one will have collapse.

To get rid of collapse, one has to use the frame in which Alice and Bob measure simultaneously. However, that means that there is a preferred frame.

To get rid of collapse and to get rid of the preferred frame, one has to say that there is no reality to Alice's measurement at spacelike separation.

Derek Potter said:
I suspect that what you mean is that the observation (to fix the results) has to wait until further down Charles' world-line, where he can receive their results. But whilst that may save locality, it forces us to assume that Charles' classical observation of Alice and Bob's classical data is what collapses their wavefunction(s). So Alice and Bob's lives are put on hold until their data intersect. Good job photons are pretty nifty so it's all over in a few microseconds, but I wonder how this would work with cold electrons where Alice and Bob remain in Schrodinger Cat states for half an hour? Perhaps we should ask them what it was like... oh I forgot, their memories get wiped at the same time as the forbidden data.

Almost, except that Alice and Bob have no classical data. They don't really exist, and are just ghostly things in the wave function, which is not real. When Charles measures them, he observes the classical result that Alice and Bob report that they violated the Bell inequality at spacelike separation. However, only the report received by Charles is real; Alice and Bob and their experiments are not real.
 
  • #66
atyy said:
Almost, except that Alice and Bob have no classical data. They don't really exist, and are just ghostly things in the wave function, which is not real. When Charles measures them, he observes the classical result that Alice and Bob report that they violated the Bell inequality at spacelike separation. However, only the report received by Charles is real; Alice and Bob and their experiments are not real.
That will be news to Alice and Bob. And I thought the LSD-dropping hippies were weird.
 
  • #67
atyy said:
To be clear, here I always use Copenhagen, so measurement is something which produces a classical result.
If Alice and Bob measure at spacelike separation, one can choose a frame in which their measurements are not simultaneous. In which case, one will have collapse.
To get rid of collapse, one has to use the frame in which Alice and Bob measure simultaneously. However, that means that there is a preferred frame.
I don't think simultaneity solves anything except making it much harder to think about. The wavefunction collapses under two observations: in stages if Alice and Bob stagger their observations, in one step if they are simultaneous.
atyy said:
To get rid of collapse and to get rid of the preferred frame, one has to say that there is no reality to Alice's measurement at spacelike separation.
No *classical* reality. But we know this anyway, even without a preferred frame. And we are not trying to get rid of collapse because we are in love with MWI; we need to postpone it, otherwise Bob's detector is being affected by an event at Alice which, in some frames, hasn't even happened yet.
 
  • #68
atyy said:
I think your language is unusual. If you would like to just say quantum theory is not a theory about what nature is, but what we can say about nature, ie. predict the probabilities of outcomes, then that would be fine. But going on to say that quantum theory explains the observations is controversial. Usually, in the operational view the wave function is not taken to be real, and just a tool. If the wave function is taken to be an explanation, then it is taken to be real, and collapse is real, and relativistic invariance is manifestly violated.
I pretty much completely agree with the operational view. I might just have a different standard for what I consider a possible explanation. For me, a theory that describes every aspect of a phenomenon accurately is already a possible explanation. You seem to additionally require an explanation to be philosophically pleasing. I also prefer philosophically pleasing models, but for me it is not a necessary condition for an explanation.

No, it means that Bob includes Alice as part of his classical apparatus and Alice includes Bob as part of her classical apparatus. So the measurement that is performed is the simultaneous measurement by Alice and Bob. However, using this method to avoid collapse will create a preferred frame, since it takes the frame in which Alice and Bob measure simultaneously. To avoid the preferred frame, one cannot accept the reality of measurements at spacelike intervals.
I'm pretty sure that if you are not going to collapse the state anyway, i.e. you are just using it as a tool that encodes available information, you can just apply a Lorentz transform to it to get an equivalent description in any other inertial frame. The unitarity of the transformation ensures that all predictions must be equivalent.
But my point really wasn't about a no-collapse interpretation. What I'm saying is that even in plain Copenhagen with collapse, the probabilities that lead to a Bell inequality violation are calculated using only the pre-collapsed state, so it is really the entanglement and not the collapse, which causes the violation.

Yes, perhaps the precise statement of Reichenbach's principle might not be agreed on by everyone. However, Ilja is much closer to consensus than you are - there is no widely accepted notion of cause in which quantum theory explains the correlations.
This is not what I meant to imply. I agree that it is uncommon to regard the preparation procedure as the cause of the correlations. What I'm saying is that I'm fairly sure that the majority of physicists don't know Reichenbach's principle and will reject it as soon as you tell them that it implies Lorentz violation, the exception being the rather negligible group of Bohmians.

The paper makes separate comments about Valentini's version of Bohmian Mechanics.
Ok, but Valentini's version seems to be a version that actually dares to make experimental predictions that contradict conventional quantum mechanics. I happily encourage this kind of research, since it may actually lead to an expansion of our understanding.

atyy said:
I'm pretty sure Ilja's view is the common one, or at least the one that is closer to correct. The problem with your view is that you go beyond the view that the role of quantum theory is only to predict the correlations, which is the operational view of Bohr and Heisenberg, and all who believe there is a measurement problem also agree the operational view is very satisfactory. But to go beyond that and say that quantum theory "explains" or is about "causes" and can maintain relativistic invariance is very controversial.
I explained above what my standard for an admissible explanation is. I'm not forcing anyone to adopt the same standard. However, I don't think that it is controversial to say that relativistic quantum theories maintain Lorentz invariance.

atyy said:
Almost except that Alice and Bob have no classical data. They don't really exist, and are just ghostly things in the wave function which is not real.
I don't think that one is forced to adopt such a point of view. After all, the wave function may just be a container for information about statistics of repeated identically prepared experiments.
 
  • #69
rubi said:
Well, Reichenbach's principle needs to be rejected then, if it forces us to give up a perfectly satisfactory theory. It's not like Reichenbach's principle is something that nature must necessarily obey. Nature can behave as she may and we have to accept that.
Real scientists will give up a theory if it can't be rescued in a reasonable way. And Reichenbach's principle is such a theory.
No. It corresponds nicely with de Broglie-Bohm theory. That you don't like a theory does not make it unreasonable.
rubi said:
Bohmian mechanics definitely needs a conspiracy to explain why the Lorentz violation cannot be observed (arXiv:1208.4119).
Big problem. Ok, I do not say that a fine-tuning problem is not a problem at all - it is an interesting problem worth considering, because the solution of this problem will probably give some additional insight, for example some symmetry.

But the problem in this paper does not have much to do with Lorentz symmetry - it is a general problem of a superposition: If one measures one part, the reduction of the wave function gives information about what has been measured as well as the result of the measurement - but this information remains invisible in the probabilities. Thus, the same problem appears also in non-relativistic QM if one applies the same technique.

And, it appears, dBB solves it - it is the "conspiracy" defined by quantum equilibrium, which creates a 1:1 correspondence between the probability distribution of the configuration and the wave function.

PS: Having finished reading the paper, I see that this has already been recognized in the paper itself, in the part where Valentini's variant is considered.
rubi said:
And introducing an ether with all its consequences when there is really no need for it is just not reasonable.
What would these horrible consequences be, such that it is preferable to give up essential fundamental concepts like Reichenbach's principle?

rubi said:
Religious beliefs like yours have no place in science.
...
It would really be helpful if you didn't randomly mention all these subjects that you clearly don't really understand as if it would be in favour of your argument.
...
If you don't agree that this is the definition of Lorentz covariance, then I'm wasting my time here.
One should indeed consider whether it is only a waste of time to have discussions with people who behave in such a way, so I have deleted the answers to the remaining points, leaving only those where I'm interested enough to find out whether you have some arguments or not.
 
  • #70
rubi said:
It is well known that QM doesn't satisfy Bell's locality criterion ("Einstein causality"). That doesn't mean it is not causal. The cause for the correlations is of course that the quantum system has been prepared in a state that results in those correlations. If you prepare the system in a different state, you will not see the same correlations. This is a perfect cause and effect relationship.

I don't think that that's the full story, for the same reason that Einstein, Rosen and Podolsky said. Consider an EPR-type twin-pair experiment where Alice and Bob decide ahead of time to choose the same orientation for their spin (or polarization) measurements. Furthermore, suppose that Alice performs her measurement slightly before Bob performs his. Then in the time between those measurements, Alice knows for certain what result Bob will get. So, if Alice were to describe the state of affairs near Bob's detector, then she would have to describe it by a density matrix, or probability distribution, or wave function, or whatever that gave 100% chance for certain outcomes and 0% chance for other outcomes. That's a different state than she would have used a moment before. So the state that she ascribes to Bob's detector/particle changes discontinuously.

That doesn't mean anything nonlocal is going on, if the state that Alice ascribes to Bob is subjective. But it isn't just subjective. Whether anyone else besides Alice knows it, it is certain what result Bob will get (unless you want to get many-worlds about it).
 
  • #71
rubi said:
I pretty much completely agree with the operational view. I might just have a different standard for what I consider a possible explanation. For me, a theory that describes every aspect of a phenomenon accurately, is already a possible explanation. You seem to additionally require an explanation to be philosophically pleasing. I also prefer philosophically pleasing models, but for me it is not a necessary condition for an explanation.

I'm not requiring anything pleasing. In fact, I think dBB is very ugly and Copenhagen is very beautiful. What I'm saying is that in the ordinary use of the word, an explanation or a cause must be something real. So if one considers the wave function to be an explanation or a cause, then one is considering it to be real. Almost everyone agrees that if the wave function is real, then there is manifest violation of Lorentz invariance, which can be particularly clearly seen by the wave function collapse.

rubi said:
I'm pretty sure that if you are not going to collapse the state anyway, i.e. you are just using it as a tool that encodes available information, you can just apply a Lorentz transform to it to get an equivalent description in any other inertial frame. The unitarity of the transformation ensures that all predictions must be equivalent.
But my point really wasn't about a no-collapse interpretation. What I'm saying is that even in plain Copenhagen with collapse, the probabilities that lead to a Bell inequality violation are calculated using only the pre-collapsed state, so it is really the entanglement and not the collapse, which causes the violation.

Yes, I understood that. I was just making a minor side point.

rubi said:
This is not what I meant to imply. I agree that it is uncommon to regard the preparation procedure as the cause of the correlations. What I'm saying is that I'm fairly sure that the majority of physicists don't know Reichenbach's principle and will reject it as soon as you tell them that it implies Lorentz violation, the exception being the rather negligible group of Bohmians.

The more usual way to say it in physics that I don't think is controversial is
(1) QM has a measurement problem (of course one can deny this, but many do not, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, all Everettians etc)
(2) The measurement problem can potentially be solved by hidden variables
(3) Bell's theorem says that any hidden variable solution of the measurement problem will be nonlocal.

rubi said:
Ok, but Valentini's version seems to be a version that actually dares to make experimental predictions that contradict conventional quantum mechanics. I happily encourage this kind of research, since it may actually lead to an expansion of our understanding.

Yes, of course, the whole point of the measurement problem is that it potentially points to new physics - Dirac explicitly says this. I'm pretty sure Ilja is thinking of Valentini's version of dBB when he says dBB, I think most people do.

rubi said:
I explained above what my standard for an admissible explanation is. I'm not forcing anyone to adopt the same standard. However, I don't think that it is controversial to say that relativistic quantum theories maintain Lorentz invariance.

It isn't controversial that the predictions of quantum theory are Lorentz invariant, ie. at the operational level. But beyond that looking for QM to "explain", then one runs into problems with Lorentz invariance.

rubi said:
I don't think that one is forced to adopt such a point of view. After all, the wave function may just be a container for information about statistics of repeated identically prepared experiments.

Of course one is not forced to adopt such a point of view, I was just bringing up the minor side point that one can do so and save locality (EPR themselves mentioned this).
 
  • #72
rubi said:
I might just have a different standard for what I consider a possible explanation. For me, a theory that describes every aspect of a phenomenon accurately is already a possible explanation. You seem to additionally require an explanation to be philosophically pleasing. I also prefer philosophically pleasing models, but for me it is not a necessary condition for an explanation.
In principle, ok. I would make only the methodological point that having more open scientific problems is in no way harmful. On the contrary, it is even preferable, because these are more places where we can make progress.

Instead, "solving" problems by accepting what has been reached as satisfactory is not a good idea. If people had accepted classical thermodynamics simply as a field theory, as we accept the SM today, we might not even know today that atoms exist.
rubi said:
What I'm saying is that I'm fairly sure that the majority of physicists don't know Reichenbach's principle and will reject it as soon as you tell them that it implies Lorentz violation, the exception being the rather negligible group of Bohmians.
Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see a rational explanation. What remains is some quasi-religious belief: the ether is simply anathema.
 
  • #73
Ilja said:
Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see a rational explanation. What remains is some quasi-religious belief: the ether is simply anathema.

rubi made an exception for Valentini in his comments in post #68.
 
  • #74
atyy said:
rubi made an exception for Valentini in his comments in post #68.
I have seen it. And the point is quite a reasonable one:

Ok, but Valentini's version seems to be a version that actually dares to make experimental predictions that contradict conventional quantum mechanics. I happily encourage this kind of research, since it may actually lead to an expansion of our understanding.
This is what I say all the time: the point of considering different interpretations is that they are starting points for different directions of development of the theory. It is quite natural that an interpretation leads to problems, and the way to solve problems is to modify the theory. Thus, the positivistic rejection of the consideration of interpretations is harmful for science, because it kills whole directions of possible theory development.

I have developed my ether theories following the same scheme. I started with an interpretation of GR. Then I solved a problem - that there was no Euler-Lagrange equation for the preferred coordinates - and the result was already a theory different from GR: http://arxiv.org/abs/gr-qc/0205035

Or for the standard model - initially all I wanted was to obtain the SM. The only way I managed to incorporate Dirac fermions was in pairs (interpreted as electroweak) together with a scalar field. Thus, the ether model http://arxiv.org/abs/0908.0591 already contains some CDM candidates.

And in http://arxiv.org/abs/1101.5774 I consider the Wallstrom objection against some interpretations of QM, in particular Nelsonian stochastics. And it seems a solution of this problem can also be found by modifying the theory.
 
  • #75
Ilja said:
No. It corresponds nicely with de Broglie-Bohm theory. That you don't like a theory does not make it unreasonable.
I didn't say that it was a priori unreasonable. I'm just saying that it is a perfectly valid point of view to prefer its rejection over the alternatives. Reichenbach's principle may or may not be realized by nature. We must take the alternative seriously, especially if it leads to a simpler theory.

Big problem. Ok, I do not say that a fine-tuning problem is not a problem at all - it is an interesting problem worth considering, because the solution of this problem will probably give some additional insight, for example some symmetry.

But the problem in this paper does not have much to do with Lorentz symmetry - it is a general problem of a superposition: If one measures one part, the reduction of the wave function gives information about what has been measured as well as the result of the measurement - but this information remains invisible in the probabilities. Thus, the same problem appears also in non-relativistic QM if one applies the same technique.

And, it appears, dBB solves it - it is the "conspiracy" defined by quantum equilibrium, which creates a 1:1 correspondence between the probability distribution of the configuration and the wave function.

PS: Having finished reading the paper, I see that this has already been recognized in the paper itself, in the part where Valentini's variant is considered.
This is not what I meant. My point is that Bohmian mechanics predicts the existence of many additional entities and shields them from the observer in such a way that he cannot predict more about them than ordinary quantum theory can. It includes action at a distance but doesn't allow superluminal communication. It is hard to believe that nature has such a rich ontology, yet an observer cannot access any of the additional information or use the action at a distance for superluminal communication. Since there is already a theory that works without any such assumptions and is at the same time simpler to use, I find it natural to reject the Bohmian theory. I acknowledge that Valentini's theory allows for superluminal communication and deserves to be tested, but it still seems much more convoluted than ordinary QM for me to jump onto it before experiments disprove ordinary QM.

What would these horrible consequences be, such that it is preferable to give up essential fundamental concepts like Reichenbach's principle?
I don't consider Reichenbach's principle essential or fundamental. It is just one possible principle that may or may not be true, and it doesn't seem like we gain much by accepting it. On the other hand, our best theories are all relativistically covariant, and it would be a big problem to explain why all our theories are relativistically covariant when nature really isn't. I would rather accept the violation of a principle that doesn't need to be realized anyway than overthrow basically all of modern physics, especially if there is no evidence that anything can be gained by that.

One should indeed consider whether it is only a waste of time to have discussions with people who behave in such a way, so I have deleted the answers to the remaining points, leaving only those where I'm interested enough to find out whether you have some arguments or not.
You were the one who started being disrespectful.

stevendaryl said:
I don't think that that's the full story, for the same reason that Einstein, Rosen and Podolsky said. Consider an EPR-type twin-pair experiment where Alice and Bob decide ahead of time to choose the same orientation for their spin (or polarization) measurements. Furthermore, suppose that Alice performs her measurement slightly before Bob performs his. Then in the time between those measurements, Alice knows for certain what result Bob will get. So, if Alice were to describe the state of affairs near Bob's detector, then she would have to describe it by a density matrix, or probability distribution, or wave function, or whatever that gave 100% chance for certain outcomes and 0% chance for other outcomes. That's a different state than she would have used a moment before. So the state that she ascribes to Bob's detector/particle changes discontinuously.

That doesn't mean anything nonlocal is going on, if the state that Alice ascribes to Bob is subjective. But it isn't just subjective. Whether anyone else besides Alice knows it, it is certain what result Bob will get (unless you want to get many-worlds about it).
Alice is performing a Bayesian updating of her knowledge. It doesn't influence Bob's state as he is using his own copy of the state for the description of the system and it produces consistent statistics.

atyy said:
What I'm saying is that in the ordinary use of the word, an explanation or a cause must be something real. So if one considers the wave function to be an explanation or a cause, then one is considering it to be real. Almost everyone agrees that if the wave function is real, then there is manifest violation of Lorentz invariance, which can be particularly clearly seen by the wave function collapse.
The explanation isn't the wave function, but the preparation procedure. If I prepare a system in a state that can only produce 100% correlations (given a certain alignment of the detectors), I shouldn't be surprised if I find 100% correlations. One can still consider the wave-function as a container of information. Yes, I agree that this still violates Reichenbach's principle, but if I register 100% correlation if and only if I prepare the system in a state that can only produce 100% correlation, then it seems hard for me to deny a cause and effect relationship here, even if it is not consistent with Reichenbach's principle. It would rather give me the impression that Reichenbach's principle doesn't fully capture the notion of causality.

The more usual way to say it in physics that I don't think is controversial is
(1) QM has a measurement problem (of course one can deny this, but many do not, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, all Everettians etc)
(2) The measurement problem can potentially be solved by hidden variables
(3) Bell's theorem says that any hidden variable solution of the measurement problem will be nonlocal.
I agree that this is uncontroversial, but it cannot be used to argue against locality.

Ilja said:
In principle, ok. I would make only the methodological point that having more open scientific problems is in no way harmful. On the contrary, it is even preferable, because these are more places where we can make progress.

Instead, "solving" problems by accepting what has been reached as satisfactory is not a good idea. If people had accepted classical thermodynamics simply as a field theory, as we accept the SM today, we might not even know today that atoms exist.
I have no problem with research in that direction. But if you demand that your views be respected by others, you should also respect other people's views.

Here I agree. And I see this as evidence that, contrary to your claim, religious beliefs have a very big place in science. The rejection of a hidden preferred frame is more or less acceptable, everything else being equal. But if one prefers it even when one has to give up such fundamental scientific principles as causality and realism, I'm no longer able to see a rational explanation. What remains is some quasi-religious belief: the ether is simply anathema.
There are perfectly good reasons to reject an ether theory if it doesn't yield any immediate gain while only making the theory more complicated. I'm sure that if you managed to describe some new experimental result with Bohmian mechanics, people would switch to your theory just as quickly as they rejected Reichenbach's principle.
 
  • #76
rubi said:
Alice is performing a Bayesian updating of her knowledge.

That's the subjective view that I was talking about, and I don't think that that makes sense. If it's just an updating of her subjective knowledge, then whatever she discovers to be true about Bob's situation by performing her measurement was also true (although she didn't know it) BEFORE her measurement.
 
  • #77
stevendaryl said:
I don't think that that's the full story, for the same reason that Einstein, Rosen and Podolsky said. Consider an EPR-type twin-pair experiment where Alice and Bob decide ahead of time to choose the same orientation for their spin (or polarization) measurements. Furthermore, suppose that Alice performs her measurement slightly before Bob performs his. Then in the time between those measurements, Alice knows for certain what result Bob will get. So, if Alice were to describe the state of affairs near Bob's detector, then she would have to describe it by a density matrix, or probability distribution, or wave function, or whatever that gave 100% chance for certain outcomes and 0% chance for other outcomes. That's a different state than she would have used a moment before. So the state that she ascribes to Bob's detector/particle changes discontinuously.

That doesn't mean anything nonlocal is going on, if the state that Alice ascribes to Bob is subjective. But it isn't just subjective. Whether anyone else besides Alice knows it, it is certain what result Bob will get (unless you want to get many-worlds about it).

The correlations at multiples of 45 degrees, which include "same orientation" of course, can be explained by local variables in a classical sort of way like the red and blue socks.
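The "socks" point can be made concrete with a small simulation. The following is my own sketch of one local hidden-variable model (the hidden angle `lam` and the sign rules are illustrative assumptions, not anything specified in the thread): the source hands each pair a single shared random angle, and each side computes its outcome locally from that angle and its own setting.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Red and blue socks" toy model (a sketch): the source gives each
# pair one hidden angle lam; Alice and Bob each output +/-1
# deterministically from their own setting and lam, with Bob's rule
# the negation of Alice's.
def correlation(a, b, n=200_000):
    lam = rng.uniform(0, 2 * np.pi, n)
    A = np.where(np.cos(a - lam) >= 0, 1, -1)
    B = -np.where(np.cos(b - lam) >= 0, 1, -1)
    return (A * B).mean()

# Equal settings: perfect anti-correlation on every run, with no
# nonlocality anywhere.
print(correlation(0.0, 0.0))        # -1.0

# At intermediate angles the model gives the linear correlation
# E = -(1 - 2|a - b|/pi) rather than the quantum -cos(a - b).
print(correlation(0.0, np.pi / 4))  # roughly -0.5; QM predicts about -0.71
```

So such a model reproduces the perfect anti-correlation case exactly, but its linear angle dependence is precisely what CHSH-type experiments rule out, which is the point made in the replies that follow.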
 
  • #78
Derek Potter said:
The correlations at multiples of 45 degrees, which include "same orientation" of course, can be explained by local variables in a classical sort of way like the red and blue socks. edit - hang on a tick, I'm just adding something :)

But Bell's inequality shows that that explanation isn't true.
 
  • #79
Since this thread seems to have morphed into a discussion of whether a principle proposed by Hans Reichenbach - a philosopher of science - is believed by most physicists, it seems it would now more appropriately belong in the philosophy sub-forum. Does physicsforums still have one?

Practising physicists can practice perfectly well with or without a belief in Reichenbach's 'common cause'. Since (1) science is not a democracy and (2) such a belief has no impact on the actual science, guessing about whether a majority believe in such a principle seems pretty irrelevant. One might as well ask whether a majority of physicists believe in God.

I'd also like to point out that people will expostulate for ages on 'causes' without ever pausing to think whether they really know what they mean by the word 'cause'. It is one of the most vague, misused and unnecessary words in the philosophical lexicon, and discussions become much clearer if we discard it altogether.

Here's an essay from Bertrand Russell pointing this out, from about 100 years ago:
http://www.jstor.org/stable/4543833?seq=1#page_scan_tab_contents
http://www.scribd.com/doc/269810250/Russell-On-the-Notion-of-Cause#scribd [alternate link]

And here's one I wrote a few years ago that made a similar point (I wasn't aware of Russell's essay at the time). It proposes a formal definition that I think is both well-defined and matches reasonably well the naive, folk notion of cause.
https://sageandonions.wordpress.com...-to-distil-clarity-from-a-very-muddy-concept/
 
  • #80
rubi said:
The explanation isn't the wave function, but the preparation procedure. If I prepare a system in a state that can only produce 100% correlations (given a certain alignment of the detectors), I shouldn't be surprised if I find 100% correlations. One can still consider the wave-function as a container of information. Yes, I agree that this still violates Reichenbach's principle, but if I register 100% correlation if and only if I prepare the system in a state that can only produce 100% correlation, then it seems hard for me to deny a cause and effect relationship here, even if it is not consistent with Reichenbach's principle. It would rather give me the impression that Reichenbach's principle doesn't fully capture the notion of causality.

But it is not true that you register 100% correlation if and only if you prepare the state in a certain way. The measurement settings of Alice and Bob are also needed to get the 100% correlation.

rubi said:
I agree that this is uncontroversial, but it cannot be used to argue against locality.

Why not, since you do agree that any hidden-variables approach to solving the measurement problem must be nonlocal (or retrocausal or superdeterministic, with the other usual caveats)? Unless you are using another definition of local, i.e. no superluminal signalling? That is fine, and quantum theory is certainly local by that operational definition. But in trying to solve the measurement problem, we have to go beyond operational quantum theory, in which case it is the locality of classical relativity that is important, since that is a version of relativity that does not have a measurement problem.

My point of view is that there are two notions of locality, and Bell's theorem is important for both of them. For the operational point of view locality means no superluminal signalling, and Bell's theorem guarantees that quantum mechanics is operationally random if no one can signal superluminally.

From the point of view of the measurement problem, the notion of locality is classical relativistic causality, since that is a version of special relativity without a measurement problem. Here Bell's theorem guarantees that that is gone, so we have to solve the measurement problem by nonlocal hidden variables, retrocausation, many-worlds, superdeterminism or something more drastic.

So quantum theory is both local and nonlocal, according to different definitions of locality.
 
  • #81
stevendaryl said:
But Bell's inequality shows that that explanation isn't true.
The CHSH inequality is
-2 <= E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) <= +2
I don't know how you can apply the CHSH inequality when you only specify a single pair of angles - the same angle at that.
On a simple level we would set a = b = a' = b'. Alice's observation of perfect anti-correlation then gives a value of -2, which does not violate the inequality.
If you have some other way of plugging numbers into the expression, I'd like to know what it is.
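For concreteness, here is a short sketch of the arithmetic (my own, under the standard assumption that the pair is in the singlet state, so E(a, b) = -cos(a - b); the four "optimal" angles are the textbook CHSH settings, not anything specified by the posters):

```python
import numpy as np

# Singlet-state correlation for analyzer angles a, b in radians:
# E(a, b) = -cos(a - b), the standard QM prediction.
def E(a, b):
    return -np.cos(a - b)

# The CHSH combination quoted above: E(a,b) - E(a,b') + E(a',b) + E(a',b')
def chsh(a, ap, b, bp):
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Degenerate case from the post: a = b = a' = b' gives S = -2, which
# sits exactly on the classical bound |S| <= 2 and violates nothing.
print(chsh(0, 0, 0, 0))                              # -2.0

# Textbook settings a = 0, a' = pi/2, b = pi/4, b' = 3*pi/4 give
# S = -2*sqrt(2), beyond the classical bound (Tsirelson's bound).
print(chsh(0, np.pi / 2, np.pi / 4, 3 * np.pi / 4))  # about -2.828
```

So it is the choice of four distinct settings, not the perfect-anti-correlation case, that produces a violation.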
 
  • #82
andrewkirk said:
Since this thread seems to have morphed into a discussion of whether a principle proposed by Hans Reichenbach - a philosopher of science - is believed by most physicists, it seems it would now more appropriately belong in the philosophy sub-forum.

No, that is actually incidental to the discussion. The real issue is whether the only definition of locality that matters is "no superluminal signalling". No one is arguing that that is not an important sense, nor that quantum mechanics is not local by that definition. What is being argued is that that is not the only definition of locality that matters, because quantum mechanics has a measurement problem - one is certainly entitled to say that there is no measurement problem - however, many physicists, including Landau & Lifshitz, Dirac, Weinberg, Bell, Tsirelson, and all Everettians have agreed there is a problem.
 
  • #83
Derek Potter said:
The CHSH inequality is
-2 <= E(a, b) − E(a, b′) + E(a′, b) + E(a′, b′) <= +2
I don't know how you can apply the CHSH inequality when you only specify a single pair of angles - the same angle at that.

I don't understand your point. We've already proved (or Bell already proved) that EPR violates Bell's inequality. So why do I need to prove it again?
 
  • #84
stevendaryl said:
I don't understand your point. We've already proved (or Bell already proved) that EPR violates Bell's inequality. So why do I need to prove it again?

To elaborate a little more, you're right, in the case in which Alice and Bob agree ahead of time to use the same orientation, that particular experiment has a local hidden-variables explanation for its result. But we have other experiments that have shown that there are no local hidden variables involved. It doesn't make sense to explain a result in terms of some hypothetical entities (local hidden variables) which OTHER experiments have shown do not exist.
 
  • #85
andrewkirk said:
And here's one I wrote a few years ago that made a similar point (I wasn't aware of Russell's essay at the time). It proposes a formal definition that I think is both well-defined and matches reasonably well the naive, folk notion of cause.
https://sageandonions.wordpress.com...-to-distil-clarity-from-a-very-muddy-concept/

That's an interesting essay. Do you know http://www.cs.ucla.edu/~eb/r384-lnai.pdf?
 
  • #86
bhobba said:
I gave a model that specifically rejects counter-factual definiteness and predicts the violation of Bell's inequality. Obviously your assertion is wrong.

Oh - I nearly forgot to mention - I make no claim about locality because I don't believe locality applies to correlated systems. But if you do, by a suitable definition of locality, you can reject CFD and keep locality.
My requirement was that the model has to be local. As you don't like the word "locality" (or like to associate an uncommon concept with this word), let me replace "locality" with "factorizability".
 
  • #87
stevendaryl said:
To elaborate a little more, you're right, in the case in which Alice and Bob agree ahead of time to use the same orientation, that particular experiment has a local hidden-variables explanation for its result. But we have other experiments that have shown that there are no local hidden variables involved. It doesn't make sense to explain a result in terms of some hypothetical entities (local hidden variables) which OTHER experiments have shown do not exist.
OK, but the point is obscured if you illustrate non-locality with an example where non-locality is not needed :)
 
  • #88
RUTA said:
Here is an example http://www.ijqf.org/archives/2402. Also, note that it is a realist theory without CFD.
There are a number of things I'd question about your proposed model and the RB interpretation in general.
First, the entanglement model is not worked out. On pp. 154-155 the setup is described, and then, when it would be time to introduce a particular configuration of "spacetimesource elements" and show how one arrives at the expected result, there is some handwaving instead.
Second, there was a requirement that the model has to be local (factorizable, in case bhobba reads this). But as I understand it, the relations that are fundamental in this model are non-local, right?
Third, to me the AGC seems like a cheat (read: a non-scientific explanation). Is there some motivation why it is reasonable to introduce the AGC?
And fourth, to me it seems that switching from worldlines to relations as fundamental entities is such a philosophically fundamental and extremely radical change that it steps out of the domain of science.
 
  • #89
rubi said:
I didn't say that it was a priori unreasonable. I'm just saying that it is a perfectly valid point of view to prefer its rejection over the alternatives. Reichenbach's principle may or may not be realized by nature. We must take the alternative seriously, especially if it leads to a simpler theory.
If the only gain in simplicity is similar to what thermodynamics would have gained by rejecting the atomic hypothesis and being interpreted as a field theory, I would disagree that a "simpler theory" is an advantage. It would be better to care about predictive and explanatory power. But I would not question that preferring an alternative is something which should be allowed in science.

rubi said:
My point is that Bohmian mechanics predicts the existence of many additional entities and shields them from the observer in such a way that he cannot predict more about them than ordinary quantum theory can. It includes action at a distance but doesn't allow superluminal communication. It is hard to believe that nature has such a rich ontology, yet an observer cannot access any of the additional information or use the action at a distance for superluminal communication. Since there is already a theory that works without any such assumptions and is at the same time simpler to use, I find it natural to reject the Bohmian theory. I acknowledge that Valentini's theory allows for superluminal communication and deserves to be tested, but it still seems much more convoluted than ordinary QM for me to jump onto it before experiments disprove ordinary QM.
I do not think the dBB ontology is nice. But if we combine the Copenhagen interpretation with the idea that there exists a unique theory of everything, we cannot really avoid having a configuration also for the quantum part. We have access to this part of the structure in the classical part. What we can try to get rid of is the wave function part, which may be epistemological.

rubi said:
I don't consider Reichenbach's principle essential or fundamental. It is just one possible principle that may or may not be true and it doesn't seem like we gain much by accepting it. On the other hand, our best theories are all relativistically covariant and it would be a big problem to explain why all our theories are relativistically covariant when nature really isn't relativistically covariant.
That's a minor problem, and already solved in http://arxiv.org/abs/gr-qc/0205035, at least for the classical part: the EEP is derived there for a non-covariant theory.

rubi said:
I rather accept the violation of a principle that doesn't need to be realized anyway, than overthrowing basically all of modern physics, especially if there is no evidence that anything can be gained by that.
There is IMHO nothing important which has to be overthrown, except some metaphysical prejudices against hidden variables. The ether theory of http://arxiv.org/abs/gr-qc/0205035 has the EEP and the Einstein equations as a limit, and its most serious differences with GR disappear if one chooses Y<0, which gives only four massless dark matter fields and some arbitrary small cosmological terms as the remaining difference.

The SM does not have to be overthrown either - http://arxiv.org/abs/0908.0591 is the only theory I know of which actually predicts the three generations of SM fermions, the SM gauge group, and its action on the fermions, which, I think, is a gain. But, of course, you may find the string theory landscape more attractive.

So, what are the things which have to be overthrown?
rubi said:
You were the one who started being disrespectful.
I have different memories, but let's forget it.
rubi said:
There are perfectly good reasons to reject an ether theory if it doesn't yield any immediate gain, while only making the theory more complicated. I'm sure that if you managed to describe some new experimental result with Bohmian mechanics, people would switch to your theory just as quickly as they rejected Reichenbach's principle.
Sorry, but the gains which can be reached with the ether can be easily seen. The problems of quantizing gravity essentially disappear - we know how to quantize condensed matter theories. The problem of explaining why the SM is what it is is solved in an IMHO satisfactory way too, even if the model does not allow one to compute the masses. Bohmian mechanics does not have a measurement problem. My ether theories have been published for some years already, but are simply ignored. So, no, I don't believe anymore that people would switch to an ether theory if it gave some gains. (Not in a world where you need an independent income to do independent research, because you can be sure that it will be extremely hard to publish anything and you will never obtain a grant for this.)

And, of course, the question is: what was the immediate gain of the atomic hypothesis? How long was atomic theory developed before it managed to obtain a new experimental result?
 
  • #90
zonde said:
My requirement was that the model has to be local. As you don't like the word "locality" (or like to associate an uncommon concept with this word), let me replace "locality" with "factorizability".

It's compatible with the definition of locality, i.e. from the previously linked paper:
'Let us define a “local” theory as a one where the outcomes of an experiment on a system are independent of the actions performed on a different system which has no causal connection with the first. For example, the temperature of this room is independent on whether I choose to wear purple socks today. Einstein’s relativity provides a stringent condition for causal connections: if two events are outside their respective light cones, there cannot be any causal connection among them.'

I don't deny the existence of locality - I am simply saying it doesn't apply to correlated systems, because by the definition of correlation, if things are correlated, what is done in one system is related to what goes on in the other. Include it in locality if you like. The issue, however, is that if you want to keep counter-factual definiteness you must allow superluminal signalling, which specifically makes it non-local:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm
'But there was a price to pay for such this experimental setup: we must add a SECOND assumption. That assumption is: A measurement setting for one particle does not somehow affect the outcome at the other particle if those particles are space-like separated. This is needed because if there was something that affected Alice due to a measurement on Bob, the results would be skewed and the results could no longer be relied upon as being an independent measurement of a second attribute. This second assumption is called "Bell Locality" and results in a modification to our conclusion above. In this modified version, we conclude: the predictions of any LOCAL Hidden Variables theory are incompatible with the predictions of Quantum Mechanics. Q.E.D. '

Thanks
Bill
 
  • #91
bhobba said:
I don't deny the existence of locality - I am simply saying it doesn't apply to correlated systems, because by the definition of correlation, if things are correlated, what is done in one system is related to what goes on in the other. Include it in locality if you like. The issue, however, is that if you want to keep counter-factual definiteness you must allow superluminal signalling, which specifically makes it non-local:
http://drchinese.com/David/Bell_Theorem_Easy_Math.htm
It's the reverse - if you want to reject counter-factual definiteness (given that it leads to the BI, which are violated), you must allow superluminal causal influences.

And the article does not get the main point of the EPR argument:
EPR also said that since it is "unreasonable" to believe that these particle attributes require observation to become real, therefore Hidden Variables must exist. Einstein said: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it." This second part of EPR was accepted by some, and rejected by others including Bell.
No, once we accept the EPR criterion of reality, accept also the observable 100% anti-correlation, and accept Einstein causality, we can prove that the particle has a predetermined spin in all directions, so that we do not have to rely on vague philosophical "I think" feelings.
 
  • #92
atyy said:
That's an interesting essay. Do you know http://www.cs.ucla.edu/~eb/r384-lnai.pdf?
Thank you atyy. I am not familiar with that Bayesian paper but it looks interesting. I've added it to my reading list.

Andrew
 
  • #93
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (given that it leads to the BI, which are violated), you must allow superluminal causal influences.

That's wrong - Bell says you can't have both locality and counter-factual definiteness. Counter-factual definiteness is simply a more careful statement of realism - although it's slightly different.

Thanks
Bill
 
  • #94
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (given that it leads to the BI, which are violated), you must allow superluminal causal influences.
That is the reverse of my understanding, and of everything I've ever read on the topic. Why do you think that?
And why do you think anybody would ever want to reject CFD if doing so doesn't solve anything and only creates more problems?
 
  • #95
andrewkirk said:
That is the reverse of my understanding, and of everything I've ever read on the topic.

It's wrong. We all make errors, and that's all it was. I do things like that all the time.

Thanks
Bill
 
  • #96
Ilja said:
It's the reverse - if you want to reject counter-factual definiteness (given that it leads to the BI, which are violated), you must allow superluminal causal influences.
Not at all. Without CFD, there is no definite state so there is no need for an influence to cause it, whether superluminal or not. In more familiar terms, nothing has to collapse the wavefunction of the detectors if it does not, in fact, collapse.
Ilja said:
No, once we accept the EPR criterion of reality, accept also the observable 100% anti-correlation, and accept Einstein causality, we can prove that the particle has a predetermined spin in all directions, so that we do not have to rely on vague philosophical "I think" feelings.
Well, those assumptions may entail predetermined spin, but EPR violates the BI, and this proves the opposite. When pitching facts against assumptions, I tend to back the facts. One or more assumptions are wrong. Perhaps that is what you mean?

Einstein overstated the case because Heisenberg and Bohr were concocting anti-real theories or weird ideas that observation creates reality. With EPR confirmed by experiment, Einstein would undoubtedly have continued to believe that the moon exists even when he wasn't looking at it, but he would have accepted that it might well be in a positional superposition rather than simply "there".

Of course discussing what Einstein would have thought is counter-factual reasoning too.
 
  • #97
Derek Potter said:
OK, but the point is obscured if you illustrate non-locality with an example where non-locality is not needed :)

The reasoning goes like this:
  • If at some point, Alice knows for certain what Bob's measurement's outcome will be before the measurement takes place, then that reflects a physical fact about Bob's situation.
  • Either (A) that fact was true before Alice performed her measurement (and her measurement merely revealed that fact to her), or (B) the fact became true when Alice performed her experiment.
  • Choice (A) is a hidden-variables theory, of the type ruled out by Bell's inequality.
  • Choice (B) implies that something taking place near Alice (her measurement) caused a change in the facts about Bob.
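The certainty in the first bullet is straightforward to verify from the formalism. A minimal sketch (my own, using numpy, with z-axis measurements for both parties on the singlet state):

```python
import numpy as np

# Singlet state |psi> = (|01> - |10>) / sqrt(2), written in the z-basis
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

up = np.array([[1, 0], [0, 0]], dtype=complex)    # projector onto spin-up (z)
down = np.array([[0, 0], [0, 1]], dtype=complex)  # projector onto spin-down (z)
I = np.eye(2, dtype=complex)

def prob(PA, PB):
    # Joint probability: Alice (first qubit) gets PA, Bob (second qubit) gets PB
    P = np.kron(PA, PB)
    return float(np.real(psi.conj() @ P @ psi))

p_alice_up = prob(up, I)                                 # ≈ 0.5
p_bob_down_given_alice_up = prob(up, down) / p_alice_up  # exactly 1.0
print(p_bob_down_given_alice_up)
```

Once Alice sees spin-up, the conditional probability for Bob's spin-down along the same axis is exactly 1 - that is the certainty which forces the choice between (A) and (B).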
 
  • #98
Derek Potter said:
Not at all. Without CFD, there is no definite state so there is no need for an influence to cause it, whether superluminal or not. In more familiar terms, nothing has to collapse the wavefunction of the detectors if it does not, in fact, collapse.
What does "without CFD" mean if CFD is derived?
Derek Potter said:
Well those assumptions may entail predetermined spin but EPR violates BI and this proves the opposite. When pitching facts against assumptions I tend to back the facts. One or more assumptions are wrong. Perhaps that is what you mean?
Of course, one of the assumptions has to be wrong. One is an observational fact, which we can take as given (let others care about loopholes). What remains is:

1.) The EPR criterion of reality: if, without in any way disturbing the system, we can predict with certainty the result of an experiment, then this result is an element of reality even without, or before, the measurement being done - that is, it is CFD.

2.) Einstein causality: The experiment done by Bob does in no way disturb the system measured by Alice.
 
  • #99
andrewkirk said:
That is the reverse of my understanding, and of everything I've ever read on the topic. Why do you think that?
And why do you think anybody would ever want to reject CFD if doing so doesn't solve anything and only creates more problems?

CFD in this particular situation is the consequence of the EPR criterion of reality and Einstein causality, together with the observable fact of the 100% anticorrelation.

We would want to reject it, because it would be all we need (together with Einstein causality) to continue with the proof of Bell's inequalities. They are violated (modulo loopholes I ignore), and one way to solve the problem would be to reject CFD in this particular situation.

But if we want to do this, we are faced with the EPR argument, which derives CFD from the EPR criterion of reality and Einstein causality. If one rejects the idea of rejecting the EPR criterion of reality, one obtains what I have claimed, namely that the rejection of CFD requires the rejection of Einstein causality.

And, indeed, this is the reverse of the understanding of many people - all those who make the quite common error of ignoring that determinism is not assumed but derived in the EPR argument, so that they think simply rejecting determinism would be sufficient to solve the problem.
 
  • #100
bhobba said:
Counter-factual definiteness is simply a more careful statement of realism - although its slightly different.

Not at all. In particular, de Broglie-Bohm theory is clearly realistic, even deterministic, but there is no CFD in it. The outcomes of "measurements" in dBB are essentially the results of interactions, and depend on the configuration of the "measured" system as well as of the "measurement" device. Thus, there is no prediction for measurements which are not performed, because such unperformed experiments have no configuration of the "measurement" device.
 