Local realism ruled out? (was: Photon entanglement and )

  • #551


jambaugh said:
We can also view the classical probability description as a fiber bundle with base the state manifold and fibers of probability density. This is the heart of the Bell inequality derivation, which is equivalent to the assumption that probabilities form a measure over a manifold of objective states of reality. No locality issues need apply. (And "rigidity", or its lack, in measurement is not the issue.)

The forms of locality involved with violations of Bell's inequality are associated with the spacetime manifold -- causal and constitutive. The manifold of objective states of reality necessarily contains violations of one or both when the states are those of QM, but to "see" that locality is in jeopardy, one needs to go to spacetime. That's why so many physicists don't "get it," i.e., they work in Hilbert space where it all makes sense. I teach QM using both the Heisenberg and Schrödinger formalisms; QM makes perfect sense. You have to move beyond playing with the formalism to appreciate the ontological implications (those highlighted in the popular literature). However, if you're only concerned with formal consequences, you have them -- if QM is right, GR can't be right, because GR is both causally and constitutively local. Where do you fall on that issue?
 
  • #552


jambaugh said:
Yes, the math parallels, but the "thing" upon which the relativity group acts is no longer the system state. It is the "state vector" or "mode of preparation" vector identifying a class of actual systems. One cannot narrow this class to the point of all systems acting identically under any possible measurement, and thus one cannot speak of an instantiation of the class as being in an objective state of reality, in the sense that this state determines the outcome of all measurements exactly. Contextuality is an important feature of understanding the quantum description, but there is more than that going on here.
Why would you require that all representations act identically under any possible measurement? Maybe you mean in an exact manner?

Well, in relativity contextuality means that it is quite useless to talk about a preferred reference frame.
In QM contextuality makes it quite hard to talk about a preferred measurement basis.
But here there is equivalence between representations of the ensemble in different measurement bases. That's the idea of the "state vector", isn't it?

But otherwise the absence of a preferred measurement basis is of course only a small part of QM.
 
  • #553


zonde said:
Why would you require that all representations act identically under any possible measurement? Maybe you mean in an exact manner?

If many representations represent the same physical entity, then they must transform isomorphically under the relativity group. But that is not what I'm talking about... rather the reverse: two distinct categories of entities may transform isomorphically, but this isomorphism does not imply they are the same type of entity.

In the case to which I refer, there is the objective reality of a classical object, the traditional observables of which are, as you say, "contextual" and, as I would say, relative. To each classical objective measurable quantity acting as a coordinate of the object's reality there corresponds an act of measurement: the class of experimental procedures which yield identical information about the state of that classical entity. These classes (of observations) necessarily transform isomorphically (or dually, depending on the representation) to the objective reality they measure. They are nonetheless a category of entities in the physics distinct from the category of physical objective states.

Now hop into QM and you have the same (and typically a larger) relativity group of transformations acting on the classes of measurement actions/devices. You however lose the whole of the dual objective reality that, in the classical case, they were presumed to measure for a single physical entity. Instead you have each observation individually corresponding to an instance of actuality, but only commuting subsets able to correspond to a single physical instance of the quantum entity.

Indeed in QM the term "system" refers more to the system of empirical actions on the physical entity, rather than to the entity directly. Contextuality is a prerequisite to this quantum non-objective actuality, but (especially since contextuality can be invoked classically) it is by no means the sole defining characteristic.

Well, in relativity contextuality means that it is quite useless to talk about a preferred reference frame.
In QM contextuality makes it quite hard to talk about a preferred measurement basis.
But here there is equivalence between representations of the ensemble in different measurement bases. That's the idea of the "state vector", isn't it?
(A side note, and repetition of one of my usual speeches)

The essence of a state vector is a maximal measurement of the physical system. Since this measurement is not classically total, it should no longer be referred to as a "state vector" but more properly (as you'll find in some of the literature) as a mode vector, as in describing the mode of measurement or, equivalently, the mode of preparation of the actual physical entity.

With that in mind, when we speak of the relativity group it again has passive and active context, i.e. we can rotate the physical entity or reverse-rotate our measuring devices and achieve the same change of outcomes, but in both cases we work within the same representation framework (of "state" vector, i.e. measured values, i.e. measurement processes) because we cease to have the "metaphysical" duality of measurement process + objective state.

This is proper, and indeed imperative, in the discipline of science, since science is an epistemological discipline. Within the doctrine of modern science the observation, not the objective state, is the most fundamental component of a theory.
But otherwise the absence of a preferred measurement basis is of course only a small part of QM.
Yes, that was principally my point.

Another note: in this non-object understanding of QM one can still be reductive, in the sense of, say, reducing the behavior of the moon to the behaviors of its component elementary particles. However, the contextuality gets "squared" in the treatment of composite systems. Not only do we have the sum of relativity transformations for the components, we have the product, which implies there is not only a contextual aspect to how you measure the components to derive a quantity corresponding to the composite, but also a contextual aspect to how you actually subdivide the composite into components.

For a concrete example consider a ground state helium-4 atom. In subdividing into nucleus and 2 electrons we may speak of the spin z +1/2 electron and spin z -1/2 electron, or alternatively into spin x +1/2 and spin x -1/2 (since we know the electron pair is in a singlet [STRIKE]state[/STRIKE] mode ).

In the two cases we are subdividing the electron pair into two electrons in very distinct ways. This is something we must be conscious of when we parse e.g. EPR-type experiments, especially with our common language, which has evolved to describe classical rather than quantum entities. This is where the counterfactuality landmine can trip us up.
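The equivalence of the two subdivisions can be checked numerically. Here is a minimal numpy sketch (the basis vectors and the helper function are mine, just for illustration): the antisymmetric pair combination built from the z-basis and the one built from the x-basis are the same vector up to a global phase, which is why "the spin-z +1/2 electron" and "the spin-x +1/2 electron" are equally valid, but distinct, parsings of the same singlet mode.

```python
import numpy as np

# Spin-1/2 basis states along z
up_z = np.array([1, 0], dtype=complex)
dn_z = np.array([0, 1], dtype=complex)

# Spin-1/2 basis states along x, written in the z basis
up_x = (up_z + dn_z) / np.sqrt(2)
dn_x = (up_z - dn_z) / np.sqrt(2)

def singlet(up, dn):
    """Antisymmetric pair combination (|up,dn> - |dn,up>)/sqrt(2)."""
    return (np.kron(up, dn) - np.kron(dn, up)) / np.sqrt(2)

s_z = singlet(up_z, dn_z)  # "spin-z +1/2 and spin-z -1/2 electron"
s_x = singlet(up_x, dn_x)  # "spin-x +1/2 and spin-x -1/2 electron"

# Same physical mode: the two decompositions agree up to a global phase
overlap = abs(np.vdot(s_z, s_x))
print(overlap)  # 1.0 (up to floating-point rounding)
```

This rotational invariance of the singlet is exactly what licenses either subdivision in the helium example above.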
 
  • #554


RUTA said:
The forms of locality involved with violations of Bell's inequality are associated with the spacetime manifold -- causal and constitutive.
With this I disagree strongly. Bell (and Einstein, Podolsky, & Rosen) invokes space-time locality only insofar as it enables him (them) to exemplify the more basic concept of independent acts of measurement. One can also derive, and then observe the violation of, Bell's inequality by considering, say, two independent observables of a single localized particle. Assuming probabilities derive from a measure on a state manifold for the outcomes, and assuming causal independence in the process of the two classes of measurements, one may derive Bell's inequality. By entangling and then measuring, one can demonstrate (or predict via QM) violation of the inequality.

For example, one could take a spin-3/2 system (4-dimensional Hilbert space) and consider the cross-commuting pair of observables constructible via complex superpositions from z-spin > 0 vs z-spin < 0, and separately |z-spin component| = 3/2 vs |z-spin component| = 1/2. The observable sets (block-)commute, which means they are causally isolated.

Of course as a practical matter it is terribly terribly difficult to isolate the two measurement processes. But for cross commuting sets one can in principle construct the devices to carry out actual experiments. Distance is the easiest means but not the only means.
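This is not jambaugh's exact construction, but the block-commuting idea can be sketched in a few lines of numpy: identify the 4-dimensional space with two "virtual qubits" (one for the sign of the z-component, one for the magnitude 3/2 vs 1/2); then any observable of one factor commutes with any observable of the other, even though both act on the same single particle.

```python
import numpy as np

# Identify the 4-dim spin-3/2 space with two virtual qubits:
# qubit 1 = sign of the z-component, qubit 2 = magnitude (3/2 vs 1/2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Arbitrary observables, one per virtual-qubit factor
A = np.kron(sx + 0.3 * sz, I2)   # acts only on the sign factor
B = np.kron(I2, 0.7 * sx - sz)   # acts only on the magnitude factor

# The two sets block-commute, whatever coefficients we choose
print(np.allclose(A @ B, B @ A))  # True
```

The factorization into virtual qubits here is an illustrative choice; the point is only that causal isolation of the two measurement classes does not require spatial separation.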


The manifold of objective states of reality necessarily contain violations of one or both when the states are those of QM, but to "see" that locality is in jeopardy, one needs to go to spacetime.
It is a question of what assumptions one wants to make. (I don't see locality as ever having been in jeopardy.) Just as we empirically verify spatial locality to assure ourselves it is a valid assumption, we can similarly verify that, say, gross position and spin measurements are similarly independent (commuting). And in both cases we may hypothesize that our assumption is wrong when we see Bell inequality violation, and that there is some mechanism of interaction beyond the theoretical prediction that they are causally independent.

But QM predicts that any pair of commuting observables may nonetheless be entangled, and thus that you can both derive a Bell inequality from "reality assumptions" and see said inequality get violated. It isn't about the locality! It's about the reality!
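To make the "inequality gets violated" claim concrete, here is a minimal numerical check (a sketch; the helper names are mine, and I use the polarization singlet with the standard optimal CHSH angles): the quantum prediction for the CHSH combination of commuting observables reaches 2√2, beyond the bound of 2 that follows from the reality assumptions.

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
# Polarization singlet mode (|HV> - |VH>)/sqrt(2)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

def analyzer(t):
    """+1/-1 observable for a polarization analyzer at angle t."""
    v = np.array([np.cos(t), np.sin(t)])
    return 2 * np.outer(v, v) - np.eye(2)

def E(a, b):
    """Quantum correlation <A(a) B(b)> for the two commuting analyzers."""
    O = np.kron(analyzer(a), analyzer(b))
    return np.vdot(psi, O @ psi).real

# CHSH combination at the standard optimal angles
a, ap, b, bp = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))  # ~2.828, i.e. 2*sqrt(2), violating the classical bound of 2
```

Nothing in the computation refers to spatial separation: only to two commuting sets of observables and an entangled mode.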

However, if you're only concerned with formal consequences, you have them -- if QM is right, GR can't be right because GR is both causally and constitutively local. Where do you fall on that issue?

I don't agree with that statement. GR and QM are perfectly compatible, beyond GR's being as yet still a classical theory. With regard to causal vs constitutive locality, I fully believe in causal locality but am not sure how you mean constitutive locality. In particular, I'm not sure "constitutive anything" is proper in QM if by that you are invoking an objective reality of the physical system or its constituents.

I think in the end, as long as by "constitutively local" one is referring to the ability to expand measurement processes into constituent local measurements (invoking superposition), so that this becomes a further qualification on the causal locality of the measurements, then you're fine.

Said better one may postulate "a complete set of causally local observables."

If you mean otherwise, then you may be reifying the wave-function further than I think is proper.
 
  • #555


jambaugh said:
Indeed in QM the term "system" refers more to the system of empirical actions on the physical entity, rather than to the entity directly.
That is no different in relativity. In relativity the physical entity is not described directly by length and time but rather by measurements of length (rulers) and measurements of time (clocks), along with, of course, a synchronization procedure for clocks instead of universal simultaneity.

But I agree that there is a significant difference between the limits of objective reality under relativity and QM.
Let's formulate it this way and see if you will agree that this is the key difference.
Relativity allows reductionism, and that is one of the key parts of objective reality. You can split the description of reality however you like and all parts will still obey the same isomorphism of different measurements.

Actually you say yourself something like that here:
jambaugh said:
Another note: in this non-object understanding of QM one can still be reductive, in the sense of, say, reducing the behavior of the moon to the behaviors of its component elementary particles. However, the contextuality gets "squared" in the treatment of composite systems. Not only do we have the sum of relativity transformations for the components, we have the product, which implies there is not only a contextual aspect to how you measure the components to derive a quantity corresponding to the composite, but also a contextual aspect to how you actually subdivide the composite into components.

jambaugh said:
With that in mind, when we speak of the relativity group it again has passive and active context, i.e. we can rotate the physical entity or reverse-rotate our measuring devices and achieve the same change of outcomes, but in both cases we work within the same representation framework (of "state" vector, i.e. measured values, i.e. measurement processes) because we cease to have the "metaphysical" duality of measurement process + objective state.
Actually I didn't mean that when I said that there is no preferred measurement basis.
What I mean is that when we talk about non-commuting measurements the ensemble is represented using two orthogonal vectors, and you do not necessarily have a preferred basis for the representation of those two vectors. Actually you have a preferred basis in the case where one of the vectors becomes zero in that basis.
But then, if we want to relate this back to relativity, you have a preferred representation there in the special case when you have a reference frame where every object under consideration is at rest. In that reference frame all effects of relativity will disappear.

And there is another thing where we have different viewpoints, which prevents me from accepting your arguments about probabilistic measurements.
You talk about the uncertainty of measurement, and with that you imply that a single entity of the ensemble is a fair representative of the whole ensemble and that you acquire some certainty of measurement only as a statistical build-up of individual independent probabilistic measurements.
I say that there is more than statistics in QM, and the ensemble is not a statistical ensemble (at least not always) but a physical ensemble, i.e. a measurement of the ensemble can acquire certainties that are not possible for a simple statistical ensemble. So I talk about the certainty of a measurement of the ensemble (rate of clicks versus individual clicks).
That's a bit like talking about the length measurement of a stick instead of a count of atoms along the length of the stick.
 
  • #556
Jambaugh, clearly you’re not familiar with the terminology of the foundations community. Let me provide the background via excerpts from “Reconciling Spacetime and the Quantum: Relational Blockworld and the Quantum Liar Paradox,” W.M. Stuckey, Michael Silberstein & Michael Cifone, Foundations of Physics 38, No. 4, 348 – 383 (2008), quant-ph/0510090 and arXiv 0908.4348 (accepted for presentation at PSA 2010, revised version under re-review at FoP).

From the second paper:

In Healey’s language, strong nonseparability might be dubbed a kind of non-locality, not “causal non-locality” but rather “constitutive non-locality” (Healey, R.: Gauging What’s Real: The Conceptual Foundations of Gauge Theories. Oxford University Press, Oxford (2007), p 127). As he says, strong nonseparability strongly suggests physical property holism, i.e., “There is some set of physical objects from a domain D subject only to type P processes, not all of whose qualitative intrinsic physical properties and relations supervene on qualitative intrinsic physical properties and relations in the supervenience basis of their basic physical parts (relative to D and P) (Healey, 2007, p 125).”

From the first paper:

In particular, the implied metric isn’t an “extreme embodiment of the separability principle” (D. Howard, in Potentiality, Entanglement and Passion-at-a-Distance, edited by R.S. Cohen et al. (Kluwer Academic, Great Britain, 1997), p 122).

As Howard notes in the following passage, one of the central debates between the founding fathers of quantum mechanics was over the conflict between the spacetime picture and the quantum picture of reality and how they may be reconciled (Howard, 1997, pp 114-115):

"The second striking feature of Pauli’s last-quoted paragraph is that it points backward to what was by 1935 an old debate over the nonseparable manner in which quantum mechanics describes interacting systems. The fact that this was the central issue in the pre-1935 debate over the adequacy of the quantum theory disappeared from the collective memory of the physics community after EPR….Einstein had been trying in every which way to convince his colleagues that this was sufficient reason to abandon the quantum path…But it was not just Einstein who worried about quantum nonseparability in the years before 1935. It was at the forefront of the thinking of Bohr and Schrödinger."

In today’s terminology we would say that the spacetime picture of relativity adheres to the following principles (Howard, 1997, pp 124-125):

Separability principle: any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval have their own independent real states such that the joint state is completely determined by the independent states.

Locality principle: any two space-like separated systems A and B are such that the separate real state of A let us say, cannot be influenced by events in the neighborhood of B.

It is now generally believed that Einstein-Podolsky-Rosen (EPR) correlations, i.e., correlated space-like separated experimental outcomes which violate Bell’s inequality, force us to abandon either the separability or locality principle.

As Howard notes, Einstein thought that both these principles, but especially the latter, were transcendental grounds for the very possibility of science. Einstein’s spatiotemporal realism is summarized in his own words (A. Einstein, Deutsche Literaturzeitung 45, 1685-1692 (1924)):

"Is there not an experiential reality that one encounters directly and that is also, indirectly, the source of that which science designates as real? Moreover, are the realists and, with them, all natural scientists not right if they allow themselves to be led by the startling possibility of ordering all experience in a (spatio-temporal-causal) conceptual system to postulate something real that exists independently of their own thought and being?"

Minkowski spacetime (M4) is a perfect realization of Einstein’s vision but as Howard says (D. Howard, “Einstein and the Development of Twentieth-Century Philosophy of Science” to appear in Cambridge Companion to Einstein, from his website):

"Schrödinger’s introduction of entangled n-particle wave functions written not in 3-space but in 3n-dimensional configuration space offends against space-time description because it denies the mutual independence of spatially separated systems that is a fundamental feature of a space-time description."

In this sense, we agree with Howard (Howard, 1997, pp 124-129) that NRQM is best understood as violating “separability” (i.e., independence) rather than “locality” (i.e., no action at a distance, no super-luminal signaling), and we take to heart Pauli’s admonition that “in providing a systematic foundation for quantum mechanics, one should start more from the composition and separation of systems than has until now (with Dirac, e.g.) been the case” (W. Pauli, Scientific Correspondence with Bohr, Einstein, Heisenberg a.o., Vol 2, 1930-1939, edited by Karl von Meyenn (Springer-Verlag, Berlin, 1985), pp 402-404).
***************************************************

Given your postings to date, Jambaugh, I’m guessing you’ll fall into our camp, i.e., causal locality is maintained, QM is “right” and GR is “wrong” in that the separability of GR is only an approximation. So, what say you?
 
  • #557
RUTA said:
Jambaugh, clearly you’re not familiar with the terminology of the foundations community. Let me provide [...]

In Healey’s language, strong nonseparability might be dubbed a kind of non-locality, not “causal non-locality” but rather “constitutive non-locality”

Thanks for the translation. Yes, I'm not familiar with "constitutive (non)locality" as a phrase. To my mind "nonseparability" is more encompassing since it reflects not just spatial issues. Inseparability is clear enough in the subadditivity of entropy for quantum systems.

Separability principle: any two systems A and B, regardless of the history of their interactions, separated by a non-null spatiotemporal interval have their own independent real states such that the joint state is completely determined by the independent states.
But I am of the camp that feels even a single system A has no "independent real state" as such, so this definition of separability fails from the start. (I think the issue being considered in defining separability vs nonseparability is one of trying to reconcile QM with an ontology... a futile quest, IMNSHO.)

"Locality principle: any two space-like separated systems A and B are such that the separate real state of A let us say, cannot be influenced by events in the neighborhood of B."
Here again I see a bias toward "statism";-) if you pardon the misuse of the term. Rather try:

Observational locality principle: An action carried out in region A spatially separated from region B can have no effect on measurements made in region B.

Probably I could word that better given time but you see the point. Avoid reference to operationally meaningless unobserved states of reality and stick to the operationally meaningful actions such as measurements.

Given your postings to date, Jambaugh, I’m guessing you’ll fall into our camp, i.e., causal locality is maintained, QM is “right” and GR is “wrong” in that the separability of GR is only an approximation. So, what say you?

Fairly accurate except I see nothing wrong with GR at its foundation, only in the categorization of the geometric model of GR as an ontological theory as opposed to being a model. The elimination of the gravitational force qua dynamic force is to my mind a "gauge condition" and the full power of the equivalence principle has yet to be invoked in attempts to quantize GR.

I bring this up because I think the separability of GR is a function of its typical geometric formulation (model) and not the theory itself when "properly" (i.e. operationally) interpreted.
 
  • #558
jambaugh said:
But I am of the camp that feels even a single system A has no "independent real state" as such, so this definition of separability fails from the start. (I think the issue being considered in defining separability vs nonseparability is one of trying to reconcile QM with an ontology... a futile quest, IMNSHO.)

You have to have some ontology to do physics; the formalism is meaningless in and of itself. I teach QM using actual experiments, so all the formalism translates immediately to actual experimental configurations and measurement devices. Anyway, no ontology, no physics.

jambaugh said:
Probably I could word that better given time but you see the point. Avoid reference to operationally meaningless unobserved states of reality and stick to the operationally meaningful actions such as measurements.

And when I teach QM according to experiments, like I said before, QM is perfectly clear. It's not until you ask, "How can that be?" that you run into confusion.

jambaugh said:
Fairly accurate except I see nothing wrong with GR at its foundation, only in the categorization of the geometric model of GR as an ontological theory as opposed to being a model. The elimination of the gravitational force qua dynamic force is to my mind a "gauge condition" and the full power of the equivalence principle has yet to be invoked in attempts to quantize GR.

I bring this up because I think the separability of GR is a function of its typical geometric formulation (model) and not the theory itself when "properly" (i.e. operationally) interpreted.

And this is where we in the foundations community see a benefit to asking, "How can that be?" By understanding that QM is nonlocal and/or nonseparable while GR is local and separable, you see immediately that one or both have to be corrected in one or both respects. Our formalism has GR as a statistical limit to quantum physics when the approximation of separability holds. We developed our approach to QG via our interpretation of QM. So, while foundational issues may be irrelevant to you, they were sine qua non for us.
 
  • #559
RUTA said:
You have to have some ontology to do physics; the formalism is meaningless in and of itself. I teach QM using actual experiments, so all the formalism translates immediately to actual experimental configurations and measurement devices. Anyway, no ontology, no physics.
Not to my mind. The formalism can be meaningfully interpreted without invoking ontology of the system; e.g. "an electron" is the phenomenon of an electron detector going "click". Of course one must invoke an ontological description of the measuring devices and the records of measurements.

And when I teach QM according to experiments, like I said before, QM is perfectly clear. It's not until you ask, "How can that be?" that you run into confusion.
Right, and in teaching QM according to experiments you should be demonstrating that the formalism is operationally applied to the configuration of experimental devices. I assert that the confusion arises when one tries to push beyond that operational interpretation.
And this is where we in the foundations community see a benefit to asking, "How can that be?" By understanding that QM is nonlocal and/or nonseparable while GR is local and separable, you see immediately that one or both have to be corrected in one or both respects. Our formalism has GR as a statistical limit to quantum physics when the approximation of separability holds. We developed our approach to QG via our interpretation of QM. So, while foundational issues may be irrelevant to you, they were sine qua non for us.
I rather see QM as non-separable, causally local, while CM is separable, causally local. Classical GR is separable, causally local and a QGR should be non-separable, causally local.
 
  • #560
jambaugh said:
Not to my mind. The formalism can be meaningfully interpreted without invoking ontology of the system; e.g. "an electron" is the phenomenon of an electron detector going "click". Of course one must invoke an ontological description of the measuring devices and the records of measurements.

Exactly my point. In fact, in RBW the ontology is "there is no system" (other than the experimental equipment).

jambaugh said:
Right, and in teaching QM according to experiments you should be demonstrating that the formalism is operationally applied to the configuration of experimental devices. I assert that the confusion arises when one tries to push beyond that operational interpretation.

I start my QM course with the Mermin device, interaction-free measurement, delayed choice (Zeilinger and Aharonov have done some cool experiments that I show them), and the quantum liar experiment. Then we use the QM formalism to describe all these experiments. I have them read many articles, but the texts are Shankar and Albert. For example, we work through "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," D. Dehlinger & M.W. Mitchell, Am. J. Phys. 70, 903-910 (Sep 2002), in detail. We also reproduce all the results in Mermin's three AJP papers. So, the students see how QM works and why most physicists don't see anything "weird" about it. But, they also see that QM violates separability and/or locality, which strikes them as "weird," so they can appreciate all the "fuss" made over this fact.

jambaugh said:
I rather see QM as non-separable, causally local, while CM is separable, causally local. Classical GR is separable, causally local and a QGR should be non-separable, causally local.

Exactly what we believe. Our approach to QG can be described as non-separable Regge calculus. The manner by which this unifies physics is explained in 0908.4348. What is your approach to QG?
 
  • #561
RUTA said:
For example, we work through "Entangled photons, nonlocality, and Bell inequalities in the undergraduate laboratory," D. Dehlinger & M.W. Mitchell, Am. J. Phys. 70, 903-910 (Sep 2002), in detail.

RUTA, do you ever run that experiment in your lab?
 
  • #562
DrChinese said:
RUTA, do you ever run that experiment in your lab?

I'm a theorist. I was told as an undergrad to avoid the lab -- I destroyed too much equipment :-)
 
  • #563
RUTA, it would be great if you could explain one thing to me, regarding photon entanglement (superposition polarization):

Is the entangled superposition (of two photons) described by one single wavefunction?
 
  • #564
DevilsAvocado said:
RUTA, it would be great if you could explain one thing to me, regarding photon entanglement (superposition polarization):

Is the entangled superposition (of two photons) described by one single wavefunction?

Yes, |psi> ~ |HH> + |VV> is what Dehlinger created (well, close thereto, see eqns 1 and 6). |psi> ~ |HV> - |VH>, called the "singlet state," also gives results consistent with the Mermin device.
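If it helps, the resulting correlations can be checked numerically. This is a sketch assuming ideal polarizers (the helper function is mine; compare Dehlinger & Mitchell's equations): for |psi> ~ |HH> + |VV>, the probability that both photons pass polarizers at angles a and b is (1/2)cos²(a − b), which is the correlation the Mermin device dramatizes.

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)

# Normalized (|HH> + |VV>)/sqrt(2)
psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

def p_pass_pass(a, b):
    """Probability both photons pass ideal polarizers at angles a and b."""
    pa = np.cos(a) * H + np.sin(a) * V  # transmission axis at angle a
    pb = np.cos(b) * H + np.sin(b) * V
    amp = np.vdot(np.kron(pa, pb), psi)
    return abs(amp) ** 2

a, b = 0.3, 1.1
print(np.isclose(p_pass_pass(a, b), 0.5 * np.cos(a - b) ** 2))  # True
```

At equal angles the coincidence probability is 1/2 and the outcomes are perfectly correlated; the angle dependence is what feeds the Bell-inequality analysis in the paper.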
 
  • #565
RUTA said:
Yes, |psi> ~ |HH> + |VV> is what Dehlinger created (well, close thereto, see eqns 1 and 6). |psi> ~ |HV> - |VH>, called the "singlet state," also gives results consistent with the Mermin device.

WOW! Just great! Many thanks RUTA!

I’m working on a "personal surprise" that’s going to cause "some trouble" in the "EPR-FTL-Department". :wink:

Will post it in https://www.physicsforums.com/showthread.php?t=395509 in a couple of days...


Just a small follow-up: A measurement on either of these two photons will collapse/decohere the wavefunction/"singlet state", right?


EDIT: I think I found the answer in http://www.optics.rochester.edu/workgroups/lukishova/QuantumOpticsLab/homepage/mitchel1.pdf :
Despite the randomness, the choice of a clearly has an effect on the state of the idler photon: it gives it a definite polarization in the |Va>i ,|Ha>i basis, which it did not have before the measurement.
 
  • #566
DevilsAvocado said:
Just a small follow-up: A measurement on either of these two photons will collapse/decohere the wavefunction/"singlet state", right?

It will collapse the wavefunction, but neither party knows whether the other has made a measurement -- they both get what looks to them like totally random results (50-50 V H outcomes, regardless of setting) whether or not the other guy is doing anything at his end. You only see "weirdness" in the correlations, which are exchanged at sub-light or light speed b/w observers.
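The "looks totally random locally" point can be checked directly (a numpy sketch; the variable names are mine): tracing out the distant photon leaves the maximally mixed state, so for any local analyzer angle the outcome probabilities are 50-50, whatever the other party does.

```python
import numpy as np

H = np.array([1, 0], dtype=complex)
V = np.array([0, 1], dtype=complex)
psi = (np.kron(H, H) + np.kron(V, V)) / np.sqrt(2)

# Reduced density matrix of photon 1: trace out photon 2
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho1 = np.trace(rho, axis1=1, axis2=3)

print(np.allclose(rho1, np.eye(2) / 2))  # True: maximally mixed

# So for ANY local analyzer angle t, the pass probability is 1/2
t = 0.77
proj = np.outer([np.cos(t), np.sin(t)], [np.cos(t), np.sin(t)])
print(np.trace(rho1 @ proj).real)  # 0.5
```

This is why no signal can be sent by choosing settings: the local statistics are setting-independent, and the "weirdness" appears only once the two records are brought together and compared.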
 
  • #567
RUTA said:
It will collapse the wavefunction, ...

This is just marvelous! Thanks again!

This is going to be very interesting and fun, as soon as I have everything ready for posting. Watch out! :smile:
 
  • #568
DevilsAvocado said:
This is just marvelous! Thanks again!

This is going to be very interesting and fun, as soon as I have everything ready for posting. Watch out! :smile:

Looking forward to it... :smile:
 
  • #569
DrChinese said:
Looking forward to it... :smile:

Me too. (why am I suddenly getting 'nervous'... ?:bugeye:?)

:wink:
 
  • #570
DevilsAvocado said:
Me too. (why am I suddenly getting 'nervous'... ?:bugeye:?)

:wink:

Because you're about to learn something via one of DrC's painful lessons. No pain, no gain :-)
 
  • #571
RUTA said:
Because you're about to learn something via one of DrC's painful lessons. No pain, no gain :-)

Aw, I promise to be gentle.

Actually RUTA, I am quite in the same boat right now. I just completed a draft of a paper which is available for comments - and yours would be welcome. It has nothing to do with this thread, but check it out if anyone wants to skewer me:

DrC's New Paper and opportunity to bash me with your comments

Here's your chance! Email me (I'm not ready for a new thread quite yet as I submitted to PF Independent Research for review).
 
  • #572
RUTA said:
DrC's painful lessons. No pain, no gain :-)
DrChinese said:
I promise to be gentle.


There seems to be some "entangled discrepancy" here... flip side of the coin...? :biggrin:

I better keep my big mouth shut until there’s something more substantial for the "wolf" to tear apart. :eek:
 
  • #573
Demystifier said:
That's interesting, because my explicit Bohmian model of relativistic nonlocal reality does involve a "meta time".
Now I have a better understanding of the physical meaning of this "meta time". It can be viewed as a generalization of the notion of proper time. It is also formally analogous to Newtonian absolute time (even though it is fully relativistically covariant). More details can be found in
http://xxx.lanl.gov/abs/1006.1986
 
  • #574
I have not posted here for quite some time as I did not feel I could add anything new. I am posting now because, on the one hand, the thread has apparently drawn a lot of interest, on the other hand, my paper has just been accepted for publication in the International Journal of Quantum Information (there is a preprint at http://www.akhmeteli.org/akh-prepr-ws-ijqi2.pdf ), so I guess it would be appropriate to summarize its results here, as they are quite relevant to this discussion.

So the article starts with the equations of (non-second-quantized) scalar electrodynamics. They describe a Klein-Gordon particle (a scalar particle described by the Klein-Gordon equation) interacting with the electromagnetic field (described by the Maxwell equations). It is shown that this model is equivalent (at least locally) to a local realistic model – modified electrodynamics without particles – as the matter (particle) field can be naturally eliminated from the equations of scalar electrodynamics, and the resulting equations describe the independent evolution of the electromagnetic field (the electromagnetic 4-potential). Furthermore, this evolution is shown to be equivalent to the unitary evolution of a certain (second-quantized) quantum field theory.
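For readers who want the starting point spelled out, the coupled field equations of scalar electrodynamics, in one common sign convention (the preprint should be consulted for its exact conventions), are:

```latex
\left[(\partial^\mu - ieA^\mu)(\partial_\mu - ieA_\mu) + m^2\right]\varphi = 0,
\qquad
\partial_\nu F^{\mu\nu} = j^\mu,
```

with

```latex
j^\mu = ie\left(\varphi^*\,\partial^\mu\varphi - \varphi\,\partial^\mu\varphi^*\right)
        - 2e^2 A^\mu \varphi^*\varphi,
\qquad
F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu .
```

Eliminating the matter field φ from this system leaves equations for the 4-potential A^μ alone, which is the "modified electrodynamics without particles" referred to above.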

This is clearly relevant to the topic of this thread: indeed, it turns out that unitary evolution of a quantum field theory can be reproduced in a local realistic (LR) model, so it is impossible to rule out the LR model without using some additional postulates, such as the projection postulate of the quantum theory of measurement. On the other hand, as I argued repeatedly, this postulate directly contradicts the unitary evolution.
 
  • #575
akhmeteli said:
This is clearly relevant to the topic of this thread: indeed, it turns out that unitary evolution of a quantum field theory can be reproduced in a local realistic (LR) model, so it is impossible to rule out the LR model without using some additional postulates, such as the projection postulate of the quantum theory of measurement.
Can this quantum field theory be used to describe entangled particles and predict the results of Aspect-type experiments? Are you claiming that the LR model can violate any Bell inequalities?
 
  • #576
JesseM said:
Can this quantum field theory be used to describe entangled particles

Yes, this quantum field theory (QFT) can definitely be used to describe entangled particles.

However, this answer, while correct, can be misleading, because another question is relevant here: "Can this local realistic model (LRM) be used to describe entangled particles?" These two questions are not equivalent: the QFT and the LRM are not equivalent; they merely share the same evolution. One can say that the LRM is a subset of the QFT.

So what is the answer to the second question? The short answer is "yes". However, it depends on how you would answer the following question: "Can a 3-dimensional body be used to describe its 2-dimensional projections?" If you believe it can, then this LRM can definitely be used to describe entangled particles. If you believe it cannot, then the answer to this question is negative.

Let me explain. The states of the LRM are so-called generalized coherent states, which are superpositions of an infinite number of states with definite particle number, including a state with, say, 2 particles; an entangled state of two particles is therefore a projection of a state of the LRM.
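As an illustration of how a fixed-particle-number state sits inside such a superposition, here is the simplest special case, a single-mode Glauber coherent state (the generalized coherent states of the paper are more elaborate, so this is only a sketch of the idea):

```python
import numpy as np
from math import factorial

def coherent_amplitudes(alpha, nmax):
    # Fock-basis amplitudes of a coherent state |alpha>:
    #   c_n = exp(-|alpha|^2 / 2) * alpha^n / sqrt(n!)
    return np.array([np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(factorial(n))
                     for n in range(nmax + 1)])

c = coherent_amplitudes(1.0, 20)
# The state is a superposition over all particle numbers; its n = 2
# component is one "projection" of the full field state, in the same
# sense that a 2-particle entangled state is a projection of an LRM state.
p2 = abs(c[2]) ** 2   # probability weight of the two-particle sector
```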

JesseM said:
and predict the results of Aspect-type experiments?

I think so. As I argued here, quoting the leading experts in the field, the genuine Bell inequalities have never been violated in Aspect-type experiments so far.

However, a caveat is required here as well. I don't claim that the QFT or the LRM correctly describes the whole of Nature: for example, being based on scalar electrodynamics, neither describes electron spin. However, scalar electrodynamics is a decent theory, successfully describing a very wide range of phenomena.

JesseM said:
Are you claiming that the LR model can violate any Bell inequalities?

No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

So I don't think one can demand that the LRM faithfully reproduce the relevant mutually contradicting predictions of quantum theory. On the other hand, this LRM has exactly the same evolution as the QFT.

By the way, this also suggests that one needs more than unitary evolution to prove the violations in quantum theory.
 
  • #577
akhmeteli said:
I think so. As I argued here, quoting the leading experts in the field, the genuine Bell inequalities have never been violated in Aspect-type experiments so far.
I haven't read the whole thread, are you just talking about experimental loopholes like the ones discussed here? There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
akhmeteli said:
No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.
What do you mean by "mutually contradicting postulates"? Remember, in its basic form QM is nothing more than a recipe for making predictions about experimental results; it doesn't come with any built-in interpretation of the "meaning" of this recipe. The fact that the recipe involves calculating the evolution of the wavefunction between measurements and then using the projection postulate to get the probabilities of different measurement results doesn't imply that either the wavefunction or the "collapse of the wavefunction" has any independent reality outside the fact that when we use this recipe we do get correct statistical predictions. Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process. And the only assumption about ordinary QM used in Bell's proof that QM is incompatible with local realism is the assumption that the recipe does indeed give correct statistical predictions about experimental results, regardless of the underlying explanation for the predicted statistics.

Quantum field theory is also just a recipe for making predictions, and although I haven't studied QFT I'm pretty sure that known quantum field theories like quantum electrodynamics do mirror nonrelativistic QM in predicting violations of Bell inequalities. Does the simplified quantum field theory you are considering differ from known quantum field theories in this respect?
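For reference, the QM prediction in question is easy to exhibit with the textbook singlet correlation function; a minimal sketch (not tied to the simplified QFT under discussion) of the CHSH quantity at the standard optimal angles:

```python
import numpy as np

def E(a, b):
    # QM correlation for polarization measurements at analyzer angles a, b
    # (radians) on a photon pair in the singlet state.
    return -np.cos(2 * (a - b))

def chsh(a, a2, b, b2):
    # CHSH combination; any local realistic model is bounded by 2.
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Standard optimal angle choices:
a, a2, b, b2 = 0.0, np.pi / 4, np.pi / 8, 3 * np.pi / 8
S = chsh(a, a2, b, b2)
# QM gives S = 2*sqrt(2) ~ 2.828, exceeding the local-realist bound of 2.
```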
 
  • #578
akhmeteli said:
No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

So I don't think one can demand that the LRM faithfully reproduce the relevant mutually contradicting predictions of quantum theory. On the other hand, this LRM has exactly the same evolution as the QFT.

By the way, this also suggests that one needs more than unitary evolution to prove the violations in quantum theory.

So, are you claiming that QM's prediction of the violation of Bell inequalities is wrong?
 
  • #579
JesseM said:
I haven't read the whole thread, are you just talking about experimental loopholes like the ones discussed here?

Yes, that's what I am talking about.

JesseM said:
There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously

I agree. Some people think that closing separate loopholes in separate experiments is good enough though. In post 34 of this thread I asked one of them:

"what’s wrong [then] with the following reasoning: planar Euclidian geometry is wrong because it predicts that the sum of angles of any triangle is 180 degrees, whereas experiments demonstrate with confidence of 300 sigmas or more that the sums of angles of a quadrangle on a plane and a triangle on a sphere are not equal to 180 degrees."

I have never heard an answer from anybody.

JesseM said:
--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.

I agree, "most experts would agree" on that. But what conclusions am I supposed to draw from that? That the model I offer is "very contrived"? I cannot agree with that, as it's essentially old good scalar electrodynamics (non-second-quantized). That the model does not "get correct predictions (agreeing with those of QM) for the experiments that have already been performed"? But it has the same evolution as the relevant QFT. I agree that the QFT is not the same as the standard quantum electrodynamics (QED), but it is pretty close, so I guess the predictions will be close to those of QED in many cases, although, as I admitted, the QFT fails to describe the electronic spin, for example. So while I cannot state that the LRM gives correct predictions for all experiments performed so far, I would say it suggests that a local realistic theory giving correct predictions for the past experiments and failing in an ideal experiment must not necessarily be "very contrived".

JesseM said:
What do you mean by "mutually contradicting postulates"? Remember, in its basic form QM is nothing more than a recipe for making predictions about experimental results; it doesn't come with any built-in interpretation of the "meaning" of this recipe. The fact that the recipe involves calculating the evolution of the wavefunction between measurements and then using the projection postulate to get the probabilities of different measurement results doesn't imply that either the wavefunction or the "collapse of the wavefunction" has any independent reality outside the fact that when we use this recipe we do get correct statistical predictions.

I think the postulates are indeed "mutually contradicting", as the projection postulate predicts transformation of a pure wavefunction into a mixture and it predicts irreversibility. Neither is true for unitary evolution. Of course, you can indeed avoid a contradiction, saying (following von Neumann) that unitary evolution is correct between measurements, and the projection postulate is correct during measurements. But I think it is rather difficult to cling to that position now, 80 years after von Neumann. Are you ready to say that if you call something "an instrument", it evolves in one way, and if you don't call it that, it evolves differently? Do you think that unitary evolution is wrong for instruments? Or for observers? I quoted Schlosshauer in post 41 in this thread; he reviewed modern experiments and concluded, among other things (please see the exact wording in post 41), that unitary dynamics has been confirmed everywhere it was tested and that there is no positive evidence of collapse.


JesseM said:
Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process.

No, this is not quite so. If I understand Demystifier (https://www.physicsforums.com/showpost.php?p=2167542&postcount=19) correctly (and he has written maybe dozens of articles on Bohmian mechanics), although the projection postulate can be derived in Bohmian mechanics, it can only be derived as an approximation, maybe a very good approximation, but an approximation.

JesseM said:
And the only assumption about ordinary QM used in Bell's proof that QM is incompatible with local realism is the assumption that the recipe does indeed give correct statistical predictions about experimental results, regardless of the underlying explanation for the predicted statistics.

The problem is this recipe includes mutually contradictory components, so it cannot be always correct.

JesseM said:
Quantum field theory is also just a recipe for making predictions, and although I haven't studied QFT I'm pretty sure that known quantum field theories like quantum electrodynamics do mirror nonrelativistic QM in predicting violations of Bell inequalities.

I think this is correct, but they still have the same mutually contradictory components as the standard quantum theory (SQM), so what I said about SQM is true about quantum field theories, such as QED.


JesseM said:
Does the simplified quantum field theory you are considering differ from known quantum field theories in this respect?

I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
 
  • #580
RUTA said:
So, are you claiming that QM's prediction of the violation of Bell inequalities is wrong?

Not exactly. I suspect that this prediction may be wrong, but I cannot claim that it is wrong. Indeed, I do understand that violations could be found in a loophole-free experiment as soon as tomorrow. Following other people, I am just saying (right now, not tomorrow) that 1) there has been no evidence of violations of the genuine Bell inequalities so far, and that 2) mutually contradictory assumptions are required to derive QM's prediction of the violation of Bell inequalities. Therefore, local realism has not been ruled out so far.
 
  • #581
akhmeteli said:
I agree. Some people think that closing separate loopholes in separate experiments is good enough though. In post 34 of this thread I asked one of them:

"what’s wrong [then] with the following reasoning: planar Euclidian geometry is wrong because it predicts that the sum of angles of any triangle is 180 degrees, whereas experiments demonstrate with confidence of 300 sigmas or more that the sums of angles of a quadrangle on a plane and a triangle on a sphere are not equal to 180 degrees."

I have never heard an answer from anybody.
This is kind of a strawman, no one is asking you to adopt a general principle along the lines of "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously". Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
JesseM said:
still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
akhmeteli said:
I agree, "most experts would agree" on that. But what conclusions am I supposed to draw from that? That the model I offer is "very contrived"?
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?
akhmeteli said:
That the model does not "get correct predictions (agreeing with those of QM) for the experiments that have already been performed"? But it has the same evolution as the relevant QFT.
That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
akhmeteli said:
I think the postulates are indeed "mutually contradicting", as the projection postulate predicts transformation of a pure wavefunction into a mixture and it predicts irreversibility.
Why is this a "contradiction", if we don't assume that either the wavefunction or its collapse on measurement are in any sense "real", but just treat them as parts of a pragmatic recipe for making quantitative predictions about experimental results? Do you claim there are any situations where the two postulates don't lead to a unique prediction about the statistics we should expect to see in some empirical experiment? If so, what situation would that be?
akhmeteli said:
Neither is true for unitary evolution. Of course, you can indeed avoid a contradiction, saying (following von Neumann) that unitary evolution is correct between measurements, and the projection postulate is correct during measurements.
Yes, this is just what the pragmatic recipe says we should do.
akhmeteli said:
But I think it is rather difficult to cling to that position now, 80 years after von Neumann. Are you ready to say that if you call something "an instrument", it evolves in one way, and if you don't call it that, it evolves differently?
Personally I believe there are some true set of laws that describe what's "really" going on (I'd favor some type of many-worlds type view) and which work exactly the same for interactions between quantum systems and "instruments" as they do for interactions between individual particles. But again, if QM is treated just as a pragmatic recipe for making predictions which says nothing about the underlying "reality" one way or another, then in practice I don't think there is much ambiguity about what constitutes a "measurement", my understanding is that it's basically synonymous with interactions that involve environmental decoherence. And the types of experiments that physicists do are typically carefully controlled to prevent environmental decoherence from any other system besides the assigned "measuring device" (for example, a double-slit experiment with an electron will be done in a vacuum to prevent decoherence from interactions between the electrons and air molecules).
akhmeteli said:
Do you think that unitary evolution is wrong for instruments? Or for observers?
I don't think it's likely to be wrong in reality since I favor some sort of variant of the many-worlds interpretation, but I do think it's hard to get concrete predictions about empirical results using unitary evolution alone
akhmeteli said:
I quoted Schlosshauer in post 41 in this thread, he reviewed modern experiments and concluded, among other things (please see the exact wording in post 41), that unitary dynamics has been confirmed everywhere it was tested and that there is no positive evidence of collapse.
You didn't actually give a link to the paper, but you seem to be talking about this one. Anyway, Schlosshauer seems to be just arguing for the many-worlds interpretation (see the discussion beginning with 'The basic idea was introduced in Everett’s proposal of a relative-state view of quantum mechanics' on p. 1) and against any sort of objective collapse theory (see p. 13 where he talks about 'physical collapse models'--note that such models would actually be empirically distinguishable from ordinary QM in certain situations, like if information could be recorded and then 'erased' in a sufficiently large system completely isolated from environmental decoherence), but this is not the same as arguing that on a pragmatic level there's anything wrong with using the projection postulate to get quantitative predictions about experimental results. And it typically requires a lot of sophisticated argument to show how any many-worlds type interpretation can give concrete predictions in the form of probabilities (see the preferred basis problem), with no complete agreement among many-worlds advocates on how to do this (Schlosshauer discusses the problem on p. 14 of the paper, in the section 'Emergence of probabilities in a relative-state framework'); I think they all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate, though. Indeed, Schlosshauer says at the beginning of that section that "The question of the origin and meaning of probabilities in a relative state–type interpretation that is based solely on a deterministically evolving global quantum state, and the problem of how to consistently derive Born’s rule in such a framework, has been the subject of much discussion and criticism aimed at this type of interpretation." 
And a bit later he says "The solution to the problem of understanding the meaning of probabilities and of deriving Born’s rule in a relative-state framework must therefore be sought on a much more fundamental level of quantum mechanics."
JesseM said:
Indeed, the example of Bohmian mechanics proves that we are free to believe there is some underlying model that explains the origin of the probabilities given in the recipe without the need to assume anything special really happens during the measurement process.
akhmeteli said:
No, this is not quite so. If I understand Demystifier (https://www.physicsforums.com/showpost.php?p=2167542&postcount=19) correctly (and he has written maybe dozens of articles on Bohmian mechanics), although the projection postulate can be derived in Bohmian mechanics, it can only be derived as an approximation, maybe a very good approximation, but an approximation.
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield. In section 4 of the Stanford article on Bohmian mechanics, they say:
However, the form given above has two advantages: First, it makes sense for particles with spin — and all the apparently paradoxical quantum phenomena associated with spin are, in fact, thereby accounted for by Bohmian mechanics without further ado. Secondly, and this is crucial to the fact that Bohmian mechanics is empirically equivalent to orthodox quantum theory, the right hand side of the guiding equation is J/ρ, the ratio of the quantum probability current to the quantum probability density. This shows first of all that it should require no imagination whatsoever to guess the guiding equation from Schrödinger's equation, provided one is looking for one, since the classical formula for current is density times velocity.

...

This demonstrates that all claims to the effect that the predictions of quantum theory are incompatible with the existence of hidden variables, with an underlying deterministic model in which quantum randomness arises from averaging over ignorance, are wrong. For Bohmian mechanics provides us with just such a model: For any quantum experiment we merely take as the relevant Bohmian system the combined system that includes the system upon which the experiment is performed as well as all the measuring instruments and other devices used in performing the experiment (together with all other systems with which these have significant interaction over the course of the experiment). The "hidden variables" model is then obtained by regarding the initial configuration of this big system as random in the usual quantum mechanical way, with distribution given by |ψ|2. The initial configuration is then transformed, via the guiding equation for the big system, into the final configuration at the conclusion of the experiment. It then follows that this final configuration of the big system, including in particular the orientation of instrument pointers, will also be distributed in the quantum mechanical way, so that this deterministic Bohmian model yields the usual quantum predictions for the results of the experiment.
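For concreteness, the guiding equation referred to in the Stanford quote, for particle k of an N-particle system with actual configuration Q = (Q_1, ..., Q_N), has the standard form

```latex
\frac{dQ_k}{dt}
= \frac{\hbar}{m_k}\,
  \operatorname{Im}\!\left(\frac{\psi^*\,\nabla_k \psi}{\psi^*\,\psi}\right)
  \Bigg|_{(Q_1,\dots,Q_N)}
= \frac{J_k}{\rho},
```

where \(J_k\) is the quantum probability current and \(\rho = |\psi|^2\) the quantum probability density, which is the "J/ρ" ratio the quote highlights.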
akhmeteli said:
I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?

One last thing: note that Bell's proof strictly speaking showed that QM was incompatible with local realism if we assume that part of the definition of "realism" is that each measurement has a unique outcome, rather than each experiment splitting the experimenter into multiple copies who observe different outcomes. See the simple toy model I provided in post #11 of this thread showing how, if two experimenters Alice and Bob split into multiple copies on measurement and the universe doesn't have to decide which copy of Alice is matched to which copy of Bob until there's been time for a signal to pass between them, then we can get a situation where a randomly selected Alice-Bob pair will see statistics that violate Bell inequalities in a purely local model. Likewise, see my post #8 on this thread for links to various many-worlds advocates arguing that the interpretation is a purely local model.
 
Last edited:
  • #582
JesseM said:
This is kind of a strawman, no one is asking you to adopt a general principle along the lines of "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously".
I am happy that you don’t use this argument. But it does not look like a strawman to me. See, e.g., post 7 in this thread. Furthermore, Aspelmeyer and Zeilinger wrote as follows (see the reference in post 385 in this thread):
"But the ultimate test of Bell’s theorem is still missing:
a single experiment that closes all the loopholes at once.
It is very unlikely that such an experiment will disagree
with the prediction of quantum mechanics, since this
would imply that nature makes use of both the detection
loophole in the Innsbruck experiment and of the
locality loophole in the NIST experiment. Nevertheless,
nature could be vicious, and such an experiment is desirable
if we are to finally close the book on local realism."
While they are careful enough to avoid saying anything that is factually incorrect, they do use this argument. So this argument is indeed widely used.
JesseM said:
Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
I believe I addressed this statement in my previous post and I am not sure I have anything to add.

JesseM said:
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?

That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
I appreciate that my answer may look evasive, but I was not trying to sweep anything under the carpet, so maybe the question is not quite appropriate? Let me give you an example. Suppose I asked you whether the Schroedinger equation correctly describes all experiments performed so far -- yes or no? Strictly speaking, the correct answer is "no", because the equation is not relativistic and does not describe electron spin. But perhaps you'll agree that this "correct" answer is somewhat misleading, because this is a damn good equation :-) So if you want a yes or no answer, then no, the model I offer cannot describe all experiments performed so far, e.g., because it does not describe electron spin, as I said in my previous post. However, this is a quite decent model, as it includes the whole of scalar electrodynamics, a well-established theory.
JesseM said:
Why is this a "contradiction", if we don't assume that either the wavefunction or its collapse on measurement are in any sense "real", but just treat them as parts of a pragmatic recipe for making quantitative predictions about experimental results? Do you claim there are any situations where the two postulates don't lead to a unique prediction about the statistics we should expect to see in some empirical experiment? If so, what situation would that be?
According to the projection postulate, after a measurement, the system is in an eigenstate, so another measurement will produce the same result (say, if the relevant operator commutes with the Hamiltonian). According to unitary evolution, though, a measurement cannot turn a superposition of states into a mixture, so there is a probability that the next measurement will return a different result. If this is not a contradiction, what is? Another situation where the two postulates don’t lead to a unique prediction is, I believe, a loophole-free Bell experiment. You cannot get a violation using just unitary evolution.
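The claimed tension between the two postulates can be illustrated with a small qubit calculation (a sketch in numpy; the choice of the |+> state and the purity diagnostic tr(ρ²) are illustrative, not taken from the thread):

```python
# Contrast the projection postulate with pure unitary evolution for a
# single qubit prepared in the superposition |+> = (|0> + |1>)/sqrt(2).
import numpy as np

plus = np.array([1, 1]) / np.sqrt(2)
rho_pure = np.outer(plus, plus.conj())       # state before "measurement"

# Projection postulate: a z-measurement leaves the qubit in |0> or |1>;
# averaging over the two outcomes gives the diagonal mixture.
rho_collapsed = np.diag(np.diag(rho_pure))

# Unitary evolution alone never produces this mixture: a closed system
# stays in a pure state, so its purity tr(rho^2) stays 1.
purity = lambda r: np.trace(r @ r).real
print(purity(rho_pure))        # ≈ 1.0 (pure superposition)
print(purity(rho_collapsed))   # ≈ 0.5 (mixture after projection)

# The difference shows up in a subsequent x-basis measurement: the
# projector onto |+> equals rho_pure here.
print(np.trace(rho_pure @ rho_pure).real)       # ≈ 1.0 on the superposition
print(np.trace(rho_pure @ rho_collapsed).real)  # ≈ 0.5 on the mixture
```

This is only the single-system version of the point; whether the discrepancy survives once the measuring instrument is included in the unitary description is exactly what the rest of the exchange disputes.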

JesseM said:
Yes, this is just what the pragmatic recipe says we should do.

Personally I believe there are some true set of laws that describe what's "really" going on (I'd favor some type of many-worlds type view) and which work exactly the same for interactions between quantum systems and "instruments" as they do for interactions between individual particles.
This is just great, so we pretty much agree with each other. Then what seems to be the problem?:-)
JesseM said:
But again, if QM is treated just as a pragmatic recipe for making predictions which says nothing about the underlying "reality" one way or another, then in practice I don't think there is much ambiguity about what constitutes a "measurement", my understanding is that it's basically synonymous with interactions that involve environmental decoherence. And the types of experiments that physicists do are typically carefully controlled to prevent environmental decoherence from any other system besides the assigned "measuring device" (for example, a double-slit experiment with an electron will be done in a vacuum to prevent decoherence from interactions between the electrons and air molecules).
JesseM, again, it looks like we pretty much agree. I could agree, say, that the difference between unitary evolution and the projection postulate can be explained by environmental decoherence, but let us agree first on what we are talking about. This thread is not about quantum theory being good or bad; everybody agrees that it is extremely good. The question of this thread is whether local realism has been ruled out or not. You see, you are talking about something “pragmatic”, but the question of this thread is not exactly pragmatic. As I said earlier in this thread, Nature cannot be “approximately local” or “approximately nonlocal”; it is either precisely local or precisely nonlocal. Or, if you disagree, then please explain what “approximate locality” can possibly be, because I don’t have the slightest idea:-) So yes, quantum theory is extremely good, but this is not relevant to the issue at hand.

JesseM said:
I don't think it's likely to be wrong in reality since I favor some sort of variant of the many-worlds interpretation, but I do think it's hard to get concrete predictions about empirical results using unitary evolution alone
Again, I agree, but, as I noted in our previous discussion (https://www.physicsforums.com/showpost.php?p=1706652&postcount=78), you may just complement unitary evolution with the Born rule as an operational principle.
JesseM said:
You didn't actually give a link to the paper, but you seem to be talking about this one.
That’s correct. Though I did not give a direct link, post 41 referenced post 31, where there is a reference to the article:-) Sorry for the inconvenience:-)
JesseM said:
Anyway, Schlosshauer seems to be just arguing for the many-worlds interpretation (see the discussion beginning with 'The basic idea was introduced in Everett’s proposal of a relative-state view of quantum mechanics' on p. 1) and against any sort of objective collapse theory (see p. 13 where he talks about 'physical collapse models'--note that such models would actually be empirically distinguishable from ordinary QM in certain situations, like if information could be recorded and then 'erased' in a sufficiently large system completely isolated from environmental decoherence), but this is not the same as arguing that on a pragmatic level there's anything wrong with using the projection postulate to get quantitative predictions about experimental results. And it typically requires a lot of sophisticated argument to show how any many-worlds type interpretation can give concrete predictions in the form of probabilities (see the preferred basis problem), with no complete agreement among many-worlds advocates on how to do this (Schlosshauer discusses the problem on p. 14 of the paper, in the section 'Emergence of probabilities in a relative-state framework'); I think they all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate, though. Indeed, Schlosshauer says at the beginning of that section that "The question of the origin and meaning of probabilities in a relative state–type interpretation that is based solely on a deterministically evolving global quantum state, and the problem of how to consistently derive Born’s rule in such a framework, has been the subject of much discussion and criticism aimed at this type of interpretation." And a bit later he says "The solution to the problem of understanding the meaning of probabilities and of deriving Born’s rule in a relative-state framework must therefore be sought on a much more fundamental level of quantum mechanics."
Again, I agree that quantum theory is of great practical value, but we are not discussing practicality. Again, it seems we both agree that unitary evolution is always correct. However, it is worth mentioning that you are telling me both that you favor many-worlds interpretation(s) and that there is no “complete agreement” on how “any many-worlds type interpretation can give concrete predictions in the form of probabilities”. This means that “many-worlds” people can actually live without the projection postulate. They may “all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate”, but, strictly speaking, they are just unable to derive these probabilities. And it is good for them that they cannot derive those probabilities, because if they derived them from unitary evolution, that would mean that they made a mistake somewhere, as you cannot derive from unitary evolution something that directly contradicts it – the projection postulate. Let me emphasize that for all practical purposes you don’t need the Born rule or the projection postulate as precise principles – if they are approximately correct, they may be good enough for practice, but not when you are trying to understand whether Nature is local or not.

JesseM said:
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield.
There is no need to guess what he said, as I gave you the reference to what he actually said. He said that the projection postulate is an approximation in Bohmian mechanics. Of course, you are free to disagree with him, with me or anybody else, but if you do, just say so. Do you believe that the projection postulate can be derived in Bohmian mechanics as a precise principle? With all due respect, I strongly doubt that it can (for reasons I explained), so could you give me a reference to such a result? The Born rule is one thing, the projection postulate is something different.

JesseM said:
In section 4 of the Stanford article on Bohmian mechanics, they say:
Again, the Born rule is one thing, the projection postulate is something different. In the quote from Stanford encyclopedia (SE), I’d say, the Born rule is an operational principle. Furthermore, everything they say can be applied to the model I offer. Moreover, one can say that this model is a variant of Bohmian mechanics, which just happens to be local.

JesseM said:
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?
As I said, your SE quote above applies to this model. If you believe the Bohmian mechanics can reproduce “any of the empirical predictions about probabilities”, then why should you have a problem with this model? If you don’t believe that, well, at least this model is in good company:-)

JesseM said:
One last thing: note that Bell's proof strictly speaking showed that QM was incompatible with local realism if we assume that part of the definition of "realism" is that each measurement has a unique outcome, rather than each experiment splitting the experimenter into multiple copies who observe different outcomes. See the simple toy model I provided in post #11 of this thread showing how, if two experimenters Alice and Bob split into multiple copies on measurement and the universe doesn't have to decide which copy of Alice is matched to which copy of Bob until there's been time for a signal to pass between them, then we can get a situation where a randomly selected Alice-Bob pair will see statistics that violate Bell inequalities in a purely local model. Likewise, see my post #8 on this thread for links to various many-worlds advocates arguing that the interpretation is a purely local model.
I see. I am just not sure such radical ideas as many worlds are really necessary. Furthermore, as I said in our previous discussion, I believe unitary evolution implies that no measurement is ever final, so, strictly speaking, there are never any definite outcomes, but they may seem definite, as transitions between different states of a macroscopic instrument can take an eternity.

In general, I would say our positions have a lot in common.
 
  • #583
With all due respect akhmeteli, to a layman like me, this looks like "beating around the bush"...?

The title of your paper is: "IS NO DRAMA QUANTUM THEORY POSSIBLE?"

I could be wrong, but I interpret "NO DRAMA QUANTUM THEORY" as no "spooky action at a distance", i.e. local realism. But then you say:
Is it possible to offer a "no drama" quantum theory? Something as simple (in principle) as classical electrodynamics - a local realistic theory described by a system of partial differential equations in 3+1 dimensions, but reproducing unitary evolution of quantum theory in the configuration space?

Of course, the Bell inequalities cannot be violated in such a theory. This author has little, if anything, new to say about the Bell theorem, and this article is not about the Bell theorem. However, this issue cannot be "swept under the carpet" and will be discussed in Section 5 using other people's arguments.
(My emphasis)

In Section 5, you state:
In Section 3, it was shown that a theory similar to quantum field theory (QFT) can be built that is basically equivalent to non-second-quantized scalar electrodynamics on the set of solutions of the latter. However, the local realistic theory violates the Bell inequalities, so this issue is discussed below using other people's arguments.
I take for granted that this is a (calamitous) typo??
While the Bell inequalities cannot be violated in local realistic theories, there are some reasons to believe these inequalities cannot be violated either in experiments or in quantum theory. Indeed, there seems to be a consensus among experts that "a conclusive experiment falsifying in an absolutely uncontroversial way local realism is still missing".
(My emphasis)

To me this looks like a not very fair 'mixture' of; personal speculations + bogus statements + others statements concerning the current status of EPR-Bell experiments, resulting in the stupendous conclusion that Bell "inequalities cannot be violated either in experiments or in quantum theory" ...!:bugeye:?

And how on Earth is this 'compatible' with your initial statement:
This author has little, if anything, new to say about the Bell theorem, and this article is not about the Bell theorem.
?:confused:?

I trust in RUTA (Mark Stuckey). He’s a working PhD Professor of Physics:
RUTA said:
When I first entered the foundations community (1994), there were still a few conference presentations arguing that the statistical and/or experimental analyses of EPR-Bell experiments were flawed. Such talks have gone the way of the dinosaurs. Virtually everyone agrees that the EPR-Bell experiments and QM are legit, so we need a significant change in our worldview. There is a proper subset who believe this change will be related to the unification of QM and GR :-)
(My emphasis)

I looked at http://www.akhmeteli.org/ and there are no references at all...?

To me, this looks like "personal speculations", and not mainstream physics:
akhmeteli said:
... This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradicting postulates of the standard quantum theory (unitary evolution and projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.

And to be frank, your reasoning also looks dim. You are claiming a Local Realistic Model (LRM) that is not capable of violating Bell's Inequality, but that doesn’t matter, because – "these inequalities cannot be violated either in experiments or in quantum theory".

Exactly how do you derive "cannot" from your previous statements ...?:eek:?
 
  • #584
akhmeteli said:
Not exactly. I suspect that this prediction may be wrong, but I cannot claim that it is wrong. Indeed, I do understand that the violations can be found in a loophole-free experiment, say, tomorrow.

If the prediction is wrong, then QM is wrong. That's the bold assertion I'm fishing for :-)


akhmeteli said:
Following other people, I am just saying (right now, not tomorrow) that 1) there has been no evidence of violations of the genuine Bell inequalities so far,

Given the preponderance of experimental evidence and the highly contrived nature by which loopholes must exist to explain away violations of Bell inequalities, the foundations community long ago abandoned any attempt to save local realism. But, you're right, there are no truly "loophole free" experiments, so die-hard local realists can cling to hope.

akhmeteli said:
and that 2) mutually contradictory assumptions are required to derive the QM's prediction of the violation of Bell inequalities. Therefore, local realism has not been ruled out so far.

Are you talking about the measurement problem? That applies to all QM predictions, not just those that violate Bell inequalities.
 
  • #585
JesseM said:
There have been experiments that closed the detector efficiency loophole and experiments that closed the locality loophole, but no experiment that closed both loopholes simultaneously--still I think most experts would agree you'd need a very contrived local realist model to get correct predictions (agreeing with those of QM) for the experiments that have already been performed, but which would fail to violate Bell inequalities (in contradiction with QM) in an ideal experiment.
It does not require a contrived model to spot the likely source of systematic error in the NIST experiment (if that is the one you have in mind as the efficient-detection experiment).
In this experiment only one measurement is performed for both particles, and in that way the detection photons are subject to interference.
As the authors of that paper say: "Also, the detection solid angle is large enough that Young's interference fringes, if present are averaged out."
First, this interference effect of photons scattered from two ions has been experimentally verified, so there is no reason to say that there are no interference fringes ("negligible" might be a better word).
Second, the assumption that the interference effect of the detection photons is averaged out even when they are conditioned on different ion configurations is the same fair-sampling assumption as is used in the photon experiments.
 
  • #586
RUTA said:
If the prediction is wrong, then QM is wrong. That's the bold assertion I'm fishing for :-)
If a prediction of some green alternative theory is found to be wrong, then the theory is wrong.
If a prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:
 
  • #587
P.S. akhmeteli
... these inequalities cannot be violated either in experiments or in quantum theory ...

It would be interesting to hear your view on this:
http://plato.stanford.edu/entries/bell-theorem/
...
The incompatibility of Local Realistic Theories with Quantum Mechanics permits adjudication by experiments, some of which are described here. Most of the dozens of experiments performed so far have favored Quantum Mechanics, but not decisively because of the “detection loophole” or the “communication loophole.” The latter has been nearly decisively blocked by a recent experiment and there is a good prospect for blocking the former. The refutation of the family of Local Realistic Theories would imply that certain peculiarities of Quantum Mechanics will remain part of our physical worldview: notably, the objective indefiniteness of properties, the indeterminacy of measurement results, and the tension between quantum nonlocality and the locality of Relativity Theory.
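The incompatibility the Stanford entry describes is quantitative: for an ideal singlet state the CHSH combination of correlations exceeds the local realistic bound of 2. A few lines suffice to check this (a sketch; the angles are the standard optimal choice, and noise and detector inefficiency are ignored):

```python
# CHSH value predicted by QM for the singlet state, where the correlation
# at analyzer angles a, b is E(a, b) = -cos(a - b).  Any local realistic
# theory must satisfy |S| <= 2.
import numpy as np

E = lambda a, b: -np.cos(a - b)   # quantum prediction for the singlet

# Standard optimal angles (radians)
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # 2*sqrt(2) ≈ 2.83 > 2: QM violates the CHSH bound
```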


And while you’re at it: Could you please explain why not one (1) EPR-Bell experiment so far has clearly favored Local Realistic Theories? Not one (1).

And, if you have some extra spare time: Could you also explain how nature is providing the "detection loophole", which is regarded as the most 'severe'. I mean, if you look at this slide from Alain Aspect, it’s clear that this "magic LRM function" must be wobbling between "too much" and "too little" to provide the measured data. And last but not least, this "magic LRM function" must KNOW which photons are entangled or not?? (Looks like a very "spooky function" to me... :bugeye:)

[Slide from Alain Aspect's talk illustrating the detection loophole]
 
  • #588
zonde said:
If a prediction of some green alternative theory is found to be wrong, then the theory is wrong.
If a prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:

So, QM is alright as long as you don't have entangled states? Restrictions on applicability are acceptable when a theory is superseded, e.g., Newtonian dynamics is ok when v << c and was superseded by SR to account for v ~ c, but no one has a theory superseding QM that gets rid of its entangled states. And, unlike v ~ c prior to SR, we have the means to create and explore entangled states, and all such experiments vindicate QM.

No, zonde, this is not a mere restriction on the applicability of QM.
 
  • #589
akhmeteli said:
I am happy that you don’t use this argument. But it does not look like a strawman to me. See, e.g., post 7 in this thread. Furthermore, Aspelmeyer and Zeilinger wrote as follows (see the reference in post 385 in this thread):
"But the ultimate test of Bell’s theorem is still missing:
a single experiment that closes all the loopholes at once.
It is very unlikely that such an experiment will disagree
with the prediction of quantum mechanics, since this
would imply that nature makes use of both the detection
loophole in the Innsbruck experiment and of the
locality loophole in the NIST experiment. Nevertheless,
nature could be vicious, and such an experiment is desirable
if we are to finally close the book on local realism."
While they are careful enough to avoid saying anything that is factually incorrect, they do use this argument. So this argument is indeed widely used.
Nowhere in that quote do they imply it is true in general that "if X is true when condition Y but not condition Z holds, and X is also true when condition Z but not condition Y holds, then we can assume X is true when both conditions Y and Z hold simultaneously". Rather they refer to the specific conditions of the experiment when they say "It is very unlikely that such an experiment will disagree with the prediction of quantum mechanics, since this would imply that nature makes use of both the detection loophole in the Innsbruck experiment and of the locality loophole in the NIST experiment." It's quite possible (and I think likely) that the reason they consider it "unlikely" is because a theory making use of both loopholes would be very contrived-looking.
JesseM said:
Rather, the reason physicists think we can be pretty confident that Bell inequalities would be violated in an experiment where both loopholes were closed simultaneously has to do with specific considerations about the physical situation we're looking at, like the idea I already mentioned that it would require a very contrived local theory that would exploit both loopholes in just the right way that it would perfectly agree with QM in all experiments done to date.
akhmeteli said:
I believe I addressed this statement in my previous post and I am not sure I have anything to add.
You addressed it by suggesting your own model was non-contrived, but you didn't give a clear answer to my question about whether it can actually give statistical predictions about experiments so far like the Innsbruck experiment and the NIST experiment (or any experiments whatsoever, see below)--if it can't, then it obviously doesn't disprove the claim that any local realist theory consistent with experiments so far would have to be very contrived!
JesseM said:
Are you claiming that your model gives correct statistical predictions about the empirical results of all the Aspect-type experiments that have been done to date?

That seems like a slightly evasive answer, since you later say that you distinguish the unitary evolution aspect of QM/QFT from the projection postulate, and only claim that your model reproduces the unitary evolution, but isn't the projection postulate the only way to get actual predictions about empirical experiments from QM/QFT? Do you claim that your model can correctly predict actual empirical experimental results in the types of experiments that have been done to date, yes or no?
akhmeteli said:
I appreciate that my answer may look evasive, but I was not trying to sweep anything under the carpet, so maybe the question is not quite appropriate? Let me give you an example. Suppose I asked you whether the Schroedinger equation correctly describes all experiments performed so far. Yes or no? Strictly speaking, the correct answer is “no”, because the equation is not relativistic and does not describe the electronic spin. But perhaps you’ll agree that this “correct” answer is somewhat misleading, because this is a damn good equation :-) So if you want a yes or no answer, then no, the model I offer cannot describe all experiments performed so far, e.g., because it does not describe the electronic spin, and I said so in my previous post. However, this is quite a decent model, as it includes the entire scalar electrodynamics, a well-established theory.
OK, but can your model actually give "correct predictions about statistical results" for any actual experiments, or does it only reproduce the unitary evolution? If it can't predict actual real-valued statistics that are measured empirically, as opposed to complex amplitudes, then it isn't a local realist model that can explain any existing experiments (you may be able to derive probabilities from amplitudes using many-worlds type arguments, but as I said, part of the meaning of "local realism" is that each measurement yields a unique outcome).
akhmeteli said:
According to the projection postulate, after a measurement, the system is in an eigenstate, so another measurement will produce the same result (say, if the relevant operator commutes with the Hamiltonian). According to unitary evolution, though, a measurement cannot turn a superposition of states into a mixture, so there is a probability that the next measurement will return a different result.
Suppose we do a Wigner's friend type thought-experiment where we imagine a small quantum system that's first measured by an experimenter in an isolated box, and from our point of view this just causes the experimenter to become entangled with the system rather than any collapse occurring. Then we open the box and measure both the system and the record of the previous measurement taken by the experimenter who was inside, and we model this second measurement as collapsing the wavefunction. If the two measurements on the small system were of a type that according to the projection postulate should yield a time-independent eigenstate, are you claiming that in this situation where we model the first measurement as just creating entanglement rather than collapsing the wavefunction, there is some nonzero possibility that the second measurement will find that the record of the first measurement will be of a different state than the one we find on the second measurement? I'm not sure but I don't think that would be the case--even if we assume unitary evolution, as long as there is some record of previous measurements then the statistics seen when comparing the records to the current measurement should be the same as the statistics you'd have if you assumed the earlier measurements (the ones which resulted in the records) collapsed the wavefunction of the system being measured according to the projection postulate.

In any case, the projection postulate does not actually specify that each "measurement" must collapse the wavefunction onto an eigenstate in cases where you're performing a sequence of different measurements. The "pragmatic recipe" is entirely compatible with the notion that in a problem like this, the projection postulate should only be used once at the very end of the complete experiment, when you make a measurement of all the records that resulted from earlier measurements.
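The claim about records can be checked on a toy model (a sketch of the Wigner's-friend setup, with the friend's measurement modeled as a CNOT that copies the qubit's z-basis value into a "record" qubit; the amplitudes 0.6 and 0.8 are arbitrary choices, not from the thread):

```python
# Treat the first "measurement" as pure entanglement (a CNOT), apply the
# Born rule only once at the very end, and compare with what the
# projection postulate would give for the first measurement directly.
import numpy as np

alpha, beta = 0.6, 0.8                 # |alpha|^2 + |beta|^2 = 1
psi = np.array([alpha, beta])          # system qubit
record = np.array([1.0, 0.0])          # record qubit starts in |0>

# CNOT with the system as control and the record as target,
# in the basis ordering |system, record> = |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
joint = CNOT @ np.kron(psi, record)

# Born rule applied once, at the end, to the joint state:
probs = np.abs(joint) ** 2
print(probs)   # ≈ [0.36, 0, 0, 0.64]: the record always agrees with the
               # system, with the same probabilities (|alpha|^2, |beta|^2)
               # the projection postulate assigns to the first measurement.
```

At least for this simple repeated z-measurement, deferring the projection to the final step reproduces the collapse statistics, which is the point being argued above.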
akhmeteli said:
JesseM, again, it looks like we pretty much agree. I could agree, say, that the difference between unitary evolution and the projection postulate can be explained by environmental decoherence, but let us agree first what we are talking about. This thread is not about quantum theory being good or bad, everybody agrees that it is extremely good. The question of this thread is whether local realism has been ruled out or not.
But there are two aspects of this question--the first is whether local realism can be ruled out given experiments done so far, the second is whether local realism is consistent with the statistics predicted theoretically by QM. Even if you don't use the projection postulate to generate predictions about statistics, you need some real-valued probabilities for different outcomes, you can't use complex amplitudes alone since those are never directly measured empirically. And if we understand local realism to include the condition that each measurement has a unique outcome, then it is impossible to get these real-valued statistics from a local realist model.
akhmeteli said:
You see, you are talking about something “pragmatic”, but the question of this thread is not exactly pragmatic. As I said earlier in this thread, Nature cannot be “approximately local” or “approximately nonlocal”, it is either precisely local or precisely nonlocal.
No idea where you got the idea that I would be talking about "approximate" locality from anything in my posts. I was just talking about QM being a "pragmatic" recipe for generating statistical predictions, I didn't say that Bell's theorem and the definition of local realism were approximate or pragmatic. Remember, Bell's theorem is about any black-box experiment where two experimenters at a spacelike separation each have a random choice of detector setting, and each measurement must yield one of two binary results--nothing about the proof specifically assumes they are measuring anything "quantum", they might be choosing to ask one of three questions with yes-or-no answers to a messenger sent to them or something. Bell's theorem proves that according to local realism, any experiment of this type must obey some Bell inequalities. So then if you want to show that QM is incompatible with local realism, the only aspect of QM you should be interested in is its statistical predictions about some experiment of this type, all other theoretical aspects of QM are completely irrelevant to you. Unless you claim that the "pragmatic recipe" I described would actually make different statistical predictions about this type of experiment than some other interpretation of QM like Bohmian mechanics or the many-worlds-interpretation, then it's pointless to quibble with the pragmatic recipe in this context.
akhmeteli said:
Again, I agree, but, as I noted in our previous discussion (https://www.physicsforums.com/showpost.php?p=1706652&postcount=78), you may just complement unitary evolution with the Born rule as an operational principle.
But that won't produce a local realist theory where each measurement has a unique outcome. Suppose you have two separate computers, one modeling the amplitudes for various measurements which could be performed in the local region of one simulated experimenter "Alice", another modeling the amplitudes for various measurements which could be performed in the local region of another simulated experimenter "Bob", with the understanding that these amplitudes concerned measurements on a pair of entangled particles that were sent to Alice and Bob (who make their measurements at a spacelike separation). If you want to simulate Alice and Bob making actual measurements, and you must assume that each measurement yields a unique outcome (i.e. Alice and Bob don't each split into multiple copies as in the toy model I linked to at the end of my last post), then if the computers running the simulation are cut off from communicating with one another and neither computer knows in advance what measurement will be performed by the simulated experimenter on the other computer, then there is no way that such a simulation can yield the same Bell-inequality-violating statistics predicted by QM, even if you program the Born rule into each computer to convert amplitudes into probabilities which are used to generate the simulated outcome of each measurement. Do you disagree that there is no way to get the correct statistics predicted by any interpretation of QM in a setup like this where the computers simulating each experimenter are cut off from communicating? (which corresponds to the locality condition that events in regions with a spacelike separation can have no causal effect on one another)
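The two-isolated-computers argument can be sketched numerically. The particular local strategy below, sign(cos(setting − λ)) with a shared hidden variable λ, is a hypothetical choice used only for illustration; by Bell's theorem any strategy in which each side sees only its own setting and λ obeys the same bound:

```python
# Monte Carlo over a shared hidden variable lam; each "computer" computes
# its outcome from its own setting and lam alone (no communication).
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
lam = rng.uniform(0, 2 * np.pi, N)     # shared "hidden variable"

def outcome(setting, lam):
    # Each side sees only its own setting and lam, never the other side.
    return np.sign(np.cos(setting - lam))

def E(a, b):
    return np.mean(outcome(a, lam) * outcome(b, lam))

a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))   # ≈ 2.0: this strategy saturates the local bound of 2,
                # still short of the quantum value 2*sqrt(2) ≈ 2.83
```

Swapping in any other local response function changes the value of S but never pushes |S| past 2, which is the content of the locality condition in the simulation analogy.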
akhmeteli said:
Again, I agree that quantum theory is of great practical value, but we are not discussing practicality. Again, it seems we both agree that unitary evolution is always correct. However, it is worth mentioning that you are telling me both that you favor many-worlds interpretation(s) and that there is no “complete agreement” on how “any many-worlds type interpretation can give concrete predictions in the form of probabilities”. This means that “many-worlds” people can actually live without the projection postulate. They may “all agree that the probabilities should be the same as the ones given by the pragmatic recipe involving the projection postulate”, but, strictly speaking, they are just unable to derive these probabilities.
The problem is that there is no agreement on how the many-worlds interpretation can be used to derive any probabilities. If we're not convinced it can do so, then we might not view it as a full "interpretation" of QM yet; rather, it'd be more like an incomplete idea for how one might go about constructing an interpretation of QM in which measurement just caused the measuring system to become entangled with the system being measured.
akhmeteli said:
And it is good for them that they cannot derive those probabilities, because if they derived them from unitary evolution, that would mean that they made a mistake somewhere, as you cannot derive from unitary evolution something that directly contradicts it – the projection postulate.
See my comments above about the Wigner's friend type thought experiment. I am not convinced that you can actually find a situation where a series of measurements are made that each yield records of the result, such that using the projection postulate for each measurement gives different statistical predictions than if we just treat this as a giant entangled system which evolves in a unitary way, and then at the very end use the Born rule to find statistical expectations for the state of all the records of prior measurements. And as I said there as well, the projection postulate does not actually specify whether in a situation like this you should treat each successive measurement as collapsing the wavefunction onto an eigenstate or whether you should save the "projection" for the very last measurement.
JesseM said:
I don't think Demystifier was actually saying that there'd be situations where Bohmian mechanics would give different predictions about empirical results than the normal QM recipe involving the Born rule; I think he was just saying that in Bohmian mechanics the collapse is not "real" (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield.
akhmeteli said:
There is no need to guess what he said, as I gave you the reference to what he actually said.
I wasn't guessing what he said, I was guessing what he meant by what he said. What he said was only the very short statement "Yes, it is an approximation. However, due to decoherence, this is an extremely good approximation. Essentially, this approximation is as good as the second law of thermodynamics is a good approximation." I think this statement is compatible with my interpretation of what he may have meant, namely "in Bohmian mechanics the collapse is not 'real' (i.e. the laws governing measurement interactions are exactly the same as the laws governing other interactions) but just a pragmatic way of getting the same predictions a full Bohmian treatment would yield." Nowhere did he say that using the projection postulate will yield different statistical predictions about observed results than those predicted by Bohmian mechanics.
akhmeteli said:
The Born rule is one thing, the projection postulate is something different.
I think they are different only if you assume multiple successive measurements, understand "the projection postulate" to imply that each measurement collapses the wavefunction onto an eigenstate, and assume that for some of the measurements the records of the results are "erased" so that it cannot be known later what the earlier result was. If you are dealing with a situation where none of the measurement records are erased, I'm pretty sure that the statistics for the measurement results you get using the projection postulate will be exactly the same as the statistics you get if you model the whole thing as a giant entangled system and then use the Born rule at the very end to find the probabilities of different combinations of recorded measurement results. And once again, the "projection postulate" does not precisely define when projection should occur anyway; you are free to interpret it to mean that only the final measurement of the records at the end of the entire experiment actually collapses the wavefunction.
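As an illustration of why the two recipes agree when no records are erased, here is a small sketch (a hypothetical three-qubit example of my own, assuming NumPy is available): a qubit is "measured" first in the Z basis and then in the X basis by unitarily copying each result into a record qubit, and the Born rule is applied only once, to the final entangled state. The joint record statistics come out identical to the sequential collapse calculation.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
P0 = np.array([[1, 0], [0, 0]])  # |0><0|
P1 = np.array([[0, 0], [0, 1]])  # |1><1|

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Qubit ordering: |system, record1, record2>; records start in |0>.
alpha, beta = 0.6, 0.8
psi = kron(np.array([[alpha], [beta]]),
           np.array([[1], [0]]), np.array([[1], [0]])).ravel()

# Unitary "measurement" 1: copy the system's Z-basis value into record 1.
CNOT_z = kron(P0, I2, I2) + kron(P1, X, I2)
# Unitary "measurement" 2: copy the system's X-basis value into record 2
# (a Z-basis CNOT conjugated by Hadamards on the system).
Hs = kron(H, I2, I2)
CNOT_x = Hs @ (kron(P0, I2, I2) + kron(P1, I2, X)) @ Hs

final = CNOT_x @ CNOT_z @ psi

# Born rule applied once, at the end, to the two record qubits only.
probs_unitary = {}
for z in (0, 1):
    for x in (0, 1):
        amps = [final[(s << 2) | (z << 1) | x] for s in (0, 1)]
        probs_unitary[(z, x)] = sum(abs(a) ** 2 for a in amps)

# Sequential projection-postulate calculation: collapse after the Z
# measurement, then measure X on the collapsed eigenstate (P = 1/2 each).
probs_collapse = {(z, x): [alpha**2, beta**2][z] * 0.5
                  for z in (0, 1) for x in (0, 1)}

for k in probs_unitary:
    assert abs(probs_unitary[k] - probs_collapse[k]) < 1e-12
print(probs_unitary)
```

The agreement holds because the Z-branches stay orthogonal in record 1, so no interference can shift the joint record statistics; it would break only if the record were unitarily "erased" before the final step.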
 
  • #590
(continued from previous post)
JesseM said:
But if it only reproduces unitary evolution, can it reproduce any of the empirical predictions about probabilities made by the standard pragmatic recipe which includes the Born rule? Or can it only predict complex amplitudes, which can't directly be compared to empirical probabilities without making use of the Born rule or some subtle many-worlds type argument?
akhmeteli said:
As I said, your SE quote above applies to this model. If you believe the Bohmian mechanics can reproduce “any of the empirical predictions about probabilities”, then why should you have a problem with this model?
I think you misunderstood what I meant by "any" above, I wasn't asking if your model could reproduce any arbitrary prediction made by the "standard pragmatic recipe" (i.e. whether it would agree with the standard pragmatic recipe in every possible case, as I think Bohmian mechanics does). Rather, I was using "any" in the same sense as it's used in the question priests used to ask at weddings, "If any person can show just cause why they may not be joined together, let them speak now or forever hold their peace"--in other words, I was asking if there was even a single instance of a case where your model reproduces the probabilistic predictions of standard QM, or whether your model only deals with complex amplitudes that result from unitary evolution. The reason I asked this is that the statement of yours I was responding to was rather ambiguous on this point:
I don't think it differs in this respect, if you include the standard measurement theory in it. But I did not say the LRM reproduces both unitary evolution and the measurement theory of this QFT, it just reproduces its unitary evolution. As unitary evolution and measurement theory are mutually contradictory, I don't think the failure to reproduce the measurement theory is a weak point of the LRM.
If your model does predict actual measurement results, then if the model were applied to an experiment intended to test some Bell inequality, would it in fact predict an apparent violation of the inequalities both in experiments where the locality loophole was closed but not the detector efficiency loophole, and in experiments where the efficiency loophole was closed but not the locality loophole? I think you said your model would not predict violations of Bell inequalities in experiments with all loopholes closed--would you agree that if we model such experiments using unitary evolution plus the Born rule (perhaps applied to the records at the very end of the full experiment, after many trials had been performed, so we don't have to worry about whether applying the Born rule means we have to invoke the projection postulate), then we will predict violations of Bell inequalities even in loophole-free experiments? Likewise, would you agree that Bohmian mechanics also predicts violations in loophole-free experiments, and that many-worlds advocates would expect the same prediction even if there is disagreement on how to derive it?
 
  • #591
zonde said:
If the prediction of some green alternative theory is found to be wrong, then the theory is wrong.

If the prediction of a well-established theory with proven usefulness is found to be wrong, then the domain of its applicability is established instead. :wink:

I agree with this. Technically, theories should not be seen as "Proven" or "Wrong" or whatever; rather as "More Useful" or "Useless". And the scope/domain of a theory may need to be modified from time to time as new information arises. So a theory could remain useful in a narrowed domain if new information is acquired. Newtonian gravity after GR is an example. Still quite useful. I would definitely not call Newtonian gravity a wrong theory.
 
  • #592
JesseM said:
... OK, but can your model actually give "correct predictions about statistical results" for any actual experiments, or does it only reproduce the unitary evolution?
akhmeteli said:
... No, I definitely do not claim that (though there is an unfortunate typo in the article, which I will correct in the proofs). This LRM does not violate the Bell inequalities. But I don't think this is a weak point of the model for the reasons I explained in this thread:

1) There is no experimental evidence of violations of the genuine Bell inequalities so far;
2) Proofs of the Bell theorem use two mutually contradictory postulates of the standard quantum theory (unitary evolution and the projection postulate) to prove that the Bell inequalities are indeed violated in quantum theory.


This is how I get it, and I apologize in advance if it’s wrong:

According to akhmeteli, the LRM does not violate the Bell inequalities, but that doesn’t matter much, because according to akhmeteli, there is no experimental evidence of violations of Bell's Inequalities so far, and Bell's Theorem is faulty in using two mutually contradicting postulates of QM.

If I’m right, it doesn’t impress me... all he’s saying is that EPR-Bell experiments & Bell's Theorem are wrong, without delivering any proof of that claim.
 
  • #593
DrChinese said:
I agree with this. Technically, theories should not be seen as "Proven" or "Wrong" or whatever; rather as "More Useful" or "Useless". And the scope/domain of a theory may need to be modified from time to time as new information arises. So a theory could remain useful in a narrowed domain if new information is acquired. Newtonian gravity after GR is an example. Still quite useful. I would definitely not call Newtonian gravity a wrong theory.

Whether you choose to call Newtonian gravity "wrong," given that it has been superseded by GR, is semantics. The claim that QM should be restricted to use with non-entangled states is not at all consistent with this type of "wrong," b/c there is no theory superseding QM that clearly shows why QM's treatment of entangled states is wrong -- no semantics here, GR says clearly that Newtonian gravity fails in certain regimes, and tests of this claim vindicate GR. We have no such theory, claims, and vindication against QM's predictions for entangled states. Quite the contrary: we have many experiments consistent with QM's predictions for entangled states. Thus, there is a huge burden of proof for anyone claiming QM's prediction of Bell inequality violations is wrong and, in my opinion, this burden is nowhere near being fulfilled by the proponents of local realism.
 
  • #594
RUTA & DrC, this is interesting. If we assume that one day all EPR-Bell loopholes are closed simultaneously, and we all (maybe even ThomasT :wink:) agree that nonlocality and/or nonseparability is a fact; would that mean that Quantum Mechanics has proven Relativity Theory wrong (or slightly "useless")?
 
  • #595
RUTA said:
Whether you choose to call Newtonian gravity "wrong," given that it has been superseded by GR, is semantics. The claim that QM should be restricted to use with non-entangled states is not at all consistent with this type of "wrong," b/c there is no theory superseding QM that clearly shows why QM's treatment of entangled states is wrong -- no semantics here, GR says clearly that Newtonian gravity fails in certain regimes, and tests of this claim vindicate GR. We have no such theory, claims, and vindication against QM's predictions for entangled states. Quite the contrary: we have many experiments consistent with QM's predictions for entangled states. Thus, there is a huge burden of proof for anyone claiming QM's prediction of Bell inequality violations is wrong and, in my opinion, this burden is nowhere near being fulfilled by the proponents of local realism.

I agree with what you are saying, and note that I missed a big new chunk of the thread regarding akhmeteli's claims. So my bad for chiming in irrelevantly, as I do sometimes. akhmeteli's "suspicion" that QM makes a wrong prediction is strange given that every experiment performed to date is clearly within the predicted range of QM (but not of any prior LR model).

akhmeteli: My big question for your model is a familiar one. If it is local realistic, can you tell me what the correct (if QM is wrong in this regard) statistical predictions are for coincidences at a, b, c = 0, 120, 240 degrees? Can you supply a dataset which is indicative of the rules of your model?

Alice:
a b c
+ - +
- + +
- - +
+ - -

... or whatever you imagine a batch of Alices to be, independent of Bob. A local realistic model should be able to provide this. If not, it does not fulfill the claim of being realistic. And please, do not point me to your paper as proof. The proof is in the pudding, and I am looking to taste some.
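For context on what any such dataset is up against (a generic counting sketch of my own, not a claim about akhmeteli's model): enumerating all eight predetermined answer triples for a, b, c shows that the match rate at differing settings can never average below 1/3, while QM for photon polarization predicts cos²(120°) = 1/4 at these angles.

```python
from itertools import product, combinations

# Each row of a local realistic dataset is a predetermined answer triple
# for the three settings a, b, c = 0, 120, 240 degrees.
def match_rate(triple):
    # Fraction of the 3 distinct setting pairs whose answers agree.
    pairs = list(combinations(range(3), 2))  # (a,b), (a,c), (b,c)
    return sum(triple[i] == triple[j] for i, j in pairs) / len(pairs)

rates = {t: match_rate(t) for t in product("+-", repeat=3)}
print(min(rates.values()))  # 1/3: the floor for every triple, hence every dataset
print((-0.5) ** 2)          # 0.25: QM's prediction, cos^2(120 degrees)
```

Since every individual triple matches on at least 1/3 of the differing-setting pairs, no mixture of triples (i.e. no dataset like the one requested above) can reach QM's 1/4.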
 
  • #596
DevilsAvocado said:
RUTA & DrC, this is interesting. If we assume that one day all EPR-Bell loopholes are closed simultaneously, and we all (maybe even ThomasT :wink:) agree that nonlocality and/or nonseparability is a fact; would that mean that Quantum Mechanics has proven Relativity Theory wrong (or slightly "useless")?

GR is both causally local and separable, so if QM is "right," GR is "wrong." I would use DrC's semantic choice here and refuse to say GR is wrong :-) However, I have to admit an enormous bias -- my grade school records show my hero was Einstein; I chose my undergrad major in physics when I read about SR, and did my PhD in GR. So, for my own sanity, I must believe that GR is the local, separable approximation to the "correct" theory of gravity.

As an aside, we're working on just such a theory now -- nonseparable Regge calculus. Since our Relational Blockworld interpretation of QM and QFT assumes a nonseparable theory X underlying quantum physics*, we developed a "direct action," path integral approach over graphs for theory X. Regge calculus is a path integral approach over graphs for GR so, of course, that's where we expect to link theory X to classical physics. The only difference between Regge calculus and our approach is that our path integrals are "direct action," i.e., they link only sources. Since there are no source-free solutions in our theory X (this is the mathematics behind "nonseparability" in our approach), the vacuum solutions of GR are only approximations (as is its use of continuum mathematics). Anyway, it looks like nonseparable Regge calculus will survive the weak field tests of GR, but predict deviations from GR at large distances (galactic scales and larger). I'll keep you apprised :-)

*Here we follow the possibility articulated by Wallace (p 45) that, “QFTs as a whole are to be regarded only as approximate descriptions of some as-yet-unknown deeper theory,” which he calls “theory X.” Wallace, D.: In defence of naiveté: The conceptual status of Lagrangian quantum field theory. Synthese 151, 33-80 (2006).
 
  • #597
DevilsAvocado said:
RUTA & DrC, this is interesting. If we assume that one day all EPR-Bell loopholes are closed simultaneously, and we all (maybe even ThomasT :wink:) agree that nonlocality and/or nonseparability is a fact; would that mean that Quantum Mechanics has proven Relativity Theory wrong (or slightly "useless")?
As I mentioned at the end of post #581, there is a theoretical loophole in Bell's proof due to the implicit assumption that each measurement yields a unique outcome, so with a many-worlds-type interpretation you could have a local model consistent with observed violations of Bell inequalities in experiments with all the experimental loopholes closed:
One last thing: note that Bell's proof strictly speaking showed that QM was incompatible with local realism if we assume that part of the definition of "realism" is that each measurement has a unique outcome, rather than each experiment splitting the experimenter into multiple copies who observe different outcomes. See the simple toy model I provided in post #11 of this thread showing how, if two experimenters Alice and Bob split into multiple copies on measurement and the universe doesn't have to decide which copy of Alice is matched to which copy of Bob until there's been time for a signal to pass between them, then we can get a situation where a randomly selected Alice-Bob pair will see statistics that violate Bell inequalities in a purely local model. Likewise, see my post #8 on this thread for links to various many-worlds advocates arguing that the interpretation is a purely local model.
 
  • #598
JesseM said:
As I mentioned at the end of post #581, there is a theoretical loophole in Bell's proof due to the implicit assumption that each measurement yields a unique outcome, so with a many-worlds-type interpretation you could have a local model consistent with observed violations of Bell inequalities in experiments with all the experimental loopholes closed:

I'm familiar with the no-collapse account whereby the universe has many copies, each instantiating a possible experimental outcome, but I haven't heard of a no-collapse account whereby the universe itself is being split. Is that what you're saying? If so, in what "time" does the "reconstruction" take place? Sounds like you need a metatime and a cosmic conductor orchestrating the proper mix of outcomes, but I'll let you explain before commenting further :-)
 
  • #599
RUTA said:
I'm familiar with the no-collapse account whereby the universe has many copies, each instantiating a possible experimental outcome, but I haven't heard of a no-collapse account whereby the universe itself is being split. Is that what you're saying?
No, only individual systems are being split in a local manner. And "split" doesn't need to be taken too literally, we could imagine an ensemble of preexisting copies of each experimenter that are identical up to the point of measurement, and then at the moment of measurement some copies see one result while others see a different result (so this would be more like 'differentiating' rather than 'splitting'). The key point is just that if you have a bunch of copies of Alice over here and Bob over there, until there's been time for a signal to travel from Bob to Alice (moving at the speed of light or slower), there doesn't need to be any objective truth about whether a given copy of Alice is part of the same "world" as a copy of Bob who saw the result spin-up or a copy of Bob who saw the result spin-down (and once a signal has had time to reach Alice's position, this just causes the copies of Alice to split/differentiate further, so some copies of Alice that saw result spin-up would get a message saying Bob had gotten result spin-down, while others would get a message saying Bob had gotten result spin-up). If this is unclear, please take a look at the toy model I offered in post #11 here.
RUTA said:
If so, in what "time" does the "reconstruction" take place? Sounds like you need a metatime and a cosmic conductor orchestrating the proper mix of outcomes, but I'll let you explain before commenting further :-)
What do you mean by "reconstruction", and why do you think the splitting/differentiating would need to occur in "metatime" rather than ordinary time? Again, just look at the toy model, it's the sort of thing that could be simulated on a pair of classical computers (with an actual spacelike separation between the two simulated measurements on each computer) in realtime.
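A minimal sketch of such a simulation (my own reconstruction of the general idea, not the exact post #11 model; the sin²(θ/2) singlet weighting is the assumed QM joint distribution): each side produces both outcome-copies using only its local setting, and copies are paired into "worlds" only at the later meeting event, where both settings are locally available.

```python
import math
from itertools import product

def local_measure(setting):
    # Runs on one isolated computer: with no collapse, BOTH outcomes occur,
    # each instantiated by a copy of the experimenter. No remote info used.
    return [(setting, +1), (setting, -1)]

def pair_copies(alice_copies, bob_copies):
    # Runs later, at the meeting event, where both settings are locally
    # available (after a light-speed-or-slower signal). Pairing weights are
    # taken from the singlet joint distribution:
    # P(same outcome) = sin^2(theta/2) for angle theta between settings.
    weights = {}
    for (a_set, a_out), (b_set, b_out) in product(alice_copies, bob_copies):
        theta = math.radians(b_set - a_set)
        p_same = math.sin(theta / 2) ** 2
        weights[(a_out, b_out)] = (p_same if a_out == b_out else 1 - p_same) / 2
    return weights

joint = pair_copies(local_measure(0), local_measure(120))
p_same = joint[(+1, +1)] + joint[(-1, -1)]
print(round(p_same, 6))  # ~0.75, the QM value for 120 degrees -- unreachable
                         # if each isolated side had to commit to one outcome
```

The point is structural: because the outcome assignments are deferred to the meeting event, neither side's local dynamics ever depends on the remote setting, yet the matched pairs show the Bell-violating statistics.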
 
  • #600
JesseM said:
No, only individual systems are being split in a local manner. And "split" doesn't need to be taken too literally, we could imagine an ensemble of preexisting copies of each experimenter that are identical up to the point of measurement, and then at the moment of measurement some copies see one result while others see a different result (so this would be more like 'differentiating' rather than 'splitting'). The key point is just that if you have a bunch of copies of Alice over here and Bob over there, until there's been time for a signal to travel from Bob to Alice (moving at the speed of light or slower), there doesn't need to be any objective truth about whether a given copy of Alice is part of the same "world" as a copy of Bob who saw the result spin-up or a copy of Bob who saw the result spin-down (and once a signal has had time to reach Alice's position, this just causes the copies of Alice to split/differentiate further, so some copies of Alice that saw result spin-up would get a message saying Bob had gotten result spin-down, while others would get a message saying Bob had gotten result spin-up). If this is unclear, please take a look at the toy model I offered in post #11 here.

What do you mean by "reconstruction", and why do you think the splitting/differentiating would need to occur in "metatime" rather than ordinary time? Again, just look at the toy model, it's the sort of thing that could be simulated on a pair of classical computers (with an actual spacelike separation between the two simulated measurements on each computer) in realtime.

Ah, I read through the many exchanges you had with "colorspace" and I will only end up echoing his many complaints. We don't need to repeat that :-)
 