Entanglement: spooky action at a distance

  • Thread starter: Dragonfall
  • Tags: Entanglement

Summary
Entanglement, often referred to as "spooky action at a distance," is explained through Bell's Theorem, which shows that the outcomes of measurements on entangled particles are correlated in a way that defies the notion of independent randomness. The correlation follows a specific formula supported by experimental evidence, rejecting simpler models that suggest random outcomes. The design of EPR-Bell tests involves synchronized detection events that create interdependencies between measurements, which do not imply faster-than-light (FTL) communication. Discussions also highlight that the correlations arise from shared properties at the quantum level rather than any FTL influence or random pairing. Overall, the consensus is that entanglement does not facilitate instantaneous information transfer, as no physical evidence supports FTL transmissions.
  • #91


BTW, contrary to what some people say, MWI is not an example of locality and nonrealism. That's a bad misconception that MWI supporters like Vanesch (I suspect), or even Tegmark, Wallace, Saunders, Brown, etc., would object to.
 
  • #92


I don't understand the fuss about "instantaneous collapse". If you consider an entangled state like the two-spin singlet state:

(|+-> - |-+>)/√2

then when you measure the spin of one particle, your wavefunction gets entangled with the spin. So, if that's what we call collapse, and that's when information is transferred to us (in each of our branches), then information about spin 1 was already present in spin 2 and vice versa when the entangled two-spin state was created.
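For reference, here is the standard textbook algebra behind that claim (a minimal sketch; the normalization and the z-basis labels are the usual conventions, not anything specified in the post):

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\big(|{+}{-}\rangle - |{-}{+}\rangle\big),
\qquad
P({+}_1) = |\langle{+}{-}|\psi\rangle|^2 + |\langle{+}{+}|\psi\rangle|^2 = \tfrac{1}{2} + 0,
\]
\[
|\psi\rangle \;\xrightarrow{\ \text{outcome } {+}_1\ }\; |{+}{-}\rangle .
\]

The perfect anticorrelation is already encoded in \(|\psi\rangle\) when the pair is created; a measurement on spin 1 only selects which branch a given observer ends up in.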
 
  • #93


Count Iblis said:
I don't understand the fuss about "instantaneous collapse". If you consider an entangled state like the two-spin singlet state:

(|+-> - |-+>)/√2

then when you measure the spin of one particle, your wavefunction gets entangled with the spin. So, if that's what we call collapse, and that's when information is transferred to us (in each of our branches), then information about spin 1 was already present in spin 2 and vice versa when the entangled two-spin state was created.



<< Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. >>

This sentence makes no sense. The wavefunctions of the two "particles" (if you're just talking textbook QM) are spinor-valued, and therefore already contain spin, and when they are in the singlet state, they are already entangled in configuration space (by definition!). When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.
 
  • #94


Maaneli said:
When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.
Here's an intuitive view to explain the quantum postulates that we are using (IMHO): when something or someone demands an answer from a state vector (as an observable) by collapsing the wave function and observing it (say, on a screen), then the Universe is forced to give an answer whether it has one or not. The Universe cannot reply, 'Sorry, I don't know where the particle is; actually, I haven't got one. But you are demanding it, so I'll have to make a guess for you; I've no other choice, because your clunky apparatus and strange question are forcing me to answer.' It must answer our strange question. The only sensible answer it can give is a statistical one, because any other answer would be wrong.
 
  • #95


ThomasT said:
Sorry if I misparaphrased you, because you've helped a lot in elucidating these issues.

There is a classical situation which I think analogizes what is happening in optical Bell tests -- the polariscope. The measurement of intensity by the detector behind the analyzing polarizer in a polariscopic setup is analogous to the measurement of rate of coincidental detection in simple optical Bell setups. Extending between the two polarizers in a polariscopic setup is a singular sort of optical disturbance. That is, the disturbance that is transmitted by the first polarizer is identical to the disturbance that's incident on the analyzing polarizer. In an optical Bell setup, it's assumed that for a given emitted pair the disturbance incident on the polarizer at A is identical to the disturbance that's incident on the polarizer at B. Interestingly enough, both these setups produce a cos^2 functional relationship between changes in the angular difference of the crossed polarizers and changes in the intensity (polariscope) or rate of coincidence (Bell test) of the detected light.

Yes, but there's a world of difference. The light disturbance that reaches the second polarizer has undergone the measurement process of the first, and in fact has been altered by the first. As such, it is in a way not surprising that the result of the second polarizer is dependent on the *choice of measurement* (and hence on the specific alteration) of the first. The correlation is indeed given by the same formula, cos^2(angular difference), but that shouldn't be surprising in this case. The result of the second polarizer is in fact ONLY dependent on the state of the first polarizer: you can almost see the first polarizer as a SOURCE for the second one. So there is the evident possibility of a causal relation between "choice of angle of first polarizer" and "result of second polarizer".

What is much more surprising - in fact it is the whole mystery - in an EPR setup, is that two different particles (which may or may not have identical or correlated properties) are sent off to two remote experimental sites. As such there can of course be a correlation in the results of the two measurements, but these results shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance. In other words, the two measurements done by the two experimenters "should" be just statistical measurements on a "set of common properties" which are shared by the two particles (because of course they have a common source). And it is THIS kind of correlation which should obey Bell's theorem (statistical correlations of measurements of common properties) and it doesn't.
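For reference, here are the two expressions being compared, written out (standard textbook formulas for the ideal cases; the particular polarization-entangled state used in the Bell-test line is my assumption, since the thread doesn't fix one):

\[
\text{Polariscope (Malus's law):}\quad I(\Delta\theta) = I_0 \cos^2\Delta\theta ,
\]
\[
\text{Ideal optical Bell test, state } \tfrac{1}{\sqrt{2}}\big(|HH\rangle + |VV\rangle\big):\quad
P(\text{coincidence}) = \tfrac{1}{2}\cos^2(a-b), \qquad E(a,b) = \cos 2(a-b).
\]

Same functional form, but in the polariscope the second polarizer receives light already altered by the first, while in the Bell test the two measurements can be spacelike separated; that difference is what the rest of this exchange turns on.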

And yet it's a "common cause" (certainly not of the ordinary kind though) assumption that underlies the construction and application of the quantum mechanical models that pertain to the Bell tests, as well as the preparation and administration of the actual experiments.

Yes, but now it is up to you what you understand by common cause, but not of the ordinary kind. Because the "ordinary kind" includes all kinds of "common properties" (identical copies of datasets). So whatever is not the ordinary kind, it's going to be "very not ordinary".

OK, the correlations are due to unusual sorts of common causes then. This is actually easier to almost visualize in the experiments where they impart a similar torque to relatively large groups of atoms. The entire, separate groups are then entangled with respect to their common zapping. :smile: Or, isn't this the way you'd view these sorts of experiments?

Well, how do you visualize these "non-ordinary" common causes? Just about every mental picture you can think of falls in the class of "ordinary" common causes, which should respect Bell's theorem.

The problem with instantaneous-action-at-a-distance is that it's physically meaningless. An all-powerful invisible elf would solve the problem too. Just like instantaneous-actions-at-a-distance, the existence of all-powerful invisible elves is pretty hard to disprove. :smile:

I agree. Nevertheless, Newtonian gravity is "action at a distance", but indeed, it opens up the gate for arbitrary explanations, of the astrology kind, of just about any phenomenon. It's still less of a problem than superdeterminism, which would mean the end of science, though.

Nevertheless, I agree with you, and it is the fundamental difficulty I have with Bohmian mechanics, which would otherwise have been the best explanation for quantum phenomena. But from the moment, indeed, that the motion of an arbitrarily distant particle can induce an arbitrarily large force on a local particle here, "all bets are off".

I disagree. The most common sense option is common cause(s). Call it a working hypothesis -- one that has the advantage of not being at odds with relativity. You've already acknowledged that common cause is an option, just not normal common causes. Well, the submicroscopic behavior of light is a pretty mysterious subject, don't you think? Maybe the classical models of light are (necessarily?) incomplete enough so that a general and conclusive lhv explanation of Bell tests isn't (and maybe will never be) forthcoming.

No, it won't do. All "common sense" common causes are of the "ordinary" kind. So saying that it must be a common sense, but "non-ordinary" common cause is not going to help us.

I will tell you how *I* picture this (but I won't do this for too long, as I have done it at least a dozen times already on this forum). After all, we're not confronted with an *unexpected* phenomenon. We're verifying predictions of quantum theory! So what's the best way to at least *picture* what happens? Answer: look at quantum theory itself, which predicts this! You can obtain the results of an Aspect-like experiment using quantum theory and purely local interactions (the ones we use normally, such as electrodynamics). You just let the wavefunction evolve! And then you see that you get different observer states, which have seen different things, but *when they come together* they separate into the right branches with the right probabilities - which are nothing else but the observed correlations. That's nothing else but "many worlds". It solves the dilemma of the "correlations-at-a-distance" simply by stating that those correlations didn't happen "at the moment of measurement" - which simply created both possible outcomes - but that the correlations happened when the observers came together to compare their outcomes. In fact, all different versions of the observers came together to compare all their different possible sets of outcomes, and those that are most probable (those with the largest Hilbert norm) are simply those with the right correlations from the QM predictions.

Of course, now you have the weirdness of multiple worlds, but at least you have a clear picture of how the theory that correctly predicts the "incomprehensible outcomes" itself arrives at those outcomes.

I've worked this out several times here, I won't type all that stuff again.
 
  • #96


Maaneli said:
BTW, contrary to what some people say, MWI is not an example of locality and nonrealism. That's a bad misconception that MWI supporters like Vanesch (I suspect), or even Tegmark, Wallace, Saunders, Brown, etc., would object to.

The problem lies in the word "non-realism" and then in the right definition of "local". There are some papers out there that show that you can see unitary wavefunction evolution as a local process (as long as the implemented dynamics - the interactions - are local, of course), although that's better seen in the Heisenberg picture. I'm too lazy to look up the arxiv articles.
So MWI can be seen as respecting locality in a way. That's not surprising, given that unitary evolution respects Lorentz invariance (if the dynamics does so).

As to "realism", instead of calling it "non-realist", I'd rather call it "multi-realist". But that's semantics. The way MWI can get away with Bell is simply that at the moment of "measurement" at each side, there's no "single outcome", but rather both outcomes appear. It is only later, when the correlations are established, and hence when there is a local interaction between the observers that came together, that the actual correlations show up.
 
  • #97


It really is *relative* state, just like the first paper called it. There's no objective state of any particle before observation that everyone will agree on, so there's no one "true" reality.

Are there any other interpretations that preserve locality?
 
  • #98


Maaneli said:
<< Then when you measure the spin of one particle, your wavefunction gets entangled with the spin. >>

This sentence makes no sense. The wavefunctions of the two "particles" (if you're just talking textbook QM) are spinor-valued, and therefore already contain spin, and when they are in the singlet state, they are already entangled in configuration space (by definition!). When you "measure" the spin of one particle, you "collapse" the entangled spin states of the two "particles" to a definite spin outcome, and they are therefore no longer entangled.

In the MWI, there is no collapse; the wavefunction of the observer gets entangled with the two-spin state. I think that the "paradox" implied by instantaneous collapse is just an artifact of assuming that the observer collapses the wavefunction, while in reality this is an effective description.
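A minimal sketch of what "gets entangled" means here, continuing the singlet example from post #92 (the observer kets \(|O_\pm\rangle\) are illustrative notation of mine, not from the post):

\[
|O_0\rangle \otimes \tfrac{1}{\sqrt{2}}\big(|{+}{-}\rangle - |{-}{+}\rangle\big)
\;\longrightarrow\;
\tfrac{1}{\sqrt{2}}\big(|O_{+}\rangle|{+}{-}\rangle - |O_{-}\rangle|{-}{+}\rangle\big).
\]

The evolution is unitary throughout; each branch contains an observer who recorded one definite outcome, which is why no separate collapse event is needed in this picture.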
 
  • #99


If wavefunction collapse really happens, then that should be confirmed by experiments testing for violations of unitarity. Unitarity could perhaps be spontaneously broken as has been suggested in some recent publications...
 
  • #100


vanesch said:
Nevertheless, Newtonian gravity is "action at a distance", but indeed, it opens up the gate for arbitrary explanations, of the astrology kind, of just about any phenomenon. It's still less of a problem than superdeterminism, which would mean the end of science, though.

I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.
 
  • #101


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.

The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

In fact: there is no superdeterministic theory to critique. If there were, it would be an immediate target for falsification, and I know just where I'd start.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either. Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.
 
  • #102


DrChinese said:
I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. But I think the mathematical apparatus is already there in the standard model.

(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for the QM argument by Bohr that no explanation could be “More Complete” than QM.

That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”.

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local, not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not “More Complete” than CI.

I agree with you on ’t Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion; it belongs in Philosophy, not scientific debates.
 
  • #103
RandallB said:
(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for the QM argument by Bohr that no explanation could be “More Complete” than QM.

That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”.

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local, not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not “More Complete” than CI.

I agree with you on ’t Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion; it belongs in Philosophy, not scientific debates.




<< That specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn’t change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is “correct”. >>

Contrary to common belief, this is actually not true. Many times I have cited the work of leaders in those research areas who have recently shown the possibility of empirically testable differences. I will do so once again:

Generalizations of Quantum Mechanics
Philip Pearle and Antony Valentini
To be published in: Encyclopaedia of Mathematical Physics, eds. J.-P. Francoise, G. Naber and T. S. Tsun (Elsevier, 2006)
http://eprintweb.org/S/authors/quant-ph/va/Valentini/2

The empirical predictions of Bohmian mechanics and GRW theory
This talk was given on October 8, 2007, at the session on "Quantum Reality: Ontology, Probability, Relativity" of the "Shellyfest: A conference in honor of Shelly Goldstein on the occasion of his 60th birthday" at Rutgers University.
http://math.rutgers.edu/~tumulka/shellyfest/tumulka.pdf

The Quantum Formalism and the GRW Formalism
Authors: Sheldon Goldstein, Roderich Tumulka, Nino Zanghi
http://arxiv.org/abs/0710.0885

De Broglie-Bohm Prediction of Quantum Violations for Cosmological Super-Hubble Modes
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/2

Inflationary Cosmology as a Probe of Primordial Quantum Mechanics
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/1

Subquantum Information and Computation
Antony Valentini
To appear in 'Proceedings of the Second Winter Institute on Foundations of Quantum Theory and Quantum Optics: Quantum Information Processing', ed. R. Ghosh (Indian Academy of Science, Bangalore, 2002). Second version: shortened at editor's request; extra material on outpacing quantum computation (solving NP-complete problems in polynomial time)
Journal-ref. Pramana - J. Phys. 59 (2002) 269-277
http://eprintweb.org/S/authors/All/va/A_Valentini/11

Pilot-wave theory: Everett in denial? - Antony Valentini

" We reply to claims (by Tipler, Deutsch, Zeh, Brown and Wallace) that the pilot-wave theory of de Broglie and Bohm is really a many-worlds theory with a superfluous configuration appended to one of the worlds. Assuming that pilot-wave theory does contain an ontological pilot wave (a complex-valued field in configuration space), we show that such claims arise essentially from not interpreting pilot-wave theory on its own terms. Pilot-wave dynamics is intrinsically nonclassical, with its own (`subquantum') theory of measurement, and it is in general a `nonequilibrium' theory that violates the quantum Born rule. From the point of view of pilot-wave theory itself, an apparent multiplicity of worlds at the microscopic level (envisaged by some many-worlds theorists) stems from the generally mistaken assumption of `eigenvalue realism' (the assumption that eigenvalues have an ontological status), which in turn ultimately derives from the generally mistaken assumption that `quantum measurements' are true and proper measurements. At the macroscopic level, it might be argued that in the presence of quantum experiments the universal (and ontological) pilot wave can develop non-overlapping and localised branches that evolve just like parallel classical (decoherent) worlds, each containing atoms, people, planets, etc. If this occurred, each localised branch would constitute a piece of real `ontological Ψ-stuff' that is executing a classical evolution for a world, and so, it might be argued, our world may as well be regarded as just one of these among many others. This argument fails on two counts: (a) subquantum measurements (allowed in nonequilibrium pilot-wave theory) could track the actual de Broglie-Bohm trajectory without affecting the branching structure of the pilot wave, so that in principle one could distinguish the branch containing the configuration from the empty ones, where the latter would be regarded merely as concentrations of a complex-valued configuration-space field, and (b) such localised configuration-space branches are in any case unrealistic (especially in a world containing chaos). In realistic models of decoherence, the pilot wave is delocalised, and the identification of a set of parallel (approximately) classical worlds does not arise in terms of localised pieces of actual `Ψ-stuff' executing approximately classical motions; instead, such identification amounts to a reification of mathematical trajectories associated with the velocity field of the approximately Hamiltonian flow of the (approximately non-negative) Wigner function --- a move that is fair enough from a many-worlds perspective, but which is unnecessary and unjustified from a pilot-wave perspective because according to pilot-wave theory there is nothing actually moving along any of these trajectories except one (just as in classical mechanics or in the theory of test particles in external fields or a background spacetime geometry). In addition to being unmotivated, such reification begs the question of why the mathematical trajectories should not also be reified outside the classical limit for general wave functions, resulting in a theory of `many de Broglie-Bohm worlds'. Finally, because pilot-wave theory can accommodate violations of the Born rule and many-worlds theory (apparently) cannot, any attempt to argue that the former theory is really the latter theory (`in denial') must in any case fail. 
At best, such arguments can only show that, if approximately classical experimenters are confined to the quantum equilibrium state, they will encounter a phenomenological appearance of many worlds (just as they will encounter a phenomenological appearance of locality, uncertainty, and of quantum physics generally). From the perspective of pilot-wave theory itself, many worlds are an illusion. "
http://users.ox.ac.uk/~everett/abstracts.htm#valentini


So everything you said based on that initial assumption is null.

Also, superdeterminism, if implemented in an empirically adequate way as a replacement for nonlocality, would be just as valid as a nonlocal account of EPR, and therefore just as relevant to QM.
 
  • #104


vanesch said:
... but there's a world of difference. The light disturbance that reaches the second polarizer has undergone the measurement process of the first, and in fact has been altered by the first. As such, it is in a way not surprising that the result of the second polarizer is dependent on the *choice of measurement* (and hence on the specific alteration) of the first. The correlation is indeed given by the same formula, cos^2(angular difference), but that shouldn't be surprising in this case. The result of the second polarizer is in fact ONLY dependent on the state of the first polarizer: you can almost see the first polarizer as a SOURCE for the second one. So there is the evident possibility of a causal relation between "choice of angle of first polarizer" and "result of second polarizer".

What is much more surprising - in fact it is the whole mystery - in an EPR setup, is that two different particles (which may or may not have identical or correlated properties) are sent off to two remote experimental sites. As such there can of course be a correlation in the results of the two measurements, but these results shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance. In other words, the two measurements done by the two experimenters "should" be just statistical measurements on a "set of common properties" which are shared by the two particles (because of course they have a common source). And it is THIS kind of correlation which should obey Bell's theorem (statistical correlations of measurements of common properties) and it doesn't.
Look at what the two setups have in common, not how they're different.

I don't understand what you mean when you say that the correlations "shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance."

They don't, do they? They only depend on the angular difference between the crossed polarizers associated with paired incident disturbances. This angular difference changes instantaneously, no matter what the spatial separation, as A or B changes its polarizer setting. This isn't action-at-a-distance, though.

Anyway, the point of the polariscope analogy is that in both setups there is, in effect, a singular, identical optical disturbance extending from one polarizer to the other -- and that the functional relationship between the angular difference and rate of detection is the same in both. This seems to me to support the assumption that in the quantum experiments the polarizers at A and B are analyzing an identical optical disturbance at each end for each pair. B doesn't need to be influencing A, or vice versa, to produce this functional relationship. They just need to be analyzing the same thing at each end for each pair.
 
  • #105


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.


There is yet another way to relinquish the "free-will" postulate in QM. One can implement backwards causation, as Huw Price and Rod Sutherland have proposed; they have shown that it successfully reproduces the nonlocal correlations, as well as the empirical predictions of QM in general.
 
  • #106


DrChinese said:
The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

In fact: there is no superdeterministic theory to critique. If there were, it would be an immediate target for falsification, and I know just where I'd start.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either. Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.

I don't agree, because this is just a general essay; 't Hooft has also proposed more concrete models. Any well-defined model can always be falsified. What 't Hooft is saying when he raises superdeterminism is simply that we should not a priori reject deterministic models because of Bell's theorem.

And, of course, the models should not have conspiratorially chosen initial conditions of the microscopic degrees of freedom; the topic of the essay is precisely this point.
 
  • #107


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.

Mmm. I thought 't Hooft was better than this :-p

He argues in fact against the notion of free will - that's totally acceptable as an argument. But at no point does he actually argue how it comes about that in EPR-like experiments the correlations come out exactly as they do. Of course, the past initial conditions *could* be such that they both determine exactly the "choice" of the observer and the (hidden) state of the particles. Indeed, that's not impossible. He's simply arguing that there could indeed exist a deterministic dynamics, of which we know nothing, which mimics quantum theory such that this is the case. That's true of course. What he doesn't address, which is of course the main critique of the "superdeterminism" argument, is how this comes about so clearly in these EPR experiments, where, again, the "choice" can be implemented in myriads of different ways: humans who decide, computers, measurements of noise of different kinds (noise in a resistor, noise in, say, the light received from a distant star, recorded 10 days before, ...). This unknown dynamics has to be such that the correlations always appear in the same way (following, in fact, the simple predictions of quantum mechanics), starting out from arbitrary initial conditions, and such that whether it is the starlight in one case, the brain activity in another, or the digits in the decimal expansion of the number pi in yet another, each is correlated in exactly the right way with the particle states so that, lo and behold, our simple correlations come out again.
As 't Hooft argues, that's indeed not excluded. But that means that we have absolutely no clue what's really going on in nature, and that distant starlight of 10 days ago, brain activities and the state of particles in the lab are all correlated through an unknown dynamics - yet it ONLY turns out that we see it in particular experiments.

Couldn't Madame Soleil use the same argument then to claim that her astrology works in exactly the same way? That there IS a correlation between the positions of the planets and your love life in the next week? And that she found out a bit more about the hidden dynamics than all these scientists? If you read 't Hooft's paper and replace Bell's correlations by astrological correlations, it reads just as well.

EDIT: I realize that Madame Soleil is a French (and maybe European - I knew about her even before I lived in France) cultural reference: http://fr.wikipedia.org/wiki/Madame_Soleil
I don't seem to find any English version of this...
ah, there's this one: http://query.nytimes.com/gst/fullpage.html?res=9E00E6DA1239F933A05753C1A960958260
 
  • #108


ThomasT said:
B doesn't need to be influencing A, or vice versa, to produce this functional relationship. They just need to be analyzing the same thing at each end for each pair.

I think you really haven't understood Bell. If they were analyzing the same thing, then the correlations should obey Bell's inequalities. In the consecutive-analyzer setup, the first one modifies (or can modify) the light disturbance, "encodes" its angular setting in this light disturbance, and then the second can of course "measure the angular difference" (as the value of the angular setting of the first is encoded in the light pulse) and decide thereupon what the outcome can be.

But in the EPR setup, two identical light perturbations go to A and B respectively. When A sets a certain angle, this will interact with the disturbance in a certain way, but this interaction cannot depend upon which angle B decides to set up, and so not on the difference either. At B, in the same way, the disturbance arriving there cannot know what angular choice A has made, and hence doesn't know the angular difference between the two. At A, only the *absolute angle* and whatever was coded in the identical disturbances can determine the outcome, and at B, only the *absolute angle at B* and whatever was coded in the identical disturbances can determine the outcome. So maybe this disturbance carries with it a list of things to do whenever it encounters "absolute angle this" or that, and both disturbances carry the same list (or a similar list). THIS is exactly the kind of situation Bell analyzed. And then you find his inequalities.
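As an illustration of that last paragraph, here is a minimal numerical sketch of my own (not from the thread) of exactly such a "shared list" model: each pair carries one hidden polarization angle fixed at the source, each side computes its ±1 outcome from its own absolute angle and that shared hidden value alone, and the resulting correlation comes out linear in the angular difference rather than following the quantum cos 2(a - b) curve. The particular transmit/absorb rule is just one arbitrary choice of "list":

Code:
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# The "shared list": one hidden polarization angle per pair, fixed at the source.
lam = rng.uniform(0.0, np.pi, n)

def outcome(theta, lam):
    # Local deterministic rule: transmit (+1) if the analyzer is within
    # 45 degrees (mod 180) of the hidden polarization, else absorb (-1).
    d = np.abs((theta - lam + np.pi / 2) % np.pi - np.pi / 2)
    return np.where(d < np.pi / 4, 1, -1)

def E(theta_a, theta_b):
    # Correlation of the +/-1 outcomes over the ensemble of pairs.
    return np.mean(outcome(theta_a, lam) * outcome(theta_b, lam))

for deg in (0.0, 22.5, 45.0, 67.5, 90.0):
    t = np.radians(deg)
    print(f"{deg:5.1f} deg   shared-list E = {E(0.0, t):+.3f}   QM: {np.cos(2 * t):+.3f}")

At 22.5° the shared-list model gives about +0.50 against the quantum +0.71. Other deterministic rules change the shape of the curve, but, per Bell's theorem, no such rule can match cos 2(a - b) at all angles.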
 
  • #109


Maaneli said:
Contrary to common belief, this is actually not true. Many times I have cited the work of leaders in those research areas who have recently shown the possibility of empirically testable differences. I will do so once again:
…..
….. So everything you said based on that initial assumption is null.

Also, superdeterminism, if implemented in an empirically adequate way as a replacement for nonlocality, would be just as valid as a nonlocal account of EPR, and therefore just as relevant to QM.
To be clear, on superdeterminism you must mean “in an empirically local, although un-realistic, adequate way”. And yes, it is relevant to QM as an equivalent Non-Local theory. [Note the difference between local and Local; with a cap L we define Local = a reality requiring Causality, Locality & Realism]

As to rejecting my assumption that Non-Local theories have been unable to demonstrate which of them is “correct”, “more correct” or “superior” to any other Non-Local theory, I find no support in your citations.
All I see in those is talk about the possibility of experiments, based on arguments about what they think various interpretations “apparently” can or cannot predict. Beyond all the bravado and posturing by the different interpretations, I see no definitive experiments described that could plausibly be expected to be performed.

IMO the very nature of a Non-Local requirement leaves all of them essentially equivalent to CI, including BM superdeterminism.
 
  • #110


RandallB said:
To be clear, on superdeterminism you must mean “in an empirically local, although un-realistic, adequate way”. And yes, it is relevant to QM as an equivalent Non-Local theory. [Note the difference between local and Local; with a cap L we define Local = a reality requiring Causality, Locality & Realism]

As to rejecting my assumption that Non-Local theories have been unable to demonstrate which of them is “correct”, “more correct” or “superior” to any other Non-Local theory, I find no support in your citations.
All I see in those is talk about the possibility of experiments, based on arguments about what they think various interpretations “apparently” can or cannot predict. Beyond all the bravado and posturing by the different interpretations, I see no definitive experiments described that could plausibly be expected to be performed.

IMO the very nature of a Non-Local requirement leaves all of them essentially equivalent to CI, including BM superdeterminism.



I think you did not read those papers at all beyond the abstracts. The papers by Valentini clearly describe specific experiments. And it is necessary and sufficient to show that these theories make different empirical predictions to refute your point. I simply can't understand how you could argue otherwise.

BTW, I think your characterization of superdeterminism and local vs Local is totally confused. Superdeterminism doesn't mean non-realism at all. Please read Bell's paper "Free Variables and Local Causality".
 
  • #111


DrChinese said:
The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

Your subjective opinions should be backed up by some really good arguments; otherwise I see no reason to prefer them over those of a Nobel prize laureate in particle physics. So, can you show me why a superdeterministic universe makes science meaningless?

In fact: there is no superdeterministic theory to critique.

This is true, but the purpose of Bell's theorem is to reject some classes of possible theories, not to test developed theories. For the latter you need to show that the theory gives you Schrödinger's equation or something very similar to it.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either.

I don't know that argument so I cannot say if it should be taken seriously or not.

Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.

You should then explain how the assumption of free will, or the idea of non-realism, are not philosophical items. It seems to me that you want to define away every possibility that does not conform to your beliefs. In fact, unlike free will (which is basically a remnant of mind-brain dualism), superdeterminism is a very well-defined mathematical concept, and it is no less scientific than non-determinism, for example.
 
  • #112


Maaneli said:
There is yet another way to relinquish the "free-will" postulate in QM. One can implement backwards causation, as Huw Price and Rod Sutherland have proposed; they have shown that it successfully reproduces the nonlocal correlations, as well as the empirical predictions of QM in general.

Cramer's transactional interpretation is based on the same idea of backwards causation. However, in my opinion, "free will" should not be assumed at all. One should only think about particles (or whatever entities one might imagine) and their dynamics while developing a QM interpretation. Introducing ideas like backwards causation or non-locality just to preserve free will seems pretty absurd to me.
 
  • #113


vanesch said:
Mmm. I thought 't Hooft was better than this :-p

He argues in fact against the notion of free will - that's totally acceptable as an argument. But at no point does he actually argue how it comes about that in EPR-like experiments the correlations come out exactly as they do. Of course, the past initial conditions *could* be such that they both determine exactly the "choice" of the observer and the (hidden) state of the particles. Indeed, that's not impossible. He's simply arguing that there could indeed exist a deterministic dynamics, of which we know nothing, which mimics quantum theory such that this is the case. That's true of course. What he doesn't address, which is of course the main critique of the "superdeterminism" argument, is how this comes about so clearly in these EPR experiments, where, again, the "choice" can be implemented in myriads of different ways: humans who decide, computers, measurements of noise of different kinds (noise in a resistor, noise in, say, the light received from a distant star, recorded 10 days before, ...). This unknown dynamics has to be such that the correlations always appear in the same way (following, in fact, the simple predictions of quantum mechanics), starting out from arbitrary initial conditions, and such that whether it is the starlight in one case, the brain activity in another, or the digits in the decimal expansion of the number pi in yet another, each is correlated in exactly the right way with the particle states so that, lo and behold, our simple correlations come out again.
As 't Hooft argues, that's indeed not excluded. But that means that we have absolutely no clue what's really going on in nature, and that distant starlight of 10 days ago, brain activities and the state of particles in the lab are all correlated through an unknown dynamics - yet it ONLY turns out that we see it in particular experiments.

I think the problem with your line of reasoning comes from the fact that you look at those "myriads of different ways" to do the experiment from a macroscopic perspective. Humans, computers, stars are nothing but aggregates of the same types of particles, mainly electrons and quarks. The common cause that could be behind EPR correlations has to be related to the way in which these particles interact. I see absolutely no reason why I should expect the electrons in a human brain to behave differently from those that are part of a transistor.

Do you also expect that a human or a computer should not obey GR when put in some orbit around a planet because they are different?

Couldn't Madame Soleil use the same argument then to claim that her astrology works in exactly the same way? That there IS a correlation between the positions of the planets and your love life in the next week?

No, because such correlations do not exist. Astrology has been falsified already. If such correlations between my love life and the planets could be shown to be statistically significant I would certainly be curious about their origin.

And that she found out a bit more about the hidden dynamics than all these scientists? If you read 't Hooft's paper and replace Bell's correlations by astrological correlations, it reads just as well.

You are wrong, because you compare a case where such correlations are certain to a case where they are certainly not there. As I said before, astrology has been falsified; there is no reason to propose any explanation for a non-existing fact. But EPR correlations are real and in need of an explanation.
 
  • #114


ueit said:
Cramer's transactional interpretation is based on the same idea of backwards causation. However, in my opinion, "free will" should not be assumed at all. One should only think about particles (or whatever entities one might imagine) and their dynamics while developing a QM interpretation. Introducing ideas like backwards causation or non-locality just to preserve free will seems pretty absurd to me.


Maudlin has shown Cramer's Transactional interpretation to be physically inconsistent (since it has causal paradoxes).

Also, I don't think backwards causation at all preserves "free will". In Sutherland's model, it is never the case that measurement settings are random variables. The initial and final measurement settings make the theory completely deterministic. So there cannot possibly be a "free will" postulate.
 
  • #115


ueit said:
I think the problem with your line of reasoning comes from the fact that you look at those "myriads of different ways" to do the experiment from a macroscopic perspective. Humans, computers, stars are nothing but aggregates of the same types of particles, mainly electrons and quarks. The common cause that could be behind EPR correlations has to be related to the way in which these particles interact.

I understand that. I don't say that, in principle, superdeterminism is not possible.

But you make my point:

(about Madame Soleil)

No, because such correlations do not exist. Astrology has been falsified already. If such correlations between my love life and the planets could be shown to be statistically significant I would certainly be curious about their origin.

This is what I mean. If they had been found, then the answer would have been justified by superdeterminism. It's simply because they aren't observed (are they? :smile:) that superdeterminism isn't called on to explain them. But no problem: tomorrow Neptune tells my fortune, and superdeterminism can explain it.

So superdeterminism has *the potential* of even justifying astrology (if it were observed). As to their "origin", the superdeterminism answer would simply be "because they are correlated". In the same way as 't Hooft argues that the Bell correlations come about "because they are correlated". And the only way to find out about it is... to see that they are correlated.

Superdeterminism allows/explains/justifies ANY correlation, ANY time. It is even amazing that we don't find more of them! Astrology should be right. Or some version of it. Can't believe these things are uncorrelated.


You are wrong, because you compare a case where such correlations are certain to a case where they are certainly not there. As I said before, astrology has been falsified; there is no reason to propose any explanation for a non-existing fact. But EPR correlations are real and in need of an explanation.

What I'm saying is that superdeterminism has the potential to "explain" just any correlation. "Correlations happen." It can't be falsified at all. But worse, it opens the gate to explanations for any kind of correlation, without a direct causal link. If they are there, hey, superdeterminism. If they aren't, well, hey, superdeterminism. As such, superdeterminism destroys the possibility of observing causality. Here it's the correlation between starlight and particles. There, it is the correlation between having Rolexes and Ferraris. Smoking and cancer: superdeterminism. No necessary causal link or whatever. In other words, superdeterminism is the theory "things happen".

When you look at other basic principles of physics, they *constrain* possible observations. Superdeterminism allows all of them, and their opposite.
 
  • #116


't Hooft doesn't say that "they are correlated because of superdeterminism" is the final explanation. All he says is that the no-go theorems ruling out deterministic models have some small print (e.g. because of superdeterminism) and that therefore we should not a priori rule out such models.

So, one can imagine that there exist deterministic models that explain quantum mechanics (in a non-conspiratorial way; see the essay linked above), but there is a danger that no one will find them because no one will bother to study them, because of Bell's theorem.
 
  • #117


ueit said:
Your subjective opinions should be backed up by some really good arguments; otherwise I see no reason to prefer them over those of a Nobel prize laureate in particle physics. So, can you show me why a superdeterministic universe makes science meaningless?

If you posit a theory that says nothing more than "you appear to have a choice of observations but don't really" then you are really saying the results of experiments are not valid in some way. For if they were valid, then local realism fails.

As Vanesch pointed out, 't Hooft's paper is very high level. I really don't think his point has anything whatsoever to do with the Bell question. It really applies to science as a whole, and as such is more of a philosophical discussion. Regardless of the author's respected accomplishments, this paper does not deserve any greater status than someone saying that God controls all experimental outcomes. I would venture to say that thesis would be more palatable to many anyway.

The issue with a superdeterministic local realistic theory is that the entire future of the entire universe would need to be encoded in each and every particle we can observe. Otherwise there would be no way to keep the experimental results synced. That is a big requirement, and in my opinion it is falsifiable anyway - if a specific version were offered to discuss.
 
  • #118


Maaneli said:
I think you did not read those papers at all beyond the abstracts. The papers by Valentini clearly describe specific experiments. And it is necessary and sufficient to show that these theories make different empirical predictions to refute your point. I simply can't understand how you could argue otherwise.
Are you saying that any of those “specific experiments” are definitive experiments that can plausibly be expected to be performed?
Which one(s)? Why haven't they been performed?
And who, where, and how is someone currently working on performing them to finally make your point in the only way it can be made - with experimentally observed results?
BTW, I think your characterization of superdeterminism and local vs Local is totally confused.
Are you claiming superdeterminism is local and realistic?

As Dr C said, that would require that “the entire future of the entire universe would need to be encoded in each and every particle we can observe”. That in effect would be one huge Hidden Variable - if such a complex HV could be contained on a photon, then so could one single additional hidden variable. That single HV is all that would be needed to provide a more complete description than QM, nullifying the need for something as complex as superdeterminism.
 
  • #119


Count Iblis said:
So, one can imagine that there exist deterministic models that explain quantum mechanics (in a non-conspiratorial way; see the essay linked above), but there is a danger that no one will find them because no one will bother to study them, because of Bell's theorem.

I think it was understood from the start (even Bell mentions it in "Speakable and Unspeakable...") that superdeterminism is a way out. So it is not that this was an overlooked possibility. It was simply a possibility which one didn't consider fruitful in any way. Concerning superdeterministic relativistic models (if they don't have to be relativistic, we already have one - a normal deterministic theory: BM), I don't think one could find them, or at least verify these findings in some way. Indeed, the very acceptance of superdeterminism - that is, accepting that the dynamics will be such that "things that look like free will could have unavoidable statistical correlations, no matter how one does it", in other words, assuming that "things that look like free will cannot be assumed to be statistically independent" - simply takes away our only possibility of ever verifying any causal relationship for good. Moreover, "things that look like free will" are always very complicated physical processes, which are for the time being, and probably for a very long time still if not forever, intractable in their details; nevertheless it is in these details that a superdeterministic model finds its, well, superdeterminism.
So this means that it is simply impossible to come up with a genuine superdeterministic model. First of all, theoretically, we cannot show it to work - because we would have to work out in all detail and in all generality "processes which look like free will", which are FAPP intractable. And experimentally, we cannot show it, because by the very supposition of superdeterminism, we are unable to disentangle any experimental setup into clear cause-effect relationships, as "arbitrary correlations" can show up just anywhere.

THIS is why superdeterminism is undesirable. It is unworkable as a theory. If ever there is superdeterminism in nature - which is not excluded by any means - then that will be the end point of scientific investigation. Maybe that's what quantum mechanics is simply teaching us. But it is of no conceptual use at all.

EDIT: As Dr. C said, it is conceptually simpler, then, to assume that there's a god who makes things happen, and that it is His will to implement certain correlations between his actions (until this makes Him bored, and He'll change the rules).
Also, there's a very much simpler model which also accounts for just any correlation: God's book. A hypothetical list of all events (past, present and future) in the universe. What we discover as "laws of physics" are simply some regularities in that book.
 
  • #120


vanesch said:
If they were analyzing the same thing, then the correlations should obey Bell's inequalities.
I disagree. The QM formulation does assume that they're analyzing the same thing. What it doesn't do is specify any particular value, with respect to any particular coincidence interval, for that common property. So we apparently do have common cause and analysis of a common property producing experimental violation of the inequalities.


vanesch said:
In the consecutive-analyzer setup, the first one modifies (or can modify) the light disturbance, "encodes" its angular setting in this light disturbance, and then the second can of course "measure the angular difference" (as the value of the angular setting of the first is encoded in the light pulse) and decide thereupon what the outcome can be.

But in the EPR setup, two identical light perturbations go to A and B respectively. When A sets a certain angle, this will interact with the disturbance in a certain way, but this interaction cannot depend upon which angle B decides to set up, and so not on the difference either.
Suppose you have an optical Bell setup where you've located the A side a bit closer to the emitter than the B side so that A will always record a detection before B. It's a detection at A that starts the coincidence circuitry which then selects the detection attribute (1 for detection and 0 for no detection) associated with a certain time interval at B to be paired with the detection attribute, 1, recorded at A.

The intensity of the light transmitted by the second polarizer in the polariscopic setup is analogous to the rate of coincidental detection in the optical Bell setup.

Because we're always recording a 1 at A, this can be thought of as an optical disturbance of maximum intensity extending from the polarizer at A and incident on the polarizer at B for any coincidence interval. As in the polariscopic setup, when the polarizer at B (the second polarizer) is set parallel to A, the maximum rate of coincidental detection will be recorded -- and the rate of coincidental detection is a function of the angular difference between the setting of the polarizer at A and the one at B, just as with a polariscope.

The critical assumption in Bell's theorem is that the data streams accumulated at A and B are statistically independent. The experimental violations of the inequalities don't support the idea of direct causation between A and B, or that the correlations can't be caused by analysis of the same properties. Rather, they simply mean that there is a statistical dependency between A and B. This statistical dependency is a function of the experimental design(s) necessary to produce entanglement. In the simple optical Bell tests the dependency arises from the emission preparations and the subsequent need to match detection attributes via time-stamping.
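Whatever one makes of that statistical-dependence argument, the quantitative fact the whole thread turns on is easy to check. Below is a minimal numerical sketch of my own (not from the thread), assuming the ideal quantum correlation E(a, b) = cos 2(a - b) for polarization-entangled photons with outcomes coded as ±1; any model in which each outcome depends only on the local setting and shared pair properties must keep the CHSH combination within ±2:

Code:
import numpy as np

def E_qm(a, b):
    # Ideal quantum correlation for polarization-entangled photons,
    # with transmit/absorb outcomes coded as +1/-1.
    return np.cos(2 * (a - b))

# The standard CHSH setting choices (radians): Alice 0 and 45 degrees,
# Bob 22.5 and 67.5 degrees.
a1, a2 = 0.0, np.pi / 4
b1, b2 = np.pi / 8, 3 * np.pi / 8

S = E_qm(a1, b1) - E_qm(a1, b2) + E_qm(a2, b1) + E_qm(a2, b2)
print(S)  # ~2.828 = 2*sqrt(2), beyond the local bound |S| <= 2

Any "shared properties" account of the kind discussed earlier in the thread is bound by |S| <= 2, which is exactly what the reported experimental violations rule out.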
 
