Entanglement: spooky action at a distance

  • Thread starter: Dragonfall
  • Tags: Entanglement
  • #101


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.

The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

In fact: there is no superdeterministic theory to critique. If there were, it would be an immediate target for falsification and I know just where I'd start.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either. Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.
 
  • #102


DrChinese said:
I don't have a physical explanation for instantaneous collapse. I think this is a weak point in QM. But I think the mathematical apparatus is already there in the standard model.

(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for Bohr's argument that no explanation could be "More Complete" than QM.

That the specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn't change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is "correct".

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not "More Complete" than CI.

I agree with you on 't Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion, one that belongs in Philosophy, not scientific debate.
 
  • #103
RandallB said:
(Def: Local; requiring both Locality & Realism)

IMO not having a physical explanation for any of the Non-Local specifications (oQM instantaneous collapse; deBB guide waves; GRW; MWI etc.) is a STRONG point for Bohr's argument that no explanation could be "More Complete" than QM.

That the specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn't change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is "correct".

One could apply the Law of Parsimony to claim the high ground, but to use Ockham requires a complete physical explanation (not just a mathematical apparatus) that would in effect be Local not Non-Local. And a Local physical explanation not being possible is the one thing all these have in common, which is all Bohr needs to retain the point that they are not "More Complete" than CI.

I agree with you on 't Hooft's support of superdeterminism – IMO a weak sophist argument not suitable for scientific discussion, one that belongs in Philosophy, not scientific debate.




<< That the specifications of interpretations like deBB, GRW, MWI etc. are empirically equivalent to QM doesn't change that. Each is just as unable (incomplete) to provide evidence (experimental or otherwise) as to which approach is "correct". >>

Contrary to common belief, this is actually not true. Many times I have cited the work of leaders in those research areas who have recently shown the possibility of empirically testable differences. I will do so once again:

Generalizations of Quantum Mechanics
Philip Pearle and Antony Valentini
To be published in: Encyclopaedia of Mathematical Physics, eds. J.-P. Francoise, G. Naber and T. S. Tsun (Elsevier, 2006)
http://eprintweb.org/S/authors/quant-ph/va/Valentini/2

The empirical predictions of Bohmian mechanics and GRW theory
This talk was given on October 8, 2007, at the session on "Quantum Reality: Ontology, Probability, Relativity" of the "Shellyfest: A conference in honor of Shelly Goldstein on the occasion of his 60th birthday" at Rutgers University.
http://math.rutgers.edu/~tumulka/shellyfest/tumulka.pdf

The Quantum Formalism and the GRW Formalism
Authors: Sheldon Goldstein, Roderich Tumulka, Nino Zanghi
http://arxiv.org/abs/0710.0885

De Broglie-Bohm Prediction of Quantum Violations for Cosmological Super-Hubble Modes
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/2

Inflationary Cosmology as a Probe of Primordial Quantum Mechanics
Antony Valentini
http://eprintweb.org/S/authors/All/va/A_Valentini/1

Subquantum Information and Computation
Antony Valentini
To appear in 'Proceedings of the Second Winter Institute on Foundations of Quantum Theory and Quantum Optics: Quantum Information Processing', ed. R. Ghosh (Indian Academy of Science, Bangalore, 2002). Second version: shortened at editor's request; extra material on outpacing quantum computation (solving NP-complete problems in polynomial time)
Journal-ref. Pramana - J. Phys. 59 (2002) 269-277
http://eprintweb.org/S/authors/All/va/A_Valentini/11

Pilot-wave theory: Everett in denial? - Antony Valentini

" We reply to claims (by Tipler, Deutsch, Zeh, Brown and Wallace) that the pilot-wave theory of de Broglie and Bohm is really a many-worlds theory with a superfluous configuration appended to one of the worlds. Assuming that pilot-wave theory does contain an ontological pilot wave (a complex-valued field in configuration space), we show that such claims arise essentially from not interpreting pilot-wave theory on its own terms. Pilot-wave dynamics is intrinsically nonclassical, with its own (`subquantum') theory of measurement, and it is in general a `nonequilibrium' theory that violates the quantum Born rule. From the point of view of pilot-wave theory itself, an apparent multiplicity of worlds at the microscopic level (envisaged by some many-worlds theorists) stems from the generally mistaken assumption of `eigenvalue realism' (the assumption that eigenvalues have an ontological status), which in turn ultimately derives from the generally mistaken assumption that `quantum measurements' are true and proper measurements. At the macroscopic level, it might be argued that in the presence of quantum experiments the universal (and ontological) pilot wave can develop non-overlapping and localised branches that evolve just like parallel classical (decoherent) worlds, each containing atoms, people, planets, etc. If this occurred, each localised branch would constitute a piece of real `ontological Ψ-stuff' that is executing a classical evolution for a world, and so, it might be argued, our world may as well be regarded as just one of these among many others. This argument fails on two counts: (a) subquantum measurements (allowed in nonequilibrium pilot-wave theory) could track the actual de Broglie-Bohm trajectory without affecting the branching structure of the pilot wave, so that in principle one could distinguish the branch containing the configuration from the empty ones, where the latter would be regarded merely as concentrations of a complex-valued configuration-space field, and (b) such localised configuration-space branches are in any case unrealistic (especially in a world containing chaos). In realistic models of decoherence, the pilot wave is delocalised, and the identification of a set of parallel (approximately) classical worlds does not arise in terms of localised pieces of actual `Ψ-stuff' executing approximately classical motions; instead, such identification amounts to a reification of mathematical trajectories associated with the velocity field of the approximately Hamiltonian flow of the (approximately non-negative) Wigner function --- a move that is fair enough from a many-worlds perspective, but which is unnecessary and unjustified from a pilot-wave perspective because according to pilot-wave theory there is nothing actually moving along any of these trajectories except one (just as in classical mechanics or in the theory of test particles in external fields or a background spacetime geometry). In addition to being unmotivated, such reification begs the question of why the mathematical trajectories should not also be reified outside the classical limit for general wave functions, resulting in a theory of `many de Broglie-Bohm worlds'. Finally, because pilot-wave theory can accommodate violations of the Born rule and many-worlds theory (apparently) cannot, any attempt to argue that the former theory is really the latter theory (`in denial') must in any case fail. 
At best, such arguments can only show that, if approximately classical experimenters are confined to the quantum equilibrium state, they will encounter a phenomenological appearance of many worlds (just as they will encounter a phenomenological appearance of locality, uncertainty, and of quantum physics generally). From the perspective of pilot-wave theory itself, many worlds are an illusion. "
http://users.ox.ac.uk/~everett/abstracts.htm#valentini


So everything you said based on that initial assumption is null.

Also, superdeterminism, if implemented in an empirically adequate way in replacement of nonlocality, would be just as valid as a nonlocal account of EPR, and therefore just as relevant to QM.
 
  • #104


vanesch said:
... but there's a world of difference. The light disturbance that reaches the second polarizer has undergone the measurement process of the first, and in fact has been altered by the first. As such, it is in a way not surprising that the result of the second polarizer is dependent on the *choice of measurement* (and hence on the specific alteration) of the first. The correlation is indeed given by the same formula, cos^2(angular difference), but that shouldn't be surprising in this case. The result of the second polarizer is in fact ONLY dependent on the state of the first polarizer: you can almost see the first polarizer as a SOURCE for the second one. So there is the evident possibility of a causal relation between "choice of angle of first polarizer" and "result of second polarizer".

What is much more surprising - in fact it is the whole mystery - in an EPR setup, is that two different particles (which may or may not have identical or correlated properties) are sent off to two remote experimental sites. As such there can of course be a correlation in the results of the two measurements, but these results shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance. In other words, the two measurements done by the two experimenters "should" be just statistical measurements on a "set of common properties" which are shared by the two particles (because of course they have a common source). And it is THIS kind of correlation which should obey Bell's theorem (statistical correlations of measurements of common properties) and it doesn't.
Look at what the two setups have in common, not how they're different.

I don't understand what you mean when you say that the correlations "shouldn't depend on the explicit choice made by one or other experimenter if we exclude action-at-a-distance."

They don't, do they? They only depend on the angular difference between the crossed polarizers associated with paired incident disturbances. This angular difference changes instantaneously no matter what the spatial separation as A or B changes polarizer setting. This isn't action-at-a-distance though.

Anyway, the point of the polariscope analogy is that in both setups there is, in effect, a singular, identical optical disturbance extending from one polarizer to the other -- and that the functional relationship between the angular difference and rate of detection is the same in both. This seems to me to support the assumption that in the quantum experiments the polarizers at A and B are analyzing an identical optical disturbance at each end for each pair. B doesn't need to be influencing A, or vice versa, to produce this functional relationship. They just need to be analyzing the same thing at each end for each pair.
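
For reference, a minimal sketch of the two formulas being compared here, assuming ideal polarizers and a |Φ+⟩-type polarization-entangled photon pair (the notation is added for illustration):

[tex]
I_{\mathrm{out}} = I_1 \cos^2(\theta_1 - \theta_2), \qquad P(+,+) = \tfrac{1}{2}\cos^2(\theta_A - \theta_B)
[/tex]

Here I_1 is the intensity emerging from the first polarizer of the polariscope (Malus's law), and P(+,+) is the quantum coincidence probability that both photons pass their respective polarizers. Both expressions depend only on the angular difference, which is what the analogy rests on.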
 
  • #105


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.


There is yet another way to relinquish the "free-will" postulate in QM. One can implement backwards causation, as Huw Price and Rod Sutherland have proposed; they have shown that it can successfully reproduce the nonlocal correlations, as well as the empirical predictions of QM in general.
 
  • #106


DrChinese said:
The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

In fact: there is no superdeterministic theory to critique. If there were, it would be an immediate target for falsification and I know just where I'd start.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either. Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.

I don't agree, because this is just a general essay; 't Hooft has also proposed more concrete models. Any well defined model can always be falsified. What 't Hooft is saying when he raises superdeterminism is simply that we should not a priori reject deterministic models because of Bell's theorem.

And, of course, the models should not have conspiratorially chosen initial conditions of the microscopic degrees of freedom; the essay is precisely about this point.
 
  • #107


ueit said:
I think you should read ’t Hooft's paper:

http://arxiv.org/PS_cache/quant-ph/pdf/0701/0701097v1.pdf

He replaces the poorly defined, if not logically absurd, notion of "free will" with the "unconstrained initial state" assumption. This way, all those (IMHO very weak, anyway) arguments against superdeterminism should be dropped.

Mmm. I thought 't Hooft was better than this :-p

He argues in fact against the notion of free will - that's totally acceptable as an argument. But at no point does he actually argue how it comes about that in EPR-like experiments, the correlations come out exactly as they do. Of course, the past initial conditions *could* be such that they both determine exactly the "choice" of the observer and the (hidden) state of the particles. Indeed, that's not impossible. He's simply arguing that there could indeed exist a deterministic dynamics, of which we know nothing, and which mimics quantum theory, such that this is the case. That's true of course. What he doesn't address, which is of course the main critique of the "superdeterminism" argument, is how this comes about so clearly in these EPR experiments, where, again, the "choice" can be implemented in myriads of different ways: humans who decide, computers, measurements of noise of different kinds (noise in a resistor, noise in, say, the light received from a distant star, recorded 10 days before,...). This unknown dynamics has to be such that the correlations appear always in the same way (following in fact the simple predictions of quantum mechanics), starting out from arbitrary initial conditions, and such that in one case it is the starlight, in another case it is the brain activity, in yet another case it is the numbers in the decimal expansion of the number pi, ... that is exactly correlated in the right way with the particle states, so that, lo and behold, our simple correlations come out again.
As 't Hooft argues, that's indeed not excluded. But that means that we have absolutely no clue of what's really going on in nature, and that distant starlight of 10 days ago, brain activities and the state of particles in the lab are all correlated through an unknown dynamics, yet it turns out that we ONLY see it in particular experiments.

Couldn't Madame Soleil use the same argument then to claim that her astrology works in exactly the same way ? That there IS a correlation between the positions of the planets and your love life in the next week ? And that she found out a bit more about the hidden dynamics than all these scientists ? If you read 't Hooft's paper, and you replace Bell's correlations by astrological correlations, it reads just as well.

EDIT: I realize that Madame Soleil is a French (and maybe European, I knew about her even before I lived in France) cultural reference: http://fr.wikipedia.org/wiki/Madame_Soleil
I don't seem to find any English version of this...
ah, there's this one: http://query.nytimes.com/gst/fullpage.html?res=9E00E6DA1239F933A05753C1A960958260
 
  • #108


ThomasT said:
B doesn't need to be influencing A, or vice versa, to produce this functional relationship. They just need to be analyzing the same thing at each end for each pair.

I think you really haven't understood Bell. If they were analyzing the same thing, then the correlations should obey Bell's inequalities. In the consecutive analyzer setup, the first one modifies (or can modify) the light disturbance, "encoding" its angular setting in this light disturbance, and then the second can of course "measure the angular difference" (as the value of the angular setting of the first is encoded in the light pulse), and decide thereupon what the outcome can be.

But in the EPR setup, two identical light perturbations go to A and B respectively. When A sets a certain angle, this will interact with this disturbance in a certain way, but this interaction cannot depend upon which angle B decides to set up, and so it cannot depend on the angular difference either. At B, in the same way, the disturbance arriving there cannot know what angular choice A has made, and hence doesn't know the angular difference between the two. At A, only the *absolute angle* and whatever was coded in the identical disturbances can determine what the outcome is, and at B, only the *absolute angle at B* and whatever was coded in the identical disturbances can determine what the outcome is. So maybe this disturbance carries with it a list of things to do whenever it encounters "absolute angle this" or that, and both disturbances carry the same list (or a similar list). THIS is exactly the kind of situation Bell analyzed. And then you find his inequalities.
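
As an illustration of the "same list carried by both disturbances" picture, here is a minimal Monte-Carlo sketch in Python, assuming one simple instruction rule (a shared hidden polarization angle, with "click iff the polarizer is within 45 degrees of it"); the rule is only an example of such a list, not anything from the posts above. It compares the same-outcome rate of this local model with the quantum prediction cos^2(θ_A − θ_B) for a |Φ+⟩ pair:

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
lam = rng.uniform(0.0, 180.0, n)   # shared hidden polarization angle, one per pair

def passes(theta_deg):
    # the assumed "instruction": click iff the polarizer is within 45 deg of lam
    d = np.abs((theta_deg - lam + 90.0) % 180.0 - 90.0)
    return d < 45.0

def lhv_same_rate(delta_deg):
    # same-outcome rate of the local instruction-list model, A at 0 deg, B at delta
    return np.mean(passes(0.0) == passes(delta_deg))

def qm_same_rate(delta_deg):
    # quantum prediction for a |Phi+> polarization-entangled pair
    return np.cos(np.radians(delta_deg)) ** 2

for d in (0.0, 22.5, 45.0, 67.5, 90.0):
    print(f"delta = {d:4.1f} deg   list model ~ {lhv_same_rate(d):.3f}   QM = {qm_same_rate(d):.3f}")
[/code]

At 22.5 degrees this instruction-list model gives about 0.75 while the quantum prediction is about 0.85: this particular list falls short at intermediate angles, and Bell's theorem says that no assignment of such lists can match the quantum curve at every angle.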
 
  • #109


Maaneli said:
Contrary to common belief, this is actually not true. Many times I have cited the work of leaders in those research areas who have recently shown the possibility of empirically testable differences. I will do so once again:
…..
….. So everything you said based on that initial assumption is null.

Also, superdeterminism, if implemented in an empirically adequate way in replacement of nonlocality, would be just as valid as a nonlocal account of EPR, and therefore just as relevant to QM.
To be clear on superdeterminism, you must mean "in an empirically adequate, local although un-realistic way". And yes, it is relevant to QM as an Equivalent Non-Local theory. [Note the difference between local and Local; with a cap L we define Local = a reality requiring Causality, Locality & Realism]

As to rejecting my assumption that Non-Local theories have been unable to demonstrate which of them is "correct", "more correct" or "superior" to any other Non-Local Theory, I find no support in your citations.
All I see in those is talk about the possibility of experiments based on arguments about what they think various interpretations "apparently" can or cannot predict. Beyond all the bravado and posturing by different interpretations I see no definitive experiments described that could plausibly be expected to be performed.

IMO the very nature of a Non-Local requirement leaves all of them essentially equivalent to CI including BM superdeterminism.
 
  • #110


RandallB said:
To be clear on superdeterminism, you must mean "in an empirically adequate, local although un-realistic way". And yes, it is relevant to QM as an Equivalent Non-Local theory. [Note the difference between local and Local; with a cap L we define Local = a reality requiring Causality, Locality & Realism]

As to rejecting my assumption that Non-Local theories have been unable to demonstrate which of them is "correct", "more correct" or "superior" to any other Non-Local Theory, I find no support in your citations.
All I see in those is talk about the possibility of experiments based on arguments about what they think various interpretations "apparently" can or cannot predict. Beyond all the bravado and posturing by different interpretations I see no definitive experiments described that could plausibly be expected to be performed.

IMO the very nature of a Non-Local requirement leaves all of them essentially equivalent to CI including BM superdeterminism.



I think you did not read those papers at all beyond the abstracts. The papers by Valentini clearly describe specific experiments. And it is necessary and sufficient to show that these theories make different empirical predictions to refute your point. I simply can't understand how you could argue otherwise.

BTW, I think your characterization of superdeterminism and local vs Local is totally confused. Superdeterminism doesn't mean non-realism at all. Please read Bell's paper "Free Variables and Local Causality".
 
  • #111


DrChinese said:
The title of 't Hooft's paper should really be "The Free Will Postulate in Science" or even better "How God has tricked everyone into believing their experimental results are meaningful".

Your subjective opinions should be backed up by some really good arguments; otherwise I see no reason to prefer them over those of a Nobel prize laureate in particle physics. So, can you show me why a superdeterministic universe makes science meaningless?

In fact: there is no superdeterministic theory to critique.

This is true, but the purpose of Bell's theorem is to reject some classes of possible theories and not to test developed theories. For the latter you need to show that the theory gives you Schroedinger's equation or something very similar to it.

You know, there is also a theory that the universe is only 10 minutes old. I don't think that argument needs to be taken seriously either.

I don't know that argument so I cannot say if it should be taken seriously or not.

Superdeterminism is more of a philosophical discussion item, and in my mind does not belong in the quantum physics discussion area. It has nothing whatsoever to do with QM.

You should then explain how the assumption of free-will, or the idea of non-realism, is not a philosophical item. It seems to me that you want to define away every possibility that does not conform to your beliefs. In fact, unlike free-will (which is basically a remnant of mind-brain dualism), superdeterminism is a very well defined mathematical concept and it is no less scientific than non-determinism, for example.
 
  • #112


Maaneli said:
There is yet another way to relinquish the "free-will" postulate in QM. One can implement backwards causation, as Huw Price and Rod Sutherland have proposed; they have shown that it can successfully reproduce the nonlocal correlations, as well as the empirical predictions of QM in general.

Cramer's transactional interpretation is based on the same idea of backwards causation. However, in my opinion, "free-will" should not be assumed at all. One should only think about particles (or whatever entities one might imagine) and their dynamics while developing a QM interpretation. Introducing ideas like backwards causation or non-locality just in order to preserve free-will seems pretty absurd to me.
 
  • #113


vanesch said:
Mmm. I thought 't Hooft was better than this :-p

He argues in fact against the notion of free will - that's totally acceptable as an argument. But at no point does he actually argue how it comes about that in EPR-like experiments, the correlations come out exactly as they do. Of course, the past initial conditions *could* be such that they both determine exactly the "choice" of the observer and the (hidden) state of the particles. Indeed, that's not impossible. He's simply arguing that there could indeed exist a deterministic dynamics, of which we know nothing, and which mimics quantum theory, such that this is the case. That's true of course. What he doesn't address, which is of course the main critique of the "superdeterminism" argument, is how this comes about so clearly in these EPR experiments, where, again, the "choice" can be implemented in myriads of different ways: humans who decide, computers, measurements of noise of different kinds (noise in a resistor, noise in, say, the light received from a distant star, recorded 10 days before,...). This unknown dynamics has to be such that the correlations appear always in the same way (following in fact the simple predictions of quantum mechanics), starting out from arbitrary initial conditions, and such that in one case it is the starlight, in another case it is the brain activity, in yet another case it is the numbers in the decimal expansion of the number pi, ... that is exactly correlated in the right way with the particle states, so that, lo and behold, our simple correlations come out again.
As 't Hooft argues, that's indeed not excluded. But that means that we have absolutely no clue of what's really going on in nature, and that distant starlight of 10 days ago, brain activities and the state of particles in the lab are all correlated through an unknown dynamics, yet it turns out that we ONLY see it in particular experiments.

I think the problem with your line of reasoning comes from the fact that you look at those "myriads of different ways" to do the experiment from a macroscopic perspective. Humans, computers, stars are nothing but aggregates of the same type of particles, mainly electrons and quarks. The common cause that could be behind EPR correlations has to be related to the way in which these particles interact. I see absolutely no reason why I should expect the electrons in a human brain to behave differently from those in a transistor.

Do you also expect that a human or a computer should not obey GR when put in some orbit around a planet because they are different?

Couldn't Madame Soleil use the same argument then to claim that her astrology works in exactly the same way ? That there IS a correlation between the positions of the planets and your love life in the next week ?

No, because such correlations do not exist. Astrology has been falsified already. If such correlations between my love life and the planets could be shown to be statistically significant I would certainly be curious about their origin.

And that she found out a bit more about the hidden dynamics than all these scientists ? If you read 't Hooft's paper, and you replace Bell's correlations by astrological correlations, it reads just as well.

You are wrong because you compare a case where such correlations are certain to a case where they are certainly not there. As I said before, astrology has been falsified; there is no reason to propose any explanation for a non-existing fact. But EPR correlations are real and in need of an explanation.
 
  • #114


ueit said:
Cramer's transactional interpretation is based on the same idea of backwards causation. However, in my opinion, "free-will" should not be assumed at all. One should only think about particles (or whatever entities one might imagine) and their dynamics while developing a QM interpretation. Introducing ideas like backwards causation or non-locality just in order to preserve free-will seems pretty absurd to me.


Maudlin has shown Cramer's Transactional interpretation to be physically inconsistent (since it has causal paradoxes).

Also, I don't think backwards causation at all preserves "free will". In Sutherland's model, it is never the case that measurement settings are random variables. The initial and final measurement settings make the theory completely deterministic. So there cannot possibly be a "free will" postulate.
 
  • #115


ueit said:
I think the problem with your line of reasoning comes from the fact that you look at those "myriads of different ways" to do the experiment from a macroscopic perspective. Humans, computers, stars are nothing but aggregates of the same type of particles, mainly electrons and quarks. The common cause that could be behind EPR correlations has to be related to the way in which these particles interact.

I understand that. I don't say that, in principle, superdeterminism is not possible.

But you make my point:

(about Madame Soleil)

No, because such correlations do not exist. Astrology has been falsified already. If such correlations between my love life and the planets could be shown to be statistically significant I would certainly be curious about their origin.

This is what I mean. If they had been found, then the explanation would have been superdeterminism. It's simply because they aren't observed (are they ? :smile:) that superdeterminism isn't called on to explain them. But no problem, if tomorrow Neptune tells my fortune, superdeterminism can explain it.

So superdeterminism has *the potential* of even justifying astrology (if it were observed). As to their "origin", the superdeterminism answer would simply be "because they are correlated". In the same way as 't Hooft argues that the Bell correlations come about "because they are correlated". And the only way to find out about it is... to see that they are correlated.

Superdeterminism allows/explains/justifies ANY correlation, ANY time. It is even amazing that we don't find more of them! Astrology should be right. Or some version of it. Can't believe these things are uncorrelated.


You are wrong because you compare a case where such correlations are certain to a case where they are certainly not there. As I said before, astrology has been falsified; there is no reason to propose any explanation for a non-existing fact. But EPR correlations are real and in need of an explanation.

What I'm saying is that superdeterminism has the potential of "explanation" of just any correlation. "Correlations happen". It can't be falsified at all. But worse, it opens the gate to explanations for any kind of correlation, without direct causal link. If they are there, hey, superdeterminism. If they aren't, well, hey, superdeterminism. As such, superdeterminism destroys the possibility of observing causality. Here it's the correlation between starlight and particles. There, it is the correlation between having Rolexes and Ferraris. Smoking and cancer, superdeterminism. No necessary causal link or whatever. In other words, superdeterminism is the theory "things happen".

When you look at other basic principles of physics, they *constrain* possible observations. Superdeterminism allows all of them, and their opposite.
 
  • #116


't Hooft doesn't say that "they are correlated because of superdeterminism" is, by itself, the final explanation. All he says is that the no-go theorems ruling out deterministic models have some small print (e.g. because of superdeterminism) and that therefore we should not a priori rule out such models.

So, one can imagine that there exist deterministic models that explain quantum mechanics (in a non-conspiratorial way, see the essay linked above), but there is a danger that no one would find them because no one would bother to study them because of Bell's theorem.
 
  • #117


ueit said:
Your subjective opinions should be backed up by some really good arguments; otherwise I see no reason to prefer them over those of a Nobel prize laureate in particle physics. So, can you show me why a superdeterministic universe makes science meaningless?

If you posit a theory that says nothing more than "you appear to have a choice of observations but don't really" then you are really saying the results of experiments are not valid in some way. For if they were valid, then local realism fails.

As Vanesch pointed out, 't Hooft's paper is very high level. I really don't think his point has anything whatsoever to do with the Bell question. It really applies to science as a whole, and as such is more of a philosophical discussion. Regardless of the author's respected accomplishments, this paper does not deserve any greater status than someone saying that God controls all experimental outcomes. I would venture to say that thesis would be more palatable to many anyway.

The issue with a superdeterministic local realistic theory is that the entire future of the entire universe would need to be encoded in each and every particle we can observe. Otherwise there would be no way to keep the experimental results sync'd. That is a big requirement, and in my opinion is falsifiable anyway - if a specific version were offered to discuss.
 
  • #118


Maaneli said:
I think you did not read those papers at all beyond the abstracts. The papers by Valentini clearly describe specific experiments. And it is necessary and sufficient to show that these theories make different empirical predictions to refute your point. I simply can't understand how you could argue otherwise.
Are you saying that any of those “specific experiments” are definitive experiments that can plausibly be expected to be performed ??
Which one(s)? Why haven’t they been performed?
And who, where, and how is someone currently working on performing them to finally make your point in the only way it can be – with experimentally observed results?
BTW, I think your characterization of superdeterminism and local vs Local is totally confused.
Are you claiming superdeterminism is local and realistic ?

As Dr C said, that would require "the entire future of the entire universe would need to be encoded in each and every particle we can observe". That in effect would be one huge Hidden Variable – if such a complex HV could be contained on a photon then so could one single additional hidden variable. That single HV is all that would be needed to provide a more complete description than QM, nullifying the need for such a complex thing as superdeterminism.
 
  • #119


Count Iblis said:
So, one can imagine that there exist deterministic models that explain quantum mechanics (in a non-conspiratorial way, see the essay linked above), but there is a danger that no one would find them because no one would bother to study them because of Bell's theorem.

I think it was understood from the start (even Bell mentions it in "speakable and unspeakable...") that superdeterminism is a way out. So it is not that this was an overlooked possibility. It was simply a possibility which one didn't consider fruitful in any way. Concerning superdeterministic relativistic models (if they don't have to be relativistic, we already have one - a normal deterministic theory: BM), I don't think one could find them, or at least, verify these findings in some way. Indeed, the very acceptance of superdeterminism - that is, accepting that the dynamics will be such that "things that look like free will could have unavoidable statistical correlations, no matter how one does it", in other words, assuming that "things that look like free will cannot be assumed to be statistically independent" - simply takes away our only possibility to ever have a way of verifying any causal relationship for good. Moreover, "things that look like free will" are always very complicated physical processes, which are for the time being, and probably for a very long time still if not forever, intractable in their details, but nevertheless it is in these details that a superdeterministic model finds its, well, superdeterminism.
So this means that it is simply impossible to come up with a genuine superdeterministic model. First of all, because theoretically, we cannot show it to work - because we would have to work out in all detail and in all generality, "processes which look like free will" which are FAPP intractable. And experimentally, we cannot show it, because by the very supposition of superdeterminism, we are unable to disentangle any experimental setup into clear cause-effect relationships, as "arbitrary correlations" can show up just anywhere.

THIS is why superdeterminism is undesirable. It is unworkable as a theory. If ever there is superdeterminism in nature - which is not excluded by any means - then that will be the end point of scientific investigation. Maybe that's what quantum mechanics is simply teaching us. But it is of no conceptual use at all.

EDIT: As Dr. C said: it is conceptually then simpler to assume that there's a god which makes things happen, and it is His will to implement certain correlations between his actions (until this makes Him bored, and he'll change the rules).
Also, there's a very much simpler model which also accounts for just any correlation. God's book. A hypothetical list of all events (past present and future) in the universe. What we discover as "laws of physics" are simply some regularities in that book.
 
  • #120


vanesch said:
If they were analyzing the same thing, then the correlations should obey Bell's inequalities.
I disagree. The qm formulation does assume that they're analyzing the same thing. What it doesn't do is specify any particular value wrt any particular coincidence interval for that common property. So, we apparently do have common cause and analysis of a common property producing experimental violation of the inequalities.


vanesch said:
In the consecutive analyzer setup, the first one modifies (or can modify) the light disturbance, "encoding" its angular setting in this light disturbance, and then the second can of course "measure the angular difference" (as the value of the angular setting of the first is encoded in the light pulse), and decide thereupon what the outcome can be.

But in the EPR setup, two identical light perturbations go to A and B respectively. When A sets a certain angle, this will interact with this disturbance in a certain way, but this interaction cannot depend upon which angle B decides to set up, and so it cannot depend on the angular difference either.
Suppose you have an optical Bell setup where you've located the A side a bit closer to the emitter than the B side so that A will always record a detection before B. It's a detection at A that starts the coincidence circuitry which then selects the detection attribute (1 for detection and 0 for no detection) associated with a certain time interval at B to be paired with the detection attribute, 1, recorded at A.

The intensity of the light transmitted by the second polarizer in the polariscopic setup is analogous to the rate of coincidental detection in the optical Bell setup.

Because we're always recording a 1 at A, then this can be thought of as an optical disturbance of maximum intensity extending from the polarizer at A and incident on the polarizer at B for any coincidence interval. As in the polariscopic setup, when the polarizer at B (the second polarizer) is set parallel to A, then the maximum rate of coincidental detection will be recorded -- and the rate of coincidental detection is a function of the angular difference between the setting of the polarizer at A and the one at B, just as with a polariscope.

The critical assumption in Bell's theorem is that the data streams accumulated at A and B are statistically independent. The experimental violations of the inequalities don't support the idea of direct causation between A and B, or that the correlations can't be caused by analysis of the same properties. Rather, they simply mean that there is a statistical dependency between A and B. This statistical dependency is a function of the experimental design(s) necessary to produce entanglement. In the simple optical Bell tests the dependency arises from the emission preparations and the subsequent need to match detection attributes via time-stamping.
 
  • #121


RandallB said:
Are you saying that any of those “specific experiments” are definitive experiments that can plausibly be expected to be performed ??
Which one(s)? Why haven’t they been performed?
And who, where, and how is someone currently working on performing them to finally make your point in the only way it can be – with experimentally observed results? Are you claiming superdeterminism is local and realistic ?

As Dr C said, that would require "the entire future of the entire universe would need to be encoded in each and every particle we can observe". That in effect would be one huge Hidden Variable – if such a complex HV could be contained on a photon then so could one single additional hidden variable. That single HV is all that would be needed to provide a more complete description than QM, nullifying the need for such a complex thing as superdeterminism.


<< Are you saying that any of those “specific experiments” are definitive experiments that can plausibly be expected to be performed ?? >>

YES! That's exactly right. There are definitive experiments that can be performed to test those predictions.

<< Which one(s)? Why haven’t they been performed? >>

Er, well I don't know why you would ask this if you really had looked at those references in any detail, as they answer these specific questions. With regard to the second one, you'll notice that many of these results are extremely new, and that is part of the reason.

<< Are you claiming superdeterminism is local and realistic ? >>

Local, probably; realistic, yes. Bell makes this very clear in his Free Variables and Local Causality paper. Would you like me to elaborate further?
 
  • #122


Vanesch,

As to "realism", instead of calling it "non-realist", I'd rather call it "multi-realist". But that's semantics. The way MWI can get away with Bell is simply that at the moment of "measurement" at each side, there's no "single outcome", but rather both outcomes appear. It is only later, when the correlations are established, and hence when there is a local interaction between the observers that came together, that the actual correlations show up.

Is it really just semantics? I think there is a pretty sharp conceptual difference between saying something is nonrealist versus multi-realist. The latter is still a distinct form of realism! I suspect you wouldn't say the difference between nondeism (belief in no deity) and polydeism (belief in multiple deities) is a trivial one.

Yes, in the sense you described, that is why I would call MWI a 'local' account of Bell inequality violations.
 
  • #123


Maaneli said:
<< Are you saying that any of those “specific experiments” are definitive experiments that can plausibly be expected to be performed ?? >>

YES! That's exactly right. There are definitive experiments that can be performed to test those predictions.

<< Which one(s)? Why haven’t they been performed? >>

Er, well I don't know why you would ask this if you really had looked at those references in any detail, as they answer these specific questions. With regard to the second one, you'll notice that many of these results are extremely new, and that is part of the reason.
Well umm, why wouldn't I ask that? Isn't that the point of science – to present an irrefutable explanation that will convince the minds of doubters even against their own arguments, supported by repeatable experimental evidence as needed?
Obviously those explanations and opinions have not convinced the scientific community.

Is there one set of new results you are convinced already completes the task and only awaits the world to recognize it? Or do you figure one of these will probably convince the scientific community but you're just not sure which one that might be? An uncertainty of which one is 'correct' or completely convincing.

<< Are you claiming superdeterminism is local and realistic ? >>

Local, probably; realistic, yes. Bell makes this very clear in his Free Variables and Local Causality paper. Would you like me to elaborate further?
I’ll take that as it probably needs to be Local And Realistic, with the entire universe encoded into a super HV within each and every particle.

I can imagine no future elaboration that would get past the Law of Parsimony established by Ockham.
If a Super HV of incredible complexity and variation like that were to exist on each and every particle, then certainly there would be room for a simple HV as needed by Einstein in EPR.
Since a simple single HV is all that is required to resolve the Local paradox issue, why use a Super HV? How could such a large violation of Ockham's law be considered acceptable?
Unless your elaboration can reconcile the logic of Ockham, I don't see any other result than the vanesch point that superdeterminism is simply not considered as having any fruitful possibility.
 
  • #124


RandallB said:
Well umm, why wouldn't I ask that? Isn't that the point of science – to present an irrefutable explanation that will convince the minds of doubters even against their own arguments, supported by repeatable experimental evidence as needed?
Obviously those explanations and opinions have not convinced the scientific community.

Is there one set of new results you are convinced already completes the task and only awaits the world to recognize it? Or do you figure one of these will probably convince the scientific community but you're just not sure which one that might be? An uncertainty of which one is 'correct' or completely convincing.

I’ll take that as it probably needs to be Local And Realistic, with the entire universe encoded into a super HV within each and every particle.

I can imagine no future elaboration that would get past the Law of Parsimony established by Ockham.
If a Super HV of incredible complexity and variation like that were to exist on each and every particle, then certainly there would be room for a simple HV as needed by Einstein in EPR.
Since a simple single HV is all that is required to resolve the Local paradox issue, why use a Super HV? How could such a large violation of Ockham's law be considered acceptable?
Unless your elaboration can reconcile the logic of Ockham, I don't see any other result than the vanesch point that superdeterminism is simply not considered as having any fruitful possibility.


<< Well umm, why wouldn't I ask that? Isn't that the point of science – to present an irrefutable explanation that will convince the minds of doubters even against their own arguments, supported by repeatable experimental evidence as needed? >>


No, the point is that all your questions are addressed in those references, and it is surprising that you made a dismissive judgement on them before looking to see how exactly those references address your questions. Indeed, it astonishes me that you are not surprised enough or interested enough in these claims to look at them more closely, carefully, and objectively.


<< Is there one set of new results you are convinced already completes the task and only awaits the world to recognize it? Or do you figure one of these will probably convince the scientific community but you're just not sure which one that might be? >>


Well, these are currently only theoretical results along with proposed experiments. That is not enough to convince the scientific community of any claim. But it certainly is enough to grab the interest of those perceptive and open-minded few physicists who are willing to look into such claims. Antony Valentini is being taken seriously for his views by the likes of Lee Smolin and others in the QM foundations community. Tumulka is being taken seriously by the likes of Brian Greene and Anton Zeilinger. Indeed, after seeing Tumulka's talk at the Rutgers conference (at which I was in attendance), Greene stated his interest in working on these subjects and has recently begun to do research on deBB field theory. And Tumulka quotes Zeilinger as saying they will have the capability to do the experiments to test these GRW theories in about 10 years. Please have a look at Tumulka's lecture slides.

Of course, the thing that will truly convince the scientific community to perk up and take this very seriously is if these new experiments are done and confirm the predictions of Valentini or Tumulka. I personally have no way of knowing which one is more likely to be correct. But if I had to guess, I would go with Valentini.

About superdeterminism, I agree of course that it is extremely implausible. And I was never trying to argue for its physical plausibility. I was simply arguing for not only its logical possibility, but also for the fact that IF superdeterminism were to exist, it would be a form of realism. Occam's razor has nothing to do with this point. Whether it is a form of locality is harder to say: on the one hand it would be, since in Bell's theorem one could keep the assumption that two events A and B are always causally separated on a Cauchy surface. Also, if superdeterminism were to be an explanation of ALL EPRB-type experiments, the massive conspiracy of all matter in nature to make this the case would seem to suggest the superdeterminism mechanism has already explored all possible physical interactions in the "future", and arranged the physical interactions of particles in the "present" to make Bell inequality violations predestined. I don't know for sure whether one could call this a form of nonlocality (but I don't think so).
 
  • #125


<< I personally have no way of knowing which one is more likely to be correct. >>

Actually NOBODY has any way of knowing which one is more likely to be correct.
 
  • #126


ThomasT said:
I disagree. The qm formulation does assume that they're analyzing the same thing. What it doesn't do is specify any particular value wrt any particular coincidence interval for that common property. So, we apparently do have common cause and analysis of a common property producing experimental violation of the inequalities.

That's simply because in the blunt application of quantum theory in the standard interpretation, we do 'action at a distance' when the first measurement "collapses" the ENTIRE wavefunction, so also the part that relates to the second particle. So this collapse includes the angular information of the first measurement, in exactly the same way as the single light disturbance that goes through two consecutive polarizers. This is why formally, in quantum mechanics, both are very similar.
If this collapse corresponds to anything physical, then that stuff does "action at a distance", and hence Bell's inequalities don't count anymore of course, as he assumes that we do not have a signal from one side about the chosen measurement, when we measure the second side.

But whereas with the 2 consecutive polarizers one could easily imagine that the "light disturbance" carries with it, as a physical carrier, the "collapse information" from the first to the second (and has to do so at less than lightspeed, because the disturbance doesn't go faster), in an Aspect-like setup there is no physical carrier, and it goes faster than light.
Suppose you have an optical Bell setup where you've located the A side a bit closer to the emitter than the B side so that A will always record a detection before B. It's a detection at A that starts the coincidence circuitry which then selects the detection attribute (1 for detection and 0 for no detection) associated with a certain time interval at B to be paired with the detection attribute, 1, recorded at A.

Yes, but the CHOICE of which measurement to perform at B has been made before any signal (at lightspeed) from A could reach B. THAT's the point. Not to sort out which pairs of detections go together. BTW, you don't even need that. You could put the two sides A and B a lightyear apart, and just give them well-synchronized clocks (this can relativistically be done if you're careful). They then just record the times of arrival of the different light pulses. The experiment lasts maybe for a month. It is then only at least one year later, when both observers come together and compare their lists, that they find the correlations in their recorded measurements. So no synchronisation circuitry is needed. It's just the practical way of doing it in a lab.

The intensity of the light transmitted by the second polarizer in the polariscopic setup is analogous to the rate of coincidental detection in the optical Bell setup.

Yes.

Because we're always recording a 1 at A, then this can be thought of as an optical disturbance of maximum intensity extending from the polarizer at A and incident on the polarizer at B for any coincidence interval. As in the polariscopic setup, when the polarizer at B (the second polarizer) is set parallel to A, then the maximum rate of coincidental detection will be recorded -- and the rate of coincidental detection is a function of the angular difference between the setting of the polarizer at A and the one at B, just as with a polariscope.

Yes. But the problem is that this time, the B disturbance doesn't know what was measured at the A side (unless this is quickly transmitted by an action-at-a-distance phenomenon). So, when B measures at 0 degrees, should it click or not ? If A measured at 0 degrees too, and it clicked there, then it should click. So maybe this was a "0-degree disturbance". Right. But imagine now that A measured at 45 degrees and that it clicked. Should B click now ? Half of the time, of course. And what if B had measured 45 degrees instead of 0 degrees ? Should it now click with certainty ? But then it couldn't click with certainty at 0 degrees, right ? So what the disturbance should do at B must depend on what measurement A had chosen to perform: 0 degrees, 45 degrees or 90 degrees. If you work the possibilities out in all detail, you recover Bell's inequalities.

The critical assumption in Bell's theorem is that the data streams accumulated at A and B are statistically independent. The experimental violations of the inequalities don't support the idea of direct causation between A and B, or that the correlations can't be caused by analysis of the same properties. Rather, they simply mean that there is a statistical dependency between A and B. This statistical dependency is a function of the experimental design(s) necessary to produce entanglement. In the simple optical Bell tests the dependency arises from the emission preparations and the subsequent need to match detection attributes via time-stamping.

The emission preparations: does that mean that we give them the same properties ? But that cannot be the explanation, because of Bell. I don't see why you insist so much on this time stamping. Of course we want to analyze pairs of photons! But that doesn't explain WHY we get these Bell-violating correlations. If they were just "pairs of identical photons each time" and we were analyzing properties of these photons on both sides, then they should obey Bell's inequalities !

Look, this is as if you were going to send two identical letters each time to two friends of yours, one living in Canada, the other in China. You know somehow that they don't talk. The letters can be on blue or pink paper, they can be written with blue or red ink, and they can be written in Dutch or in Arabic. For each letter, your friends are supposed to look at only one property: they pick (free will, randomly, whatever) whether they look at the color of the paper, the color of the ink, or the language of the letter.
You ask them to write down in their logbook, what property they had chosen to look at, and what was their result. Of course you ask them also which letter they were dealing with (some can get lost in the post and so on). So you ask them to write down the postal stamp date (you only send out one pair per day).
You send a pair of identical letters out each day.

After 2 years, you ask them to send you their notebooks. When you receive them, you compare their results. You classify them in 9 categories:
(Joe: color of paper ; Chang: color of paper)
(Joe: color of paper ; Chang: color of ink)
(Joe: color of paper ; Chang: language)
(Joe: color of ink ; Chang: color of paper)

etc...

But in order for you to be able to do that, you want them to have analysed the same pair of letters, of course. So you verify the postal stamp date. Any entry that doesn't find a match, you discard, because the other letter was probably lost somewhere in the post.

Then for each pair, you count how many times they found the "same" answer (say, pink paper, red ink or Dutch counts as 1; blue paper, blue ink or Arabic counts as 0) and how many times they found a different answer. These are the correlations.

Of course, each time they looked at the same property, they found the same answer (they were identical letters). So when Joe looked at the color of the paper, and Chang did so too, they find 100% correlation (when Joe found blue, Chang found blue, and when Joe found pink, Chang found pink).
You also find that the correlations are symmetrical: when Joe looked at "paper" and Chang at "ink" then that gives the same result of course as when Joe looked at "ink" and Chang at "paper". So there are actually 3 numbers which are interesting:

C(Joe: paper, Chang: ink) = C(Joe: ink, Chang: paper) = C(1,2)
C(Joe: paper, Chang: language) = C(Joe: language, Chang: paper) = C(1,3)
C(Joe: language, Chang: ink) = C(Joe: ink, Chang: language) = C(2,3)

Well, Bell shows that these correlations obey his inequalities.

That is: C(1,2) + C(2,3) + C(1,3) >= 1

You cannot have that each time Joe looked at "paper" and Chang at "ink" they found opposite results (C(1,2) close to 0), that each time Joe looked at "paper" and Chang looked at "language" they found opposite results (C(1,3) close to 0), and that each time Joe looked at "language" and Chang looked at "ink" they ALSO found opposite results (C(2,3) close to 0). If you think about it, it's logical!
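In symbols, the counting argument just given goes like this (writing x1, x2, x3 for the coded 0/1 paper, ink and language values of one pair of identical letters, and C(i,j) for the long-run fraction of "same" results between properties i and j -- the notation is mine, the argument is the one above):

```latex
% Among any three binary values, at least one of the three pairs must agree,
% so for every single letter the agreement indicators satisfy
%   [x_1 = x_2] + [x_2 = x_3] + [x_1 = x_3] >= 1 .
% Averaging over many letters then gives
\[
  C(1,2) + C(2,3) + C(1,3) \;=\; P(x_1 = x_2) + P(x_2 = x_3) + P(x_1 = x_3) \;\geq\; 1 .
\]
```

Quantum mechanics, by contrast, allows all three "same" probabilities to sit at 1/4 simultaneously, for a sum of 3/4 -- which is exactly the sort of thing no set of letters can reproduce.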

You would have a hell of a surprise to find that Joe and Chang had Bell-violating correlations. There doesn't exist a set of letters you could have sent that can produce Bell-violating correlations if the choices of measurement are random and independent.

The only solution would have been that Joe and Chang cheated, and called each other on the phone to determine what measurements they'd agree upon.
 
Last edited:
  • #127


vanesch said:
That's simply because in the blunt application of quantum theory in the standard interpretation, we do 'action at a distance' when the first measurement "collapses" the ENTIRE wavefunction, so also the part that relates to the second particle. So this collapse includes the angular information of the first measurement, in exactly the same way as the single light disturbance that goes through two consecutive polarizers. This is why formally, in quantum mechanics, both are very similar.
If this collapse corresponds to anything physical, then that stuff does "action at a distance", and hence Bell's inequalities don't count anymore of course, as he assumes that we do not have a signal from one side about the chosen measurement, when we measure the second side.

But whereas with the 2 consecutive polarizers one could easily imagine that the "light disturbance" carries with it, as a physical carrier, the "collapse information" from the first to the second (and does so at less than lightspeed, because the disturbance doesn't go faster), in an Aspect-like setup there is no such physical carrier, and the "collapse" goes faster than light.
I don't think it's a necessary part of the standard statistical interpretation to associate "wavefunction collapse" with "action-at-a-distance".

So, should I take it that you agree that the qm treatment(s) of optical Bell tests support the idea that the correlations are a function of common cause and analysis of common properties by global parameters? Don't you think this is a better working assumption than FTL causal linkage between A and B?

vanesch said:
... the CHOICE of which measurement to perform at B has been done before any signal (at lightspeed) could reach B. THAT's the point. Not to sort out which pairs of detection go together. BTW, you don't even need that. You could put the two sides A and B a lightyear apart, and just have them well-synchronized clocks (this can relativistically be done if you're careful). They then just record the times of arrival of the different light pulses. The experiment lasts maybe for a month. It is then only at least one year later, when both observers come together and compare their lists, that they find the correlations in their recorded measurements. So no synchronisation circuitry is needed. It's just the practical way of doing it in a lab.
I suggested the scenario where A is closer to the emitter than B to make the analogy with the polariscope more clear. The polarizer at A in the Bell tests corresponds to the first polarizer in the polariscope. The polarizer at B in the Bell tests corresponds to the analyzing polarizer in the polariscope. And, of course, there will be one and only one pair of polarizer settings associated with any pair of detection attributes, and these settings can be in place at any time after emission but before filtration at either end (and, also of course, the setting selection and detection events are spacelike separated to preclude any possibility of communication between A and B via light or sub-light channels during a coincidence interval.)

Whether you sort pairs via synchronization circuitry or do it the way you suggested above, the pairs are still being matched up according to their presumed time of emission (taking into account travel times to detectors). This time-matching is of the utmost importance in producing entanglement correlations.



vanesch said:
... the B disturbance doesn't know what was measured at the A side (unless this is quickly transmitted by an action-at-a-distance phenomenon).
It "knows" the same way the second polarizer in a polariscope "knows" what's been transmitted by the first polarizer. The only difference is that in the Bell tests the simultaneously emitted, common disturbances are traveling in opposite directions. But in both setups there is a singularly identical waveform extending between the polarizers. (Of course, in the Bell tests, the value, the specific behavior of this wave is assumed to change randomly from emitted pair to emitted pair. The only thing that's important to produce the correlations is that both A and B are analyzing the same thing during a coincidence interval. The exact values of the common properties from interval to interval are unimportant in producing the correlations.)
vanesch said:
So, when B measures at 0 degrees, should it click or not? If A measured at 0 degrees too, and it clicked there, then it should click.
We can't say anything about what A or B measures with respect to some sort of hidden variable of the incident disturbance.

If A registers a detection, then it's assumed that the entire energy of the polarizer-incident optical disturbance was transmitted by the polarizer, isn't it? So, what does that mean according to some sort of geometric interpretation? I don't know. I don't know what's happening at the level of particle-pair emission. I don't know what's happening at the level of filtration. But, if B is analyzing the same disturbance as A, then following a detection at A, and given the above assumption, wouldn't you expect that, in the long run, the rate of detection at B will converge to cos^2 Theta? (where Theta is the angular difference between the crossed polarizers)

Well, that's what's seen. So, I don't think that Bell's theorem has ruled out common cause and common analysis of common properties as the source of the entanglement correlations.

vanesch said:
So maybe this was a "0-degree disturbance". Right. But imagine now that A measured at 45 degrees and that it clicked. Should B click now? Half of the time, of course. And what if B had measured 45 degrees instead of 0 degrees? Should it now click with certainty? But then it couldn't click with certainty at 0 degrees, right? So what the disturbance should do at B must depend on what measurement A had chosen to perform: 0 degrees, 45 degrees or 90 degrees. If you work the possibilities out in all detail, you recover Bell's inequalities.
Right, so if we do want to speculate about the precise properties of the common incident disturbances, then we've learned that if we do it along the lines of your above questions, then we're probably on the wrong track.


vanesch said:
The emission preparations: does that mean that we give them the same properties?
Yes

vanesch said:
But that was not the explanation, because of Bell.
I think this is a misinterpretation of Bell. Bell assumes, in effect, statistical independence between A and B. Because of the experimental design of Bell tests, A and B are not statistically independent. Hence, there is a violation of inequalities based on the assumption that they are.

vanesch said:
I don't see why you insist so much on this time stamping. Of course we want to analyze pairs of photons! But that doesn't explain WHY we get these Bell-violating correlations.
Well, we can't just analyze any old pairs that we choose to throw together, can we? There's a strategy for matching up the separate data sequences (so as to maximize the probability that we are, indeed, dealing with identical disturbances at A and B during any given coincidence interval), and it's based on timing mechanisms.

vanesch said:
If they were just "pairs of identical photons each time" and we were analyzing properties of these photons on both sides, then they should obey Bell's inequalities!
That's the most common interpretation of the physical meaning of Bell inequality violations -- in effect, that the correlations can't be solely due to the analysis of common properties by a common (global) parameter. I'm just suggesting that that interpretation might not be correct, and that that is what the correlations are solely due to.

Keep in mind that great pains are taken to ensure that we are dealing with "pairs of identical photons each time" and that the assumed common properties of these pairs are being commonly analyzed during each interval by the global parameter, Theta.

Wouldn't it be wonderfully exotic if it was necessary to produce some sort of FTL influence each time we commonly analyze these common properties in order to actually produce the cos^2 Theta functional relationship? :smile:

I'm suggesting that there's a simpler explanation.
 
  • #128


ThomasT said:
I don't think it's a necessary part of the standard statistical interpretation to associate "wavefunction collapse" with "action-at-a-distance".

So, should I take it that you agree that the qm treatment(s) of optical Bell tests support the idea that the correlations are a function of common cause and analysis of common properties by global parameters? Don't you think this is a better working assumption than FTL causal linkage between A and B?

I was responding to your statement that visibly, quantum mechanics DOES predict the Aspect-like correlations, "based upon common source/cause/etc...". I was simply pointing out that the usual formal way one obtains these predictions, is by using formally what A had picked as a measurement, when one calculates what B must obtain. So this is how the FORMALISM of quantum mechanics arrives at the correlations: it uses the choice at A to calculate the stuff at B (or vice versa). If ever this formal calculation were the display of an underlying physical mechanism, then that mechanism is obviously an "action-at-a-distance".

So it is in this, obvious, way, that usual formal quantum mechanics can predict Bell-violating correlations. THIS is not a surprise.

Again, if B KNOWS what A is picking as a measurement, there's no difficulty. The difficulty resides in B having to decide what is going to be the result, ONLY knowing the "common part" (whatever that is), and the "decision of what measurement to do at B", but NOT the "decision of what measurement to do at A" (which the formalism clearly DOES use). It is in THIS circumstance that Bell's inequalities are derived (and are almost "obvious").

I understood you to be arguing that, because formal quantum mechanics arrives at the Bell correlations "using a common source" (and even suggests the experiment), this indicates that there must be a "common cause".
But that's because one usually takes it that the quantum formalism "cheats" when using collapse.

I suggested the scenario where A is closer to the emitter than B to make the analogy with the polariscope more clear. The polarizer at A in the Bell tests corresponds to the first polarizer in the polariscope. The polarizer at B in the Bell tests corresponds to the analyzing polarizer in the polariscope. And, of course, there will be one and only one pair of polarizer settings associated with any pair of detection attributes, and these settings can be in place at any time after emission but before filtration at either end (and, also of course, the setting selection and detection events are spacelike separated to preclude any possibility of communication between A and B via light or sub-light channels during a coincidence interval.)

Yes, so ?

Whether you sort pairs via synchronization circuitry or do it the way you suggested above, the pairs are still being matched up according to their presumed time of emission (taking into account travel times to detectors). This time-matching is of the utmost importance in producing entanglement correlations.

Of course. We want to analyse PAIRS, right ? So we want to make sure we're comparing data corresponding each time to the same PAIR. But that cannot be the explanation. Look at my "identical letters" example.


It "knows" the same way the second polarizer in a polariscope "knows" what's been transmitted by the first polarizer. The only difference is that in the Bell tests the simultaneously emitted, common disturbances are traveling in opposite directions. But in both setups there is a singularly identical waveform extending between the polarizers. (Of course, in the Bell tests, the value, the specific behavior of this wave is assumed to change randomly from emitted pair to emitted pair. The only thing that's important to produce the correlations is that both A and B are analyzing the same thing during a coincidence interval. The exact values of the common properties from interval to interval are unimportant in producing the correlations.)

OF COURSE they are analysing the same thing ! And Bell's inequalities should apply to "analysing the same thing". I refer again to my identical-letter example.

But there's a world of difference between the two successive polarimeters and the Aspect-like setup. In the two successive polarimeters, the first one COULD HAVE APPENDED some physical property to the light disturbance that carries the information of what was its setting, and hence the second polarimeter could use that extra information in order to determine the outcome. THIS is the only way out of Bell: B KNOWS what was A's measurement setting. It is not in the common (identical thing) information that you can violate Bell, it is in the "B knows what A did" information that you can violate Bell. And if B receives some disturbance that went first through A, you cannot guarantee that this information of what A did, isn't now included in a modified disturbance. So Bell doesn't necessarily apply here.

In an Aspect-like experiment, it is not clear (unless there is "action-at-a-distance") how B could find out what experiment A has performed on the identical copy. And without that information, you cannot violate Bell.


We can't say anything about what A or B measures with respect to some sort of hidden variable of the incident disturbance.

Yes, but what we could reasonably expect, is that when A does a measurement, it doesn't know what B had been doing as a measurement. No matter the internal machinery, parameters, whatever.

If A registers a detection, then it's assumed that the entire energy of the polarizer-incident optical disturbance was transmitted by the polarizer, isn't it? So, what does that mean according to some sort of geometric interpretation? I don't know. I don't know what's happening at the level of particle-pair emission. I don't know what's happening at the level of filtration. But, if B is analyzing the same disturbance as A, then following a detection at A, and given the above assumption, wouldn't you expect that, in the long run, the rate of detection at B will converge to cos^2 Theta? (where Theta is the angular difference between the crossed polarizers)

No. In fact, you would expect a different function. You would never expect, for instance, perfect anti-correlation under 90 degrees, because you would expect there to be some 45-degree identical disturbances which have 50% chance to get through A, and also 50% chance to get through B, but you wouldn't expect them to do ALWAYS THE SAME THING at the same time on both sides. But you can modify that, and say: maybe these two disturbances contain extra information we don't know about, so that they DO the same thing. OK. THEN it is possible to get the perfect anti-correlation right for 90 degrees. But if you do the entire bookkeeping, you STILL find that you must satisfy Bell.
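A rough Monte Carlo sketch of that naive model -- each pair shares one hidden polarization angle, and each side then passes its polarizer independently with the Malus-law probability -- shows the "different function" numerically. The model, the parameter names and the sample size are illustrative assumptions, not anyone's actual proposal in this thread:

```python
# Naive local model: shared hidden polarization lam per pair; each side passes
# its polarizer independently with probability cos^2(lam - setting).
import numpy as np

rng = np.random.default_rng(0)
N = 1_000_000
lam = rng.uniform(0.0, np.pi, N)          # shared hidden polarization per pair

for theta_deg in (0, 45, 90):
    a, b = 0.0, np.radians(theta_deg)     # polarizer settings at A and B
    pass_a = rng.random(N) < np.cos(lam - a) ** 2
    pass_b = rng.random(N) < np.cos(lam - b) ** 2
    p_b_given_a = np.mean(pass_b[pass_a])         # detection rate at B given a click at A
    qm = np.cos(b - a) ** 2                       # quantum prediction cos^2(theta)
    print(f"theta = {theta_deg:3d} deg   naive local model P(B|A) = {p_b_given_a:.3f}   QM = {qm:.3f}")
```

Under these assumptions the conditional rate at B comes out near 0.75, 0.5 and 0.25 at 0, 45 and 90 degrees -- no perfect correlation at 0 and no perfect anti-correlation at 90 -- whereas the quantum prediction is 1, 0.5 and 0.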

I think this is a misinterpretation of Bell. Bell assumes, in effect, statistical independence between A and B. Because of the experimental design of Bell tests, A and B are not statistically independent. Hence, there is a violation of inequalities based on the assumption that they are.

No, Bell doesn't assume statistical independence of A and B! That would be trivial. Bell assumes that the CHOICE of what measurement is done at A is independent of the CHOICE at B and of the particular emitted (identical) pair of particles/disturbances/whatever, and moreover that whatever decides the result at A (for the chosen measurement at A) can depend of course on the identical pair, and on the choice at A, but NOT on the choice at B.

Once you do that, you arrive at Bell's inequalities.

Well, we can't just analyze any old pairs that we choose to throw together, can we? There's a strategy for matching up the separate data sequences (so as to maximize the probability that we are, indeed, dealing with identical disturbances at A and B during any given coincidence interval), and it's based on timing mechanisms.

Yes of course, but Bell takes that into account. We can have identical disturbances at both sides. That's actually the entire game. THAT correlation is allowed for in Bell. And still you find his inequalities.

That's the most common interpretation of the physical meaning of Bell inequality violations -- in effect, that the correlations can't be solely due to the analysis of common properties by a common (global) parameter. I'm just suggesting that that interpretation might not be correct, and that that is what the correlations are solely due to.

That's wrong. It is EXACTLY what Bell assumes in his theorem, and he arrives at his inequalities. So you can't say that you think that the "interpretation" of his premises is not correct, right? If Bell ASSUMES that there is a set of common parameters, shared by the two members of each pair each time, and using that, he arrives at his inequalities, then you can hardly claim that the violation of these inequalities points to anything else but the fact that the correlations are NOT solely due to a set of common parameters, shared by the two members of each pair, no?

Keep in mind that great pains are taken to ensure that we are dealing with "pairs of identical photons each time" and that the assumed common properties of these pairs are being commonly analyzed during each interval by the global parameter, Theta.

The great pains are there to ensure, as Bell assumes, that the two members of each pair can share a common set of parameters. We are looking at correlations between measurements at each side individually, and the whole idea is that the response at A can only depend on "theta1" (the choice made at A) and on that set of common parameters, and that the response at B can only depend on "theta2" and of course that set of common parameters. The response at A cannot depend on "theta = theta1 - theta2" because that would mean that the response at A would know about the CHOICE at B, which is the only prohibition here in Bell. Identically, the result at B can depend on theta2 and on the common set of parameters, but NOT on theta1, and hence not on theta. Well, if you crunch through the mathematics under these conditions, you find that these correlations are of course a function of theta1 and of theta2, but cannot be something like cos^2(theta). They CAN be something like 1/4 cos^2(theta) + 1/2 or so, but not cos^2(theta).
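For anyone who wants to see the "crunch through the mathematics" step in miniature, here is a brute-force check of the two-settings-per-side (CHSH) form of exactly this bookkeeping: the response on each side may depend on the local setting and on the shared parameters, but not on the remote choice. The angles are the standard textbook ones; the rest is an illustrative sketch, not anyone's code from this thread:

```python
import itertools
import math

# Local-realistic side: with shared parameters, each run amounts to pre-assigned
# outcomes (+1/-1) for Alice's two settings and Bob's two settings.
best = 0
for a0, a1, b0, b1 in itertools.product((+1, -1), repeat=4):
    S = a0 * b0 - a0 * b1 + a1 * b0 + a1 * b1
    best = max(best, abs(S))
print("max |S| over all local pre-assignments:", best)       # -> 2

# Quantum side: for polarization-entangled photons, E(a, b) = cos(2(a - b)).
def E(a_deg, b_deg):
    return math.cos(math.radians(2 * (a_deg - b_deg)))

alice1, alice2, bob1, bob2 = 0.0, 45.0, 22.5, 67.5
S_qm = E(alice1, bob1) - E(alice1, bob2) + E(alice2, bob1) + E(alice2, bob2)
print("quantum S at the standard angles:", round(S_qm, 3))   # -> about 2.828
```

Mixing the sixteen pre-assignments with any probability distribution over shared parameters only averages numbers that are each at most 2, so the quantum value of about 2.83 is out of reach for this kind of model.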

You really really should analyze the example with the identical letters.
 
  • #129


vanesch said:
... we want to make sure we're comparing data corresponding each time to the same PAIR. But that cannot be the explanation.
It cannot be the explanation because Bell assumed that we're "comparing data corresponding each time to the same PAIR", and that the polarizer-incident disturbances associated with each pair of detection attributes are identical, right?

But that assumption isn't the so-called locality assumption. It's the so-called realism assumption. The realism assumption isn't the assumption that's negated by violations of the inequalities. The locality assumption is.

The vital assumption (the locality assumption), according to Bell, is that the result at B doesn't depend on the setting of the polarizer at A and vice versa. (It doesn't as far as anyone can tell, does it?)

Remember in an earlier post where I asked: when is a locality condition not, strictly speaking, a locality condition?

Well, the Bell experiments are designed to answer statistical questions, not questions about direct causal linkages between spacelike separated events. Hence, the vital assumption is that the result at B is statistically independent of the setting at A.

Of course, the result at A is dependent on the setting at A. In the simplified setup where the A side is closer to the emitter than the B side, detection at A (and only detection at A) initiates the coincidence circuitry. So, in effect, the result at A is the setting at A.

So, the vital assumption becomes: the results at B are statistically independent of the results at A. [This assumption is evident in the formulation of equation (2) in Bell's 1964 paper, On The Einstein Podolsky Rosen Paradox.]

They aren't statistically independent. So, inequalities based on this assumption are violated -- and these violations tell us nothing more about the possible (or impossible) sources of the correlations and the nature of quantum entanglement than could have been surmised without them.

vanesch said:
In an Aspect-like experiment, it is not clear (unless there is "action-at-a-distance") how B could find out what experiment A has performed on the identical copy. And without that information, you cannot violate Bell.

For statistical dependence, it's only necessary that a detection at A determine the sample space at B -- and it does. (That's what all the stuff about timing mechanisms and matching the separate data sequences is about, right?)
 
  • #130


ThomasT said:
But that assumption isn't the so-called locality assumption. It's the so-called realism assumption. The realism assumption isn't the assumption that's negated by violations of the inequalities. The locality assumption is.

The vital assumption (the locality assumption), according to Bell, is that the result at B doesn't depend on the setting of the polarizer at A and vice versa. (It doesn't as far as anyone can tell, does it?)

Bell Realism is the assumption that a particle has a specific property independent of the act of observing that property. To paraphrase Einstein's words, the moon is there even when I am not looking at it. Although Bell did not use the word "vital" to describe it, it is just as important to the paper as Bell Locality is. They are not the same thing, and they are expressed differently in the proof.

Thus, you cannot single out Locality as being disproved over Realism from Bell's Theorem. To do that, you need to add something on your own.
 
  • #131


VANESCH - Can you make some drawings of your 'letters' in say 'Paint' - or polariser states? Some of us are having difficulty following the logic just by reading sentences.
 
  • #132


Vanesch - Can you make some drawings in say, 'Paint' of these particle states and the inequality - for say, 2 entangled particles, so that we can follow the logic better? It's very interesting.
 
  • #133


ThomasT said:
It cannot be the explanation because Bell assumed that we're "comparing data corresponding each time to the same PAIR", and that the polarizer-incident disturbances associated with each pair of detection attributes are identical, right?

Well, they don't even have to be completely identical, but essentially, yes, that the only origin of any correlation between both is something, some parameters, some state, something that is identical to both.

But that assumption isn't the so-called locality assumption. It's the so-called realism assumption. The realism assumption isn't the assumption that's negated by violations of the inequalities. The locality assumption is.

How do you know which of the different premises is the one that is violated? The realism assumption is that "whatever causes the correlation, it must be a common property shared by both elements of the same pair". The locality condition is: whatever causes the outcome at B, it cannot depend upon the choice of measurement at A (NOT the outcome! The CHOICE).

The vital assumption (the locality assumption), according to Bell, is that the result at B doesn't depend on the setting of the polarizer at A and vice versa. (It doesn't as far as anyone can tell, does it?)

Yes.

Remember in an earlier post where I asked: when is a locality condition not, strictly speaking, a locality condition?

Well, the Bell experiments are designed to answer statistical questions, not questions about direct causal linkages between spacelike separated events. Hence, the vital assumption is that the result at B is statistically independent of the setting at A.

No, that's too much. The vital assumption is that whatever mechanism at A produces the outcome at A is ONLY dependent on the choice at A and on the "common properties" the particle/disturbance/whatever shares with its copy at B, but NOT on the choice at B. THAT is the assumption. Indeed, it will turn out that the results at B are statistically independent of the SETTING at A, but not necessarily of the OUTCOME at A.

Of course, the result at A is dependent on the setting at A. In the simplified setup where the A side is closer to the emitter than the B side, detection at A (and only detection at A) initiates the coincidence circuitry. So, in effect, the result at A is the setting at A.

No, here you make an error. You seem to think that you only correlate when the "A" result is "yes", but in fact, one can detect also a "no" answer at A. The outcome at A has 3 "states":
- no outcome (nothing detected)
- "yes" (light detected one way)
- "no" (light detected in the other leg of the polarizing beam splitter)

when we have "yes" OR "no" we trigger, because it means that a particle arrived.


So, the vital assumption becomes: the results at B are statistically independent of the results at A.

No, that is not the assumption. That would really have been a very trivial error on Bell's part. It wouldn't yield a condition on any correlation; it would simply mean that all correlations are just 50% (that's what "the results at B are statistically uncorrelated with those at A" means: the probability of having the same result is 50%). That's clearly not what Bell obtains.
 
  • #134


LaserMind said:
VANESCH - Can you make some drawings of your 'letters' in say 'Paint' - or polariser states? Some of us are having difficulty following the logic just by reading sentences.

You mean I have to make a picture of a letter ??

I will try to explain it more clearly.

I have two friends, Joe and Chang, who live far apart, don't know each other, etc...

With each one of them, say with Joe to begin with, I make a deal. I tell them that I will send them a bunch of letters, which have 3 distinct properties: color of the ink (red or blue), color of the paper (pink or blue), and the language in which the letter is written (Dutch or Arabic). I give Joe a notebook. I ask Joe, each time he receives such a letter from me, to do the following:
- he writes down the post stamp date in his notebook
- he picks, randomly or however he wants, one of the three properties he is going to look at (color of ink, color of paper, language of letter) BEFORE HE OPENS THE LETTER. He writes in his notebook what he decided to "measure" (ink, paper, language). Then I ask him to open the letter and, according to the criterion that he picked, to write down the answer. So, for instance, if he picked (and wrote) "ink", then he's supposed to look ONLY at the ink, and write down whether it was red or blue.

I give exactly the same instructions to Chang.

Next, I go home, and every two days, I write two identical letters: I pick pink or blue paper, I pick a red or a blue pen, and I decide to write in Dutch or in Arabic. I can combine these in any way I like; I don't even have to combine them randomly. On the same day, I send off one of these letters to Joe and the other one to Chang. They get the same time stamp in my post office, namely that of the day I sent them.

Two days later, I pick another combination of paper, ink and language, and again I send off two identical letters to Joe and Chang.

I do this for several years.

Then I go visit Joe, and pick up his notebook. Next I travel to Chang, and pick up his notebook.

I see that the first letter I wrote, both of them received it. Joe picked, say, ink, and Chang picked "paper". I put their results in the bin "Joe-ink / Chang-paper". I see that for the next letter, both Joe and Chang picked language. I put their results in the bin "Joe-language / Chang-language". The third letter was lost for Joe. I skip that letter for Chang as well.
The fourth letter ... etc...

I end up with 9 bins. Of course, for the bins where we have "Joe-language/Chang-language" I find that each time, they have the same result (they were identical letters each time!). That is, when Joe saw "Dutch", Chang also saw "Dutch", and when Joe saw "Arabic" then Chang also saw "Arabic".

What's more interesting is the bins where we have two different properties. There are 6 such bins. For the bin "Joe-ink/Chang-paper", I count the number of times Joe had red and Chang had pink, plus the number of times Joe had blue and Chang had blue, divided by the total number of cases in that bin. That gives me the "correlation" for the case "ink/paper". That correlation is a number between 0 and 1.

In general, I put pink paper, red ink and Dutch in one class, and blue paper, blue ink and Arabic in the other; "same" means both results fall in the same class.

I do that for the 6 different combinations (ink/paper), (ink,language), (language,paper), and the mirror cases. I count the number of times that both had the "same" result over the total number of cases in that bin. That's what gives me the correlation of that bin.
It turns out that the correlation for (ink/paper) is the same as that for (paper/ink). So I actually have 3 different numbers: the correlations in (ink/paper), (ink/language) and (language/paper).

The precise values of these numbers are determined of course by how I chose to combine ink, paper and language when I wrote the pairs of letters. It could for instance be that I always used blue ink on pink paper, and red ink on blue paper. Or that I mixed them randomly. That was my choice, and it will of course determine these correlations. If I had always used red ink on blue paper and blue ink on pink paper, then the correlation (paper/ink) would be exactly 0. At no point could Joe have seen, say, red ink while Chang, for the same pair, saw pink paper.
So the correlations depend on how I made my choices. But no matter how, there is a regularity amongst these correlations.

You can twist it however you like: the sum of these 3 numbers, these 3 correlations, each between 0 and 1, must, in the long run, be at least 1.
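For whoever would rather see that regularity than just be told about it, here is a small simulation sketch of the letters protocol above. For simplicity it evaluates all three properties of every pair directly, which gives the same long-run correlations as the bin-sorting procedure; the sender's rule for combining paper, ink and language is an arbitrary choice of mine -- change it however you like:

```python
import itertools
import random

random.seed(1)
N = 100_000
PAIRS = list(itertools.combinations(("paper", "ink", "language"), 2))
agree = {p: 0 for p in PAIRS}

for _ in range(N):
    # One pair of identical letters; 1 = the "pink paper / red ink / Dutch" class,
    # 0 = the "blue paper / blue ink / Arabic" class.  Sender's (arbitrary) rule here:
    # red ink always goes on blue paper, and the language is chosen at random.
    paper = random.randint(0, 1)
    letter = {"paper": paper, "ink": 1 - paper, "language": random.randint(0, 1)}
    for prop1, prop2 in PAIRS:
        agree[(prop1, prop2)] += letter[prop1] == letter[prop2]

correlations = {pair: agree[pair] / N for pair in PAIRS}
print(correlations)
print("sum of the three correlations:", round(sum(correlations.values()), 3))
# For every single letter at least one of the three property pairs must agree,
# so this sum can never come out below 1, whatever sending rule is used.
```

With this particular rule the sum lands exactly on the bound (the (paper, ink) correlation is 0 and the other two are about 0.5 each); with three independently random properties it would be about 1.5. Either way it never drops below 1, which is the regularity Joe and Chang's notebooks can never escape.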
 
Last edited:
  • #135


DrChinese said:
If you posit a theory that says nothing more than "you appear to have a choice of observations but don't really" then you are really saying the results of experiments are not valid in some way. For if they were valid, then local realism fails.

I cannot follow your argument. I am fully aware that my choices are constrained by the laws of physics, regardless of what QM interpretation you prefer. I also understand that any process that happens in my brain has some influence on the objects around me. Why do you think that the existence of those constraints makes the experiments not valid? The only thing that is wrong here is to assume that your choices come from some fantastic realm without any connection with this universe.

Regardless of the author's respected accomplishments, this paper does not deserve any greater status than someone saying that God controls all experimental outcomes. I would venture to say that thesis would be more palatable to many anyway.

You are repeating yourself. Why is this so? The assumption of a god is in fact the opposite of superdeterminism, because god is supposed to be unconstrained by anything in this universe. God can do anything. In superdeterminism you can only tweak the law of motion at the Planck level, and that's it. Because you have to reproduce all the predictions of QM, there are not many free parameters you can play around with.

The issue with a superdeterministic local realistic theory is that the entire future of the entire universe would need to be encoded in each and every particle we can observe. Otherwise there would be no way to keep the experimental results sync'd. That is a big requirement, and in my opinion is falsifiable anyway - if a specific version were offered to discuss.

Do you have a proof for this assertion or is this the only superdeterministic theory you can imagine? I'd like you to provide evidence that any superdeterministic local realistic theory must be of the form you suggested.
 
  • #136


vanesch said:
I understand that. I don't say that, in principle, superdeterminism is not possible.

But you make my point:

(about Madame Soleil)

ueit said:
No, because such correlations do not exist. Astrology has been falsified already. If such correlations between my love life and the planets could be shown to be statistically significant I would certainly be curious about their origin.

This is what I mean. If they had been found, then superdeterminism would have been invoked to justify them. It's simply because they aren't observed (are they? :smile:) that superdeterminism isn't called on to explain them. But no problem: tomorrow Neptune tells my fortune, and superdeterminism can explain it.

First, I'd like to point out that both "many worlds" and Bohm's interpretation are superdeterministic theories. So, whatever arguments you may have against 't Hooft's proposal are equally valid against them. You should then explain why you do not say that MWI is not science because it could explain astrology and all that.

So superdeterminism has *the potential* of even justifying astrology (if it were observed). As to their "origin", the superdeterminism answer would simply be "because they are correlated". In the same way as 't Hooft argues that the Bell correlations come about "because they are correlated". And the only way to find out about it is... to see that they are correlated.

I think you misunderstood 't Hooft. He tries to find a law of motion for the particles at the Planck level that is local and still reproduces QM. Of course, if he succeeds, that theory would explain everything that QM already explains, without any appeal to non-locality. If astrology were true, QM, regardless of interpretation, should be able to explain it, right? There would be a "many worlds" explanation, a Bohmian non-local one, a Copenhagen one, and so on. I don't see why you only have a problem with the superdeterministic explanation.

Superdeterminism allows/explains/justifies ANY correlation, ANY time. It is even amazing that we don't find more of them! Astrology should be right. Or some version of it. Can't believe these things are uncorrelated.

What I'm saying is that superdeterminism has the potential of "explanation" of just any correlation. "Correlations happen". It can't be falsified at all. But worse, it opens the gate to explanations for any kind of correlation, without direct causal link. If they are there, hey, superdeterminism. If they aren't, well, hey, superdeterminism. As such, superdeterminism destroys the possibility of observing causality. Here it's the correlation between starlight and particles. There, it is the correlation between having Rolexes and Ferraris. Smoking and cancer, superdeterminism. No necessary causal link or whatever. In other words, superdeterminism is the theory "things happen".

When you look at other basic principles of physics, they *constrain* possible observations. Superdeterminism allows all of them, and their opposite.

1. Please explain why you still accept MWI, which is a superdeterministic theory.
2. Why do you believe that a superdeterministic theory of particle motion at the Planck level should have such a great number of free parameters that you could explain just about anything? I think that the opposite is true; such a theory would be very constrained. It should explain the atomic spectra, for example. I think it is unlikely that you could tweak this theory so that you could explain anything you want and still get the spectra right.
 
  • #137


ThomasT:
It cannot be the explanation because Bell assumed that we're "comparing data corresponding each time to the same PAIR", and that the polarizer-incident disturbances associated with each pair of detection attributes are identical, right?

vanesch:
Well, they don't even have to be completely identical, but essentially, yes, that the only origin of any correlation between both is something, some parameters, some state, something that is identical to both.

It was a rhetorical question, because I'm still assuming it's the explanation.

ThomasT:
But that assumption isn't the so-called locality assumption. It's the so-called realism assumption. The realism assumption isn't the assumption that's negated by violations of the inequalities. The locality assumption is.

vanesch:
How do you know what of the different premises is the one that is violated?

The critical assumption is the so-called locality assumption.

vanesch:
The realism assumption is that "whatever causes the correlation, it must be a common property shared by both elements of the same pair".

OK, and there are several indicators, vis-a-vis the experimental designs, that lead me to believe that A and B are dealing with common properties for each pair. So, I don't think that that assumption is being violated.

vanesch:
The locality condition is: whatever causes the outcome at B, it cannot depend upon the choice of measurement at A (NOT the outcome ! The CHOICE).
ThomasT:
The vital assumption (the locality assumption), according to Bell, is that the result at B doesn't depend on the setting of the polarizer at A and vice versa.

Gotta go ... will continue tomorrow!
 
  • #138


ueit said:
First, I'd like to point out that both "many worlds" and Bohm's interpretation are superdeterministic theories.

No, not at all. In either, you can have your system modeled, and introduce externally that this or that measurement is made by Bob or Alice, and *given that*, you see where the system leads you. You do not have to assume that in a given situation, Bob and Alice HAD to make this or that measurement choice which is what superdeterminism is all about.

In fact, no theory we have is superdeterministic - or has been shown to be superdeterministic - and as I argued, I think it is simply *impossible* to have a superdeterministic theory and show it. Because in order to show that a given dynamics is superdeterministic, you'd have to show that no matter how it is done, Bob WILL make this choice of his measurement setup, and Alice WILL make this choice of her measurement setup. That would mean calculating the nitty-gritty dynamics of whatever determines their "free will", i.e., working out the detailed dynamics of their brains, for instance. As long as you haven't done that, you cannot say that superdeterminism was at work. In a superdeterministic theory, it would be this path, and only this path, which would show you the right correlations - there wouldn't be a "simple way" like in quantum theory (in no matter what interpretation). It would only be after tracking all the dynamics of all the pseudo-free-will happenings that you'd say: "hey, but these two things turn out to be always correlated!"

I think you misunderstood 't Hooft. He tries to find a law of motion for the particles at the Planck level that is local and still reproduces QM.

I'd say, good luck. Because he will have a serious problem with an aspect in quantum theory (and in any theory we've ever considered up to now): that is: we don't have any quantum-mechanical model of *what measurements* are done. Even in MWI, or in BM, or in any fully unitary dynamics, you still put in by hand what are the measurements that are being done. You still say that "Alice decides to measure angle th1, and Bob decides to measure th2". So the dynamics is not telling you that. So I wonder how he's going to find an "equivalence" when in one model, you have the free choice (you can plug in what you want) and in the other, it is the very mechanism that gives the desired properties. So I don't even remotely see how you could prove an "equivalence" between two things which have different external inputs...
I don't see how he can get around following through processes that appear to us as "free will".
 
Last edited:
  • #139


ueit said:
Do you have a proof for this assertion or is this the only superdeterministic theory you can imagine? I'd like you to provide evidence that any superdeterministic local realistic theory must be of the form you suggested.

IF one asserts that all observed outcomes are the deterministic and inevitable result of initial conditions, and that Bell's Theorem would NOT otherwise apply if this hypothesis were correct - as apparently you and 't Hooft argue - THEN presumably you are saying that non-commuting operators have simultaneously definite real values AND they do NOT obey the predictions of QM even though they appear to do so to humans in experiments. I acknowledge this as a theoretical possibility, just as it is equally possible - and equally meaningless - to acknowledge that God could personally be intervening in every experiment done by man to throw us off track by making it look as if QM is correct.

I think you can figure this out as easily as anyone: IF Locality holds (the point of positing Superdeterminism in the first place) AND initial conditions control the ultimate outcome, THEN the outcome of every future experiment that is to be run must be present in every local area of the universe (so that the results correlate when brought together). The burden would be on the developer of this hypothesis to provide some explanation of how that might work. Thus, I will critique an actual superdeterministic theory once there is one.
 
  • #140


DrChinese said:
This is the collapse of the wavefunction, and I think this manifests itself identically whether we are talking about 1 particle or a pair of entangled particles.

Any single photon (say emitted from an electron) has a chance of going anywhere and being absorbed. The odds of it being absorbed at Alice's detector are A, and the odds of it being absorbed at Bob's detector are B. And so on for any number of possible targets, some of which could be light years away. When we observe it at Alice, that means it is NOT at Bob or any of the other targets. Yet clearly there was a wave packet moving through space - that's what experiments like the Double Slit show, because there is interference from the various possible paths. And yet there we are at the end: the photon is detected in one and only one spot. And the odds collapse to zero everywhere else. And that collapse would be instantaneous as best as I can tell.

So this is analogous to the mysterious nature of entanglement, yet I don't think it is really any different. Except that entanglement involves an ensemble of particles.

Sorry to go back a bit, but quick question...
presumably we can still detect both of a pair of entangled photons at times and places that are remote from each other? Can't think how else we would verify such entanglement actually happens otherwise...
If there is a time difference between the first being detected and the second, what happens to the second one? Is it also 'collapsed' but still moving? My head hurts...
 
  • #141


DrChinese said:
The burden would be on the developer of this hypothesis to provide some explanation of how that might work. Thus, I will critique an actual superdeterministic theory once there is one.

Yes, and my point is in fact that I don't even see how such a superdeterministic theory could even be shown to work. What's proposed would be to demonstrate, through a mathematical theorem of relative simplicity (say, less than 200 pages :-), that the proposed local dynamics is "equivalent" to quantum mechanics - I suppose a bit like the proof that BM is equivalent to quantum theory; that must be what proponents of looking for a superdeterministic theory are hoping for.
But I don't see how that could be the case, because in normal quantum mechanics (as in BM for instance) you impose *externally* that Joe is going to measure along theta1 and Jane along theta2. You PUT IN BY HAND the experimental choice.

The superdeterministic theory, on the other hand, gets its "superdeterminism" from the fact that with given initial conditions (and according to 't Hooft, these don't even have to be exceptional) and the proposed dynamics, Joe can't do anything else but pick theta1, and Jane can't do anything else but pick theta2. So the class of possible situations described by quantum theory is much larger (you could pick all pairs theta1 and theta2) than the class of situations described by the superdeterministic theory (which, of course, according to that theory, are the only ones that are actually possible): just one pair (theta1,theta2) or maybe just a limited set (theta1,theta2).

So there cannot be a simple theorem that demonstrates in all generality that both are equivalent: our superdeterminist must show WHAT pairs will result from the dynamics. And to do that, he will have to work out in all detail how these angles are picked, and as there are remote stars, brains, or computers with all thinkable algorithms in the loop, he would have to follow through all details there - which is FAPP impossible.
 
  • #142
Entanglement & Bell's Theorem

Bell's Inequality:
Let's say there is a hidden variable implanted in each entangled particle at 'birth' - for one it defines 'I am an upspinner' and for the other 'I am a downspinner'.

This is held in a secret compartment, to be revealed only when the particles are observed.

So we work out probabilities according to that theory and test accordingly, and the results we get show that this theory cannot be correct.

Then QM gives us a slightly different answer, because it's not using a hidden-variables theory - QM obeys 'We have no exact spin, either of us, until we are observed; then we use probability to give you your results'. This scenario produces a slightly different result on testing - and it is the one we actually get. (So hidden variables cannot be correct.)

Bell's Theorem is showing us that 'particles are in both states until observed' is what is happening, rather than 'there are hidden variables controlling these observables'.



- (maybe hidden complex functions might do the trick!)
 
  • #143


wawenspop said:
- (maybe hidden complex functions might do the trick!)
Nope, there is no requirement on the "container" of the "hidden" information - well, there's one: it needs to be from a measurable space over which a probability measure can be defined.
 
  • #144


DrChinese said:
I think you can figure this out as easily as anyone: IF Locality holds (the point of positing Superdeterminism in the first place) AND initial conditions control the ultimate outcome, THEN the outcome of every future experiment that is to be run must be present in every local area of the universe (so that the results correlate when brought together). The burden would be on the developer of this hypothesis to provide some explanation of how that might work. Thus, I will critique an actual superdeterministic theory once there is one.

vanesch said:
Yes, and my point is in fact that I don't even see how such a superdeterministic theory could even be shown to work. What's proposed would be to demonstrate, through a mathematical theorem of relative simplicity (say, less than 200 pages :-), that the proposed local dynamics is "equivalent" to quantum mechanics - I suppose a bit like the proof that BM is equivalent to quantum theory; that must be what proponents of looking for a superdeterministic theory are hoping for.
But I don't see how that could be the case, because in normal quantum mechanics (as in BM for instance) you impose *externally* that Joe is going to measure along theta1 and Jane along theta2. You PUT IN BY HAND the experimental choice.

The superdeterministic theory, on the other hand, gets its "superdeterminism" from the fact that with given initial conditions (and according to 't Hooft, these don't even have to be exceptional) and the proposed dynamics, Joe can't do anything else but pick theta1, and Jane can't do anything else but pick theta2. So the class of possible situations described by quantum theory is much larger (you could pick all pairs theta1 and theta2) than the class of situations described by the superdeterministic theory (which, of course, according to that theory, are the only ones that are actually possible): just one pair (theta1,theta2) or maybe just a limited set (theta1,theta2).

So there cannot be a simple theorem that demonstrates in all generality that both are equivalent: our superdeterminist must show WHAT pairs will result from the dynamics. And to do that, he will have to work out in all detail how these angles are picked, and as there are remote stars, brains, or computers with all thinkable algorithms in the loop, he would have to follow through all details there - which is FAPP impossible.

OK, let me present my understanding of how a superdeterministic theory might be developed and verified against standard QM.

1. Requirements

The theory must necessarily contain a long-range force, otherwise it cannot elude Bell's theorem. Maxwell's theory or Einstein's theory of gravity are such theories. We also want that force to be local. Probably, that long-range force should not decrease with the distance, something like the quantum force of Bohm's theory.

The theory must give a clear mathematical description of the emission process so that one can quantitatively relate the spin of the entangled particles to the dynamics of the entire system.

2. Development and testing

The only way to actually see the theory in action is, IMO, a computer simulation. In order to be able to do the simulation in a short enough time one must find the simplest microscopic EPR test. For example, we could start with a "universe" that only contains an atom in an excited state (source) and two molecules that can absorb the emitted photons (detectors), so that we could speak of a sort of measurement taking place.

The initial states of the "source" and "detectors" given, one could calculate the probability of the "source" to emit, and also the measurement results. By integrating over all possible initial states one should recover both the probability of emission as given by QM and also the prediction regarding the Bell tests.

If such a theory is indeed possible one could then try to make statistical generalizations so that experiments involving stars or brains could be covered. However I don't see this as extremely important as even the standard QM cannot be verified on systems containing more than a few particles because of computational problems.
 
  • #145


moving_on said:
If there is a time difference between the first being detected and the second, what happens to the second one? Is it also 'collapsed' but still moving? My head hurts...

There is no apparent difference in the outcome if Alice is observed before Bob, or vice versa. Or if they are observed "simultaneously" (if that were possible), for that matter. Now, I say no apparent difference because, as it happens, the QM prediction is the same either way. There might be a difference - we don't know - but there is none detected (so far) in experiments and none predicted by theory.

So to answer your question, there is no obvious change to the behavior of Bob after an observation of Alice (assuming Alice is observed first). Both entangled photons behave as predicted; observation of Alice causes Bob to act "as if" a message had instantaneously been sent letting Bob know what the result of the Alice observation was - so Bob could act accordingly. Now, I said "as if" because I have no idea if anything like this actually happens. The actual mechanism has eluded the minds of physicists. Every hypothesis has problems of one kind or another, and there is no real consensus.
 
  • #146


DrChinese said:
Bell Realism is the assumption that a particle has a specific property independent of the act of observing that property. To paraphrase Einstein's words, the moon is there even when I am not looking at it. Although Bell did not use the word "vital" to describe it, it is just as important to the paper as Bell Locality is. They are not the same thing, and they are expressed differently in the proof.
Where are they expressed differently? Recall in my "easy version" proof, the only assumption you need is:

A = AB + Ab

or "what happens at A is independent of what happens at B".

How are "locality" and "realism" distinct in that assumption?
 
  • #147


ueit said:
OK, let me present my understanding of how a superdeterministic theory might be developed and verified against standard QM.

1. Requirements

The theory must necessarily contain a long-range force, otherwise it cannot elude Bell's theorem. Maxwell's theory or Einstein's theory of gravity are such theories. We also want that force to be local. Probably, that long-range force should not decrease with the distance, something like the quantum force of Bohm's theory.

The theory must give a clear mathematical description of the emission process so that one can quantitatively relate the spin of the entangled particles to the dynamics of the entire system.

I think you are seeing the basic issues... entirely new mechanisms are required and once their assumptions are spelled out, it will be clear that the result is an ad hoc theory with a lot of baggage. Here is an example of the difficulty:

I. The Experiments

Experiment A: I simply hold Alice and Bob's observations at a 120 degree difference (i.e. static, no change from one reading to the next) and collect a sample of 10,000 readings. No choice is involved, at least from trial to trial. I expect the results to be a correlation rate of .25 as predicted by QM.

Experiment B: I set Alice and Bob's observations at a 120 degree difference (but dynamically, in which I personally "randomly" choose between whether the difference is to be +120 or -120 degrees by changing the orientation only of Bob's apparatus) and collect a sample of 10,000 readings. I expect the results to be a correlation rate of .25 as predicted by QM.

Experiment C: I force Alice and Bob's observations to be at a 120 degree difference by a dynamic mechanism (described in next paragraph) - choosing between whether the difference is +120 or -120 degrees by again changing the orientation only of Bob's apparatus -and collect a sample of 10,000 readings. I expect the results to be a correlation rate of .25 as predicted by QM.

This dynamic mechanism is as follows: I use a radioactive sample to generate random numbers. Say I use an algorithm based on the time of detection of the radioactive particle. If it ends in an even number then I set at +120 degrees, if an odd number I set at -120 degrees. This is done while Alice and Bob are in flight. Therefore, there is no possibility of an FTL influence.
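A quick sketch (mine, not from the original post) of what QM predicts for the three experiments above: the match rate depends only on the relative angle, cos^2(delta), so it comes out at 0.25 whether delta is fixed at +120 degrees, flipped by hand, or keyed off a simulated decay timestamp. Note this simply draws outcomes from the QM match probability; it is not a candidate mechanism, superdeterministic or otherwise, and the "decay clock" variable is invented for illustration.

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def match_rate(deltas_deg):
    """Simulate N entangled pairs; each pair matches with probability cos^2(delta)."""
    p_match = np.cos(np.deg2rad(deltas_deg)) ** 2
    return np.mean(rng.random(len(deltas_deg)) < p_match)

# Experiment A: static +120-degree difference on every trial.
delta_A = np.full(N, 120.0)

# Experiment B: the experimenter "randomly" flips Bob's side between +120 and -120.
delta_B = rng.choice([120.0, -120.0], size=N)

# Experiment C: the sign is keyed off the parity of a simulated decay timestamp.
decay_times = rng.integers(0, 10**6, size=N)            # stand-in for detection times
delta_C = np.where(decay_times % 2 == 0, 120.0, -120.0)

for label, deltas in [("A", delta_A), ("B", delta_B), ("C", delta_C)]:
    print(label, round(match_rate(deltas), 3))           # all come out ~0.25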


II. Discussion of Issues

What are the issues here? All of the results are identical, as predicted by QM. But the Superdeterministic theory must now explain HOW that is possible because the conditions are being varied, in effect, by placing an experimental "barrier" between the initial time T=0 and the time of the actual test. Since A, B and C have different ways that the choice is made for the observation, we need the superdeterministic theory to explain how these 3 variations end up with identical results. This is also true for an infinite (or at least a large :) number of other variations (D,E,F...) that you or I can dream up.

So: how do the initial conditions NOT change the outcome when radioactivity (a random quantum process) is responsible for the selection of the measurement settings? Gee, there must be a causal connection between the radiation and the algorithm I am using to key off of as well. Then we have a pretty big conspiracy going on between the different forces, quantum events and macroscopic objects, if we are to get the desired results.

Since the distant results are to be correlated, and the theory is supposed to be local, the hypothetical causal connection must be carried locally too. So all of this must be hiding somewhere in Alice and Bob; i.e. there must be information contained within these photons. And yet, the photons didn't even exist before they were created in the source laser.


III. Conclusion

So it should be clear that any proposed solution will suffer from an extreme case of ad hoc description which becomes very complicated very quickly. I don't think this is useful, and I don't think we are any longer talking about Bell's Theorem. And I don't think the theory we arrive at could withstand the objective assault that it would be subjected to. As I said to begin with, we could just as easily postulate that God personally tricks us into witnessing the results by biasing Bell tests... and this is just as reasonable (or unreasonable) as any Superdeterministic theory.
 
  • #148


peter0302 said:
Where are they expressed differently? Recall in my "easy version" proof, the only assumption you need is:

A = AB + Ab

or "what happens at A is independent of what happens at B".

How are "locality" and "realism" distinct in that assumption?

Actually, A=AB + Ab is measured and need not be assumed.

What needs to be assumed is that AB = ABC + ABc as that is what fails. That is the Realism requirement. So you might also express that in your terms by saying that A = AB + Ab for all A, B, C... simultaneously. There is no C in QM, which in Einstein's mind meant that QM was incomplete.

The Locality requirement is that f(A) = f(A|B), i.e. that the A result does not depend on the B setting, and vice versa. Any such dependence carried by influences propagating at c or less has been experimentally ruled out, since the settings can be chosen while the photons are in flight.
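To make the Realism side concrete, here is a small sketch (my own, not part of the original post): if every pair carries predetermined +/- answers for all three angles A, B, C simultaneously (the AB = ABC + ABc assumption), a brute-force check over the eight possible assignments shows the average match rate across the three angle pairs can never drop below 1/3 - while QM predicts cos^2(120 deg) = 0.25 for settings 120 degrees apart.

Code:
from itertools import product

worst = 1.0
for a, b, c in product([+1, -1], repeat=3):     # predetermined answers at angles A, B, C
    pairs = [(a, b), (b, c), (a, c)]
    match_fraction = sum(x == y for x, y in pairs) / 3
    worst = min(worst, match_fraction)

print(worst)   # 1/3: no assignment of simultaneous values can get below this
print(0.25)    # the QM (and experimental) match rate for settings 120 degrees apart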
 
  • #149


ueit said:
OK, let me present my understanding of how a superdeterministic theory might be developed and verified against standard QM.

1. Requirements

The theory must necessarily contain a long-range force, otherwise it cannot elude Bell's theorem. Maxwell's theory or Einstein's theory of gravity are such theories. We also want that force to be local. Probably, that long-range force should not decrease with distance, something like the quantum force of Bohm's theory.

?? If there is a long-range force that's going to do the thing, then it is not local, right? The quantum force in BM is not local, Newtonian gravity is not local, electrostatics is not local. What is local is a local field propagation equation, like Maxwell's equations or Einstein's equations. If you allow for long-range forces, then you don't NEED superdeterminism, as Bell isn't supposed to hold in that case. No, the only things allowed are local interactions. That's what makes superdeterminism so hard to handle: the correlations must arise through a whole chain of local interactions, from a certain "common initial state" up to the actual manifestation of the correlations, through the precise tuning of the "choices" of measurement in agreement with the particular sets of particles that have been emitted.

The theory must give a clear mathematical description of the emission process so that one can quantitatively relate the spin of the entangled particles to the dynamics of the entire system.

Yes, but also with the measurement process!

2. Development and testing

The only way to actually see the theory in action is, IMO, a computer simulation. In order to be able to do the simulation in a short enough time, one must find the simplest microscopic EPR test.

In fact, that wouldn't be ok. In order for superdeterminism to hold, it must also do it in *the most complicated EPR test possible*.

For example we could start with a "universe" that only contains an atom in an excited state (source) and two molecules that can absorb the emitted photons (detectors), so that we could speak about a sort of measurement taking place.

That would be a start, but then you'd have to show that those molecules appear in exactly those orientations, each time, to measure the photons along the right axis. And then you must show that WHATEVER I ADD is not going to change this. *This* is, IMO, the impossible task of showing superdeterminism.

Because it would be sufficient to find one single setup that doesn't orient the "detecting molecules" in exactly the anticipated way to undo the entire scheme. Now, I know your objection to that: something like "conservation of energy" is superficially in the same situation, and we don't have to follow up on all possible engines to show the first law of thermodynamics. But there's a difference. In things like conservation of energy, there is a clear physical parameter which is conserved by each interaction, and hence also overall. I don't see how one could introduce something like "conservation of choice" that guarantees that *this* photon, no matter what, is going to be analysed only with *that* analyser angle. If such a thing ever existed, it would mean we would have a relatively simple "thermodynamics of (non)free will" in all generality.


If such a theory is indeed possible, one could then try to make statistical generalizations so that experiments involving stars or brains could be covered. However, I don't see this as extremely important, as even standard QM cannot be verified on systems containing more than a few particles because of computational problems.

No, there's a big difference. In QM, or classical physics, we *don't have to* follow up through all the nitty-gritty detail, because it doesn't involve the *essential effect*. If a photon is detected with a PM, well, that PM is going to generate a signal and all that, and this is going to be recorded, etc. - but we don't need the details here because they don't matter. They don't constitute the essential effect we are observing (detecting a photon). So this can be abstracted away without any difficulty.

However, if the process that makes Joe choose angle theta1 is the essential mechanism, this is different.
 
  • #150


The superdeterministic theory, on the other hand, gets its "superdeterminism" from the fact that with given initial conditions (and according to 't Hooft, these don't even have to be exceptional) and the proposed dynamics, Joe can't do anything else but pick theta1, and Jane can't do anything else but pick theta2. So the class of possible situations described by quantum theory is much larger (you could pick all pairs theta1 and theta2) than the class of situations described by the superdeterministic theory (which, of course, according to that theory, are the only ones that are actually possible): just one pair (theta1, theta2), or maybe just a limited set of pairs.

Yes, one has to assume here that there exists some set of allowed initial conditions that will yield the predictions of quantum mechanics, including the violations of Bell's inequalities. That in turn implies that the counterfactual situations cannot exist. But it doesn't really imply that you couldn't have chosen any other setting for the polarizers. It simply means that if you had done that, then many other things would necessarily also be different, in such a way as to have affected the outcome of the experiments. So, the choice of the polarizer's setting would be correlated with the system.

Now, counterfactual situations are problematic in general, but since we assume that what we decide to measure can be considered to be uncorrelated with the system's state, we can pretend as if the counterfactuals do exist.

We know that the early universe was in a low-entropy state. If we were to replace the exact microstate the universe is in today by a randomly chosen microstate corresponding to the same macrostate, then evolving it back in time would lead to the entropy increasing (with almost 100% probability). So, such a randomly chosen microstate cannot have evolved from any physically acceptable initial conditions at all.

When we do experiments at the macro level, this is not relevant, because for each macrostate (counterfactual or not) you can find many microstates with acceptable initial conditions. But at the micro level, this may no longer be the case.
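A toy illustration of this asymmetry (my own, using the Kac ring model rather than anything from the thread): the dynamics is deterministic and exactly reversible, yet only the particular microstate that actually evolved from the low-entropy start returns to it when run backwards; a randomly chosen microstate with the same macroscopic colour fraction, run backwards, just heads toward maximum entropy instead.

Code:
import numpy as np

rng = np.random.default_rng(1)
N, T = 20_000, 5
marked = rng.random(N) < 0.1                  # edge i -> i+1 flips a ball's colour
flip = np.where(marked, -1, 1)

def forward(s):
    """One forward step: every ball moves one site, flipping at marked edges."""
    return np.roll(s * flip, 1)

def backward(s):
    """Exact inverse of forward()."""
    return np.roll(s, -1) * flip

def white_fraction(s):
    return float(np.mean(s == 1))

# "Early universe": the low-entropy state, all white.
s = np.ones(N, dtype=int)
for _ in range(T):
    s = forward(s)
today_exact = s                               # the microstate that actually evolved
print("today:", round(white_fraction(today_exact), 3))             # ~0.66

# (a) The exact microstate, run backwards, recovers the low-entropy past.
s = today_exact.copy()
for _ in range(T):
    s = backward(s)
print("exact microstate, run back:", round(white_fraction(s), 3))  # 1.0

# (b) A random microstate of the same macrostate does not: it heads toward 50/50.
s = rng.permutation(today_exact)
for _ in range(T):
    s = backward(s)
print("random microstate, run back:", round(white_fraction(s), 3))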
 