Has Zeilinger disproven de Broglie-Bohm?

Thread starter: dipstik
I read (in Scientific American) that Zeilinger had disproven some non-local hidden variable theories. If he claims to have done this, does anyone know the article?

I'm also having trouble with determinism vs. realism.

thanks
 
I vaguely remember hearing Zeilinger speak about doing a Bell-type experiment over a sufficiently large length scale to close one of those "loopholes" (guaranteeing that the measurements are space-like separated). If you want his articles, just search on Google Scholar! But you shouldn't be interested in hidden variable theories: all the current evidence suggests strongly that such theories are impossible, and even if they aren't impossible they certainly aren't viable yet whilst mainstream quantum mechanics works.

Realism? If you doubt this, it will be difficult to formulate any physics.

Determinism? Philosophically, do you want it or not? Whether QM is deterministic is basically just a question of your interpretation of the formalism (eg. Copenhagen vs Everett). Historically, the deterministic nature of classical physics has caused enough headaches itself.
 
cesiumfrog said:
But you shouldn't be interested in hidden variable theories: all the current evidence suggests strongly that such theories are impossible, and even if they aren't impossible they certainly aren't viable yet whilst mainstream quantum mechanics works.
What evidence suggests nonlocal hidden variable theories are impossible? Bell's theorem only rules out local hidden variables. As I understand it, it's already been proven that Bohm's interpretation makes identical predictions as orthodox nonrelativistic quantum mechanics, although I don't think it's known whether one can create a Bohmian version of quantum field theory.
 
Also, if you're looking for the Zeilinger result, it's probably the one discussed on this thread. However, this only applies to nonlocal hidden variables theories which respect a condition known as "outcome independence" (meaning that the outcome of one experimenter's measurement is independent of the outcome of the other experimenter's measurement), but Bohm's interpretation does not respect this condition so it isn't ruled out by Zeilinger's result--see here for some more info.
 
JesseM said:
although I don't think it's known whether one can create a Bohmian version of quantum field theory.
One can do that, but not in such a simple and elegant way as for nonrelativistic particles. There is some evidence that the Bohmian interpretation of strings may make the whole picture much simpler. As a bonus, the unification of two "not-even-wrong" theories (Bohmian mechanics and string theory) provides testable predictions, suggesting that the theory could be "even-right".
 
cesiumfrog said:
Realism? If you doubt this, it will be difficult to formulate any physics.

That's not a fair statement by any means. Everything in QM is fully consistent with a non-realistic explanation, although that is not the only possible explanation. So apparently it IS possible to formulate physics if you doubt realism.

The realism in question is a very specific kind of realism: that all possible observables have definite values at all times. If you accept the non-realism possible under QM, then the limits of realism are defined by the Heisenberg Uncertainty Principle.
 
My thoughts are the same as DrChinese's, but I would like to point out that the "very specific kind of realism" he discusses is determinism -- not in the sense of "given the initial state, the future state is determined" but rather in the sense of having definite values.
 
Crosson said:
the "very specific kind of realism" [Dr Chinese] discusses is [..] having definite values.
Isn't that just mainstream QM?
 
DrChinese said:
The realism in question is a very specific kind of realism: that all possible observables have definite values at all times. If you accept the non-realism possible under QM, then the limits of realism are defined by the Heisenberg Uncertainty Principle.
I'm not sure this is a perfect definition--would you consider Bohmian mechanics to be a (nonlocal) "realistic" theory? In Bohmian mechanics my understanding is that only position has a well-defined value, although the results of any other types of measurements follow in a deterministic way from the particle's position and the state of the pilot wave. See the section on quantum observables from the Stanford Encyclopedia of Philosophy article on Bohmian mechanics.
 
  • #10
JesseM said:
I'm not sure this is a perfect definition--would you consider Bohmian mechanics to be a (nonlocal) "realistic" theory? In Bohmian mechanics my understanding is that only position has a well-defined value, although the results of any other types of measurements follow in a deterministic way from the particle's position and the state of the pilot wave. See the section on quantum observables from the Stanford Encyclopedia of Philosophy article on Bohmian mechanics.

What I am referring to is effectively Bell realism, as opposed to Bell locality. The issue being whether, as Einstein assumed, "the moon was there when you weren't looking at it." In this case, the question is not so much whether the particle exists, but whether all possible observables have definite values independent of the act of measurement.

Recall that with our entangled photons, they can be measured at identical settings and they will always give predictable results. This implies realism, at first glance. But when you add Bell's Theorem, you encounter the effects of the act of measurement. And that implies that either our realism - or our locality - assumption must fall. Zeilinger's experiment tends to point to realism as the assumption to drop. But as mentioned earlier in the thread, it is not clear that this is absolute.
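To make the Bell's-theorem step concrete, here is a small Python sketch (my own toy illustration, not Zeilinger's actual setup): the quantum singlet correlation E(a, b) = -cos(a - b) pushes the CHSH combination to 2*sqrt(2), while a simple local-hidden-variable model, no matter how it is tuned, stays within Bell's bound of 2.

```python
# CHSH check: quantum singlet correlation vs. a toy local-hidden-variable
# (LHV) model, at the standard CHSH-optimal settings. Illustrative only.
import math
import random

def chsh(E):
    """CHSH combination S for a correlation function E(a, b)."""
    a, ap = 0.0, math.pi / 2               # Alice's two settings
    b, bp = math.pi / 4, 3 * math.pi / 4   # Bob's two settings
    return E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)

# Quantum prediction for the spin-1/2 singlet state.
E_qm = lambda a, b: -math.cos(a - b)

def E_lhv(a, b, trials=100_000, seed=1):
    """Monte Carlo correlation for a toy LHV model: each pair carries a
    hidden angle lam fixed at the source; outcomes are sign(cos(lam - setting))
    and its negation. Bell's theorem guarantees |S| <= 2 for any such model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        lam = rng.uniform(0, 2 * math.pi)
        A = 1 if math.cos(lam - a) >= 0 else -1
        B = -1 if math.cos(lam - b) >= 0 else 1
        total += A * B
    return total / trials

print(abs(chsh(E_qm)))                       # 2*sqrt(2) ~ 2.83: violates the bound of 2
print(abs(chsh(lambda a, b: E_lhv(a, b))))   # ~2, never above it
```

The hidden angle lam plays the role of Bell's lambda: it is set at the source, before either measurement, which is exactly the assumption the quantum value breaks.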
 
  • #11
DrChinese said:
What I am referring to is effectively Bell realism, as opposed to Bell locality. The issue being whether, as Einstein assumed, "the moon was there when you weren't looking at it." In this case, the question is not so much whether the particle exists, but whether all possible observables have definite values independent of the act of measurement.

Recall that with our entangled photons, they can be measured at identical settings and they will always give predictable results. This implies realism, at first glance. But when you add Bell's Theorem, you encounter the effects of the act of measurement. And that implies that either our realism - or our locality - assumption must fall. Zeilinger's experiment tends to point to realism as the assumption to drop. But as mentioned earlier in the thread, it is not clear that this is absolute.
Bell's specific notion of "local realism" seems clear enough, but what I'm wondering is whether "realism" on its own is sufficiently clearly-defined that one could clearly say whether a given nonlocal interpretation or theory is realistic or not. For example, as I asked above, is Bohmian mechanics a realist interpretation? How about the MWI? (and in the latter case, can the MWI be called a local realist interpretation even if it doesn't really match Bell's notion of local realism?) I'd be inclined to say yes in both cases since both interpretations give you an objective picture of the universe as a self-contained system where measurements are treated using the same laws that govern quantum systems when they're not being measured, but I don't know if there's any official consensus on how realism is supposed to be defined.
 
  • #12
cesiumfrog said:
Isn't that just mainstream QM?
No, in mainstream QM all observables don't have definite values at all times (i.e. even when they're not being measured)--for example, a particle doesn't have a definite value for position and momentum simultaneously.
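The position/momentum trade-off can be checked numerically: a Gaussian wave packet saturates the Heisenberg bound, with the product of the spreads equal to hbar/2. A minimal sketch (units with hbar = 1; the grid size and width are arbitrary choices of mine):

```python
# Numerical check of the uncertainty relation for a Gaussian wave packet,
# which saturates the bound: delta_x * delta_p = hbar/2 (hbar = 1 here).
import numpy as np

hbar = 1.0
sigma = 1.3                 # position-space width parameter
N, L = 4096, 80.0           # grid points and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
h = x[1] - x[0]             # grid spacing

psi = np.exp(-x**2 / (4 * sigma**2))
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * h)   # normalize

# spread in position
prob_x = np.abs(psi)**2
mean_x = np.sum(x * prob_x) * h
delta_x = np.sqrt(np.sum((x - mean_x)**2 * prob_x) * h)

# momentum-space wavefunction via FFT (phases don't affect |phi|^2)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=h)
dp = p[1] - p[0]
phi = np.fft.fft(psi) * h / np.sqrt(2 * np.pi * hbar)
prob_p = np.abs(phi)**2
mean_p = np.sum(p * prob_p) * dp
delta_p = np.sqrt(np.sum((p - mean_p)**2 * prob_p) * dp)

print(delta_x * delta_p)    # ~0.5, i.e. hbar/2
```

Narrowing sigma shrinks delta_x but widens delta_p in exact proportion; no state does better than the product hbar/2.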
 
  • #13
JesseM said:
Bell's specific notion of "local realism" seems clear enough, but what I'm wondering is whether "realism" on its own is sufficiently clearly-defined
Einstein defined it as follows, "there exists a physical reality independent of substantiation and perception". Rejecting realism means that reality is different for different observers. Bohmian mechanics retains realism and rejects locality. MWI retains locality and rejects realism.
 
  • #14
JesseM said:
what I'm wondering is whether "realism" on its own is sufficiently clearly-defined
It seems like much disagreement stems from that word being interpreted here with a number of conflicting meanings.

So originally, I was commenting on xantox's notion ("metaphysical realism"?) that there exists something independent of, and therefore agreed upon by, all observers (though I wouldn't say MWI rejects this).
 
  • #15
dipstik said:
I read (in Scientific American) that Zeilinger had disproven some non-local hidden variable theories. If he claims to have done this, does anyone know the article?

I'm also having trouble with determinism vs. realism.

thanks


The weaker sense of determinism implies that all events have causes (Bohm's interpretation implies primarily this, not necessarily predetermination), whilst the stronger sense implies predetermination of all events (aka strong determinism, superdeterminism, etc.). Realism is a broad philosophical position, ranging from the mere assumption that observational statements (about macro objects) are about an external reality independent of mind, to strong versions of scientific realism which hold that there is an external reality independent of mind which we can perceive and understand at least partially (via direct observations and scientific theories, with all unobservables in our best existing theories being real).*

As for Zeilinger's experiment (apart from failing to discard Bohmian mechanics, and pilot-wave solutions more generally), it has to be observed that the definition of 'realism' used in the premises is far too strong (basically noncontextual); it is much more reasonable to adopt a contextual approach (where the definition of 'realism' is relaxed to take into account the influence of the measurement device, something valid at the macro level too). I'm afraid it will be extremely difficult to eliminate contextual non-local hidden-variable theories... for example, even an experimental test of the Kochen-Specker theorem will not be enough.

Finally, Zeilinger's experiment (as does Aspect's, by the way) fails to discard superdeterminism (which rejects counterfactual definiteness) and other 'conspiracy theories' in which the experimenters have no freedom of choice, so even local deterministic hidden-variable theories are still alive (of course physicists reject superdeterminism because it lacks, to paraphrase Duhem, 'le bon sens', but this is far from counting as a rejection). The only rational conclusion one can draw from the long history of (failed) attempts to discard hidden variables (there are enough no-go theorems here) is that we should refrain from claims of 'refuting hidden variables' before having very, very strong evidence (it should not be forgotten that for 20 years almost all physicists believed that von Neumann's fifth postulate discarded all hidden-variable theories... until 1952 when, in Bell's words, 'the impossible happened').


*Bohm's interpretation is definitely realistic in nature although it departs to some extent from classical physics and the program is far from being finished (the proposed, more detailed, ontology - the 'holographic hypothesis' - is far from being testable at this time)
 
  • #16
Superdeterminism: within a millisecond after the big bang, all of the particles' positions, locations, temperatures (etc.) determined that I would be wearing a blue shirt today? The particles' positions, locations, and temperatures (etc.) determined that I would post this statement in this thread on physicsforums.com on 11-19-07?
 
  • #17
IMP said:
Superdeterminism: within a millisecond after the big bang, all of the particles' positions, locations, temperatures (etc.) determined that I would be wearing a blue shirt today?

No, that's just ordinary determinism.

Superdeterminism is something that often appears in hidden-variable theories, where you insist that every possible measurement has a definite result AND that there are no superluminal effects; to reproduce the results of quantum experiments you then have to explain the apparent conspiracy by which you always chose to measure exactly those particles whose properties (contrary to usual probability) already happened to be just so as to give the results predicted by mainstream QM. Such results would be impossible to explain if they involved (for all practical purposes) "uncorrelated" sets of data, so superdeterminism is an appeal to a determinism in which no two things can ever truly be treated as uncorrelated (thereby undermining mainstream statistical analysis).
 
  • #18
Yes, superdeterminism is the loophole that 't Hooft has argued saves his deterministic theory. If the universe is deterministic, then you cannot consider a counterfactual situation where the experimenter made a different choice while keeping everything else the same. That is impossible because if you evolve the counterfactual state back in time, the history of the universe leading to that state will diverge quite rapidly from the known history of the universe.

In fact, to get the history even remotely correct you would need to make sure that the entropy of the universe decreases as you evolve the counterfactual state back in time. That can only be achieved by making changes throughout the entire universe to put it in a specially prepared state.
 
  • #19
Hi Dr. Chinese, Jesse and I have discussed some of these matters before. Hi Jesse.

I think you're speaking, Dr., of the two principles, one of which must be violated for the quantum statistics based on uncertainty to be correct:
1. Bell realism, the principle that all variables have real values whether they can be measured or not under Uncertainty;
2. Locality, the principle that local effects stem from local causes.
It's my understanding that violation of Bell realism means (for example -- and it's the most-used method in Bell tests, I believe) that when spin is measured on one axis of a particle, spin does not have a definite value on any other axis at that time. This accounts for the measurements of entangled particles on the same axis being correlated, by conservation of angular momentum, since they are not conjugate, and also for the measurements of two entangled particles on different axes not being correlated according to the applicable Bell inequality (depending on the number of states measured).

On the other hand, violation of locality says that there is never a correlation between the values except at the moment when the values are measured, at which point a non-local "channel" (I believe that's the current terminology) of unknown type "collapses the wave function" (or whatever terminology fits your favorite interpretation -- I'm sticking with Copenhagen for clarity, but I like TI this week; Jesse, IIRC, and if his opinion has not changed, likes Everett) and by some means causes the conservation law to be observed, if the measurements are in the same plane, or the quantum distribution if they are not. (I'm not a fan of non-locality, thus my tendency to typecast it as a quantum conspiracy theory.) In either case, of course, Bell realism fails; the only question is: does locality fail, or not?

Cramer has proposed an experiment to attempt to differentiate between the two, basically by sending the idler photons from a DCQE through light pipes in order to get enough time delay to determine whether the signal photons stop showing interference before the idler photons have their path "flopped over" to be directed to the welcher weg detectors instead of the QE. This would appear to violate causality, though there is an argument that what's really happening is that the coincidence counting merely allows you to detect what's already there -- that is, to differentiate between two interference patterns whose combination yields the appearance of non-interference when the QE is in use, or not when the welcher weg detector is.

This happens because the QE beam splitter receives idler photons from both slits, whereas the welcher weg beam splitter receives them from only one. The two beams received by the QE beam splitter interfere, expressing the quantum probability set by the slits and sorting the photons by phase, allowing detection of the interference pattern; there is no opportunity for this to occur at the welcher weg beam splitter, because it is only struck by one beam. Note that the QE beam splitter must always receive photons from both welcher weg beams; otherwise, the QE detector(s) are welcher weg detector(s), because all the photons only came from one path. The welcher weg splitter must always receive photons from only one slit; otherwise, it's not welcher weg information. Twist it how you might, these two things must always be true for the experiment to work. This is also the explanation for Malus' Law: phase, and interference, and how phase changes under decoherence. (I am into strong decoherence: that is, interactions are measurements, in the HUP sense, and measurement results in uncertainty of the conjugate parameters.)

Later, after doing a bit of research so I wouldn't look so stupid: I went back and looked, and found this thread. It looks like you guys have found Cramer's daughter's "Retrodiction" post; however, some people seem to think that it will be possible to see interference. I claim it will not, because without the coincidence detector the two interference patterns cannot be disentangled, and without differentiation between them it will be impossible to see anything but non-interference. And I don't see the coincidence detector in the Cramer experiment slide, and notes on the Dopfer slide make it apparent that Cramer intends to eliminate it. One cannot detect a single photon interfering. It's pretty much the sound of one hand clapping.

The idea, of course, is to differentiate between simple Bell non-realism, and more complicated non-locality (which includes Bell non-realism). The problem is, it may be impossible to differentiate them (depending on how baroque you want to get about non-locality) in the case of a negative result, i.e. no interference; the only definitive result is a positive one. And that means seeing interference without the coincidence detector, and that's not possible. All that's accomplished is to force the non-local explanation to be more baroque; it's not eliminated. (Although the idea of making the non-local crowd look more like conspiracy theorists is not unattractive. :D )
 
  • #20
Hey Schneibster. The OP of this thread suggests that Cramer might acknowledge that if his experiment succeeded it would require us to modify the theory of QM somewhat by adding nonlinear effects (meaning that the retrocausal effects would be impossible in 'standard' QM), although the link is to a powerpoint presentation which I don't have the software to read. I think there exist rigorous proofs that Bohmian mechanics must always give the same predictions as standard QM, and it's hard to see how retrocausal effects could be possible in Bohmian mechanics, a totally deterministic theory in which later states can be derived from earlier ones (although I suppose it's conceivable that, being deterministic, earlier states could in some sense 'anticipate' what measurement would be performed later).
 
  • #21
The experiment PDF is gone; I'm looking to see if I saved a copy. I know it wound up being different from the Dopfer experiment; but I can't remember how. The description in the thread is almost good enough for me to imagine it. Oh well, hope I have a copy.
 
  • #22
huh? PDF link still works for me. I'm just disappointed his website has nothing *new* about that experiment. Maybe I should email his co-authors next time (wonder what that undergrad has to say..)?
 
  • #23
He had a document that showed how he intended to run the experiment; it wasn't a slide, looked more like a proposal. The proposal was rather different from the Dopfer experiment. I'm still looking. It was on the UW's server; linked from his page there, I thought. Cramer_2007.pdf or something very similar.
 
  • #24
Yes. It's still there. Still works. Are you having trouble following the links?
 
  • #25
IMP said:
Superdeterminism: within a millisecond after the big bang, all of the particles' positions, locations, temperatures (etc.) determined that I would be wearing a blue shirt today? The particles' positions, locations, and temperatures (etc.) determined that I would post this statement in this thread on physicsforums.com on 11-19-07?


Superdeterminism is basically just another label for strong determinism (whatever the ultimate nature of consciousness): if we could somehow 're-run' the Universe from the beginning, exactly the same things would happen* (the additional assumption that there are no non-local connections is logically distinct; as Bell put it, we do not need non-locality to explain the result of Aspect's experiment if everything is predetermined). See here for some of Bell's thoughts on the subject.


*but not all 'conspiracy theories' - which deny the free will of the experimenters - require it with necessity
 
  • #26
metacristi said:
Superdeterminism is basically just another label for strong determinism

Well, my understanding is that superdeterminism makes the assumption that there are strong correlations between events which have a "long" causal link; in other words, that we cannot assume "for all practical purposes" that events that "seem unrelated" (but which are, of course, in a deterministic universe, related somehow by a common cause in the past) are statistically independent, but rather have *very peculiar* correlations.

Indeed, the point is not so much that "unrelated" events (say, the number of leaves from your neighbour's tree that fell in your garden and the number you threw using 3 dice; which have of course a common origin, namely the state of the universe 5 billion years ago) are not 100% strictly statistically independent, no, they have to be correlated in very peculiar ways (that reproduce, for instance, EPR experiment results). *This* is superdeterminism to me.

This is in fact the theoretical basis of astrology :smile:
 
  • #27
cesiumfrog said:
Yes. It's still there. Still works. Are you having trouble following the links?
You sure it's not cached in your browser? Try Ctrl-reload. IE claims it cannot find the document. It's "Nonlocal_2007.pdf." I've seen it before now, but I can't seem to put my hands on a copy.
 
  • #28
vanesch said:
Well, my understanding is that superdeterminism makes the assumption that there are strong correlations between events which have a "long" causal link;

No such assumption is being made. Superdeterminism assumes that any given state is uniquely determined by the previous state. In some circumstances this may imply "strong correlations between events which have a "long" causal link" in others not. Using Occam's razor, one can stop sooner or later in exploring the causal chain.

in other words, that we cannot assume "for all practical purposes" that events that "seem unrelated" (but which are, of course, in a deterministic universe, related somehow by a common cause in the past), are statistically independent, but rather have *very peculiar* correlations.

Indeed.

Indeed, the point is not so much that "unrelated" events (say, the number of leaves from your neighbour's tree that fell in your garden and the number you threw using 3 dice; which have of course a common origin, namely the state of the universe 5 billion years ago) are not 100% strictly statistically independent, no, they have to be correlated in very peculiar ways (that reproduce, for instance, EPR experiment results). *This* is superdeterminism to me.

On what basis do you find EPR correlations "peculiar"? Do you find, say, energy conservation a "peculiar" fact? I can, for example, construct a very complicated experiment in which energy is transformed in different ways, using all kinds of devices including free-willed experimenters. Should I expect a violation of energy conservation because all those devices "seem unrelated" "for all practical purposes"? I believe not.
As for EPR, if a particle is only emitted when a signal from an available absorber arrives at the source, the "freedom assumption" falls. It is simply irrelevant what the absorber looks like or how it moves. The only thing that matters is that the absorber's motion is predictable, so that it can be "calculated" at the source.

This is in fact the theoretical basis of astrology :smile:

Astrology is cut by Occam's razor.
 
  • #29
From reading the wikipedia article I get the impression that superdeterminism is basically the same as the notion of a "conspiracy" in the initial conditions of the universe, which ensures that the hidden-variables state in which two particles are created will always be correlated with the "choice" of measurements that the experimenters decide to make on them. So, for example, in any trial where the experimenters were predetermined to measure the same spin axis, the particles would always be created with opposite spin states on that axis, but in trials where the experimenters were not predetermined to measure the same spin axis, the hidden spin states of the two particles on any given axis would not necessarily be opposite.

Since in a deterministic universe the state of an experimenter's brain which determines his "choice" of what to measure on a given trial can be influenced by a host of factors in his past which have nothing to do with the creation of the particle (what he had for lunch that day, for example), the only way for such correlations to exist would be to pick very special initial conditions of the universe--the correlations would not be explained by the laws of physics alone (unless this constraint on the initial conditions is itself somehow demanded by the laws of physics).
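A toy model makes the "conspiracy" explicit. In the sketch below (my own illustration, with hypothetical names), the source is simply handed both detector settings before emitting the pair -- exactly the violation of statistical independence that superdeterminism requires -- and it then reproduces the singlet correlation, and hence a CHSH value above 2, with otherwise perfectly local outcomes. The random draws stand in for predetermined hidden variables.

```python
# Toy "superdeterministic" source: it is allowed to know both detector
# settings in advance, so it can prepare each pair to locally reproduce
# the singlet correlation E(a, b) = -cos(a - b). Purely illustrative.
import math
import random

rng = random.Random(0)

def run_trial(a, b):
    """Outcomes (+1/-1) for one pair. The hidden state is sampled
    *conditioned on* the settings -- the step Bell's 'freedom'
    assumption forbids."""
    p_same = (1 - math.cos(a - b)) / 2   # P(A == B) for the singlet
    A = rng.choice([1, -1])
    B = A if rng.random() < p_same else -A
    return A, B

def E(a, b, trials=100_000):
    return sum(A * B for A, B in (run_trial(a, b) for _ in range(trials))) / trials

# CHSH with the standard optimal settings: the toy model matches QM.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))   # ~2.83, above the Bell bound of 2
```

Nothing here is nonlocal or indeterministic in spirit; the entire burden falls on the initial conditions that correlate the source with the future settings, which is why most physicists find the explanation conspiratorial.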
 
  • #30
Schneibster said:
You sure it's not cached in your browser? Try Cntl-reload. IE claims it cannot find the document.

It's currently the first link on http://faculty.washington.edu/jcramer/NLS/NL_signal.htm. I'm not caching it locally; I even checked in IE (though I don't recommend you continue with that browser). Are you behind some kind of filter?
 
  • #31
ueit said:
On what basis do you find EPR correlations "peculiar"? Do you find, say, energy conservation a "peculiar" fact? I can, for example, construct a very complicated experiment in which energy is transformed in different ways, using all kinds of devices including free-willed experimenters. Should I expect a violation of energy conservation because all those devices "seem unrelated" "for all practical purposes"? I believe not.
This is a poor analogy--there would be nothing in any classical experiment that would require the system being investigated to behave as if it "knew" in advance what choice of measurement an experimenter would make, and alter its behavior in anticipation, as would be required to explain EPR correlations in the superdeterminism explanation. In classical experiments we would expect complete statistical independence between the state of the system at moments before a measurement and the experimenter's choice of what measurement to perform (assuming the experiments were repeated multiple times and the experimenters made their choices each time on a whim), while the superdeterminism explanation is explicitly based on rejecting this sort of assumption of statistical independence.
 
  • #32
cesiumfrog said:
It's currently the first link on http://faculty.washington.edu/jcramer/NLS/NL_signal.htm.
I confirm that statement, but I still can't download it. You must be right; it must be my browser. My box is screwed up anyway; I've been lazily not dealing with it, and it looks like it's now becoming obstructive. Perhaps this will motivate me.

cesiumfrog said:
I'm not caching it locally, I even checked in IE (though I don't recommend you continue with that browser). Are you behind some kind of filter?
Thanks for checking. I don't prefer Internet Exploder; Firefox is going to be nice to get back to, once I get motivated.
 
  • #33
JesseM said:
From reading the wikipedia article I get the impression that superdeterminism is basically the same as the notion of a "conspiracy" in the initial conditions of the universe, which ensures that the hidden-variables state in which two particles are created will always be correlated with the "choice" of measurements that the experimenters decide to make on them. So, for example, in any trial where the experimenters were predetermined to measure the same spin axis, the particles would always be created with opposite spin states on that axis, but in trials where the experimenters were not predetermined to measure the same spin axis, the hidden spin states of the two particles on any given axis would not necessarily be opposite.

Since in a deterministic universe the state of an experimenter's brain which determines his "choice" of what to measure on a given trial can be influenced by a host of factors in his past which have nothing to do with the creation of the particle (what he had for lunch that day, for example), the only way for such correlations to exist would be to pick very special initial conditions of the universe--the correlations would not be explained by the laws of physics alone (unless this constraint on the initial conditions is itself somehow demanded by the laws of physics).

Yes, that's also how I see it.
 
  • #34
ueit said:
No such assumption is being made. Superdeterminism assumes that any given state is uniquely determined by the previous state. In some circumstances this may imply "strong correlations between events which have a "long" causal link" in others not.

That's "normal" determinism.

On what basis do you find EPR correlations "peculiar"? Do you find, say, energy conservation a "peculiar" fact? I can, for example, construct a very complicated experiment in which energy is transformed in different ways, using all kinds of devices including free-willed experimenters. Should I expect a violation of energy conservation because all those devices "seem unrelated" "for all practical purposes"? I believe not.

No, because in this case, there is a clear and simple correlation at each step which is evident from the laws of nature. But the obscure correlation which needs to exist between the spins of the emitted pairs of particles and the *choices* that the observers make on each side is NOT the consequence of a straightforward and obvious chain of cause-effect relationships, but must be due to very, very peculiar "initial conditions" far, far back in time, as JesseM pointed out. In other words, this correlation doesn't follow from a straightforward application of the *laws of nature* as we know them, but rather from very, very "improbable" initial conditions billions of years ago.

The difference between normal determinism and superdeterminism is that in normal determinism, we assume that all events which are not obviously related by a *rather straightforward* cause-effect relationship can for all practical purposes be assumed to be statistically independent (even though one might expect *small* deviations from strict statistical independence, depending on the "cutoff" one places on the straightforwardness of the cause-effect relationships). In superdeterminism, we assume that arbitrarily strong correlations can exist over arbitrarily long "chains of cause-effect", such as "emission of a pair of photons" and "Alice's brain deciding to put the analyser to 60 degrees".

As for EPR, if a particle is only emitted when a signal from an available absorber arrives at the source, the "freedom assumption" falls. It is simply irrelevant what the absorber looks like or how it moves. The only thing that matters is that the absorber's motion is predictable, so that it can be "calculated" at the source.

Right, and if that motion is determined by the choices of people, for instance, then this "calculation" must also include the entire dynamics of that person's brain. This is where, in normal determinism, one considers the causal chain that makes "the source calculate" and "the brain think" too long to be statistically correlated.
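To make the statistical-independence ("freedom") point concrete, here is a toy Monte Carlo (a sketch; the deterministic local model `sign(cos(setting - lam))` is an arbitrary illustration, not a model anyone in this thread has proposed). When the shared hidden variable is drawn independently of the analyser settings, the CHSH combination of correlations never exceeds 2, while the quantum singlet prediction reaches 2√2.

```python
import math
import random

random.seed(42)

# Illustrative local hidden-variable model: both wings compute a
# deterministic outcome from the shared hidden variable lam, which is
# drawn INDEPENDENTLY of the settings (the "freedom" assumption).
def outcome(setting, lam):
    return 1 if math.cos(setting - lam) >= 0 else -1

def E_lhv(a, b, n=40000):
    total = 0
    for _ in range(n):
        lam = random.uniform(0.0, 2.0 * math.pi)  # independent of a, b
        total += outcome(a, lam) * outcome(b, lam)
    return total / n

# Quantum singlet-state prediction for spin-1/2 measurements.
def E_qm(a, b):
    return -math.cos(a - b)

def chsh(E):
    a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
    return abs(E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp))

print(chsh(E_lhv))  # close to 2: this local model saturates, but never beats, the bound
print(chsh(E_qm))   # 2*sqrt(2), about 2.83
```

Superdeterminism evades this bound precisely by denying the independence of `lam` from the settings, which is why it is untouched by Bell-type arguments.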
 
  • #35
Perhaps superdeterminism can be more natural if particles don't really exist. You can imagine that there exists some deterministic theory, and that to do statistical computations you need to introduce ghost fields and a ghost Lagrangian which takes the form of the Standard Model.
 
  • #36
JesseM said:
This is a poor analogy--there would be nothing in any classical experiment that would require the system being investigated to behave as if it "knew" in advance what choice of measurement an experimenter would make, and alter its behavior in anticipation, as would be required to explain EPR correlations in the superdeterminism explanation. In classical experiments we would expect complete statistical independence between the state of the system at moments before a measurement and the experimenter's choice of what measurement to perform (assuming the experiments were repeated multiple times and the experimenters made their choices each time on a whim), while the superdeterminism explanation is explicitly based on rejecting this sort of assumption of statistical independence.

The analogy was only intended to show how weak the "seemingly unrelated" argument is. There is very little similarity between a piece of plutonium, a pendulum and a jumping monkey. Nevertheless, we see no problem with energy conservation when all these systems are allowed to interact.
In an EPR experiment all the devices are made from the same quantum particles, so they should all follow the same physical laws. From a microscopic point of view a brain is not much different from a piece of wood.
 
  • #37
ueit said:
The analogy was only intended to show how weak the "seemingly unrelated" argument is. There is very little similarity between a piece of plutonium, a pendulum and a jumping monkey. Nevertheless, we see no problem with energy conservation when all these systems are allowed to interact.
In an EPR experiment all the devices are made from the same quantum particles, so they should all follow the same physical laws. From a microscopic point of view a brain is not much different from a piece of wood.

Yes, but whereas the conservation of energy is "traceable" in each of the interactions, superdeterministic correlations are by definition untraceable. You have no explanation of why the choice made by the brain of Alice has to be correlated with the emission of a photon pair in a PDC crystal, for instance. You just have to say that "these correlations have to be exactly like this and such" for the EPR correlations to emerge. There's no explanation other than that, in the far past, both must have had some common origin. The funny thing is that an obscure common origin in the past generates exactly these correlations, but BIG CHANGES in the present don't affect them (if you let Alice's brain decide, or a random number generator, or a decaying radioactive substance, the correlations are always EXACTLY THE SAME).

With the conservation of energy, we know at each step, at each interaction, that there is conservation of energy (and can work this out in detail if we want to). So the causal link is clear. In superdeterminism, the link is not clear.

As I said, this is the basis of astrology. Given that, say, my love life, and the origin of the solar system have a common origin, it shouldn't be a surprise in superdeterminism that the constellation of the planets is strongly correlated with the ups and downs of my love life.
 
  • #38
ueit said:
The analogy was only intended to show how weak the "seemingly unrelated" argument is. There is very little similarity between a piece of plutonium, a pendulum and a jumping monkey. Nevertheless, we see no problem with energy conservation when all these systems are allowed to interact.

Arguing for "superdeterminism" on this basis is not sensible. There are specific useful theories - which make testable predictions - regarding conservation in interacting systems. That is good science.

On the other hand, superdeterminism in effect postulates that apparently non-interacting systems meet certain requirements - but those requirements are only apparent in very specific cases. There are no useful predictions, and entirely new science is required to explain why there are no other artifacts of superdeterminism. In other words, the cure is worse than the disease it was intended to solve.
 
  • #39
vanesch said:
That's "normal" determinism.

No, "normal determinism" has the "freedom" assumption included. Drop that assumption and you get a nice, logically consistent determinism that is also called superdeterminism.

No, because in this case, there is a clear and simple correlation at each step which is evident from the laws of nature. But the obscure correlation which needs to exist between the spins of the sent couples of particles, and the *choices* that the observers are making on each side, is NOT the consequence of a straightforward and obvious chain of cause-effect relationships, but must be due to very very peculiar "initial conditions" far far behind in time, as JesseM pointed out. In other words, this correlation doesn't follow from a straightforward application of the *laws of nature* as we know them, but rather from very very "improbable" initial conditions billions of years ago.

I've probably described my superdeterministic mechanism very badly because it has nothing to do with the fine tuning of initial conditions. I'll try to make it clearer with another analogy.

An experiment is performed to study molecular fluorescence. A solution containing a fluorescent molecule is prepared. A visible-light detector is used. Now, let's assume that our detector contains the UV source inside it, without the experimenter's knowledge. Also, the mechanism of this emission is not known. No one knows that a UV source is required, because all detectors contain such a source, the source is always on, and the secret is carefully preserved.

The experiment is intended to study how the intensity of the fluorescence radiation varies with the distance to the detector. Strangely enough, the intensity decreases much faster than in the case of "normal" radiation.

Obviously, the explanation for this result is that the fluorescence is caused by the detector itself, so moving the detector changes the source's intensity. The velocity with which the detector is moved also matters, because of the Doppler effect (the incoming UV radiation has a different energy). One can easily see that in this case, maintaining the assumption that the detector's state and the properties of the fluorescence emission are statistically independent gives nonsensical results.
Back to our EPR experiment: if the detector's presence is the direct cause of the emission of the entangled particles, as I propose (an idea also common to Cramer's transactional interpretation), one has to drop the "freedom" assumption, so Bell's theorem cannot be used to reject it.

Sure, in addition to the above mechanism, an extrapolation mechanism is required to explain the delayed-choice experiments: the source must adjust its emission to the future state of the detector. Such a mechanism is known in GR, where a body accelerates toward the future locations of the other bodies. As JesseM pointed out in another discussion on this topic, the mechanism is not perfect, as it works only for uniform and uniformly accelerated motion. Nevertheless, it is good enough that a non-local theory, Newtonian gravity, is used for most practical applications, like spacecraft trajectory calculations and computer simulations of galactic motion.

The difference between normal determinism and superdeterminism is that in normal determinism, we assume that all events which are not obviously related by a *rather straightforward* cause-effect relationship can, for all practical purposes, be assumed to be statistically independent (even though one might expect *small* deviations from strict statistical independence, depending on the "cutoff" one places on the straightforwardness of the cause-effect relationships). In superdeterminism, we assume that arbitrarily strong correlations can exist over arbitrarily long "chains of cause-effect", such as "emission of a pair of photons" and "brain of Alice deciding to put the analyser at 60 degrees".

I hope that the causal relationship detector-source is straightforward enough, in light of the mechanism proposed above. The source reacts to the field produced by all the particles around it, including those in Alice's brain. I see no problem with that.
Gravitationally, the Earth accelerates towards the future positions of the Sun, Mars, Jupiter, and so on. Now, if I take a bunch of asteroids and arrange them in the same way as the particles inside Alice's brain, will you predict that the Earth would simply stop following GR's equations and start being confused about the strange structure near it?

Right, and if that motion is determined by the choices of people, for instance, then this "calculation" must also include the entire dynamics of that person's brain. This is where, in normal determinism, one considers the causal chain that makes "the source calculate" and "the brain think" too long to be statistically correlated.

It depends on what is being tested. If a very delicate experiment is performed and the EM field produced by the brain matters, one cannot ignore it, even in classical determinism. I claim that this is the situation we have in QM. The difference is that one cannot eliminate the problem by increasing the source-detector distance, because what matters is not the intensity of the resultant EM field but the information exchanged at the level of each particle during the interaction that precedes the emission of the entangled pair.

As I said, this is the basis of astrology. Given that, say, my love life, and the origin of the solar system have a common origin, it shouldn't be a surprise in superdeterminism that the constellation of the planets is strongly correlated with the ups and downs of my love life.

In the case of EPR I proposed a clear mechanism that relates the source's emission to the detector's state. Their common origin at the Big Bang is only a necessary condition in this case.
 
  • #40
DrChinese said:
Arguing for "superdeterminism" on this basis is not sensible. There are specific useful theories - which make testable predictions - regarding conservation in interacting systems. That is good science.

On the other hand, superdeterminism in effect postulates that apparently non-interacting systems meet certain requirements - but those requirements are only apparent in very specific cases. There are no useful predictions, and entirely new science is required to explain why there are no other artifacts of superdeterminism. In other words, the cure is worse than the disease it was intended to solve.

The negation of the "freedom" assumption does not come from the superdeterministic part of my idea (the GR-like extrapolation mechanism) but from the emitter-absorber theory at the basis of Cramer's transactional interpretation (TI). TI is a general interpretation, not specifically designed for EPR.

I don't understand your issue with the absence of predictions, as we are discussing various possible interpretations of the same formalism. AFAIK no extant QM interpretation is falsifiable, so why should you ask this of a hypothetical superdeterministic one?
 
  • #41
ueit said:
in addition to [stimulation from the detector] an extrapolation mechanism is required to explain the delayed-choice experiments: the source must adjust its emission to the future state of the detector. Such a mechanism is known in GR [..] the Earth accelerates towards the future position of the Sun
Being a GR person, it took me a long time to figure out what you are talking about. For the record, it is not a GR effect; it is an SR effect that is best exhibited in classical EM. It is well known that for constant motion the (retarded) electric field lines of a charged particle already point to where the particle is "now" (unless something has happened to the particle in the intervening time). I think if you understood the derivation of this fact better, you might realize what a stretch it is to relate this to backward causation.

ueit said:
The negation of the "freedom" assumption does not come from the superdeterministic part of my idea (the GR-like extrapolation mechanism) but from the emitter-absorber theory at the basis of Cramer's transactional interpretation (TI).
You're conflating two different paths. Backward causality (a la Cramer's transactional interpretation, which, alas, has so far led to little more than crackpottery) is a path that can explain QM even presuming that different elements are completely unconnected in the past (their future interaction produces the correlation), whereas superdeterminism (i.e., anything that hinges completely on elements still being correlated now due to interactions far back at the beginning of the universe) has no need for backward causation. Can I suggest you stick to just one interpretation at a time?
 
Last edited:
  • #42
ueit said:
Sure, in addition to the above mechanism, an extrapolation mechanism is required to explain the delayed-choice experiments: the source must adjust its emission to the future state of the detector. Such a mechanism is known in GR, where a body accelerates toward the future locations of the other bodies. As JesseM pointed out in another discussion on this topic, the mechanism is not perfect, as it works only for uniform and uniformly accelerated motion. Nevertheless, it is good enough that a non-local theory, Newtonian gravity, is used for most practical applications, like spacecraft trajectory calculations and computer simulations of galactic motion.
I think you aren't realizing the gigantic gap between the type of "extrapolation" seen in electromagnetism or GR (where in some circumstances objects will be pulled towards the current position of another object rather than its retarded position) and the type of extrapolation you're imagining, where the particles can predict what the experimenters will later choose to measure. The first just involves extrapolating the position of a single object based on some derivative of its position--velocity, acceleration, whatever--whereas the type of extrapolation you're imagining would require some kind of computation of the nonlinear interactions of a vast number of particles in order to predict their future behavior from their past state. There's really no comparing the two: the existence of the first type of extrapolation can't be used to argue that the second type is at all plausible, and it's completely misleading to call this fantastical second type "GR-like extrapolation".
ueit said:
Gravitationally, the Earth accelerates towards the future positions of the Sun, Mars, Jupiter, and so on. Now, if I take a bunch of asteroids and arrange them in the same way as the particles inside Alice's brain, will you predict that the Earth would simply stop following GR's equations and start being confused about the strange structure near it?
This is the sort of thing I'm talking about when I say you misunderstand the sort of extrapolation that occurs in theories like E&M or GR. In a situation with a large number of gravitating bodies of similar mass, the Earth definitely would not act as if it knew the current position of each body in deciding how to move, because their current positions would depend on their mutual gravitational interactions rather than on a simple extrapolation of each one's velocity or acceleration individually.
ueit said:
In the case of EPR I proposed a clear mechanism that relates the source emission with the detector's state. Their common origin at the Big-Bang is only a necessary condition in this case.
You are treating the Big Bang singularity itself as if it can contain information which is accessible to any object that has the singularity in its past light cone (i.e. the entire universe)? Because for any finite moment after the Big Bang, no matter how tiny the amount of time, there will be events at that moment which are in the past light cone of the moment where the experimenter makes a decision (and thus could deterministically influence the decision) but which are not in the past light cone of the creation of the entangled particles (so they wouldn't have that information available to 'extrapolate' with). If your argument depends on the singularity itself, does that mean you reject the notion that quantum gravity will do away with the infinities of GR, and believe there can be physically real singularities of infinite density where an infinite number of worldlines all converge?
 
Last edited:
  • #43
cesiumfrog said:
Being a GR person, it took me a long time to figure out what you are talking about. For the record, it is not a GR effect; it is an SR effect that is best exhibited in classical EM. It is well known that for constant motion the (retarded) electric field lines of a charged particle already point to where the particle is "now" (unless something has happened to the particle in the intervening time). I think if you understood the derivation of this fact better, you might realize what a stretch it is to relate this to backward causation.

I've chosen GR because in this case the extrapolation is much better than in the case of EM interaction. The source (Aberration and the Speed of Gravity, S. Carlip) is here:

http://arxiv.org/PS_cache/gr-qc/pdf/9909/9909087v2.pdf

Abstract

The observed absence of gravitational aberration requires that “Newtonian” gravity propagate at a speed c_g > 2 × 10^10 c. By evaluating the gravitational effect of an accelerating mass, I show that aberration in general relativity is almost exactly canceled by velocity-dependent interactions, permitting c_g = c. This cancellation is dictated by conservation laws and the quadrupole nature of gravitational radiation.

As this article points out, gravity can be assumed, for all but some extreme situations (like binary pulsars) to propagate instantaneously. This works even for some types of accelerated motion (unlike EM interaction). Therefore we already have an example where a local theory "looks" non-local even if not exactly so.
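The kind of "extrapolation" at issue here can be shown in a few lines (a sketch; the trajectories and step sizes are illustrative, not tied to any real system). Extrapolating from position and velocity alone is exact for uniform motion; adding the acceleration term also handles uniformly accelerated motion, the case discussed for gravity; but once the acceleration itself changes, any finite-order extrapolation is wrong.

```python
# Taylor-extrapolate a 1D trajectory forward by dt, using derivatives
# estimated at the "retarded" time t0 via central finite differences.
def extrapolate(pos, t0, dt, order=2):
    h = 1e-4
    x = pos(t0)
    v = (pos(t0 + h) - pos(t0 - h)) / (2 * h)
    a = (pos(t0 + h) - 2 * x + pos(t0 - h)) / h**2
    est = x + v * dt
    if order >= 2:
        est += 0.5 * a * dt * dt
    return est

uniform = lambda t: 3.0 * t                # constant velocity
accel   = lambda t: 3.0 * t + 1.0 * t * t  # constant acceleration
jerk    = lambda t: t ** 3                 # changing acceleration

t0, dt = 1.0, 1.0
print(abs(extrapolate(uniform, t0, dt, 1) - uniform(t0 + dt)))  # ~0: exact
print(abs(extrapolate(accel,   t0, dt, 2) - accel(t0 + dt)))    # ~0: exact
print(abs(extrapolate(jerk,    t0, dt, 2) - jerk(t0 + dt)))     # ~1: extrapolation fails
```

The question in this thread is whether nature provides anything beyond the second line, and for what classes of motion.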

You're conflating two different paths. Backward causality (a la Cramer's transactional interpretation, which, alas, has so far led to little more than crackpottery) is a path that can explain QM even presuming that different elements are completely unconnected in the past (their future interaction produces the correlation), whereas superdeterminism (i.e., anything that hinges completely on elements still being correlated now due to interactions far back at the beginning of the universe) has no need for backward causation. Can I suggest you stick to just one interpretation at a time?

The only part of TI that I find interesting is the idea that a particle is not emitted at random but only after a prior absorber-emitter contact. I do not entertain the idea of backward causality; in fact, I am trying to replace it with a "normal" causal chain backed up by some "extrapolation" effects.

I see no reason to believe that regions "completely unconnected in the past" really exist. It depends on the assumptions one makes about the nature of the big bang. So I am not sure that thinking about how such regions may interact is meaningful.
 
  • #44
ueit said:
As this article points out, gravity can be assumed, for all but some extreme situations (like binary pulsars) to propagate instantaneously. This works even for some types of accelerated motion (unlike EM interaction). Therefore we already have an example where a local theory "looks" non-local even if not exactly so.
"All but some extreme situations" is completely wrong--at best the article says that constant acceleration can be extrapolated. In any situation involving mutual gravitational interactions between multiple bodies (where one body does not completely dominate, like the Sun in our solar system), the signature of these interactions is that the acceleration of each body is changing in complicated ways (even the 3-body problem is too complicated for physicists to solve exactly); you couldn't predict how any body would move just by knowing its own position and derivatives of its position like velocity and acceleration. As I pointed out in my last post, your argument about QM assumes a completely un-GR-like form of "extrapolation" in which the extremely complex interactions of all the particles in the experimenters' brains can somehow be predicted by particles before they happen.
 
  • #45
JesseM said:
I think you aren't realizing the gigantic gap between the type of "extrapolation" seen in electromagnetism or GR (where in some circumstances objects will be pulled towards the current position of another object rather than its retarded position) and the type of extrapolation you're imagining, where the particles can predict what the experimenters will later choose to measure. The first just involves extrapolating the position of a single object based on some derivative of its position--velocity, acceleration, whatever--whereas the type of extrapolation you're imagining would require some kind of computation of the nonlinear interactions of a vast number of particles in order to predict their future behavior from their past state. There's really no comparing the two: the existence of the first type of extrapolation can't be used to argue that the second type is at all plausible, and it's completely misleading to call this fantastical second type "GR-like extrapolation".

This is the sort of thing I'm talking about when I say you misunderstand the sort of extrapolation that occurs in theories like E&M or GR. In a situation with a large number of gravitating bodies of similar mass, the Earth definitely would not act as if it knew the current position of each body in deciding how to move, because their current positions would depend on their mutual gravitational interactions rather than on a simple extrapolation of each one's velocity or acceleration individually.

A body accelerates as a result of its interactions with other bodies. If this acceleration is of a type that can be extrapolated, then the evolution of a system with a "vast number of particles" can be perfectly predicted. This might not be so in GR, but I see no reason to force the dynamics of the quantum particles to be exactly like those in GR. One could restrict, for example, the types of motion a particle can have to the "predictable" ones.

You are treating the Big Bang singularity itself as if it can contain information which is accessible to any object that has the singularity in its past light cone (i.e. the entire universe)? Because for any finite moment after the Big Bang, no matter how tiny the amount of time, there will be events at that moment which are in the past light cone of the moment where the experimenter makes a decision (and thus could deterministically influence the decision) but which are not in the past light cone of the creation of the entangled particles (so they wouldn't have that information available to 'extrapolate' with). If your argument depends on the singularity itself, does that mean you reject the notion that quantum gravity will do away with the infinities of GR, and believe there can be physically real singularities of infinite density where an infinite number of worldlines all converge?

A singularity means that the theory has failed to describe the studied phenomenon. I see the big bang as a deterministic process in an eternal universe. For example, if there was a big crunch before our big bang, the particles must have been causally connected to one another. This memory was not erased at the big bang.

Anyway, until a quantum theory of gravity is found we can only speculate about this. For the time being I know of no evidence that such disconnected regions exist, and I am not sure how such a situation could be described by our theories. For example, two charged particles in such a situation would not "feel" the EM force, but they would start "sensing" it as they came close to one another. Is such a process described by QM?
 
  • #46
ueit said:
A body accelerates as a result of its interactions with other bodies. If this acceleration is of a type that can be extrapolated, then the evolution of a system with a "vast number of particles" can be perfectly predicted.
You can't extrapolate the motion of any single body in a 3-body problem (or N-body problem) if all you know is the body's own instantaneous position, velocity, and acceleration (or further derivatives of position). As far as I can tell GR's extrapolation is based solely on the body's own movements, there is no semblance whatsoever of extrapolating mutual interactions between bodies over time.

edit: Technically, maybe if you knew the exact values of a near-infinite number of derivatives of position you could predict the motion in an N-body problem, because as long as position as a function of time is a real analytic function (no instantaneous changes in velocity, acceleration, or any other derivative of position), the whole path is equivalent to an infinite Taylor series. However, since situations involving more than two bodies tend to be chaotic, there will be sensitive dependence on initial conditions, so if you use only a finite number of derivatives in the Taylor series, the predicted path will eventually diverge completely from the actual path. Carlip says here that electromagnetism can "extrapolate" constant-velocity motion because "the lowest-order radiation is dipole radiation", and GR can extrapolate some other types of motion, like constant acceleration, because its lowest-order radiation is quadrupole; so you'd probably need some insanely complicated theory whose lowest-order radiation was "trillionpole" or "googolplexpole" or something in order for the theory to extrapolate motion due to mutual interactions in a three-body system for any significant length of time.
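This can be checked numerically (a sketch with made-up masses and orbits, not a model of any real system): integrate a small gravitating system, then try to predict one body's path purely from its own initial position, velocity, and acceleration. The second-order Taylor prediction tracks the true path only briefly and then diverges, even without close encounters.

```python
import math

G = 1.0

def accelerations(pos, masses):
    """Newtonian gravitational acceleration on each body (2D)."""
    n = len(pos)
    acc = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def integrate(pos, vel, masses, dt, steps):
    """Leapfrog (kick-drift-kick); returns the trajectory of body 1."""
    traj = [tuple(pos[1])]
    for _ in range(steps):
        a = accelerations(pos, masses)
        vel = [[v[0] + 0.5 * dt * a[i][0], v[1] + 0.5 * dt * a[i][1]]
               for i, v in enumerate(vel)]
        pos = [[p[0] + dt * vel[i][0], p[1] + dt * vel[i][1]]
               for i, p in enumerate(pos)]
        a = accelerations(pos, masses)
        vel = [[v[0] + 0.5 * dt * a[i][0], v[1] + 0.5 * dt * a[i][1]]
               for i, v in enumerate(vel)]
        traj.append(tuple(pos[1]))
    return traj

# A heavy central body plus two light orbiting bodies (toy values).
masses = [1.0, 1e-3, 1e-3]
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.6]]
vel = [[0.0, 0.0], [0.0, 1.0], [-math.sqrt(1.0 / 1.6), 0.0]]

# Record body 1's initial state before integrating.
x0, v0 = pos[1][:], vel[1][:]
a0 = accelerations(pos, masses)[1]

dt, steps = 0.001, 2000  # integrate to t = 2
traj = integrate(pos, vel, masses, dt, steps)

def taylor_error(t):
    """Distance between the 2nd-order Taylor prediction and the true path."""
    pred = (x0[0] + v0[0] * t + 0.5 * a0[0] * t * t,
            x0[1] + v0[1] * t + 0.5 * a0[1] * t * t)
    true = traj[round(t / dt)]
    return math.hypot(pred[0] - true[0], pred[1] - true[1])

print(taylor_error(0.2))  # small: the prediction still tracks the orbit
print(taylor_error(2.0))  # of order 1: the prediction has completely broken down
```

Even this near-Keplerian toy defeats finite-order extrapolation over an orbital time, which is the point about N-body (let alone brain-scale) dynamics.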
ueit said:
Anyway, until a quantum theory of gravity is found we can only speculate about this. For the time being I know of no evidence that such disconnected regions exist, and I am not sure how such a situation could be described by our theories. For example, two charged particles in such a situation would not "feel" the EM force, but they would start "sensing" it as they came close to one another. Is such a process described by QM?
Both quantum field theory and classical electromagnetism have a light-cone structure: particles can only be affected by things in their past light cone. "Disconnected" regions just mean that the past light cones of events in the two regions don't overlap (again, unless you treat the Big Bang singularity itself as an event which both light cones include).
 
Last edited:
  • #47
ueit said:
I don't understand your issue with the absence of predictions, as we are discussing various possible interpretations of the same formalism. AFAIK no extant QM interpretation is falsifiable, so why should you ask this of a hypothetical superdeterministic one?


There is no hypothetical superdeterministic particle theory, except perhaps in your imagination. Perhaps you would care to actually work out such a theory. Or do you consider that too trivial to bother with?

The problem with such a theory is that it would have a problem explaining why - if everything causally connects back to an earlier point in time - only the relative settings of the separated measurement devices (themselves macroscopic objects) are relevant to the observed outcomes. All other states of all other particles, measurement devices, ensembles, etc. magically cancel out. And yet the individual results are still random! Further, the observed effect is noticed nowhere else.

It is these troubling details that cause me to have an issue with your off-the-cuff remarks.
 