New Quantum Interpretation Poll

In summary: I would be cautious on this point; there may be a slight bit of revisionism going on with this particular idea (not on your part, but in the recent historical paper I am fairly sure you are familiar with).
  • #36
stevendaryl said:
What local interpretation are you talking about? I don't know of one.

MWI is local. Retrocausal/Time Symmetric interpretations (such as Relational BlockWorld) are also local. However, such local ones are not realistic. They sacrifice some key causal element of nature to achieve their results.
 
  • #37
DrChinese said:
The view that Einstein held until his death was that it was not conceivable to him that there could not exist a more complete specification of the quantum system, at least in principle. That was an opinion expressed in EPR (1935).
I recall reading a paper discussing how Einstein was very unsatisfied even with the EPR paper as it was written by Podolsky, though I can't recall the details. An interesting quote from Pauli, suggesting that Einstein was not as adamant about 'determinism' as he was about 'realism', is the following passage taken from a letter from Pauli to Born:
Einstein gave me your manuscript to read; he was not at all annoyed with you, but only said that you were a person who will not listen. This agrees with the impression I have formed myself insofar as I was unable to recognise Einstein whenever you talked about him in either your letter or your manuscript. It seemed to me as if you had erected some dummy Einstein for yourself, which you then knocked down with great pomp. In particular, Einstein does not consider the concept of “determinism” to be as fundamental as it is frequently held to be (as he told me emphatically many times), and he denied energetically that he had ever put up a postulate such as (your letter, para. 3): “the sequence of such conditions must also be objective and real, that is, automatic, machine-like, deterministic.” In the same way, he disputes that he uses as a criterion for the admissibility of a theory the question: “Is it rigorously deterministic?” Einstein’s point of departure is “realistic” rather than “deterministic”.
 
Last edited:
  • #38
bohm2 said:
I recall reading a paper discussing how Einstein wasn't happy even with the EPR paper. I can't recall the details. An interesting quote from Pauli suggesting that Einstein was not as adamant about 'determinism' as he was about 'realism' is the following passage taken from a letter from Pauli to Born:

Einstein gave me your manuscript to read; he was not at all annoyed with you, but only said that you were a person who will not listen. [...] Einstein’s point of departure is “realistic” rather than “deterministic”.

Good quote. And I think that captures the spirit both of Einstein's position on realism ("the moon is there even when no one is looking") and of the point of the poll question: no matter how you approach it, Einstein had to be wrong on one point, but which point he was wrong on may look different to each person for a variety of reasons. That could be because of one particular statement in the EPR paper, because of their own particular interpretation, or perhaps because of some other statement of Einstein's. I don't believe that Einstein's views on all 3 of the below can be correct:

i. No spooky action at a distance.
ii. Moon is there even when no one is looking.
iii. QM is not complete.
 
  • #39
Where's Euan Squires' interpretation?! Gosh. Had they asked me the question, it would be listed in the results.
 
  • #40
kith said:
Well, we have local interpretations of QM and realistic ones. In local interpretations, we have weirdness due to non-realism and in realistic ones we have weirdness due to non-locality. It seems to me that you want to get rid of both kinds of weirdness which isn't possible for any underlying theory because of Bell's theorem.
What would you say about this interpretation
http://arxiv.org/abs/1112.2034 [to appear in Int. J. Quantum Inf.]
which interpolates between local and realistic interpretations, and in a sense is both local and realistic (or neither)?
 
  • #41
vanhees71 said:
but as I said above, when you have a probabilistic theory, you have to use an ensemble to prove its predictions.

I actually hold to the Ensemble interpretation, but I am not so sure that's true. There are a number of foundations for probability theory besides ensembles, such as Kolmogorov's axioms, that can also be used. It's just a lot more abstract, whereas visualizing an ensemble is much more concrete.
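For concreteness, Kolmogorov's axioms fit on one line (the standard measure-theoretic statement; note that no ensemble appears anywhere):
[tex]P(E) \geq 0, \qquad P(\Omega) = 1, \qquad P\left(\bigcup_{i} E_i\right) = \sum_{i} P(E_i) \;\text{ for pairwise disjoint } E_i,[/tex]
with the events E taken from a σ-algebra on the sample space Ω. Relative frequencies then connect to the axioms only through the law of large numbers, which is a theorem rather than a postulate.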

Thanks
Bill
 
Last edited:
  • #42
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

(I) Relativistic local quantum field theory by construction doesn't admit spooky (inter)actions at a distance due to the microcausality property of these theories. As any quantum theory it admits long-range correlations, but these have nothing to do with interactions. The EPR problems with causality only arise when you believe in an instantaneous collapse of the quantum state, but that's an unnecessary addendum to the metaphysics by the followers of (some flavors of) the Copenhagen or Princeton interpretation. With the minimal statistical interpretation there are no such problems, and that's all you need to apply QT to the real world.

(II) Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to overwhelming accuracy. I'm pretty sure that you'll never be able to detect quantum behavior in big systems like the moon.

(III) Nobody knows whether QT is complete or not as long as it is not refuted by some reproducible observation. So far, all observations are compatible with the predictions of QT.

Now it may well be that QT is complete. Then the behavior of objects in the real world is only predictable in the sense of probabilities, and nature is inherently indeterministic. Then it is really impossible to prepare a particle such that both position and momentum are determined, and you can only associate a (pure or mixed) quantum state, determined by some preparation procedure (in the simplest case you have a macroscopic system and leave it alone for a sufficiently long time, so that it reaches thermal equilibrium with the corresponding Stat. Op. [itex]\hat{R}=\exp(-\beta \hat{H})/Z[/itex] with [itex]Z=\mathrm{Tr}\,\exp(-\beta \hat{H})[/itex]). The only thing you can say about the system is the probability to find given values of an observable when you measure it, and this prediction you can only verify by repeating the experiment often enough, obtaining the probabilities as limits of the relative rates at which the various outcomes of the measured quantity occur.

An observable is determined if the system is prepared in the pure state represented by an eigenstate of the operator that corresponds to this observable. This too has only a probabilistic meaning: it says that you expect with probability 1 that the observable takes the corresponding eigenvalue whenever you measure it on a system so prepared (ideal exact measurements assumed for the sake of the theoretical argument). And this prediction too you can only check experimentally by measuring a large enough ensemble, to make sure that you really get the one and only outcome, i.e., with 100% probability.
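As a minimal numerical sketch of this (the two-level Hamiltonian and the inverse temperature are made up purely for illustration), the equilibrium statistical operator and the Born-rule probabilities can be written down directly:
[code]
import numpy as np
from scipy.linalg import expm

# Hypothetical two-level Hamiltonian (energies 0 and 1 in arbitrary units)
H = np.diag([0.0, 1.0])
beta = 2.0  # inverse temperature, also made up

# Equilibrium statistical operator R = exp(-beta H) / Z
R = expm(-beta * H)
Z = np.trace(R)
R /= Z

# Born rule: the probability of finding energy E_n is <n|R|n>
probs = np.real(np.diag(R))
print(probs)  # approx. [0.88, 0.12]
[/code]
The point of the MSI is that the numbers in probs have no operational meaning for a single system; checking them means repeating the preparation and the energy measurement on a large ensemble and comparing relative frequencies.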

Thus quantum theory is complete if nature really is inherently indeterministic, as predicted by quantum theory. If not, it's incomplete, and one would have to find a better theory which includes quantum theory as a limiting, approximately valid description. If there is such a more comprehensive theory that is deterministic, then according to the violation of Bell's inequality (while QT gives the correct probabilistic predictions also in these most "quantic" cases!) it must be a nonlocal theory (if it's relativistic, that means necessarily nonlocal in both space and time). So far nobody has come up with a nonlocal theory which is consistent with all observations.

On the other hand, history teaches us that physical theories usually turn out to be incomplete, and thus I guess that QT is also incomplete; but whether we will find something "more complete" is not at all clear yet!
 
  • #43
vanhees71 said:
Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to overwhelming accuracy. I'm pretty sure that you'll never be able to detect quantum behavior in big systems like the moon.

I believe the moon is there when no one is looking, and that decoherence solves the measurement problem. But to be sure, decoherence only makes it look like a 'classical' object for all practical purposes, in that no experiment can determine otherwise. For it to be considered truly classical you need some interpretive framework like decoherent histories.

Thanks
Bill
 
  • #44
vanhees71 said:
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

(I) Relativistic local quantum field theory by construction doesn't admit spooky (inter)actions at a distance due to the microcausality property of these theories. As any quantum theory it admits long-range correlations, but these have nothing to do with interactions. The EPR problems with causality only arise when you believe in an instantaneous collapse of the quantum state, but that's an unnecessary addendum to the metaphysics by the followers of (some flavors of) the Copenhagen or Princeton interpretation. With the minimal statistical interpretation there are no such problems, and that's all you need to apply QT to the real world. [..]
vanhees, I think this isn't the first time I've asked this, but could you provide a reference for the above?
According to DrChinese (and he has defended this view rather successfully for years on his website and on this forum), Einstein was wrong about QM on at least one point, so that what you say is impossible. I would like to understand your side of the argument. Indeed, it is unclear to me what you make of Bell's theorem. I did find your post https://www.physicsforums.com/showthread.php?p=4020550 but there you say that Bell proposed hidden observables, which may have been a slip of the pen, and it is probably unrelated to the real point of how this could work according to you.

thanks,
Harald
 
Last edited:
  • #45
vanhees71 said:
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

...

(II) Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to overwhelming accuracy. I'm pretty sure that you'll never be able to detect quantum behavior in big systems like the moon.

...

On the other hand, history teaches us that physical theories usually turn out to be incomplete, and thus I guess that QT is also incomplete; but whether we will find something "more complete" is not at all clear yet!

Although I don't agree with the 4th possibility, at least a third of the respondents in the survey did! :smile:

Regarding II: This statement is definitely a metaphor for EPR realism, and should not be interpreted so literally. Einstein said: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

The existence of quantum objects when not observed is not being denied by those who advocate non-realism. Instead, as a "non-realist", I would say that when a particle is in an eigenstate of position, it is not in any eigenstate of momentum. I would further say that the position eigenvalue was determined as part of a future measurement context.

As to the "history teaches us" argument: this really isn't an argument at all. And certainly the experimental evidence is dramatically pointing the other way.
 
  • #46
You are right; as I have stressed several times, there is not the slightest evidence against quantum theory yet, and as long as there is none, there's no reason to think that quantum theory is incomplete.

The issue with an electron is of course more subtle. In general it's clear that, according to quantum theory, we cannot even be sure that an electron is present if we haven't somehow prepared one and/or measured one of its properties.

Of course, there are no normalizable position or momentum eigenstates, because these observables have a continuous spectrum. The content of the Heisenberg-Robertson uncertainty relation is this: for any (pure or mixed) state one can prepare a particle in, the standard deviations of these quantities obey
[tex]\sigma(x) \sigma(p) \geq \hbar/2.[/tex]
If I decide to prepare the particle with a small uncertainty in position in some direction, I necessarily have to live with a large uncertainty of momentum and vice versa.

According to QT, it doesn't make sense to say that the position or momentum of a particle is determined. We can only know probabilities for these quantities, and these probabilities can only be measured by preparing many particles in the same way and measuring the position or momentum of each. So you can make one experiment, measuring the positions of the particles in an ensemble to get the probability distribution for position (and compare it with the predictions of QT), and then another experiment on an ensemble of equally prepared particles to measure the momentum distribution. The standard deviations of these distributions fulfill the Heisenberg-Robertson uncertainty relation.
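A minimal Monte Carlo sketch of exactly this two-ensemble procedure, assuming for simplicity a Gaussian (minimum-uncertainty) wavepacket so that both distributions are known in closed form:
[code]
import numpy as np

hbar = 1.0
sigma_x = 0.5                   # chosen position width of the Gaussian packet
sigma_p = hbar / (2 * sigma_x)  # corresponding minimum-uncertainty momentum width

rng = np.random.default_rng(42)
N = 100_000  # ensemble size

# One ensemble for the position measurements, an independent one for momentum
x_samples = rng.normal(0.0, sigma_x, N)
p_samples = rng.normal(0.0, sigma_p, N)

product = x_samples.std() * p_samples.std()
print(product, ">=", hbar / 2)  # ~0.25 >= 0.25: the bound is saturated
[/code]
For any non-Gaussian state the sampled product comes out strictly larger than [itex]\hbar/2[/itex].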

It's not possible to measure both position and momentum of the very same particle at once. One can show that measuring a particle's position with high accuracy necessarily disturbs the particle's momentum to a large extent, and vice versa. You can only decide to make a "weak measurement", i.e., measure the position with a lower accuracy (i.e., larger systematic error) [itex]\epsilon(x)[/itex], trading this lower accuracy for a somewhat smaller disturbance [itex]\eta(p)[/itex] of the momentum, and vice versa.
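For reference, Ozawa's universally valid error-disturbance relation (the one tested in the neutron experiment cited below) reads
[tex]\epsilon(A)\,\eta(B) + \epsilon(A)\,\sigma(B) + \sigma(A)\,\eta(B) \geq \frac{1}{2}\left|\langle [\hat{A},\hat{B}] \rangle\right|,[/tex]
so the naive product bound [itex]\epsilon(A)\,\eta(B) \geq \frac{1}{2}|\langle [\hat{A},\hat{B}] \rangle|[/itex] can be violated, while the full left-hand side cannot.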

It is very important to distinguish this noise-disturbance relation from the standard-deviation relation mentioned above. Fascinating experiments on this question have been done recently, making it into the (semi-)popular press. The simplest example concerns measuring spin components of neutrons:

Jacqueline Erhart, Stephan Sponar, Georg Sulyok, Gerald Badurek, Masanao Ozawa and Yuji Hasegawa, "Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements", Nature Physics 8 (March 2012), DOI: 10.1038/NPHYS2194

Unfortunately the paper explains the theory behind this experiment, which if you ask me should make it into any modern textbook, in a very complicated way. Perhaps I'll open a thread on this later, because I've just gotten into this fascinating subject. I think it can be discussed at the level of a quantum mechanics 1 lecture. Unfortunately precisely this issue is mixed up in many (even modern) textbooks, and this goes back to Heisenberg's original publication on the uncertainty relation. Ironically, Bohr pointed out this mistake in interpretation right away. The wrong interpretation, however, made it into the textbooks :-(.
 
  • #47
Nice paper.

"Finally, looking back, we regret not to have included the" shut up and calculate "interpretation ... in our poll."

Yes, a popular interpretation...

A response rate of 94% is high, but 33 respondents are not really enough to draw any statistically firm conclusions.

Another bias could be that the title of the conference, "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation), might attract slightly more sheep than goats, more philosophically inclined researchers so to speak?

Just for fun, I checked the graph for the Copenhagen Interpretation in the Google Ngram Viewer up to 2008. The Copenhagen Interpretation seems to peak (in books) around 1995.
 
Last edited:
  • #48
Duplex said:
Nice paper.
Another bias could be that the title of the conference, "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation), might attract slightly more sheep than goats, more philosophically inclined researchers so to speak?

That was precisely my suspicion. The paper cited in my last posting is an example of the way people in this community can make simple things pretty complicated. That's a pity, because this is an experiment which is understandable at the level of an undergraduate introductory quantum-theory course. The formalism is so simple in this case (all that's required is the two-dimensional Hilbert space for spin-1/2 measurements) that one can do all of it as a miniproject for students, and you learn a lot from it. I'll open another thread on this in a moment.

The real challenge is to understand the difference between the standard deviations fulfilling the Heisenberg-Robertson-Schrödinger uncertainty relation and the noise-disturbance uncertainty relation of Ozawa. The experiment nicely demonstrates (to high accuracy!) that the naive assertion is violated: you cannot simply reinterpret the Robertson uncertainty relation as a relation between the measurement accuracy ("systematic error") of one observable A and the perturbation ("disturbance") of another observable B that is not compatible with the first. That was Heisenberg's interpretation in his first paper; Bohr corrected him immediately afterwards. If I remember right, this is treated very comprehensively in one of the volumes of

Mehra, Rechenberg, The Historical Development of Quantum Mechanics.
 
  • #49
Duplex said:
Another bias could be that the title of the conference, "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation), might attract slightly more sheep than goats, more philosophically inclined researchers so to speak?

Just for fun, I checked the graph for the Copenhagen Interpretation in the Google Ngram Viewer up to 2008. The Copenhagen Interpretation seems to peak (in books) around 1995.

Pretty impressive group of names there, not sheep by any means. I think you are off the mark on that. But this is certainly not a representative sample either, though I doubt many people would really care whether their preferred interpretation was popular or not.

Copenhagen is a lot of things to a lot of people, as I think has been pointed out already. There has been a proliferation of "new" interpretations (or at least names of interpretations) recently. And yet, nothing has really captured much of anyone's imagination either.

"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
 
  • #50
DrChinese said:
Pretty impressive group of names there, not sheep by any means. I think you are off the mark on that.

Agree. I expressed myself a little unclearly. What Zeilinger and several others on the list have contributed to physics has my respect and admiration.
DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:

Agree. At least some simple math in the evening, I think…

“Counting sheep is a mental exercise used in some cultures as a means of lulling oneself to sleep.”
http://en.wikipedia.org/wiki/Counting_sheep
 
  • #51
@bohm2
Thanks for the link. Papers like this can be fun to read, but I don't think they amount to much.

DrChinese said:
I don't believe that Einstein's views on all 3 of the below can be correct:
i. No spooky action at a distance.
ii. Moon is there even when no one is looking.
iii. QM is not complete.
I believe they can. Unfortunately, there's no way to determine which view is correct.

DrChinese said:
If Einstein had lived to learn of it, I am quite certain he would acknowledge that if there is a more complete specification of the system possible, that in fact there must be influences which are not bounded by c.
I don't think that's what he would conclude from experimental violations of Bell inequalities. I think he would conclude that Bell's LHV (local hidden variable) formulation is not viable. Why it isn't viable remains an open question in physics.

DrChinese said:
Which would in turn mean that relativity requires tuning.
So far, there's no way to determine whether relativity 'requires tuning'. Make certain assumptions and it requires tuning; otherwise, no. As regards practical application, both relativity and QM seem to work just fine.

DrChinese said:
So either way, one of Einstein's fundamental beliefs must be considered incorrect.
His belief that QM is an incomplete description of the deep reality seems to be quite correct. His beliefs that nature is exclusively local and that an LHV theory of quantum entanglement is possible remain open questions.

Is Bell's locality condition (re quantum entanglement preps) the only way that locality can be modeled? Open question. Are there "influences which are not bounded by c"? Open question.

DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
 
  • #52
nanosiborg said:
@bohm2
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
Yes, and physics is the attempt to describe objective, reproducible facts about our observations of phenomena as precisely as possible. The question of why this works so well, and in relatively simple mathematical terms, or even why nature behaves as we observe her to, is not a matter of physics (or any natural science) but of philosophy or even religion.

That's why I'm a follower of the minimal statistical interpretation (MSI): It uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble of interpretations with a collapse (which, I think, is the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be clear what the violation of Bell's inequality means when interpreted within the MSI. Take as an example an Aspect-Zeilinger like "teleportation" experiment with entangled photons, and let's analyze it in terms of the MSI.

Within the MSI the state is described by a statistical operator (the mathematical level of understanding) and related to the real world (the physics level of understanding, dealing with real objects like photons, crystals, lasers, polarization filters, and whatever else the experimental quantum opticians have in their labs) as an equivalence class of preparation procedures appropriate to prepare the system in question (with high enough accuracy) in this state.

Of course, a given preparation procedure has to be checked to see that it really produces this state. According to the MSI this means that I have to be able to reproduce the procedure to high enough accuracy that I can prepare as many systems as I like in this state, independently of each other, creating a large enough ensemble to verify the probabilistic claim: that each system in the ensemble, through this preparation procedure, behaves statistically as described by this state (at least up to the accuracy reachable by the measurement procedure used).

In the Zeilinger experiment, the preparation step produces a two-photon Fock state via parametric down conversion, by shooting a laser beam at a birefringent crystal and then leaving the photon pair alone (i.e., there must be no interactions of either of the photons with anything around, such that we can be sure that the pair stays in this very state). In the simplest case the photon pair is prepared in a helicity-0 state, i.e., the polarization part is described by the pure state
[tex]|\Psi \rangle=\frac{1}{\sqrt{2}}(|HV \rangle-|VH \rangle).[/tex]
The single-photon polarization states are then given by the corresponding partial traces over the other photon and turn out to be the maximum-entropy statistical operators
[tex]\hat{R}_A=\hat{R}_B=\frac{1}{2} (|H \rangle \langle H|+|V \rangle \langle V|).[/tex]
Thus the single photons are unpolarized (i.e., an ensemble behaves like an unpolarized beam of light when one takes the appropriate average over many single-photon events). In terms of information theory, the single-photon polarization is maximally undetermined (maximal von Neumann entropy).
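A minimal sketch to make the partial trace concrete (the basis ordering |HH>, |HV>, |VH>, |VV> is my assumption here):
[code]
import numpy as np

# Two-photon polarization state (|HV> - |VH>)/sqrt(2), basis |HH>,|HV>,|VH>,|VV>
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

# Partial trace over photon B: view rho as rho[a, b, a', b'] and sum over b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)  # [[0.5, 0], [0, 0.5]]: an unpolarized single-photon ensemble

# Its von Neumann entropy is maximal: ln 2
evals = np.linalg.eigvalsh(rho_A)
print(-np.sum(evals * np.log(evals)))  # ~0.693
[/code]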

In principle, it's possible to wait a very long time and then to perform polarization analyses at very distant places. Then Alice and Bob can do their measurements in any chronological order. E.g., Alice measures her photon first and Bob after, and they can do this at "spacelike distances", i.e., such that a causal effect of Alice's measurement on Bob's photon could only occur if there were faster-than-light signal propagation. They can even do their measurements at the same time, so that one would need signal propagation at an arbitrarily large speed for one measurement to have a causal effect on the other.

It's well known that the prediction of quantum theory is fulfilled to an overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon in horizontal (vertical) polarization, Bob finds his in vertical (horizontal) polarization.

Now it is a matter of interpretation how you reach conclusions about "faster-than-light signal propagation" (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs to prove the predicted one-to-one correlation. At the same time it can be verified that the single-photon ensembles at Alice's and Bob's places behave as unpolarized beams of light, i.e., both measure (on average!) horizontal polarization in 50% of the cases and vertical in the other 50%. Subsequently they can match their measurement protocols and verify the one-to-one correlation.

No FTLSP has been necessary to explain this one-to-one correlation, since it is a (purely statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary as an explanation for the outcome. According to standard QED the interaction of each photon with the polarization filters and the detector at each place is a local one, and one measurement of a photon cannot influence the measurement of the other photon. Within the MSI one doesn't need anything that violates this very successful assumption, on which all our (theoretical) knowledge of elementary particles and photons (summarized in the standard model of elementary particles) is based: at the very foundations of relativistic QFT and the definition of the S matrix we use the microcausality + locality assumption. So there is no need (yet) to give up these very successful assumptions.

Now if you adhere to a collapse interpretation a la (some flavors of) the Copenhagen interpretation (CI), you believe that at the moment Alice's detector has registered her photon as horizontally polarized, the two-photon state must instantaneously collapse to the new pure state [itex]|HV \rangle[/itex]. This happens in 50% of all cases, and then of course Bob, who detects his photon after Alice (but with the detection events supposed to be separated by a spacelike distance in Minkowski space), must necessarily find his photon in the vertical polarization state. Concerning the outcome of the experiment, this interpretation is thus no different from the MSI, but it of course causes serious problems with the locally causal foundations of relativistic QFT. If the collapse of the state were a physical process acting on the single photon pair, there would have to be FTLSP, and since the detection events of the photons are spacelike separated, an observer in an appropriate reference frame could claim that Bob's measurement was before Alice's, so that the causal sequence would be reversed: from his point of view, Bob's measurement caused the instantaneous collapse before Alice could detect her photon. This, however, would mean that the very foundation of all physics is violated, namely the causality principle, without which there is no sense in doing physics at all.

That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of equally but independently prepared systems, and not a description of any single system within such an ensemble. Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a nonlocal deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
 
  • #53
vanhees71 said:
That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of equally but independently prepared systems, and not a description of any single system within such an ensemble.

Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus and the observed system, with what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.

Thanks
Bill
 
  • #54
vanhees71 said:
[ ... snip ]
That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of equally but independently prepared systems, and not a description of any single system within such an ensemble.
Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a nonlocal deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
Agree with this. MSI is sufficient, and whether through indifference or choice it seems to be the standard way of thinking about this.

It might be in some sense interesting or entertaining that so and so likes a certain interpretation, but it isn't important.

Nice post vanhees. I snipped all but the last part of it only for conciseness and convenience.
 
  • #55
bhobba said:
That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?
 
  • #56
nanosiborg said:
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?

Yes it does. In interpretations that include decoherence (e.g. Decoherent Histories), the probabilities of the outcomes of observations predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but do not become real until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that if you have a system state you can calculate the probabilities of the outcomes of an observation, but it doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.

Thanks
Bill
 
Last edited:
  • #57
vanhees71 said:
[..] I'm a follower of the minimal statistical interpretation (MSI): It uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble of interpretations with a collapse (which, I think, is the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be clear what the violation of Bell's inequality means when interpreted within the MSI. Take as an example an Aspect-Zeilinger like "teleportation" experiment with entangled photons, and let's analyze it in terms of the MSI.
[..]
It's well known that the prediction of quantum theory is fulfilled to an overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon in horizontal (vertical) polarization, Bob finds his in vertical (horizontal) polarization.

Now it is a matter of interpretation how you reach conclusions about "faster-than-light signal propagation" (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs to prove the predicted one-to-one correlation. [..] No FTLSP has been necessary to explain this one-to-one correlation, since it is a (purely statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary as an explanation for the outcome of the measurement. [..]
Thanks for the more precise clarification.
I would still like to understand how that can work quantitatively. If you don't mind, please comment on "Herbert's proof" as elaborated in an old thread (which is still open): https://www.physicsforums.com/showthread.php?t=589134
Your contribution will be appreciated! :smile:
 
  • #58
bhobba said:
Yes it does. In interpretations that include decoherence (e.g. Decoherent Histories), the probabilities of the outcomes of observations predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but do not become real until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that if you have a system state you can calculate the probabilities of the outcomes of an observation, but it doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.

Thanks
Bill
Thanks. I don't think K-S is a problem, since the MSI is about not speculating about what exists independent of observation. Though there's every reason to believe that what's there prior to observation is relevant.
 
Last edited:
  • #59
bhobba said:
Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus and the observed system, with what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.

Thanks
Bill

The KS theorem states that it doesn't make sense to assume that compatible observables have certain values if the system is not prepared in a common eigenstate of the operators representing these observables. I don't see where this can be a problem for the MSI, which states precisely that such compatible observables only have determined values when the system is prepared in a common eigenstate.

Could you point me to the problems Ballentine has stated about the KS theorem in the context of the MSI? In his book "Quantum Mechanics: A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.
 
  • #60
DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
But it's a hollow victory, as nicely (and somewhat surprisingly) put by Fuchs:
The usual game of interpretation is that an interpretation is always something you add to the preexisting, universally recognized quantum theory. What has been lost sight of is that physics as a subject of thought is a dynamic interplay between storytelling and equation writing. Neither one stands alone, not even at the end of the day. But which has the more fatherly role? If you ask me, it’s the storytelling. Bryce DeWitt once said, “We use mathematics in physics so that we won’t have to think.” In those cases when we need to think, we have to go back to the plot of the story and ask whether each proposed twist and turn really fits into it. An interpretation is powerful if it gives guidance, and I would say the very best interpretation is the one whose story is so powerful it gives rise to the mathematical formalism itself (the part where nonthinking can take over). The "interpretation" should come first; the mathematics (i.e., the pre-existing, universally recognized thing everyone thought they were talking about before an interpretation) should be secondary.
Interview with a Quantum Bayesian
https://www.physicsforums.com/showthread.php?p=4177910&highlight=fuchs#post4177910
 
  • #61
vanhees71 said:
The KS theorem states that it doesn't make sense to assume that compatible observables have certain values if the system is not prepared in a common eigenstate of the operators representing these observables. I don't see where this can be a problem for the MSI, which states precisely that such compatible observables only have determined values when the system is prepared in a common eigenstate.

Could you point me to the problems Ballentine has stated about the KS theorem in the context of the MSI? In his book "Quantum Mechanics: A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.

The KS theorem is not a problem for the MSI provided you do not assume the system has the value prior to observation. However, that is a very unnatural assumption to drop. When the observation selects an outcome from the ensemble of similarly prepared systems with that outcome associated with it, you would like to think it is revealing a value the system had prior to observation, but you can't do that.

A number of books, such as Hughes' The Structure and Interpretation of Quantum Mechanics, mention the issues the Ensemble interpretation has with KS. I can dig up the page if you really want, but not now; I'm feeling a bit tired. They claim it invalidates the interpretation. It doesn't, but the assumption you need to make to get around it is slightly unnatural, that's all: the ensemble cannot be viewed in terms of classical probabilities, like tossing a die, unless you invoke decoherence.

I did manage to find the following online:
http://books.google.com.au/books?id...&q=ballentine ensemble kochen specker&f=false

I too have Ballentine's book, and it's not in there anywhere. I read it in some early paper he wrote on it but can't recall which one. But since then I think he realized it isn't really an issue if you abandon viewing it like classical probabilities.

Thanks
Bill
 
  • #62
@bhobba: I see. It's obviously not so easy to label one's interpretation of quantum theory with a simple name. I guess there are as many interpretations as there are physicists using QT :biggrin:.

I always understood the MSI such that, of course, the only determined observables of a system, prepared in some pure or mixed state by some well-defined preparation procedure, are those for which the probability, given by Born's rule, to find a certain possible value (which is necessarily an eigenvalue of the observable-representing operator) is 1 (and then necessarily the probability to find all other values is 0). All other observables simply do not have a determined value. Measuring such an undetermined observable gives one of its possible values with a probability given by Born's rule. Measuring it on a single system doesn't tell us much. We can only test the hypothesis that the probabilities are given by Born's rule by preparing an ensemble (in the sense explained in my previous postings) and doing the appropriate statistical analysis. Simply put: an observable takes a certain value if and only if the system is prepared in an appropriate state, where the probability to find this value is 1. The KS theorem tells me that it contradicts quantum theory to assume that the values of undetermined observables are merely unknown but in "reality" have certain values; that would be interpreting quantum-theoretical probabilities as subjective probabilities in the sense of classical statistical physics, and that's incompatible with QT according to KS. As you say, this doesn't pose a problem for the MSI. As I understand the MSI, it is on the contrary most compatible with the KS theorem!
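A tiny sketch of this "determined if and only if the Born probability is 1" criterion for spin 1/2 (eigenvector conventions are the usual ones):
[code]
import numpy as np

psi = np.array([1.0, 0.0])                     # prepared in the sigma_z = +1 eigenstate
sx_plus = np.array([1.0, 1.0]) / np.sqrt(2)    # sigma_x = +1 eigenstate
sx_minus = np.array([1.0, -1.0]) / np.sqrt(2)  # sigma_x = -1 eigenstate

# Born rule: P(a) = |<a|psi>|^2
print(abs(psi @ psi) ** 2)       # 1.0 -> sigma_z is determined
print(abs(sx_plus @ psi) ** 2)   # 0.5 -> sigma_x is undetermined
print(abs(sx_minus @ psi) ** 2)  # 0.5
[/code]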

@bohm2: This quote by Fuchs is indeed interesting. It's easily stated that interpretation should come first, but this is the very first problem you run into if you have to teach an introductory quantum-mechanics lecture. I have no solution for this problem. I think one has to start with a heuristic introduction using wave mechanics (but please not with photons, because these are among the most difficult cases of all; better to use massive particles and nonrelativistic QT to start with, but that's another story). This should only be short (maybe at most 2-3 hours), and then you come immediately to the representation-independent formulation in terms of the abstract Hilbert space (which is mostly Dirac's "transformation theory", one of the three historically first formulations of QT, besides Heisenberg-Born-Jordan's matrix mechanics and de Broglie-Schrödinger wave mechanics). Only when you have established this very abstract way of thinking with some examples (so to say the "quantum kinematics") can you come to a presentation of "interpretation", i.e., you can define what a quantum state really means, which of course depends on your point of view regarding interpretation. I use the MSI. So far I've only given one advanced lecture ("quantum mechanics II") on the subject, and there I had no problems (at least if I believe the quite positive evaluations of the students at the end) with using the MSI and the point of view that a quantum state in the real world means an equivalence class of preparation procedures, represented by a statistical operator whose only meaning is to provide a probabilistic description of the knowledge about the system, given its preparation. It gives only probabilities for the outcomes of measurements of observables, and observables that are undetermined do not have any certain value (see above). Of course, the real challenge is to teach the introductory lecture, and this I have never had to do yet. So I cannot say how I would present it.

Another question I always pose is what this Bayesian interpretation of probabilities amounts to; nobody has answered it in a satisfactory way for me so far: What does this other interpretation mean in practice? If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of the occurrence of the possible outcomes of the observables under consideration?

You have this same problem with classical random experiments such as throwing dice. Knowing nothing about the die, according to the Shannon-Jaynes principle I assign the distribution of maximal entropy to it ("principle of least prejudice"), i.e., an equal probability of 1/6 for each outcome (the occurrence of the numbers 1 to 6 when throwing the die). These are the "prior probabilities", and now I have to check them. How else can I do so than to throw the die many times and count the relative frequencies of the occurrence of the numbers 1-6? Only then can I test the hypothesis about this specific die to a certain statistical accuracy and, if I find significant deviations, update my probability function. I don't see what all this Bayesian mumbling about "different interpretations of probability" beyond the frequentist interpretation is about, if I can never check the probabilities other than in the frequentist way!
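Just to put the two readings side by side, here is a minimal sketch of the Bayesian bookkeeping for the die (a Dirichlet prior is the standard conjugate choice; the loaded-die probabilities are invented for the example):
[code]
import numpy as np

rng = np.random.default_rng(0)
true_probs = np.array([0.25, 0.15, 0.15, 0.15, 0.15, 0.15])  # an unknown, loaded die

alpha = np.ones(6)  # uniform "least prejudice" prior (maximum entropy)
rolls = rng.choice(6, size=10_000, p=true_probs)
counts = np.bincount(rolls, minlength=6)

posterior_mean = (alpha + counts) / (alpha + counts).sum()
frequencies = counts / counts.sum()
print(posterior_mean)  # essentially indistinguishable from ...
print(frequencies)     # ... the plain relative frequencies
[/code]
Once the data dominate, the posterior is pinned to the relative frequencies anyway, which is exactly the point above: the interpretations differ mainly in how they talk about the prior, not in how the probabilities are checked.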
 
  • #63
vanhees71 said:
I don't see what all this Bayesian mumbling about "different interpretations of probability" beyond the frequentist interpretation is about, if I can never check the probabilities other than in the frequentist way!
Yes, if I understand you, I believe Timpson makes a similar criticism of Bayesianism here:
We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian...

The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems.
Quantum Bayesianism: A Study
http://arxiv.org/pdf/0804.2047v1.pdf
 
  • #64
vanhees71 said:
Another question I always pose is what this Bayesian interpretation of probabilities amounts to; nobody has answered it in a satisfactory way for me so far: What does this other interpretation mean in practice? If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of the occurrence of the possible outcomes of the observables under consideration?

In my opinion, that couldn't be more wrong. As I said in another post, I think it mixes up the issue of good scientific practice with what science IS. I agree that reproducibility is extremely important for scientists, but it's not an end in itself, and it's not sufficient. Inevitably, there will be times when it is necessary to make a judgment about the likelihood of something that has never happened before: it's the first time that a particular accelerator has been turned on, the first time that anyone has tried riding in a new type of vehicle, the first time that anyone has performed some surgical procedure. Or in pure science, it's the first time a particular alignment of celestial bodies has occurred. In these cases, we expect that science will work the same in one-off situations as it does in controlled, repeatable situations.

Even if you want to distinguish pure science from applied science, you still have the problem of what counts as a trial, in order to make sense of a frequentist interpretation. The state of the world never repeats. Of course, you can make a judgment that the aspects that vary from one run of an experiment to another are unlikely to be relevant, but what notion of "unlikely" are you using here? It can't be a frequentist notion of "unlikely".

A non-frequentist notion of likelihood is needed to even apply a frequentist notion of likelihood.
 
  • #65
bhobba said:
The KS theorem is not a problem for the MSI provided you do not assume the system has the value prior to observation. However, that is a very unnatural assumption to drop. When the observation selects an outcome from the ensemble of similarly prepared systems with that outcome associated with it, you would like to think it is revealing a value the system had prior to observation, but you can't do that.

It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations. Then the weird aspects of quantum probability (namely, interference terms) go into deciding the probability for a history, and ordinary probabilistic reasoning then applies to relative probabilities: out of all histories in which Alice measures spin-up, in a fraction f of them Bob measures spin-down.

What's unsatisfying to me about this approach is that the histories can't be microscopic histories (describing what happens to individual particles) because of incompatible observables. Instead, they would have to be macroscopic histories, with some kind of coarse-graining. Or alternatively, we could just talk about probability distributions of permanent records, maybe.
 
  • #66
stevendaryl said:
It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations.

Yeah, looks like decoherent histories to me.

I rather like that interpretation, and it would be my favorite except for one thing: it looks suspiciously like defining your way out of trouble. We can't perceive more than one outcome at a time, so let's impose that as a consistency condition, and voila, you have consistent histories. Decoherence automatically enforces this consistency condition, so you have decoherent histories. To me it's ultimately unsatisfying. But so is my interpretation when examined carefully enough; mine is just a bit more overt in stating its assumption, i.e., I state explicitly that the act of observation chooses an outcome that is already present, which you can do because decoherence transforms a pure state into an improper mixed state. Basically I assume the improper mixed state is a proper one, and without further ado the measurement problem is solved. But how does an observation accomplish this feat? Sorry, no answer; nature is just like that.

Thanks
Bill
 
  • #67
Demystifier said:
What would you say about this interpretation
http://arxiv.org/abs/1112.2034 [to appear in Int. J. Quantum Inf.]
which interpolates between local and realistic interpretations, and in a sense is both local and realistic (or neither)?
My answer is a bit late (and maybe this is a bit off topic), but anyway: I don't have the time to read your paper, so just two quick questions about this interpretation:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
 
  • #68
Anybody have a feel for the "support" of Aharonov's Time Symmetric Quantum Mechanics (TSQM) formulation these days? (I'm guessing it's one of the ones that would fall in the "Other" category of the poll.)

It's my understanding that it offers very elegant and mathematically simple explanations for some new-ish experiments where regular QM involves complicated interference effects and rather intractable math. Although one just has to be willing to let go of his/her notions of linear time ;-)
 
  • #69
kith said:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
1) Yes (provided that only particles, not wave functions, count as "physics").
2) His decisions are predetermined, but not in a superdeterministic way. In other words, even though free will is only an illusion, this is not how non-locality is avoided.
 
  • #70
Sounds like a nice gedanken interpretation. Or do you think many people will stick to it in the future? ;-)

Is it formally compatible with all assumptions of Bell's theorem? If yes, how is the experimental violation explained? If no, what assumptions are violated?
 
