New Quantum Interpretation Poll

  • #31
vanhees71 said:
First of all any measurements, be they made on "classical" systems or on "quantum" systems are always on ensembles.

That seems obviously wrong to me. If I look at my watch, I'm looking at a single watch. If I look at a thermometer, I'm looking at a single thermometer. The ensemble view is a way to think about scientific measurements, and the meaning of probability, but it doesn't seem at all correct to say "any measurements...are always on ensembles."

What's the same in the ensemble is first of all the preparation procedure.

I don't agree with that, either. Yes, people like to be able to have repeatable experiments, because they increase the confidence in the quality of the data. But there are times in which the experiment just isn't repeatable. We witness the light of a supernova explosion from a distant star. A total eclipse of the sun allows us to study the corona. A strange combination of factors produces a once-in-a-century "superstorm". The fact that these observations were not carefully prepared in a way that makes them repeatable does not mean that they aren't governed by science. And as a matter of fact, in my mind, the whole point of science is not repeatable experiments, but making useful predictions about things that have not happened yet.

In quantum mechanics it only makes sense to say that a single system is in a certain pure or mixed state when you have established by measurements that a certain preparation procedure leads to the statistical properties of an ensemble of such prepared systems as predicted by quantum mechanics.

Sorry, I just don't have much patience with this point of view. For one thing, what counts as a "preparation"? We set up equipment in a certain way, we look at dials and meters and we note the values we see there. All of that stuff is itself physical interactions. Physics describes human beings as much as it describes atoms and electrons.

Your point of view is mixing up the issue of what is good laboratory practice with what physics describes. Physics applies whether or not good laboratory technique is followed. A theory of physics that only applies to experiments conducted by physicists seems useless and uninteresting to me.
 
  • #32
FPinget said:
Since the poll did not specify when or which of Einstein's views was considered ("In wording our question, we deliberately did not specify what exactly we took Einstein's view of quantum mechanics to be. It is well known, in fact, that Einstein held a variety of views over his lifetime"), it seems to me that this question, in principle, is somehow not quite precise, or perhaps not contextualized? Fernando Pinget


Welcome to PhysicsForums, Fernando!

True, not a precise question. And intentionally so, per the authors. The view that Einstein held until his death was that he found it inconceivable that there could not exist a more complete specification of the quantum system, at least in principle. That was an opinion expressed in EPR (1935). Of course he was not aware of Bell's Theorem, that having arrived nearly a decade after his death.

If Einstein had lived to learn of it, I am quite certain he would acknowledge that, if a more complete specification of the system is possible, then in fact there must be influences which are not bounded by c. Which would in turn mean that relativity requires tuning. So either way, one of Einstein's fundamental beliefs must be considered incorrect.

All of those at the conference would be familiar with the above, and that is the context. Obviously you do not have to judge Einstein on a point where future events (Bell, 1964) changed perspectives, and I am sure many chose NO in light of that.
 
  • #33
stevendaryl said:
Giving up locality is not an answer, it's just choosing a different set of questions.
Well, we have local interpretations of QM and realistic ones. In local interpretations, we have weirdness due to non-realism and in realistic ones we have weirdness due to non-locality. It seems to me that you want to get rid of both kinds of weirdness which isn't possible for any underlying theory because of Bell's theorem. Now for you, what would a fundamental theory have to explain in order to be considered "understood" (as you do with classical mechanics if I get you right)?
 
  • #34
kith said:
Well, we have local interpretations of QM and realistic ones.

What local interpretation are you talking about? I don't know of one.

In local interpretations, we have weirdness due to non-realism and in realistic ones we have weirdness due to non-locality.

I'm not convinced that the Bohm model is really a coherent interpretation. One of the things that doesn't make sense to me about the Bohm model is that, even though particles have definite positions at all times, their dynamics is governed by a "quantum potential" that is pretty mysterious. There is an assumption (as someone has pointed out) made by the Bohm interpretation, which is that the initial distribution of particle positions is made to agree with the square of the Schrödinger wave function, but I don't see how that makes sense in a realistic model. If you only have a single electron, for instance, what sense does it make that it has a "distribution"?

I don't really think of the Bohm model as a serious interpretation of quantum mechanics. I realize that that means that maybe there aren't any serious interpretations.

It seems to me that you want to get rid of both kinds of weirdness which isn't possible for any underlying theory because of Bell's theorem. Now for you, what would a fundamental theory have to explain in order to be considered "understood" (as you do with classical mechanics if I get you right)?

What I really have are telltale signs that an interpretation is bogus. One of them is that the interpretation singles out certain kinds of interactions (measurements, or observations) or certain kinds of systems (conscious minds). Nonlocality is distasteful to me, but I think I would accept a nonlocal theory if it were satisfactory in other ways.
 
  • #35
stevendaryl said:
A theory of physics that only applies to experiments conducted by physicists seems useless and uninteresting to me.
That has always been my view. Which is why I don't much favour the instrumentalist approach. Physics kind of becomes the science of meter readings.
 
  • #36
stevendaryl said:
What local interpretation are you talking about? I don't know of one.

MWI is local. Retrocausal/Time Symmetric interpretations (such as Relational BlockWorld) are also local. However, such local ones are not realistic. They sacrifice some key causal element of nature to achieve their results.
 
  • #37
DrChinese said:
The view that Einstein held until his death was that it was not conceivable to him that there could not exist a more complete specification of the quantum system, at least in principle. That was an opinion expressed in EPR (1935).
I recall reading a paper discussing how Einstein was very unsatisfied even with the EPR paper as it was written by Podolsky, though I can't recall the details. An interesting quote from Pauli, suggesting that Einstein was not as adamant about 'determinism' as he was about 'realism', is the following passage taken from a letter from Pauli to Born:
Einstein gave me your manuscript to read; he was not at all annoyed with you, but only said that you were a person who will not listen. This agrees with the impression I have formed myself insofar as I was unable to recognise Einstein whenever you talked about him in either your letter or your manuscript. It seemed to me as if you had erected some dummy Einstein for yourself, which you then knocked down with great pomp. In particular, Einstein does not consider the concept of “determinism” to be as fundamental as it is frequently held to be (as he told me emphatically many times), and he denied energetically that he had ever put up a postulate such as (your letter, para. 3): “the sequence of such conditions must also be objective and real, that is, automatic, machine-like, deterministic.” In the same way, he disputes that he uses as a criterion for the admissibility of a theory the question: “Is it rigorously deterministic?” Einstein’s point of departure is “realistic” rather than “deterministic”.
 
  • #38
bohm2 said:
I recall reading a paper discussing how Einstein wasn't happy even with the EPR paper. I can't recall the details? An interesting quote from Pauli suggesting that Einstein was not as adamant about 'determinism' as he was about 'realism' is the following passage taken from a letter from Pauli to Born:

Einstein gave me your manuscript to read; he was not at all annoyed with you, but only said that you were a person who will not listen. [...] Einstein’s point of departure is “realistic” rather than “deterministic”.

Good quote. And I think that captures the spirit both of Einstein's position on realism ("the moon is there even when no one is looking") and of the point of the poll question: no matter how you approach it, Einstein had to be wrong on one point; but different people may, for a variety of reasons, differ on which point that was. That could be because of one particular statement in the EPR paper, because of their own particular interpretation, or perhaps because of some other statement of Einstein's. I don't believe that Einstein's views on all 3 of the below can be correct:

i. No spooky action at a distance.
ii. Moon is there even when no one is looking.
iii. QM is not complete.
 
  • #39
Where's Euan Squires' interpretation?! Gosh. Had they asked me the question, that would be listed in the results
 
  • #40
kith said:
Well, we have local interpretations of QM and realistic ones. In local interpretations, we have weirdness due to non-realism and in realistic ones we have weirdness due to non-locality. It seems to me that you want to get rid of both kinds of weirdness which isn't possible for any underlying theory because of Bell's theorem.
What would you say about this interpretation
http://arxiv.org/abs/1112.2034 [to appear in Int. J. Quantum Inf.]
which interpolates between local and realistic interpretations, and in a sense is both local and realistic (or neither).
 
  • #41
vanhees71 said:
but as I said above, when you have a probabilistic theory, you have to use an ensemble to prove its predictions.

I actually hold to the Ensemble interpretation, but I am not so sure that's true. There are a number of foundations for probability theory besides ensembles, such as Kolmogorov's axioms, that can also be used. It's just a lot more abstract, whereas visualizing an ensemble is much more concrete.

Thanks
Bill
 
  • #42
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

(I) Relativistic local quantum field theory by construction doesn't admit spooky (inter)actions at a distance, due to the microcausality property of these theories. Like any quantum theory, it admits long-range correlations, but these have nothing to do with interactions. The EPR problems with causality only arise when you believe in an instantaneous collapse of the quantum state, but that's an unnecessary metaphysical addendum by the followers of (some flavors of) the Copenhagen or Princeton interpretation. With the minimal statistical interpretation there are no such problems, and that's all you need to apply QT to the real world.

(II) Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to an overwhelming accuracy. I'm pretty sure that you'll never be able to detect any quantum behavior of big systems like the moon.
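As a toy numerical illustration of this (my own sketch, not a calculation about the moon): if each scattered photon's environment states for the two "branches" overlap by a factor c slightly below 1, the system's coherence (interference) term is suppressed like c**N after N scatterings, which is effectively zero for macroscopic N. The value c = 0.999 is an arbitrary illustrative choice.

```python
# Toy decoherence scaling (illustrative, hypothetical numbers): each scattered
# photon reduces the off-diagonal "interference" term by an overlap factor c,
# so after N scatterings the remaining coherence is c**N.
c = 0.999  # assumed per-photon overlap between the two environment branches
for N in (10, 10_000, 1_000_000):
    print(N, c ** N)  # coherence factor: ~0.99, then ~4.5e-5, then numerically zero
```

The exponential suppression is the point: even a per-photon overlap extremely close to 1 wipes out interference once the number of scattered environment photons is macroscopically large.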

(III) Nobody knows whether QT is complete or not as long as it is not refuted by some reproducible observation. So far, all observations are compatible with the predictions of QT.

Now it may well be that QT is complete. Then the behavior of objects in the real world is only predictable in the sense of probabilities, and nature is inherently indeterministic. Then it is really impossible to prepare a particle such that both position and momentum are determined, and you can only associate a (pure or mixed) quantum state, determined by some preparation procedure (in the simplest case you have a macroscopic system and leave it alone for a sufficiently long time, so that it reaches thermal equilibrium with the corresponding statistical operator \hat{R}=\exp(-\beta \hat{H})/Z with Z=\mathrm{Tr} \exp(-\beta \hat{H})).

The only thing you can say about the system is the probability to find values of an observable when you measure it, and this prediction you can only verify by repeating the experiment often enough, obtaining the probabilities as limits of the relative rates at which the various outcomes of the measured quantity occur. An observable is determined if the system is prepared in the pure state represented by an eigenstate of the operator that corresponds to this observable. Even this has only a probabilistic meaning, i.e., it says that you expect with probability 1 that the observable takes the corresponding eigenvalue whenever you measure it on such a prepared system (ideal exact measurements assumed for the sake of the theoretical argument). And this prediction, too, you can only check experimentally by measuring a large enough ensemble to make sure that you really get one and only one outcome, i.e., with 100% probability.
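As a minimal numerical sketch of the thermal-equilibrium example (my own illustration, with arbitrary values E = 1, beta = 2 and k_B = 1): for a diagonal Hamiltonian, the statistical operator \hat{R}=\exp(-\beta \hat{H})/Z reduces to Boltzmann weights on the energy eigenstates.

```python
import numpy as np

# Sketch: thermal statistical operator R = exp(-beta*H)/Z for a two-level
# system with energy gap E (illustrative, hypothetical values; k_B = 1).
E, beta = 1.0, 2.0
energies = np.array([0.0, E])      # eigenvalues of a diagonal Hamiltonian H
w = np.exp(-beta * energies)       # Boltzmann weights exp(-beta*E_n)
Z = w.sum()                        # partition function Z = Tr exp(-beta*H)
R = np.diag(w / Z)                 # the statistical operator

print(np.diag(R))                  # occupation probabilities of the two levels
print(np.trace(R))                 # a proper state has unit trace
```

The diagonal entries are the probabilities to find the system in each energy eigenstate, which is exactly the "only probabilistic statements" situation described above.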

Thus quantum theory is complete if nature really is inherently indeterministic, as quantum theory predicts. If not, it's incomplete, and one would have to find a better theory which includes quantum theory as a limiting, approximately valid description. If there is such a more comprehensive theory that is deterministic, then, given the violation of Bell's inequality (while QT gives the correct probabilistic predictions even in these most "quantic" cases!), it must be a nonlocal theory (if it's relativistic, that means necessarily nonlocal in both space and time). So far nobody has come up with a nonlocal theory which is consistent with all observations.
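The Bell-inequality violation referred to here can be checked directly from the quantum formalism. A small sketch (the standard textbook CHSH calculation, not specific to this thread): computing the CHSH combination for the spin singlet from the Born rule gives |S| = 2√2, above the bound of 2 that any local hidden-variable model must obey.

```python
import numpy as np

# Sketch: the quantum prediction that violates the CHSH-Bell bound |S| <= 2,
# computed from the spin singlet via the Born rule.
sx = np.array([[0, 1], [1, 0]])            # Pauli matrices
sz = np.array([[1, 0], [0, -1]])
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)  # singlet (|01> - |10>)/sqrt(2)

def E(ta, tb):
    """Spin correlation for measurement directions at angles ta, tb (x-z plane)."""
    A = np.cos(ta) * sz + np.sin(ta) * sx
    B = np.cos(tb) * sz + np.sin(tb) * sx
    return float(psi @ np.kron(A, B) @ psi)  # equals -cos(ta - tb) for the singlet

# Optimal CHSH angle choices
a, ap, b, bp = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp)
print(abs(S))   # 2*sqrt(2) ≈ 2.828 > 2
```

Any local hidden-variable assignment of predetermined outcomes keeps |S| ≤ 2, so the quantum value 2√2 is exactly what forces the "nonlocal or non-realistic" choice discussed in this thread.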

On the other hand, history teaches us that physical theories are usually incomplete, and thus I guess that QT, too, is incomplete; but whether we will find something "more complete" or not is not clear at all yet!
 
  • #43
vanhees71 said:
Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to an overwhelming accuracy. I'm pretty sure that you'll never be able to detect any quantum behavior of big systems like the moon.

I believe the moon is there if no one is looking, and that decoherence solves the measurement problem. But to be sure, decoherence only makes it look like a 'classical' object for all practical purposes, in that no experiment can determine otherwise. For it to be considered classical you need some interpretive framework, like decoherent histories.

Thanks
Bill
 
  • #44
vanhees71 said:
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

(I) Relativistic local quantum field theory by construction doesn't admit spooky (inter)actions at a distance due to the microcausality property of these theories. As any quantum theory it admits long-range correlations, but these have nothing to do with interactions. The EPR problems with causality only arise when you believe in an instantaneous collapse of the quantum state, but that's an unnecessary addendum to the metaphysics by the followers of (some flavors of) the Copenhagen or Princeton interpretation. With the minimal statistical interpretation there are no such problems, and that's all you need to apply QT to the real world. [..]
vanhees, I think that this isn't the first time that I ask this, but could you provide a reference for the above?
According to DrChinese (and he has defended this view rather successfully for years on his website and on this forum), Einstein was wrong about QM on at least one point so that what you say is impossible. I would like to understand your side of the argument. Indeed, it is unclear to me what you hold of Bell's theorem. I did find your post https://www.physicsforums.com/showthread.php?p=4020550 but there you say that Bell proposed hidden observables, which may have been a slip of the pen, and it is probably unrelated to the real point of how this could work according to you.

thanks,
Harald
 
  • #45
vanhees71 said:
DrChinese, you forgot the obvious 4th possibility: Einstein is right on all points!

...

(II) Of course, the moon is there when nobody is looking, because at least the cosmic background radiation is always looking. This "photon gas" alone is sufficient to decohere the moon and make it behave as a classical object to an overwhelming accuracy. I'm pretty sure that you'll never be able to detect any quantum behavior of big systems like the moon.

...

On the other hand, history teaches us that usually physical theories are incomplete, and thus I guess that also QT is incomplete, but whether we find something "more complete" or not is not clear at all yet!

Although I don't agree with the 4th possibility, at least a third of the respondents in the survey did! :smile:

Regarding II: This statement is definitely a metaphor for EPR realism, and should not be interpreted so literally. Einstein said: "I think that a particle must have a separate reality independent of the measurements. That is: an electron has spin, location and so forth even when it is not being measured. I like to think that the moon is there even if I am not looking at it."

The existence of quantum objects - when not observed - is not being denied by those who advocate non-realism. Instead, as a "non-realist", I would say that when a particle is in an eigenstate of position, it is not in any eigenstate of momentum. I would further say that the position eigenvalue was determined as part of a future measurement context.

As to the "history teaches us" argument: this really isn't an argument at all. And certainly the experimental evidence is dramatically pointing the other way.
 
  • #46
You are right; as I stressed several times, there is not the slightest evidence against quantum theory yet, and as long as there is none, there's no reason to think that quantum theory is incomplete.

The issue with an electron is of course more subtle. In general it's clear that, according to quantum theory, we cannot even be sure that the electron is present if we haven't prepared one somehow and/or measured one of its properties.

Of course, there are no proper position or momentum eigenstates, because these observables have a continuous spectrum. Moreover, position and momentum can never both be sharply determined; that's the content of the Heisenberg-Robertson uncertainty relation. For any (pure or mixed) state one can prepare a particle in, the standard deviations of these quantities obey
\sigma(x) \sigma(p) \geq \hbar/2.
If I decide to prepare the particle with a small uncertainty in position in some direction, I necessarily have to live with a large uncertainty in momentum, and vice versa.
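As a quick numerical sanity check of this relation (my own sketch, with ħ set to 1 and an arbitrary width σ = 0.7): a Gaussian wave packet saturates the bound, σ(x)·σ(p) = ħ/2.

```python
import numpy as np

# Sketch: verify sigma(x)*sigma(p) = hbar/2 for a minimum-uncertainty Gaussian
# wave packet (hbar = 1, width sigma = 0.7 are illustrative choices).
hbar, sigma = 1.0, 0.7
x = np.linspace(-20, 20, 40001)
dx = x[1] - x[0]
# normalized Gaussian wavefunction psi(x) = (2*pi*sigma^2)^(-1/4) exp(-x^2/(4 sigma^2))
psi = (2 * np.pi * sigma**2) ** (-0.25) * np.exp(-x**2 / (4 * sigma**2))

prob = psi**2
sx = np.sqrt(np.sum(prob * x**2) * dx)        # <x> = 0, so sigma(x)^2 = <x^2>
dpsi = np.gradient(psi, dx)
sp = np.sqrt(hbar**2 * np.sum(dpsi**2) * dx)  # <p> = 0, sigma(p)^2 = hbar^2 * <psi'|psi'>

print(sx * sp)   # ≈ hbar/2 = 0.5, saturating the Heisenberg-Robertson bound
```

Any non-Gaussian state prepared on the same grid would give a strictly larger product, in line with the inequality.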

According to QT, it doesn't make sense to say that the position or momentum of a particle is determined. We can only know probabilities for these quantities, and these probabilities can only be measured by preparing many particles in the same way and measuring the position or momentum of each. So you can make one experiment, measuring the positions of the particles in the ensemble to get the probability distribution for position (and compare it with the predictions of QT), and then another experiment on an ensemble of equally prepared particles to measure the momentum distribution. The standard deviations of these distributions fulfill the Heisenberg-Robertson uncertainty relation.
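The "probabilities as limits of relative rates" statement can be illustrated with a trivial simulation (my own sketch, using a two-outcome observable with an assumed Born probability p = 0.5, e.g. a spin component on an equal superposition): the relative frequency only settles toward p as the ensemble grows.

```python
import numpy as np

# Sketch: probabilities show up only as limits of relative frequencies over an
# ensemble of identically prepared systems. Here p = 0.5 is an assumed Born
# probability for one of two outcomes (illustrative choice).
rng = np.random.default_rng(0)
p = 0.5
for n in (100, 10_000, 1_000_000):
    outcomes = rng.random(n) < p       # n independent simulated "measurements"
    print(n, outcomes.mean())          # relative rate approaches p as n grows
```

The scatter around p shrinks like 1/sqrt(n), which is why verifying a quantum probability to high accuracy requires a large ensemble.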

It's not possible to measure both position and momentum of the very same particle at once. One can show that measuring a particle's position with high accuracy necessarily disturbs the particle's momentum to a large extent, and vice versa. You can only decide to make a "weak measurement", i.e., measure the position with a lower accuracy (i.e., a larger systematic error) \epsilon(x), trading this lower accuracy for a somewhat lower disturbance \eta(p) of the momentum, and vice versa.

It is very important to distinguish this noise-disturbance relation from the standard-deviation relation mentioned above. Fascinating experiments on this question have been made recently, even reaching the (semi-)popular press. The simplest example concerns measuring spin components of neutrons:

J. Erhart, S. Sponar, G. Sulyok, G. Badurek, M. Ozawa, and Y. Hasegawa, "Experimental demonstration of a universally valid error-disturbance uncertainty relation in spin measurements," Nature Physics 8 (March 2012), DOI: 10.1038/NPHYS2194

Unfortunately the paper explains the theory behind this experiment, which if you ask me should make it into any modern textbook, in a very complicated way. Perhaps I'll open a thread on this later, because I've just gotten into this fascinating subject. I think it can be discussed at the level of a quantum-mechanics 1 lecture. Unfortunately precisely this issue is mixed up in many (even modern) textbooks, and this is due to Heisenberg's original publication on the uncertainty relation. Ironically, Bohr pointed out this mistake in interpretation right away. Unfortunately the wrong interpretation nevertheless made it into the textbooks :-(.
 
  • #47
Nice paper.

"Finally, looking back, we regret not to have included the 'shut up and calculate' interpretation ... in our poll."

Yes, a popular interpretation...

A response rate of 94% is high; however, 33 respondents are not really enough to draw any statistically firm conclusions.

Another bias could be that the title of the conference "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation) can attract slightly more sheep than goats - more philosophical researchers so to speak?

Just for fun, I checked the graph of the Copenhagen Interpretation in Google Ngram Viewer up to 2008. The Copenhagen Interpretation seems to peak (in books) around 1995.
 
  • #48
Duplex said:
Nice paper.
Another bias could be that the title of the conference "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation) can attract slightly more sheep than goats - more philosophical researchers so to speak?

That was precisely my suspicion. The paper cited in my last posting is an example of the way people in this community can make simple things pretty complicated. That's a pity, because this is an experiment which is understandable at the level of an undergraduate introductory quantum-theory course. The formalism is so simple in this case (all that's required is the two-dimensional Hilbert space for spin-1/2 measurements) that one can do all this as a mini-project for the students, and you learn a lot from it. I'll open another thread on this in a moment.

The real challenge is to understand the difference between the standard deviations, which fulfill the Heisenberg-Robertson-Schrödinger uncertainty relation, and Ozawa's noise-disturbance uncertainty relation, which the experiment nicely demonstrates (also experimentally, to high accuracy!). It refutes the naive assertion that one can simply use the Robertson uncertainty relation or, put in terms of the interpretation problem, interpret the Robertson uncertainty relation as a relation for the product of the measurement accuracy ("systematic error") of one observable A and the perturbation ("disturbance") of another observable B that is not compatible with the first. That was Heisenberg's interpretation in his first paper; Bohr corrected him immediately afterwards. If I remember right, this is treated very comprehensively in one of the volumes of

Mehra, Rechenberg, The Historical Development of Quantum Mechanics.
 
  • #49
Duplex said:
Another bias could be that the title of the conference "Quantum Physics and the Nature of Reality" (with financial support from the Templeton Foundation) can attract slightly more sheep than goats - more philosophical researchers so to speak?

Just for fun, I checked the graph of the Copenhagen Interpretation in Google Ngram Viewer up to 2008. The Copenhagen Interpretation seems to culminate (in books) around 1995.

Pretty impressive group of names there, not the sheep by any means. I think you are off the mark on that. But this is certainly not a representative sample either, though I doubt if many people would really care whether their preferred interpretation was more popular or not.

Copenhagen is a lot of things to a lot of people, as I think has been pointed out already. There has been a proliferation of "new" interpretations (or at least names of interpretations) recently. And yet, nothing has really captured much of anyone's imagination either.

"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
 
  • #50
DrChinese said:
Pretty impressive group of names there, not the sheep by any means. I think you are off the mark on that.

Agree. I expressed myself a little unclearly. What Zeilinger and several others on the list have contributed to physics has my respect and admiration.
DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:

Agree. At least some simple math in the evening, I think…

“Counting sheep is a mental exercise used in some cultures as a means of lulling oneself to sleep.”
http://en.wikipedia.org/wiki/Counting_sheep
 
  • #51
@bohm2
Thanks for the link. Papers like this can be fun to read, but I don't think they amount to much.

DrChinese said:
I don't believe that Einstein's views on all 3 of the below can be correct:
i. No spooky action at a distance.
ii. Moon is there even when no one is looking.
iii. QM is not complete.
I believe they can. Unfortunately, there's no way to determine which view is correct.

DrChinese said:
If Einstein had lived to learn of it, I am quite certain he would acknowledge that if there is a more complete specification of the system possible, that in fact there must be influences which are not bounded by c.
I don't think that's what he would conclude from experimental violations of Bell inequalities. I think he would conclude that Bell's lhv formulation is not viable. Why it isn't viable remains an open question in physics.

DrChinese said:
Which would in turn mean that relativity requires tuning.
So far, there's no way to determine if relativity 'requires tuning'. Make certain assumptions and it requires tuning. Otherwise, no. Regarding practical application both relativity and qm seem to work just fine.

DrChinese said:
So either way, one of Einstein's fundamental beliefs must be considered incorrect.
His belief that qm is an incomplete description of the deep reality seems to be quite correct. His beliefs that nature is exclusively local and that an lhv theory of quantum entanglement is possible remain open questions.

Is Bell's locality condition (re quantum entanglement preps) the only way that locality can be modeled? Open question. Are there "influences which are not bounded by c"? Open question.

DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
 
  • #52
nanosiborg said:
@bohm2
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
Yes, and physics is the attempt to describe objective, reproducible facts about our observations of phenomena as precisely as possible. The question of why this works so well, in relatively simple mathematical terms, or even of why nature behaves as we observe her to, is not a matter of physics (or any natural science) but of philosophy or even religion.

That's why I'm a follower of the minimal statistical interpretation (MSI): It uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble with interpretations involving a collapse (which, I think, is the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be clear how the violation of Bell's inequality is to be understood within the MSI. Take as an example an Aspect-Zeilinger-like "teleportation" experiment with entangled photons and let's analyze it in terms of the MSI.

Within the MSI, the state is described by a statistical operator (the mathematical level of understanding) and related to the real world (the physical level of understanding, dealing with real objects like photons, crystals, lasers, polarization filters, and whatever else the experimental quantum opticians have in their labs) as an equivalence class of preparation procedures that are appropriate to prepare the system in question (with high enough accuracy) in this state.

Of course, a given preparation procedure has to be checked to see that it really produces this state. According to the MSI, this means that I have to be able to reproduce the procedure with high enough accuracy that I can prepare as many systems in this state as I like, independently of each other, so as to create a large enough ensemble. On that ensemble I can then verify the probabilistic claim that each system, through this preparation procedure, behaves statistically as described by this state (at least up to the accuracy reachable by the measurement procedure used).

In the Zeilinger experiment, the preparation step produces a two-photon Fock state via parametric down-conversion, by shooting a laser beam onto a birefringent crystal and then leaving the photon pair alone (i.e., there must be no interactions of either photon with anything around, so that we can be sure the pair stays in this very state). In the simplest case the photon pair is in a helicity-0 state, i.e., the polarization part is described by the pure state
|\Psi \rangle=\frac{1}{\sqrt{2}}(|HV \rangle-|VH \rangle).
The single-photon polarization states are then given by the corresponding partial traces over the other photon and turns out to be the maximum-entropy statistical operators
\hat{R}_A=\hat{R}_B=\frac{1}{2} (|H \rangle \langle H|+|V \rangle \langle V|).
Thus the single photons are unpolarized (i.e., an ensemble behaves like an unpolarized beam of light under the appropriate averaging over many single-photon events). In terms of information theory the single-photon polarization is maximally indeterminate (maximal von Neumann entropy).
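This partial trace is easy to check numerically. Here's a small NumPy sketch; the explicit basis vectors and the reshape convention are my own choices, just for illustration:

```python
import numpy as np

# Polarization basis states |H>, |V>
H = np.array([1.0, 0.0])
V = np.array([0.0, 1.0])

# Two-photon state |Psi> = (|HV> - |VH>)/sqrt(2)
psi = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

# Full two-photon density operator
rho = np.outer(psi, psi.conj())

# Reshape so the indices are (A, B, A', B'), then trace out photon B
rho4 = rho.reshape(2, 2, 2, 2)
rho_A = np.trace(rho4, axis1=1, axis2=3)

print(rho_A)  # -> 0.5 * identity: the maximally mixed (unpolarized) state
```

The same trace over photon A gives the identical result, as claimed for \hat{R}_A and \hat{R}_B above.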

In principle it's possible to wait a very long time and then perform the polarization analyses at very distant places. Alice and Bob can then do their measurements in any chronological order. E.g., Alice measures her photon first and Bob afterwards, and they can do this at spacelike separation, i.e., such that a causal effect of Alice's measurement on Bob's photon could only occur if there were faster-than-light signal propagation. They can even do their measurements at the same time, so that one would need signal propagation at arbitrarily large speed for one measurement to causally affect the other.

It's well known that the prediction of quantum theory is fulfilled to overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon horizontally (vertically) polarized, Bob finds his vertically (horizontally) polarized.
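This one-to-one anticorrelation holds for any common analyzer angle, as a quick Born-rule computation shows. The following NumPy sketch uses my own conventions for the polarization vectors:

```python
import numpy as np

def pol(theta):
    """Linear polarization state at angle theta from the H axis."""
    return np.array([np.cos(theta), np.sin(theta)])

# Singlet-like two-photon polarization state (|HV> - |VH>)/sqrt(2)
psi = (np.kron(pol(0), pol(np.pi / 2)) - np.kron(pol(np.pi / 2), pol(0))) / np.sqrt(2)

def p_both_pass(theta_a, theta_b):
    """Born probability that both photons pass filters at these angles."""
    amp = np.kron(pol(theta_a), pol(theta_b)) @ psi
    return amp**2

for theta in (0.0, 0.4, 1.1):
    same = p_both_pass(theta, theta)                # -> 0: never both pass
    crossed = p_both_pass(theta, theta + np.pi / 2) # -> 0.5
    print(f"{theta:.1f}: same={same:.3f}, crossed={crossed:.3f}")
```

The rotational invariance of the state is what makes the perfect anticorrelation independent of the chosen common direction.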

Now it is a matter of interpretation how you conclude anything about faster-than-light signal propagation (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs. At the same time it can be verified that the single-photon ensembles at Alice's and Bob's places behave like an unpolarized beam of light, i.e., both measure (on average!) horizontal polarization in 50% of the cases and vertical in the other 50%. Afterwards they can match their measurement protocols and verify the one-to-one correlation.

No FTLSP has been necessary to explain this correlation, since it is a (purely statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary to explain the outcome. According to standard QED the interaction of each photon with the polarization filters and the detectors is local, and one measurement cannot influence the other. Within the MSI one doesn't need anything that violates this very successful assumption, on which all our theoretical knowledge of elementary particles and photons (summarized in the standard model) is based: at the very foundations of relativistic QFT and the definition of the S-matrix we use the microcausality + locality assumption. So there is no need (yet) to give up these very successful assumptions.
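The purely statistical character of the verification can be illustrated with a toy simulation: sample outcomes for many identically prepared pairs from the quantum joint distribution (same analyzer angle on both sides). Each local record on its own looks like an unpolarized 50/50 coin; only comparing the two protocols reveals the correlation. (The sampling scheme below is just an illustration, not a model of the apparatus.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Quantum joint probabilities at equal analyzer angles for the
# singlet-like state: P(H,V) = P(V,H) = 1/2, P(H,H) = P(V,V) = 0.
n = 100_000
alice = rng.integers(0, 2, size=n)  # 0 = H, 1 = V, each with prob. 1/2
bob = 1 - alice                     # perfectly anticorrelated partner outcome

# Each local record alone looks like an unpolarized beam ...
print(alice.mean())  # ~0.5
print(bob.mean())    # ~0.5
# ... and only matching the two protocols reveals the correlation:
print(np.all(alice + bob == 1))  # True
```

No signal passes between the two records here; the correlation is built into the joint distribution fixed at preparation, which is exactly the MSI reading.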

Now if you adhere to a collapse interpretation à la (some flavors of) the Copenhagen interpretation (CI), you believe that at the moment when Alice's detector registers her photon as horizontally polarized, the two-photon state instantaneously collapses to the new pure state |HV \rangle. This happens in 50% of all cases, and then of course Bob, who detects his photon after Alice (the detection events being separated by a spacelike interval in Minkowski space), must necessarily find his photon vertically polarized. Concerning the outcome of the experiment, this interpretation is thus no different from the MSI, but it causes serious problems for the locally causal foundations of relativistic QFT. If the collapse of the state were a physical process on the single photon pair, there would have to be FTLSP, and since the detection events are spacelike separated, an observer in an appropriate reference frame could claim that Bob's measurement occurred before Alice's, reversing the causal sequence: from his point of view, Bob's measurement caused the instantaneous collapse before Alice could detect her photon. This, however, would mean that the very foundation of all physics is violated, namely the causality principle, without which there is no sense in doing physics at all.

That's why I prefer the MSI and dislike any interpretation invoking an (unnecessary, as we have seen above!) instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of independently but identically prepared systems, not a description of any single system within such an ensemble. Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a nonlocal deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
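For completeness, the size of the Bell violation itself can be computed directly from the Born rule. This sketch evaluates the CHSH combination for the singlet-like polarization state at the standard analyzer angles (0°, 45°, 22.5°, 67.5°); the conventions are mine:

```python
import numpy as np

def pol(theta):
    """Linear polarization state at angle theta from the H axis."""
    return np.array([np.cos(theta), np.sin(theta)])

psi = (np.kron(pol(0), pol(np.pi / 2)) - np.kron(pol(np.pi / 2), pol(0))) / np.sqrt(2)

def E(a, b):
    """Correlation <A B> of +/-1 outcomes for analyzers at angles a and b."""
    total = 0.0
    for sa, ta in ((+1, a), (-1, a + np.pi / 2)):
        for sb, tb in ((+1, b), (-1, b + np.pi / 2)):
            amp = np.kron(pol(ta), pol(tb)) @ psi
            total += sa * sb * amp**2
    return total

deg = np.pi / 180
a, a2, b, b2 = 0.0, 45 * deg, 22.5 * deg, 67.5 * deg
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # -> 2*sqrt(2) ~ 2.828, beyond the local-realist bound of 2
```

The value 2√2 (Tsirelson's bound) is what the experiments confirm, whichever interpretation one attaches to it.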
 
  • #53
vanhees71 said:
That's why I prefer the MSI and dislike any interpretation invoking an (unnecessary, as we have seen above!) instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of independently but identically prepared systems, not a description of any single system within such an ensemble.

Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus with the observed system, what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property just prior to observation, which is much more in line with physical intuition.

Thanks
Bill
 
  • #54
vanhees71 said:
[ ... snip ]
That's why I prefer the MSI and dislike any interpretation invoking an (unnecessary, as we have seen above!) instantaneous collapse of the state. Of course, the MSI considers QT a statistical description of ensembles of independently but identically prepared systems, not a description of any single system within such an ensemble.
Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a nonlocal deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
Agree with this. MSI is sufficient, and whether through indifference or choice it seems to be the standard way of thinking about this.

It might be in some sense interesting or entertaining that so and so likes a certain interpretation, but it isn't important.

Nice post vanhees. I snipped all but the last part of it only for conciseness and convenience.
 
  • #55
bhobba said:
That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property just prior to observation, which is much more in line with physical intuition.
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?
 
  • #56
nanosiborg said:
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?

Yes it does. In interpretations that include decoherence (e.g., decoherent histories) the probabilities of measurement outcomes predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but they do not become actual until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that given a system state you can calculate the probabilities of the outcomes of an observation, but this doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.
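The role decoherence plays here can be seen in a toy model: couple a qubit in an equal superposition to an environment whose two "record" states have an adjustable overlap, then trace the environment out. (The model and parameter names are mine, purely for illustration.)

```python
import numpy as np

# Environment "record" states for |0> and |1>; overlap = <e0|e1>.
# overlap -> 0 means the environment fully distinguishes the branches.
overlap = 0.1
e0 = np.array([1.0, 0.0])
e1 = np.array([overlap, np.sqrt(1 - overlap**2)])

q0, q1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Qubit (|0> + |1>)/sqrt(2) entangled with its environment record
state = (np.kron(q0, e0) + np.kron(q1, e1)) / np.sqrt(2)

# Reduced qubit state: trace out the environment
rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
rho_q = np.trace(rho, axis1=1, axis2=3)
print(rho_q)
# Diagonal stays 0.5, 0.5; the off-diagonal (interference) terms shrink
# to overlap/2, i.e. the ensemble behaves like a classical mixture as
# the overlap goes to zero.
```

Once the off-diagonals are gone, assigning the property "0 or 1, we just don't know which" to each member of the ensemble no longer conflicts with any observable interference, which is the point of counting observations as actual only after decoherence.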

Thanks
Bill
 
  • #57
vanhees71 said:
[..] I'm a follower of the minimal statistical interpretation (MSI): it uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble with collapse interpretations (which, I think, is the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be clear how the violation of Bell's inequality is interpreted within the MSI. Take as an example an Aspect-Zeilinger-like "teleportation" experiment with entangled photons and let's analyze it in terms of the MSI.
[..]
It's well known that the prediction of quantum theory is fulfilled to overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon horizontally (vertically) polarized, Bob finds his vertically (horizontally) polarized.

Now it is a matter of interpretation how you conclude anything about faster-than-light signal propagation (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs. [..] No FTLSP has been necessary to explain this correlation, since it is a (purely statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary to explain the outcome.[..]
Thanks for the more precise clarification.
I still would like to understand how that can work quantitatively. If you don't mind, please comment on "Herbert's proof" as elaborated in an old thread (which is still open): https://www.physicsforums.com/showthread.php?t=589134
Your contribution will be appreciated! :smile:
 
  • #58
bhobba said:
Yes it does. In interpretations that include decoherence (e.g., decoherent histories) the probabilities of measurement outcomes predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but they do not become actual until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that given a system state you can calculate the probabilities of the outcomes of an observation, but this doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.

Thanks
Bill
Thanks. I don't think K-S is a problem, since the MSI is about not speculating on what exists independent of observation. Though there's every reason to believe that what's there prior to observation is relevant.
 
  • #59
bhobba said:
Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus with the observed system, what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property just prior to observation, which is much more in line with physical intuition.

Thanks
Bill

The KS theorem states that it doesn't make sense to assume that compatible observables have definite values if the system is not prepared in a common eigenstate of the operators representing these observables. I don't see how this can be a problem for the MSI, which states precisely that such compatible observables only have determinate values when the system is prepared in a common eigenstate.
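The content of the KS theorem can be made concrete with the Peres-Mermin square: nine two-qubit observables whose algebra forbids any noncontextual assignment of pre-existing ±1 values. Here's a small NumPy check of the operator identities (my own arrangement of the square):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Peres-Mermin "magic square" of two-qubit observables. Within each
# row and each column all three operators commute, so each triple
# could in principle be measured together.
square = [
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(I2, Y), np.kron(Y, I2), np.kron(Y, Y)],
    [np.kron(X, Y),  np.kron(Y, X),  np.kron(Z, Z)],
]

for r in range(3):
    prod = square[r][0] @ square[r][1] @ square[r][2]
    print("row", r, "product = +I:", np.allclose(prod, np.eye(4)))

for c in range(3):
    prod = square[0][c] @ square[1][c] @ square[2][c]
    print("col", c, "product =", "+I" if np.allclose(prod, np.eye(4)) else "-I")

# All three rows multiply to +I, but the third column multiplies to -I.
# Any assignment of fixed +/-1 values would make the product of all nine
# values equal +1 via rows but -1 via columns: a contradiction, so no
# noncontextual pre-existing values can exist.
```

This is exactly the "values only in a common eigenstate" situation: each triple is jointly measurable, but the nine outcomes cannot all pre-exist at once.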

Could you point me to the problems Ballentine has stated about the KS theorem in the context of the MSI? In his book "Quantum Mechanics: A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.
 
  • #60
DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
But it's a hollow victory, as Fuchs puts it so nicely and somewhat surprisingly:
The usual game of interpretation is that an interpretation is always something you add to the preexisting, universally recognized quantum theory. What has been lost sight of is that physics as a subject of thought is a dynamic interplay between storytelling and equation writing. Neither one stands alone, not even at the end of the day. But which has the more fatherly role? If you ask me, it’s the storytelling. Bryce DeWitt once said, “We use mathematics in physics so that we won’t have to think.” In those cases when we need to think, we have to go back to the plot of the story and ask whether each proposed twist and turn really fits into it. An interpretation is powerful if it gives guidance, and I would say the very best interpretation is the one whose story is so powerful it gives rise to the mathematical formalism itself (the part where nonthinking can take over). The "interpretation" should come first; the mathematics (i.e., the pre-existing, universally recognized thing everyone thought they were talking about before an interpretation) should be secondary.
Interview with a Quantum Bayesian
https://www.physicsforums.com/showthread.php?p=4177910&highlight=fuchs#post4177910
 
