New Quantum Interpretation Poll

  • #51
@bohm2
Thanks for the link. Papers like this can be fun to read, but I don't think they amount to much.

DrChinese said:
I don't believe that Einstein's views on all 3 of the below can be correct:
i. No spooky action at a distance.
ii. Moon is there even when no one is looking.
iii. QM is not complete.
I believe they can. Unfortunately, there's no way to determine which view is correct.

DrChinese said:
If Einstein had lived to learn of it, I am quite certain he would acknowledge that if there is a more complete specification of the system possible, that in fact there must be influences which are not bounded by c.
I don't think that's what he would conclude from experimental violations of Bell inequalities. I think he would conclude that Bell's LHV (local hidden variables) formulation is not viable. Why it isn't viable remains an open question in physics.

DrChinese said:
Which would in turn mean that relativity requires tuning.
So far, there's no way to determine whether relativity 'requires tuning'. Make certain assumptions and it requires tuning; otherwise, no. As far as practical application goes, both relativity and QM seem to work just fine.

DrChinese said:
So either way, one of Einstein's fundamental beliefs must be considered incorrect.
His belief that QM is an incomplete description of the deep reality seems to be quite correct. His beliefs that nature is exclusively local and that an LHV theory of quantum entanglement is possible remain open questions.

Is Bell's locality condition (re quantum entanglement preps) the only way that locality can be modeled? Open question. Are there "influences which are not bounded by c"? Open question.

DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
 
  • #52
nanosiborg said:
@bohm2
This (and the minimal statistical, or probabilistic, or ensemble) 'interpretation' wins because it doesn't involve any metaphysical speculation about what nature 'really is'. It just recognizes that what's 'really happening' in the deep reality of quantum experimental phenomena is unknown.
Yes, and physics is the attempt to describe objective, reproducible facts about our observations of phenomena as precisely as possible. The question of why this works so well, and in relatively simple mathematical terms, or even of why nature behaves as we observe her to, is not a matter of physics (or any natural science) but of philosophy or even religion.

That's why I'm a follower of the minimal statistical interpretation (MSI): it uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble that comes with collapse interpretations (collapse being, I think, the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be made clear what the violation of Bell's inequality means when interpreted within the MSI. Take as an example an Aspect-Zeilinger-like "teleportation" experiment with entangled photons, and let's analyze it in terms of the MSI.

Within the MSI the state is described by a statistical operator (the mathematical level of understanding). It is related to the real world (the physics level of understanding, dealing with real objects like photons, crystals, lasers, polarization filters, and whatever else the experimental quantum opticians have in their labs) as an equivalence class of preparation procedures, each of which is appropriate to prepare the system in question (with high enough accuracy) in this state.

Of course, a given preparation procedure has to be checked to see that it really produces this state. According to the MSI, that means I have to be able to reproduce the procedure accurately enough to prepare many systems in this state, independently of one another, and so build a large enough ensemble to verify the probabilistic prediction: the claim that each system in the ensemble, through this preparation procedure, is prepared such that its statistical behavior is described (at least up to the accuracy of the measurement procedure used) by this state.

In the Zeilinger experiment, the preparation step produces a two-photon Fock state via parametric down-conversion, by shooting a laser beam onto a birefringent crystal and then leaving the photon pair alone (i.e., there must be no interactions of either photon with anything around it, so that we can be sure the pair stays in this very state). In the simplest case the photon pair is prepared in a helicity-0 state, i.e., the polarization part is described by the pure state
|\Psi \rangle=\frac{1}{\sqrt{2}}(|HV \rangle-|VH \rangle).
The single-photon polarization states are then given by the corresponding partial traces over the other photon, and they turn out to be the maximum-entropy statistical operators
\hat{R}_A=\hat{R}_B=\frac{1}{2}(|H \rangle \langle H|+|V \rangle \langle V|).
Thus the single photons are unpolarized (i.e., an ensemble behaves like an unpolarized beam of light when one takes the appropriate average over many single-photon events). In terms of information theory, the single-photon polarization is maximally undetermined (maximal von Neumann entropy).
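
To make the partial trace concrete, here is a minimal numpy sketch of the statement above (my own example, not from the post; the basis ordering |HH>, |HV>, |VH>, |VV> is an assumed convention). It reproduces both claims: the reduced operators are (1/2)(|H><H| + |V><V|), and their von Neumann entropy is maximal (1 bit).

Code:
import numpy as np

# Two-photon polarization state (1/sqrt(2))(|HV> - |VH>),
# basis ordering |HH>, |HV>, |VH>, |VV> (assumed convention)
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())           # two-photon statistical operator

# Partial trace over photon B: reshape to indices (a, b, a', b'), trace b = b'
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)        # -> 0.5 * identity, i.e. an unpolarized single photon

# von Neumann entropy in bits: maximal (1 bit) for a polarization qubit
p = np.linalg.eigvalsh(rho_A)
print(-sum(x * np.log2(x) for x in p if x > 1e-12))   # -> 1.0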

In principle, it's possible to wait a very long time and then perform the polarization analysis at very distant places. Then Alice and Bob can do their measurements in any chronological order. E.g., Alice measures her photon first and Bob measures his afterwards, and they can do this at spacelike separation, i.e., such that a causal effect of Alice's measurement on Bob's photon could only occur if there were faster-than-light signal propagation. They can even do their measurements at the same time, so that one would need signal propagation at an arbitrarily large speed to have a causal effect of one measurement on the other.

It's well known that the prediction of quantum theory is fulfilled to an overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon in horizontal (vertical) polarization, Bob finds his in vertical (horizontal) polarization.

Now it is a matter of interpretation how you draw conclusions about faster-than-light signal propagation (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble, which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs. At the same time it can be verified that the single-photon ensembles at Alice's and Bob's places behave as an unpolarized beam of light, i.e., both measure (on average!) horizontal polarization in 50% of the cases and vertical polarization in the other 50%. Afterwards they can match their measurement protocols and verify the one-to-one correlation.

No FTLSP has been necessary to explain this correlation, since it was an (only statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary to explain the outcomes. According to standard QED, the interaction of each photon with the polarization filters and the detector at each place is local, and one measurement cannot influence the other. Within the MSI one doesn't need anything that violates this very successful assumption, on which all our (theoretical) knowledge of elementary particles and photons (summarized in the standard model) is based: at the very foundations of relativistic QFT and the definition of the S-matrix we use the microcausality and locality assumptions. So there is no need (yet) to give up these very successful assumptions.
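
As a toy illustration of the purely statistical character of this correlation, here is a small numpy sketch for the equal-angle setting only (my own example, not from the post): it samples directly from the Born-rule joint distribution for the state above and recovers both the unpolarized marginals and the perfect anticorrelation. It says nothing about unequal angle settings, which is where Bell's inequality becomes relevant.

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Equal-angle Born-rule joint distribution for (1/sqrt(2))(|HV> - |VH>):
# P(H,V) = P(V,H) = 1/2, P(H,H) = P(V,V) = 0
alice = rng.integers(0, 2, N)   # 0 = H, 1 = V, each with probability 1/2
bob = 1 - alice                 # partner outcome is always the opposite

print("Alice H fraction:", np.mean(alice == 0))    # ~0.5 (unpolarized)
print("Bob H fraction:  ", np.mean(bob == 0))      # ~0.5 (unpolarized)
print("Anticorrelated:  ", np.mean(alice != bob))  # exactly 1.0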

Now, if you adhere to a collapse interpretation a la (some flavors of) the Copenhagen interpretation (CI), you believe that at the moment Alice's detector registers her photon as horizontally polarized, the two-photon state instantaneously collapses to the new pure state |HV \rangle. This happens in 50% of all cases, and then of course Bob, who detects his photon after Alice (with the detection events supposed to be separated by a spacelike interval in Minkowski space), must necessarily find his photon vertically polarized. Concerning the outcome of the experiment, this interpretation is thus no different from the MSI, but it of course causes serious problems for the locally causal foundations of relativistic QFT. If the collapse of the state were a physical process acting on the single photon pair, there would have to be FTLSP, and since the detection events are spacelike separated, an observer in an appropriate reference frame could claim that Bob's measurement came before Alice's, reversing the causal sequence: from his point of view, Bob's measurement caused the instantaneous collapse before Alice could detect her photon. This, however, would mean that the very foundation of all physics is violated, namely the causality principle, without which there is no sense in doing physics at all.

That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT as a statistical description of ensembles of independently but identically prepared systems, not as a description of any single system within such an ensemble. Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a non-local deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
 
  • #53
vanhees71 said:
That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT as a statistical description of ensembles of independently but identically prepared systems, not as a description of any single system within such an ensemble.

Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus and the observed system, with what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.

Thanks
Bill
 
  • #54
vanhees71 said:
[ ... snip ]
That's why I prefer the MSI and dislike any interpretation invoking (unnecessarily, as we have seen above!) an instantaneous collapse of the state. Of course, the MSI considers QT as a statistical description of ensembles of independently but identically prepared systems, not as a description of any single system within such an ensemble.
Whether or not that's a complete description of nature is an open question. If it is incomplete, the violation of Bell's inequality leaves only the possibility of a non-local deterministic theory. The problem is that we neither have such a theory that is consistent, nor is there any empirical hint that we need one, because all observations so far are nicely described by QT in the MSI.
Agree with this. MSI is sufficient, and whether through indifference or choice it seems to be the standard way of thinking about this.

It might be in some sense interesting or entertaining that so and so likes a certain interpretation, but it isn't important.

Nice post vanhees. I snipped all but the last part of it only for conciseness and convenience.
 
  • #55
bhobba said:
That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?
 
  • #56
nanosiborg said:
Doesn't decoherence precede all observations? Or, what's the criterion by which you exclude certain instrumental results?

Yes, it does. In interpretations that include decoherence (e.g., decoherent histories), the probabilities of the outcomes of observations predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but they do not become real until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that if you have a system state you can calculate the probabilities of the outcomes of an observation, but it doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.

Thanks
Bill
 
Last edited:
  • #57
vanhees71 said:
[..] I'm a follower of the minimal statistical interpretation (MSI): it uses as many assumptions (postulates) as are needed to apply quantum theory to the description of (so far) all known observations in nature, but no more. It also avoids the trouble that comes with collapse interpretations (collapse being, I think, the only real difference between the Bohr-Heisenberg Copenhagen point of view and the MSI).

Also, it should be made clear what the violation of Bell's inequality means when interpreted within the MSI. Take as an example an Aspect-Zeilinger-like "teleportation" experiment with entangled photons, and let's analyze it in terms of the MSI.
[..]
It's well known that the prediction of quantum theory is fulfilled to an overwhelming accuracy: if both Alice and Bob measure the polarization in the same direction, there is a one-to-one correspondence between their results. If Alice finds her photon in horizontal (vertical) polarization, Bob finds his in vertical (horizontal) polarization.

Now it is a matter of interpretation how you draw conclusions about faster-than-light signal propagation (FTLSP). Within the MSI there is no problem staying with the conservative point of view that there is no FTLSP. The one-to-one correlation between the polarizations is due to the preparation of the two-photon state at the very beginning, and it's a statistical property of the ensemble, which can be verified only by doing a lot of experiments with a lot of equally prepared photon pairs. [..] No FTLSP has been necessary to explain this correlation, since it was an (only statistical) property of the preparation procedure for the photon pair, and no causal influence of the measurement at Alice's place on the measurement at Bob's has been necessary to explain the outcomes. [..]
Thanks for the more precise clarification.
I still would like to understand how that can work quantitatively. If you don't mind, please comment on "Herbert's proof" as elaborated in an old thread (which is still open): https://www.physicsforums.com/showthread.php?t=589134
Your contribution will be appreciated! :smile:
 
  • #58
bhobba said:
Yes, it does. In interpretations that include decoherence (e.g., decoherent histories), the probabilities of the outcomes of observations predicted by the Born rule are called pre-probabilities. They can be calculated with or without reference to an observational setup, but they do not become real until manifest in an observational apparatus, which implies decoherence must have occurred.

What this means is that if you have a system state you can calculate the probabilities of the outcomes of an observation, but it doesn't really mean anything unless you actually have an observational apparatus to observe it, in which case decoherence will occur. That's why they are called pre-probabilities.

Thanks
Bill
Thanks. I don't think K-S is a problem, since the MSI is about not speculating about what exists independent of observation. Though there's every reason to believe that what's there prior to observation is relevant.
 
Last edited:
  • #59
bhobba said:
Just a quick question. How do you handle Kochen-Specker, which implies that the ensemble you select the outcome from cannot be there prior to observation? Or are you fine with reality being the interaction of the observational apparatus and the observed system, with what it is prior to observation being irrelevant? I know Ballentine had a bit of difficulty with this.

That's why I use a slight variation of the MSI in which I only count observations as actual after decoherence has occurred. That way you can assume the system has the property prior to observation, which is much more in line with physical intuition.

Thanks
Bill

The KS theorem states that it doesn't make sense to assume that compatible observables have certain values if the system is not prepared in a common eigenstate of the operators representing these observables. I don't see how this can be a problem for the MSI, which states precisely that such compatible observables only have determined values when the system is prepared in a common eigenstate.

Could you point me to the problems Ballentine has stated about the KS theorem in the context of the MSI? In his book "Quantum Mechanics: A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.
 
  • #60
DrChinese said:
"Shut up and calculate" seems to win even when it is not mentioned, as this is what everyone does at the end of the day. :smile:
But it's a hollow victory, as put so nicely (and somewhat surprisingly) by Fuchs:
The usual game of interpretation is that an interpretation is always something you add to the preexisting, universally recognized quantum theory. What has been lost sight of is that physics as a subject of thought is a dynamic interplay between storytelling and equation writing. Neither one stands alone, not even at the end of the day. But which has the more fatherly role? If you ask me, it’s the storytelling. Bryce DeWitt once said, “We use mathematics in physics so that we won’t have to think.” In those cases when we need to think, we have to go back to the plot of the story and ask whether each proposed twist and turn really fits into it. An interpretation is powerful if it gives guidance, and I would say the very best interpretation is the one whose story is so powerful it gives rise to the mathematical formalism itself (the part where nonthinking can take over). The "interpretation" should come first; the mathematics (i.e., the pre-existing, universally recognized thing everyone thought they were talking about before an interpretation) should be secondary.
Interview with a Quantum Bayesian
https://www.physicsforums.com/showthread.php?p=4177910&highlight=fuchs#post4177910
 
  • #61
vanhees71 said:
The KS theorem states that it doesn't make sense to assume that compatible observables have certain values if the system is not prepared in a common eigenstate of the operators representing these observables. I don't see how this can be a problem for the MSI, which states precisely that such compatible observables only have determined values when the system is prepared in a common eigenstate.

Could you point me to the problems Ballentine has stated about the KS theorem in the context of the MSI? In his book "Quantum Mechanics: A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.

The KS theorem is not a problem for the MSI provided you do not assume the observable has its value prior to observation. However, that is a very unnatural assumption. When the observation selects an outcome from the ensemble of similarly prepared systems with that outcome associated with it, you would like to think it is revealing the value the system had prior to observation - but you can't do that.

A number of books, such as Hughes's The Structure and Interpretation of Quantum Mechanics, mention the issues the ensemble interpretation has with KS - I can dig up the page if you really want - but not now - feeling a bit tired. They claim it invalidates the interpretation - it doesn't - but the assumption you need to make to get around it is slightly unnatural - that's all. The ensemble cannot be viewed in terms of classical probabilities, like, say, tossing a die, unless you invoke decoherence.

I did manage to find the following online:
http://books.google.com.au/books?id...&q=ballentine ensemble kochen specker&f=false

I too have Ballentine's book and it's not in there anywhere - I read it in some early paper he wrote but can't recall which one. But since then I think he realized it wasn't really an issue if you abandon viewing the probabilities as classical.

Thanks
Bill
 
  • #62
@bhobba: I see. Obviously it's not so easy to label one's interpretation of quantum theory with a simple name. I guess there are as many interpretations as there are physicists using QT. :biggrin:

I always understood the MSI such that, of course, the only observables that are determined for a system, prepared in some pure or mixed state by some well-defined preparation procedure, are those for which the probability (given by Born's rule) to find a certain possible value (necessarily an eigenvalue of the operator representing the observable) is 1 (and then the probability to find any other value must of course be 0). All other observables simply do not have a determined value. Measuring such an undetermined observable gives one of its possible values with a probability given by Born's rule. Measuring it on a single system doesn't tell us much. We can only test the hypothesis that the probabilities are given by Born's rule by preparing an ensemble (in the sense explained in my previous postings) and doing the appropriate statistical analysis. Simply put: an observable takes a certain value if and only if the system is prepared in an appropriate state, where the corresponding probability to find this value is 1. The KS theorem tells me that it contradicts quantum theory to assume that the values of undetermined observables are merely unknown but in "reality" have certain values; that would interpret quantum-theoretical probabilities as subjective probabilities in the sense of classical statistical physics, which is incompatible with QT according to KS. As you say, this doesn't pose a problem for the MSI. On the contrary, as I understand the MSI, it is maximally compatible with the KS theorem!
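
A tiny numpy illustration of this criterion (my own sketch, not from the discussion): for a polarization measurement in the H/V basis, Born's rule assigns probability 1 to one outcome exactly when the state is an eigenstate; for a superposition the observable is undetermined.

Code:
import numpy as np

basis = np.eye(2)   # eigenvectors |H>, |V> of the H/V polarization observable

def born_probs(psi):
    """Born-rule probabilities |<a|psi>|^2 for each eigenvector |a>."""
    return np.abs(basis.conj().T @ psi) ** 2

print(born_probs(np.array([1.0, 0.0])))               # [1, 0]: value determined
print(born_probs(np.array([1.0, 1.0]) / np.sqrt(2)))  # [0.5, 0.5]: undetermined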

@bohm2: This quote by Fuchs is indeed interesting. It's easily stated that interpretation should come first, but this is the very first problem you run into if you have to teach an introductory quantum-mechanics lecture, and I have no solution for it. I think one has to start with a heuristic introduction using wave mechanics (but please not with photons, because these are among the most difficult cases of all; better to use some massive particles and nonrelativistic QT to start with, but that's another story). This should only be short (maybe at most 2-3 hours), and then you come immediately to the representation-independent formulation in terms of abstract Hilbert space (which is mostly Dirac's "transformation theory", one of the three historically first formulations of QT, besides Heisenberg-Born-Jordan matrix mechanics and de Broglie-Schrödinger wave mechanics).

Only when you have established this very abstract way of thinking with some examples (the "quantum kinematics", so to speak) can you come to a presentation of "interpretation", i.e., you can define what a quantum state really means, which of course depends on your point of view. I use the MSI. So far I've only given one advanced lecture ("Quantum Mechanics II") on the subject, and there I had no problems (at least if I believe the students' quite positive evaluations at the end) with using the MSI and the point of view that a quantum state in the real world means an equivalence class of preparation procedures, represented by a statistical operator whose only meaning is to provide a probabilistic description of the knowledge about the system, given its preparation. It gives only probabilities for the outcomes of measurements of observables, and observables that are undetermined do not have any certain value (see above). Of course, the real challenge is to teach the introductory lecture, and that I have never had to do yet. So I cannot say how I would present it.

Another question I always pose is what this Bayesian interpretation of probabilities amounts to; nobody has been able to answer it in a satisfactory way for me so far. What does this other interpretation mean in practice? If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of the occurrence of the possible outcomes of the observables under consideration?

You have the same problem with classical random experiments such as playing dice. Knowing nothing about the die, according to the Shannon-Jaynes principle I assign the distribution of maximal entropy ("principle of least prejudice"): an equal probability of 1/6 for each outcome (occurrence of the numbers 1 to 6 when throwing the die). These are the "prior probabilities", and now I have to check them. How else can I do it than to throw the die many times and count the relative frequencies of the occurrence of the numbers 1-6? Only then can I test the hypothesis about this specific die to a certain statistical accuracy and, if I find significant deviations, update my probability function. I don't see what all this Bayesian mumbling about "interpretations of probability different from the frequentist one" is about, if I can never check the probabilities other than in the frequentist way!
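
For what it's worth, the check described here takes only a few lines (my own sketch; it assumes numpy and scipy are available):

Code:
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
N = 10_000

throws = rng.integers(1, 7, N)                  # simulate a fair die
counts = np.bincount(throws, minlength=7)[1:]   # frequencies of faces 1..6
print("relative frequencies:", counts / N)      # all near the prior 1/6

# Frequentist test of the maximum-entropy prior: chi-square goodness of fit
# (scipy's chisquare assumes equal expected frequencies by default)
chi2, p_value = stats.chisquare(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # large p: no significant deviation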
 
  • #63
vanhees71 said:
I don't see what all this Bayesian mumbling about "interpretations of probability different from the frequentist one" is about, if I can never check the probabilities other than in the frequentist way!
Yes, if I understand you, I believe Timpson makes a similar criticism of Bayesianism here:
We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian...

The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems.
Quantum Bayesianism: A Study
http://arxiv.org/pdf/0804.2047v1.pdf
 
  • #64
vanhees71 said:
Another question I always pose is what this Bayesian interpretation of probabilities amounts to; nobody has been able to answer it in a satisfactory way for me so far. What does this other interpretation mean in practice? If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of the occurrence of the possible outcomes of the observables under consideration?

In my opinion, that couldn't be more wrong. As I said in another post, I think it mixes up the issue of good scientific practice with what science IS. I agree that reproducibility is extremely important for scientists, but it's not an end in itself, and it's not sufficient. Inevitably, there will be a time when it is necessary to make a judgment about the likelihood of something that has never happened before: the first time a particular accelerator is turned on, the first time anyone rides in a new type of vehicle, the first time anyone performs some surgical procedure. Or, in pure science, the first time a particular alignment of celestial bodies has occurred. In these cases, we expect that science will work the same in one-off situations as it does in controlled, repeatable situations.

Even if you want to distinguish pure science from applied science, you still have the problem of what counts as a trial, in order to make sense of a frequentist interpretation. The state of the world never repeats. Of course, you can make a judgment that the aspects that vary from one run of an experiment to another are unlikely to be relevant, but what notion of "unlikely" are you using here? It can't be a frequentist notion of "unlikely".

A non-frequentist notion of likelihood is needed to even apply a frequentist notion of likelihood.
 
  • #65
bhobba said:
The KS theorem is not a problem for the MSI provided you do not assume the observable has its value prior to observation. However, that is a very unnatural assumption. When the observation selects an outcome from the ensemble of similarly prepared systems with that outcome associated with it, you would like to think it is revealing the value the system had prior to observation - but you can't do that.

It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations. Then the weird aspects of quantum probability (namely, interference terms) go into deciding the probability for a history, but then ordinary probabilistic reasoning would apply in doing relative probabilities: Out of all histories in which Alice measures spin-up, fraction f of the time, Bob measures spin-down.

What's unsatisfying to me about this approach is that the histories can't be microscopic histories (describing what happens to individual particles) because of incompatible observables. Instead, they would have to be macroscopic histories, with some kind of coarse-graining. Or alternatively, we could just talk about probability distributions of permanent records, maybe.
 
  • #66
stevendaryl said:
It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations.

Yea - looks like decoherent histories to me.

I rather like that interpretation, and it would be my favorite except for one thing - it looks suspiciously like defining your way out of trouble. We can't perceive more than one outcome at a time - well, let's impose that as a consistency condition - voila - you have consistent histories. Decoherence automatically enforces this consistency condition, so you have decoherent histories. To me it's ultimately unsatisfying - but so is my interpretation when examined carefully enough - though mine is a bit more overt in stating its assumption, i.e., I state explicitly that the act of observation chooses an outcome that is already present - which you can do because decoherence transforms a pure state into an improper mixed state. Basically I assume the improper mixed state is a proper one, and without further ado the measurement problem is solved - but how does an observation accomplish this feat? Sorry - no answer - nature is just like that.

Thanks
Bill
 
  • #67
Demystifier said:
What would you say about this interpretation
http://arxiv.org/abs/1112.2034 [to appear in Int. J. Quantum Inf.]
which interpolates between local and realistic interpretations, and in a sense is both local and realistic (or neither).
My answer is a bit late (and maybe this is a bit off topic), but anyway. I don't have the time to read your paper, so just two quick questions about this interpretation:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
 
  • #68
Anybody have a feel for the "support" for Aharonov's Time Symmetric Quantum Mechanics (TSQM) formulation these days? (I'm guessing it's one of the ones that would fall in the "Other" category of the poll.)

It's my understanding that it offers very elegant and mathematically simple explanations for some newish experiments where regular QM involves complicated interference effects and rather intractable math. Although one just has to be willing to let go of his/her notions of linear time ;-)
 
  • #69
kith said:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
1) Yes (provided that only particles, not wave functions, count as "physics").
2) His decisions are predetermined, but not in a superdeterministic way. In other words, even though free will is only an illusion, this is not how non-locality is avoided.
 
  • #70
Sounds like a nice gedanken interpretation. Or do you think many people will stick to it in the future? ;-)

Is it formally compatible with all assumptions of Bell's theorem? If yes, how is the experimental violation explained? If no, what assumptions are violated?
 
  • #71
kith said:
Sounds like a nice gedanken interpretation. Or do you think many people will stick to it in the future? ;-)
Probably not, but who knows?

kith said:
Is it formally compatible with all assumptions of Bell's theorem? If yes, how is the experimental violation explained? If no, what assumptions are violated?
The Bell theorem assumes that there is reality (hidden variables) associated with entangled particles and/or separated detectors of two particles. This assumption is not satisfied here. Reality is associated only with observers whose role is to act as local coincidence counters. (See Fig. 1 in the paper.)
 
  • #72
The poll paper was also discussed in Nature:
“Perhaps the fact that quantum theory does its job so well and yet stubbornly refuses to answer our deeper questions contains a lesson in itself,” says Schlosshauer. Possibly the most revealing answer was that 48% believed that there will still be conferences on the foundations of quantum theory in 50 years’ time.
Experts still split about what quantum theory means
http://www.nature.com/news/experts-still-split-about-what-quantum-theory-means-1.12198
 
  • #73
For completion, this poll was also recently discussed by cosmologist Sean Carroll in his blog and a recent video:
I’ll go out on a limb to suggest that the results of this poll should be very embarrassing to physicists. Not, I hasten to add, because Copenhagen came in first, although that’s also a perspective I might want to defend (I think Copenhagen is completely ill-defined, and shouldn’t be the favorite anything of any thoughtful person). The embarrassing thing is that we don’t have agreement. Think about it: quantum mechanics has been around since the 1920s at least, in a fairly settled form. John von Neumann laid out the mathematical structure in 1932. Subsequently, quantum mechanics has become the most important and best-tested part of modern physics. Without it, nothing makes sense. Every student who gets a degree in physics is supposed to learn QM above all else. There are a variety of experimental probes, all of which confirm the theory to spectacular precision. And yet, we don’t understand it. Embarrassing. To all of us, as a field (not excepting myself).
The Most Embarrassing Graph in Modern Physics
http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

QM: An embarrassment (video)
 
Last edited by a moderator:
  • #74
At the edge of knowledge in any scientific field, there is "embarrassment" such as this. What happened before the big bang? Cosmologists don't agree on that either. Was the evolution of intelligence in the universe likely? Experts don't agree about that.
 
  • #75
bohm2 said:
For completion, this poll was also recently discussed by cosmologist Sean Carroll in his blog and a recent video:

The Most Embarrassing Graph in Modern Physics
http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

QM: An embarrassment (video)


I cannot really say I agree with him here. I don't think it's embarrassing at all. In fact the opposite: I think it means we have a very difficult and interesting question that deserves to be mentioned more often. I also think we should not go around calling it embarrassing, because it sends bad signals to the general population. All knowledge starts out unknown, and contrary to some other interest groups in our society, we as scientists should freely admit it whenever we don't know something. It's not something strange; it's natural.
 
Last edited by a moderator:
  • #76
Zarqon said:
I cannot really say I agree with him here. I don't think it's embarrassing at all. In fact the opposite: I think it means we have a very difficult and interesting question that deserves to be mentioned more often. I also think we should not go around calling it embarrassing, because it sends bad signals to the general population. All knowledge starts out unknown, and contrary to some other interest groups in our society, we as scientists should freely admit it whenever we don't know something. It's not something strange; it's natural.
I absolutely agree! :approve:
 
  • #77
Matt Leifer has made an excellent post in the comment section of the Preposterous Universe blog. According to him, the scientific relevance of quantum foundations is not to find the "right" interpretation of the already existing theory (because that's metaphysical), but the fact that different interpretations suggest different ways to solve the known problems of physics, which probably require a theory that goes beyond QM.
 
  • #78
Another survey came out today with very different results:
This is a somewhat surprising result. The otherwise sovereign Copenhagen interpretation loses ground against the de Broglie-Bohm interpretation. This is partly the influence of decided minorities in small populations, because the participants of the conference were all but representative of the whole physicists' community. Not surprisingly, the outcome is quite different from the distributions observed by Tegmark or Schlosshauer et al.
Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf
 
  • #79
bohm2 said:
Another survey came out today with very different results:

Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf


Interesting, but such a small sample.
What I cannot for the life of me understand is why someone hasn't just sent an email to, say, 200 quantum physicists with the questionnaire. It would give a much, much more interesting snapshot.

Once again the answers in this poll are so ****ing weird and inconsistent, it's pretty clear the answerers aren't even sure wtf they think about the issue.

I think what is needed is a BIG poll; these tiny conference polls are not even a drop in the ocean. Sure, they are slightly interesting, but that's it.
 
  • #80
bohm2 said:
Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf
According to this survey, the top 3 interpretations are:
1. I have no preferred interpretation - 44%
2. Shut up and calculate - 17 %
3. De-Broglie Bohm - 17 %

Given that 1. and 2. are not really specific interpretations at all, it can be said that De-Broglie Bohm, at this conference at least, is the most popular specific interpretation.
 
  • #81
Only 12 physicists, most of them master's students or early Ph.D. students. Maybe later on they will have a preferred interpretation.
 
  • #82
stevendaryl said:
There is an assumption (as someone has pointed out) made by the Bohm interpretation, which is that the initial distribution of particle positions is made to agree with the square of the Schrödinger wave function, but I don't see how, in a realistic model, that makes sense. If you only have a single electron, for instance, what sense does it make that it has a "distribution"?

I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)
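
For readers who want to see the flavor of such a simulation, here is a crude toy sketch of the guidance-equation dynamics (entirely my own construction, not any published code): particles start bunched far from equilibrium in an infinite square well, are pushed around by v = Im(ψ'/ψ) for a superposition of modes with random phases, and their coarse-grained histogram is compared against |ψ|². Real relaxation studies (e.g., Valentini and Westman's) use careful integrators and finer coarse-graining; with this naive Euler stepper the agreement is only rough, and trajectories near nodes of ψ are handled badly.

Code:
import numpy as np

# Toy pilot-wave relaxation in an infinite square well on [0, pi], hbar = m = 1
M = 16
rng = np.random.default_rng(1)
phases = rng.uniform(0, 2 * np.pi, M)
n = np.arange(1, M + 1)
E = n ** 2 / 2                      # eigenenergies E_n = n^2/2 in these units

def psi(x, t):
    """Equal-weight superposition of the first M well modes, random phases."""
    modes = np.sqrt(2 / np.pi) * np.sin(np.outer(x, n))
    return modes @ (np.exp(-1j * (E * t + phases)) / np.sqrt(M))

def velocity(x, t, eps=1e-4):
    """Guidance equation v = Im(psi'/psi); derivative by central difference."""
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(dpsi / psi(x, t))

x = rng.uniform(1.0, 1.2, 2000)     # start far from quantum equilibrium
dt, steps = 2e-3, 5000
for i in range(steps):
    x += dt * velocity(x, i * dt)   # naive Euler step (crude near nodes!)
    x = np.clip(x, 1e-6, np.pi - 1e-6)

# Coarse-grained comparison of the particle histogram with |psi|^2
hist, edges = np.histogram(x, bins=30, range=(0, np.pi), density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
born = np.abs(psi(centers, steps * dt)) ** 2
print(np.c_[centers, hist, born])   # columns 2 and 3 should roughly track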
 
  • #83
aphirst said:
I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)
Yes, that needs to be repeated over and over again.

Just as, not so long ago, it was necessary to repeat over and over again that the Bell theorem does not exclude hidden variables in general (including the Bohmian ones), but only local hidden variables. Fortunately, it seems that a significant majority of physicists now appreciate that.
 
  • #84
Just for completion, another poll, done with vastly different results, with the authors' insightful (in my opinion) comments highlighted:
Here we report the results of giving this same survey to the attendees at another recent quantum foundations conference. While it is rather difficult to conclude anything of scientific significance from the poll, the results do strongly suggest several interesting cultural facts – for example, that there exist, within the broad field of “quantum foundations”, sub-communities with quite different views, and that (relatedly) there is probably even significantly more controversy about several fundamental issues than the already-significant amount revealed in the earlier poll...

In the SKZ results, b. Copenhagen (42%) and e. Information-based/information-theoretical (24%) received the highest response rates, while c. de Broglie - Bohm received zero votes of endorsement. SKZ write explicitly that “the fact that de Broglie - Bohm interpretation did not receive any votes may simply be an artifact of the particular set of participants we polled.” Our results strongly confirm this suspicion. At the Bielefeld conference, choice c. de Broglie - Bohm garnered far and away the majority of the votes (63%) while b. Copenhagen and e. information-based / information-theoretical received a paltry 4% and 5% respectively. It is also interesting to compare results on this question to the older (1997) survey conducted by Max Tegmark. Tegmark, finding that 17% of his respondents endorsed a many-worlds / Everett interpretation, announced this as a “rather striking shift in opinion compared to the old days when the Copenhagen interpretation reigned supreme.” Our results clearly suggest, though, that any such interpretation of these sorts of poll results – as indicating a meaningful temporal shift in attitudes – should be taken with a rather large grain of salt. It is almost certainly not the case, for example, that while a “striking shift” toward many-worlds views occurred in the years prior to 1997, this shift then stalled out between 1997 and 2011 (the response rate endorsing Everett being about the same in the Tegmark and SKZ polls), and then suddenly collapsed (with the majority of quantum foundations workers now embracing the de Broglie - Bohm pilot-wave theory).

Instead, the obviously more plausible interpretation of the data is that each poll was given to a very different and highly non-representative group. The snapshots reveal much more about the processes by which it was decided who should be invited to a given conference, than they reveal about trends in the thinking of the community as a whole. We note finally that insofar as our poll got more than twice as many respondents as the SKZ poll (which those authors had described as “the most comprehensive poll of quantum-foundational views ever conducted”) it is now apparently the case that the de Broglie - Bohm pilot-wave theory is, by an incredibly large margin, the most endorsed interpretation in the most comprehensive poll of quantum foundational views ever conducted. For the reasons we have just been explaining, this has almost no meaning, significance, or implications, beyond the fact that lots of “Bohmians” were invited to the Bielefeld conference. But it does demonstrate rather strikingly that the earlier conferences (where polls were conducted by Tegmark and SKZ) somehow failed to involve a rather large contingent of the broader foundations community. And similarly, the Bielefeld conference somehow failed to involve the large Everett-supporting contingent of the broader foundations community.
Yet Another Snapshot of Foundational Attitudes Toward Quantum Mechanics
http://lanl.arxiv.org/pdf/1306.4646.pdf
 
  • #85
Even though, as before, the sample is not representative, it is always interesting to see the top 3 list:
1. De Broglie-Bohm - 63%
2. Objective Collapse - 16 %
3. I have no preferred interpretation - 11 %
 
  • #86
aphirst said:
I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)

I don't understand that. Let's take the case of a single particle. In the Bohm interpretation, it always has a definite location. So unless the wave function is a delta function, its square won't agree with the actual distribution of particles.
 
  • #87
Thanks bohm2, this is quite interesting. I was interested in the exact topics of the conferences where the polls were taken. Here they are:

Tegmark: "Fundamental Problems in Quantum Theory" 1997
Schlosshauer et al.: "Quantum Physics and the Nature of Reality" 2011
Norsen & Nelson: "Quantum Theory Without Observers" 2013

Arguably, the topic of the last conference is narrower. Researchers who favor an observer-dependent interpretation may have been discouraged from attending.
 
  • #88
stevendaryl said:
I don't understand that. Let's take the case of a single particle. In the Bohm interpretation, it always has a definite location. So unless the wave function is a delta function, its square won't agree with the actual distribution of particles.
You should think of it as analogous to classical statistical mechanics. A single particle always has a definite position, energy, etc. But if you have a STATISTICAL ENSEMBLE of particles, then each particle in the ensemble may have a different position, energy, etc. In particular, in a canonical ensemble in thermal equilibrium, the probability that a particle has energy E is proportional to e^(-E/kT). This is an a priori probability, describing your knowledge about a particle before you determine its actual properties. Once you determine that the actual energy has some value e, you can replace e^(-E/kT) with the delta function delta(E-e).
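
The analogy is easy to play with numerically; here is a small numpy sketch (my own toy example, with an invented four-level spectrum):

Code:
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
levels = np.array([0.0, 1.0, 2.0, 3.0])   # toy discrete energy spectrum

# A priori canonical probabilities p(E) proportional to exp(-E/kT)
p = np.exp(-levels / kT)
p /= p.sum()

# Ensemble: every particle has a definite (but unknown to us) energy
sample = rng.choice(levels, size=100_000, p=p)
freq = np.array([(sample == E).mean() for E in levels])
print(np.c_[levels, p, freq])   # observed frequencies track exp(-E/kT)

# Determining one particle's actual energy e replaces the canonical
# distribution, for that particle, by the "delta function" at E = e
e = sample[0]
print("measured e =", e, "-> description collapses to the sub-ensemble E = e")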
 
  • #89
Demystifier said:
You should think of it as analogous to classical statistical mechanics. A single particle always has a definite position, energy, etc. But if you have a STATISTICAL ENSEMBLE of particles, then each particle in the ensemble may have a different position, energy, etc. In particular, in a canonical ensemble in thermal equilibrium, the probability that a particle has energy E is proportional to e^(-E/kT). This is an a priori probability, describing your knowledge about a particle before you determine its actual properties. Once you determine that the actual energy has some value e, you can replace e^(-E/kT) with the delta function delta(E-e).

I'm not convinced that this works. Initially, suppose that the particle can be in any one of 1,000,000 different boxes. You detect it in a specific box. Then you let it evolve without interference for a while. What wave function is appropriate after you detected it in the box? According to the "collapse" interpretation, you use a wave function that is zero everywhere except in the box where you detected it. But according to the interpretation that you are describing, the wave function refers not to this single particle but to an ensemble of identical particles. The fact that you discovered this particle in a particular box doesn't imply anything about the vast number of other particles in the ensemble. So the wave function, which refers to the ensemble, is not localized on the box where you discovered the particle.

So that seems to me to be a big, empirically testable difference between the interpretation you are describing and the "collapse" interpretation. After the initial detection, they are using different wave functions, one collapsed and one not.
 
  • #90
stevendaryl said:
I'm not convinced that this works. Initially, suppose that the particle can be in any one of 1,000,000 different boxes. You detect it in a specific box. Then you let it evolve without interference for a while. What wave function is appropriate after you detected it in the box? According to the "collapse" interpretation, you use a wave function that is zero everywhere except in the box where you detected it.
That's correct.

stevendaryl said:
But according to the interpretation that you are describing, the wave function refers not to this single particle but to an ensemble of identical particles.
No, that's not exactly so according to the interpretation I am describing. Instead, see
https://www.physicsforums.com/showpost.php?p=4413956&postcount=84

stevendaryl said:
The fact that you discovered this particle in a particular box doesn't imply anything about the vast number of other particles in the ensemble.
True.

stevendaryl said:
So the wave function, which refers to the ensemble, is not localized on the box where you discovered the particle.
True, but suppose that you have discovered many particles in the same box. In that case the localized wave function describes the sub-ensemble of all the particles found in that box.

stevendaryl said:
So that seems to me to be a big, empirically testable difference between the interpretation you are describing and the "collapse" interpretation.
There is a big conceptual difference, but nobody has yet found a way to make it testable. The main reason is the phenomenon of decoherence (which does not depend on the interpretation you use), which effectively washes out all testable differences.

stevendaryl said:
After the initial detection, they are using different wave functions, one collapsed and one not.
That is true. In Bohmian mechanics one distinguishes the wave function of the universe (which never collapses) from the effective wave function (which for practical purposes can be thought of as collapsing).
 
  • #91
An interesting critical post by a non-Bohmian who attended the conference "Quantum Theory Without Observers III", where the last poll linked above was conducted:

Guest post on Bohmian Mechanics, by Reinhard F. Werner
http://tjoresearchnotes.wordpress.c...st-on-bohmian-mechanics-by-reinhard-f-werner/

What is really interesting is the exchange between the poster (Reinhard F. Werner), Matt Leifer, Tim Maudlin, Matt Pusey (of PBR theorem fame) and Travis Norsen, in particular the heated exchange on the topic of Bell's theorem, its implications for "realism", and whether local non-realism is even comprehensible. This topic has always been difficult for me to understand. Some interesting quotes:

Matt Leifer
It is a fairly standard mantra that Bell's theorem is based on the conjunction of realism and locality, and so one can choose to reject one of them whilst keeping the other. As you say, Bohmians opt to throw out locality. As for the other position, i.e. locality without realism, I have a lot of trouble understanding what it is even supposed to mean... In fact, it seems to me that locality, in any sense that is even tangentially related to Bell's theorem, requires realism for its very definition. You need to be able to say that there are some things that objectively exist in the world in order to say whether changing them at one location affects them at some other. Hence, in my view, it is more accurate to say that holders of operational positions are rejecting both realism AND locality (in any sense that is relevant to Bell's theorem).

Tim Maudlin:
Nothing does, except a confusion about the principles Bell used to derive his theorem. There is no supposition of “realism” in any sense in this theorem. If you think otherwise, point it out: it is, after all, a piece of mathematics.

Travis Norsen:
“Signal locality” (or “local commutativity”) is simply not an assumption of Bell’s theorem (either/any of them) and nobody who had actually read Bell’s papers (in several of which he goes to great lengths specifically to *distinguish* “signal locality” from the locality assumption that is actually used in the theorem) could possibly harbor this misconception. Nor is “realism” (in anything but the most basic sense, denial of which would render “locality” — in any sense — completely meaningless, as Matt L already pointed out) an assumption of Bell’s theorem.
Edit: Actually Norsen in a post in this forum does provide a local and non-realist (in some sense) model:
Here's a model that's non-realistic but perfectly Bell local: each particle has no definite, pre-existing, pre-scripted value for how the measurements will come out. Think of each particle as carrying a coin, which, upon encountering an SG device, it flips -- heads it goes "up", tails it goes "down". That is certainly not "realistic" (in the sense that people are using that term here), since there is no fact of the matter, prior to the measurement, about how a given particle will respond to the measurement; the outcome is "created on the fly", so to speak. And it's also perfectly local in the sense that what particle 1 ends up doing is in no way influenced by anything going on near particle 2, or vice versa. Of course, the model doesn't make the QM/empirical predictions. But it's non-realist and local. And hence a counter-example to any claim that being Bell local requires/implies being "realist".
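
Norsen's toy model is simple enough to simulate in a few lines (my own sketch): the coin flips are local and non-realist by construction, and, exactly as he concedes, the resulting correlations are nothing like the quantum prediction (E(a,b) = -cos(theta) for the spin singlet at relative angle theta).

Code:
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Each particle flips its own fair coin at its detector, independent of
# the settings and of the distant particle: local and non-realist.
a = 2 * rng.integers(0, 2, N) - 1   # Alice outcomes, +1 / -1
b = 2 * rng.integers(0, 2, N) - 1   # Bob outcomes, independent of Alice

# The correlation is ~0 for every pair of settings, so the model cannot
# reproduce the singlet prediction E(a,b) = -cos(theta).
print("E(a,b) =", (a * b).mean())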
 
Last edited:
  • #92
bohm2 said:
[..] Actually Norsen in a post in this forum does provide a local and non-realist (in some sense) model:
That's interesting indeed, as I think that most people - and maybe even Einstein - would call a coin-flipping model "realistic".
 
  • #93
Concerning the question of what locality without realism is even supposed to mean, I also had difficulties with it, until I found my own model for such a thing:
http://arxiv.org/abs/1112.2034 [Int. J. Quantum Inf. 10 (2012) 1241016]
 