New Quantum Interpretation Poll

  • #61
vanhees71 said:
The KS theorem states that it doesn't make sense to assume that compatible observables have a certain value, if the system is not prepared in a common eigenstate of the representing operators of these observables. I don't see where this can be a problem for the MSI, which precisely states that such compatible observables can only have determined values when the system is prepared in a common eigenstate.

Could you point me to the problems Ballentine has stated about the KS theorem in context with the MSI? In his book "Quantum Mechanics, A Modern Development" I can't find any such statement, and the KS theorem is discussed there in the concluding chapter on Bell's inequality.

The KS theorem is not a problem for the MSI, provided you do not assume the observable has a definite value prior to observation. However, that is a very unnatural assumption. When the observation selects an outcome from the ensemble of similarly prepared systems, you would like to think it is revealing the value the system had prior to observation - but you can't do that.

A number of books, such as Hughes's The Structure and Interpretation of Quantum Mechanics, mention the issues the ensemble interpretation has with KS - I can dig up the page if you really want, but not now, feeling a bit tired. They claim it invalidates the interpretation - it doesn't - but the assumption you need to make to get around it is slightly unnatural, that's all: the probabilities cannot be viewed as classical probabilities, like tossing a die, unless you invoke decoherence.

I did manage to find the following online:
http://books.google.com.au/books?id...&q=ballentine ensemble kochen specker&f=false

I too have Ballentine's book and it's not in there anywhere - I read it in some early paper he wrote but can't recall which one. But since then I think he realized it wasn't really an issue if you abandon viewing the probabilities as classical ones.

Thanks
Bill
 
  • #62
@bhobba: I see. It's obviously not so easy to label one's interpretation of quantum theory with a simple name. I guess there are as many interpretations as there are physicists using QT. :biggrin:

I always understood the MSI such that, for a system prepared in some pure or mixed state by a well-defined preparation procedure, only those observables are determined for which the probability, given by Born's rule, of finding a certain possible value (which is necessarily an eigenvalue of the observable-representing operator) is 1 (and then the probability of finding any other value is necessarily 0). All other observables simply do not have a determined value. Measuring such an undetermined observable gives one of its possible values with a probability given by Born's rule. Measuring it for a single system doesn't tell us much. We can only test the hypothesis that the probabilities are given by Born's rule by preparing an ensemble (in the sense explained in my previous postings) and doing the appropriate statistical analysis. Simply put: an observable takes a certain value if and only if the system is prepared in an appropriate state, where the corresponding probability of finding this value is 1. The KS theorem tells me that it contradicts quantum theory to assume that the values of undetermined observables are merely unknown but in "reality" have certain values. In that case you would interpret quantum-theoretical probabilities as subjective probabilities in the sense of classical statistical physics, and that's incompatible with QT according to KS. As you say, this doesn't pose a problem for the MSI. As I understand the MSI, on the contrary, it is most compatible with the KS theorem!
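The "determined iff the Born probability is 1" criterion can be illustrated in a few lines. A minimal numpy sketch (the observable and states are arbitrary illustrative choices, not from the discussion above):

```python
import numpy as np

# Pauli-z as the observable; eigenvalues +1 and -1.
Sz = np.diag([1.0, -1.0])
eigvals, eigvecs = np.linalg.eigh(Sz)

def born_probabilities(state, eigvecs):
    """Born rule: P(a_i) = |<a_i|psi>|^2 for each eigenvector a_i."""
    return np.abs(eigvecs.conj().T @ state) ** 2

# An eigenstate of Sz: the observable is determined (one probability is 1).
up = np.array([1.0, 0.0])
print(born_probabilities(up, eigvecs))

# A superposition: no determined value, only Born-rule probabilities.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
print(born_probabilities(plus, eigvecs))
```

For the eigenstate, one outcome carries probability 1 and the other 0; for the superposition, both outcomes come out at 0.5, so no value is determined.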

@bohm2: This quote by Fuchs is indeed interesting. It's easily stated that interpretation should come first, and it's the very first problem you run into if you have to teach an introductory quantum-mechanics lecture. I have no solution for this problem: I think one has to start with a heuristic introduction using wave mechanics (but please not with photons, because these are among the most difficult cases of all; better to start with massive particles and nonrelativistic QT, but that's another story). This should only be short (at most 2-3 hours), and then you come immediately to the representation-independent formulation in terms of the abstract Hilbert space (which is mostly Dirac's "transformation theory", one of the three historically first formulations of QT besides Heisenberg-Born-Jordan matrix mechanics and de Broglie-Schrödinger wave mechanics). Only when you have established this very abstract way of thinking with the help of some examples (the "quantum kinematics", so to say) can you come to a presentation of "interpretation", i.e., you can define what a quantum state really means, which of course depends on your point of view. I use the MSI. So far I've only given one advanced lecture ("Quantum Mechanics II") on the subject, and there I had no problems (at least if I believe the quite positive student evaluations at the end) with using the MSI and the point of view that a quantum state in the real world means an equivalence class of preparation procedures, represented by a statistical operator whose only meaning is to provide a probabilistic description of our knowledge about the system, given its preparation. It gives only probabilities for the outcomes of measurements of observables, and observables that are undetermined do not have any certain value (see above). Of course, the real challenge is teaching the introductory lecture, and that I have never had to do yet, so I cannot say how I would present it.

Another question I always pose, which nobody has yet answered in a way satisfactory to me, is what this Bayesian interpretation of probabilities means in practice. If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of occurrence of the possible outcomes of the observables under consideration?

You have the same problem with classical random experiments such as throwing dice. Knowing nothing about the die, according to the Shannon-Jaynes principle I assign to it the distribution of maximal entropy (the "principle of least prejudice"), i.e., an equal probability of 1/6 for each outcome (the occurrence of the numbers 1 to 6 when throwing the die). These are the "prior probabilities", and now I have to check them. How else can I do this than to throw the die many times and count the relative frequencies of occurrence of the numbers 1-6? Only then can I test the hypothesis about this specific die to a certain statistical accuracy and, if I find significant deviations, update my probability function. I don't see what all this Bayesian mumbling about "different interpretations of probability" beyond the frequentist interpretation is about, if I can never check the probabilities other than in the frequentist view!
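The operational check described above can be sketched directly. A minimal Python simulation (the sample size and seed are arbitrary choices): assign the maximum-entropy prior of 1/6 per face, then compare it against relative frequencies from many throws.

```python
import random
from collections import Counter

random.seed(0)

# Maximum-entropy prior for a die about which nothing is known:
# 1/6 per face ("principle of least prejudice").
prior = {face: 1 / 6 for face in range(1, 7)}

# The operational check: throw the die many times and compare
# relative frequencies against the prior.
n = 60_000
counts = Counter(random.randint(1, 6) for _ in range(n))
freqs = {face: counts[face] / n for face in range(1, 7)}

for face in range(1, 7):
    print(face, round(prior[face], 4), round(freqs[face], 4))
```

With a fair (simulated) die, every relative frequency lands within statistical fluctuations of 1/6; a significant deviation would be the cue to update the probability assignment.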
 
  • #63
vanhees71 said:
I don't see what all this Bayesian mumbling about "different interpretations of probability" beyond the frequentist interpretation is about, if I can never check the probabilities other than in the frequentist view!
Yes, if I understand you, I believe Timpson makes a similar criticism of Bayesianism here:
We just do look at data and we just do update our probabilities in light of it; and it’s just a brute fact that those who do so do better in the world; and those who don’t, don’t. Those poor souls die out. But this move only invites restatement of the challenge: why do those who observe and update do better? To maintain that there is no answer to this question, that it is just a brute fact, is to concede the point. There is an explanatory gap. By contrast, if one maintains that the point of gathering data and updating is to track objective features of the world, to bring one’s judgements about what might be expected to happen into alignment with the extent to which facts actually do favour the outcomes in question, then the gap is closed. We can see in this case how someone who deploys the means will do better in achieving the ends: in coping with the world. This seems strong evidence in favour of some sort of objective view of probabilities and against a purely subjective view, hence against the quantum Bayesian...

The form of the argument, rather, is that there exists a deep puzzle if the quantum Bayesian is right: it will forever remain mysterious why gathering data and updating according to the rules should help us get on in life. This mystery is dispelled if one allows that subjective probabilities should track objective features of the world. The existence of the means/ends explanatory gap is a significant theoretical cost to bear if one is to stick with purely subjective probabilities. This cost is one which many may not be willing to bear; and reasonably so, it seems.
Quantum Bayesianism: A Study
http://arxiv.org/pdf/0804.2047v1.pdf
 
  • #64
vanhees71 said:
Another question I always pose, which nobody has yet answered in a way satisfactory to me, is what this Bayesian interpretation of probabilities means in practice. If I have only incomplete information (be it subjective as in classical statistics or irreducible as in quantum theory) and assign probabilities somehow, how can I check this on the real system other than by preparing it in a well-defined, reproducible way and checking the relative frequencies of occurrence of the possible outcomes of the observables under consideration?

In my opinion, that couldn't be more wrong. As I said in another post, I think it mixes up the issue of good scientific practice with what science IS. I agree that reproducibility is extremely important for scientists, but it's not an end in itself, and it's not sufficient. Inevitably, there will be a time when it is necessary to make a judgment about the likelihood of something that has never happened before. It's the first time that a particular accelerator has been turned on, it's the first time that anyone has tried riding in a new type of vehicle, it's the first time that anyone has performed some surgical procedure. Or in pure science, it's the first time a particular alignment of celestial bodies has occurred. In these cases, we expect that science will work the same in one-off situations as it does in controlled, repeatable situations.

Even if you want to distinguish pure science from applied science, you still have the problem of what counts as a trial, in order to make sense of a frequentist interpretation. The state of the world never repeats. Of course, you can make a judgment that the aspects that vary from one run of an experiment to another are unlikely to be relevant, but what notion of "unlikely" are you using here? It can't be a frequentist notion of "unlikely".

A non-frequentist notion of likelihood is needed to even apply a frequentist notion of likelihood.
 
  • #65
bhobba said:
The KS theorem is not a problem for the MSI providing you do not assume it has the value prior to observation. However that is a very unnatural assumption. When the observation selects an outcome from the ensemble of similarly prepared systems with that outcome associated with it you would like to think it is revealing the value it has prior to observation - but you can't do that.

It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations. Then the weird aspects of quantum probability (namely, interference terms) go into deciding the probability for a history, but then ordinary probabilistic reasoning would apply in doing relative probabilities: Out of all histories in which Alice measures spin-up, fraction f of the time, Bob measures spin-down.

What's unsatisfying to me about this approach is that the histories can't be microscopic histories (describing what happens to individual particles) because of incompatible observables. Instead, they would have to be macroscopic histories, with some kind of coarse-graining. Or alternatively, we could just talk about probability distributions of permanent records, maybe.
 
  • #66
stevendaryl said:
It seems that a kind of ensemble approach to interpreting quantum probabilities is to consider an ensemble, not of states of a system, but of entire histories of observations.

Yea - looks like decoherent histories to me.

I rather like that interpretation, and it would be my favorite except for one thing - it looks suspiciously like defining your way out of trouble. We can't perceive more than one outcome at a time - well, let's impose that as a consistency condition - voilà - you have consistent histories. Decoherence automatically enforces this consistency condition, so you have decoherent histories. To me it's ultimately unsatisfying - but so is my interpretation when examined carefully enough, though mine is a bit more overt in stating its assumption, i.e., I state explicitly that the act of observation chooses an outcome that is already present - which you can do because decoherence transforms a pure state into an improper mixed state. Basically, I assume the improper mixed state is a proper one, and without further ado the measurement problem is solved - but how does an observation accomplish this feat? Sorry - no answer - nature is just like that.

Thanks
Bill
 
  • #67
Demystifier said:
What would you say about this interpretation
http://arxiv.org/abs/1112.2034 [to appear in Int. J. Quantum Inf.]
which interpolates between local and realistic interpretations, and in a sense is both local and realistic (or neither).
My answer is a bit late (and maybe this is a bit off topic) but anyway. I don't have the time to read your paper, so just two quick questions about this interpretation:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
 
  • #68
Anybody have a feel for the "support" of Aharonov's Time Symmetric Quantum Mechanics (TSQM) formulation these days? (I'm guessing it's one of the ones that would fall in the "Other" category of the poll.)

It's my understanding it offers very elegant and mathematically simple explanations for some new-ish experiments, where regular QM involves complicated interference effects and rather intractable math. Although, one just has to be willing to let go of his/her notions of linear time ;-)
 
  • #69
kith said:
1) Does all physics happen inside the observer?
2) Is the observer free to choose angle settings, or are his decisions predetermined?
1) Yes (provided that only particles, not wave functions, count as "physics").
2) His decisions are predetermined, but not in a superdeterministic way. In other words, even though free will is only an illusion, this is not how non-locality is avoided.
 
  • #70
Sounds like a nice gedanken interpretation. Or do you think many people will stick to it in the future? ;-)

Is it formally compatible with all assumptions of Bell's theorem? If yes, how is the experimental violation explained? If no, what assumptions are violated?
 
  • #71
kith said:
Sounds like a nice gedanken interpretation. Or do you think many people will stick to it in the future? ;-)
Probably not, but who knows?

kith said:
Is it formally compatible with all assumptions of Bell's theorem? If yes, how is the experimental violation explained? If no, what assumptions are violated?
The Bell theorem assumes that there is reality (hidden variables) associated with entangled particles and/or separated detectors of two particles. This assumption is not satisfied here. Reality is associated only with observers whose role is to act as local coincidence counters. (See Fig. 1 in the paper.)
 
  • #72
The poll paper was also discussed in Nature:
"Perhaps the fact that quantum theory does its job so well and yet stubbornly refuses to answer our deeper questions contains a lesson in itself," says Schlosshauer. Possibly the most revealing answer was that 48% believed that there will still be conferences on the foundations of quantum theory in 50 years' time.
Experts still split about what quantum theory means
http://www.nature.com/news/experts-still-split-about-what-quantum-theory-means-1.12198
 
  • #73
For completion, this poll was also recently discussed by cosmologist Sean Carroll in his blog and the recent video:
I'll go out on a limb to suggest that the results of this poll should be very embarrassing to physicists. Not, I hasten to add, because Copenhagen came in first, although that's also a perspective I might want to defend (I think Copenhagen is completely ill-defined, and shouldn't be the favorite anything of any thoughtful person). The embarrassing thing is that we don't have agreement. Think about it: quantum mechanics has been around since the 1920s at least, in a fairly settled form. John von Neumann laid out the mathematical structure in 1932. Subsequently, quantum mechanics has become the most important and best-tested part of modern physics. Without it, nothing makes sense. Every student who gets a degree in physics is supposed to learn QM above all else. There are a variety of experimental probes, all of which confirm the theory to spectacular precision. And yet, we don't understand it. Embarrassing. To all of us, as a field (not excepting myself).
The Most Embarrassing Graph in Modern Physics
http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

QM: An Embarrassment
 
  • #74
At the edge of knowledge in any scientific field, there is "embarrassment" such as this. What happened before the big bang? Cosmologists don't agree on that either. Was the evolution of intelligence in the universe likely? Experts don't agree about that.
 
  • #75
bohm2 said:
For completion, this poll was also recently discussed by cosmologist Sean Carroll in his blog and the recent video:

The Most Embarrassing Graph in Modern Physics
http://www.preposterousuniverse.com/blog/2013/01/17/the-most-embarrassing-graph-in-modern-physics/

QM:An embarassment


I cannot really say I agree with him here. I don't think it's embarrassing at all. In fact, the opposite: I think it means we have a very difficult and interesting question that deserves to be mentioned more often. I also think we should not go around calling it embarrassing, because it sends bad signals to the general population. All knowledge starts out unknown, and contrary to some other interest groups in our society, we as scientists should freely admit it whenever we don't know something. It's not something strange; it's natural.
 
  • #76
Zarqon said:
I cannot really say I agree with him here. I don't think it's embarrassing at all. In fact, the opposite: I think it means we have a very difficult and interesting question that deserves to be mentioned more often. I also think we should not go around calling it embarrassing, because it sends bad signals to the general population. All knowledge starts out unknown, and contrary to some other interest groups in our society, we as scientists should freely admit it whenever we don't know something. It's not something strange; it's natural.
I absolutely agree! :approve:
 
  • #77
Matt Leifer has made an excellent post in the comment section of the preposterousuniverse blog. According to him, the scientific relevance of quantum foundations is not to find the "right" interpretation of the already existing theory (because that's metaphysical), but the fact that different interpretations suggest different ways to solve the known problems of physics, which probably require a theory that goes beyond QM.
 
  • #78
Another survey came out today with very different results:
This is a somehow surprising result. The elsewise sovereign Copenhagen interpretation loses ground against the de Broglie-Bohm interpretation. This is partly the influence of decided minorities in small populations, because the participants of the conference were all but representative of the whole physicists' community. Not surprisingly, the outcome is well different from the observed distribution by Tegmark or Schlosshauer et al.
Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf
 
  • #79
bohm2 said:
Another survey came out today with very different results:

Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf


Interesting, but such a small sample.
What I cannot for the life of me understand is why someone hasn't just sent an email with the questionnaire to, say, 200 quantum physicists. It would give a much more interesting snapshot.

Once again, the answers in this poll are so ****ing weird and inconsistent that it's pretty clear the respondents aren't even sure what they think about the issue.

I think what is needed is a BIG poll, these tiny conference polls are literally not even a drop in the ocean. Sure they are slightly interesting, but that's it.
 
  • #80
bohm2 said:
Another Survey of Foundational Attitudes Towards Quantum Mechanics
http://lanl.arxiv.org/pdf/1303.2719.pdf
According to this survey, the top 3 interpretations are:
1. I have no preferred interpretation - 44%
2. Shut up and calculate - 17 %
3. De-Broglie Bohm - 17 %

Given that 1. and 2. are not really specific interpretations at all, it can be said that De-Broglie Bohm, at this conference at least, is the most popular specific interpretation.
 
  • #81
Only 12 physicists answered, most of them master's students or early Ph.D. students. Maybe later on they will have a preferred interpretation.
 
  • #82
stevendaryl said:
There is an assumption (as someone has pointed out) made by the Bohm interpretation, which is that the initial distribution of particle positions is made to agree with the square of the Schrödinger wave function, but I don't see how, in a realistic model, that makes sense. If you only have a single electron, for instance, what sense does it make that it has a "distribution"?

I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)
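As a caveat to the claim above: demonstrating relaxation from arbitrary non-equilibrium initial conditions takes serious numerical work (Valentini-style coarse-grained simulations). What a few lines can show is the weaker equivariance property underlying it: if the initial positions are |ψ|²-distributed, the Bohmian trajectories keep them |ψ|²-distributed. A toy sketch for a free Gaussian packet (units ħ = m = 1; for this particular packet the guidance equation integrates exactly to a pure dilation, so no numerical integration is needed):

```python
import numpy as np

rng = np.random.default_rng(1)

# Free Gaussian wave packet, units hbar = m = 1.
# |psi(x,0)|^2 is a Gaussian of standard deviation s0; at time t
# its width is s(t) = s0 * sqrt(1 + (t / (2*s0**2))**2).
s0 = 1.0

def width(t):
    return s0 * np.sqrt(1.0 + (t / (2.0 * s0**2)) ** 2)

# For this packet the Bohmian guidance equation integrates exactly:
# every trajectory dilates with the packet, x(t) = x(0) * s(t) / s0.
x0 = rng.normal(0.0, s0, size=200_000)  # quantum-equilibrium initial sample
t = 3.0
xt = x0 * width(t) / s0

# Equivariance: the evolved sample still matches |psi(x,t)|^2.
print(np.std(xt), width(t))
```

The empirical spread of the evolved trajectories matches the analytic width of |ψ(x,t)|², i.e., an equilibrium ensemble stays in equilibrium under the Bohmian dynamics.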
 
  • #83
aphirst said:
I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)
Yes, that needs to be repeated over and over again.

Just as not so long ago it was necessary to repeat over and over again that the Bell theorem does not exclude hidden variables in general (including the Bohmian ones), but only local hidden variables. Fortunately, it seems that a significant majority of physicists now appreciates that.
 
  • #84
Just for completion, another poll was done with vastly different results, with the authors' insightful (in my opinion) comments highlighted:
Here we report the results of giving this same survey to the attendees at another recent quantum foundations conference. While it is rather difficult to conclude anything of scientific significance from the poll, the results do strongly suggest several interesting cultural facts – for example, that there exist, within the broad field of “quantum foundations”, sub-communities with quite different views, and that (relatedly) there is probably even significantly more controversy about several fundamental issues than the already-significant amount revealed in the earlier poll...

In the SKZ results, b. Copenhagen (42%) and e. Information-based/information-theoretical (24%) received the highest response rates, while c. de Broglie - Bohm received zero votes of endorsement. SKZ write explicitly that “the fact that de Broglie - Bohm interpretation did not receive any votes may simply be an artifact of the particular set of participants we polled.” Our results strongly confirm this suspicion. At the Bielefeld conference, choice c. de Broglie - Bohm garnered far and away the majority of the votes (63%) while b. Copenhagen and e. information-based / information-theoretical received a paltry 4% and 5% respectively. It is also interesting to compare results on this question to the older (1997) survey conducted by Max Tegmark. Tegmark, finding that 17% of his respondents endorsed a many-worlds / Everett interpretation, announced this as a “rather striking shift in opinion compared to the old days when the Copenhagen interpretation reigned supreme.” Our results clearly suggest, though, that any such interpretation of these sorts of poll results – as indicating a meaningful temporal shift in attitudes – should be taken with a rather large grain of salt. It is almost certainly not the case, for example, that while a “striking shift” toward many-worlds views occurred in the years prior to 1997, this shift then stalled out between 1997 and 2011 (the response rate endorsing Everett being about the same in the Tegmark and SKZ polls), and then suddenly collapsed (with the majority of quantum foundations workers now embracing the de Broglie - Bohm pilot-wave theory).

Instead, the obviously more plausible interpretation of the data is that each poll was given to a very different and highly non-representative group. The snapshots reveal much more about the processes by which it was decided whom should be invited to a given conference, than they reveal about trends in the thinking of the community as a whole. We note finally that insofar as our poll got more than twice as many respondents as the SKZ poll (which those authors had described as “the most comprehensive poll of quantum-foundational views ever conducted”) it is now apparently the case that the de Broglie - Bohm pilot-wave theory is, by an incredibly large margin, the most endorsed interpretation in the most comprehensive poll of quantum foundational views ever conducted. For the reasons we have just been explaining, this has almost no meaning, significance, or implications, beyond the fact that lots of “Bohmians” were invited to the Bielefeld conference. But it does demonstrate rather strikingly that the earlier conferences (where polls were conducted by Tegmark and SKZ) somehow failed to involve a rather large contingent of the broader foundations community. And similarly, the Bielefeld conference somehow failed to involve the large Everett-supporting contingent of the broader foundations community.
Yet Another Snapshot of Foundational Attitudes Toward Quantum Mechanics
http://lanl.arxiv.org/pdf/1306.4646.pdf
 
  • #85
Even though, as before, the sample is not representative, it is always interesting to see the top 3 list:
1. De Broglie-Bohm - 63%
2. Objective Collapse - 16 %
3. I have no preferred interpretation - 11 %
 
  • #86
aphirst said:
I know it's been a month since this thread was last bumped, but I really can't read this and let it slide.

You don't have to simply assume that the initial distribution goes as |ψ|². Running dynamics simulations with unrelated initial conditions results in the distribution dropping out. Quantum equilibrium isn't a "postulate" in the same way that other interpretations relate |ψ|² to "probabilities" (whatever they do or don't say they're probabilities of...)

I don't understand that. Let's take the case of a single particle. In the Bohm interpretation, it always has a definite location. So unless the wave function is a delta function, its square won't agree with the actual distribution of particles.
 
  • #87
Thanks bohm2, this is quite interesting. I was interested in the exact topics of the conferences where the polls were taken. Here they are:

Tegmark: "Fundamental Problems in Quantum Theory" 1997
Schlosshauer et al.: "Quantum Physics and the Nature of Reality" 2011
Norsen & Nelson: "Quantum Theory Without Observers" 2013

Arguably, the topic of the last conference is narrower. Researchers who favor an observer-dependent interpretation may have been discouraged from attending.
 
  • #88
stevendaryl said:
I don't understand that. Let's take the case of a single particle. In the Bohm interpretation, it always has a definite location. So unless the wave function is a delta function, its square won't agree with the actual distribution of particles.
You should think of it as analogous to classical statistical mechanics. A single particle always has a definite position, energy, etc. But if you have a STATISTICAL ENSEMBLE of particles, then each particle in the ensemble may have a different position, energy, etc. In particular, in a canonical ensemble in thermal equilibrium, the probability that the particle has energy E is proportional to e^(-E/kT). This is an a priori probability, describing your knowledge about a particle before you determine its actual properties. Once you determine that the actual energy has some value e, you can replace e^(-E/kT) with the delta function δ(E-e).
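The classical analogy above can be made concrete with a discrete spectrum. A minimal sketch (the energy levels are an illustrative choice, and kT = 1 for simplicity): the Boltzmann weights are the a priori probabilities, and determining the actual energy replaces them with a delta on the measured level.

```python
import numpy as np

# Discrete energy levels; canonical (Boltzmann) prior with kT = 1.
# These are a priori probabilities describing knowledge about the
# particle BEFORE its actual energy is determined.
E = np.array([0.0, 1.0, 2.0, 3.0])
p_prior = np.exp(-E)          # Boltzmann weights e^(-E/kT)
p_prior /= p_prior.sum()      # normalize to a probability distribution

# A measurement finds the actual energy e = 1.0: the description
# becomes a delta on that level -- a pure information update.
e = 1.0
p_post = np.where(E == e, 1.0, 0.0)

print(p_prior)
print(p_post)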
 
  • #89
Demystifier said:
You should think of it as analogous to classical statistical mechanics. A single particle always has a definite position, energy, etc. But if you have a STATISTICAL ENSEMBLE of particles, then each particle in the ensemble may have a different position, energy, etc. In particular, in a canonical ensemble in a thermal equilibrium, the probability that the particle has energy E is proportional to e^(-E/kT). This probability is an a priori probability, describing your knowledge about a particle before you determine its actual properties. Once you determine that the actual energy has some value e, then you can replace e^(-E/kT) with the delta function delta(E-e).

I'm not convinced that this works. Initially, suppose that the particle can be in any one of 1,000,000 different boxes. You detect it in a specific box. Then you let it evolve without interference for a while. What wave function is appropriate after you detected it in the box? According to the "collapse" interpretation, you use a wave function that is zero everywhere except in the box where you detected it. But according to the interpretation that you are describing, the wave function refers not to this single particle but to an ensemble of identical particles. The fact that you discovered this particle in a particular box doesn't imply anything about the vast number of other particles in the ensemble. So the wave function, which refers to the ensemble, is not localized on the box where you discovered the particle.

So that seems to me to be a big, empirically testable, difference between the interpretation you are describing and the "collapse" interpretation. After the initial detection, they are using different wave functions, one collapsed and one not.
 
  • #90
stevendaryl said:
I'm not convinced that this works. Initially, suppose that the particle can be in any one of 1,000,000 different boxes. You detect it in a specific box. Then you let it evolve without interference for a while. What wave function is appropriate after you detected it in the box? According to the "collapse" interpretation, you use a wave function that is zero everywhere except in the box where you detected it.
That's correct.

stevendaryl said:
But according to the interpretation that you are describing, the wave function refers not to this single particle but to an ensemble of identical particles.
No, that's not exactly so according to the interpretation I am describing. Instead, see
https://www.physicsforums.com/showpost.php?p=4413956&postcount=84

stevendaryl said:
The fact that you discovered this particle in a particular box doesn't imply anything about the vast number of other particles in the ensemble.
True.

stevendaryl said:
So the wave function, which refers to the ensemble, is not localized on the box where you discovered the particle.
True, but suppose that you have discovered many particles in the same box. In that case the localized wave function describes a sub-ensemble: all the particles found in that box.
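The sub-ensemble reading can be sketched numerically (uniform box probabilities are an illustrative simplification, not the general |ψ|² case): conditioning the full ensemble on the detected box yields exactly the localized distribution that the collapsed wave function describes.

```python
import numpy as np

rng = np.random.default_rng(42)

# Ensemble picture: many identically prepared runs, in each of which
# the particle has a definite box, here drawn uniformly over N boxes
# (an illustrative assumption for |psi|^2).
n_boxes = 1000
positions = rng.integers(0, n_boxes, size=100_000)

# "Collapse" as conditioning: keep only the runs where the particle
# was found in box 7.  This sub-ensemble is localized in box 7,
# which is what the collapsed (localized) wave function describes.
sub = positions[positions == 7]
print(len(sub), set(sub))
```

Detecting the particle in a box says nothing about the other members of the full ensemble, but the conditioned sub-ensemble is exactly localized, reconciling the two descriptions.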

stevendaryl said:
So that seems to me to be a big, empirically testable, difference between the interpretation you are describing and the "collapse" interpretation.
There is a big conceptual difference, but nobody has yet found a way to make it testable. The main reason is the phenomenon of decoherence (which does not depend on the interpretation you use), which effectively washes out all testable differences.

stevendaryl said:
After the initial detection, they are using different wave functions, one collapsed and one not.
That is true. In Bohmian mechanics one distinguishes the wave function of the universe (which never collapses) from the effective wave function (which for practical purposes can be thought of as collapsing).
 
