Conceptual underpinning(s) of the QM projection postulate

In summary: the thread discusses whether the projection postulate is fundamental or a simplified description of decoherence plus "something else" (Bohmian trajectories and extra many-worlds axioms are suggested), and whether a collapse needs to be postulated at all.
  • #1
ThomasT
The title says it. I would like to see what knowledgeable people at PF have to say about the QM projection postulate -- primarily understandings of the conceptual reasoning underlying it. But anything anyone has to say about it is welcomed, including opinions that it shouldn't be a part of the basic axiomatic formulation of QM.
 
  • #2
The projection postulate is a simplified description of a continuous physical process caused by decoherence and something else, where the "something else" is something we don't know yet, though we have a few ideas about what it could be.
 
  • #3
Demystifier said:
The projection postulate is a simplified description of a continuous physical process caused by decoherence and something else, where the "something else" is something we don't know yet, though we have a few ideas about what it could be.
Such as ... ?
 
  • #4
Demystifier said:
The projection postulate is a simplified description of a continuous physical process caused by decoherence and something else, where the "something else" is something we don't know yet, though we have a few ideas about what it could be.
Since you're a Bohmian, is the "something else" you're referring to the ignorance of which of the Everett many worlds is the "actual world"?
 
  • #5
Well, it's basically the wavefunction collapse issue. Decoherence is the usual explanation these days. I believe it resolves it, and in a very elegant way, but opinions vary.

The conceptual reasoning behind it is that it's usually thought to be the only reasonable way you can define probabilities in a theory based on the principle of superposition, e.g. see Gleason's theorem. The reason you need probabilities is that deterministic models turn out to be problematic, e.g. see the Kochen-Specker theorem.
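(For reference, Gleason's theorem says that in a Hilbert space of dimension three or more, every probability measure on the lattice of projection operators must take the form mu(P) = Tr(rho P) for some density operator rho - i.e. the Born rule is essentially the only way to assign probabilities in such a theory.)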

Thanks
Bill
 
Last edited:
  • #6
ThomasT said:
Such as ... ?
lugita15 said:
Since you're a Bohmian, is the "something else" you're referring to the ignorance of which of the Everett many worlds is the "actual world"?
Bohmian particle trajectories are certainly one possibility.

Another possibility is the additional axioms needed for the many-worlds interpretation to work. (For example, an axiom from which the Born rule can be explained.)
 
  • #7
bhobba said:
Well, it's basically the wavefunction collapse issue. Decoherence is the usual explanation these days.
Most papers on decoherence these days admit that decoherence is important but not sufficient to explain the collapse.
 
  • #8
Well, I'm even more "heretic" (against the Copenhagen doctrine) and ask: what is the collapse assumption good for? It's not needed within the Minimal Interpretation, and I don't see why I should assume more than is necessary to apply the mathematical formalism to "reality" (which is of course simply what's observed in nature or in experiments in the lab).

The only interpretation I need to make the connection between the QT formalism and real-world observations is Born's probabilistic interpretation of states, and there is no need for a collapse! So, I don't need an "explanation" for a collapse.

This is true in even stronger form for any additional elements in the formalism, as in the Bohm-de Broglie pilot-wave interpretation. There I don't need to bother with unobservable trajectories, let alone other esoterica like the "many-worlds" or "Princeton" interpretation.
 
  • #9
vanhees71 said:
The only interpretation I need to make the connection between the QT formalism and real-world observations is Born's probabilistic interpretation of states, and there is no need for a collapse!
This is essentially the Ballentine statistical ensemble interpretation. It is indeed consistent, but still has some unappealing features which make it not universally accepted.
 
  • #10
All we need to make predictions about the results of experiments is the version of the Born rule that says: for all observables A and all pure states s, the average result in a long sequence of measurements of A on systems that are all in the state |s> will be <s|A|s>.

But to test those predictions, we also need to know what state to associate with the preparation procedure we're using. We need a rule that assigns states to preparation procedures. I'm not aware of anything other than the projection postulate that can do that.

Edit: I just need to make sure that we're talking about the same thing. What I just said is based on the assumption that the "projection postulate" is the rule that says that if we measure A and get the result a, and if the system wasn't destroyed by the measurement, then immediately after the measurement, it will be in an eigenstate of A with eigenvalue a. Let me know if you guys were actually talking about something else.
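A minimal numerical sketch of those two rules (an illustrative two-level system; the observable and state below are arbitrary choices, not anything specific from this thread):

```python
import numpy as np

# Illustrative observable: a Hermitian matrix (here the Pauli-Z operator).
A = np.array([[1, 0],
              [0, -1]], dtype=complex)

# An arbitrary normalized pure state |s>.
s = np.array([np.sqrt(0.8), np.sqrt(0.2)], dtype=complex)

# Born rule: the long-run average of measurements of A is <s|A|s>.
print(np.real(np.vdot(s, A @ s)))  # 0.8*(+1) + 0.2*(-1) = 0.6

# Projection postulate (nondegenerate case): measuring A and obtaining
# eigenvalue a leaves the system in the corresponding eigenstate of A.
eigvals, eigvecs = np.linalg.eigh(A)
probs = np.abs(eigvecs.conj().T @ s) ** 2    # outcome probabilities
k = np.random.choice(len(eigvals), p=probs)  # sample one measurement outcome
print(eigvals[k], eigvecs[:, k])             # result and post-measurement state
```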

Edit 2: After reading Demystifier's post #2 again, I started thinking: "Is decoherence another thing that can associate mathematical states with preparation procedures?" It seems to me that the answer is no. Decoherence is a prediction of the theory, but we don't even have a theory without the projection postulate.
 
Last edited:
  • #11
Demystifier said:
Most papers on decoherence these days admit that decoherence is important but not sufficient to explain the collapse.

That's because the wrong question was asked. For all practical purposes it does explain it - there is no way to observationally tell the difference between a mixed state in which each component is an eigenstate and an actual wavefunction collapse.

Out of curiosity, mind giving me some of those papers? It's not the view of the stuff I read, such as Schlosshauer's book on decoherence.

Thanks
Bill
 
  • #12
Demystifier said:
This is essentially the Ballentine statistical ensemble interpretation. It is indeed consistent, but still has some unappealing features which make it not universally accepted.

All interpretations suck in some way - it's a matter of personal preference which you think sucks the least. I think the ensemble interpretation combined with decoherence does that - not that Ballentine is a big fan of decoherence.

Thanks
Bill
 
  • #13
Fredrik said:
Decoherence is a prediction of the theory, but we don't even have a theory without the projection postulate.

Indeed. But it does allow us to skirt problematic issues. For example, the ensemble interpretation has problems accepting the actual reality of the ensembles, because the Kochen-Specker theorem means the system cannot be in an actual state that observation merely selects - but with decoherence it can.

Thanks
Bill
 
  • #14
bhobba said:
All interpretations suck in some way - it's a matter of personal preference which you think sucks the least. I think the ensemble interpretation combined with decoherence does that - not that Ballentine is a big fan of decoherence.
Fair enough! :approve:
 
  • #15
Fredrik said:
Edit 2: After reading Demystifier's post #2 again, I started thinking: "Is decoherence another thing that can associate mathematical states with preparation procedures?" It seems to me that the answer is no. Decoherence is a prediction of the theory, but we don't even have a theory without the projection postulate.
I disagree. Even without the projection postulate, even without the Born rule, even without the probabilistic interpretation, we can calculate "reduced density matrices" as abstract mathematical objects governed by the Schrodinger equation. This is sufficient to predict decoherence, even if we have no idea what it means physically.

Of course, if you do have a probabilistic interpretation, then this further motivates you to calculate the reduced density matrix, because then you know what it means. But the point is - you can calculate it even without knowing about the probabilistic interpretation.
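To make that concrete, here is a toy illustration (my own minimal example: one system qubit, a one-qubit "environment", and a CNOT-type interaction). The off-diagonal terms of the reduced density matrix vanish using nothing but unitary Schrodinger-picture evolution and a partial trace - no probabilistic postulate is invoked anywhere:

```python
import numpy as np

# System qubit in an equal superposition; environment qubit starts in |0>.
system = np.array([1, 1], dtype=complex) / np.sqrt(2)
env = np.array([1, 0], dtype=complex)
psi = np.kron(system, env)  # joint state |system> x |env>

# CNOT-type unitary: the environment "records" the system's basis state.
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=complex)
psi = U @ psi  # now an entangled state, still evolving purely unitarily

# Reduced density matrix of the system: partial trace over the environment.
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)
rho_sys = rho.trace(axis1=1, axis2=3)
print(np.round(rho_sys.real, 3))
# [[0.5 0. ]
#  [0.  0.5]]  <- interference (off-diagonal) terms are gone: decoherence
```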

Besides, the Bohmian interpretation is an example demonstrating that we can even have a full physical interpretation without the projection postulate.
 
  • #16
I'm a little confused about what we disagree about. My main point was that if we take the usual assumptions that define QM, and simply drop the projection postulate from that list, what we have left isn't a theory. I suspect that you will agree with that.

I certainly didn't mean to suggest that the theory is guaranteed to remain broken even if we replace the projection postulate with something else. I don't know what that would be in Bohmian mechanics, but it has to be something. Every theory needs a rule that associates preparation procedures with mathematical things that can represent them.

The only thing I said (or rather inadvertently suggested, by my precise choice of words) that your argument seems to refute is that we need the full theory (=mathematics+correspondence rules) to find decoherence. I agree of course that decoherence is present in the purely mathematical part of the theory.
 
  • #17
In my opinion the only reasonable interpretation is the "Mind makes collapse" interpretation.
I explain this starting from an introduction to metaphysics, noting that purely mathematical laws of physics can only describe a mathematical universe, with a merely mathematical existence.
There needs to be something beyond mathematical laws, both to provide for existence beyond mere mathematical existence and to provide for non-algorithmicity.
The mind's behavior cannot be purely algorithmic, for at least two reasons:

One is that, again, purely algorithmic behavior cannot account for the notion that we "really exist" (that we have an authentic feeling of our existence, that morality makes real sense). Otherwise our existence would be purely mathematical, making any concept of probability devoid of sense (since all possibilities equally exist mathematically, none of them can meaningfully be said to be more probable than another), in contradiction with the fact that such probability laws exist and have been verified.

The other is that the reasoning power of the mathematician can know things, such as the consistency of ZF, that cannot be proven in any formal system that can reasonably be assumed to be his. I am planning to soon add this point to my site on set theory and the foundations of mathematics - it will come after the many other important things already there, which you will surely not finish reading before I add it.

I observe that once one starts from these ideas and formalizes them in the case of Markov processes, the principles of quantum physics appear much more natural and intuitive, and less paradoxical, than otherwise; the thermodynamic time orientation, which otherwise comes from apparently nowhere, is also quite well explained in this view.
 
  • #18
spoirier said:
noting that purely mathematical laws of physics can only describe a mathematical universe, with a merely mathematical existence. There needs to be something beyond mathematical laws, both to provide for existence beyond mere mathematical existence and to provide for non-algorithmicity.

Here we go again with the "it's only math" stuff. What don't you get about system states being mapped to stuff out there, like a particle's position, momentum, spin, or whatever, and because of that not being purely mathematics? Do you believe the same thing about the points and lines of Euclidean geometry? That's just math as well under your view. But suggest that to a surveyor and you are likely to get some weird looks.

That's not to denigrate the mind-makes-collapse view - von Neumann thought so (so does Roger Penrose, who goes a step further and believes he has found processes in the brain that do it) - it's perfectly valid, just a bit too overdetermined for my tastes and not at all necessary, IMHO. However, all interpretations suck in some way and it's a matter of personal preference which one sucks the least.

Here is a Link about Penrose's views:
http://www.quantumconsciousness.org/penrose-hameroff/orchor.html

BTW, Markov processes cannot be used as a model for QM - fundamental theorems show they always converge to a single state or a cycle.

That is the essence of quantum weirdness: in order to allow continuous transformations between states you must go to complex numbers - if not, you get funny behavior like that of Markov chains. In fact a Wiener process (itself a Markov process) models QM if you do a Wick rotation into complex numbers. Many people have been struck by this and have tried to figure out a way it can be used as an interpretation of QM, but it has not proved successful.
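A quick numerical illustration of that contrast (the matrices below are arbitrary illustrative choices): iterating a stochastic Markov matrix drives every probability vector to a fixed stationary distribution, while iterating a unitary matrix never settles down:

```python
import numpy as np

# A column-stochastic Markov matrix (columns are transition probabilities)
# and a unitary matrix (a rotation), both arbitrary illustrative choices.
M = np.array([[0.9, 0.5],
              [0.1, 0.5]])
theta = 0.3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

p = np.array([1.0, 0.0])                 # initial probability vector
s = np.array([1.0, 0.0], dtype=complex)  # initial quantum state

for _ in range(200):
    p = M @ p  # Markov step: contracts toward the stationary distribution
    s = U @ s  # unitary step: rotates, preserving the norm forever

print(np.round(p, 4))             # ~[0.8333, 0.1667], the fixed point
print(np.round(np.abs(s)**2, 4))  # still oscillating, no convergence
```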

Thanks
Bill
 
Last edited by a moderator:
  • #19
Fredrik said:
I'm a little confused about what we disagree about. My main point was that if we take the usual assumptions that define QM, and simply drop the projection postulate from that list, what we have left isn't a theory. I suspect that you will agree with that.

I certainly didn't mean to suggest that the theory is guaranteed to remain broken even if we replace the projection postulate with something else. I don't know what that would be in Bohmian mechanics, but it has to be something. Every theory needs a rule that associates preparation procedures with mathematical things that can represent them.

The only thing I said (or rather inadvertently suggested, by my precise choice of words) that your argument seems to refute is that we need the full theory (=mathematics+correspondence rules) to find decoherence. I agree of course that decoherence is present in the purely mathematical part of the theory.
Then I probably misunderstood you, because I agree with the above. :approve:
 
  • #20
I suggest you read about the information interpretation (Zeilinger 1999+). In this interpretation, what we would usually call "the system" in physics is considered an amount of information (in the most existential sense) and is only considered a particle, an amount of energy, mass, etc. in a secondary sense. This changes everything. What you're calling "the projection postulate" or "the measurement problem" is nothing more than a consequence of the rules of information theory, which govern quantum mechanics (at least in the information interpretation).
 
  • #21
I should add that decoherence theory is also an information-based theory and fits well with the information interpretation, but it cannot be used to explain all of quantum interpretation, only the interpretation of measurement, and specifically only measurements involving coupling. (Decoherence cannot explain the change in the state of a laser when it is pulse-shaped, because there is no coupling.)
 
  • #23
Here is a paper that attempts to derive the Born rule using decoherence.
 
  • #24
al onestone said:
I should add that decoherence theory is also an information-based theory and fits well with the information interpretation, but it cannot be used to explain all of quantum interpretation, only the interpretation of measurement, and specifically only measurements involving coupling. (Decoherence cannot explain the change in the state of a laser when it is pulse-shaped, because there is no coupling.)

I agree, it's not enough.


lugita15 said:
Here is a paper that attempts to derive the Born rule using decoherence.

not so fast...

the paper
http://arxiv.org/pdf/quant-ph/0405161v2.pdf

...a recently discovered symmetry exhibited by entangled quantum systems...
...Envariance is enough to establish dynamical independence of preferred branches of the evolving state vector of the composite system, and, thus, to arrive at the environment-induced superselection (einselection) of pointer states, that was usually derived by an appeal to decoherence...

not from decoherence

and a counterpoint

Probabilities from envariance?
http://arxiv.org/abs/quant-ph/0401180

It is argued that the reason why all attempts to do this have so far failed is that quantum states are fundamentally algorithms for computing correlations between possible measurement outcomes, rather than evolving ontological states.
 
Last edited:
  • #25
sigma.alpha said:
not so fast...

the paper
http://arxiv.org/pdf/quant-ph/0405161v2.pdf

...a recently discovered symmetry exhibited by entangled quantum systems...
...Envariance is enough to establish dynamical independence of preferred branches of the evolving state vector of the composite system, and, thus, to arrive at the environment-induced superselection (einselection) of pointer states, that was usually derived by an appeal to decoherence...

not from decoherence
OK, sorry about that. What about this one?
 
  • #26
To Demystifier: Thanks for the counter article suggestion, do you know any more criticisms of the information interpretation?
 
  • #28
al onestone said:
To Demystifier: Thanks for the counter article suggestion, do you know any more criticisms of the information interpretation?
I can't recall any other similar paper, but I think the suggested paper contains some related references.
 
  • #29
Demystifier said:
Bohmian particle trajectories are certainly one possibility.

Another possibility is the additional axioms needed for the many-worlds interpretation to work. (For example, an axiom from which the Born rule can be explained.)

What I'm about to say actually applies both to the ensemble interpretation of classical probability and to the "many-worlds" interpretation of quantum probability. I think there is a sense in which ensembles don't really "explain" probabilities.

Let me illustrate with a kind of ludicrous thought experiment:

Suppose that God created the world to be nondeterministic in some way--to make it simple, let's say that he made coin flips truly random. The way he "implements" this nondeterminism is through ensembles. At any moment in which someone is about to flip a coin, God halts time. Then he creates two identical universes: in one universe, he allows the coin flip to have result "heads" and in another universe, he allows the coin flip to have result "tails".

So as time goes on, there are more and more universes, and people within each universe can legitimately interpret probability in an ensemble way: to say that there is a 1/8 chance of getting three heads in a row means the same thing as saying that 1/8 of the universes have three heads in a row.

Now, here's an interesting thing about probabilities--there are the two interpretations: the ensemble view, and the relative frequency view. Not only will 1/8 of the possible worlds have 3 heads in a row, but within most of those worlds, we will find repeated coin flips will produce 3 heads in a row 1/8 of the time. This relative frequency view of coin flip probabilities is in some ways better than the ensemble view, and in some ways worse. It's better because the people confined to a single world can actually measure relative frequencies--in contrast, they have no way of measuring the fraction of possible worlds. It's worse than the ensemble view because it's actually not certain: Some worlds will just be "abnormal" in that 3 heads in a row is much more common or much less common than 1/8.

We can use the ensemble view to argue for the relative frequency view: The relative frequency for events within 1 world will be approximately equal to the ensemble notion of probability in all but a tiny number of worlds (in the limit as the number of possible worlds goes to infinity, the fraction of "abnormal" worlds goes to zero). So residents of any world can justify using relative frequencies by assuming that he's not in an "abnormal" world, and chances are, he's not.
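That claim is easy to check numerically (a toy count under the fair-coin splitting rule described above; the 0.05 tolerance is an arbitrary choice):

```python
from math import comb

# After N fair-coin splittings there are 2**N equally weighted worlds,
# comb(N, k) of which saw exactly k heads. Count the fraction of worlds
# whose relative frequency of heads lies within 0.05 of 1/2.
for N in (10, 100, 1000):
    near = sum(comb(N, k) for k in range(N + 1)
               if abs(k / N - 0.5) <= 0.05)
    print(N, near / 2**N)
# Output: 0.246..., 0.728..., 0.998... - the fraction of "normal" worlds
# approaches 1 as N grows, so almost every world sees typical frequencies.
```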

But here's the weird part: He can make that assumption even if he's wrong about what's abnormal and what's normal. Going back to God's basis for splitting the world, we can imagine changing things by letting God selectively prefer "heads": He makes 2 copies of the world in which "heads" occurs, and 1 copy in which "tails" occurs. That changes all the ensemble probabilities, and changes what counts as "abnormal". Now, the worlds that see 50/50 relative frequency for "heads" and "tails" are abnormal. However, the people in those worlds can pretend that they are normal, and no experiment can prove them wrong. That is, since there is no interaction between "possible worlds", it's perfectly consistent for people to ignore the extra worlds corresponding to the additional result of "heads".

The conclusion that I came to is that an ensemble view of probabilities really doesn't explain why probabilities work in practice (that is, why probabilities tend to be equal to relative frequency), and there is a sense in which there is no explanation for that. Some possible worlds will see a relative frequency of 50/50, and some possible worlds will see a relative frequency of 66/33.

This was a hugely roundabout way to make my point about many-worlds. I think there is a sense in which MW doesn't really justify quantum probabilities, and it really doesn't need to. To get quantum probabilities, we assume that our history is "typical" of all possible histories. The Born interpretation gives us a principled way of defining "typical". That's all. There is no deeper sense in which we can say that Born probabilities are the "correct" ones.

That's unsatisfying, but it's not really peculiar to quantum probability. There is the same problem with classical probability: it's possible to get a million "heads" in a row, it's just not typical. We can make "typical" more precise using measure theory, saying that "typical" results are the ones that happen in all worlds except a set of measure zero. We could have used a different measure on the same set of possibilities, and we would have had a different notion of "typical".
 
  • #30
lugita15 said:
OK, sorry about that. What about this one?

I find explanations like Zurek's slightly circular, in the following way: The argument that the environment selects certain preferred "pointer states" is a "large numbers" type argument. Decoherence is overwhelmingly likely to occur, but you need a pre-existing notion of probability to have a notion of "overwhelmingly likely". So if decoherence is used to justify the appearance of collapse, and therefore Born probabilities, then the whole thing seems sort of circular.
 
  • #31
stevendaryl said:
I find explanations like Zurek's slightly circular, in the following way: The argument that the environment selects certain preferred "pointer states" is a "large numbers" type argument. Decoherence is overwhelmingly likely to occur, but you need a pre-existing notion of probability to have a notion of "overwhelmingly likely". So if decoherence is used to justify the appearance of collapse, and therefore Born probabilities, then the whole thing seems sort of circular.

I know this is six months old, but in the paper cited Zurek goes to great lengths to make his conclusions (a derivation of the Born rule) depend only on his "envariance" and not presuppose decoherence in any way (because reduced density matrices and traces already depend on it).
 
  • #32
eloheim said:
I know this is six months old, but in the paper cited Zurek goes to great lengths to make his conclusions (a derivation of the Born rule) depend only on his "envariance" and not presuppose decoherence in any way (because reduced density matrices and traces already depend on it).
But the OP is asking about the conceptual basis of the projection postulate. So, how is the conceptual basis of the Born rule related to the conceptual basis of the projection postulate?

How about this? Consider qm as a wave mechanical view of fundamental reality. That is, light, electricity, magnetism are all due to wave mechanical interactions in a medium or media of unknown structure. And then let's also consider the wave mechanics of waves in air and water. Ok, so there's the extant experimental literature regarding this stuff, and it tells us that the probability of triggering a detector is proportional to the intensity of the incident wavefront. No matter what the medium. Even if it's unknown. And intensity is proportional to the square of the amplitude. Hence, the Born rule. But what about the projection postulate? Well, it follows from the same classical wave mechanics (applied to whatever) that the Born rule does. They go hand in hand.

You can't have the Born rule without the projection postulate, and you can't have the projection postulate without the Born rule. And they're both entailed by a wave mechanical approach to dealing with disturbances in any medium. It just happens that the media that qm deals with are, uh, imaginary media ... but media nonetheless. Is there any reason to think that disturbances in these imaginary media (of unknown structure) behave in accordance with different wave dynamics than disturbances in media of known structure? Well, no. Of course not. There's just no basis for assuming that. Instead, it's assumed that quantum phenomena behave according to the same fundamental dynamics that macroscopic waves in macroscopic media do. And, so far, this has proven to be a very productive conceptual analogy.
 

1. What is the QM projection postulate?

The QM projection postulate, also known as the measurement postulate, is a fundamental principle in quantum mechanics that describes how the state of a quantum system changes when a measurement is made on it. It states that the act of measurement causes the system to "collapse" into one of its possible states, with the probability of each state determined by the squared modulus of its amplitude in the system's wave function.
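(For example, a qubit prepared in the superposition 0.6|0> + 0.8|1> collapses, upon measurement in that basis, to |0> with probability 0.6^2 = 0.36 and to |1> with probability 0.8^2 = 0.64.)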

2. What are the conceptual underpinnings of the QM projection postulate?

The QM projection postulate is based on the concept of superposition, which states that a quantum system can exist in multiple states simultaneously. It also relies on the idea of wave-particle duality, where particles can exhibit both wave-like and particle-like behavior. Additionally, the postulate is supported by experimental evidence, such as the famous double-slit experiment.

3. How does the QM projection postulate relate to the uncertainty principle?

The QM projection postulate is closely related to the uncertainty principle, which states that it is impossible to know both the position and momentum of a particle with absolute certainty. The deeper reason is that position and momentum do not commute, so no quantum state assigns sharp values to both. The projection postulate makes this concrete for measurements: projecting the state onto a narrow range of positions necessarily spreads out its momentum distribution, so it is impossible to accurately measure both properties at the same time.

4. Can the QM projection postulate be applied to macroscopic systems?

The QM projection postulate is a fundamental principle of quantum mechanics and can in principle be applied to quantum systems of any size. However, it is not typically needed for macroscopic systems, because decoherence makes quantum interference effects unobservable at larger scales; classical mechanics suffices to describe the behavior of macroscopic objects.

5. Are there any alternative theories to the QM projection postulate?

There are various interpretations of quantum mechanics, such as the Copenhagen interpretation, the many-worlds interpretation, and pilot-wave theory. Each offers a different account of the role of the QM projection postulate: some take it as fundamental, while others (as discussed in the thread above) treat it as an effective description of decoherence plus something deeper.
