Experimental Tests of the Projection Postulate

Nicky
Have there been any experiments designed to explicitly test the projection postulate? I mean that part of it that says the measured particle is left in an eigenstate of the measured operator.

The usual devices for measuring particles (photomultipliers, phosphor screens, etc.) don't really allow the postulate to be tested, since the measured particle is absorbed, not re-emitted in a perfect eigenstate. Are there other measurement methods that allow testing the projection postulate?
 
You can't really "test" the postulate, since it is something that applies to some experiments and not others. For some experiments, such as Stern-Gerlach measurements, it works very well.

Quantum opticians use the term "non-demolition" to describe measurements where the state updates according to the projection postulate. There has been considerable work done on engineering these measurements in quantum optical systems. Just look up "non-demolition measurements" on the arXiv for further details.
 
slyboy said:
You can't really "test" the postulate, since it is something that applies to some experiments and not others. For some experiments, such as Stern-Gerlach measurements, it works very well.

Even in a Stern-Gerlach apparatus, I would think there is no actual "measurement" until the electron actually hits a detector. Until that time it would be in a superposition of possible pathways through the magnets, wouldn't it?
 
The perturbative effects of the magnetic field would reduce the state vector, I think. I've been wondering whether the postulate could just be done away with by saying the operator itself is observable, but I'm not too sure whether this is correct. It seems to me that the operator has all the properties needed to be considered an observable, and if so, would the potentials therein be considered real? Then again, I'm not sure whether a von Neumann-type measurement would have any meaning in such a scenario, or whether reduction of the state vector would even have any conceptual meaning.
 
Even in a Stern-Gerlach apparatus, I would think there is no actual "measurement" until the electron actually hits a detector. Until that time it would be in a superposition of possible pathways through the magnets, wouldn't it?

There is always ambiguity in exactly where one applies the measurement postulate, if at all (since you might be a fan of the many-worlds viewpoint).

In the case of Stern-Gerlach, it is clear that one can recombine the outputs and perform another experiment that shows it is still a coherent superposition. This is true of the vast majority of "nondemolition" measurements, since one typically needs a great deal of control over the measurement interaction in order to make the projection postulate applicable. Typically, the "measuring device" is actually another quantum system, which can itself be coherently manipulated. In quantum optics, it might be another photon, for example. The second system is then measured destructively, by absorption in a detector or something like that. The entire experiment could be reversed up to the point that the destructive measurement is made.
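A toy version of this reversibility can be written down in a few lines of linear algebra. This is a sketch, not a model of a real apparatus (all names are illustrative): the "separator" is a unitary that entangles the spin with a path degree of freedom, and the "recombiner" is its inverse, so the coherent superposition is recovered as long as nothing destructive happens in between.

```python
import numpy as np

# Spin basis states and an equal superposition along z
up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
spin = (up + down) / np.sqrt(2)

# The beam starts in a reference path |0>
path0 = np.array([1.0, 0.0])

# Joint initial state |spin> (x) |path0>
psi_in = np.kron(spin, path0)

# "Separator": a CNOT-like unitary that deflects spin-down into path |1>
U = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

psi_split = U @ psi_in            # entangled: which-path correlates with spin
psi_out = U.conj().T @ psi_split  # "recombiner" = inverse unitary

# As long as nothing absorbs the particle in between, the original
# coherent superposition is recovered exactly.
print(np.allclose(psi_out, psi_in))  # True
```

The key point the toy model makes is that as long as the whole interaction stays unitary, it can be undone; it is only a destructive detection that makes the split irreversible.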

However, this might be true in principle even of more complicated measurements involving "macroscopic" measuring devices. It is just that the Hamiltonians required to do this are almost impossible to engineer in practice. Evidence that macroscopic superpositions are possible, in the experiments of Zeilinger for example, indicates that this might be true.

von Neumann emphasized that there is a great deal of ambiguity in exactly where the projection postulate is applied (at the level of quantum systems, macroscopic systems, the brain of the conscious observer, etc.). Generally, it is a mathematical idealisation that allows us to calculate what will happen in experiments without having to deal with complicated entangled states of macroscopic systems. However, the probabilistic aspect of QM has to be applied at some stage. Exactly where it has to be applied and what it means is one of the main topics of debate in the foundations of quantum theory.
 
von Neumann emphasized that there is a great deal of ambiguity in exactly where the projection postulate is applied (at the level of quantum systems, macroscopic systems, the brain of the conscious observer, etc.). Generally, it is a mathematical idealisation that allows us to calculate what will happen in experiments without having to deal with complicated entangled states of macroscopic systems.

Is it generally accepted, then, that the projection postulate is not formally true, but is an approximation of the measuring device's unitary evolution, in the limit as the number of particles goes to infinity?
 
Is it generally accepted, then, that the projection postulate is not formally true, but is an approximation of the measuring device's unitary evolution, in the limit as the number of particles goes to infinity?

That's a thorny question. Not much is generally accepted when it comes to the measurement part of quantum theory. If you ask most physicists then they will probably mumble something about decoherence, but there is no universally accepted answer to this.

What is true is that if there is an interaction such that the quantum state is an entangled superposition of two or more systems, and you can guarantee that the two branches will not interfere with one another due to the nature of the interaction Hamiltonians that exist, then, from the perspective of one of the systems, no distinction can be made between using the full unitary dynamics of the whole superposition and first applying the projection postulate to the other systems before continuing with the unitary dynamics.
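That equivalence is easy to check numerically. Here is a minimal toy calculation (all state labels and amplitudes are illustrative): once the system is entangled with a second system whose branch states are orthogonal, the reduced state of the first system is the same whether you keep the full unitary description or apply the projection postulate to the second system first.

```python
import numpy as np

# Branch amplitudes (any values with |a|^2 + |b|^2 = 1 would do)
a, b = 1 / np.sqrt(3), np.sqrt(2 / 3)

s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # system states
e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # orthogonal "pointer" states

# Full entangled state  a|s0>|e0> + b|s1>|e1>
psi = a * np.kron(s0, e0) + b * np.kron(s1, e1)
rho = np.outer(psi, psi.conj())

# Reduced density matrix of the system: trace out the second factor
rho_sys = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Projection-postulate description: classical mixture of the two outcomes
rho_mix = a**2 * np.outer(s0, s0) + b**2 * np.outer(s1, s1)

print(np.allclose(rho_sys, rho_mix))  # True
```

From the perspective of the first system alone, the two descriptions are indistinguishable, exactly as claimed; the difference only shows up if the branches can be made to reinterfere.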

In my opinion, it doesn't have much to do with the limit of a large number of particles, because you can still imagine engineering an interaction that causes the two branches to interfere, even though it may be difficult in practice. On the other hand, macroscopic systems are more likely to cause decoherence under the Hamiltonians that typically exist in nature.

Even if you do accept that sort of answer, there are many unresolved issues. For example, how do quantum probabilities come about if the universe just consists of a massively entangled wavefunction with no collapses?

People who are bothered by this sort of question have proposed alterations to quantum mechanics that resolve the ambiguities, e.g. Bohmian mechanics and spontaneous collapse models. However, these theories have yet to be made fully compatible with relativity - indeed it is difficult to do so, because reproducing the violation of the Bell inequalities means that they have to tackle nonlocality head-on.
 
Hmm, this is a testable notion:

If you run a stream of electrons through such a Stern-Gerlach separator-recombiner in an EPR-like setup, and then check the correlation along a perpendicular axis, you should be able to tell whether the results do or do not align.

If the correlation along the perpendicular axis is preserved, what happens if you put a 'nilpotent' destructive detector along the path of one of the beams? (By nilpotent I mean a destructive detector that should never detect anything.)
 
For example, how do quantum probabilities come about if the universe just consists of a massively entangled wavefunction with no collapses?

Instead of saying, for instance, that there's a 50% chance of a particle being in a spin up state, you'd say that 50% of the states in the superposition correspond to spin up registering on the measuring device.


Instead of saying you're very likely to see about 50 spin-ups in 100 experiments, you'd say that most of the states in the superposition correspond to about 50 spin-ups detected in the 100 experiments.
 
  • #10
slyboy said:
What is true is that if there is an interaction such that the quantum state is an entangled superposition of two or more systems, and you can guarantee that the two branches will not interfere with one another due to the nature of the interaction Hamiltonians that exist, then, from the perspective of one of the systems, no distinction can be made between using the full unitary dynamics of the whole superposition and first applying the projection postulate to the other systems before continuing with the unitary dynamics.

What kind of interaction Hamiltonians would prevent interference? Are they non-Hermitian?
 
  • #11
What kind of interaction Hamiltonians would prevent interference? Are they non-Hermitian?

No, they are just the usual Hermitian interaction Hamiltonians that occur in nature. The main point is that if you have a macroscopic system, such as the pointer on a measuring device, it is likely to couple to environmental degrees of freedom (the em field, dust particles, etc.) very differently depending on its state in position space. Then, you would have to be able to control all of these environmental degrees of freedom on a quantum level in order to cause the different position states of the pointer to reinterfere. This is impossible in practice, so we can apply the projection postulate to make effective predictions once we know that the system has interacted with the measuring device.
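The scaling at work here can be illustrated with a toy calculation (the per-particle "kick" angle theta is an assumed value, not derived from any real environment): each environmental degree of freedom that correlates with the pointer suppresses the interference terms by an overlap factor, and because the overlaps of product states multiply, the suppression is exponential in the number of particles involved.

```python
import numpy as np

theta = 0.3  # assumed per-particle angle distinguishing the two branches

# Environment state picked up in the pointer-"up" branch vs the "down" branch
e_up = np.array([1.0, 0.0])
e_down = np.array([np.cos(theta), np.sin(theta)])  # slightly rotated

single_overlap = abs(e_up @ e_down)  # cos(theta), a bit less than 1

for n in (1, 10, 100):
    # Overlap of the n-particle environment branch states factorises:
    # <E_up|E_down> = cos(theta)**n
    print(n, single_overlap ** n)

# Already at n = 100 the off-diagonal (interference) terms are suppressed to
# roughly 1% of their original size, and one would have to coherently control
# all 100 environmental particles to undo this.
```

This is why "impossible in practice": the required control grows with every dust grain and photon the pointer touches.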


Instead of saying, for instance, that there's a 50% chance of a particle being in a spin up state, you'd say that 50% of the states in the superposition correspond to spin up registering on the measuring device. Instead of saying you're very likely to see about 50 spin-ups in 100 experiments, you'd say that most of the states in the superposition correspond to about 50 spin-ups detected in the 100 experiments.

Of course, you can say that, but the fact of the matter is that it appears to us that measurements have actual outcomes, rather than being terms in a superposition, so you have to explain why we have this experience.

More seriously, it works well for the situation that you describe, but what about unequal superpositions, e.g.

\frac{1}{\sqrt{3}} | \mbox{up}_z \rangle | \mbox{measuring device registers up} \rangle + \frac{\sqrt{2}}{\sqrt{3}} | \mbox{down}_z \rangle | \mbox{measuring device registers down} \rangle

There are only two terms in the superposition, so by your prescription the probabilities should be 50-50. However, the actual QM probabilities are 1/3 and 2/3. You have to explain why we can give a probability interpretation to the amplitudes of states, rather than just the number of terms.
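A quick numerical sketch of the point, using the amplitudes from the state above: squaring the amplitudes gives 1/3 and 2/3, while "one branch, one vote" counting would give 1/2 each, and simulated measurement frequencies follow the former.

```python
import numpy as np

# Born probabilities come from squared amplitudes, not from counting branches
amps = np.array([1 / np.sqrt(3), np.sqrt(2) / np.sqrt(3)])  # up, down
born = np.abs(amps) ** 2                # [1/3, 2/3]
naive = np.ones(len(amps)) / len(amps)  # [1/2, 1/2] branch counting

# Simulated measurement frequencies track the Born weights
rng = np.random.default_rng(0)
outcomes = rng.choice(["up", "down"], size=100_000, p=born)
print(born, naive)
print((outcomes == "up").mean())  # close to 1/3, not 1/2
```

Any branch-counting prescription has to explain why observed frequencies track the squared amplitudes rather than the number of terms.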

Another problem is how do you decide which basis it is OK to make the probability statement in? I could decompose the spin state in the x-basis, and then the relative states of the measuring device would be superpositions of the "registers up" and "registers down" states.

All these are problems that afflict any interpretation wherein QM is complete and the wavefunction is taken to be a literal specification of the state of reality, such as many worlds. I am not saying that these questions have no good answers, since the many-worlders have come up with several ingenious proposals (albeit proposals that are not universally accepted). The main point is just that there must be more to it than simply reading the probabilities directly from the wavefunction.
 
  • #12
slyboy said:
[...] The main point is that if you have a macroscopic system, such as the pointer on a measuring device, it is likely to couple to environmental degrees of freedom (the em field, dust particles, etc.) very differently depending on its state in position space. Then, you would have to be able to control all of these environmental degrees of freedom on a quantum level in order to cause the different position states of the pointer to reinterfere. This is impossible in practice, so we can apply the projection postulate to make effective predictions once we know that the system has interacted with the measuring device.

Is it fair to say, then, that the macroscopic pointer states still formally interfere with each other, but the interference effects are so minute that the pointer's behavior cannot be distinguished from classical behavior?
 
  • #13
Is it fair to say, then, that the macroscopic pointer states still formally interfere with each other, but the interference effects are so minute that the pointer's behavior cannot be distinguished from classical behavior?

Yes, pretty much. Have a look at Zurek's Physics Today article on the subject for more details.
 
  • #14
slyboy said:
More seriously, it works well for the situation that you describe, but what about unequal superpositions, e.g.

\frac{1}{\sqrt{3}} | \mbox{up}_z \rangle | \mbox{measuring device registers up} \rangle + \frac{\sqrt{2}}{\sqrt{3}} | \mbox{down}_z \rangle | \mbox{measuring device registers down} \rangle

There are only two terms in the superposition, so by your prescription the probabilities should be 50-50. However, the actual QM probabilities are 1/3 and 2/3. You have to explain why we can give a probability interpretation to the amplitudes of states, rather than just the number of terms.

This is indeed THE remark that kills off many "naive" statistical interpretations of the wavefunction, something that Everett and co. never really solved in a satisfactory way. The closest is Deutsch's "rational decider" argument, but even there, he needs additional "reasonable assumptions".

Another problem is how do you decide which basis it is OK to make the probability statement in? I could decompose the spin state in the x-basis, and then the relative states of the measuring device would be superpositions of the "registers up" and "registers down" states.

All these are problems that afflict any interpretation wherein QM is complete and the wavefunction is taken to be a literal specification of the state of reality, such as many worlds. I am not saying that these questions have no good answers, since the many-worlders have come up with several ingenious proposals (albeit proposals that are not universally accepted). The main point is just that there must be more to it than simply reading the probabilities directly from the wavefunction.

I think that is very true, and MWI proponents (of which I'm in a way part, with caveats) cannot avoid ADDING extra hypotheses for the Born rule to emerge, despite their repeated claims to the contrary. However, there's NOTHING WRONG with adding extra hypotheses, as long as they make sense. But I think extra hypotheses are in any case necessary in order to make the Born rule appear, the most important one being that ONLY ONE of the terms (in what basis?) is actually observed (with what probability?).
The difference with Copenhagen-style interpretations is that no physical process is responsible for a wavefunction collapse. That is where Copenhagen-style interpretations "do not make much sense": by some magic, there are "measurement processes" in nature which "make the transition from the quantum to the classical world". Only, no such physical process is known - while the processes happening in the measurement apparatus ARE known, otherwise we wouldn't know what the apparatus is measuring in the first place (except gravity perhaps - but this is clearly not explicitly stated in the Copenhagen-style interpretations) - and it is left very vague exactly where and how this transition is supposed to take place.

However, all this shouldn't stop you from using the projection postulate "FAPP" (for all practical purposes).

cheers,
Patrick.
 
  • #15
Patrick, the "relational" measurement paper I posted about at https://www.physicsforums.com/showthread.php?t=80769, claims to solve the basis choice problem. Could you maybe take a look at it and tell us what you think?
 
  • #16
I am just a grad student so my understanding of QM is still naive, but I am still not getting why the Born rule needs to be a fundamental postulate. Can't it just be viewed as a heuristic, an approximation to the hopelessly complicated unitary dynamics of a macroscopic system?
 
  • #17
selfAdjoint said:
Patrick, the 'relational" measurement paper I posted about at https://www.physicsforums.com/showthread.php?t=80769, claims to solve the basis choice problem. Could you maybe take a look at it and tell us what you think?

I have quickly read it. If I am not wrong, it deals only with the additional difficulty of the time transformation due to Lorentz invariance or the equivalence principle of GR. But I think it does not solve anything for the measurement problem.
The collapse postulate just defines a property in a given reference frame for the whole system. We have to re-express the projector of this property if we change the frame of reference, in order to have the same property correctly expressed in both frames. Therefore, we still have the problem of the (prediction) preferred basis.

Seratend.
 
  • #18
Nicky said:
I am just a grad student so my understanding of QM is still naive, but I am still not getting why the Born rule needs to be a fundamental postulate. Can't it just be viewed as a heuristic, an approximation to the hopelessly complicated unitary dynamics of a macroscopic system?

Simply said, no, for a very simple reason: let us call that very complicated UNITARY operator, U. If it is unitary, no matter how complicated, it is LINEAR.

Take the system under study in state |s1>, and your measurement apparatus in its pre-measurement state |M0>.
Now, if state |s1> always gives you outcome 17 on the measurement dial, then your hopelessly complicated U has at least the following property:

U |s1> x |M0> = |s_something> x |M-17>

If state |s2> always gives you an outcome 38 on the measurement dial,
then your U has at least also the property:

U |s2> x |M0> = |s_somethingelse> x |M-38>

Purely from the linearity follows then:

U (a |s1> + b |s2> ) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.
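The argument can be checked in a toy model (dimensions and labels are illustrative): build a permutation matrix, which is automatically unitary, with the two required properties, and verify that a superposition input necessarily yields both branches in the output.

```python
import numpy as np

s1, s2 = np.eye(2)        # system basis states |s1>, |s2>
M0, M17, M38 = np.eye(3)  # apparatus: ready, "dial 17", "dial 38"

# Joint basis |s_i, M_j> -> flat index 3*i + j.  This permutation (with the
# extra swaps needed to stay unitary) realises the measurement interaction:
#   U|s1,M0> = |s1,M17>   and   U|s2,M0> = |s2,M38>
perm = [1, 0, 2, 5, 4, 3]
U = np.zeros((6, 6))
for col, row in enumerate(perm):
    U[row, col] = 1.0

a, b = 0.6, 0.8  # any amplitudes with |a|^2 + |b|^2 = 1
psi_in = np.kron(a * s1 + b * s2, M0)
psi_out = U @ psi_in

# Linearity forces BOTH branches to appear in the output:
expected = a * np.kron(s1, M17) + b * np.kron(s2, M38)
print(np.allclose(psi_out, expected))  # True
```

No choice of unitary U with those two defining properties can do anything else on the superposition; that is the content of the linearity argument.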

cheers,
Patrick.
 
  • #19
vanesch said:
Purely from the linearity follows then:

U (a |s1> + b |s2> ) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.

cheers,
Patrick.

You mean the projection postulate gives one term and the Born rule gives the probability :-p

Seratend.
 
  • #20
seratend said:
You mean the projection postulate gives one term and the Born rule gives the probability :-p

Eh yes. :redface:
 
  • #21
I think that is very true, and MWI proponents (of which I'm in a way part, with caveats) cannot avoid ADDING extra hypotheses for the Born rule to emerge, despite their repeated claims to the contrary. However, there's NOTHING WRONG with adding extra hypotheses, as long as they make sense. But I think extra hypotheses are in any case necessary in order to make the Born rule appear, the most important one being that ONLY ONE of the terms (in what basis?) is actually observed (with what probability?).

I agree that extra hypotheses are needed, and that there is NOTHING WRONG with adding them in principle. However, I think we must be incredibly careful about just what kind of hypotheses we allow ourselves to add. This is because adding hypotheses about probability is very likely to constrain the possible interpretation we can give to quantum probabilities. For example, if we simply say that the probabilities ARE given by the Born rule, and leave it at that, then that strongly suggests that quantum probabilities MUST be taken to be objective probabilities of some sort.

As you probably know, philosophers, statisticians, economists, physicists, etc. have debated the interpretation of probability in the classical case ad infinitum, with no clear consensus having emerged. The three front-runners are the frequentist, propensity and subjective interpretations of probability.

Now, my personal favourite is the subjective interpretation, but that is pretty much beside the point here. However, I am inclined to believe (hope, pray, etc.) that the interpretational problems of quantum theory can be resolved without having to fix on a particular interpretation of probability. If not, then I don't really see any hope of resolving them, since the debate about interpretation of probability is in many ways even more divisive than the debate about quantum mechanics.

Therefore, I would say that the only sort of hypotheses we should add in order to derive the Born rule are ones that can be formulated in all three interpretations of probability. In particular, hypotheses of the form "the probability is ..." should not be allowed because they cannot be formulated in the subjective interpretation of probability.
 
  • #22
vanesch said:
[...] Purely from the linearity follows then:

U (a |s1> + b |s2> ) x |M0> = a |s_something> x |M-17> + b |s_something_else> x |M-38>

and there's no way in which you can only get ONE of the terms, probabilistically. By linearity, you ALWAYS get BOTH terms.
Born's rule gives you ONE term, with a certain probability.

Thanks much, Patrick. Very clear explanation.
 
  • #23
slyboy said:
Therefore, I would say that the only sort of hypotheses we should add in order to derive the Born rule are ones that can be formulated in all three interpretations of probability. In particular, hypotheses of the form "the probability is ..." should not be allowed because they cannot be formulated in the subjective interpretation of probability.

I'm not sure about that (even convinced of the opposite, say :-p ). In a MWI setting, where objectively, nothing is probabilistic and "everything" happens, the Born-style probabilities are only observer-related (and hence subjective). So what's wrong with: "and for the observer in that branch, the subjective probability is..." ? A bit in the style of the rational decider's probabilities in Deutsch's approach ?

cheers,
Patrick.
 
  • #24
I'm not sure about that (even convinced of the opposite, say ). In a MWI setting, where objectively, nothing is probabilistic and "everything" happens, the Born-style probabilities are only observer-related (and hence subjective). So what's wrong with: "and for the observer in that branch, the subjective probability is..." ? A bit in the style of the rational decider's probabilities in Deutsch's approach ?

That's not really a thoroughgoing subjective theory of probability. It gels with the Jaynesian view of probability perhaps, since there an agent's probability is determined by the objective information that the agent has, i.e. it is an objective fact about an agent's objective knowledge. In this case, the information that an agent has depends on which branch of the wavefunction they find themselves in, so perhaps everything works out OK.

However, in a more thoroughgoing subjective approach, there is simply no objective fact that constrains the probabilities that an agent should assign. It is a matter of their state of belief, rather than of their information or knowledge. In my opinion, this is the more justifiable subjective theory, since information and knowledge are notoriously slippery concepts to define in this context, but belief has a well-defined operational meaning via de Finetti's arguments. In this approach, things that are directly related to probability, such as quantum amplitudes, ought not to appear as part of the "objective state of reality", so this approach is thoroughly incompatible with many-worlds in any case, even without considering what additional hypotheses are needed to derive the Born rule.
 
  • #25
slyboy said:
That's not really a thoroughgoing subjective theory of probability. It gels with the Jaynesian view of probability perhaps, since there an agent's probability is determined by the objective information that the agent has, i.e. it is an objective fact about an agent's objective knowledge. In this case, the information that an agent has depends on which branch of the wavefunction they find themselves in, so perhaps everything works out OK.

However, in a more thoroughgoing subjective approach, there is simply no objective fact that constrains the probabilities that an agent should assign. It is a matter of their state of belief, rather than of their information or knowledge. In my opinion, this is the more justifiable subjective theory, since information and knowledge are notoriously slippery concepts to define in this context, but belief has a well-defined operational meaning via de Finetti's arguments. In this approach, things that are directly related to probability, such as quantum amplitudes, ought not to appear as part of the "objective state of reality", so this approach is thoroughly incompatible with many-worlds in any case, even without considering what additional hypotheses are needed to derive the Born rule.

Wow Slyboy, this is almost philosophy, where is the physics (the logical deductions, not the subjective or objective ones)? :biggrin:

Seratend.
 
  • #26
seratend said:
Wow Slyboy, this is almost philosophy, where is the physics

I would like to object to the generally held belief in the physicist community that where philosophy starts, physics ends; this is a view held by "technocratic" physicists who have a caricatured view of what exactly "philosophy" is, which they erroneously seem to associate with futile discussions such as how many angels can dance on the head of a pin. I'd say that talking about foundational issues in physics necessarily touches upon philosophical arguments in the same way that it touches upon logical (= part of philosophy!) and mathematical arguments.
I have to say that I was also part of the pooh-pooh crowd when philosophy came up (the Feynman lectures help foster that wrong view - probably one of the rare critiques I have of them), until I read a bit about it. The funny thing is that *certain* ideas touching upon foundational issues in quantum theory are actually OLD STUFF in philosophy: the mind-brain problem, solipsism, objective ontology versus subjective perception, the hard problem of consciousness... all these things which, one way or another, have made their way into certain views on the foundations of quantum theory were in fact "old hat" to philosophers. This doesn't mean that philosophers know more about quantum theory than physicists, but at least we can learn something about those aspects from philosophers. Physicists often have this narcissism of reinventing what other fields have already done long before: Heisenberg reinvented linear algebra, Dirac reinvented Hilbert spaces, Murray Gell-Mann reinvented the representations of compact groups...
Sometimes it can be good to look over the fence at what the neighbour is doing, without preconceptions.

I think that nobody objects to discussions "degenerating" into pure logic or mathematics; in the same way, one shouldn't object to discussions degenerating into some philosophy.

cheers,
Patrick.
 
  • #27
Wow Patrick, this is a strong reaction! :biggrin:

I do not deny the usefulness of philosophy, as long as we are able to associate mathematically well-defined objects with the terms and concepts used, for practical purposes in physics. Hence my remark above.

Seratend
 
  • #28
seratend said:
Wow Patrick, this is a strong reaction! :biggrin:

Didn't mean to be mean o:)

My reply wasn't directed at you especially, I just objected to the often given out-of-hand reply that "this is philosophy" comes down to "shut up, you're talking bull****".

On the other hand, I agree with you that one should be careful about such discussions, which DO often degrade into mumbling empty statements.
 
  • #29
Patrick, the ignorance and caricature work the other way too. Very few philosophers have a correct understanding of what quantum mechanics says. They still think of the uncertainty principle in terms of Heisenberg's microscope.
 
  • #30
selfAdjoint said:
Patrick, the ignorance and caricature work the other way too. Very few philosophers have a correct understanding of what quantum mechanics says.

I know, I didn't mean to say that philosophers are better at physics than physicists. What I meant was that when a question in physics touches upon a philosophical issue (such as the distinction between a subjective experience and the hypothesis of an objective world, etc.), it shouldn't be immediately shot down with "pooh, this is philosophy! Where are the equations?"; and it might be interesting to see what philosophers have already said about related questions.
I think that the relationship between physics and philosophy resembles a bit that between physics and mathematics. Physicists are (usually) better at physics than mathematicians, but when physicists, in doing physics, touch upon a mathematical problem, it might be inspiring to go and look at what mathematicians have already done on the issue. That doesn't mean that the physics has to be left to the mathematicians, who will make a mess of it :-)
Now, before the foundational issues in QM, I think the piece of "philosophy" needed in physics was rather simple and intuitive, but the foundational issues in QM have given rise to much deeper questions which sometimes HAVE already been studied for a few millennia. It is true that by training and group-think, many physicists feel uncomfortable with this matter - and sometimes a bit embarrassed; it doesn't fit the "macho" culture of hard math and "real" measurements somehow, and they fear being ridiculed by their peers. But it can be worthwhile to learn at least what has been done on the subject and see whether it is inspiring. Without leaving the physics to philosophers.
 
  • #31
I would like to object to the generally held belief in the physicist community that where philosophy starts, physics ends

I agree. I would like to think that the interpretation of quantum mechanics will turn out to be physics (as well as being philosophy). By this I mean that it will lead to new ways of thinking about physics, which in turn will lead to new ways of extending it. Hopefully, this will turn out to be relevant, albeit indirectly, for coming up with the correct theory of quantum gravity.

Wow Slyboy, this is almost philosophy, where is the physics (the logical deductions, not the subjective or objective ones)

One thing that philosophers are good at is separating out the individual problems that make up the complex issues we think about as physicists. Then, we can try to analyse and solve them one by one.

My main point is that the interpretational problems of quantum mechanics seem to be very closely tied to the problem of interpreting probability theory. I would like to separate them if possible, because trying to solve one hard problem is usually easier than trying to solve two simultaneously.

The subjective theory of probability presents a problem in this regard, because we are used to thinking of quantum probabilities as the objective predictions of the theory. However, according to the subjective theory, probabilities are just not the right sort of thing to appear at such a fundamental level.

On the other hand, subjective probability does not do away with objective facts entirely. Although probabilities are not themselves objective, things such as the possible options that an agent has to decide between, and the possible events that can occur, are taken as objective facts.

In my view, any hypothesis used to derive the Born rule should be expressible in terms of the things that are taken to be objective in any theory of probability. In the subjective theory, this means thoroughly grounding things in decision theory.

If this cannot be done, then the hypothesis is not compatible with all the major interpretations of probability, and its proponents have to go and fight the battle about the interpretation of probability before they can convince everyone that it is the correct way to think about quantum theory.

In contrast, if the hypothesis does have an equivalent formulation in all the major interpretations of probability, then we can ignore the issue and just fight about quantum mechanics instead, which is what we wanted to do in the first place.
 
  • #32
slyboy said:
The subjective theory of probability presents a problem in this regard, because we are used to thinking of quantum probabilities as the objective predictions of the theory. However, according to the subjective theory, probabilities are just not the right sort of thing to appear at such a fundamental level.

I think you cannot get around this: probabilities seem to be fundamental in quantum theory, and I'd say that if you have a view of probability that cannot take probabilities as fundamental quantities, then you'll always have a problem with QM in that view. I think you are very close to the views of Deutsch, no? I think that anyone who tries to do QM *without* somehow postulating that probabilities will appear will run in a circle, because it is the *only* thing that comes out of the wavefunction! After all, what else is a wavefunction good for? What does it mean to have a system in a state |psi> if there's no link to anything probabilistic? There's a paper by Adrian Kent critiquing the MWI (gr-qc/9703089) that explains this very well. I think his critique is too harsh on the MWI, but it illustrates very well that *without* any probability postulate, QM is dead-empty. There is no "emergence" of probabilities simply because there's some vector wobbling in Hilbert space, if that's all there is. Let it wobble. What does it mean?

cheers,
Patrick.
 
  • #33
I think you are very close to the views of Deutsch, no ?

I don't think so, since I am not actually a fan of many-worlds. The main reason is that you have an object so closely related to probabilities appearing as the "state of reality", i.e. the wavefunction, and I don't think that it can ever be made compatible with all the theories of probability.

However, if one follows an interpretation in which the quantum state is epistemic, then there is not such a big problem. We can have different agents assigning different states to the same system, so it becomes much more like a probability distribution.

Of course, the big problem with this approach is to identify what the "states of reality" are in quantum mechanics if they are not the wavefunction. I believe that there are 4 main contenders for this:

1) There is no "state of reality" - This puts us close to a Copenhagen or an instrumentalist view of quantum mechanics. We have to give up a huge chunk of realism, which I would prefer not to do.

2) Hidden variable theories - The problem here is that in most of the viable contenders, such as Bohmian mechanics, the wavefunction still enters as part of the state of reality. There is currently no plausible hidden variable theory without this property, although one can construct toy theories. However, if we drop the equilibrium hypothesis, then the wavefunction is no longer directly related to probability distributions, so this might be a viable approach.

3) Quantum Logic - Here, the only thing that changes when you go from classical to quantum is the structure of events. Probabilities are introduced in exactly the same way as in the classical theory. Events are objective in all theories of probability, so it is compatible with propensity, frequency and subjective approaches.

4) Miscellaneous proposals that have yet to be fully worked out.
 
  • #34
slyboy said:
I don't think so, since I am not actually a fan of many-worlds. The main reason is that you have an object so closely related to probabilities appearing as the "state of reality", i.e. the wavefunction, and I don't think that it can ever be made compatible with all the theories of probability.

So you are saying that the Born rule is not a probability law? :-p

For me, once you define a measure on a given set with a sigma algebra (the Borel sets, in the case of observables) and the eigenvalue-outcome link, you have what I call formally a "classical" probability space. Why ask for more than this?
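To make this concrete (a toy sketch of my own, not from the thread): fix a single observable, and the Born rule does define an ordinary probability measure on that observable's outcome set.

```python
import numpy as np

# Toy sketch: a spin-1/2 system. For a FIXED observable (here, one with
# eigenbasis |up>, |down>), the Born rule p(i) = |<i|psi>|^2 is an ordinary
# probability measure on the two-outcome sample space.
psi = np.array([3, 4j]) / 5.0            # normalized state: |0.6|^2 + |0.8|^2 = 1
eigvecs = np.eye(2, dtype=complex)       # eigenbasis of the chosen observable

probs = np.abs(eigvecs.conj().T @ psi) ** 2
print(probs)        # [0.36, 0.64] up to float rounding
print(probs.sum())  # sums to 1: a bona fide probability measure
```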

Seratend.
 
  • #35
Well, the issue of 'quantum interpretation' is appropriate to philosophy because it is not experimentally testable and thus not physics or science. If it is testable, then it ceases to be interpretation and becomes theory.

Physics, unlike philosophy, is not suitable for discussing how many angels can dance on the head of a pin.
 
  • #36
slyboy said:
My main point is that the interpretational problems of quantum mechanics seem to be very closely tied to the problem of interpreting probability theory. I would like to separate them if possible, because trying to solve one hard problem is usually easier than trying to solve two simultaneously.

I agree :-p
However, I think one should first define the scope of the interpretation. Personally, I just need a consistent mapping between some objects of the theory and some objects of reality in order to make logical and practical predictions/deductions (or at least the identification of some objects that may be mapped later to "reality").

In the probability domain, what do you call the problem of interpretation? Do you mean the choice of a peculiar interpretation?

Seratend.
 
  • #37
NateTG said:
Well, the issue of 'quantum interpretation' is appropriate to philosophy because it is not experimentally testable and thus not physics or science. If it is testable, then it ceases to be interpretation and becomes theory.

Physics, unlike philosophy, is not suitable for discussing how many angels can dance on the head of a pin.

:biggrin: :biggrin: :biggrin:

Well, I think one should split the interpretation into a minimalist part (the mapping of some of the mathematical objects to "reality") and a "philosophical" part. I need the minimalist part to describe experiments formally (apply the logic), while I can live without the second part :biggrin: .

Seratend.
 
  • #38
seratend said:
:biggrin: :biggrin: :biggrin:

Well, I think one should split the interpretation into a minimalist part (the mapping of some of the mathematical objects to "reality") and a "philosophical" part. I need the minimalist part to describe experiments formally (apply the logic), while I can live without the second part :biggrin: .

Seratend.

The popular 'plug and chug' interpretation, originally, I believe, attributed to von Neumann.

There is a legitimate place for interpretations as a method for developing hypotheses, but Ph.D. stands for Doctor of Philosophy.
 
  • #39
NateTG said:
Well, the issue of 'quantum interpretation' is appropriate to philosophy because it is not experimentally testable and thus not physics or science. If it is testable, then it ceases to be interpretation and becomes theory.

I'd object to this, for several reasons. The most important is that the interpretation of quantum theory is the only link between the mathematical formalism and the experimental setup; however, in many cases this reduces to something that is *intuitively clear*, and we're cheating, because we switch, at a certain point, to classical physics. However, it is conceivable that in much more sophisticated setups the intuition is NOT going to be right. Typical example: when do we have to treat the nuclear skeleton of a molecule classically, and when do we have to treat it quantum-mechanically (e.g., the molecule has no structure)? NH3 must be treated QM, and a protein must/can (?) be treated classically. In fact, current QM leaves this question rather open; decoherence seems to suggest that the answers will come out the same.
But at some point, we need to know whether a "real" collapse occurs or not. This is a testable question (at least in principle, and much easier to test than string theory :-). This has everything to do with the interpretation of quantum theory.

But another important objection is this: an interpretation offers a mental picture of what you are doing, and I think that such a mental picture is necessary in order to develop the intuition needed to make progress. For instance, string theorists just take over the unitary machinery of quantum theory. But I think the first question to solve is whether gravity does, or does not, allow the unitary evolution to continue (that's many worlds) or induces a collapse of some kind. Again, this is closely related to interpretational issues.

I'd say that if you take interpretation and mathematics away from physics, you end up with stamp collecting :-)

cheers,
Patrick.
 
  • #40
NateTG said:
The popular 'plug and chug' interpretation, originally, I believe, attributed to von Neumann.

Maybe popularly attributed to von Neumann. However, it is surely the most difficult to understand, in my opinion (i.e. we have to understand our own way of thinking).

NateTG said:
There is a legitimate place for interpretations as a method for developing hypotheses, but Ph.D. stands for Doctor of Philosophy.

Yes, a typically Anglo-Saxon point of view ;). In other countries, there are only doctors. :)))

Seratend.
 
  • #41
So you are saying that the Born rule is not a probability law?

Well, it's certainly a probability rule, but I hesitate to give it the title "law". As I have explained, I don't think probability statements should enter into our fundamental laws of nature.

For me, once you define a measure on a given set with a sigma algebra (the Borel sets, in the case of observables) and the eigenvalue-outcome link, you have what I call formally a "classical" probability space. Why ask for more than this?

Well, you actually have multiple classical probability spaces, one for each observable. Quantum theory says that there are events appearing in different sample spaces that are always assigned the same probability. The only way this can be justified in a subjective theory is if these events are always identified as the same. So, what you really have, is not a classical probability space, but multiple spaces pasted together, and this is essentially the probability space of quantum logic.
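A toy illustration of this point (my own example, not from the thread): the same qubit state yields a separate classical distribution for each observable, one per sample space.

```python
import numpy as np

# Toy example: one state, two incompatible observables (sigma_z and sigma_x),
# hence two distinct classical probability spaces with their own distributions.
psi = np.array([1.0, 0.0], dtype=complex)            # the state |0>

z_basis = np.eye(2, dtype=complex)                   # eigenbasis of sigma_z
x_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # eigenbasis of sigma_x

p_z = np.abs(z_basis.conj().T @ psi) ** 2   # distribution on the Z sample space
p_x = np.abs(x_basis.conj().T @ psi) ** 2   # distribution on the X sample space
print(p_z)  # deterministic in the Z space
print(p_x)  # uniform in the X space
```

Quantum theory constrains how these separate spaces fit together (a sharp Z distribution forces a flat X distribution), which is exactly the "pasting" structure described above.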

Well, the issue of 'quantum interpretation' is approrpiate to philosophy because it is not experimentally testable and thus not physics or science. If it is testable, then it ceases to be interpretation and becomes theory.

I used to believe this as well, but now I am not so sure. All physical theories have a verifiable part and an interpretation part - not just quantum mechanics. The verifiable part consists of the mathematical formalism, and a set of rules for relating it to the experiments. There is always underdeterminism in the interpretation part, i.e. I can always cook up bizarre ways of thinking about things that give all the same experimental predictions, but a very different picture of the world. For example, I may be able to cook up an interpretation of Newtonian mechanics that doesn't have a notion of absolute time. However, no-one would argue that Newtonian mechanics doesn't have absolute time, and that this is part of the physics rather than being just philosophy.

In the quantum case, we have cooked up this nice comforting story for ourselves, wherein there is an operational part of the theory that everyone agrees upon and understands, and an interpretation part that is just a matter of philosophy. However, there are cases where the part that we normally think of as interpretation rears its ugly head in real physics. I am thinking particularly of the debates surrounding the existence of the quantum Zeno effect, which relies on taking the projection postulate literally, and also the role of the wavefunction of the universe in quantum cosmology.
 
  • #42
I thought that paper was very promising, and I would like to see some comments from people who are less rusty than I am.
 
  • #43
slyboy said:
In the quantum case, we have cooked up this nice comforting story for ourselves, wherein there is an operational part of the theory that everyone agrees upon and understands, and an interpretation part that is just a matter of philosophy. However, there are cases where the part that we normally think of as interpretation rears its ugly head in real physics. I am thinking particularly of the debates surrounding the existence of the quantum Zeno effect, which relies on taking the projection postulate literally, and also the role of the wavefunction of the universe in quantum cosmology.

I couldn't agree more :approve:

cheers,
Patrick.
 
  • #44
vanesch said:
The most important is that the interpretation of quantum theory is the only link between the mathematical formalism and the experimental setup ;

I think this is primarily an issue of definitions. My notion of what 'interpretation' encompasses is narrower than yours.

vanesch said:
But at some point, we need to know whether a "real" collapse occurs or not. This is a testable question (at least in principle, much easier to test than string theory :-) This has everything to do with the interpretation of quantum theory.

If you want to test whether a collapse occurs or not, or, for that matter, exactly what experimentally testable properties a collapse has, those are not interpretation issues.

vanesch said:
But another important objection is this: an interpretation offers a mental picture of what you are doing, and I think that such a mental picture is necessary in order to be able to devellop the necessary intuition to make progress.

Not really. It's quite possible, for example, to look for places where the current theory has singularities, and run experiments to see what happens there.
Moreover, it's not at all clear to me that interpretation has been historically useful for physics. Rather, it seems like interpretation is a problem that physics (really, science in general) keeps knocking its teeth out on. Theories that come out of 'actuarial' science (that is, theories based on making lots of observations and attempting to correlate the results) tend to be strong, while theories based on 'interpretation' (on what might or ought to be) tend to be weak.
 
  • #45
Theories that come out of 'actuarial' science (that is, theories based on making lots of observations and attempting to correlate the results) tend to be strong, while theories based on 'interpretation' (on what might or ought to be) tend to be weak.

Yes, I agree. Relativity is clearly one of the weakest theories in science :)

...but seriously, I think that this sort of sweeping generalization is not justified by the actual history of science. It has always been a mix of effective theories based on observation and grand extrapolations by theorists to make things fit their grand vision of the world.

Many examples, such as relativity, Darwin's theory of natural selection, etc. could not be entirely justified by the available evidence at the time they were proposed, although they were of course guided by some observations that had been made.
 
  • #46
NateTG said:
Moreover, it's not at all clear to me that interpretation has been historically useful for physics. Rather, it seems like interpretation is a problem that physics (really, science in general) keeps knocking its teeth out on. Theories that come out of 'actuarial' science (that is, theories based on making lots of observations and attempting to correlate the results) tend to be strong, while theories based on 'interpretation' (on what might or ought to be) tend to be weak.

There are examples of both, but some spectacular breakthroughs were based purely on "vision":
- Maxwell's equations (the ∂D/∂t displacement-current term)
- general relativity
- Dirac's equation
- the electroweak theory of Weinberg and Salam
- the expanding universe (Hubble: his data were in fact showing the opposite!)

most of these were NOT data driven at all, but based upon the vision that the authors had of how things "ought" to be.

Of course an example of a theory that was rammed down our throat by data was quantum mechanics.

cheers,
Patrick.
 
  • #47
NateTG said:
If you want to test whether a collapse occurs or not, or, for that matter, exactly what experimentally testable properties a collapse has, those are not interpretation issues.

I'd say that these are extensions of the current formalism, based upon a vision that is inspired by a certain interpretation :smile:

After all, the reason why there's so much discussion and so many different interpretations is mostly that the current formalism of quantum theory is AMBIGUOUS. For most applications this doesn't matter for the moment, because we cheat in different ways: we hop between classical physics and quantum theory all the time in ways which are just given by our intuition, and this works for all practical purposes. The advantage of taking up an interpretation is that it FORCES you to make choices where the actual theory is vague; so in a way, according to you, interpretational issues are in fact variant extensions of the theory. Fine.
 
  • #48
slyboy said:
Well, it's certainly a probability rule, but I hesitate to give it the title "law". As I have explained, I don't think probability statements should enter into our fundamental laws of nature.

Ok, I understand better what you are saying. For me, a probability law is a measure on a sigma algebra (i.e. the mathematical definition). So you attach more properties to the words "probability law" than I do (cf. the rest of your post).


slyboy said:
Quantum theory says that there are events appearing in different sample spaces that are always assigned the same probability. The only way this can be justified in a subjective theory is if these events are always identified as the same.

I do not understand this statement. We have a probability space completely defined for a given observable and a state. If we want, we may formally connect these probability spaces by a parameter that we may think of as a context, but this is an additional external structure (such as the definition of a non-Boolean lattice of propositions versus a collection of Boolean lattices).

In addition, why do you speak about a subjective theory?

slyboy said:
I used to believe this as well, but now I am not so sure. All physical theories have a verifiable part and an interpretation part - not just quantum mechanics. The verifiable part consists of the mathematical formalism, and a set of rules for relating it to the experiments. There is always underdeterminism in the interpretation part, i.e. I can always cook up bizarre ways of thinking about things that give all the same experimental predictions, but a very different picture of the world.

Why do you want determinism in the interpretation part, and why do you think that the way of describing "reality" is unique? All you are able to obtain is the logical consistency of the interpretation, in my opinion.
And what may be bizarre for you may be normal for another person :biggrin: .

slyboy said:
In the quantum case, we have cooked up this nice comforting story for ourselves, wherein there is an operational part of the theory that everyone agrees upon and understands, and an interpretation part that is just a matter of philosophy. However, there are cases where the part that we normally think of as interpretation rears its ugly head in real physics. I am thinking particularly of the debates surrounding the existence of the quantum Zeno effect, which relies on taking the projection postulate literally, and also the role of the wavefunction of the universe in quantum cosmology.

Here is the problem: "literally" allows a lot of mathematical choices, and the paradox comes from thinking of the collapse postulate as a kind of deterministic process rather than a simple description rule (i.e. we do not assume more than what is written).
It is like a person each of whose steps covers half the distance of the previous one. This does not mean that the person ever stops.
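The walker analogy is just a convergent geometric series; a quick sketch (my own, purely illustrative):

```python
# Each step covers half the distance of the previous one. The total distance
# converges (to twice the first step), yet there is no "last" step at which
# the walker stops: convergence of the sum does not mean the process halts.
step, position = 1.0, 0.0
for _ in range(60):
    position += step
    step /= 2.0
print(position)  # approaches 2.0
print(step)      # still strictly positive: the walker never stopped
```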

Seratend.
 
  • #49
I do not understand this statement. We have a probability space completely defined for a given observable and a state. If we want we may formally connect these probability spaces by a parameter, that we may think as a context, but this is an additional external structure (such as the definition of non boolean lattice of propostions versus a collection of bolean lattices).

In addition, why do you speak about a subjective theory?

I mean the subjective theory of probability, which, as I have said before, I would like to make QM compatible with. Within that theory, one has to derive the structure of probability measures from decision theory, making use of simple axioms about the structure of possible events and actions. These are called coherence arguments.

As far as I can see, if your sample space is a Boolean algebra, then a coherence argument will tell you that all classical probability measures on it are allowed. If you have several unrelated Boolean algebras, then you can have any combination of probability measures on them. However, QM doesn't allow this; cf. the uncertainty relations, for example.

The only way I can see to fix this is to modify the structure of events so that it is no longer a Boolean algebra. Then the coherence argument gives you the correct quantum probabilities via Gleason's theorem.
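As a minimal numerical check (my own sketch, not from the thread): the Born-rule form that Gleason's theorem singles out, p(P) = Tr(rho P), is indeed a probability measure on any orthogonal resolution of the identity. The theorem itself requires Hilbert-space dimension at least 3.

```python
import numpy as np

# Gleason's theorem (dim >= 3): every probability measure on the lattice of
# projectors has the form p(P) = Tr(rho P) for some density operator rho.
# Here we only verify that this form IS a probability measure on an arbitrary
# orthogonal resolution of the identity.
dim = 3
rho = np.diag([0.5, 0.3, 0.2]).astype(complex)   # a density operator (trace 1)

# random orthonormal basis -> rank-1 projectors summing to the identity
q, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(dim, dim)))
projs = [np.outer(q[:, k], q[:, k].conj()) for k in range(dim)]

p = np.array([np.trace(rho @ P).real for P in projs])
print(p)        # nonnegative weights
print(p.sum())  # 1.0, since Tr(rho @ I) = Tr(rho)
```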

Why do you want determinism in the interpretation part and why do you think that the way of describing the "reality is unique? All what you are able to obtain is a logical consistency of the interpretation in my opinion.

I never said that I want determinism, just that there is always underdeterminism. Actually, that was a mistake, since I should have written "underdetermination". There will always be several interpretations of a theory that are "logically consistent", although I doubt that one can ever fully demonstrate the logical consistency of an interpretation to the same degree as for a mathematical theory. Instead, we apply principles like Occam's razor and look for explanatory power in an interpretation. This is as much a part of science as performing experiments, so it shouldn't be dismissed as "mere" philosophy.

And what you may think is bizarre for you may be normal for another person .

That's what makes science fun! It is a human activity and we can debate the best way to proceed as much as in any other area of human knowledge.
 