Is wave function a real physical thing?

In summary, the conversation discusses the question of whether the wave function is a real physical object or simply a mathematical tool. The article referenced in the conversation presents a new theorem that provides evidence for the former, but further research and experimentation are needed to fully understand the implications of this theorem. The conversation ultimately concludes that the question is of little consequence until the terms are clearly defined and an experiment can be devised to settle the debate.
  • #71
vanhees71 said:
Ok, then what's in your view the difference between Bayesian and frequentist interpretations of probabilities, particularly regarding the statement that probabilities make sense for a single event?

You can go one better: Bayesian statistics allows us to have a probability for something with zero events. Of course, in that case, it's just a guess (although you can have a principled way of making such guesses). A single event provides a correction to your guess. More events provide better correction.

E.g., when they say in the weather forecast that there's a 99% probability of snow tomorrow, and tomorrow it doesn't snow. Does that tell you anything about the validity of the probability given by the forecast? I don't think so.

It doesn't tell you a lot, but it tells you something. If the forecast is for a 99% chance of snow, and it doesn't snow, then (for a Bayesian) the confidence that the forecast is accurate will decline slightly. If for 100 days in a row the weather service predicts a 99% chance of snow, and it doesn't snow on any of those days, then for the Bayesian the confidence that the reports are accurate will decline smoothly each time. It would never decline to zero, because there is always a nonzero chance that an accurate probabilistic prediction is wrong 100 times in a row, just as there is a nonzero chance that a fair coin will yield heads 100 times in a row.

The frequentist would (presumably) have some cutoff value for significance. The first few times that the weather report proves wrong, they would say that no conclusion can be drawn, since the sample size is so small. Then at some point, they would conclude that they had a large enough sample to make a decision, and would decide that the reports are wrong.

Note that both the Bayesian and the frequentist make use of arbitrary parameters: the Bayesian has an arbitrary a priori notion of the probability of events, while the frequentist has an arbitrary cutoff for determining significance. The difference is that the Bayesian smoothly takes into account new data, while the frequentist withholds any judgement until some threshold amount of data is reached, then makes a discontinuous decision.
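As a rough toy illustration of that smooth Bayesian decline (a made-up model of mine, not anything in the forecast example itself): weigh two hypotheses, "the forecaster's stated probabilities are accurate" against "the forecasts are uninformative, snow is a coin flip", and apply Bayes' formula after each day on which a 99% forecast is followed by no snow.

```python
# Toy Bayesian update of confidence that the forecaster's probabilities are accurate.
# Hypothesis H1: the stated 99% snow probability is correct.
# Hypothesis H2: the forecast is uninformative (50% chance of snow regardless).
# The hypotheses and the 0.5 prior are illustrative assumptions only.

p_h1 = 0.5                   # prior confidence that the forecaster is accurate
p_miss_given_h1 = 0.01       # probability of "no snow" if the 99% forecast is right
p_miss_given_h2 = 0.50       # probability of "no snow" if the forecast is uninformative

for day in range(1, 101):
    # Bayes' formula: P(H1 | no snow) = P(no snow | H1) P(H1) / P(no snow)
    numerator = p_miss_given_h1 * p_h1
    evidence = numerator + p_miss_given_h2 * (1.0 - p_h1)
    p_h1 = numerator / evidence
    if day in (1, 2, 5, 100):
        print(f"after {day:3d} failed 99% forecasts: P(accurate) = {p_h1:.3e}")

# The posterior shrinks smoothly with every failed forecast but never reaches zero,
# just as 100 heads in a row from a fair coin is improbable but not impossible.
```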

It's just a probability based on experience (i.e., the collection of many weather data over a long period) and weather models based on very fancy hydrodynamics on big computers. The probabilistic statement can only be checked by evaluating a lot of data based on weather observations.

Of course, there's Bayes's theorem on conditional probabilities, which has nothing to do with interpretations or statistics but is a theorem that can be proven within Kolmogorov's standard axiom system:
$$P(A|B) P(B) = P(B|A) P(A),$$
which is of course not the matter of any debate.

Bayes' formula is of course valid whether you are a Bayesian or a frequentist, but the difference is that the Bayesian associates probabilities with events that have never happened before, and so can make sense of any amount of data. So for the example we're discussing, there would be an a priori probability of snow, and an a priori probability of the weather forecaster being correct. With each day that the forecaster makes a prediction, and each day that it does or does not snow, those two probabilities are adjusted based on the data, according to Bayes' formula.

So Bayes' formula, together with a priori values for probabilities, allows the Bayesian to make probabilistic statements based on whatever data is available.

I'm really unable to understand why there is such a hype about Qbism,

Well, I'm not defending Qbism. I was just talking about Bayesian versus frequentist views of probability. As I said previously, I don't think that Qbism gives any new insight into the meaning of quantum mechanics, whether or not you believe in Bayesian probability.
 
  • Like
Likes TrickyDicky
  • #72
vanhees71 said:
This is a contradiction in itself: If you assume a relativistic QFT to describe nature, by construction all measurable (physical) predictions are Poincare covariant, i.e., there's no way to distinguish one inertial frame from another by doing experiments within quantum theory. As Gaasbeek writes already in the abstract: The delayed-choice experiments can be described by standard quantum optics. Quantum optics is just an effective theory describing the behavior of the quantized electromagnetic field in interaction with macroscopic optical apparatus in accordance with QED, the paradigmatic example of a relativistic QFT, and as such is Poincare covariant in its predictions about the observable outcomes; quantum optics is indeed among the most precisely understood fields of relativistic quantum theory: All predictions are confirmed by high-accuracy experiments. So quantum theory cannot reintroduce an "aether", or whatever you want to call a "preferred reference frame", into physics! By construction QED, and thus also quantum optics, fulfills relativistic causality constraints too!

What if we use a scale-free network to describe a discrete quantum spacetime? Inertial frames would be indistinguishable because of the scale invariance imposed by a particular renormalization group. Of course I have no idea how you would experimentally verify this, but it has been proposed.
 
  • #73
If the wave function is a physical object, then is Hilbert Space a physical space? In other words if the wave function is a physical object then would this necessitate that quantum spacetime is an infinite dimensional complex vector space?
 
  • #74
JPBenowitz said:
If the wave function is a physical object, then is Hilbert Space a physical space?

Yes.
 
  • Like
Likes goce2014
  • #75
JPBenowitz said:
If the wave function is a physical object, then is Hilbert Space a physical space? In other words if the wave function is a physical object then would this necessitate that quantum spacetime is an infinite dimensional complex vector space?

It's easy to get caught in semantic 'nonsense' if you are not careful in how you use terms. Mathematical spaces like Hilbert space are not physical - they are a modelling tool for physical situations. When one uses QFT to give a quantum theory of gravity, space-time is then in a sense modeled by a Fock space - but since quantum gravity breaks down beyond a cut-off, it's quite likely another model is a better choice - string theory maybe.

Thanks
Bill
 
  • #76
stevendaryl said:
You can go one better: Bayesian statistics allows us to have a probability for something with zero events. Of course, in that case, it's just a guess (although you can have a principled way of making such guesses). A single event provides a correction to your guess. More events provide better correction.
It doesn't tell you a lot, but it tells you something. If the forecast is for a 99% chance of snow, and it doesn't snow, then (for a Bayesian) the confidence that the forecast is accurate will decline slightly. If for 100 days in a row the weather service predicts a 99% chance of snow, and it doesn't snow on any of those days, then for the Bayesian the confidence that the reports are accurate will decline smoothly each time. It would never decline to zero, because there is always a nonzero chance that an accurate probabilistic prediction is wrong 100 times in a row, just as there is a nonzero chance that a fair coin will yield heads 100 times in a row.

The frequentist would (presumably) have some cutoff value for significance. The first few times that the weather report proves wrong, they would say that no conclusion can be drawn, since the sample size is so small. Then at some point, they would conclude that they had a large enough sample to make a decision, and would decide that the reports are wrong.

Note that both the Bayesian and the frequentist make use of arbitrary parameters: the Bayesian has an arbitrary a priori notion of the probability of events, while the frequentist has an arbitrary cutoff for determining significance. The difference is that the Bayesian smoothly takes into account new data, while the frequentist withholds any judgement until some threshold amount of data is reached, then makes a discontinuous decision.
Bayes' formula is of course valid whether you are a Bayesian or a frequentist, but the difference is that the Bayesian associates probabilities with events that have never happened before, and so can make sense of any amount of data. So for the example we're discussing, there would be an a priori probability of snow, and an a priori probability of the weather forecaster being correct. With each day that the forecaster makes a prediction, and each day that it does or does not snow, those two probabilities are adjusted based on the data, according to Bayes' formula.

So Bayes' formula, together with a priori values for probabilities, allows the Bayesian to make probabilistic statements based on whatever data is available.
Well, I'm not defending Qbism. I was just talking about Bayesian versus frequentist views of probability. As I said previously, I don't think that Qbism gives any new insight into the meaning of quantum mechanics, whether or not you believe in Bayesian probability.
Maybe this discussion about frequentism versus Bayesianism can shed some light on the parallel discussion about collapse. Following the logic of the quoted post, we could reconcile both the absence and the presence of collapse as two ways of introducing irreversibility (i.e., entropy through probability and preparation, or through measurement/collapse) into quantum theory, two ways of contemplating how probabilities are updated by measurements. Collapse is probably a rougher way of viewing it, but it is a matter of taste. It all amounts to the same QM.
 
  • #77
atyy said:
Hmmm, are we still disagreeing on this? Collapse is in Landau and Lifshitz, Cohen-Tannoudji, Diu and Laloe, Sakurai and Weinberg (and every other major text except Ballentine, who I'm sure is wrong), so it really is quantum mechanics. To see that it is essential, take an EPR experiment in which Alice and Bob measure simultaneously. What is simultaneous in one frame will be sequential in another frame. As long as one has sequential measurements in which sub-ensembles are selected based on the measurement outcome, one needs collapse or an equivalent postulate.
We disagree on the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR. As far as I remember, Weinberg is undecided about the interpretation at the end of his very nice chapter on the issue. I think that the minimal statistical interpretation is everything we need to apply quantum theory to observable phenomena. Another question is whether you consider QT to be a "complete theory". This was the main question that Heisenberg, in particular, was concerned about, and this gave rise to the Copenhagen doctrine, but as we see in our debates here on the forum, there's not even a clear definition of what the Copenhagen interpretation might be. That's why I prefer to label the interpretation I follow as the "minimal statistical interpretation". I think it's very close to the flavor of Copenhagen due to Bohr, although I'm not sure what Bohr thought with regard to the collapse. I don't agree with his hypothesis that there must be a "cut" between quantum and classical dynamics, because it cannot be defined. Classical behavior arises from decoherence and the necessity of coarse graining in defining relevant "macroscopic" observables, not from a cut at which quantum theory becomes invalid and classical dynamics takes over.

The "collapse" to my understanding is just the trivial thing that after I take notice of the result of a random experiment that then for this instance the before undetermined or unknown feature is decided. There's nothing happening in a physical sense. Nowadays most experiments take data, store them in a big computer file and then evaluate these outcomes much later. Would you say there's a collapse acting on things that are long gone, only because somebody makes some manipulation of data on a storage medium? Or has the collapse occurred when the readout electronics have provided the signal to be written on that medium? Again, I don't think that the collapse is necessary to use quantum theory as a probabilistic statement about the outcome of measurements with a given preparation (state) of the system.
 
  • Like
Likes bhobba
  • #78
vanhees71 said:
We disagree on the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR. As far as I remember, Weinberg is undecided about the interpretation at the end of his very nice chapter on the issue. I think that the minimal statistical interpretation is everything we need to apply quantum theory to observable phenomena. Another question is whether you consider QT to be a "complete theory". This was the main question that Heisenberg, in particular, was concerned about, and this gave rise to the Copenhagen doctrine, but as we see in our debates here on the forum, there's not even a clear definition of what the Copenhagen interpretation might be. That's why I prefer to label the interpretation I follow as the "minimal statistical interpretation". I think it's very close to the flavor of Copenhagen due to Bohr, although I'm not sure what Bohr thought with regard to the collapse. I don't agree with his hypothesis that there must be a "cut" between quantum and classical dynamics, because it cannot be defined. Classical behavior arises from decoherence and the necessity of coarse graining in defining relevant "macroscopic" observables, not from a cut at which quantum theory becomes invalid and classical dynamics takes over.

Weinberg is undecided about interpretation, and it is true that one can do without collapse provided one does not use Copenhagen or a correct version of the minimal statistical interpretation. For example, one can use the Bohmian interpretation, or try to use a Many-Worlds interpretation, both of which have no collapse. But it is not possible to use Copenhagen or a correct version of the minimal statistical interpretation without collapse (or equivalent assumption such as the equivalence of proper and improper mixtures). This is why most major texts (except Ballentine's erroneous chapter 9) include collapse, because the default interpretation is Copenhagen or the minimal statistical interpretation.

Peres argues that one can remove the cut and use coarse graining, but Peres is wrong because the coarse-grained theory in which the classical/quantum cut appears to be emergent yields predictions, but the fine grained theory does not make any predictions. So the coarse graining that Peres mentions introduces the classical/quantum cut in disguise. It is important that the cut does not say that we cannot enlarge the quantum domain and treat the classical apparatus in a quantum way. What the cut says is that if we do that, we need yet another classical apparatus in order for quantum theory to yield predictions.

Another way to see that the minimal statistical interpretation must have a classical/quantum cut and collapse (or equivalent postulates) is that a minimal interpretation without these elements would solve the measurement problem, contrary to the consensus that a minimal interpretation does not solve it.

vanhees71 said:
The "collapse" to my understanding is just the trivial thing that after I take notice of the result of a random experiment that then for this instance the before undetermined or unknown feature is decided. There's nothing happening in a physical sense. Nowadays most experiments take data, store them in a big computer file and then evaluate these outcomes much later. Would you say there's a collapse acting on things that are long gone, only because somebody makes some manipulation of data on a storage medium? Or has the collapse occurred when the readout electronics have provided the signal to be written on that medium? Again, I don't think that the collapse is necessary to use quantum theory as a probabilistic statement about the outcome of measurements with a given preparation (state) of the system.

Collapse occurs immediately after the measurement. In a Bell test, the measurements are time stamped, so if you accept the time stamp, you accept that that is when the measurement happens, and not later after post-processing. It is ok not to accept the time stamp, because measurement is a subjective process. However, in such a case, there is no violation of the Bell inequalities at spacelike separation. If one accepts that quantum mechanics predicts a violation of the Bell inequalities at spacelike separation, then one does use the collapse postulate. It is important that at this stage we are not committing to collapse as a physical process, and leaving it open that it could be epistemic.
 
Last edited:
  • #79
vanhees71 said:
We disagree on the one point that you say collapse is a necessary part of the quantum-theoretical formalism. I think it's superfluous and contradicts very fundamental physical principles, as pointed out by EPR.

So do I.

vanhees71 said:
Nowadays most experiments take data, store them in a big computer file and then evaluate these outcomes much later. Would you say there's a collapse acting on things that are long gone, only because somebody makes some manipulation of data on a storage medium? Or has the collapse occurred when the readout electronics have provided the signal to be written on that medium? Again, I don't think that the collapse is necessary to use quantum theory as a probabilistic statement about the outcome of measurements with a given preparation (state) of the system.

If you want collapse placing it just after decoherence would seem the logical choice. But I am with you - you don't need it.

Thanks
Bill
 
  • Like
Likes vanhees71
  • #80
That's a good point: The state preparation in, e.g., a Stern-Gerlach experiment is through a von Neumann filter measurement. You let the particle run through an inhomogeneous magnetic field, and this sorts the particles into regions of different ##\sigma_z## components (where ##z## is the direction of the homogeneous piece of the magnetic field). Then we block out all particles not within the region of the desired value of ##\sigma_z##.

Microscopically the shielding simply works as an absorber of the unwanted particles. One can see that there is no spontaneous collapse but simply local interactions of the particles with the shielding, absorbing them and letting the "wanted" ones through because they are in a region where there is no shielding. The absorption process is of course highly decoherent; it's described by local interactions and quantum dynamics. No extra "cut" or "collapse" needed.
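For what it's worth, here is the filtering step written out in the usual projector language (a minimal sketch with an invented incoming state; nothing here is specific to the actual apparatus): blocking one beam amounts to applying a projector onto the wanted ##\sigma_z## region and renormalizing the surviving sub-ensemble.

```python
# Toy von Neumann filter for a spin-1/2 particle (illustrative state and numbers only).
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)     # sigma_z = +1 eigenstate
down = np.array([0.0, 1.0], dtype=complex)   # sigma_z = -1 eigenstate

# An arbitrary incoming pure state, assumed just for the example.
psi = 2.0 * up + 1.0j * down
psi = psi / np.linalg.norm(psi)

# Projector onto the "wanted" beam (sigma_z = +1); the other beam is absorbed.
P_up = np.outer(up, up.conj())

transmitted = P_up @ psi
p_pass = np.vdot(transmitted, transmitted).real   # fraction of the ensemble that survives
filtered = transmitted / np.sqrt(p_pass)          # state assigned to the surviving sub-ensemble

print("survival probability:", round(p_pass, 2))  # 0.8 for this input
print("filtered state:", filtered)                # pure sigma_z = +1 state
```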
 
  • #81
atyy said:
Weinberg is undecided about interpretation, and it is true that one can do without collapse provided one does not use Copenhagen or a correct version of the minimal statistical interpretation. For example, one can use the Bohmian interpretation, or try to use a Many-Worlds interpretation, both of which have no collapse. But it is not possible to use Copenhagen or a correct version of the minimal statistical interpretation without collapse (or equivalent assumption such as the equivalence of proper and improper mixtures). This is why most major texts (except Ballentine's erroneous chapter 9) include collapse, because the default interpretation is Copenhagen or the minimal statistical interpretation.
Still, no argument has been given for why you need the collapse. I don't understand why one needs it within the minimal statistical interpretation. In no experiment I'm aware of do I need a collapse to use quantum theory to understand its outcome!

Peres argues that one can remove the cut and use coarse graining, but Peres is wrong because the coarse-grained theory in which the classical/quantum cut appears to be emergent yields predictions, but the fine grained theory does not make any predictions. So the coarse graining that Peres mentions introduces the classical/quantum cut in disguise. It is important that the cut does not say that we cannot enlarge the quantum domain and treat the classical apparatus in a quantum way. What the cut says is that if we do that, we need yet another classical apparatus in order for quantum theory to yield predictions.

Another way to see that the minimal statistical interpretation must have a classical/quantum cut and collapse (or equivalent postulates) is that a minimal interpretation without these elements would solve the measurement problem, contrary to the consensus that a minimal interpretation does not solve it.

Collapse occurs immediately after the measurement. In a Bell test, the measurements are time stamped, so if you accept the time stamp, you accept that that is when the measurement happens, and not later after post-processing. It is ok not to accept the time stamp, because measurement is a subjective process. However, in such a case, there is no violation of the Bell inequalities at spacelike separation. If one accepts that quantum mechanics predicts a violation of the Bell inequalities at spacelike separation, then one does use the collapse postulate. It is important that at this stage we are not committing to collapse as a physical process, and leaving it open that it could be epistemic.

Where do you need a collapse here either? A+B use a polarization foil and photon detectors to figure out whether their respective photon ran through the polarization foil or not. Practically ideally, the foil lets through only photons in a determined linear-polarization state; the other photons are absorbed, which happens through local interactions of the respective photon with the foil, and there is no long-distance interaction between A's foil and B's photon, or vice versa. So there cannot be any collapse as in the Copenhagen interpretation (Heisenberg flavor, I think?). So there cannot be a collapse at the level of the polarizers. The same argument holds for the photo detectors. Also note that the time stamps are accurate but always of finite resolution, i.e., the registration of a photon is a fast but not instantaneous process. On a macroscopic scale of resolution, it's of course a "sharp time stamp". The photo detector is applicable for these experiments if the accuracy of the time stamps is sufficient to unambiguously ensure that you can relate the entangled photon pairs. For a long enough distance between the photon source and A's and B's detectors, and low enough photon rates, that's no problem. Again, nowhere do I need a collapse.

Bohr was of course right in saying that finally we deal with macroscopic preparation/measurement instruments, but in my opinion he was wrong that one needs a cut between quantum and classical dynamics anywhere, because the classical behavior of macroscopic objects is (at least FAPP :-)) an emergent phenomenon and clearly understandable via coarse graining.

I also must admit that I consider Asher Peres's book one of the best when it comes to the foundational questions of quantum theory. His definition of quantum states as preparation procedures alone eliminates a lot of the esoteric ideas often invoked to solve the "measurement problem". FAPP there is no measurement problem, as the successful description of even the "weirdest" quantum behavior of nature shows!
 
  • #82
vanhees71 said:
That's a good point: The state preparation in, e.g., a Stern-Gerlach experiment is through a von Neumann filter measurement. You let the particle run through an inhomogeneous magnetic field, and this sorts the particles into regions of different ##\sigma_z## components (where ##z## is the direction of the homogeneous piece of the magnetic field). Then we block out all particles not within the region of the desired value of ##\sigma_z##.

Microscopically the shielding simply works as an absorber of the unwanted particles. One can see that there is no spontaneous collapse but simply local interactions of the particles with the shielding, absorbing them and letting the "wanted" ones through because they are in a region where there is no shielding. The absorption process is of course highly decoherent; it's described by local interactions and quantum dynamics. No extra "cut" or "collapse" needed.

It won't work. The state assigned to the selected subsystem is a pure state, but if you write down the entire decoherent dynamics and take the reduced density matrix corresponding to the selected subsystem, you will get a mixed state.
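To spell that out numerically (a toy model of my own, not anything from the posts above): entangle a spin with an "absorber/apparatus" degree of freedom under unitary evolution, trace out the apparatus, and the spin is left in a mixed state rather than the pure filtered state.

```python
# Toy check: the reduced state of a subsystem entangled with an "apparatus" is mixed.
import numpy as np

# Post-interaction entangled state (|up>|A_up> + |down>|A_down>)/sqrt(2),
# standing in for the decoherent dynamics; purely illustrative.
psi = np.zeros(4, dtype=complex)
psi[0] = 1 / np.sqrt(2)   # |up, A_up>
psi[3] = 1 / np.sqrt(2)   # |down, A_down>

rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices: [s, a, s', a']

# Partial trace over the apparatus factor.
rho_spin = np.trace(rho, axis1=1, axis2=3)

purity = np.trace(rho_spin @ rho_spin).real
print(rho_spin)            # diag(0.5, 0.5)
print("purity:", purity)   # 0.5 < 1: an (improper) mixture, not a pure state
```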
 
  • #83
vanhees71 said:
Where do you need a collapse here either? A+B use a polarization foil and photon detectors to figure out whether their respective photon ran through the polarization foil or not. Practically ideally, the foil lets through only photons in a determined linear-polarization state; the other photons are absorbed, which happens through local interactions of the respective photon with the foil, and there is no long-distance interaction between A's foil and B's photon, or vice versa. So there cannot be any collapse as in the Copenhagen interpretation (Heisenberg flavor, I think?). So there cannot be a collapse at the level of the polarizers. The same argument holds for the photo detectors. Also note that the time stamps are accurate but always of finite resolution, i.e., the registration of a photon is a fast but not instantaneous process. On a macroscopic scale of resolution, it's of course a "sharp time stamp". The photo detector is applicable for these experiments if the accuracy of the time stamps is sufficient to unambiguously ensure that you can relate the entangled photon pairs. For a long enough distance between the photon source and A's and B's detectors, and low enough photon rates, that's no problem. Again, nowhere do I need a collapse.

Let's start with particles in a Bell state. Do the particles remain entangled after A has made a measurement?

vanhees71 said:
Bohr was of course right in saying that finally we deal with macroscopic preparation/measurement instruments, but in my opinion he was wrong that one needs a cut between quantum and classical dynamics anywhere, because the classical behavior of macroscopic objects is (at least FAPP :) ) an emergent phenomenon and clearly understandable via coarse graining.

I also must admit that I consider Asher Peres's book one of the best when it comes to the foundational questions of quantum theory. His definition of quantum states as preparation procedures alone eliminates a lot of the esoteric ideas often invoked to solve the "measurement problem". FAPP there is no measurement problem, as the successful description of even the "weirdest" quantum behavior of nature shows!

If you use FAPP, then you do use a cut. The whole point of the cut and collapse is FAPP. Removing the cut and collapse are not FAPP, and would solve the measurement problem.
 
  • #84
atyy said:
Let's start with particles in a Bell state. Do the particles remain entangled after A has made a measurement?

No - it's now entangled with the measurement apparatus. But I don't think that's what is meant by collapse.

I think the Wikipedia article on collapse is not too bad:
http://en.wikipedia.org/wiki/Wave_function_collapse
'Wave function collapse is not fundamental from the perspective of quantum decoherence. There are several equivalent approaches to deriving collapse, like the density matrix approach, but each has the same effect: decoherence irreversibly converts the "averaged" or "environmentally traced over" density matrix from a pure state to a reduced mixture, giving the appearance of wave function collapse.'

In the ensemble interpretation one assumes an observation selects an outcome from the conceptual ensemble associated with the mixed state after decoherence - no collapse required. There is the issue about exactly how that particular outcome is selected (the problem of outcomes) - but that doesn't mean the interpretation is invalidated or collapse occurred - it simply means that's a postulate.

Thanks
Bill
 
Last edited:
  • Like
Likes vanhees71
  • #85
bhobba said:
No - it's now entangled with the measurement apparatus. But I don't think that's what is meant by collapse.

Do you have a definite outcome yet?

At some point you will invoke that an improper mixture becomes a proper mixture. When you do that, you are using collapse.
 
  • #86
atyy said:
At some point you will invoke that an improper mixture becomes a proper mixture. When you do that, you are using collapse.

In the ensemble interpretation that is subsumed in the assumption an observation selects an outcome from a conceptual ensemble. Collapse is bypassed.

Thanks
Bill
 
  • #87
bhobba said:
In the ensemble interpretation that is subsumed in the assumption an observation selects an outcome from a conceptual ensemble. Collapse is bypassed.

If you have a conceptual ensemble, that is conceptual hidden variables.
 
  • #88
atyy said:
If you have a conceptual ensemble, that is conceptual hidden variables.

It's exactly the same ensemble used in probability. I think you would get a strange look from a probability professor if you claimed such a pictorial aid was a hidden variable.

Atyy, I think we need to be precise about what is meant by collapse. Can you describe in your own words what you think collapse is?

My view is it's the idea that observation instantaneously changes a quantum state, in opposition to unitary evolution. Certainly the state changes in filtering-type observations - but instantaneously - to me that's the rub. It changed because you have prepared the system differently, not by some mystical non-local instantaneous 'collapse' - if you have states, you have different preparations - it's that easy.

Added Later:
As the Wikipedia article says:
On the other hand, the collapse is considered a redundant or optional approximation in:
the Consistent histories approach, self-dubbed "Copenhagen done right"
the Bohm interpretation
the Many-worlds interpretation
the Ensemble Interpretation

IMHO it's redundant in the above.

Thanks
Bill
 
Last edited:
  • Like
Likes vanhees71
  • #89
atyy said:
Let's start with particles in a Bell state. Do the particles remain entangled after A has made a measurement?
No, they are disentangled due to the (local!) interaction of A's photon with the polarizer and photon detector. Usually it gets absorbed by the latter, and there's only B's photon left, as long as it hasn't been absorbed by his detector either.
If you use FAPP, then you do use a cut. The whole point of the cut and collapse is FAPP. Removing the cut and collapse are not FAPP, and would solve the measurement problem.
If you define this as the cut, it's fine with me, but this doesn't say that there is a distinguished classical dynamics in addition to quantum dynamics.
 
  • #90
I go back and forth about whether I consider "collapse" an essential part of quantum mechanics or not. There is an operational sense in which "collapse" describes quantum mechanical practice: If you prepare a system in state [itex]|\psi\rangle[/itex], and then later perform a measurement corresponding to observable [itex]O[/itex] and get value [itex]o[/itex], then afterward, you use the reduced state [itex]|\psi'\rangle[/itex], which is the result of projecting [itex]|\psi\rangle[/itex] (actually [itex]e^{-\frac{i}{\hbar} H t} |\psi\rangle[/itex]) onto the space of eigenfunctions of [itex]O[/itex] with eigenvalue [itex]o[/itex]. That's part of the standard recipe for using quantum mechanics, and I don't think that there is any disagreement that this recipe "works" in the sense of allowing us to make predictions that agree with experiment. The disagreement is about the physical interpretation of this step of the quantum recipe.
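Schematically, that recipe looks like the following sketch (the Hamiltonian, observable and initial state are invented purely for illustration): evolve unitarily, project onto the eigenspace of the observed eigenvalue, renormalize.

```python
# Sketch of the textbook recipe: unitary evolution, projection onto the eigenspace
# of the measured eigenvalue, then renormalization. All inputs are toy examples.
import numpy as np
from scipy.linalg import expm

hbar = 1.0
H = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)    # toy Hamiltonian
O = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)   # toy observable (sigma_z)
psi0 = np.array([1.0, 0.0], dtype=complex)                # prepared state |psi>

t = 0.7
psi_t = expm(-1j * H * t / hbar) @ psi0                   # state just before measurement

# Suppose the measurement of O yields the eigenvalue o = -1.
o = -1.0
eigvals, eigvecs = np.linalg.eigh(O)
subspace = eigvecs[:, np.isclose(eigvals, o)]             # eigenvectors with eigenvalue o
P = subspace @ subspace.conj().T                          # projector onto that eigenspace

prob = np.vdot(psi_t, P @ psi_t).real                     # Born-rule probability of o
psi_reduced = (P @ psi_t) / np.sqrt(prob)                 # reduced ("collapsed") state |psi'>

print("P(o = -1) =", round(prob, 3))
print("reduced state:", np.round(psi_reduced, 3))
```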

If you view the state of a system as purely epistemic - it just reflects your knowledge about the system - then there is nothing physical going on with such a collapse; it's just an update to your knowledge.

The sense in which all you need is the minimal statistical interpretation is this: At the end, after you've done all your measurements and performed all your experiments with the system, you have a history of measurements. The minimal interpretation tells you the probability for each such history, given the initial state. The "collapse" that seemed to happen at each measurement can be understood, retroactively, as simply the application of ordinary conditional probabilities. So rather than having a "collapse" at every measurement event, one can get the same results by putting the collapse at the very end, after all the measurements were made.

But it still seems to me that you need at least one sort of "collapse" even in the minimal interpretation: The transition from probability amplitudes for many possible histories to a single history that is actually recorded. That's a kind of collapse.
 
  • #91
bhobba said:
It's exactly the same ensemble used in probability. I think you would get a strange look from a probability professor if you claimed such a pictorial aid was a hidden variable.

At any rate, what you have done here is to introduce something beyond unitary time evolution. So given that one uses this pictorial aid in setting up the wave function, couldn't one argue that the wave function is at least partly epistemic?

bhobba said:
Atty I think we need to be precise what is meant by collapse. Can you describe in your own words what you think collapse is?

My view is it's the idea that observation instantaneously changes a quantum state, in opposition to unitary evolution. Certainly the state changes in filtering-type observations - but instantaneously - to me that's the rub. It changed because you have prepared the system differently, not by some mystical non-local instantaneous 'collapse' - if you have states, you have different preparations - it's that easy.

Yes, it is the immediate change of state after a measurement. So for example, if we have an EPR experiment with a Bell state |uu>+|dd>, then immediately after A measures and obtains an up outcome, the state collapses to |uu>, and if A obtains a down outcome, the state collapses to |dd>. How immediate does it have to be? If there is a frame in which the measurements of A and B are simultaneous, then there is a frame in which B measures slightly after A, and so far all data is consistent with quantum mechanics with collapse, and with relativity.

One cannot simply say that one has a different preparation. The reason is that the preparation of the state |uu> or |dd> following the measurement is linked to whether A gets the outcome up or down. So the preparation of an |uu> or |dd> state is identified with the measurement outcome, and has the same probabilities as the Born rule.
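As a concrete (deliberately simplified, non-destructive) sketch of that linkage for the |uu>+|dd> example - just a toy calculation, not anyone's apparatus:

```python
# Collapse postulate applied to the Bell state (|uu> + |dd>)/sqrt(2).
# A measures the first spin; the two-spin state is then conditioned on her outcome.
import numpy as np

up = np.array([1.0, 0.0], dtype=complex)
down = np.array([0.0, 1.0], dtype=complex)

bell = (np.kron(up, up) + np.kron(down, down)) / np.sqrt(2)

# Projectors for A's measurement acting on the first particle only.
P_A_up = np.kron(np.outer(up, up.conj()), np.eye(2))
P_A_down = np.kron(np.outer(down, down.conj()), np.eye(2))

for label, P in (("up", P_A_up), ("down", P_A_down)):
    prob = np.vdot(bell, P @ bell).real
    post = (P @ bell) / np.sqrt(prob)   # state used for any subsequent measurement by B
    print(f"A gets {label}: probability {prob:.2f}, post-measurement state {np.round(post.real, 3)}")

# A gets up:   probability 0.50, state |uu>
# A gets down: probability 0.50, state |dd>
```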

bhobba said:
Added Later:
As the Wikipedia article says:
On the other hand, the collapse is considered a redundant or optional approximation in:
the Consistent histories approach, self-dubbed "Copenhagen done right"
the Bohm interpretation
the Many-worlds interpretation
the Ensemble Interpretation

IMHO it's redundant in the above.

I agree that collapse is not required in consistent histories, Bohmian Mechanics and Many-Worlds. I don't agree that the Ensemble interpretation does away with it, unless one adds another postulate to the interpretation that is equivalent to collapse.
 
  • #92
vanhees71 said:
No, they are disentangled due to the (local!) interaction of A's photon with the polarizer and photon detector. Usually it gets absorbed by the latter, and there's only B's photon left, as long as it hasn't been absorbed by his detector either.

So if I understand you correctly, you are saying that if we have a Bell state |uu>+|dd>, after A measures and gets an up result the state is |uu>, and after A gets a down result, the state is |dd>. This is the collapse postulate. (Let's assume non-destructive measurements for simplicity.)
 
  • #93
FAPP yes. In reality it's of course way more complicated. You have a system consisting of the BBO crystal, a laser, the entangled two-photon Fock state (wave packets!), as well as polarization foils and photon detectors at Alice's and Bob's places. I guess that should roughly be the relevant setup.

The time evolution of this whole setup is described by the unitary time evolution of quantum theory. Now for our experiment we only look at the polarization states of the two photons. One should however also include the spatial part of the two-photon state, because this enables us to effectively distinguish A's and B's photons, which are defined by local interactions with the respective photo detectors. FAPP you can use the "collapse postulate" to understand the outcome of correlated measurements on the photons' polarization state when the polarizers are set in the same or perpendicular directions at A's and B's places. This "collapse" should however really only be seen as an effective description of the entire quantum dynamics through the local interactions of the photons with the equipment around them, and not as a process happening "really in nature". That would lead to the very serious problems brought up by EPR.

My interpretation of the EPR paper is that they did not criticize quantum theory as such but only the Copenhagen flavor of it, with the collapse.

The only interpretation that bhobba lists I've not yet studied enough to have an opinion on is the consistent history approach. How is the "collapse" seen there?
 
  • #94
vanhees71 said:
My interpretation of the EPR paper is that they did not criticize quantum theory as such but only the Copenhagen flavor of it, with the collapse.
The shortest summary of the EPR paper that I've read and that makes sense to me is two sentences:
1. Either QM is incomplete or if it's complete, it must be nonlocal.
2. Nonlocality is unreasonable, therefore it is incomplete.
 
  • Like
Likes Johan0001
  • #95
bhobba said:
Its exactly the same ensemble used in probability. I think you would get a strange look from a probability professor if you claimed such a pictorial aid was a hidden variable.

That comparison is not fair, in my opinion. In classical probability, the assumption is that your actual system is in some actual state. But all you know about that state is that it is one of a virtual (or actual, maybe) ensemble of systems. So it certainly is the case that classical probability involves "hidden variables", namely the actual state of your system (or the one you pick to examine, if there is an actual ensemble).
 
  • #96
That's a bit too short for me. What do you mean by "complete" and "nonlocal"?

Well, I don't think that any physical theory can be proven to be complete. So I don't bother about this question very much. So far we have no hint that quantum theory is incomplete, but that doesn't imply that it is complete.

Now, the most comprehensive QT we have is relativistic quantum field theory (let's ignore the substantial mathematical problems in its foundations and let's take the physicist's practical point of view to define it in a perturbative sense). By construction the interactions within this theory are strictly local. Nobody could construct a consistent QFT with non-local interactions so far.

On the other hand, there is entanglement, implying the possibility of correlations between far-distant observations, as is demonstrated by the Aspect-Zeilinger-like experiments with entangled photons. In principle you could detect the two photons at places as far from each other as you like and still find the correlations described by the entangled two-photon states. These I'd call non-local correlations, but they do not violate the relativistic causality structure, as long as you don't consider the collapse as a real process and stick, e.g., to the minimal statistical interpretation (some time ago we had a debate along these lines when discussing the quantum-eraser experiment by Scully et al). Thus, I think EPR rightfully criticized the Copenhagen collapse doctrine rather than quantum theory itself.

Whether one can "quantum theory consider complete" depends on the definition of "complete". As stressed above, I don't think that we can ever be sure of any physical theory to be complete. I'd consider a theory as complete as long as there are no phenomena that contradict this theory. This can change. For quite a long time the physicists considered Newtonian mechanics as complete, but with the discovery of Faraday-Maxwell electromagnetism it turned out that it cannot be complete, because its very basic foundation doesn't hold for electromagnetic processes. This puzzle was finally solved by Einstein in his famous 1905 paper about what we call Special Relativity Theory today. Then one could have thought that relativistic mechanics + electrodynamics is complete. This idea hold for at most 2 years, when Einstein discovered that he couldn't make easy sense of gravity, which lead to the development of the General Relativity Theory, which was finished by Einstein (and at the same time also Hilbert) in 1915 (big anniversary next year :-)).

The entire classical picture of physics, which was completed (at least from our present knowledge) with General Relativity, was found to be incomplete in 1911, when Rutherford discovered the true (to our present knowledge) structure of atoms as consisting of a pointlike (to the accuracy available at his time) nucleus surrounded by electrons, held together by the electromagnetic interaction. The very simple experience of the stability and rigidity of the matter around us, however, leads to a contradiction with this picture. The solution finally was quantum theory, discovered in 1925/26 by Heisenberg, Born, Jordan, Pauli, Schrödinger, and Dirac.

Even today we know that in a certain sense our theoretical edifice of physical models is not complete, but this is not because of an observation contradicting relativistic quantum field theory (to the contrary, the Standard Model is too successful to be finally ruled out, with the necessary hint for theorists to move on to a better model) but because of intrinsic problems, among the most fundamental of which is the lack of a satisfactory quantum description of the gravitational interaction. In this sense we already know today that our models are not the final word of a "theory of everything", and at the moment there's no help in sight from any observations in HEP or astrophysics/cosmology, and both fields are very closely connected these days!
 
  • #97
bohm2 said:
The shortest summary of the EPR paper that I've read and makes sense to me can be summarized in 2 sentences:
1. Either QM is incomplete or if it's complete, it must be nonlocal.
2. Nonlocality is unreasonable, therefore it is incomplete.

EPR only refers to non-locality somewhat indirectly - via the idea that a measurement on one part of a system does not affect another part of that system. What they refer to as unreasonable is a particular form of realism, also see below. If you want to make your statements consonant with what EPR said, I might suggest replacing "nonlocality" with "Observer Dependent Reality". I believe that would get pretty close to what you want.

1. Either QM is incomplete or if it's complete, it must be observer dependent.
2. An observer dependent reality is unreasonable, therefore QM is incomplete.

EPR Locality assumption/definition according to EPR:

“On the other hand, since at the time of measurement the two systems no longer interact, no real change can take place in the second system in consequence of anything that may be done to the first system. This is, of course, merely a statement of what is meant by the absence of an interaction between the two systems. Thus, it is possible to assign two different wave functions … to the same reality (the second system after the interaction with the first).”

Comment: the Bohr view of the EPR example was: there is one system consisting of 2 particles, not 2 systems of one particle as EPR suppose. Therefore, saying “the two systems no longer interact” is not accurate at some unspecified level (in that view).


EPR Realism assumption/definition according to EPR:

“The elements of the physical reality cannot be determined by a priori philosophical considerations, but must be found by an appeal to results of experiments and measurements. A comprehensive definition of reality is, however, unnecessary for our purpose. We shall be satisfied with the following criterion, which we regard as reasonable. If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity. It seems to us that this criterion, while far from exhausting all possible ways of recognizing a physical reality, at least provides us with one such way, whenever the conditions set down in it occur. Regarded not as necessary, but merely as a sufficient, condition of reality, this criterion is in agreement with classical as well as quantum-mechanical ideas of reality.”

Comment: Later, they make clear that their “reasonable” definition also assumes as follows. Any single element of reality that passes the test (i.e. predictable with probability of 100%) is simultaneously real along with all other elements that also individually pass the same test. Therefore, a collection of elements of reality constitutes what is usually called “realism” in the EPR context. That would include elements of reality that do not commute with each other. Specifically: “Indeed, one would not arrive at our conclusion if one insisted that two or more physical quantities can be regarded as simultaneous elements of reality only when they can be simultaneously measured or predicted. On this point of view, since either one or the other, but not both simultaneously, of the quantities P and Q can be predicted, they are not simultaneously real.” EPR stated such a requirement was unreasonable (“No reasonable definition of reality could be expected to permit this.”).
 
Last edited:
  • Like
Likes Johan0001 and vanhees71
  • #98
PS If discussion of my post above is desired, we can always move it to another thread to avoid getting off topic.
 
  • #99
vanhees71 said:
FAPP yes. In reality it's of course way more complicated. You have a system consisting of the BBO crystal, a laser, the entangled two-photon Fock state (wave packets!), as well as polarization foils and photon detectors at Alice's and Bob's places. I guess that should roughly be the relevant setup.

The time evolution of this whole setup is described by the unitary time evolution of quantum theory.

Yes, so I think we really disagree. My view is that the whole minimal interpretation is FAPP, and requires the cut and collapse which are also FAPP.

On the other hand, you believe that collapse can be derived from unitary evolution alone, and so the cut and collapse are not required. I don't agree with this because to derive collapse from unitary evolution requires additional assumptions usually considered non-minimal, for example hidden variables or many-worlds. So collapse, which is FAPP, or an equivalent postulate is required in a minimal interpretation.
 
  • #100
Not from unitary evolution alone. You always need coarse graining to derive the classical behavior of measurement/preparation devices.

I can live with any interpretation without collapse as a real process, because such a real collapse violates causality.
 
  • #101
vanhees71 said:
Not from unitary evolution alone. You always need coarse graining to derive the classical behavior of measurement/preparation devices.

Sure, introducing coarse graining as an additional postulate is equivalent to introducing a cut and collapse as postulates. Then the measurement problem is that the coarse grained theory makes sense, but the fine grained theory (without hidden variables or MWI) does not, whereas in classical physics both the fine-grained or more fundamental theory and the coarse-grained or emergent theory make sense. It is in this sense that I consider the cut and collapse essential: if you remove it, in a minimal interpretation you must reintroduce the measurement problem by introducing an additional FAPP postulate beyond unitary evolution.

vanhees71 said:
I can live with any interpretation without collapse as a real process, because such a real collapse violates causality.

In the minimal interpretation, the cut and collapse are not necessarily real, they are FAPP. So we have collapse or coarse graining, both of which are FAPP. So here are the questions: Is collapse ontic or epistemic? Is coarse graining ontic or epistemic? Is FAPP ontic or epistemic?

If collapse is not physical, then it is presumably at least partly epistemic. So my point against your argument that the wave function is ontic is that collapse is part of the time evolution of the wave function. Consequently, if one considers collapse to be epistemic, it isn't obvious how the wave function can be purely ontic.
 
Last edited:
  • #102
atyy said:
Sure, introducing coarse graining as an additional postulate is equivalent to introducing a cut and collapse as postulates. Then the measurement problem is that the coarse grained theory makes sense, but the fine grained theory (without hidden variables or MWI) does not, whereas in classical physics both the fine-grained or more fundamental theory and the coarse-grained or emergent theory make sense. It is in this sense that I consider the cut and collapse essential: if you remove it, in a minimal interpretation you must reintroduce the measurement problem by introducing an additional FAPP postulate beyond unitary evolution.
NO! There's a big difference in this approach: It's showing that there are no instantaneous interactions at a distance as claimed with collapse postulates but only local interactions as postulated in all successful relativistic-QFT models (including the standard model).

See also the very nice paper somebody brought up in one of our "interpretation discussions". I don't like some subtleties, like using the words "collapse" and "wave functions" for photons, but the overall conclusion is right. Note that Fig. 1 does not provide the correct interpretation of measurements according to Sect. 3.

Of course, he misses the point somewhat by oversimplifying the math with the entangled states. I plead guilty myself in this respect, from when I discussed the Scully quantum eraser experiment in this forum. The oversimplification is in leaving out the spatial part of the two-photon state. Here a "wave-packet formulation" is mandatory to make the issue completely clear, and this also leads to the correction of Fig. 1 described in words at the end of Sect. 3. Of course "wave packet" should not be understood as introducing a "wave function for photons". There cannot be such an object, because the photon hasn't even a position operator in the usual sense. You have locations of detection events, which are well-defined by the fact that photons are detected with devices consisting of massive particles and not because the asymptotic free photon states have a position.
 
  • #103
vanhees71 said:
NO! There's a big difference in this approach: It's showing that there are no instantaneous interactions at a distance as claimed with collapse postulates but only local interactions as postulated in all successful relativistic-QFT models (including the standard model)

Can you show that the coarse-graining is local and preserves relativistic causality? Peres talks about coarse-graining such that the Wigner function becomes entirely positive, which means that the theory resulting from the coarse-graining is a classical probabilistic theory and therefore realistic. If the coarse-graining is local, the resulting theory is presumably a local realistic theory. However, the Bell theorem forbids local realistic theories, so the theory that results from local coarse-graining presumably cannot explain violations of the Bell inequalities at spacelike separation.

vanhees71 said:
See also the very nice paper somebody brought up in one of our "interpretation discussions". I don't like some subtleties, like using the words "collapse" and "wave functions" for photons, but the overall conclusion is right. Note that Fig. 1 does not provide the correct interpretation of measurements according to Sect. 3.

Of course, he misses the point somewhat by oversimplifying the math with the entangled states. I plead guilty myself in this respect, from when I discussed the Scully quantum eraser experiment in this forum. The oversimplification is in leaving out the spatial part of the two-photon state. Here a "wave-packet formulation" is mandatory to make the issue completely clear, and this also leads to the correction of Fig. 1 described in words at the end of Sect. 3. Of course "wave packet" should not be understood as introducing a "wave function for photons". There cannot be such an object, because the photon hasn't even a position operator in the usual sense. You have locations of detection events, which are well-defined by the fact that photons are detected with devices consisting of massive particles and not because the asymptotic free photon states have a position.

I think the paper you are thinking of is Braam Gaasbeek's "Demystifying the Delayed Choice Experiments" http://arxiv.org/abs/1007.3977. I agree with this paper completely. Section 3 does not correct Figure 1. Section 3 says Figure 1 is correct, but that collapse is not necessarily physical (not a frame-invariant event). Quantum mechanics in the minimal interpretation is an FAPP theory, and the predictions of FAPP collapse are thus far completely successful and consistent with special relativity. So this paper does not support your point (unless we are agreeing, but using different language). Rather it supports my point that collapse is part of the standard postulates of quantum mechanics, and is not in conflict with relativity.
 
  • #104
vanhees71 said:
Of course "wave packet" should not be understood as introducing a "wave function for photons". There cannot be such an object, because the photon hasn't even a position operator in the usual sense. You have locations of detection events, which are well-defined by the fact that photons are detected with devices consisting of massive particles and not because the asymptotic free photon states have a position.

Are you in the school of thought that free photons don't exist because they are excitations of the EM field?
 
  • #105
No, there are free-photon states within QED and thus they exist within this framework. However, photons cannot be interpreted as particles like massive quanta, because massless particles with spin ##s \geq 1## have no position observable (at least not one in the strict sense). See Arnold Neumaier's FAQ:

http://arnold-neumaier.at/physfaq/topics/position.html
 
  • Like
Likes DrChinese
